Econometrics
Forough Esmaeily Sadrabadi; Maedeh Khanari
Abstract
This study explores the effects of artificial intelligence (AI) investment on total factor productivity (TFP) in Iranian industries from 1997 to 2020, utilizing a comprehensive dataset organized by four-digit International Standard Industrial Classification (ISIC) codes. We employ the generalized method of moments (GMM) approach to address challenges such as endogeneity and collinearity within a dataset comprising over 200 cross-sectional variables. Our results reveal that both physical and intangible investments significantly influence TFP: a 1% increase in physical investment results in a 0.514% rise in TFP, while a 1% increase in intangible investment leads to a 0.288% improvement. A key innovation of this research is the introduction of an AI measurement variable into the production function, employing the Corrado, Hulten, and Sichel (CHS) methodology for a clearer assessment of AI's productivity effects. Although AI investment positively correlates with TFP, its current impact is limited, reflecting the gradual adoption of advanced technologies in Iranian industries. This highlights the need for a comprehensive strategy to fully realize the productivity benefits of AI. We recommend policies aimed at facilitating technology integration and workforce specialization, including investing in training, providing incentives for AI adoption, and promoting collaboration between industry and educational institutions to enhance productivity and competitiveness in the global market.
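The GMM mechanics behind such an estimation can be illustrated on synthetic data. The sketch below (all dimensions, instruments, and coefficient values are illustrative, not taken from the study) estimates a linear model with instruments via one-step GMM, which coincides with 2SLS under this weight matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not the study's data): three endogenous regressors,
# three instruments, and a known coefficient vector to recover.
n = 500
z = rng.normal(size=(n, 3))                 # instruments
e = rng.normal(size=(n, 3))                 # disturbances correlated with x
x = z + 0.5 * e                             # endogenous regressors
beta_true = np.array([0.514, 0.288, 0.05])  # illustrative elasticities
y = x @ beta_true + 0.2 * e[:, 0] + 0.1 * rng.normal(size=n)

def gmm_iv(y, x, z):
    """One-step linear GMM with weight matrix (Z'Z)^-1 (equivalent to 2SLS)."""
    zx, zy = z.T @ x, z.T @ y
    w = np.linalg.inv(z.T @ z)
    return np.linalg.solve(zx.T @ w @ zx, zx.T @ w @ zy)

beta_hat = gmm_iv(y, x, z)
```

Because the instruments are uncorrelated with the disturbance, the GMM estimate recovers the true coefficients even though ordinary least squares would be biased here.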
Econometrics
Neda Esmaeeli; Maryam Ihami
Abstract
This paper aims to assess the existence of information asymmetry on the Iranian stock market and its impact on expected portfolio returns, applying the Volume-Synchronized Probability of Informed Trading (VPIN) as a measurement tool. To this end, we used actual data on 40 companies listed on the Tehran Stock Exchange (TSE) over the period from March 22, 2018 to March 19, 2020. The outcomes highlight the presence of moderate toxicity levels in the orders of these stocks. Since asymmetric information poses a risk to investors, they may demand a premium to trade assets that are riskier in informational terms, which means that market makers may incorporate information risk into the pricing of assets. To check this, we investigated the effect of asymmetric information risk on stock returns on the TSE by adding a factor capturing the level of order toxicity to the 3-, 4-, and 5-factor asset pricing models. According to our findings, we confirm that the Iranian stock market priced asymmetric information risk during this period. Therefore, it is essential to take the information risk factor into account, alongside a combination of factors such as market, size, profitability, and investment, to obtain the most efficient explanation of portfolio returns.
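A minimal sketch of the VPIN calculation, assuming simple tick-rule classification of buy and sell volume (the paper's exact bucketing and classification choices may differ, and the trade tape here is simulated):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated trade tape (illustrative): a price path and trade volumes.
prices = 100 + np.cumsum(rng.normal(scale=0.05, size=5000))
volumes = rng.integers(1, 100, size=5000).astype(float)

def vpin(prices, volumes, bucket_volume, n_buckets=50):
    """VPIN with tick-rule classification: volume is labeled buy or sell by
    the sign of the price change, accumulated into equal-volume buckets, and
    VPIN is the average absolute order imbalance per bucket."""
    dp = np.diff(prices, prepend=prices[0])
    buy = np.where(dp > 0, volumes, np.where(dp < 0, 0.0, volumes / 2))
    sell = volumes - buy
    imbalances = []
    vb = vs = filled = 0.0
    for b, s in zip(buy, sell):
        vb += b
        vs += s
        filled += b + s
        if filled >= bucket_volume:
            imbalances.append(abs(vb - vs))
            vb = vs = filled = 0.0
    recent = np.array(imbalances[-n_buckets:])
    return recent.sum() / (len(recent) * bucket_volume)

toxicity = vpin(prices, volumes, bucket_volume=2500)
```

VPIN lies near 0 when buy and sell flow are balanced and approaches 1 when order flow is persistently one-sided, which is why it is read as a measure of order toxicity.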
Econometrics
Azam Ahmadyan
Abstract
This article explores the effects of inflation targeting and money growth rate targeting on the balance sheets of Iranian banks during the COVID-19 pandemic. To achieve this, the study focuses on three key objectives using annual macroeconomic and banking sector data from a developing oil-exporting country, employing the Bayesian method and a DSGE model. First, the macroeconomic and balance sheet impacts of the COVID-19 pandemic are examined without the use of inflation targeting or money growth rate targeting policies. Second, the effects of COVID-19 are analyzed in conjunction with inflation targeting, and third, the effects of COVID-19 are studied alongside money growth rate targeting. The primary shocks examined in this paper are the COVID-19 shock, the inflation targeting shock, and the money growth rate targeting shock. To assess the impact of these shocks on macroeconomic variables and the balance sheets of banks, reaction function analysis has been employed. The key findings reveal that the negative effects of COVID-19 are exacerbated by targeting policies. During the spread of the epidemic, the use of these targeting strategies further decreases production and investment compared to scenarios without such policies. Moreover, inflation targeting, in particular, has a more pronounced effect on reducing production and investment than money growth rate targeting. Based on these findings, it is recommended that the central bank consider implementing interest rate targeting alongside inflation and money growth rate targeting policies in order to better support the balance sheets of banks and mitigate the negative effects on the broader economy.
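The reaction-function (impulse response) analysis used to trace a shock through the system can be sketched for a stylized linear first-order system; the transition matrix and shock below are illustrative, not estimated from the paper's DSGE model:

```python
import numpy as np

# Stylized two-variable system (say, output and bank lending); the transition
# matrix is illustrative, not estimated from the paper's DSGE model.
A = np.array([[0.8, 0.1],
              [0.2, 0.7]])
shock = np.array([-1.0, 0.0])  # one-unit adverse shock to the first variable

def impulse_response(A, shock, horizon=12):
    """Reaction function of a linear first-order system: response_h = A^h shock."""
    path = [shock]
    for _ in range(horizon):
        path.append(A @ path[-1])
    return np.array(path)

irf = impulse_response(A, shock)
```

Because the system is stable (eigenvalues of A inside the unit circle), the response of both variables decays back toward zero; comparing such paths across policy regimes is what the reaction-function analysis in the paper does at a much richer scale.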
Econometrics
Tofigh Beigi; Ahmad Sadraei Javaheri; Ali Hussein Samadi; Ebrahim Hadian
Abstract
Uncertainty is a controversial issue in the philosophy and methodology of economics. Since economic uncertainty is not directly observable, quantifying it is confronted with significant complexities. A common method in this context involves computing a proxy of uncertainty using time series models. Within this framework, the conditional volatility of the unforecastable components of time series is considered as an uncertainty measure. In this regard, the basic forecasting model should be specified in a way that its forecast errors lack any predictable content. In previous studies, the focus has solely been on economic and financial variables in computing the uncertainty measure, while the role of institutional factors has been neglected in the forecasting model. Meanwhile, based on economic literature, institutions play an important role in controlling and reducing uncertainty. Therefore, in the present study, the economic uncertainty measure is extracted based on a large-dimensional dynamic factor model, employing a set of 72 macroeconomic and institutional time series for the Iranian economy. The results indicate that overlooking institutional factors in the forecasting model can lead to an overestimation of economic uncertainty. Our perspective enhances the accuracy of uncertainty measurement and provides a more comprehensive understanding of the determinant factors of economic uncertainty.
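A condensed sketch of the idea, assuming PCA-estimated factors and a RiskMetrics-style EWMA filter as the conditional-volatility step (the study's large-dimensional dynamic factor model is considerably richer, and the panel below is simulated):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated stand-in for the 72-series panel: two latent factors drive
# the observed series; the target depends on the first factor.
T, N, k = 200, 72, 2
F = rng.normal(size=(T, k))
X = F @ rng.normal(size=(k, N)) + 0.5 * rng.normal(size=(T, N))
y = F[:, 0] + 0.3 * rng.normal(size=T)

def factor_uncertainty(X, y, k=2, lam=0.94):
    """Estimate factors by PCA (via SVD), purge y of its predictable factor
    content, and score uncertainty as the EWMA volatility of the residual."""
    Xs = (X - X.mean(0)) / X.std(0)
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    F_hat = Xs @ Vt[:k].T
    beta, *_ = np.linalg.lstsq(F_hat, y, rcond=None)
    resid = y - F_hat @ beta
    var = np.empty_like(resid)
    var[0] = resid.var()
    for t in range(1, len(resid)):
        var[t] = lam * var[t - 1] + (1 - lam) * resid[t - 1] ** 2
    return np.sqrt(var)

uncertainty = factor_uncertainty(X, y)
```

Dropping informative series from X leaves predictable content in the residual and inflates its volatility, which is exactly the overestimation mechanism the abstract attributes to omitting institutional factors.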
Econometrics
Pegah Mahdavi; Mohammad Ali Ehsani
Abstract
The understanding of applied modeling of causal effects is of particular importance in econometrics, given recent developments and research in causal inference applications. We also provide an outline of the use of causal inference in econometrics. Most economists would agree that the randomized controlled experiment is the gold standard for drawing conclusions, but in practice a significant portion of empirical work in econometrics relies on observational data, where, among other things, the possibility of confounding or loss of exogeneity must be taken into account. We focus in particular on two types of contemporary research: randomized experiments and observational studies. This paper reviews the dynamic causality literature, linear methods, including local projections (LP) and vector autoregressions (VAR), and nonlinear statistical modeling, including Bayesian additive regression trees (BART), together with their use in econometrics. Modeling dynamic systems with linear parametric models usually suffers limitations that affect forecasting performance and policy implications. In a nonparametric framework, BART specifications can produce more precise tail forecasts than the VAR structure. Finally, BART has the lowest RMSE under both linear and nonlinear data-generating processes, and its variable-importance measures on a set of macroeconomic data also perform better than other regression estimators.
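The RMSE comparison between a misspecified linear model and a flexible nonparametric learner on a nonlinear data-generating process can be sketched as follows. BART itself is not available in the standard library, so a k-nearest-neighbors regressor stands in for the nonparametric side; this illustrates the phenomenon, not the paper's estimators:

```python
import numpy as np

rng = np.random.default_rng(3)

# Nonlinear DGP: a straight line is misspecified for sin(2x).
x = rng.uniform(-2, 2, size=(400, 1))
y = np.sin(2 * x[:, 0]) + 0.1 * rng.normal(size=400)
x_tr, x_te = x[:300], x[300:]
y_tr, y_te = y[:300], y[300:]

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

# Linear parametric fit (the VAR/LP-style benchmark)
X_tr = np.column_stack([np.ones(len(x_tr)), x_tr[:, 0]])
X_te = np.column_stack([np.ones(len(x_te)), x_te[:, 0]])
b, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
rmse_linear = rmse(X_te @ b, y_te)

# k-NN regression as a stand-in for a flexible nonparametric learner
def knn_predict(x_tr, y_tr, x_te, k=10):
    d = np.abs(x_te[:, None, 0] - x_tr[None, :, 0])
    nearest = np.argsort(d, axis=1)[:, :k]
    return y_tr[nearest].mean(axis=1)

rmse_nonpar = rmse(knn_predict(x_tr, y_tr, x_te), y_te)
```

On this nonlinear process the nonparametric fit achieves a markedly lower out-of-sample RMSE, mirroring the pattern the paper reports for BART against linear specifications.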
Econometrics
Amin Aminimehr; Ali Raoofi; Akbar Aminimehr; Amirhossein Aminimehr
Abstract
In this research, the impact of different preprocessing methods on a Long Short-Term Memory (LSTM) network in predicting financial time series was examined. At first, the model was applied to the Tehran Stock Exchange index using Principal Component Analysis (PCA) to extract features from 78 technical indicators. Then, the same model was implemented using a random forest to select features rather than PCA to extract them. In the next step, dummy variables from other technical strategies were added to the model to examine the changes in its performance. Finally, two deep learning methods using only lags of the target were deployed to compare their accuracy with the other models. The first deep model was plain, while the second used a wavelet denoising process. The MSE, MAE, MAPE, and R2 scores on unseen test sequences showed that applying the LSTM with its own deep feature extraction procedure and the wavelet denoising process yields the best accuracy in predicting the Tehran Stock Exchange index. Finally, the Diebold-Mariano test revealed a significant difference between the accuracy of the best model and the rest. This result implies that although the application of deep learning achieves accurate results, they can be further improved by feeding the model with carefully extracted and denoised features.
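The PCA feature-extraction step can be sketched with a standardized indicator matrix reduced to its leading principal components via SVD (the indicator data below is simulated, and 10 components is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated stand-in for 78 technical indicators over 300 trading days;
# a few columns share a common trend so the leading component is informative.
X = rng.normal(size=(300, 78))
X[:, :5] += np.linspace(0, 3, 300)[:, None]

def pca_reduce(X, n_components):
    """Standardize the features and project them onto the leading principal
    components obtained from the SVD of the standardized matrix."""
    Xs = (X - X.mean(0)) / X.std(0)
    _, S, Vt = np.linalg.svd(Xs, full_matrices=False)
    explained = S ** 2 / (S ** 2).sum()
    return Xs @ Vt[:n_components].T, explained[:n_components]

features, ratios = pca_reduce(X, n_components=10)
```

The reduced `features` matrix, rather than the raw 78 indicators, would then be windowed into sequences and fed to the LSTM.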
Econometrics
Alireza Kamalian; Seyed Komail Tayebi; Alimorad Sharifi; Hadi Amiri
Abstract
Propensity score matching is extensively utilized in estimating the effects of policy interventions and programs from observational data. This method compares treatment and control groups to make statistical inferences about the significance of the effects of these policies on target variables. Therefore, when using propensity score matching, it is important to obtain the standard error of the estimated treatment effect. Precise estimates of the variance and standard deviation facilitate more efficient statistical testing and more accurate confidence intervals. However, there is no agreement in the literature on the method for estimating the standard error; some methods rely on resampling, while others do not. This study compares these methods using Monte Carlo simulation, calculating the Mean Squared Error (MSE) of each estimator. Our results indicate that the jackknife and standard methods are superior to the Abadie and Imbens (2006), bootstrap, and subsampling estimators in terms of accuracy. Finally, a re-examination of Tayyebi et al. (2019) showed that different methods of estimating the variance of the matching estimator lead to different statistical inferences in terms of statistical significance.