Econometrics
Pegah Mahdavi; Mohammad Ali Ehsani
Abstract
Given recent developments and research in causal-inference applications, an understanding of applied modeling of causal effects is of particular importance in econometrics. We also provide an outline of how causal inference is used in econometrics. Most economists would agree that the randomized controlled experiment is the gold standard for drawing causal conclusions, but in practice a significant portion of empirical work in econometrics relies on observational data, where, among other things, the possibility of confounding or loss of exogeneity must be taken into account. We focus in particular on two types of contemporary research: randomized experiments and observational studies. This paper reviews the dynamic causality study approach, linear methods, including local projections (LP) and vector autoregressions (VAR), and nonlinear statistical modeling, including Bayesian Additive Regression Trees (BART), together with their use in econometrics. Modeling dynamic systems with linear parametric models usually suffers from limitations that affect forecasting performance and policy implications. In the nonparametric framework, BART specifications can produce more precise tail forecasts than the VAR structure. Finally, BART attains the lowest RMSE under both linear and nonlinear data-generating processes, and BART's variable-importance measures perform better on a set of macroeconomic data than other regression estimators.
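The linear-versus-nonparametric comparison in this abstract can be illustrated with a toy experiment. The sketch below simulates a nonlinear one-lag data-generating process (an assumed toy DGP, not one from the paper), fits a linear AR(1) regression as a one-equation stand-in for a VAR, and fits a gradient-boosted tree ensemble as a rough stand-in for BART (scikit-learn does not ship a BART implementation), then compares out-of-sample RMSE:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

# Toy nonlinear autoregressive DGP (an illustrative assumption):
# y_t = 2*cos(y_{t-1}) + noise
rng = np.random.default_rng(0)
T = 500
y = np.zeros(T)
for t in range(1, T):
    y[t] = 2.0 * np.cos(y[t - 1]) + 0.3 * rng.normal()

X, target = y[:-1].reshape(-1, 1), y[1:]   # lag-1 feature and next-step target
split = 400
X_tr, X_te, y_tr, y_te = X[:split], X[split:], target[:split], target[split:]

# Linear AR(1) benchmark (one-lag stand-in for a VAR equation)
lin = LinearRegression().fit(X_tr, y_tr)
rmse_lin = np.sqrt(np.mean((lin.predict(X_te) - y_te) ** 2))

# Tree ensemble as a rough stand-in for BART's sum-of-trees model
tree = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
rmse_tree = np.sqrt(np.mean((tree.predict(X_te) - y_te) ** 2))
```

On this nonlinear DGP the tree ensemble's test RMSE falls well below the linear model's, mirroring the abstract's point that linear parametric specifications limit forecasting performance when the true dynamics are nonlinear.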
Econometrics
Amin Aminimehr; Ali Raoofi; Akbar Aminimehr; Amirhossein Aminimehr
Abstract
In this research, the impact of different preprocessing methods on a Long Short-Term Memory (LSTM) network for predicting financial time series was examined. First, the model was applied to the Tehran stock exchange index using Principal Component Analysis (PCA) on 78 technical indicators. Then, the same model was implemented using a random forest to select features rather than PCA to extract them. In the next step, technical-strategy dummy variables were added to the model to examine the change in its performance. Finally, two deep learning models using only lags of the target were deployed to compare their accuracy with the other models: the first was plain, while the second used wavelet denoising. The MSE, MAE, MAPE, and R2 scores on unseen test sequences showed that combining the LSTM's own deep feature-extraction procedure with wavelet denoising yields the most accurate prediction of the Tehran stock exchange index. Finally, the Diebold-Mariano test revealed a significant difference between the accuracy of the best model and the rest. This result implies that although deep learning alone achieves accurate results, it can be further improved by feeding the model carefully extracted and denoised features.
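The contrast between the two preprocessing routes compared here, PCA feature *extraction* versus random-forest feature *selection*, can be sketched on synthetic data. The code below (a minimal illustration with an assumed toy dataset and a ridge regression standing in for the LSTM, to keep it dependency-light) builds both feature sets and compares out-of-sample R2:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n, p = 300, 20                       # toy stand-in for the 78 technical indicators
X = rng.normal(size=(n, p))
y = X[:, 0] - 0.5 * X[:, 3] + 0.1 * rng.normal(size=n)   # only 2 features matter
tr, te = slice(0, 200), slice(200, None)

# Route A: PCA extracts 5 components (unsupervised -- it ignores y entirely)
pca = PCA(n_components=5).fit(X[tr])
r2_pca = r2_score(y[te], Ridge().fit(pca.transform(X[tr]), y[tr])
                  .predict(pca.transform(X[te])))

# Route B: a random forest ranks the original features; keep the top 5
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[tr], y[tr])
keep = np.argsort(rf.feature_importances_)[-5:]
r2_rf = r2_score(y[te], Ridge().fit(X[tr][:, keep], y[tr])
                 .predict(X[te][:, keep]))
```

Because random-forest importances are supervised, Route B keeps the informative indicators that an unsupervised PCA rotation can dilute, which is one motivation for comparing the two pipelines in the first place.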
Econometrics
Alireza Kamalian; Seyed Komail Tayebi; Alimorad Sharifi; Hadi Amiri
Abstract
Propensity score matching is extensively utilized in estimating the effects of policy interventions and programs from observational data. This method compares a treatment group and a control group to make statistical inferences about the significance of the effects of these policies on target variables. Therefore, when using propensity score matching, it is important to obtain the standard error of the estimated treatment effect. Precise estimation of the variance and standard deviation yields more efficient statistical tests and more accurate confidence intervals. However, there is no agreement in the literature on how the standard error should be estimated; some methods rely on resampling, while others do not. This study compares these methods using Monte Carlo simulation and the Mean Squared Errors (MSE) of the estimators. Our results indicate that the jackknife and standard methods are superior in accuracy to the Abadie and Imbens (2006), bootstrap, and subsampling methods. Finally, a review of Tayyebi et al. (2019) showed that different methods of estimating the variance of the matching estimator lead to different statistical inferences in terms of statistical significance.
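The resampling-based standard errors compared in this study can be sketched end to end. The code below (a minimal illustration on an assumed simulated design with a known treatment effect of 2, not the paper's Monte Carlo setup) estimates the ATT by 1-nearest-neighbor matching on an estimated propensity score, then computes a bootstrap and a jackknife standard error; Abadie and Imbens (2008) showed the bootstrap can be invalid for nearest-neighbor matching, which motivates comparing alternatives:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 400
X = rng.normal(size=(n, 2))
p_treat = 1.0 / (1.0 + np.exp(-X[:, 0]))      # selection into treatment on X[:, 0]
D = rng.binomial(1, p_treat)
Y = 2.0 * D + X[:, 0] + rng.normal(size=n)    # true treatment effect = 2

def att_psm(X, D, Y):
    """ATT via 1-nearest-neighbor matching on the estimated propensity score."""
    ps = LogisticRegression().fit(X, D).predict_proba(X)[:, 1]
    t, c = np.where(D == 1)[0], np.where(D == 0)[0]
    match = c[np.abs(ps[t][:, None] - ps[c][None, :]).argmin(axis=1)]
    return np.mean(Y[t] - Y[match])

att = att_psm(X, D, Y)

# Bootstrap SE: re-estimate the ATT on resamples drawn with replacement
boot = [att_psm(X[s], D[s], Y[s])
        for s in (rng.integers(0, n, n) for _ in range(200))]
se_boot = np.std(boot, ddof=1)

# Jackknife SE: leave-one-out re-estimates, scaled by (n-1)/n
loo = np.array([att_psm(np.delete(X, i, 0), np.delete(D, i), np.delete(Y, i))
                for i in range(n)])
se_jack = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
```

A Monte Carlo comparison like the study's would wrap this in an outer loop over simulated datasets and score each SE estimator by the MSE of its variance estimate against the true sampling variance of the ATT.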