
CAUSALITY ANALYSES IN ECONOMETRICS






  1. CAUSALITY ANALYSES IN ECONOMETRICS Mansor H. Ibrahim Department of Economics International Islamic University Malaysia

  2. OUTLINE • Introduction • Definition of Causality • Steps and Interpretation • Objectives/Philosophy of Econometrics • Conclusions

  3. Introduction • Theories are rich in predictions of the relations between variables. • However, theories provide little guidance on how to model these relations. • Moreover, theories tend to admit various causal patterns among the variables of interest. • Example – the relation between government spending (G) and real output (Y). - Keynesian theory: G → Y - Wagner hypothesis: Y → G - Classical theory: Y and G are independent. • In this context, statistics/econometrics is used to make sense of the available data – to determine which theory best describes reality.

  4. Introduction • Classical Regression: Y_t = α + βX_t + ε_t • Weaknesses: - It assumes a certain data generation process. More specifically, the error term is assumed to be i.i.d. with mean zero and constant variance. Whether this assumption holds depends on the stochastic properties of the variables. - It cannot address the issue of causality. • The Granger causality framework has evolved into a standard statistical technique for uncovering causal relations between variables. • Unfortunately, current practices often do not go beyond stating the significance or non-significance of the relations. Intuitive interpretation is lacking in most cases. • Applied statisticians must be aware of the theories: statistical theories as well as the theories underlying the variables investigated.

  5. Definition of Causality • Essentially, the notion of causality in time series econometrics is based on "temporal precedence" and "predictability". • The strict rule employed is that only the past can predict the future, not vice versa. Thus, if A happens before B, it is not admissible that B causes A. However, A can potentially cause B. • In modeling this, it is first stated that the information needed to predict the value of a variable (denoted Y) is contained in its own past values, which is formulated as an autoregressive process: Y_t = α + Σ_{i=1}^{p} β_i Y_{t-i} + ε_t

  6. Definition of Causality • If there exist past values of another variable (say X) that improve the prediction of Y – in the sense that the forecast error variance of Y given past Y and past X is smaller than that given past Y alone – then X is said to Granger-cause Y. • The test is carried out by first estimating the unrestricted equation: Y_t = α + Σ_{i=1}^{p} β_i Y_{t-i} + Σ_{j=1}^{q} γ_j X_{t-j} + ε_t • Then a restricted F test is computed for the null hypothesis γ_1 = γ_2 = … = γ_q = 0. Rejection of the null indicates causality that runs from X to Y. • This formulation of causality is noted to be INADEQUATE, as it does not capture the stochastic properties of the variables.
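The restricted F test described above can be sketched directly with ordinary least squares. The sketch below simulates data in which X drives Y (the coefficients, lag length p = 2, and sample size are illustrative assumptions, not values from the slides), fits the restricted regression (own lags of Y only) and the unrestricted regression (own lags plus lags of X), and forms F = [(RSS_r − RSS_u)/q] / [RSS_u/(T − k)]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data where X Granger-causes Y: Y depends on its own lag and lagged X.
n = 500
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

p = 2  # lag length (an assumption for this sketch)

def lagmat(series, lags, start):
    """Columns of series lagged 1..lags, aligned to start at index `start`."""
    return np.column_stack(
        [series[start - k : len(series) - k] for k in range(1, lags + 1)]
    )

Y = y[p:]
const = np.ones(len(Y))
Xr = np.column_stack([const, lagmat(y, p, p)])   # restricted: own lags only
Xu = np.column_stack([Xr, lagmat(x, p, p)])      # unrestricted: add lags of X

def rss(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    resid = Y - X @ beta
    return resid @ resid

rss_r, rss_u = rss(Xr, Y), rss(Xu, Y)
q = p                                  # restrictions: gamma_1 = ... = gamma_p = 0
dof = len(Y) - Xu.shape[1]
F = ((rss_r - rss_u) / q) / (rss_u / dof)
print(F)  # a large F rejects the null of no Granger causality from X to Y
```

A large F relative to the F(q, T − k) critical value leads to rejection of the null, i.e. evidence that X Granger-causes Y in this simulated sample.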

  7. Steps and Interpretation • Classical Regression: Y_t = α + βX_t + ε_t • Assumption: the error term is assumed to be i.i.d. with mean zero and constant variance. In a time series context, the error term must be stationary, i.e. integrated of order 0, I(0). • This assumption holds only if both X and Y are I(0), or both are I(1) and their linear combination is I(0). • It is well noted that most macroeconomic variables are I(1), i.e. non-stationary, processes.
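The I(1) vs. I(0) distinction can be illustrated with a minimal, non-augmented Dickey-Fuller regression, Δy_t = c + ρ·y_{t−1} + e_t, where a t-statistic on ρ well below roughly −2.86 (the approximate 5% asymptotic critical value with a constant) rejects a unit root. The simulated series and sample size below are illustrative assumptions; in practice one would use an augmented test from a statistics package:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
rw = np.cumsum(rng.normal(size=n))   # random walk: I(1)
d_rw = np.diff(rw)                   # first difference: I(0)

def df_tstat(y):
    """t-statistic on rho in the (non-augmented) Dickey-Fuller regression
    dy_t = c + rho * y_{t-1} + e_t."""
    dy = np.diff(y)
    ylag = y[:-1]
    X = np.column_stack([np.ones(len(ylag)), ylag])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    resid = dy - X @ beta
    s2 = resid @ resid / (len(dy) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

print(df_tstat(rw))    # typically above -2.86: cannot reject a unit root
print(df_tstat(d_rw))  # far below -2.86: stationary after differencing
```

Note that the Dickey-Fuller statistic does not follow a standard t distribution, which is why the special critical values (rather than ±1.96) apply.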

  8. Steps and Interpretation • Note that the error term is basically a linear combination of X and Y: ε_t = Y_t − [α + βX_t] • Indeed, an even more interesting interpretation attaches to this linear combination: it represents a deviation from an equilibrium relationship. • Y is the actual value of the variable under investigation, and the term in square brackets is viewed as its equilibrium value as predicted by its determinant(s). • A positive error term means that the value is above its equilibrium line; a negative value means that it is below its equilibrium line. • Implication: certain variables (either Y or X) must adjust towards the equilibrium → the error term reverts to its zero mean. • These stochastic properties of the variables have greatly extended the modeling of causality.

  9. Steps and Interpretation • Disequilibrium • Suppose the slope coefficient is positive and the error term is positive. Then Y must decline or X must increase for the error term to revert to its zero mean. • This disequilibrium adjustment has been incorporated into the Granger causality framework to form the Vector Error Correction Model (VECM).
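The adjustment mechanism can be sketched as a stripped-down Engle-Granger two-step procedure (the data-generating values – equilibrium Y = 2X, adjustment speed 0.3 – are illustrative assumptions, and the lagged first-difference terms of a full VECM are omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
x = np.cumsum(rng.normal(size=n))   # I(1) driver
y = np.zeros(n)
for t in range(1, n):
    # Y adjusts toward the equilibrium Y = 2*X with speed 0.3
    y[t] = y[t - 1] - 0.3 * (y[t - 1] - 2 * x[t - 1]) + rng.normal()

# Step 1: cointegrating regression Y_t = alpha + beta*X_t + e_t
Z = np.column_stack([np.ones(n), x])
coefs, *_ = np.linalg.lstsq(Z, y, rcond=None)
e = y - Z @ coefs                   # deviation from the equilibrium line

# Step 2: error-correction regression of dY_t on the lagged deviation e_{t-1}
dY = np.diff(y)
W = np.column_stack([np.ones(n - 1), e[:-1]])
b, *_ = np.linalg.lstsq(W, dY, rcond=None)
lam = b[1]
print(lam)  # negative: Y falls when it is above its equilibrium line
```

A significantly negative coefficient on the lagged error term is exactly the disequilibrium adjustment described above: when Y sits above its equilibrium line, Y declines in the next period.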

  10. Steps and Interpretation STANDARD STEPS • Stationarity Tests – to verify the order of integration of the variables. • Cointegration Tests – to establish the long-run co-movements among them. • Model Specification – depending on the pre-tests: - Lag length - VECM vs. VAR - Diagnostics - Bases for inferences: result presentation • Inferences
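The cointegration-test step can be sketched as a residual-based (Engle-Granger style) check: regress one I(1) series on the other and test the residual for stationarity with a simple Dickey-Fuller regression. Everything below – the simulated series, sample size, and use of a non-augmented test – is an illustrative assumption; proper critical values for residual-based tests are more negative than the plain Dickey-Fuller ones:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 800
x = np.cumsum(rng.normal(size=n))        # I(1) process
y = 1.0 + 2.0 * x + rng.normal(size=n)   # cointegrated with x
z = np.cumsum(rng.normal(size=n))        # independent random walk: not cointegrated with y

def df_tstat(s):
    """t-stat on rho in ds_t = c + rho * s_{t-1} + e_t (non-augmented)."""
    ds, slag = np.diff(s), s[:-1]
    X = np.column_stack([np.ones(len(slag)), slag])
    beta, *_ = np.linalg.lstsq(X, ds, rcond=None)
    resid = ds - X @ beta
    s2 = resid @ resid / (len(ds) - X.shape[1])
    cov = s2 * np.linalg.inv(X.T @ X)
    return beta[1] / np.sqrt(cov[1, 1])

def eg_stat(y, x):
    """Dickey-Fuller statistic on the residual of the cointegrating regression."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return df_tstat(y - X @ beta)

print(eg_stat(y, x))  # strongly negative: stationary residual -> cointegration
print(eg_stat(y, z))  # much less negative: no cointegration
```

A stationary residual supports a VECM specification; otherwise a VAR in first differences is the usual fallback, matching the VECM vs. VAR choice in the steps above.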

  11. Steps and Interpretation • Formulation (VECM): ΔY_t = α + λe_{t-1} + Σ_{i=1}^{p} β_i ΔY_{t-i} + Σ_{j=1}^{q} γ_j ΔX_{t-j} + u_t • A distinction is made between two channels of causality: the F test on the lagged first-differenced terms (short-run causality) and the t test on the error correction term (long-run causality). • We tend to be preoccupied with "significance" or "non-significance". This is not sufficient.

  12. Steps and Interpretation • Formulation: the coefficient of the error correction term in the VECM needs to be properly interpreted in terms of which variables adjust toward the equilibrium. • Example: Ibrahim (2007)

  13. Steps and Interpretation • Formulation: inference needs to go beyond the F test of the lagged first-differenced terms. • Normal procedure: - Variance Decompositions - Impulse-Response Functions
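Impulse responses trace the effect of a one-period shock through the estimated dynamics. For a VAR(1), Y_t = A·Y_{t−1} + u_t, the response to a unit shock after h periods is simply A^h, and forecast-error variance decompositions are built from the same moving-average coefficients. The sketch below uses an assumed coefficient matrix and reduced-form (non-orthogonalized) unit shocks, i.e. no Cholesky identification:

```python
import numpy as np

rng = np.random.default_rng(4)
A = np.array([[0.5, 0.3],
              [0.0, 0.6]])          # assumed true VAR(1) matrix (stable)
n = 3000
Y = np.zeros((n, 2))
for t in range(1, n):
    Y[t] = A @ Y[t - 1] + rng.normal(size=2)

# OLS estimate of A: regress Y_t on Y_{t-1} (no intercept, as in the DGP)
B, *_ = np.linalg.lstsq(Y[:-1], Y[1:], rcond=None)
A_hat = B.T

# Impulse responses: entry (i, j) of A^h is the response of variable i
# to a unit shock in variable j, h periods after impact.
irf = [np.linalg.matrix_power(A_hat, h) for h in range(11)]
print(irf[1])   # one-period-ahead responses, close to A
print(irf[10])  # responses die out for a stable system
```

The decay of the responses toward zero is what stability of the VAR guarantees; plotting each entry of `irf` against h gives the familiar impulse-response figures.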

  14. Objectives/Philosophy of Econometrics • The main objective of science – to learn from experience and data through description and induction. • Generalization/induction – making inferences from past experience to predict the future. - Traditional deductive inference is based entirely on proof and disproof. - Deductive inference provides no guide on how to choose among alternative theories. - Inductive inference associates probabilities with propositions (uncertainty is unavoidable). There are logical procedures for quantifying this uncertainty using probability (degree of belief). - It is based on the notion: "all variation is random or nonsystematic unless shown otherwise", i.e. "no effect unless shown otherwise." Zellner, A. (2007), "Philosophy and Objectives of Econometrics," Journal of Econometrics, 136, 331-339.

  15. Objectives/Philosophy of Econometrics • One basic problem in applied econometrics/statistics – seeking only facts that already have an explanation, and labeling results that are not intuitive as puzzles and anomalies. • It is acknowledged that statistical results need to be explained, i.e. an interpretation needs to be made. • However, the objective of applying statistical procedures is not merely to arrive at intuitive results; that is, it is not only to confirm or falsify existing theories. • There is a need to bring the application of statistics to the next level of inductive inference – termed reductive inference. • This involves studying "unusual" or "surprising" facts and devising generalizations to explain them. • In short, USE STATISTICS TO GET UNUSUAL RESULTS. Zellner, A. (2007), "Philosophy and Objectives of Econometrics," Journal of Econometrics, 136, 331-339.

  16. Conclusion • Understanding the objective of science is crucial for becoming better researchers. • Statisticians provide the tools and quantitative frameworks for inference. However, their applications should not lack content. • Econometrics is the integration of the intuition we develop from theories with statistical methods of inference. • In applications: avoid the "significance trap". Add more intuition to significant results. • In applications: the use of statistics is not merely to confirm inferences from deduction but also to uncover new knowledge through unusual facts.
