Economic Statistics Essay

Economic statistics comprises two interrelated fields: data collection and data analysis. Economic statistics is differentiated from other fields of applied statistics by its unique data collection methods and by the scope and scale of its analysis.

Most economic data are collected by governmental or large-scale quasi-governmental agencies, including the United Nations, the World Bank, the International Monetary Fund (IMF), and the various regional development banks. These datasets are often compilations of data provided by the various countries’ central banks. By its nature, macro-level data is nearly impossible for individual researchers to collect. However, the governmental provision of economic data increasingly extends to microeconomic data. In the United States, for example, the most comprehensive individual, or micro-level, data is compiled by the Census Bureau and the Bureau of Labor Statistics.

Economic statistics has traditionally centered on directly measurable concepts, or on concepts that can be defined reasonably precisely. For example, there is less ambiguity in the definition or proper measurement of “income” than in defining the concept of “happiness.” For this reason, the statistical problems posed by large measurement errors require less attention in economics than in fields such as marketing or psychology.

Many economic phenomena can be measured in different ways; indeed, they are often defined in different ways as well. Some countries, for example, compute the inflation rate by adjusting each good’s price by a quality-improvement factor; others do not. For this reason, several agencies and companies have specialized in producing datasets that are internationally comparable. These include the Penn World Tables, the International Financial Statistics database compiled by the IMF, much of the data reported by the Organisation for Economic Co-operation and Development (OECD), and several databases compiled by the United Nations.

Methods

Econometrics is the application of statistical techniques to the analysis of economic data and their interrelationships. Physical scientists, and some social scientists, can often rely upon carefully crafted, controlled experiments with which to collect data and test competing theories. Because large-scale controlled experiments are not feasible at the national level, econometrics has required a unique set of tools.

In earlier statistical research using the regression methodology, the role of regression was to estimate the effect of an exogenous variable X on an endogenous variable Y while holding the other exogenous variables constant. This “holding constant” is accomplished in a statistical sense, since controlled experiments are rare in economics, and it is done simultaneously for many exogenous variables within a single equation: Y is a function of the Xs.
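As a concrete illustration (the notation here is generic and not drawn from the essay itself), this single-equation setup is the familiar multiple regression model:

```latex
Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \cdots + \beta_k X_{ki} + \varepsilon_i
```

Each coefficient β_j is interpreted as the effect on Y of a one-unit change in X_j with the other regressors held constant, the statistical stand-in for a controlled experiment.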

The concept of general equilibrium, however, implies that the economic variables of interest are jointly determined: X causes Y, but Y also causes X. This phenomenon is often termed the “endogeneity problem.” Since all economic data in the United States, for example, are determined within the same national economy, the analysis is significantly complicated. Such endogeneity is, in fact, the cornerstone of economics, as embodied in the supply and demand graphs: a system of two equations, not one. Market prices and quantities are determined by the interaction (indeed, intersection) of supply and demand. Thus, one cannot hold price constant in order to isolate the effects of X on quantity.
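A stylized version of this two-equation system (a textbook illustration, not a formulation taken from the essay) makes the problem explicit:

```latex
\text{Demand: } Q_t = \alpha_0 + \alpha_1 P_t + \alpha_2 X_t + u_t \quad (\alpha_1 < 0)
\text{Supply: } Q_t = \beta_0 + \beta_1 P_t + v_t \quad (\beta_1 > 0)
```

Here X_t stands for an exogenous demand shifter such as income. Because price and quantity are determined jointly by both equations, P_t is correlated with the error terms, and an ordinary regression of quantity on price recovers neither the demand curve nor the supply curve without further identifying information.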

Econometrics as a distinct field of study arguably began with the formation in the early 1930s of the Cowles Commission, the Econometric Society and its journal Econometrica, and in the 1940s with the Department of Applied Economics at Cambridge. Befitting the world’s preoccupation with the global macroeconomic problems of the time, both economic theory and economic statistics became understandably macro-oriented. Especially at Cowles, the aim was to study systems of equations much larger than the simple two-equation supply-and-demand system: dozens of such systems were incorporated into large-scale models of economies, with each sub-market influencing and being influenced by all other markets. During this time the mainstream economic school of thought was Keynesianism, according to which there is a large role for government in controlling the economy; measurement and analysis were thus prerequisites to control. The Keynesian macroeconometricians sought to estimate the parameters of their many economic equations. These parameters were thought to be constants, just as there are physical constants in the hard sciences. Once all of an economy’s parameters were estimated, fine-tuned economic prediction and control could be exercised.

Governmental institutions became engaged in developing truly massive systems of hundreds of equations with which to model their home economies, in an attempt to predict the likely outcomes of proposed economic policies. This method of analysis remained the dominant technique until the 1970s, when very simple time-series models were found to outperform their large-scale brethren. These simple models, developed largely by George Box and Gwilym Jenkins, were usually univariate time-series models that exploited the inertia in economies by using lagged values of the dependent variable as the key predictors.
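The flavor of such a univariate model can be sketched in a few lines. The example below is illustrative only: it uses the Python statsmodels library and a synthetic autoregressive series standing in for an actual macroeconomic indicator, with the lagged value of the series itself serving as the key predictor.

```python
# Minimal Box-Jenkins-style sketch: fit an AR(1)/ARIMA(1,0,0) model to a
# synthetic quarterly growth series and forecast the next few quarters.
# The data and parameter values are illustrative assumptions, not real figures.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
growth = [2.0]
for _ in range(99):
    # Persistence (inertia) in the series: each quarter depends on the last.
    growth.append(0.7 * growth[-1] + rng.normal(scale=0.5))

model = ARIMA(np.asarray(growth), order=(1, 0, 0))  # one lag, no differencing
result = model.fit()

print(result.params)             # estimated constant and AR(1) coefficient
print(result.forecast(steps=4))  # prediction for the next four quarters
```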

In 1976, an influential paper by Nobel Prize-winning economist Robert Lucas introduced what is now known as the “Lucas Critique.” This critique, in effect, pulled the theoretical rug out from under large-scale econometric modeling. Lucas argued that even the estimated parameters were results of the economic process: they were not unchanging and structural, but were themselves endogenous. From that point onward the systems approach was largely abandoned in favor of a return to single-equation models, though these are considerably more complex than the univariate time-series models of Box and Jenkins. (Interestingly, this occurred at roughly the same time that other social sciences turned from single-equation models to multiple-equation “structural equation” models.)

Econometrics sits at the intersection of economic theory and economic data, and the priority of one over the other remains in dispute. For some economists, the primary role of econometrics is to test the validity of economic theories. In the physical sciences, where theory is well established, the functional forms of the equations to be estimated are well defined; such well-defined forms have not been found in the social sciences. For many econometricians, proper practice requires developing a formal model with microfoundations (utility functions, production functions, etc.) as a necessary step prior to estimation. If one theory, for example, maintains that there is a positive relationship between X and Y, but the estimated relationship is negative, then it can be argued that the theory has been falsified. On the other hand, a completely different conclusion can be drawn.

There is often little testing that can be done regarding whether the estimated equation is properly specified in the first place. Thus, many researchers advocate using economic theory as a guide to model selection. In this vein, if an equation is estimated and a negative relationship between X and Y is found, this has not falsified the theory; rather, it has called into question the equation that was said to represent the theory. A competing use of econometrics is therefore the illustration, not the testing, of economic theory, and these researchers adopt a more intuitive approach to model selection. Finally, adherents of Christopher Sims’ theory-free approach eschew theory altogether, preferring to let the data speak for themselves. Sims’ approach recognizes the endogeneity of all economic variables and estimates all of their interrelationships without placing restrictions on what those relationships should be (regardless of what economic theory may imply).
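A minimal sketch of this approach is a vector autoregression (VAR), in which every variable is regressed on lags of all the variables with no theory-based restrictions. The example below is illustrative only: it uses the Python statsmodels VAR implementation on synthetic placeholder series rather than real macroeconomic data.

```python
# Sims-style "theory-free" estimation: a VAR in which output growth, inflation,
# and an interest rate are all treated as endogenous. Series are synthetic.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
data = pd.DataFrame(
    rng.normal(size=(200, 3)),
    columns=["output_growth", "inflation", "interest_rate"],
)

model = VAR(data)
results = model.fit(maxlags=4, ic="aic")  # lag length chosen by AIC, not theory
print(results.summary())

# Impulse responses trace how a shock to one variable feeds through to the others.
irf = results.irf(10)
print(irf.irfs.shape)
```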

While much of this entry has been devoted to macroeconometrics, this is not to say that micro-level econometrics was not practiced throughout this time. However, most such data were at the industry level, and so were necessarily aggregated to some extent. Increasingly, truly micro-level data, that is, data collected at the individual level, are being examined. In the United States, for example, popular micro-level datasets are collected by the Bureau of Labor Statistics and the Census Bureau. Moreover, under the guidance of Vernon Smith, experimental economics has established controlled experiments as a valid means of collecting microeconomic data. Increasingly, economists have been freed from governmental macro-level databases and have begun generating their own micro-level data, tailored to their own research needs.

From the 1930s to the present, econometrics has been shedding its macroeconomic roots and is now largely indistinguishable from the other branches of applied statistics that use the regression approach. The differences lie in the questions that are asked, not in the techniques used to answer them.

Bibliography:   

  1. John Abowd and Lars Vilhuber, “The Sensitivity of Economic Statistics to Coding Errors in Personal Identifiers,” Journal of Business and Economic Statistics (v.23/2, 2005);
  2. Bernard Baumohl, The Secrets of Economic Indicators: Hidden Clues to Future Economic Trends and Investment Opportunities (Wharton School, 2008);
  3. George E. P. Box and G. M. Jenkins, Time Series Analysis: Forecasting and Control (Holden-Day, 1970);
  4. Adrian Darnell and J. Lynne Evans, The Limits of Econometrics (Edward Elgar, 1990);
  5. Norman Frumkin, Guide to Economic Indicators (M.E. Sharpe, 2006);
  6. David F. Hendry, Econometrics: Alchemy or Science? Essays in Econometric Methodology (Blackwell, 1993);
  7. Robert Lucas, “Econometric Policy Evaluation: A Critique,” Carnegie-Rochester Conference Series on Public Policy (v.1, 1976);
  8. Edward F. McKelvey, Understanding US Economic Statistics (Goldman Sachs Economic Research Group, 2008);
  9. Christopher A. Sims, “Macroeconomics and Reality,” Econometrica (v.48, 1980).
