Editorial: Errors in the Variables, Unobserved Heterogeneity, and Other Ways of Hiding Statistical Error

Shugan, Steven M.
May 2006
Marketing Science; May/Jun 2006, Vol. 25 Issue 3, p203
Academic Journal
One research function is proposing new scientific theories; another is testing the falsifiable predictions of those theories. Eventually, sufficient observations reveal valid predictions. For the impatient, behold statistical methods, which attribute inconsistent predictions to either faulty data (e.g., measurement error) or faulty theories. Testing theories, however, differs from estimating unknown parameters in known relationships. When testing theories, curing inconsistencies by adding observed explanatory variables (i.e., variables beyond the theory) is dangerous enough; adding unobserved explanatory variables is more dangerous still. Adding ad hoc explanatory variables mimics experimental controls when experiments are impractical. Assuming unobservable variables is different, partly because realizations of unobserved variables are unavailable for validating estimates. When different statistical assumptions about error produce dramatically different conclusions, we should doubt the theory, the data, or both. Theory tests should be insensitive to assumptions about error, particularly adjustments for error from unobserved variables. These adjustments can fallaciously inflate support for wrong theories, partly by implicitly under-weighting observations inconsistent with the theory. Inconsistent estimates often convey an important message: the data are inconsistent with the theory! Although adjustments for unobserved variables and ex post information are extraordinarily useful when estimating known relationships, requiring researchers to make these adjustments when testing theories is inappropriate.
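As a concrete illustration of the editorial's point, consider the classical errors-in-variables ("attenuation") correction. The following minimal Python sketch is not taken from the editorial; the data-generating process, noise levels, and assumed error variances are illustrative assumptions. It fits a linear "theory" to a regressor observed with measurement error and then applies the standard attenuation correction under several assumed error variances; the corrected slope swings widely with the assumption.

    import numpy as np

    # Illustrative assumptions: the "theory" says y is linear in a latent x*,
    # the true relationship also has a small quadratic term (so the theory is
    # wrong), and only a noisy x = x* + u is observed.
    rng = np.random.default_rng(0)
    n = 2000
    x_star = rng.normal(0.0, 1.0, n)                        # latent regressor
    y = x_star + 0.5 * x_star**2 + rng.normal(0.0, 1.0, n)  # misspecified linear theory
    x_obs = x_star + rng.normal(0.0, 0.5, n)                 # observed with measurement error

    def ols_slope(x, y):
        """Plain least-squares slope of y on x."""
        x_c, y_c = x - x.mean(), y - y.mean()
        return (x_c @ y_c) / (x_c @ x_c)

    b_ols = ols_slope(x_obs, y)

    # Classical attenuation correction: divide the OLS slope by the assumed
    # reliability of x. The "corrected" conclusion depends entirely on an
    # untestable assumption about the unobserved measurement-error variance.
    for assumed_error_var in (0.0, 0.25, 0.5, 0.75):
        reliability = 1.0 - assumed_error_var / x_obs.var()
        print(f"assumed error variance {assumed_error_var:.2f} -> "
              f"corrected slope {b_ols / reliability:.2f}")

Running this sketch yields corrected slopes ranging from roughly 0.8 to 2.0 as the assumed error variance varies, even though the data are unchanged. In the editorial's terms, such sensitivity to assumptions about unobserved error is a reason to doubt the theory or the data, not a license to choose the assumption that rescues the theory.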


Related Articles

  • Confidence Intervals for ℓk,p,θ Distances. Love, Robert F.; Walker, John H.; Tiku, Moti L. // Transportation Science;Feb95, Vol. 29 Issue 1, p93 

    Distance predicting functions have a number of uses when objects in space can be represented as points. When a predicted distance between two points is determined by a distance predicting function, the unknown distance between the points may be overestimated or underestimated due to the...

  • Bootstrap prediction regions for multivariate autoregressive processes. Grigoletto, Matteo // Statistical Methods & Applications;2005, Vol. 14 Issue 2, p179 

    Two new methods for improving prediction regions in the context of vector autoregressive (VAR) models are proposed. These methods, which are based on the bootstrap technique, take into account the uncertainty associated with the estimation of the model order and parameters. In particular, by...

  • A note regarding the problem of perfect multicollinearity. Lazaridis, Alexis // Quality & Quantity;1986, Vol. 20 Issue 2/3, p297 

    In this paper, the problem of multicollinearity is considered and a way to overcome such singular cases is analysed. The method is based upon some useful properties of the generalized inverse.

  • Asymptotic distribution of quasi-maximum likelihood estimation of dynamic panels using long difference transformation when both N and T are large. Hsiao, Cheng; Zhou, Qiankun // Statistical Methods & Applications;Nov2016, Vol. 25 Issue 4, p675 

    This note shows that the asymptotic properties of the quasi-maximum likelihood estimation for dynamic panel models can be easily derived by following the approach of Grassetti (Stat Methods Appl 20:221-240, 2011) to take the long difference to remove the time-invariant individual specific effects.

  • An example and exercise in statistical estimation...  // Armed Forces Comptroller;Winter98, Vol. 43 Issue 1, p28 

    Presents steps in statistical estimation. Prediction variable in predicting operating costs; Building a prediction model; Guessing with the model.

  • Selective Branch Inversion: Confidence Estimation for Branch Predictors. Klauser, Artur; Manne, Srilatha; Grunwald, Dirk // International Journal of Parallel Programming;Feb2001, Vol. 29 Issue 1, p81 

    This paper describes a family of branch predictors that use confidence estimation to improve the performance of an underlying branch predictor. This method, referred to as Selective Branch Inversion (SBI), uses a confidence estimator to determine when the branch direction prediction is likely to...

  • Predicting Response Speed in Mail Surveys. Huxley, Stephen J. // Journal of Marketing Research (JMR);Feb1980, Vol. 17 Issue 1, p63 

    All mail surveys encounter resistance when they arrive at their destination because they must compete with all other elements in the respondent's environment for his or her time and cooperation. This resistance varies in degree from survey to survey but the pattern of responses it generates over...

  • Semiparametric Estimation of Regression Models for Panel Data. Horowitz, Joel L.; Markatou, Marianthi // Review of Economic Studies;Jan96, Vol. 63 Issue 1, p145 

    Linear models with error components are widely used to analyse panel data. Some applications of these models require knowledge of the probability densities of the error components. Existing methods handle this requirement by assuming that the densities belong to known parametric families of...

  • Insights into latent class analysis of diagnostic test performance. Margaret Sullivan Pepe // Biostatistics;Apr2007, Vol. 8 Issue 2, p474 

    Latent class analysis is used to assess diagnostic test accuracy when a gold standard assessment of disease is not available but results of multiple imperfect tests are. We consider the simplest setting, where 3 tests are observed and conditional independence (CI) is assumed. Closed-form...

