TITLE

IDENTIFYING MOST INFLUENTIAL OBSERVATIONS IN FACTOR ANALYSIS

AUTHOR(S)
Chatterjee, Sangit; Jamieson, Linda; Wiseman, Frederick
PUB. DATE
March 1991
SOURCE
Marketing Science;Spring91, Vol. 10 Issue 2, p145
SOURCE TYPE
Academic Journal
DOC. TYPE
Article
ABSTRACT
At the mathematical level, a factor or principal component of a factor analysis is simply a linear combination of variables under some constraints. Therefore, as in regression analysis, there are conditions under which individual or joint observations can be influential in the sense that their presence or absence significantly influences the obtained values of the estimated factor loadings. The nature of these effects as well as potential effects due to "gross errors" in the data set should be investigated in order to determine which observations, if any, need to be analyzed separately or excluded entirely. The purpose of this paper is (1) to propose a new technique for identifying influential observations and observations containing "gross errors" and (2) to discuss situations under which each is likely to significantly alter the results of a factor analysis.
ACCESSION #
4476481
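
The leave-one-out intuition behind the abstract can be sketched in a few lines: refit the first principal component with each observation deleted and measure how far the loadings move. The Python sketch below is only an illustration of that idea on synthetic data, not the diagnostic the paper itself proposes; the function names and the planted outlier are assumptions for the example.

    import numpy as np

    def first_pc_loadings(X):
        """Loadings (unit eigenvector) of the first principal component of X."""
        cov = np.cov(X, rowvar=False)            # np.cov column-centres internally
        eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
        v = eigvecs[:, -1]                       # eigenvector of the largest eigenvalue
        # Fix the arbitrary sign so loadings from different fits are comparable.
        return v if v[np.argmax(np.abs(v))] >= 0 else -v

    def loo_loading_angles(X):
        """Angle (radians) between full-sample and leave-one-out loadings."""
        full = first_pc_loadings(X)
        angles = np.empty(len(X))
        for i in range(len(X)):
            loo = first_pc_loadings(np.delete(X, i, axis=0))
            cos = np.clip(abs(full @ loo), 0.0, 1.0)   # |cos| ignores sign flips
            angles[i] = np.arccos(cos)
        return angles

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 4))
    X[0] += 8.0                          # plant one "gross error" observation
    angles = loo_loading_angles(X)
    print("most influential row:", int(np.argmax(angles)))   # expect row 0

A large angle for a single row signals an observation whose presence or absence materially shifts the loadings, which is the kind of case the paper argues should be analyzed separately or excluded.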


Related Articles

  • Assimilating non-local observations with a local ensemble Kalman filter. Fertig, Elana J.; Hunt, Brian R.; Ott, Edward; Szunyogh, Istvan // Tellus: Series A;Oct2007, Vol. 59 Issue 5, p719 

    Many ensemble data assimilation schemes utilize spatial localization so that a small ensemble can capture the unstable degrees of freedom in the model state. These local ensemble-based schemes typically allow the analysis at a given location to depend only on observations near that location....

  • Inference for clustered data using the independence loglikelihood. Richard E. Chandler; Steven Bate // Biometrika;Mar2007, Vol. 94 Issue 1, p167 

    We use the properties of independence estimating equations to adjust the ‘independence’ loglikelihood function in the presence of clustering. The proposed adjustment relies on the robust sandwich estimator of the parameter covariance matrix, which is easily calculated. The...

  • ON MUTHÉN'S MAXIMUM LIKELIHOOD FOR TWO-LEVEL COVARIANCE STRUCTURE MODELS. Ke-Hai Yuan; Hayashi, Kentaro // Psychometrika;Mar2005, Vol. 70 Issue 1, p147

    Data in social and behavioral sciences are often hierarchically organized. Special statistical procedures that take into account the dependence of such observations have been developed. Among procedures for 2-level covariance structure analysis, Muthén's maximum likelihood (MUML) has the...

  • ON RELATIONSHIPS BETWEEN UNCENTRED AND COLUMN-CENTRED PRINCIPAL COMPONENT ANALYSIS. Cadima, Jorge; Jolliffe, Ian // Pakistan Journal of Statistics;2009, Vol. 25 Issue 4, p473 

    Principal component analysis (PCA) can be seen as a singular value decomposition (SVD) of a column-centred data matrix. In a number of applications, no pre-processing of the data is carried out, and it is the uncentred data matrix that is subjected to an SVD, in what is often called an uncentred... (a brief centred-versus-uncentred SVD sketch appears after this list).

  • The Geometry of Statistical Efficiency and Matrix Statistics. Gustafson, K. // Journal of Applied Mathematics & Decision Sciences;2007, Vol. 2007 Issue 1, p1 

    We will place certain parts of the theory of statistical efficiency into the author's operator trigonometry (1967), thereby providing new geometrical understanding of statistical efficiency. Important earlier results of Bloomfield and Watson, Durbin and Kendall, Rao and Rao, will be so...

  • Chapter 27: Analysis of Covariance. Palaniswamy, Usha Rani; Palaniswamy, Kodiveri Muniyappa // Handbook of Statistics for Teaching & Research in Plant & Crop S;2006, p519 

    Discusses several methods to prevent errors in the combined use of analysis of variance and regression analysis techniques. Principles of covariance analysis; Reasons for covariance analysis; Uses of covariance analysis.

  • Estimation of a covariance matrix with zeros. Sanjay Chaudhuri; Mathias Drton; Thomas S. Richardson // Biometrika;Mar2007, Vol. 94 Issue 1, p199 

    We consider estimation of the covariance matrix of a multivariate random vector under the constraint that certain covariances are zero. We first present an algorithm, which we call iterative conditional fitting, for computing the maximum likelihood estimate of the constrained covariance matrix,...

  • The Top Ten Reasons for Viewing Reliability as a Regression Coefficient. Stauffer, Joseph M. // Insights to a Changing World Journal;2013, Vol. 2013 Issue 4, p115 

    In the world of things essentially tau-equivalent, where the reliability of X is traditionally represented by rXX', it is far better to think of reliability as a regression coefficient rather than as a correlation. This note offers ten good reasons for doing so.

  • Students' Compendium-Statistics. Gosling, G. P. // Management Services;Nov79, Vol. 23 Issue 11, p24 

    This article focuses on the detailed explanation of the technique of multiple regression analysis (MRA) in a work measurement context. The regression surface in MRA can be defined by the equation Ŷ = a₀ + a₁x₁ + a₂x₂ + a₃x₃ + … + aₚxₚ, where...
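
As a concrete illustration of the MRA surface quoted in the last entry above, the following sketch fits a₀…aₚ by ordinary least squares on synthetic data; every name and value in it is an assumption for the example, not material from the article.

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 40, 3
    X = rng.normal(size=(n, p))                    # predictors x1..xp
    true_a = np.array([2.0, 0.5, -1.0, 3.0])       # "true" a0..a3 for the simulation
    y = true_a[0] + X @ true_a[1:] + rng.normal(scale=0.1, size=n)

    A = np.column_stack([np.ones(n), X])           # design matrix [1, x1, ..., xp]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares estimates of a0..ap
    y_hat = A @ coef                               # fitted values Ŷ
    print("estimated coefficients:", np.round(coef, 2))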
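
And, as promised after the Cadima and Jolliffe entry, here is a minimal comparison of column-centred and uncentred SVD on synthetic stand-in data. With a large common mean, the first uncentred axis chases the mean direction rather than the direction of greatest variance.

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.normal(loc=5.0, size=(30, 4))     # data with a nonzero mean

    # Column-centred PCA: SVD of the mean-centred matrix.
    Xc = X - X.mean(axis=0)
    _, s_centred, Vt_centred = np.linalg.svd(Xc, full_matrices=False)

    # Uncentred "PCA": SVD of the raw data matrix.
    _, s_raw, Vt_raw = np.linalg.svd(X, full_matrices=False)

    print("centred first axis:  ", np.round(Vt_centred[0], 2))
    print("uncentred first axis:", np.round(Vt_raw[0], 2))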
