Results 1-10 of 28
A Unifying Review of Linear Gaussian Models
, 1999
"... Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observa ..."
Abstract

Cited by 263 (17 self)
 Add to MetaCart
Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observations and derivations made by many previous authors and introducing a new way of linking discrete and continuous state models using a simple nonlinearity. Through the use of other nonlinearities, we show how independent component analysis is also a variation of the same basic generative model. We show that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive observation noise. We also review some of the literature involving global and local mixtures of the basic models and provide pseudocode for inference and learning for all the basic models.
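The single basic generative model this abstract refers to is the linear Gaussian model. As a minimal simulation sketch of its factor-analysis case (all dimensions and parameter values below are illustrative assumptions, not taken from the paper), data are generated as x = Λz + noise with z standard normal and diagonal noise covariance Ψ, so the implied marginal covariance ΛΛᵀ + Ψ can be checked against a sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
p, k, n = 5, 2, 200_000  # observed dim, latent dim, samples

Lambda = rng.normal(size=(p, k))          # factor loadings
psi = rng.uniform(0.5, 1.5, p)            # diagonal observation noise variances

# Generative model: z ~ N(0, I_k), x = Lambda z + noise, noise ~ N(0, diag(psi)).
z = rng.normal(size=(n, k))
noise = rng.normal(size=(n, p)) * np.sqrt(psi)
x = z @ Lambda.T + noise

# The implied marginal covariance of x is Lambda Lambda^T + diag(psi).
model_cov = Lambda @ Lambda.T + np.diag(psi)
sample_cov = np.cov(x, rowvar=False)
print(np.max(np.abs(model_cov - sample_cov)))  # small for large n
```

Restricting or modifying Λ, Ψ, and the latent distribution recovers the other models the review lists (PCA, mixtures, and so on).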
Bayesian Model Assessment In Factor Analysis
, 2004
"... Factor analysis has been one of the most powerful and flexible tools for assessment of multivariate dependence and codependence. Loosely speaking, it could be argued that the origin of its success rests in its very exploratory nature, where various kinds of datarelationships amongst the variable ..."
Abstract

Cited by 57 (8 self)
 Add to MetaCart
Factor analysis has been one of the most powerful and flexible tools for assessment of multivariate dependence and codependence. Loosely speaking, it could be argued that the origin of its success rests in its very exploratory nature, where various kinds of data relationships amongst the variables under study can be iteratively verified and/or refuted. Bayesian inference in factor analytic models has received renewed attention in recent years, partly due to computational advances but also partly to applied focuses generating factor structures as exemplified by recent work in financial time series modeling. The focus of our current work is on exploring questions of uncertainty about the number of latent factors in a multivariate factor model, combined with methodological and computational issues of model specification and model fitting. We explore reversible jump MCMC methods that build on sets of parallel Gibbs sampling-based analyses to generate suitable empirical proposal distributions and that address the challenging problem of finding efficient proposals in high-dimensional models. Alternative MCMC methods based on bridge sampling are discussed, and these fully Bayesian MCMC approaches are compared with a collection of popular model selection methods in empirical studies.
Factor analysis using delta-rule wake-sleep learning
 Neural Computation
, 1997
"... We describe a linear network that models correlations between realvalued visible variables using one or more realvalued hidden variables — a factor analysis model. This model can be seen as a linear version of the “Helmholtz machine”, and its parameters can be learned using the “wakesleep ” metho ..."
Abstract

Cited by 24 (3 self)
 Add to MetaCart
We describe a linear network that models correlations between real-valued visible variables using one or more real-valued hidden variables — a factor analysis model. This model can be seen as a linear version of the “Helmholtz machine”, and its parameters can be learned using the “wake-sleep” method, in which learning of the primary “generative” model is assisted by a “recognition” model, whose role is to fill in the values of hidden variables based on the values of visible variables. The generative and recognition models are jointly learned in “wake” and “sleep” phases, using just the delta rule. This learning procedure is comparable in simplicity to Oja’s version of Hebbian learning, which produces a somewhat different representation of correlations in terms of principal components. We argue that the simplicity of wake-sleep learning makes factor analysis a plausible alternative to Hebbian learning as a model of activity-dependent cortical plasticity.
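The wake-sleep idea described above can be sketched for a one-factor model. The sizes, learning rate, noise scale, and iteration count below are illustrative assumptions, not the paper's settings; the recognition model is given the same fixed noise scale as the generative model for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical one-factor data source: x = true_g * y + noise.
p = 4
true_g = np.array([1.0, 0.8, -0.5, 0.3])
sigma = 0.1  # assumed fixed noise scale

def sample_x():
    y = rng.normal()
    return true_g * y + sigma * rng.normal(size=p)

# Generative weights g and recognition weights r, learned jointly.
g = rng.normal(size=p) * 0.1
r = rng.normal(size=p) * 0.1
lr = 0.02

for _ in range(30_000):
    # Wake phase: the recognition model fills in y for a real x,
    # then the generative weights get a delta-rule update.
    x = sample_x()
    y = r @ x + sigma * rng.normal()
    g += lr * (x - g * y) * y
    # Sleep phase: dream (y, x) from the generative model,
    # then the recognition weights get a delta-rule update.
    y_s = rng.normal()
    x_s = g * y_s + sigma * rng.normal(size=p)
    r += lr * (y_s - r @ x_s) * x_s
```

After training, g should align (up to sign, which is not identifiable) with the true loading direction, using nothing more than local delta-rule updates in each phase.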
International stock return comovements
 Journal of Finance
, 2009
"... We examine international stock return comovements using countryindustry and countrystyle portfolios as the base portfolios. We first establish that parsimonious riskbased factor models capture the covariance structure of the data better than the popular HestonRouwenhorst (1994) model. We then es ..."
Abstract

Cited by 23 (4 self)
 Add to MetaCart
We examine international stock return comovements using country-industry and country-style portfolios as the base portfolios. We first establish that parsimonious risk-based factor models capture the covariance structure of the data better than the popular Heston-Rouwenhorst (1994) model. We then establish the following stylized facts regarding stock return comovements. First, we do not find evidence for an upward trend in return correlations, except for the European stock markets. Second, the increasing importance of industry factors relative to country factors was a short-lived, temporary phenomenon. Third, we find that large growth stocks are more correlated across countries than are small value stocks, and that the difference has increased over time. JEL Classification: C52, G11, G12.
Fitting vast dimensional time-varying covariance models, Oxford Financial Research Centre, Financial Economics Working Paper n
, 2008
"... Building models for high dimensional portfolios is important in risk management and asset allocation. Here we propose a novel and fast way of estimating models of timevarying covariances that overcome an undiagnosed incidental parameter problem which has troubled existing methods when applied to hu ..."
Abstract

Cited by 18 (3 self)
 Add to MetaCart
Building models for high dimensional portfolios is important in risk management and asset allocation. Here we propose a novel and fast way of estimating models of time-varying covariances that overcome an undiagnosed incidental parameter problem which has troubled existing methods when applied to hundreds or even thousands of assets. Indeed we can handle the case where the cross-sectional dimension is larger than the time series one. The theory of this new strategy is developed in some detail, allowing formal hypothesis testing to be carried out on these models. Simulations are used to explore the performance of this inference strategy while empirical examples are reported which show the strength of this method. The out-of-sample hedging performance of various models estimated using this method is compared.
Single factor analysis by MML estimation
Journal of the Royal Statistical Society (Series B)
, 1992
"... The Minimum Message Length (MML) technique is applied to the problem of estimating the parameters of a multivariate Gaussian model in which the correlation structure is modelled by a single common factor. Implicit estimator equations are derived and compared with those obtained from a Maximum Likeli ..."
Abstract

Cited by 16 (4 self)
 Add to MetaCart
The Minimum Message Length (MML) technique is applied to the problem of estimating the parameters of a multivariate Gaussian model in which the correlation structure is modelled by a single common factor. Implicit estimator equations are derived and compared with those obtained from a Maximum Likelihood (ML) analysis. Unlike ML, the MML estimators remain consistent when used to estimate both the factor loadings and factor scores. Tests on simulated data show the MML estimates to be on average more accurate than the ML estimates when the former exist. If the data show little evidence for a factor, the MML estimate collapses. It is shown that the condition for the existence of an MML estimate is essentially that the log likelihood ratio in favour of the factor model exceed the value expected under the null (no-factor) hypothesis.
Analysis of covariance structures under elliptical distributions
 Journal of the American Statistical Association
, 1987
"... This article examines the adjustment of normal theory methods for the analysis of covariance structures to make them applicable under the class of elliptical distributions. It is shown that if the model satisfies a mild scale invariance condition and the data have an elliptical distribution, the asy ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
This article examines the adjustment of normal theory methods for the analysis of covariance structures to make them applicable under the class of elliptical distributions. It is shown that if the model satisfies a mild scale invariance condition and the data have an elliptical distribution, the asymptotic covariance matrix of sample covariances has a structure that results in the retention of many of the asymptotic properties of normal theory methods. If a scale adjustment is applied, the likelihood ratio tests of fit have the usual asymptotic chi-squared distributions. Difference tests retain their property of asymptotic independence, and maximum likelihood estimators retain their relative asymptotic efficiency within the class of estimators based on the sample covariance matrix. An adjustment to the asymptotic covariance matrix of normal theory maximum likelihood estimators for elliptical distributions is provided. This adjustment is particularly simple in models for patterned covariance or correlation matrices. These results apply not only to normal theory maximum likelihood methods but also to a class of minimum discrepancy methods. Similar results also apply when certain robust estimators of the covariance matrix are employed.
Probabilistic Model-based Multisensor Image Fusion
, 1999
"... : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : xii 1 Introduction : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : : 1 1.1 Single Sensor Computational Vision System . . . . . . . . . . . . . . . . . . 3 1.2 Multisensor Image Fusion System . ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
Table of contents (indexed in place of an abstract): 1 Introduction; 1.1 Single Sensor Computational Vision System; 1.2 Multisensor Image Fusion System; 1.2.1 Application of fusion for navigation guidance in aviation; 1.3 Issues in Fusing Imagery from Multiple Sensors; 1.3.1 Mismatch in features; 1.3.2 Combination of multisensor images; 1.3.3 Geometric representations of imagery from different sensors; 1.3.4 Registration of misaligned image features; 1.3.5 Spatial resolution of different sensors; 1.3.6 Differences in frame rate.
Unsupervised learning
 In The MIT Encyclopedia of the Cognitive Sciences
, 1999
"... Adaptation is a ubiquitous neural and psychological phenomenon, with a wealth of instantiations and implications. Although a basic form of plasticity, it has, bar some notable exceptions, attracted computational theory of only one main variety. In this paper, we study adaptation from the perspective ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
Adaptation is a ubiquitous neural and psychological phenomenon, with a wealth of instantiations and implications. Although a basic form of plasticity, it has, bar some notable exceptions, attracted computational theory of only one main variety. In this paper, we study adaptation from the perspective of factor analysis, a paradigmatic technique of unsupervised learning. We use factor analysis to reinterpret a standard view of adaptation, and apply our new model to some recent data on adaptation in the domain of face discrimination.
Maximum Likelihood Estimation of Factor Analysis Using the ECME Algorithm with Complete and Incomplete Data
 Statist. Sinica
, 1998
"... Factor analysis is a standard tool in educational testing contexts, which can be fit using the EM algorithm (Dempster, Laird, and Rubin, 1977). An extension of EM, called the ECME algorithm (Liu and Rubin, 1994), can be used to obtain ML estimates more efficiently in factor analysis models. ECME has ..."
Abstract

Cited by 5 (2 self)
 Add to MetaCart
Factor analysis is a standard tool in educational testing contexts, which can be fit using the EM algorithm (Dempster, Laird, and Rubin, 1977). An extension of EM, called the ECME algorithm (Liu and Rubin, 1994), can be used to obtain ML estimates more efficiently in factor analysis models. ECME has an E-step, identical to the E-step of EM, but instead of EM's M-step, it has a sequence of CM (conditional maximization) steps, each of which maximizes either the constrained expected complete-data log-likelihood, as with the ECM algorithm (Meng and Rubin, 1993), or the constrained actual log-likelihood. For factor analysis, we use two CM steps: the first maximizes the expected complete-data log-likelihood over the factor loadings given fixed uniquenesses, and the second maximizes the actual likelihood over the uniquenesses given fixed factor loadings. We also describe EM and ECME for ML estimation of factor analysis from incomplete data, which arise in applications of factor analysis in educational testing contexts.
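The EM baseline that ECME accelerates can be sketched briefly. The E-step below is the one the abstract says ECME shares with EM; the final uniqueness update is EM's M-step formula, at the point where ECME's second CM step would instead maximize the actual log-likelihood over the uniquenesses. All dimensions and the simulated data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate from a 2-factor model (hypothetical sizes for illustration).
p, k, n = 6, 2, 5000
L_true = rng.normal(size=(p, k))
psi_true = rng.uniform(0.3, 0.7, p)
Z = rng.normal(size=(n, k))
X = Z @ L_true.T + rng.normal(size=(n, p)) * np.sqrt(psi_true)
S = np.cov(X, rowvar=False, bias=True)  # sample covariance (MLE form)

L = rng.normal(size=(p, k)) * 0.1
psi = np.ones(p)
for _ in range(500):
    # E-step (shared by EM and ECME): posterior moments of the factors.
    Sigma = L @ L.T + np.diag(psi)
    beta = L.T @ np.linalg.inv(Sigma)            # k x p regression of z on x
    Ezz = np.eye(k) - beta @ L + beta @ S @ beta.T
    # Loadings update (ECME's first CM step, given fixed uniquenesses).
    L = S @ beta.T @ np.linalg.inv(Ezz)
    # EM's uniqueness update; ECME would replace this with a maximization
    # of the actual log-likelihood over psi given L.
    psi = np.maximum(np.diag(S - L @ beta @ S), 1e-6)  # guard Heywood cases

# Fitted covariance should approximate the sample covariance.
fit = L @ L.T + np.diag(psi)
print(np.max(np.abs(fit - S)))
```

EM's slow convergence in exactly this uniqueness direction is what motivates ECME's direct maximization of the actual likelihood there.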