Results 1 - 4 of 4
Bayesian Estimation and Testing of Structural Equation Models
Psychometrika, 1999
Abstract

Cited by 27 (8 self)
The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those based on the maximum likelihood solution, e.g., output from LISREL or EQS. In small samples, however, the likelihood surface is not Gaussian and in some cases contains local maxima. Nevertheless, the Gibbs sample comes from the correct posterior distribution over the parameters regardless of the sample size and the shape of the likelihood surface. With an informative prior distribution over the parameters, the posterior can be used to make inferences about the parameters of underidentified models, as we illustrate on a simple errors-in-variables model.
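The workflow the abstract describes — draw from full conditionals, then read point and interval estimates off the draws — can be illustrated with a minimal sketch. This is not the authors' SEM sampler; it is a toy Gibbs sampler for a bivariate normal with an assumed correlation `rho`, chosen because its full conditionals are known in closed form.

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples=5000, seed=0):
    """Toy Gibbs sampler: draws from a standard bivariate normal with
    correlation rho by alternating the two full conditionals. A real
    SEM sampler would cycle through loadings, path coefficients and
    error variances in the same way."""
    rng = np.random.default_rng(seed)
    x = y = 0.0
    draws = np.empty((n_samples, 2))
    sd = np.sqrt(1.0 - rho**2)
    for t in range(n_samples):
        x = rng.normal(rho * y, sd)   # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, sd)   # y | x ~ N(rho*x, 1 - rho^2)
        draws[t] = (x, y)
    return draws

draws = gibbs_bivariate_normal(0.8)
# Point estimates and interval estimates come straight from the draws,
# with no Gaussian approximation to the likelihood surface:
post_mean = draws.mean(axis=0)
ci = np.percentile(draws[:, 0], [2.5, 97.5])
```

Because the draws come from the exact posterior, the same summaries remain valid in small samples where a Gaussian approximation around the ML solution would not.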
Maximum Likelihood Estimation of Factor Analysis Using the ECME Algorithm with Complete and Incomplete Data
Statist. Sinica, 1998
Abstract

Cited by 5 (2 self)
Factor analysis is a standard tool in educational testing contexts, which can be fit using the EM algorithm (Dempster, Laird, and Rubin, 1977). An extension of EM, called the ECME algorithm (Liu and Rubin, 1994), can be used to obtain ML estimates more efficiently in factor analysis models. ECME has an E-step, identical to the E-step of EM, but instead of EM's M-step, it has a sequence of CM (conditional maximization) steps, each of which maximizes either the constrained expected complete-data log-likelihood, as with the ECM algorithm (Meng and Rubin, 1993), or the constrained actual log-likelihood. For factor analysis, we use two CM steps: the first maximizes the expected complete-data log-likelihood over the factor loadings given fixed uniquenesses, and the second maximizes the actual likelihood over the uniquenesses given fixed factor loadings. We also describe EM and ECME for ML estimation of factor analysis from incomplete data, which arise in applications of factor analysis in educational testing contexts.
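The two-CM-step structure is easiest to see next to the plain EM baseline it modifies. Below is a sketch of EM for a one-factor model; the function name, initialization, and update layout are illustrative assumptions, not the paper's code. ECME would keep the E-step and the loadings update, but replace the closed-form uniquenesses update at the end with a CM step that maximizes the actual log-likelihood over `psi` with `lam` held fixed.

```python
import numpy as np

def em_one_factor(X, n_iter=500):
    """Plain EM for a one-factor model x = lam * f + e, with
    f ~ N(0, 1) and e ~ N(0, diag(psi)); X is assumed centred.
    ECME would swap the psi update below for a CM step on the
    actual log-likelihood with lam fixed."""
    n, p = X.shape
    S = X.T @ X / n                                # sample covariance
    lam = np.full(p, 0.5)                          # illustrative start values
    psi = np.diag(S).copy()
    for _ in range(n_iter):
        # E-step: posterior moments of the factor given each x_i
        sigma = np.outer(lam, lam) + np.diag(psi)
        beta = np.linalg.solve(sigma, lam)         # E[f | x] = beta' x
        c_xf = S @ beta                            # (1/n) sum_i x_i E[f_i]
        c_ff = 1.0 - beta @ lam + beta @ S @ beta  # (1/n) sum_i E[f_i^2]
        # CM step 1 analogue: loadings given fixed uniquenesses
        lam = c_xf / c_ff
        # EM's psi update (expected complete-data log-likelihood)
        psi = np.diag(S) - lam * c_xf
    return lam, psi
```

The speed-up in ECME comes from the second step chasing the actual likelihood rather than its expected complete-data surrogate, which reduces the fraction of "missing information" that slows EM down.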
The dynamic ECME algorithm
2009
Abstract

Cited by 4 (1 self)
Summary. The Expectation/Conditional Maximisation Either (ECME) algorithm has proven to be an effective way of accelerating the Expectation-Maximisation (EM) algorithm for many problems. Recognising the limitation of using pre-fixed acceleration subspaces in ECME, we propose a Dynamic ECME (DECME) algorithm which allows the acceleration subspaces to be chosen dynamically. The simplest DECME implementation is what we call DECME-1, which uses the line determined by the two most recent estimates as the acceleration subspace. The investigation of DECME-1 leads to an efficient, simple, stable, and widely applicable DECME implementation, which uses two-dimensional acceleration subspaces and is referred to as DECME-2s. The fast convergence of DECME-2s is established by the theoretical result that, in a small neighbourhood of the maximum likelihood estimate (MLE), it is equivalent to a conjugate direction method. The remarkable accelerating effect of DECME-2s and its variant is also demonstrated with multiple numerical examples.
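The DECME-1 idea — take the line through the two most recent estimates as the acceleration subspace — can be sketched on a toy fixed-point iteration. Everything below (the quadratic objective, the slow map standing in for EM, the step grid) is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

A = np.diag([1.0, 100.0])             # ill-conditioned toy objective
f = lambda x: -0.5 * x @ A @ x        # stands in for the actual log-likelihood
M = lambda x: x - 0.009 * (A @ x)     # slow fixed-point map standing in for EM

def decme1(x0, n_iter=50, grid=(1.0, 2.0, 4.0, 8.0)):
    """DECME-1-style sketch: after each base update, search along the
    line through the two most recent iterates and keep whichever
    candidate scores best on the actual objective."""
    prev, curr = x0, M(x0)
    for _ in range(n_iter):
        d = curr - prev                              # acceleration direction
        cands = [curr + a * d for a in grid] + [curr]
        prev = curr
        curr = M(max(cands, key=f))                  # accept only improvements
    return curr

x0 = np.array([1.0, 1.0])
plain = x0.copy()
for _ in range(51):                   # same number of base-map applications
    plain = M(plain)
accelerated = decme1(x0)
```

Because the candidate set always includes the unaccelerated iterate, the search can never do worse than the base map on the objective, which is what makes this style of acceleration stable.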
Are Visual Cortex Maps Optimised for Coverage?
2002
Abstract
The elegant regularity of maps of variables such as ocular dominance, orientation and spatial frequency in primary visual cortex has prompted many people to suggest that their structure could be explained by an optimisation principle. Up to now, the standard way to test this hypothesis has been to generate artificial maps by optimising a hypothesised objective function, and then to compare these artificial maps with real maps using a variety of quantitative criteria. If the artificial maps are similar to the real maps, this provides some evidence that the real cortex may be optimising a function similar to the one hypothesised. However, a more direct method has recently been proposed for testing whether real maps represent local optima of an objective function (Swindale et al., 2000). In this approach, the value of the hypothesised function is calculated for a real map; the real map is then perturbed in certain ways and the function recalculated. If each of these perturbations leads to a worsening of the function, it is tempting to conclude that the real map is quite likely to represent a local optimum of that function. In the current paper we argue that such perturbation results provide only weak evidence in favour of the optimisation hypothesis.
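The perturbation procedure the abstract critiques can be sketched in a few lines. The map, the smoothness objective, and the swap perturbation below are toy assumptions for illustration, not Swindale et al.'s actual maps or function.

```python
import numpy as np

def smoothness(m):
    """Toy stand-in for a map objective: negative summed squared
    differences between adjacent units (higher = smoother)."""
    return -(np.sum(np.diff(m, axis=0) ** 2) + np.sum(np.diff(m, axis=1) ** 2))

def perturbation_test(m, n_trials=200, seed=0):
    """Perturb the map (here: swap two randomly chosen units) and
    report the fraction of perturbations that worsen the objective.
    A fraction near 1 is consistent with a local optimum -- though,
    as the paper argues, this alone is only weak evidence for it."""
    rng = np.random.default_rng(seed)
    rows, cols = m.shape
    base = smoothness(m)
    worse = 0
    for _ in range(n_trials):
        r = rng.integers(rows, size=2)
        c = rng.integers(cols, size=2)
        p = m.copy()
        p[r[0], c[0]], p[r[1], c[1]] = p[r[1], c[1]], p[r[0], c[0]]
        if smoothness(p) < base:
            worse += 1
    return worse / n_trials

smooth_map = np.add.outer(np.arange(10.0), np.arange(10.0))  # near-optimal map
noisy_map = np.random.default_rng(1).normal(size=(10, 10))   # far from optimal
```

The weakness the paper points to shows up even in this toy: nearly every swap worsens a smooth gradient map, yet the same result would hold for many other objectives that also favour smoothness, so passing the test does not single out the hypothesised function.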