Results 1–10 of 703
Trajectory Clustering with Mixtures of Regression Models
Proceedings of the Fifth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
, 1999
"... In this paper we address the problem of clustering trajectories, namely sets of short sequences of data measured as a function of a dependent variable such as time. Examples include storm path trajectories, longitudinal data such as drug therapy response, functional expression data in computational ..."
Cited by 123 (8 self)
learning is carried out using maximum likelihood principles. Specifically, the EM algorithm is used to cope with the hidden data problem (i.e., the cluster memberships). We also develop generalizations of the method to handle nonparametric (kernel) regression components as well as multi
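The scheme this abstract outlines — EM over hidden cluster memberships with linear-regression components — can be sketched in a few lines. This is a minimal illustration under my own assumptions (function name, random initialisation, and the small ridge/floor constants are not from the paper), not the authors' implementation:

```python
import numpy as np

def em_mixture_of_regressions(x, y, k, n_iter=100, seed=0):
    """EM for a k-component mixture of linear regressions y ~ a_j*x + b_j.

    The cluster memberships are the hidden data: the E step computes
    posterior responsibilities, the M step refits each component by
    weighted least squares.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    X = np.column_stack([x, np.ones(n)])      # design matrix with intercept
    coef = rng.normal(size=(k, 2))            # per-component (slope, intercept)
    sigma2 = np.full(k, y.var() + 1e-6)       # per-component noise variance
    pi = np.full(k, 1.0 / k)                  # mixing weights

    for _ in range(n_iter):
        # E step: r[i, j] proportional to pi_j * N(y_i | X_i @ coef_j, sigma2_j)
        resid = y[:, None] - X @ coef.T
        logp = np.log(pi) - 0.5 * np.log(2 * np.pi * sigma2) - 0.5 * resid**2 / sigma2
        logp -= logp.max(axis=1, keepdims=True)   # stabilise before exponentiating
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)

        # M step: weighted least squares per component
        for j in range(k):
            w = r[:, j]
            Xw = X * w[:, None]
            coef[j] = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(2), Xw.T @ y)
            sigma2[j] = max((w * (y - X @ coef[j]) ** 2).sum() / w.sum(), 1e-8)
        pi = r.mean(axis=0)
    return coef, sigma2, pi, r
```

The nonparametric generalisation the snippet mentions would replace the weighted least-squares M step with a weighted kernel smoother; the E step is unchanged.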
Parameter expansion to accelerate EM: The PXEM algorithm
, 1998
"... The EM algorithm and its extensions are popular tools for modal estimation but are often criticised for their slow convergence. We propose a new method that can often make EM much faster. The intuitive idea is to use a 'covariance adjustment ' to correct the analysis of the M step, capital ..."
Cited by 67 (7 self)
the simplicity and stability of ordinary EM, but has a faster rate of convergence since its M step performs a more efficient analysis. The PXEM algorithm is illustrated for the multivariate t distribution, a random effects model, factor analysis, probit regression and a Poisson imaging model.
Bayesian Parameter Estimation Via Variational Methods
, 1999
"... We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that an accurate variational transformation can be used to obtain a closed form approximation to the posterior distribution of the parameters thereby yielding an approximate posterior predictiv ..."
Cited by 159 (7 self)
of the regression problem gives a latent variable density model, the variational formulation of which leads to exactly solvable EM updates.
Online EM Algorithm for the Normalized Gaussian Network
, 1999
"... A Normalized Gaussian Network (NGnet) (Moody and Darken 1989) is a network of local linear regression units. The model softly partitions the input space by normalized Gaussian functions and each local unit linearly approximates the output within the partition. In this article, we propose a new on ..."
Cited by 90 (6 self)
An importance sampling EM algorithm for latent regression models
 Journal of Educational and Behavioral Statistics
, 2007
"... Reporting methods used in large-scale assessments such as the National Assessment of Educational Progress (NAEP) rely on latent regression models. To fit the latent regression model using the maximum likelihood estimation technique, multivariate integrals must be evaluated. In the computer program ..."
Cited by 5 (4 self)
analysis show that the importance sampling EM method provides a viable alternative to CGROUP for fitting multivariate latent regression models.
Application of stochastic EM method to latent regression models
 ETS Research Report
, 2004
"... The reporting methods used in large-scale assessments such as the National Assessment of Educational Progress (NAEP) rely on a latent regression model. The first component of the model consists of a p-scale IRT measurement model that defines the response probabilities on a set of cognitive ..."
Cited by 3 (3 self)
comparison of CGROUP with a promising implementation of the stochastic EM algorithm that utilizes importance sampling. Simulation studies and real data analysis show that the stochastic EM method provides a viable alternative to CGROUP for fitting multivariate latent regression models.
Note on the EM Algorithm in Linear Regression Model
"... Linear regression model has been used extensively in the fields of information processing and data analysis. In the present paper, we consider the linear model with missing data. Using the EM (Expectation-Maximization) algorithm, the asymptotic variances and the standard errors for the MLE of ..."
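As a concrete, deliberately simple instance of the setting this note describes, here is an EM sketch for a linear model in which some responses are missing; the function name and the missing-response (rather than missing-covariate) assumption are mine. The E step imputes each missing y_i by its conditional mean X_i @ beta and carries its conditional variance sigma^2 into the residual sum of squares; the M step refits by least squares:

```python
import numpy as np

def em_linear_regression_missing_y(X, y, n_iter=200):
    """EM for y = X @ beta + eps, eps ~ N(0, sigma^2), with missing
    responses encoded as np.nan in y."""
    n, d = X.shape
    miss = np.isnan(y)
    obs = ~miss
    # initialise from the complete cases
    beta = np.linalg.lstsq(X[obs], y[obs], rcond=None)[0]
    resid = y[obs] - X[obs] @ beta
    sigma2 = resid @ resid / obs.sum()
    for _ in range(n_iter):
        # E step: expected completed responses given current parameters
        y_comp = np.where(miss, X @ beta, y)
        # M step: least squares on completed data; the missing rows also
        # contribute their conditional variance sigma2 to the RSS
        beta = np.linalg.lstsq(X, y_comp, rcond=None)[0]
        rss = np.sum((y_comp - X @ beta) ** 2) + miss.sum() * sigma2
        sigma2 = rss / n
    return beta, sigma2
```

At the fixed point beta coincides with the complete-case least-squares estimate, which makes a useful sanity check; the note's actual focus — asymptotic variances and standard errors for the MLE — would be computed from the observed information on top of these estimates.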
A variational approach to Bayesian logistic regression models and their extensions
, 1996
"... We consider a logistic regression model with a Gaussian prior distribution over the parameters. We show that accurate variational techniques can be used to obtain a closed form posterior distribution over the parameters given the data thereby yielding a posterior predictive model. The results are st ..."
Cited by 76 (2 self)
Alternative Algorithms for the Estimation of Dynamic Factor, MIMIC, and Varying Coefficient Regression Models
 Journal of Econometrics
, 1983
"... This paper provides a general approach to the formulation and estimation of dynamic unobserved component models. After introducing the general model, two methods for estimating the unknown parameters are presented. Both are algorithms for maximizing the likelihood function. The first is based on the ..."
Cited by 64 (4 self)
on the method of scoring. The second is the EM algorithm, a derivative-free method. Each iteration of EM requires a Kalman filter and smoother followed by straightforward regression calculations. The paper suggests using the EM method to quickly locate a neighborhood of the maximum. Scoring can then be used
High-dimensional regression with noisy and missing data: Provable guarantees with non-convexity
, 2011
"... Although the standard formulations of prediction problems involve fully-observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data, possibly involving dependencies. We study these issues in the context of high-dimensional sparse linear regression, and ..."
Cited by 75 (10 self)