Results 1–7 of 7
A Least-Squares Framework for Component Analysis
, 2009
"... ... (SC) have been extensively used as a feature extraction step for modeling, clustering, classification, and visualization. CA techniques are appealing because many can be formulated as eigenproblems, offering great potential for learning linear and nonlinear representations of data in closedfo ..."
Abstract

Cited by 18 (1 self)
... (SC) have been extensively used as a feature extraction step for modeling, clustering, classification, and visualization. CA techniques are appealing because many can be formulated as eigenproblems, offering great potential for learning linear and nonlinear representations of data in closed form. However, the eigen-formulation often conceals important analytic and computational drawbacks of CA techniques, such as solving generalized eigenproblems with rank-deficient matrices (e.g., the small sample size problem), lacking intuitive interpretation of normalization factors, and understanding commonalities and differences between CA methods. This paper proposes a unified least-squares framework to formulate many CA methods. We show how PCA, LDA, CCA, LE, SC, and their kernel and regularized extensions correspond to a particular instance of least-squares weighted kernel reduced rank regression (LS-WKRRR). The LS-WKRRR formulation of CA methods has several benefits: (1) it provides a clean connection between many CA techniques and an intuitive framework to understand normalization factors; (2) it yields efficient numerical schemes to solve CA techniques; (3) it overcomes the small sample size problem; (4) it provides a framework to easily extend CA methods. We derive new weighted generalizations of PCA, LDA, CCA, and SC, and several novel CA techniques.
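To make the least-squares view concrete, here is a minimal numpy sketch of rank-constrained least-squares regression (plain, unweighted RRR; the paper's LS-WKRRR adds weights and kernels on top of this construction, and all names here are ours):

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Minimise ||Y - X @ B||_F subject to rank(B) <= rank.

    Standard construction: ordinary least squares, then project the
    fitted values onto their leading principal subspace."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)    # full-rank OLS solution
    _, _, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    P = Vt[:rank].T @ Vt[:rank]                      # rank-r projector on response space
    return B_ols @ P                                 # rank-constrained coefficients

# Recover a rank-1 coefficient matrix from noisy data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
B_true = rng.standard_normal((5, 1)) @ rng.standard_normal((1, 4))  # rank 1
Y = X @ B_true + 0.01 * rng.standard_normal((200, 4))
B_hat = reduced_rank_regression(X, Y, rank=1)
```

Setting `rank` equal to the number of responses recovers plain multiple regression; PCA, LDA, and CCA arise from particular choices of inputs, outputs, and weighting, which is the unification the abstract refers to.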
Face Active Appearance Modeling and speech acoustic information to recover articulation
 IEEE Transactions on Audio, Speech and Language Processing
, 2009
"... Abstract—We are interested in recovering aspects of vocal tract’s geometry and dynamics from speech, a problem referred to as speech inversion. Traditional audioonly speech inversion techniques are inherently illposed since the same speech acoustics can be produced by multiple articulatory configu ..."
Abstract

Cited by 15 (5 self)
Abstract—We are interested in recovering aspects of the vocal tract's geometry and dynamics from speech, a problem referred to as speech inversion. Traditional audio-only speech inversion techniques are inherently ill-posed since the same speech acoustics can be produced by multiple articulatory configurations. To alleviate the ill-posedness of the audio-only inversion process, we propose an inversion scheme which also exploits visual information from the speaker's face. The complex audiovisual-to-articulatory mapping is approximated by an adaptive piecewise linear model. Model switching is governed by a Markovian discrete process which captures articulatory dynamic information. Each constituent linear mapping is effectively estimated via canonical correlation analysis. In the described multimodal context, we investigate alternative fusion schemes which allow interaction between the audio and visual modalities at various synchronization levels. For facial analysis, we employ active appearance models (AAMs) and demonstrate fully automatic face tracking and visual feature extraction. Using the AAM features in conjunction with audio features such as Mel frequency cepstral coefficients (MFCCs) or line spectral frequencies (LSFs) leads to effective estimation of the trajectories followed by certain points of interest in the speech production system. We report experiments on the QSMT and MOCHA databases, which contain audio, video, and electromagnetic articulography data recorded in parallel. The results show that exploiting both audio and visual modalities in a multi-stream hidden Markov model based scheme clearly improves performance relative to either audio-only or visual-only estimation. Index Terms—Active appearance models (AAMs), audiovisual-to-articulatory speech inversion, canonical correlation analysis (CCA), multimodal fusion.
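Since each local audiovisual-to-articulatory mapping is estimated via canonical correlation analysis, a compact numpy sketch of CCA may help; this is the textbook SVD-of-whitened-blocks route, not the authors' code, and variable names are ours:

```python
import numpy as np

def cca_first_pair(X, Y):
    """First pair of canonical weights and the canonical correlation,
    via an SVD of the whitened cross-covariance (textbook CCA)."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Ux, Sx, Vxt = np.linalg.svd(Xc, full_matrices=False)  # whiten X block
    Uy, Sy, Vyt = np.linalg.svd(Yc, full_matrices=False)  # whiten Y block
    U, S, Vt = np.linalg.svd(Ux.T @ Uy)   # correlations between whitened blocks
    a = Vxt.T @ (U[:, 0] / Sx)            # canonical weights for X
    b = Vyt.T @ (Vt[0] / Sy)              # canonical weights for Y
    return a, b, S[0]                     # S[0] is the first canonical correlation

# Two views sharing one latent component, as a stand-in for the
# audio and visual feature streams
rng = np.random.default_rng(1)
Z = rng.standard_normal((500, 1))
X = np.hstack([Z, rng.standard_normal((500, 2))])
Y = np.hstack([Z + 0.1 * rng.standard_normal((500, 1)), rng.standard_normal((500, 2))])
a, b, rho = cca_first_pair(X, Y)   # rho is close to 1 for this shared component
```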
Interpreting canonical correlation analysis through biplots of structural correlations and weights
 Psychometrika
, 1990
"... This paper extends the biplot technique to canonical correlation analysis and redundancy analysis, The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of the one set and those of the second. The link between multivariate regression ..."
Abstract

Cited by 15 (1 self)
This paper extends the biplot technique to canonical correlation analysis and redundancy analysis. The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of one set and those of the second. The link between multivariate regression and canonical correlation analysis/redundancy analysis is exploited for producing an optimal biplot that displays a matrix of regression coefficients. This plot can be made from the canonical weights of the predictors and the structure correlations of the criterion variables. An example is used to show how the proposed biplots may be interpreted. Key words: biplot, canonical correlation analysis, canonical weight, interbattery factor analysis, partial analysis, redundancy analysis, regression coefficient, reduced rank regression, structure correlations.
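A structure correlation is just the correlation between an original variable and a canonical variate; the biplot coordinates the paper discusses can be computed in a few lines of numpy (an illustrative sketch, not the paper's software; names are ours):

```python
import numpy as np

def structure_correlations(X, variates):
    """Correlations between the columns of X and the canonical variates.
    Each row gives the biplot coordinates of one original variable."""
    Xs = (X - X.mean(0)) / X.std(0)                      # z-score the variables
    Vs = (variates - variates.mean(0)) / variates.std(0)  # z-score the variates
    return Xs.T @ Vs / len(X)                            # Pearson correlations

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 3))
R = structure_correlations(X, X[:, [0]])  # R[0, 0] == 1: a variable correlates
                                          # perfectly with itself as a variate
```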
Biplots in reduced-rank regression
 Biom. J
, 1994
"... SUMMARY Regression problems with a number of related response variables are typically analyzed by separate multiple regressions. This paper shows how these regressions can be visualized jointly in a biplot based on reducedrank regression. Reducedrank regression combines multiple regression and pri ..."
Abstract

Cited by 5 (0 self)
SUMMARY Regression problems with a number of related response variables are typically analyzed by separate multiple regressions. This paper shows how these regressions can be visualized jointly in a biplot based on reduced-rank regression. Reduced-rank regression combines multiple regression and principal components analysis and can therefore be carried out with standard statistical packages. The proposed biplot highlights the major aspects of the regressions by displaying the least-squares approximation of fitted values, regression coefficients, and associated t-ratios. The utility and interpretation of the reduced-rank regression biplot is demonstrated with an example using public health data that were previously analyzed by separate multiple regressions.
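The "multiple regression plus principal components" route the summary describes can be sketched directly: fit the separate regressions by OLS, take an SVD of the fitted values, and use the factors as biplot markers (our own minimal reading of the construction, with our names):

```python
import numpy as np

def rrr_biplot_markers(X, Y, rank):
    """Row and column markers for a reduced-rank regression biplot.
    rows @ cols.T reproduces the rank-constrained fitted values."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)  # separate multiple regressions
    U, S, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)  # PCA of fitted values
    rows = U[:, :rank] * S[:rank]   # observation markers
    cols = Vt[:rank].T              # response-variable markers
    return rows, cols

rng = np.random.default_rng(3)
X = rng.standard_normal((50, 4))
Y = rng.standard_normal((50, 3))
rows, cols = rrr_biplot_markers(X, Y, rank=2)  # 2-D markers, ready to plot
```

With `rank` set to the number of responses, the markers reproduce the ordinary fitted values exactly; a rank-2 choice gives the planar display the paper uses.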
Multivariate Reduced Rank Regression in Non-Gaussian Contexts, Using Copulas
, 2004
"... ... canonical correlations, principal component analysis. We propose a new procedure to perform Reduced Rank Regression (RRR) in nonGaussian contexts, based on Multivariate Dispersion Models. ReducedRank Multivariate Dispersion Models (RRMDM) generalise RRR to a very large class of distributions, ..."
Abstract

Cited by 1 (0 self)
... canonical correlations, principal component analysis. We propose a new procedure to perform Reduced Rank Regression (RRR) in non-Gaussian contexts, based on Multivariate Dispersion Models. Reduced-Rank Multivariate Dispersion Models (RR-MDM) generalise RRR to a very large class of distributions, which include continuous distributions like the normal, Gamma, and Inverse Gaussian, and discrete distributions like the Poisson and the binomial. A multivariate distribution is created with the help of the Gaussian copula, and estimation is performed using maximum likelihood. We show how this method can be amended to deal with the case of discrete data. We perform Monte Carlo simulations and show that our estimator is more efficient than the traditional Gaussian RRR. In the framework of MDMs we introduce a procedure analogous to canonical correlations, which takes into account the distribution of the data.
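The Gaussian-copula construction at the heart of this abstract, coupling arbitrary margins through a latent multivariate normal, can be sketched in a few lines of numpy/scipy (a simulation sketch only; the paper's maximum-likelihood estimator is not reproduced here, and names are ours):

```python
import numpy as np
from scipy import stats

def gaussian_copula_sample(corr, margins, n, seed=0):
    """Sample from a joint law with the given margins coupled by a
    Gaussian copula: latent normal -> uniforms -> inverse marginal CDFs."""
    rng = np.random.default_rng(seed)
    Z = rng.multivariate_normal(np.zeros(len(corr)), corr, size=n)
    U = stats.norm.cdf(Z)   # each column is marginally uniform on (0, 1)
    return np.column_stack([m.ppf(U[:, j]) for j, m in enumerate(margins)])

# Correlated Gamma and Poisson margins, a continuous/discrete pair
# of the kind the RR-MDM class covers
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
S = gaussian_copula_sample(corr, [stats.gamma(2), stats.poisson(3)], n=4000)
```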