Results 1–10 of 18
Sparse probabilistic projections
Abstract

Cited by 37 (5 self)
We present a generative model for performing sparse probabilistic projections, which includes sparse principal component analysis and sparse canonical correlation analysis as special cases. Sparsity is enforced by means of automatic relevance determination or by imposing appropriate prior distributions, such as generalised hyperbolic distributions. We derive a variational Expectation-Maximisation algorithm for the estimation of the hyperparameters and show that our novel probabilistic approach compares favourably to existing techniques. We illustrate how the proposed method can be applied in the context of cryptanalysis as a preprocessing tool for the construction of template attacks.
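The automatic relevance determination (ARD) mechanism this abstract mentions can be illustrated with a toy sketch (a plain ARD linear regression, not the paper's projection model; all names and constants below are ours): each weight gets its own Gaussian prior precision, and re-estimating those precisions from the posterior drives irrelevant weights toward zero.

```python
import numpy as np

def ard_regression(X, y, n_iter=50, beta=100.0, alpha_max=1e6):
    """Toy automatic relevance determination (ARD) for linear regression.

    Each weight w_j gets its own Gaussian prior N(0, 1/alpha_j); MacKay-style
    updates of alpha_j prune weights the data does not support, which is the
    sparsity mechanism the abstract refers to.
    """
    n_samples, n_features = X.shape
    alpha = np.ones(n_features)                  # per-weight prior precisions
    m = np.zeros(n_features)
    for _ in range(n_iter):
        # Gaussian posterior over weights given the current precisions
        S = np.linalg.inv(beta * X.T @ X + np.diag(alpha))
        m = beta * S @ X.T @ y                   # posterior mean
        # Re-estimate each precision; a large alpha_j switches weight j off
        gamma = 1.0 - alpha * np.diag(S)         # effective degrees of freedom
        alpha = np.clip(gamma / (m ** 2 + 1e-12), 1e-6, alpha_max)
    return m, alpha
```

On data where only one feature is relevant, the precisions of the remaining features grow toward `alpha_max` and their weights collapse to (near) zero.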
Multi-label Prediction via Sparse Infinite CCA
Abstract

Cited by 23 (2 self)
Canonical Correlation Analysis (CCA) is a useful technique for modeling dependencies between two (or more) sets of variables. Building upon the recently suggested probabilistic interpretation of CCA, we propose a nonparametric, fully Bayesian framework that can automatically select the number of correlation components and effectively capture the sparsity underlying the projections. In addition, given (partially) labeled data, our algorithm can also be used as a (semi-)supervised dimensionality reduction technique, and can be applied to learn useful predictive features in the context of learning a set of related tasks. Experimental results demonstrate the efficacy of the proposed approach both for CCA as a standalone problem and when applied to multi-label prediction.
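For reference, the classical (non-Bayesian) CCA that these probabilistic models build on can be computed directly as an SVD of the whitened cross-covariance. A minimal numpy sketch (our own naming, with a small ridge term for numerical stability):

```python
import numpy as np

def cca(X, Y, n_components=1, eps=1e-8):
    """Classical CCA via SVD of the whitened cross-covariance matrix.

    Returns projection directions for each view and the canonical
    correlations (the singular values, each in [0, 1]).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # Inverse matrix square root via eigendecomposition (C is symmetric PD)
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    K = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(K)
    A = inv_sqrt(Cxx) @ U[:, :n_components]    # x-side projection directions
    B = inv_sqrt(Cyy) @ Vt[:n_components].T    # y-side projection directions
    return A, B, s[:n_components]
```

On two views driven by a shared latent variable, the first canonical correlation approaches 1; the Bayesian frameworks above replace the fixed `n_components` with automatic selection.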
Dependency detection with similarity constraints
 In Proc. MLSP’09, IEEE International Workshop on Machine Learning for Signal Processing, 2009
Variational Bayesian matching
 In Proceedings of the Asian Conference on Machine Learning
Abstract

Cited by 4 (2 self)
Matching of samples refers to the problem of inferring unknown co-occurrence or alignment between observations in two data sets. Given two sets of equally many samples, the task is to find for each sample a representative sample in the other set, without prior knowledge of a distance measure between the sets. Recently a few alternative solutions have been suggested, based on maximization of the joint likelihood or of various measures of between-data statistical dependency. In this work we present a variational Bayesian solution for the problem, learning a Bayesian canonical correlation analysis model with a permutation parameter for reordering the samples in one of the sets. We approximate the posterior over the permutations, and demonstrate that the resulting matching algorithm clearly outperforms all of the earlier solutions.
Bayesian exponential family projections for coupled data sources
Abstract

Cited by 3 (0 self)
Exponential family extensions of principal component analysis (EPCA) have received a considerable amount of attention in recent years, demonstrating the growing need for basic modeling tools that do not assume the squared loss or Gaussian distribution. We extend the EPCA model toolbox by presenting the first exponential family versions of two multi-view learning methods, partial least squares and canonical correlation analysis, based on a unified representation of EPCA as matrix factorization of the natural parameters of the exponential family. The models are based on a new family of priors that is generally usable for all such factorizations. We also introduce new inference strategies, and demonstrate how the methods outperform earlier ones when the Gaussianity assumption does not hold.
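The matrix-factorization view of EPCA mentioned above is easy to sketch for the Bernoulli case (binary data, logit link): factorize the natural parameters Theta = U V^T and ascend the log-likelihood gradient. This is a bare-bones illustration without the paper's priors or inference strategies; all names and constants are ours.

```python
import numpy as np

def bernoulli_epca(X, rank=2, n_iter=1000, lr=0.02, seed=0):
    """Toy exponential-family PCA for binary data (Bernoulli likelihood).

    The natural parameters Theta = U @ V.T are factorized, and both factors
    are fit by gradient ascent on the Bernoulli log-likelihood; the gradient
    with respect to Theta is simply X - sigmoid(Theta).
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    U = 0.01 * rng.standard_normal((n, rank))
    V = 0.01 * rng.standard_normal((d, rank))
    for _ in range(n_iter):
        P = 1.0 / (1.0 + np.exp(-(U @ V.T)))   # mean parameters (sigmoid link)
        G = X - P                              # gradient w.r.t. Theta
        U, V = U + lr * G @ V, V + lr * G.T @ U
    return U, V
```

On a binary block-structured matrix, the reconstruction sigmoid(U V^T) recovers the blocks even though squared loss would be the wrong objective for 0/1 data.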
Fast Dependent Components for fMRI Analysis
Abstract

Cited by 2 (0 self)
Published by the IEEE in Taipei, Taiwan.
Multi-Way, Multi-View Learning
Abstract
We extend multi-way, multivariate ANOVA-type analysis to cases where one covariate is the view, with the features of each view coming from different, high-dimensional domains. The different views are assumed to be connected by having paired samples; this is common in our main application, biological experiments integrating data from different sources. Such experiments typically also include a controlled multi-way experimental setup, where disease status, medical treatment group, gender and time of measurement are the usual covariates. We introduce a multi-way latent variable model for this new task by extending the generative model of Bayesian canonical correlation analysis (CCA) both to take multi-way covariate information into account as population priors and to reduce the dimensionality through an integrated factor analysis that assumes the features to come in correlated groups.
Smart PCA
Abstract
PCA can be made smarter, producing more sensible projections. In this paper we propose smart PCA, an extension of standard PCA that regularizes model estimation and incorporates external knowledge into it. Based on the probabilistic interpretation of PCA, the inverse Wishart distribution can be used as an informative conjugate prior for the population covariance, with useful knowledge carried by the prior hyperparameters. We design the hyperparameters to smoothly combine information from both the domain knowledge and the data itself. The Bayesian point estimate of the principal components is available in closed form. In empirical studies, smart PCA shows clear improvement on three different criteria: image reconstruction error, the perceptual quality of the reconstructed images, and pattern recognition performance.
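The conjugacy this abstract relies on can be sketched directly (a simplified version under a zero-mean Gaussian likelihood; `Psi`, `nu`, and all names are our own placeholders, not the paper's): an inverse-Wishart prior IW(Psi, nu) combines with the data scatter S into a posterior IW(Psi + S, nu + n), whose mean gives a closed-form regularized covariance to eigendecompose.

```python
import numpy as np

def smart_pca(X, Psi, nu, n_components=2):
    """Sketch of PCA with an inverse-Wishart conjugate prior on the covariance.

    The posterior mean (Psi + S) / (nu + n - p - 1) blends the prior scale
    matrix Psi (domain knowledge) with the data scatter S; principal
    components are the top eigenvectors of this point estimate.
    """
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc                          # data scatter matrix
    Sigma = (Psi + S) / (nu + n - p - 1)   # posterior mean of the covariance
    w, V = np.linalg.eigh(Sigma)
    order = np.argsort(w)[::-1]            # sort eigenpairs by variance
    return V[:, order[:n_components]], w[order[:n_components]]
```

With a weak prior (e.g. `Psi = np.eye(p)`, small `nu`) this reduces to ordinary PCA plus light shrinkage; a stronger, structured `Psi` is where the external knowledge enters.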
A Bayesian Framework for Multi-Modality Analysis of Mental Health
Abstract
We develop statistical methods for multi-modality assessment of mental health, based on four forms of data: (i) self-reported answers to a set of classical questionnaires, (ii) single-nucleotide polymorphism (SNP) data, (iii) fMRI data measured in response to visual stimuli, and (iv) scores for psychiatric disorders. The data were acquired from hundreds of college students. We utilize the data and model to ask a timely and novel clinical question: can one predict brain activity associated with risk for mental illness and treatment response based on knowledge of how the subject answers questionnaires, and using genetic (SNP) data? Also, in another direction: can one predict an individual’s fundamental propensity for psychopathology based on observed self-report, SNP and fMRI data (separately or in combination)? The data are analyzed with a multi-modality factor model, with sparsity imposed on the factor loadings, linked to the particular type of data modality. The analysis framework encompasses a wide range of problems, such as matrix completion and clustering, leveraging information in all the data sources. We use an efficient variational inference algorithm to fit the model, which is especially flexible in dealing with ordinal-valued views (self-report answers and SNP data). The variational inference is validated with slower but rigorous sampling methods. We demonstrate the effectiveness of the model to perform accurate predictions for clinically relevant brain activity relative to baseline models, and to identify meaningful associations between data views.
Bayesian object matching (DOI: 10.1007/s10994-013-5357-4)
Abstract
Matching of objects refers to the problem of inferring unknown co-occurrence or alignment between observations or samples in two data sets. Given two sets of equally many samples, the task is to find for each sample a representative sample in the other set, without prior knowledge of a distance measure between the sets. Given a distance measure, the problem would correspond to a linear assignment problem: finding a permutation that reorders the samples in one set so as to minimize the total distance. When no such measure is available, we need to consider more complex solutions. Typical approaches maximize statistical dependency between the two sets, whereas in this work we present a Bayesian solution that builds a joint model for the two sources. We learn a Bayesian canonical correlation analysis model that includes a permutation parameter for reordering the samples in one of the sets. We provide both variational and sampling-based inference for approximate Bayesian analysis, and demonstrate on three data sets that the resulting methods outperform the earlier solutions.
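The linear assignment baseline the abstract contrasts with (matching when a distance measure is given) can be sketched by brute force. This is our own toy illustration, feasible only for very small sets; practical solvers use the Hungarian algorithm in O(n^3) instead.

```python
import numpy as np
from itertools import permutations

def match_by_assignment(X, Y):
    """Match each row of X to a row of Y by minimizing total Euclidean distance.

    Enumerates all n! permutations, so only usable for tiny n; the Bayesian
    approach in the paper is for the harder case where no distance is given.
    """
    n = len(X)
    # Pairwise distance matrix D[i, j] = ||X[i] - Y[j]||
    D = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    best_perm, best_cost = None, np.inf
    for perm in permutations(range(n)):
        cost = sum(D[i, perm[i]] for i in range(n))
        if cost < best_cost:
            best_perm, best_cost = perm, cost
    return list(best_perm), best_cost
```

When Y is just a reshuffled copy of X, the recovered permutation undoes the shuffle at zero cost.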