Results 1–9 of 9
Inductive regularized learning of kernel functions
Abstract

Cited by 17 (1 self)
In this paper we consider the fundamental problem of semi-supervised kernel function learning. We first propose a general regularized framework for learning a kernel matrix, and then demonstrate an equivalence between our proposed kernel matrix learning framework and a general linear transformation learning problem. Our result shows that the learned kernel matrices parameterize a linear transformation kernel function and can be applied inductively to new data points. Furthermore, our result gives a constructive method for kernelizing most existing Mahalanobis metric learning formulations. To make our results practical for large-scale data, we modify our framework to limit the number of parameters in the optimization process. We also consider the problem of kernelized inductive dimensionality reduction in the semi-supervised setting. To this end, we introduce a novel method for this problem by considering a special case of our general kernel learning framework where we select the trace norm function as the regularizer. We empirically demonstrate that our framework learns useful kernel functions, improving the kNN classification accuracy significantly in a variety of domains. Furthermore, our kernelized dimensionality reduction technique significantly reduces the dimensionality of the feature space while achieving competitive classification accuracies.
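The abstract's central identity is that a learned positive semi-definite matrix W = L^T L defines both a Mahalanobis metric and a linear-transformation kernel k(x, y) = (Lx)^T (Ly) that can be evaluated on points never seen during training. A minimal sketch of that equivalence (with a randomly drawn L standing in for the paper's learned transformation, which it is not):

```python
import numpy as np

# Hypothetical "learned" transformation L; the paper obtains L (or W)
# from a regularized optimization, which this sketch does not perform.
rng = np.random.default_rng(0)
L = rng.standard_normal((2, 4))
W = L.T @ L                          # PSD Mahalanobis matrix W = L^T L

# Two points never seen during "training":
x_new, y_new = rng.standard_normal(4), rng.standard_normal(4)

k_via_W = x_new @ W @ y_new          # kernel evaluated through W
k_via_L = (L @ x_new) @ (L @ y_new)  # same value through the map x -> Lx
assert np.isclose(k_via_W, k_via_L)

# The induced squared Mahalanobis distance, expanded in kernel terms:
d2 = x_new @ W @ x_new - 2 * k_via_W + y_new @ W @ y_new
assert np.isclose(d2, np.sum((L @ (x_new - y_new)) ** 2))
```

The two assertions show why the learned kernel applies inductively: evaluating W on new points is identical to first mapping them through L.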
Localized sliced inverse regression
, 2008
Abstract

Cited by 8 (4 self)
We develop an extension of sliced inverse regression (SIR) that we call localized sliced inverse regression (LSIR). This method allows for supervised dimension reduction by projection onto a linear subspace that captures the nonlinear subspace relevant to predicting the response. The method is also extended to the semi-supervised setting where one is given labeled and unlabeled data. We introduce a simple algorithm that implements this method and illustrate its utility on real and simulated data.
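For context, classical SIR (the method LSIR localizes; this is the textbook version, not the paper's local variant) estimates effective dimension reduction directions by whitening the covariates, slicing the sorted response, and taking the top eigenvectors of the covariance of slice means:

```python
import numpy as np

def sir_directions(X, y, n_slices=5, n_dirs=1):
    """Estimate e.d.r. directions via classical sliced inverse regression."""
    n, p = X.shape
    mu, cov = X.mean(0), np.cov(X, rowvar=False)
    # Whiten: Z = (X - mu) @ cov^{-1/2}
    evals, evecs = np.linalg.eigh(cov)
    inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Z = (X - mu) @ inv_sqrt
    # Slice the sorted response and average Z within each slice.
    order = np.argsort(y)
    M = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = Z[idx].mean(0)
        M += (len(idx) / n) * np.outer(m, m)
    # Top eigenvectors of the slice-mean covariance, mapped back.
    _, vecs = np.linalg.eigh(M)
    return inv_sqrt @ vecs[:, -n_dirs:]

# Toy check: y depends on X only through the first coordinate.
rng = np.random.default_rng(1)
X = rng.standard_normal((2000, 4))
y = X[:, 0] + 0.1 * rng.standard_normal(2000)
b = sir_directions(X, y).ravel()
b /= np.linalg.norm(b)
assert abs(b[0]) > 0.95  # recovered direction aligns with e_1
```

LSIR replaces the global slice means with locally computed ones, which is what lets it capture the nonlinear structure the abstract refers to.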
Domain Generalization via Invariant Feature Representation
Abstract

Cited by 6 (1 self)
This paper investigates domain generalization: How to take knowledge acquired from an arbitrary number of related domains and apply it to previously unseen domains? We propose Domain-Invariant Component Analysis (DICA), a kernel-based optimization algorithm that learns an invariant transformation by minimizing the dissimilarity across domains, whilst preserving the functional relationship between input and output variables. A learning-theoretic analysis shows that reducing dissimilarity improves the expected generalization ability of classifiers on new domains, motivating the proposed algorithm. Experimental results on synthetic and real-world datasets demonstrate that DICA successfully learns invariant features and improves classifier performance in practice.
FAST SEMI-SUPERVISED DISCRIMINATIVE COMPONENT ANALYSIS
Abstract

Cited by 5 (3 self)
We introduce a method that learns a class-discriminative subspace or discriminative components of data. Such a subspace is useful for visualization, dimensionality reduction, feature extraction, and for learning a regularized distance metric. We learn the subspace by optimizing a probabilistic semiparametric model, a mixture of Gaussians, of the classes in the subspace. The semiparametric modeling leads to fast computation (O(N) for N samples) in each iteration of optimization, in contrast to recent nonparametric methods that take O(N^2) time, while achieving equal accuracy. Moreover, we learn the subspace in a semi-supervised manner from three kinds of data (labeled samples, unlabeled samples, and unlabeled samples with pairwise constraints) with a unified objective.
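The O(N) claim comes from the semiparametric structure: each of the N projected samples is scored against a fixed number K of class Gaussians rather than against all N other samples, as nonparametric neighbour-based methods require. A sketch of that per-iteration responsibility computation, with hypothetical spherical unit-variance class Gaussians (the paper's model details may differ):

```python
import numpy as np

def class_responsibilities(Z, means):
    """Z: (N, d) projected samples; means: (K, d) class Gaussian centres.
    Cost is O(N * K * d) per call, linear in N for fixed K."""
    d2 = ((Z[:, None, :] - means[None, :, :]) ** 2).sum(-1)  # (N, K)
    logp = -0.5 * d2
    logp -= logp.max(1, keepdims=True)     # stabilize the softmax
    p = np.exp(logp)
    return p / p.sum(1, keepdims=True)     # rows sum to 1

rng = np.random.default_rng(2)
# Six samples clustered near the first class centre:
Z = 0.5 * rng.standard_normal((6, 2)) + np.array([3.0, 0.0])
means = np.array([[3.0, 0.0], [-3.0, 0.0]])
R = class_responsibilities(Z, means)
assert R.shape == (6, 2)
assert np.allclose(R.sum(1), 1.0)
assert (R[:, 0] > 0.5).all()  # all samples assigned to the nearby class
```

A nonparametric alternative would instead form an N x N affinity matrix over all sample pairs, which is where the O(N^2) cost arises.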
Kernel Dimensionality Reduction on Sleep Stage Classification using ECG Signal
Abstract
The purpose of this study is to apply Kernel Dimensionality Reduction (KDR) to classify sleep stages from the electrocardiogram (ECG) signal. KDR is a supervised dimensionality reduction method that retains the statistical relationship between the input variables and the target class. KDR was chosen to reduce the dimensionality of the features extracted from the ECG signal because the method needs no special assumptions about the conditional distribution, the marginal distribution, or both. In this study we extract 9 time- and frequency-domain heart rate variability (HRV) features from the ECG signals of the Polysomnographic Database from PhysioNet. To evaluate KDR performance, we perform sleep stage classification using the kNN, Random Forest, and SVM methods, and then compare the classification performance before and after dimensionality reduction with KDR. Experimental results suggest that applying KDR to sleep stage classification with SVM can reduce the dimensionality of the feature vector to 2 without affecting classification performance. With Random Forest and k-Nearest Neighbour classification, KDR shows only a slight advantage over classification without it.
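The evaluation protocol described in the abstract (classify on the full 9-feature vector, classify again after reducing to 2 dimensions, compare accuracies) can be sketched as below. PCA stands in for KDR purely to keep the sketch self-contained and the data is synthetic; the study's actual reduction step is KDR on HRV features.

```python
import numpy as np

def knn_accuracy(Xtr, ytr, Xte, yte, k=3):
    """Brute-force kNN classification accuracy."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    pred = np.array([np.bincount(ytr[row]).argmax() for row in nn])
    return (pred == yte).mean()

# Synthetic 9-feature data: only the first feature carries the class.
rng = np.random.default_rng(3)
n = 200
y = rng.integers(0, 2, 2 * n)
X = 0.3 * rng.standard_normal((2 * n, 9))
X[:, 0] += y
Xtr, ytr, Xte, yte = X[:n], y[:n], X[n:], y[n:]

acc_full = knn_accuracy(Xtr, ytr, Xte, yte)

# Reduce to 2 dimensions (PCA as a stand-in for KDR).
mu = Xtr.mean(0)
_, _, Vt = np.linalg.svd(Xtr - mu, full_matrices=False)
P = Vt[:2].T
acc_2d = knn_accuracy((Xtr - mu) @ P, ytr, (Xte - mu) @ P, yte)

# With the signal concentrated in one direction, 2 dimensions
# should retain most of the classification accuracy.
assert acc_2d > 0.8
```

The point of the comparison, as in the study, is that `acc_2d` staying close to `acc_full` indicates the reduction preserved the class-relevant structure.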
Two models for Bayesian supervised dimension reduction
Abstract
We study and develop two Bayesian frameworks for supervised dimension reduction that apply to nonlinear manifolds: Bayesian mixtures of inverse regressions and gradient-based methods. Formal probabilistic models with likelihoods and priors are given for both methods, and efficient posterior estimates of the effective dimension reduction space and predictive factors can be obtained by a Gibbs sampling procedure. In the case of the gradient-based methods, estimates of the conditional dependence between covariates predictive of the response can also be inferred. Relations to manifold learning and Bayesian factor models are made explicit. The utility of the approach is illustrated on simulated and real examples.
Large-margin Weakly Supervised Dimensionality Reduction
Abstract
This paper studies dimensionality reduction in a weakly supervised setting, in which the preference relationship between examples is indicated by weak cues. A novel framework is proposed that integrates two aspects of the large margin principle (angle and distance), simultaneously encouraging angle consistency between preference pairs and maximizing the distance between examples in preference pairs. Two specific algorithms are developed: an alternating direction method to learn a linear transformation matrix, and a gradient boosting technique to optimize a nonlinear transformation directly in function space. Theoretical analysis demonstrates that the proposed large margin optimization criteria improve the robustness and generalization performance of preference learning algorithms on the obtained low-dimensional subspace. Experimental results on real-world datasets demonstrate the significance of studying dimensionality reduction in the weakly supervised setting and the effectiveness of the proposed framework.