Results 11–20 of 25
Estimating labels from label proportions
 Proceedings of the 25th Annual International Conference on Machine Learning
, 2008
Abstract

Cited by 15 (2 self)
Consider the following problem: given sets of unlabeled observations, each set with known label proportions, predict the labels of another set of observations, also with known label proportions. This problem appears in areas like e-commerce, spam filtering and improper content detection. We present consistent estimators which can reconstruct the correct labels with high probability in a uniform convergence sense. Experiments show that our method works well in practice.
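As an illustrative sketch only (not the paper's estimator), one simple way to learn from label proportions is to note that each bag's mean is a proportion-weighted combination of the unknown class means; solving that linear system recovers class means, after which points can be assigned to the nearest one. All names below are hypothetical.

```python
import numpy as np

def estimate_class_means(bags, proportions):
    """Recover per-class means from bag means and known label proportions.

    bags: list of (n_i, d) arrays of unlabeled observations.
    proportions: (n_bags, n_classes) matrix of known label proportions.
    Solves bag_mean_i = sum_c proportions[i, c] * class_mean_c by least squares.
    """
    bag_means = np.stack([b.mean(axis=0) for b in bags])     # (n_bags, d)
    class_means, *_ = np.linalg.lstsq(proportions, bag_means, rcond=None)
    return class_means                                       # (n_classes, d)

def predict(X, class_means):
    """Assign each observation to the nearest estimated class mean."""
    d2 = ((X[:, None, :] - class_means[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)
```

This only works when the proportion matrix is well-conditioned (bags must have sufficiently different label mixes); the paper's estimators come with uniform-convergence guarantees that this toy version does not.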
Statistical Learning and Kernel Methods in Bioinformatics
 in Artificial Intelligence and Heuristic Methods in Bioinformatics 183, (Eds.) P. Frasconi and R. Shamir, IOS Press
, 2000
Abstract

Cited by 12 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector machines, and kernel feature spaces. In addition, we present an overview of applications of kernel methods in bioinformatics.
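As a minimal sketch of the ideas this tutorial covers (assuming scikit-learn, which the paper does not use), a support vector machine with an RBF kernel implicitly maps inputs into a kernel feature space and finds a maximum-margin separator there:

```python
import numpy as np
from sklearn.svm import SVC

# Two Gaussian classes in the plane; the RBF kernel handles the
# nonlinear feature mapping implicitly via the kernel trick.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
               rng.normal(2.0, 0.3, (30, 2))])
y = np.array([0] * 30 + [1] * 30)

clf = SVC(kernel="rbf", gamma=1.0, C=1.0).fit(X, y)
```

Swapping `kernel="rbf"` for a string kernel or a sequence kernel is what makes the same machinery applicable to biological sequence data.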
Constructing Facial Identity Surfaces for Recognition
, 2003
Abstract

Cited by 10 (1 self)
We present a novel approach to face recognition by constructing facial identity structures across views and over time, referred to as identity surfaces, in a Kernel Discriminant Analysis (KDA) feature space. This approach is aimed at addressing three challenging problems in face recognition: modelling faces across multiple views, extracting nonlinear discriminatory features, and recognising faces over time. First, a multi-view face model is designed which can be automatically fitted to face images and sequences to extract the normalised facial texture patterns. This model is capable of dealing with faces with large pose variation. Second, KDA is developed to compute the most significant nonlinear basis vectors with the intention of maximising the between-class variance and minimising the within-class variance. We applied KDA to the problem of multi-view face recognition and achieved a significant improvement in reliability and accuracy. Third, identity surfaces are constructed in a pose-parameterised discriminatory feature space. Dynamic face recognition is then performed by matching the object trajectory computed from a video input against model trajectories constructed on the identity surfaces. These two types of trajectories encode the spatio-temporal dynamics of moving faces.
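For the two-class case, the kernel discriminant idea the abstract describes can be sketched in NumPy: maximise between-class relative to within-class variance of projections in the kernel feature space. This is a generic kernel Fisher discriminant, not the paper's multi-view formulation, and all names are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fisher_direction(X, y, gamma=0.5, reg=1e-3):
    """Two-class kernel Fisher discriminant: coefficients alpha such that
    the projection sum_j alpha_j k(x_j, .) separates the class means
    relative to the within-class scatter in feature space."""
    K = rbf_kernel(X, X, gamma)
    idx0, idx1 = np.where(y == 0)[0], np.where(y == 1)[0]
    m0 = K[:, idx0].mean(axis=1)            # kernelised class means
    m1 = K[:, idx1].mean(axis=1)
    N = np.zeros_like(K)                    # within-class scatter
    for idx in (idx0, idx1):
        Kc = K[:, idx]
        n = len(idx)
        N += Kc @ (np.eye(n) - np.full((n, n), 1.0 / n)) @ Kc.T
    # Regularised solve; the discriminant direction is N^{-1}(m1 - m0).
    return np.linalg.solve(N + reg * np.eye(len(X)), m1 - m0)

def project(X_train, alpha, X_new, gamma=0.5):
    """Project new points onto the learned discriminant direction."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```

Multi-class KDA generalises this to a generalised eigenvalue problem with one direction per discriminant dimension.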
A short introduction to learning with kernels
 in Advanced Lectures on Machine Learning, S. Mendelson
, 2002
Abstract

Cited by 10 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector …
Support Vector Machines for Phoneme Classification
, 2001
Abstract

Cited by 8 (0 self)
In this thesis, Support Vector Machines (SVMs) are applied to the problem of phoneme classification. Given a sequence of acoustic observations and 40 phoneme targets, the task is to classify each observation as one of these targets. Since this task involves multiple classes, one of the main hurdles SVMs must overcome is extending the inherently binary SVM to the multi-class case. To do this, several methods are proposed, and their generalisation abilities are measured. It is found that even though some generalisation is lost in the transition, this can still lead to effective classifiers. In addition, a refinement is made to derive estimated posterior probabilities from the SVM classifications. Since almost all speech recognition systems are based on statistical models, this is necessary if SVMs are to be used in a full speech recognition system. The best accuracy found was 71.4%, which is competitive with the best results found in the literature.
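One standard reduction of binary SVMs to the multi-class case is one-vs-rest: train one binary SVM per class and predict with the largest margin. A minimal sketch (assuming scikit-learn; the thesis evaluates several such schemes, and this class name is hypothetical):

```python
import numpy as np
from sklearn.svm import LinearSVC

class OneVsRestSVM:
    """One-vs-rest reduction: one binary SVM per class, each trained to
    separate that class from all others; predict via the largest margin."""

    def __init__(self, C=1.0):
        self.C = C

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        # One binary problem per class: "is it class c?"
        self.models_ = [LinearSVC(C=self.C).fit(X, (y == c).astype(int))
                        for c in self.classes_]
        return self

    def predict(self, X):
        # Pick the class whose binary SVM is most confident.
        scores = np.column_stack([m.decision_function(X) for m in self.models_])
        return self.classes_[scores.argmax(axis=1)]
```

The raw decision values are margins, not probabilities; calibrating them into posterior probabilities (e.g. by fitting a sigmoid) is exactly the refinement the abstract mentions as necessary for integration with statistical speech recognisers.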
SVM and Boosting: One Class
Abstract

Cited by 6 (1 self)
We show via an equivalence of mathematical programs that a Support Vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm, one-class Leveraging, starting from the one-class Support Vector Machine (1-SVM). This is a first step towards unsupervised learning in a Boosting framework.
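For context on the starting point of that translation, the 1-SVM fits a boundary around a single class of data and flags points outside it; a minimal sketch (assuming scikit-learn, with hypothetical parameter choices):

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Fit a one-class SVM on "normal" data only; points far from the
# training distribution are flagged as outliers (-1), inliers as +1.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.5, size=(200, 2))          # single-class training data
ocsvm = OneClassSVM(kernel="rbf", gamma=0.5, nu=0.05).fit(normal)

inlier_pred = ocsvm.predict(np.array([[0.0, 0.0]]))   # near the bulk of the data
outlier_pred = ocsvm.predict(np.array([[5.0, 5.0]]))  # far from it
```

The `nu` parameter upper-bounds the fraction of training points treated as outliers; the paper's contribution is recasting this kind of program as a boosting-style algorithm over weak hypotheses.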