Results 11 – 20 of 24
A Tutorial on ν-support vector machines
 APPLIED STOCHASTIC MODELS IN BUSINESS AND INDUSTRY
, 2005
"... We briefly describe the main ideas of statistical learning theory, support vector machines (SVMs), and kernel feature spaces. We place particular emphasis on a description of the so-called ν-SVM, including details of the algorithm and its implementation, theoretical results, and practical applicatio ..."
Abstract

Cited by 16 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector machines (SVMs), and kernel feature spaces. We place particular emphasis on a description of the so-called ν-SVM, including details of the algorithm and its implementation, theoretical results, and practical applications.
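As an illustrative aside (not from the tutorial itself, and assuming scikit-learn is available): the ν in a ν-SVM replaces the usual C penalty and, by construction, upper-bounds the fraction of margin errors and lower-bounds the fraction of support vectors. A minimal sketch with `NuSVC`:

```python
# Minimal sketch of a nu-SVM (assumes scikit-learn; data is synthetic).
from sklearn.datasets import make_classification
from sklearn.svm import NuSVC

X, y = make_classification(n_samples=200, n_features=4, random_state=0)

# nu in (0, 1] replaces C: it bounds the fraction of margin errors from
# above and the fraction of support vectors from below.
clf = NuSVC(nu=0.1, kernel="rbf").fit(X, y)

frac_sv = len(clf.support_) / len(X)
print(frac_sv)  # should be at least nu = 0.1 on the training set
```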
Statistical Learning and Kernel Methods in Bioinformatics
 in Artificial Intelligence and Heuristic Methods in Bioinformatics 183, (Eds.) P. Frasconi and R. Shamir, IOS
, 2000
"... We briefly describe the main ideas of statistical learning theory, support vector machines, and kernel feature spaces. In addition, we present an overview of applications of kernel methods in bioinformatics. ..."
Abstract

Cited by 12 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector machines, and kernel feature spaces. In addition, we present an overview of applications of kernel methods in bioinformatics.
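As a hedged illustration of the kind of kernel such bioinformatics applications use (not taken from the paper): a k-mer "spectrum kernel" compares two biological sequences by the inner product of their substring-count vectors, and can be dropped into any kernel method.

```python
# Hypothetical sketch of a k-mer spectrum kernel for sequences.
from collections import Counter

def spectrum_kernel(s: str, t: str, k: int = 3) -> int:
    """Inner product of the k-mer count vectors of s and t."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs if m in ct)

# Shared 3-mers GAT, ATT, TTA, TAC, ACA each contribute 1 * 1:
print(spectrum_kernel("GATTACA", "GATTTACA"))  # → 5
```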
Support Vector Machines for Phoneme Classification
, 2001
"... In this thesis, Support Vector Machines (SVMs) are applied to the problem of phoneme classification. Given a sequence of acoustic observations and 40 phoneme targets, the task is to classify each observation to one of these targets. Since this task involves multiple classes, one of the main hurdles ..."
Abstract

Cited by 11 (0 self)
In this thesis, Support Vector Machines (SVMs) are applied to the problem of phoneme classification. Given a sequence of acoustic observations and 40 phoneme targets, the task is to classify each observation as one of these targets. Since this task involves multiple classes, one of the main hurdles is extending the inherently binary SVM to the multi-class case. Several methods for doing so are proposed, and their generalisation abilities are measured. It is found that even though some generalisation is lost in the transition, the result can still be an effective classifier. In addition, the SVMs are refined to derive estimated posterior probabilities from their classifications. Since almost all speech recognition systems are based on statistical models, this is necessary if SVMs are to be used in a full speech recognition system. The best accuracy found was 71.4%, which is competitive with the best results in the literature.
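The two ideas in this abstract — reducing binary SVMs to a multi-class problem and obtaining posterior probabilities — can be sketched as follows (assuming scikit-learn; the thesis' own multi-class schemes may differ, and iris' 3 classes stand in for the 40 phonemes):

```python
# Sketch: one-vs-rest multi-class SVMs with Platt-style probability
# calibration (assumes scikit-learn; not the thesis' exact method).
from sklearn.datasets import load_iris
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # 3 classes stand in for 40 phonemes

# One binary SVM per class; probability=True fits a sigmoid to the
# SVM scores so each classifier outputs an estimated posterior.
clf = OneVsRestClassifier(SVC(kernel="rbf", probability=True)).fit(X, y)

probs = clf.predict_proba(X[:3])  # per-class posteriors, rows sum to 1
print(probs.shape)
```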
A short introduction to learning with kernels
 in Advanced Lectures on Machine Learning, S. Mendelson
, 2002
"... We briefly describe the main ideas of statistical learning theory, support vector ..."
Abstract

Cited by 10 (0 self)
We briefly describe the main ideas of statistical learning theory, support vector machines, and kernel feature spaces.
Constructing Facial Identity Surfaces for Recognition
, 2003
"... We present a novel approach to face recognition by constructing facial identity structures across views and over time, referred to as identity surfaces, in a Kernel Discriminant Analysis (KDA) feature space. This approach is aimed at addressing three challenging problems in face recognition: modelli ..."
Abstract

Cited by 10 (1 self)
We present a novel approach to face recognition that constructs facial identity structures across views and over time, referred to as identity surfaces, in a Kernel Discriminant Analysis (KDA) feature space. This approach addresses three challenging problems in face recognition: modelling faces across multiple views, extracting nonlinear discriminatory features, and recognising faces over time. First, a multi-view face model is designed which can be automatically fitted to face images and sequences to extract normalised facial texture patterns. This model is capable of dealing with faces with large pose variation. Second, KDA is developed to compute the most significant nonlinear basis vectors, maximising the between-class variance while minimising the within-class variance. We applied KDA to the problem of multi-view face recognition and achieved a significant improvement in reliability and accuracy. Third, identity surfaces are constructed in a pose-parameterised discriminatory feature space. Dynamic face recognition is then performed by matching the object trajectory computed from a video input against model trajectories constructed on the identity surfaces. These two types of trajectories encode the spatio-temporal dynamics of moving faces.
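The binary core of the KDA idea above is the kernel Fisher discriminant: in the dual, one maximises between-class over within-class scatter in the kernel-induced feature space. A minimal numpy sketch under our own assumptions (two classes, labels 0/1; the paper's multi-class, pose-parameterised formulation is more involved):

```python
# Sketch of a two-class kernel Fisher discriminant (names and data ours).
import numpy as np

def kernel_fisher(K, y, reg=1e-3):
    """Dual coefficients alpha for a two-class kernel Fisher discriminant.

    K : (n, n) kernel matrix, y : labels in {0, 1}.
    """
    n = len(y)
    idx = [np.where(y == c)[0] for c in (0, 1)]
    m0 = K[:, idx[0]].mean(axis=1)   # dual class means
    m1 = K[:, idx[1]].mean(axis=1)
    # Within-class scatter in the dual: N = sum_c K_c (I - 1/n_c) K_c^T
    N = np.zeros((n, n))
    for ic in idx:
        Kc, nc = K[:, ic], len(ic)
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    # N is singular in general, so regularise before solving for alpha.
    return np.linalg.solve(N + reg * np.eye(n), m1 - m0)

# Two Gaussian blobs with an RBF kernel; projections K @ alpha separate them.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
K = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
proj = K @ kernel_fisher(K, y)
```

New points project via their kernel values against the training set, i.e. `k(x, X) @ alpha`.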
SVM and Boosting: One Class
"... We show via an equivalence of mathematical programs that a Support Vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm, one-class leveraging, starting from the one-class Support Vector Machine ..."
Abstract

Cited by 7 (1 self)
We show via an equivalence of mathematical programs that a Support Vector (SV) algorithm can be translated into an equivalent boosting-like algorithm and vice versa. We exemplify this translation procedure for a new algorithm, one-class leveraging, starting from the one-class Support Vector Machine (1-SVM). This is a first step towards unsupervised learning in a boosting framework.
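For reference, the 1-SVM that the translation starts from estimates the support of the data distribution, with ν bounding the fraction of training points treated as outliers. A sketch assuming scikit-learn's `OneClassSVM` (our example, not the paper's):

```python
# Sketch of a one-class SVM for support estimation (assumes scikit-learn).
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 2))  # unlabelled "normal" data

# nu upper-bounds the fraction of training points outside the support.
ocsvm = OneClassSVM(kernel="rbf", nu=0.1).fit(X)

# predict returns +1 inside the estimated support, -1 outside;
# expect +1 for the centre point and -1 for the distant one.
labels = ocsvm.predict(np.array([[0.0, 0.0], [8.0, 8.0]]))
print(labels)
```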
Introduction to Kernel Methods
 in Raffaele Cerulli, Nello Cristianini, and John Shawe-Taylor, editors, The Analysis of Patterns