Results 1–10 of 47
The 2005 PASCAL Visual Object Classes Challenge, 2006
Cited by 374 (18 self)
Abstract. The PASCAL Visual Object Classes Challenge ran from February to March 2005. The goal of the challenge was to recognize objects from a number of visual object classes in realistic scenes (i.e. not presegmented objects). Four object classes were selected: motorbikes, bicycles, cars and people. Twelve teams entered the challenge. In this chapter we provide details of the datasets, algorithms used by the teams, evaluation criteria, and results achieved.
Penalized Discriminant Analysis
Annals of Statistics, 1995
Cited by 131 (9 self)
Fisher's linear discriminant analysis (LDA) is a popular data-analytic tool for studying the relationship between a set of predictors and a categorical response. In this paper we describe a penalized version of LDA. It is designed for situations in which there are many highly correlated predictors, such as those obtained by discretizing a function, or the greyscale values of the pixels in a series of images. In cases such as these it is natural, efficient, and sometimes essential to impose a spatial smoothness constraint on the coefficients, both for improved prediction performance and interpretability. We cast the classification problem into a regression framework via optimal scoring. Using this, our proposal facilitates the use of any penalized regression technique in the classification setting. The technique is illustrated with examples in speech recognition and handwritten character recognition. AMS 1991 Classifications: Primary 62H30, Secondary 62G07.
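The optimal-scoring route described in this abstract can be sketched in a few lines of NumPy. This is a hedged illustration, not the authors' implementation: an identity matrix stands in for the spatial smoothness penalty Omega, and all function names are invented for this sketch.

```python
import numpy as np

def penalized_lda_fit(X, y, lam=1.0):
    """Ridge-penalized optimal scoring: regress class indicators on X,
    then classify by nearest centroid in the fitted score space."""
    classes = np.unique(y)
    Y = (y[:, None] == classes[None, :]).astype(float)  # indicator matrix
    d = X.shape[1]
    # Identity penalty stands in for a spatial smoothness penalty Omega
    B = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    centroids = np.array([(X[y == c] @ B).mean(axis=0) for c in classes])
    return B, centroids, classes

def penalized_lda_predict(X, B, centroids, classes):
    S = X @ B
    d2 = ((S[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```

Swapping the identity for a smoothing penalty (e.g. a discrete second-derivative operator over pixel coordinates) recovers the spatially constrained variant the abstract describes.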
Overview and recent advances in partial least squares
in ‘Subspace, Latent Structure and Feature Selection Techniques’, Lecture Notes in Computer Science, 2006
Cited by 54 (4 self)
Partial Least Squares (PLS) is a wide class of methods for modeling relations between sets of observed variables by means of latent variables. It comprises regression and classification tasks as well as dimension reduction techniques and modeling tools. The underlying assumption of all PLS methods is that the ...
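As a minimal illustration of the latent-variable idea, here is a sketch of single-response PLS regression via the classical NIPALS iteration. This is one of many PLS variants the review covers, and the function names are invented for this sketch.

```python
import numpy as np

def pls1_nipals(X, y, n_components=2):
    """PLS1 regression via NIPALS: each component is a direction of
    maximal covariance between X and y; X is deflated after each step."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk = X - x_mean
    yc = y - y_mean
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yc
        w = w / np.linalg.norm(w)      # weight vector
        t = Xk @ w                     # latent scores
        p = Xk.T @ t / (t @ t)         # X loadings
        Q.append((yc @ t) / (t @ t))   # y loading
        Xk = Xk - np.outer(t, p)       # deflate X
        W.append(w)
        P.append(p)
    W, P = np.array(W).T, np.array(P).T
    # Regression coefficients mapped back to the original X space
    beta = W @ np.linalg.solve(P.T @ W, np.array(Q))
    return x_mean, y_mean, beta

def pls1_predict(X, x_mean, y_mean, beta):
    return (X - x_mean) @ beta + y_mean
```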
Kernel PLS-SVC for Linear and Nonlinear Classification
Proceedings of the Twentieth International Conference on Machine Learning, 2003
Cited by 27 (4 self)
A new method for classification is proposed. This is based on kernel orthonormalized partial least squares (PLS) dimensionality reduction of the original data space followed by a support vector classifier. Unlike principal component analysis (PCA), which has previously served as a dimension reduction step for discrimination problems, orthonormalized PLS is closely related to Fisher’s approach to linear discrimination or, equivalently, to canonical correlation analysis. For this reason orthonormalized PLS is preferable to PCA for discrimination. Good behavior of the proposed method is demonstrated on 13 different benchmark data sets and on the real-world problem of classifying finger-movement periods from non-movement periods based on electroencephalograms.
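A hedged linear stand-in for the two-stage design described above: the paper itself uses kernel orthonormalized PLS followed by a support vector classifier, whereas in this sketch the PLS-style directions come from an SVD of the input/class-indicator covariance and a nearest-centroid rule replaces the SVM. All names are invented for the sketch.

```python
import numpy as np

def pls_discrim_fit(X, y, n_components=2):
    """Dimension reduction onto directions of maximal covariance with
    the class indicators, then a nearest-centroid classifier."""
    classes = np.unique(y)
    Y = (y[:, None] == classes).astype(float)
    xm = X.mean(axis=0)
    Xc, Yc = X - xm, Y - Y.mean(axis=0)
    # Left singular vectors of X'Y: covariance-maximizing directions
    U, _, _ = np.linalg.svd(Xc.T @ Yc, full_matrices=False)
    W = U[:, :n_components]
    S = Xc @ W
    centroids = np.array([S[y == c].mean(axis=0) for c in classes])
    return xm, W, centroids, classes

def pls_discrim_predict(X, xm, W, centroids, classes):
    S = (X - xm) @ W
    d2 = ((S[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return classes[d2.argmin(axis=1)]
```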
Partial least squares: A versatile tool for the analysis of high-dimensional genomic data
Briefings in Bioinformatics, 2007
Cited by 26 (7 self)
Partial Least Squares (PLS) is a highly efficient statistical regression technique that is well suited for the analysis of high-dimensional genomic data. In this paper we review the theory and applications of PLS from both methodological and biological points of view. Focusing on microarray expression data, we provide a systematic comparison of the PLS approaches currently employed, and discuss problems as different as tumor classification, identification of relevant genes, survival analysis and modeling of gene networks.
Improving “bag-of-keypoints” image categorisation, 2005
Cited by 23 (0 self)
In this paper we propose two distinct enhancements to the basic “bag-of-keypoints” image categorisation scheme proposed in [4]. In this approach images are represented as a variable-sized set of local image features (keypoints). Thus, we require machine learning tools which can operate on sets of vectors. In [4] this is achieved by representing the set as a histogram over bins found by k-means. We show how this approach can be improved and generalised using Gaussian Mixture Models (GMMs). Alternatively, the set of keypoints can be represented directly as a probability density function, over which a kernel can be defined. This approach is shown to give state-of-the-art categorisation performance.
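The baseline the paper improves on, hard-assigning keypoint descriptors to k-means visual words and histogramming the counts, can be sketched as follows. The paper's enhancement replaces the hard assignment with GMM posterior probabilities; names here are illustrative.

```python
import numpy as np

def kmeans(D, k, iters=25, seed=0):
    """Lloyd's k-means over a pooled set of local descriptors D,
    producing the visual-word vocabulary."""
    rng = np.random.default_rng(seed)
    C = D[rng.choice(len(D), size=k, replace=False)].copy()
    for _ in range(iters):
        assign = ((D[:, None] - C[None]) ** 2).sum(axis=-1).argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):
                C[j] = D[assign == j].mean(axis=0)
    return C

def bok_histogram(descriptors, C):
    """Hard-assign each keypoint descriptor to its nearest visual word
    and return normalized bin counts (the bag-of-keypoints vector)."""
    assign = ((descriptors[:, None] - C[None]) ** 2).sum(axis=-1).argmin(axis=1)
    h = np.bincount(assign, minlength=len(C)).astype(float)
    return h / h.sum()
```

The GMM variant would replace the `argmin` hard assignment with per-component posterior probabilities, accumulating soft counts instead.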
Learning via Linear Operators: Maximum Margin Regression
In Proceedings of 2001 IEEE International Conference on Data Mining, 2005
Cited by 16 (6 self)
We introduce a maximum margin framework realizing regression-type learning in an arbitrary Hilbert space, while the corresponding dual problem preserves the structure, and therefore the complexity, of the binary Support Vector Machine (SVM). We demonstrate via several examples that this learning framework is broadly applicable to seemingly different problems. One example is the multiclass classification problem, which can in this way be implemented with the complexity of a binary SVM. The reduction in complexity does not diminish performance; in some cases this approach can even improve classification accuracy. Multiclass classification is realized with vector-valued output labels. Other examples address multi-view learning problems.
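As a rough, hedged analogue of the vector-valued-output idea (not the maximum-margin dual the paper derives): regressing onto one-hot label vectors handles all classes with a single linear solve, and decoding is by argmax. Function names are invented for this sketch.

```python
import numpy as np

def vector_output_fit(X, y, lam=1e-2):
    """Ridge least-squares stand-in for vector-valued-output learning:
    regress onto one-hot labels, one solve for all classes at once."""
    classes = np.unique(y)
    Y = (y[:, None] == classes).astype(float)
    Xb = np.hstack([X, np.ones((len(X), 1))])  # bias column
    d = Xb.shape[1]
    W = np.linalg.solve(Xb.T @ Xb + lam * np.eye(d), Xb.T @ Y)
    return W, classes

def vector_output_predict(X, W, classes):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return classes[(Xb @ W).argmax(axis=1)]
```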
Efficiently learn the metric with side information
Lecture Notes in Artificial Intelligence, 2842:175–189, 2003
Cited by 9 (3 self)
Abstract. A crucial problem in machine learning is to choose an appropriate representation of data, in a way that emphasizes the relations we are interested in. In many cases this amounts to finding a suitable metric in the data space. In the supervised case, Linear Discriminant Analysis (LDA) can be used to find an appropriate subspace in which the data structure is apparent. Other ways to learn a suitable metric are found in [6] and [11]. Recently, however, significant attention has been devoted to the problem of learning a metric in the semi-supervised case. In particular, the work by Xing et al. [15] has demonstrated how semidefinite programming (SDP) can be used to directly learn a distance measure that satisfies constraints in the form of side-information. They obtain a significant increase in clustering performance with the new representation. The approach is very interesting; however, the computational complexity of the method severely limits its applicability to real machine learning tasks. In this paper we present an alternative solution for dealing with the problem of incorporating side-information. This side-information specifies pairs of examples belonging to the same class. The approach is based on LDA, and is solved via an efficient eigenproblem. The performance reached is very similar, but the complexity is only O(d^3) instead of O(d^6), where d is the dimensionality of the data. We also show how our method can be extended to deal with more general types of side-information.
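The eigenproblem-based use of same-class pairs can be sketched as follows. This is an illustrative reading of the approach, not the authors' code: within-pair scatter is estimated from difference vectors of the given pairs, and directions are ranked by their total-to-within scatter ratio via a single d x d eigendecomposition, hence the O(d^3) cost.

```python
import numpy as np

def metric_from_pairs(X, pairs, n_dims=1, reg=1e-6):
    """LDA-style use of side-information: down-weight directions with
    large within-pair scatter, keep directions with large total scatter."""
    D = np.array([X[i] - X[j] for i, j in pairs])
    Sw = D.T @ D / len(D)                # within-pair scatter
    Xc = X - X.mean(axis=0)
    St = Xc.T @ Xc / len(X)              # total scatter
    # Generalized eigenproblem St v = lam Sw v, solved as eig(Sw^-1 St)
    M = np.linalg.solve(Sw + reg * np.eye(X.shape[1]), St)
    evals, evecs = np.linalg.eig(M)
    order = np.argsort(evals.real)[::-1]
    return evecs[:, order[:n_dims]].real
```

Projecting onto the returned directions (or weighting by the eigenvalues) yields the learned metric.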
Gabor-Based Kernel Partial-Least-Squares Discrimination Features for Face Recognition, 2007
Cited by 5 (0 self)
Abstract. The paper presents a novel method for the extraction of facial features based on the Gabor-wavelet representation of face images and the kernel partial-least-squares discrimination (KPLSD) algorithm. The proposed feature-extraction method, called the Gabor-based kernel partial-least-squares discrimination (GKPLSD), is performed in two consecutive steps. In the first step a set of forty Gabor wavelets is used to extract discriminative and robust facial features, while in the second step the kernel partial-least-squares discrimination technique is used to reduce the dimensionality of the Gabor feature vector and to further enhance its discriminatory power. For optimal performance, the KPLSD-based transformation is implemented using the recently proposed fractional-power-polynomial models. The experimental results based on the XM2VTS and ORL databases show that the GKPLSD approach outperforms feature-extraction methods such as principal component analysis (PCA), linear discriminant analysis (LDA), kernel principal component analysis (KPCA) or generalized discriminant analysis (GDA) as well as combinations of these methods with Gabor representations of the face images. Furthermore, as the KPLSD algorithm is derived from the kernel partial-least-squares regression (KPLSR) model it does not suffer from the small-sample-size problem, which is regularly encountered in the field of face recognition.
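A sketch of the first step: constructing a bank of forty Gabor wavelets (real part only). The 5-scale by 8-orientation layout matches the forty filters the abstract mentions, but the wavelength and envelope schedule here is an assumption, not the paper's parameterization.

```python
import numpy as np

def gabor_kernel(size=15, sigma=3.0, theta=0.0, lam=6.0):
    """Real part of a 2-D Gabor wavelet: a Gaussian envelope times a
    cosine carrier of wavelength lam oriented at angle theta."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    x_theta = xx * np.cos(theta) + yy * np.sin(theta)
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return envelope * np.cos(2 * np.pi * x_theta / lam)

# Forty filters: 5 scales x 8 orientations; the scale schedule below
# is illustrative only
bank = [gabor_kernel(lam=4.0 * 2 ** (s / 2), theta=o * np.pi / 8)
        for s in range(5) for o in range(8)]
```

Convolving a face image with each filter and concatenating the responses yields the Gabor feature vector that KPLSD then compresses.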
Eigenproblems in Pattern Recognition
Handbook of Geometric Computing: Applications in Pattern Recognition, Computer Vision, Neuralcomputing, and Robotics, 2005
Cited by 5 (4 self)
The task of studying the properties of configurations of points embedded in a metric space has long been a central task in pattern recognition, but has acquired even greater importance after the recent introduction of kernel-based learning methods. These methods work by virtually embedding general ...