Results 1 – 10 of 1,251
Support-Vector Networks
Machine Learning, 1995
Cited by 3621 (35 self)
The support-vector network is a new learning machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are nonlinearly mapped to a very high-dimension feature space. In this feature space a linear decision surface is constructed. Special ...
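As an illustration of the idea in this abstract, the decision function of such a machine can be written as a kernel expansion over support vectors: the kernel stands in for the nonlinear map, and the decision surface is linear in the induced feature space. This is a minimal sketch, not code from the paper; all names and values are illustrative:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    # The kernel implicitly performs the nonlinear map into a
    # high-dimension feature space.
    return np.exp(-gamma * np.sum((x - y) ** 2))

def svm_decision(x, support_vectors, alphas, labels, bias=0.0, gamma=1.0):
    # Linear decision surface in the feature space:
    #   f(x) = sign( sum_i alpha_i * y_i * K(sv_i, x) + b )
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for sv, a, y in zip(support_vectors, alphas, labels))
    return np.sign(s + bias)
```

In practice the coefficients `alphas` and `bias` come from solving the margin-maximization problem; here they would be supplied directly.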
Support-vector machines for histogram-based image classification
IEEE Transactions on Neural Networks, 1999
Cited by 221 (1 self)
Traditional classification approaches generalize poorly on image classification tasks, because of the high dimensionality of the feature space. This paper shows that support vector machines (SVMs) can generalize well on difficult image classification problems where the only features are high-dimensional histograms. Heavy-tailed RBF kernels of the form K(x, y) = exp(−ρ Σ_i |x_i^a − y_i^a|^b) with a ≤ 1 and b ≤ 2 are evaluated on the classification of images extracted from the Corel stock photo collection and shown to far outperform traditional polynomial or Gaussian radial basis function (RBF) kernels. Moreover, we observed that a simple remapping of the input x_i → x_i^a ...
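The kernel family named in this abstract can be sketched directly; parameter names `a`, `b`, `rho` follow the abstract's notation, and the default values here are illustrative, not the paper's tuned settings:

```python
import numpy as np

def heavy_tailed_rbf(x, y, a=0.5, b=1.0, rho=1.0):
    # K(x, y) = exp(-rho * sum_i |x_i^a - y_i^a|^b), with a <= 1, b <= 2.
    # a = 1, b = 2 recovers the standard Gaussian RBF; a < 1 or b < 2
    # makes the kernel decay more slowly ("heavier tails"), which suits
    # nonnegative histogram features.
    return float(np.exp(-rho * np.sum(np.abs(x ** a - y ** a) ** b)))
```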
Reducing Multiclass to Binary: A Unifying Approach for Margin Classifiers
Journal of Machine Learning Research, 2000
We present a unifying framework for studying the solution of multiclass categorization problems by reducing them to multiple binary problems that are then solved using a margin-based binary learning algorithm. The proposed framework unifies some of the most popular approaches in which each class ...
Cited by 560 (20 self)
... given the empirical loss of the individual binary learning algorithms. The scheme and the corresponding bounds apply to many popular classification learning algorithms including support-vector machines, AdaBoost, regression, logistic regression and decision-tree algorithms. We also give a multiclass ...
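A minimal instance of such a reduction is the one-vs-all scheme: any margin-based binary learner can be plugged in via `train_binary`. This is a sketch of the general idea, not the paper's unified framework; names are illustrative:

```python
import numpy as np

def one_vs_all(X, y, train_binary):
    # Reduce a k-class problem to k binary problems (class c vs. rest).
    # `train_binary(X, labels)` is any margin-based binary learner that
    # returns a real-valued scoring function f(x).
    classes = sorted(set(y))
    scorers = {c: train_binary(X, [1 if t == c else -1 for t in y])
               for c in classes}
    def predict(x):
        # Predict the class whose binary scorer is most confident.
        return max(classes, key=lambda c: scorers[c](x))
    return predict
```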
BoosTexter: A Boosting-based System for Text Categorization
Cited by 658 (20 self)
This work focuses on algorithms which learn from examples to perform multiclass text and speech categorization tasks. Our approach is based on a new and improved family of boosting algorithms. We describe in detail an implementation, called BoosTexter, of the new boosting algorithms for text categorization tasks. We present results comparing the performance of BoosTexter and a number of other text-categorization algorithms on a variety of tasks. We conclude by describing the application of our system to automatic call-type identification from unconstrained spoken customer responses.
An Efficient Boosting Algorithm for Combining Preferences
1999
Cited by 707 (18 self)
The problem of combining preferences arises in several applications, such as combining the results of different search engines. This work describes an efficient algorithm for combining multiple preferences. We first give a formal framework for the problem. We then describe and analyze a new boosting algorithm for combining preferences called RankBoost. We also describe an efficient implementation of the algorithm for certain natural cases. We discuss two experiments we carried out to assess the performance of RankBoost. In the first experiment, we used the algorithm to combine different WWW search strategies, each of which is a query expansion for a given domain. For this task, we compare the performance of RankBoost to the individual search strategies. The second experiment is a collaborative-filtering task for making movie recommendations. Here, we present results comparing RankBoost to nearest-neighbor and regression algorithms.
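The combining step can be sketched as a RankBoost-style final hypothesis: a weighted sum of weak ranking features, H(x) = Σ_t α_t h_t(x), with candidates ordered by decreasing H. The feature names below are hypothetical, and learning the weights α_t (the boosting part) is omitted:

```python
def combined_ranker(weak_rankers, alphas):
    # Final ranking hypothesis: H(x) = sum_t alpha_t * h_t(x).
    def score(x):
        return sum(a * h(x) for h, a in zip(weak_rankers, alphas))
    def rank(items):
        # Order candidates by decreasing combined score.
        return sorted(items, key=score, reverse=True)
    return rank
```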
Multiplicative Updatings for Support-Vector Learning
1998
Cited by 4 (0 self)
Support Vector machines find maximal margin hyperplanes in a high-dimensional feature space. Theoretical results exist which guarantee high generalization performance when the margin is large or when the number of support vectors is small. Multiplicative-Updating algorithms are a new tool for perceptron learning whose theoretical properties are well studied. In this work we present a Multiplicative-Updating algorithm for learning Support Vector machines which exploits the particular structure of high-generalization hypotheses, achieving a fast rate of convergence in exactly those situations where high generalization can be obtained, namely a small number of support vectors or a large margin.
Keywords: theory, support vector machines
1 Introduction. Support Vector (SV) machines are a class of algorithms introduced by Vapnik and co-workers [5] for implementing nonlinear decision rules in terms of hyperplanes in high-dimensional feature spaces. Multiplicative-Updating algorithms are a relativ ...
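The multiplicative-updating idea can be illustrated with a Winnow/exponentiated-gradient-style perceptron; this is an assumption about the update family, not the paper's specific algorithm for SV machines:

```python
import numpy as np

def multiplicative_perceptron(X, y, eta=0.5, epochs=10):
    # On each mistake the weights are *multiplied* by exp(eta * y * x_i)
    # instead of being additively shifted as in the classical perceptron.
    w = np.ones(len(X[0]))
    for _ in range(epochs):
        for x, t in zip(X, y):
            if np.sign(w @ x) != t:
                w = w * np.exp(eta * t * x)
    return w
```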
A Bayesian networks approach for predicting protein-protein interactions from genomic data
Science, 2003
We developed an approach using Bayesian networks to predict protein-protein interactions genome-wide in yeast. Our method naturally weights and combines into reliable predictions genomic features only weakly associated with interaction (e.g., mRNA co-expression, co-essentiality and co-localization). ...
Cited by 285 (11 self)
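The weighting-and-combining step can be sketched as a naive-Bayes product of likelihood ratios; this is an assumed simplification, not the paper's exact network structure, and the function and parameter names are illustrative:

```python
def interaction_posterior(likelihood_ratios, prior_odds):
    # Each weakly informative genomic feature contributes a likelihood
    # ratio; the posterior odds of an interaction are the prior odds
    # times the product of the ratios (conditional independence assumed).
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)  # posterior probability
```

Individually weak features (ratios only modestly above 1) can thus multiply into a confident prediction.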
Efficient face detection by a cascaded support-vector machine expansion
Royal Society of London Proceedings Series A, 460:3283–3297, 2004
Cited by 21 (2 self)
We describe a fast system for the detection and localization of human faces in images using a nonlinear Support Vector Machine. We approximate the decision surface in terms of a reduced set of expansion vectors and propose a cascaded evaluation which has the property that the full support-vector expansion is only evaluated on the face-like parts of the image, while the largest part of typical images is classified using a single expansion vector (a simpler and more efficient classifier). The cascaded evaluation offers a thirty-fold speed-up over an evaluation using the full set of reduced-set vectors, which itself is already thirty times faster than classification using all the support vectors.
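The early-rejection logic of such a cascade can be sketched as follows; the stage functions and threshold here are hypothetical stand-ins for the increasingly expensive reduced-set expansions:

```python
def cascaded_classify(window, stages, threshold=0.0):
    # Cheap approximations (few expansion vectors) run first; a window
    # is rejected as non-face as soon as any stage scores below the
    # threshold, so the full expansion is evaluated only on the
    # face-like parts of the image.
    for stage in stages:
        if stage(window) < threshold:
            return -1  # rejected early by a cheap stage
    return 1  # survived every stage: classify as face
```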
Structural Risk Minimization over Data-Dependent Hierarchies
1996
Cited by 281 (71 self)
The paper introduces some generalizations of Vapnik's method of structural risk minimisation (SRM). As well as making explicit some of the details on SRM, it provides a result that allows one to trade off errors on the training sample against improved generalization performance. It then considers the more general case when the hierarchy of classes is chosen in response to the data. A result is presented on the generalization performance of classifiers with a "large margin". This theoretically explains the impressive generalization performance of the maximal margin hyperplane algorithm of Vapnik and co-workers (which is the basis for their support vector machines). The paper concludes with a more general result in terms of "luckiness" functions, which provides a quite general way for exploiting serendipitous simplicity in observed data to obtain better prediction accuracy from small training sets. Four examples are given of such functions, including the VC dimension measured on the samp ...