Results 1–10 of 24
Online passive-aggressive algorithms
JMLR, 2006
Abstract

Cited by 293 (22 self)
We present a unified view for online classification, regression, and uniclass problems. This view leads to a single algorithmic framework for the three problems. We prove worst-case loss bounds for various algorithms for both the realizable case and the non-realizable case. The end result is new algorithms and accompanying loss bounds for hinge-loss regression and uniclass. We also obtain refined loss bounds for previously studied classification algorithms.
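The passive-aggressive update underlying this framework can be sketched in a few lines. The following is an illustrative rendering of the hard-margin classification variant only (the function name and structure are our own, not the paper's code): the update is "passive" when the hinge loss is zero and otherwise makes the smallest weight change that satisfies the margin constraint.

```python
# Minimal sketch of a passive-aggressive (PA) step for binary
# classification. Illustrative names; hard-margin variant only.

def pa_update(w, x, y, ):
    """One PA step on example (x, y), with y in {-1, +1}."""
    margin = y * sum(wi * xi for wi, xi in zip(w, x))
    loss = max(0.0, 1.0 - margin)           # hinge loss
    if loss == 0.0:
        return w                            # passive: constraint already met
    sq_norm = sum(xi * xi for xi in x)
    tau = loss / sq_norm                    # aggressive: smallest sufficient step
    return [wi + tau * y * xi for wi, xi in zip(w, x)]
```

After an aggressive step, the new weight vector attains margin exactly 1 on the triggering example, which is what the closed-form step size guarantees.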
Classification using Intersection Kernel Support Vector Machines is Efficient
Abstract

Cited by 121 (10 self)
Straightforward classification using kernelized SVMs requires evaluating the kernel for a test vector and each of the support vectors. For a class of kernels we show that one can do this much more efficiently. In particular we show that one can build histogram intersection kernel SVMs (IKSVMs) with runtime complexity of the classifier logarithmic in the number of support vectors, as opposed to linear for the standard approach. We further show that by precomputing auxiliary tables we can construct an approximate classifier with constant runtime and space requirements, independent of the number of support vectors, with negligible loss in classification accuracy on various tasks. This approximation also applies to 1 − χ² and other kernels of similar form. We also introduce novel features based on multi-level histograms of oriented edge energy and present experiments on various detection datasets. On the INRIA pedestrian dataset an approximate IKSVM classifier based on these features has the current best performance, with a miss rate 13% lower at 10⁻⁶ false positives per window than the linear SVM detector of Dalal & Triggs. On the DaimlerChrysler pedestrian dataset IKSVM gives accuracy comparable to the best results (based on a quadratic SVM) while being 15× faster. In these experiments our approximate IKSVM is up to 2000× faster than a standard implementation and requires 200× less memory. Finally we show that a 50× speedup is possible using approximate IKSVM based on spatial pyramid features on the Caltech 101 dataset with negligible loss of accuracy.
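The logarithmic-time evaluation rests on a per-dimension decomposition of the intersection kernel: Σᵢ αᵢ min(xⱼ, sᵢⱼ) splits into a sum over support values ≤ xⱼ plus xⱼ times the coefficients of the rest, both recoverable from sorted values with prefix sums and a binary search. The sketch below illustrates that idea under our own naming; it is not the authors' implementation and omits the bias term and the constant-time table approximation.

```python
# Sketch of fast intersection-kernel SVM evaluation: O(d log n) instead
# of O(d n) for n support vectors in d dimensions. Illustrative names.
import bisect

def build_tables(sv, alpha):
    """sv: list of support vectors (lists); alpha: signed coefficients."""
    n, d = len(sv), len(sv[0])
    tables = []
    for j in range(d):
        pairs = sorted((sv[i][j], alpha[i]) for i in range(n))
        vals = [v for v, _ in pairs]
        # prefix[r] = sum of a*v over the r smallest support values
        # suffix[r] = sum of a over the remaining n - r values
        prefix, suffix = [0.0], [sum(a for _, a in pairs)]
        acc_av, acc_a = 0.0, suffix[0]
        for v, a in pairs:
            acc_av += a * v
            acc_a -= a
            prefix.append(acc_av)
            suffix.append(acc_a)
        tables.append((vals, prefix, suffix))
    return tables

def iksvm_eval(tables, x):
    """Evaluate sum_i alpha_i * sum_j min(x_j, sv[i][j]) via the tables."""
    total = 0.0
    for (vals, prefix, suffix), xj in zip(tables, x):
        r = bisect.bisect_right(vals, xj)   # count of support values <= x_j
        total += prefix[r] + xj * suffix[r]
    return total
```

On small inputs the result matches the naive double sum exactly, which is a useful sanity check when adapting the idea.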
Learning the kernel function via regularization
Journal of Machine Learning Research, 2005
Abstract

Cited by 96 (7 self)
We study the problem of finding an optimal kernel from a prescribed convex set of kernels K for learning a real-valued function by regularization. We establish for a wide variety of regularization functionals that this leads to a convex optimization problem and, for square loss regularization, we characterize the solution of this problem. We show that, although K may be an uncountable set, the optimal kernel is always obtained as a convex combination of at most m+2 basic kernels, where m is the number of data examples. In particular, our results apply to learning the optimal radial kernel or the optimal dot-product kernel.
Large Margin Hierarchical Classification
In Proceedings of the Twenty-First International Conference on Machine Learning
Abstract

Cited by 67 (7 self)
We present an algorithmic framework for supervised classification learning where the set of labels is organized in a predefined hierarchical structure. This structure is encoded by a rooted tree which induces a metric over the label set.
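The tree-induced metric mentioned here (distance between two labels = number of edges on the path between them in the rooted label tree) is easy to make concrete. The sketch below uses a made-up label hierarchy purely for illustration; only the metric itself comes from the abstract.

```python
# Distance between two labels in a rooted tree, measured in edges.
# `parent` maps each non-root node to its parent; the hierarchy is invented.

def path_to_root(parent, v):
    """Nodes from v up to (and including) the root."""
    path = [v]
    while v in parent:
        v = parent[v]
        path.append(v)
    return path

def tree_distance(parent, u, v):
    """Edge count on the u-v path: climb from v until hitting u's ancestors."""
    ancestors = {node: i for i, node in enumerate(path_to_root(parent, u))}
    d = 0
    while v not in ancestors:
        v = parent[v]
        d += 1
    return d + ancestors[v]
```

For example, with mammals and fish under a common "animal" root, two sibling mammals are at distance 2 while a mammal and a fish are at distance 4.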
Online and batch learning of pseudometrics
In ICML, 2004
Abstract

Cited by 54 (5 self)
We describe and analyze an online algorithm for supervised learning of pseudometrics. The algorithm receives pairs of instances and predicts their similarity according to a pseudometric. The pseudometrics we use are quadratic forms parameterized by positive semidefinite matrices. The core of the algorithm is an update rule that is based on successive projections onto the positive semidefinite cone and onto half-space constraints imposed by the examples. We describe an efficient procedure for performing these projections, derive a worst-case mistake bound on the similarity predictions, and discuss a dual version of the algorithm in which it is simple to incorporate kernel operators. The online algorithm also serves as a building block for deriving a large-margin batch algorithm. We demonstrate the merits of the proposed approach by conducting experiments on the MNIST dataset and on document filtering.
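The quadratic-form pseudometric itself is d_A(x, y) = √((x − y)ᵀ A (x − y)) for a positive semidefinite A. A minimal sketch of evaluating it (the matrices and points below are illustrative; the projection-based learning of A is not shown):

```python
# Quadratic-form pseudometric d_A(x, y) = sqrt((x - y)^T A (x - y)).
# A must be positive semidefinite; a rank-deficient A makes this a
# pseudometric, since distinct points can be at distance zero.
import math

def pseudometric(A, x, y):
    diff = [xi - yi for xi, yi in zip(x, y)]
    q = sum(diff[i] * A[i][j] * diff[j]
            for i in range(len(diff)) for j in range(len(diff)))
    return math.sqrt(max(q, 0.0))   # clamp tiny negatives from rounding
```

With A = I this reduces to the Euclidean distance; with a degenerate A (e.g. zeroing a coordinate) distinct points can have distance 0, which is exactly the "pseudo" in pseudometric.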
Online learning over graphs
Proc. 22nd Int. Conf. Machine Learning, 2005
Abstract

Cited by 32 (9 self)
We apply classic online learning techniques similar to the perceptron algorithm to the problem of learning a function defined on a graph. The benefits of our approach include simple algorithms and performance guarantees that we naturally interpret in terms of structural properties of the graph, such as the algebraic connectivity or the diameter of the graph. We also discuss how these methods can be modified to allow active learning on a graph. We present preliminary experiments with encouraging results.
Max-Margin Additive Classifiers for Detection
 ICCV
Abstract

Cited by 28 (4 self)
We present methods for training high-quality object detectors very quickly. The core contribution is a pair of fast training algorithms for piecewise linear classifiers, which can approximate arbitrary additive models. The classifiers are trained in a max-margin framework and significantly outperform linear classifiers on a variety of vision datasets. We report experimental results quantifying training time and accuracy on image classification tasks and pedestrian detection, including detection results better than the best previously reported on the INRIA dataset, with faster training.
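An additive model of the kind described here scores an input as a sum of one-dimensional functions, one per feature, each represented piecewise-linearly. The sketch below shows only that representation, with invented knots and values; it is not the paper's training procedure.

```python
# Additive classifier score: sum of per-dimension piecewise-linear
# functions. Knot locations and values here are illustrative.

def piecewise_linear(knots, values, t):
    """Linearly interpolate (knots, values) at t, clamping at the ends."""
    if t <= knots[0]:
        return values[0]
    if t >= knots[-1]:
        return values[-1]
    for k in range(1, len(knots)):
        if t <= knots[k]:
            frac = (t - knots[k - 1]) / (knots[k] - knots[k - 1])
            return values[k - 1] + frac * (values[k] - values[k - 1])

def additive_score(per_dim, x):
    """per_dim: list of (knots, values) pairs, one per input dimension."""
    return sum(piecewise_linear(kn, va, xj)
               for (kn, va), xj in zip(per_dim, x))
```

Because each dimension contributes independently, evaluation costs only one table lookup and interpolation per feature, which is what makes such classifiers fast at test time.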
Accurate Online Support Vector Regression
Neural Computation, 2003
Abstract

Cited by 21 (2 self)
Conventional batch implementations of Support Vector Regression (SVR) are inefficient when used for applications such as online learning or leave-one-out cross-validation, because they must be retrained from scratch every time the training set is modified. An Accurate Online Support Vector Regression (AOSVR) algorithm is introduced, which efficiently updates a trained SVR function whenever a sample is added to or removed from the training set. The updated SVR function is identical to the one that would be produced by a batch algorithm. Applications of AOSVR both in an online and in a cross-validation scenario are presented. In both scenarios, experiments demonstrate that AOSVR outperforms batch SVR algorithms with both cold and warm start.
Online algorithm for hierarchical phoneme classification
In Workshop on Multimodal Interaction and Related Machine Learning Algorithms, Lecture Notes in Computer Science, 2004
Abstract

Cited by 19 (10 self)
We present an algorithmic framework for phoneme classification where the set of phonemes is organized in a predefined hierarchical structure. This structure is encoded via a rooted tree which induces a metric over the set of phonemes. Our approach combines techniques from large margin kernel methods and Bayesian analysis. Extending the notion of large margin to hierarchical classification, we associate a prototype with each individual phoneme and with each phonetic group which corresponds to a node in the tree. We then formulate the learning task as an optimization problem with margin constraints over the phoneme set. In the spirit of Bayesian methods, we impose similarity requirements between the prototypes corresponding to adjacent phonemes in the phonetic hierarchy. We describe a new online algorithm for solving the hierarchical classification problem and provide worst-case loss analysis for the algorithm. We demonstrate the merits of our approach by applying the algorithm to synthetic data as well as speech data.
Learning to align polyphonic music
In Proceedings of the 5th International Conference on Music Information Retrieval, 2004
Abstract

Cited by 18 (11 self)
We describe an efficient learning algorithm for aligning a symbolic representation of a musical piece with its acoustic counterpart. Our method employs a supervised learning approach by using a training set of aligned symbolic and acoustic representations. The alignment function we devise is based on mapping the input acoustic-symbolic representation along with the target alignment into an abstract vector space. Building on techniques used for learning support vector machines (SVM), our alignment function distills to a classifier in the abstract vector space which separates correct alignments from incorrect ones. We describe a simple iterative algorithm for learning the alignment function and discuss its formal properties. We use our method for aligning MIDI and MP3 representations of polyphonic recordings of piano music. We also compare our discriminative approach to a generative method based on a generalization of hidden Markov models. In all of our experiments, the discriminative method outperforms the HMM-based method.