Results 1-10 of 14
Tabula rasa: Model transfer for object category detection
In Proc. ICCV, 2011
"... Our objective is transfer training of a discriminatively trained object category detector, in order to reduce the number of training images required. To this end we propose three transfer learning formulations where a template learnt previously for other categories is used to regularize the training ..."
Cited by 56 (1 self)
Abstract:
Our objective is transfer training of a discriminatively trained object category detector, in order to reduce the number of training images required. To this end we propose three transfer learning formulations where a template learnt previously for other categories is used to regularize the training of a new category. All the formulations result in convex optimization problems. Experiments (on PASCAL VOC) demonstrate significant performance gains by transfer learning from one class to another (e.g. motorbike to bicycle), including one-shot learning, specialization from class to a subordinate class (e.g. from quadruped to horse) and transfer using multiple components. In the case of multiple training samples, detection performance approaching the state of the art can be achieved with substantially fewer training samples.
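For intuition, here is a minimal sketch of what a template-regularized, convex detector objective of this kind can look like: a hinge-loss SVM whose regularizer pulls the new weight vector toward a previously learnt template. The names (w_src, beta, lam) and the exact form are our illustration, not necessarily the paper's formulation.

```python
import numpy as np

def transfer_svm_objective(w, X, y, w_src, lam=1.0, beta=1.0):
    """Hinge-loss detector objective regularized toward a template
    w_src learnt for a related category; beta=0 recovers a plain SVM.
    Convex in w (quadratic regularizer + convex hinge). Sketch only."""
    hinge = np.maximum(0.0, 1.0 - y * (X @ w)).sum()   # labels y in {-1, +1}
    return lam * np.sum((w - beta * w_src) ** 2) + hinge
```

Because both terms are convex in w, standard SVM solvers apply unchanged; the strength of the transfer is controlled by lam and beta.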
Efficient learning of domain-invariant image representations
In Proc. ICLR
"... We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of t ..."
Cited by 25 (10 self)
Abstract:
We present an algorithm that learns representations which explicitly compensate for domain mismatch and which can be efficiently realized as linear classifiers. Specifically, we form a linear transformation that maps features from the target (test) domain to the source (training) domain as part of training the classifier. We optimize both the transformation and classifier parameters jointly, and introduce an efficient cost function based on misclassification loss. Our method combines several features previously unavailable in a single algorithm: multi-class adaptation through representation learning, ability to map across heterogeneous feature spaces, and scalability to large datasets. We present experiments on several image datasets that demonstrate improved accuracy and computational advantages compared to previous approaches.
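A toy rendering of the joint optimization follows, with a squared error standing in for the misclassification loss and alternating gradient steps on the transform W and classifier theta. All names and the loss choice are our assumptions, not the paper's.

```python
import numpy as np

def joint_adapt(Xs, ys, Xt, yt, n_iters=100, lr=1e-3):
    """Jointly learn a linear map W (target -> source feature space)
    and a linear classifier theta, using squared loss and binary
    labels in {-1, +1}. Illustrative simplification; no bias term."""
    ds, dt = Xs.shape[1], Xt.shape[1]
    W, theta = np.zeros((ds, dt)), np.zeros(ds)
    for _ in range(n_iters):
        # classifier step: fit theta on source + transformed target points
        X = np.vstack([Xs, Xt @ W.T])
        y = np.concatenate([ys, yt])
        theta -= lr * X.T @ (X @ theta - y) / len(y)
        # transform step: move W so transformed target points are
        # classified correctly by the current theta
        resid = (Xt @ W.T) @ theta - yt
        W -= lr * np.outer(theta, resid @ Xt) / len(yt)
    return W, theta
```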
Natural Language Processing Tools for Reading Level Assessment and Text Simplification for . . .
2007
"... ..."
RATIO SEMI-DEFINITE CLASSIFIERS
"... We present a novel classification model that is formulated as a ratio of semi-definite polynomials. We derive an efficient learning algorithm for this classifier, and apply it to two separate phoneme classification corpora. Results show that our disciminatively trained model can achieve accuracies c ..."
Cited by 7 (6 self)
Abstract:
We present a novel classification model that is formulated as a ratio of semi-definite polynomials. We derive an efficient learning algorithm for this classifier, and apply it to two separate phoneme classification corpora. Results show that our discriminatively trained model can achieve accuracies comparable with state-of-the-art techniques such as multi-layer perceptrons, but does not possess the overconfident bias often found in models based on ratios of exponentials. Index Terms: Pattern recognition, Speech recognition
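To make the model family concrete, a minimal form consistent with the abstract is a posterior that is a ratio of quadratic forms with positive semi-definite coefficient matrices. The factorized parameterization below is our way of keeping each class matrix PSD, not necessarily the paper's.

```python
import numpy as np

def rsc_posterior(x, Ls):
    """p(y|x) = x^T A_y x / sum_k x^T A_k x, with A_y = L_y L_y^T so
    that each numerator is a non-negative semi-definite polynomial in x.
    Ls is a list of factor matrices, one per class. Sketch only."""
    scores = np.array([x @ (L @ L.T) @ x for L in Ls])
    return scores / scores.sum()
```

Because the class scores grow only polynomially in the input, the posterior saturates far more slowly than a softmax (a ratio of exponentials), which is one plausible reading of the claimed resistance to overconfidence.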
Semi-supervised domain adaptation with instance constraints
In IEEE International Conference on Computer Vision, 2013
"... Most successful object classification and detection meth-ods rely on classifiers trained on large labeled datasets. However, for domains where labels are limited, simply bor-rowing labeled data from existing datasets can hurt per-formance, a phenomenon known as “dataset bias. ” We propose a general ..."
Cited by 6 (2 self)
Abstract:
Most successful object classification and detection methods rely on classifiers trained on large labeled datasets. However, for domains where labels are limited, simply borrowing labeled data from existing datasets can hurt performance, a phenomenon known as “dataset bias.” We propose a general framework for adapting classifiers from “borrowed” data to the target domain using a combination of available labeled and unlabeled examples. Specifically, we show that imposing smoothness constraints on the classifier scores over the unlabeled data can lead to improved adaptation results. Such constraints are often available in the form of instance correspondences, e.g. when the same object or individual is observed simultaneously from multiple views, or tracked between video frames. In these cases, the object labels are unknown but can be constrained to be the same or similar. We propose techniques that build on existing domain adaptation methods by explicitly modeling these relationships, and demonstrate empirically that they improve recognition accuracy in two scenarios: multi-category image classification and object detection in video.
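As a sketch, the smoothness constraint over correspondences can be written as a pairwise penalty added to the supervised adaptation loss; the names and the squared-distance choice here are ours.

```python
import numpy as np

def correspondence_penalty(scores, pairs):
    """Penalizes disagreement between classifier score vectors for
    unlabeled pairs (i, j) known to depict the same object, e.g. two
    camera views or two frames of a track. scores: (n, n_classes)."""
    return sum(np.sum((scores[i] - scores[j]) ** 2) for i, j in pairs)
```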
On the semi-supervised learning of multi-layered perceptrons
In Proc. Annual Conference of the International Speech Communication Association (INTERSPEECH), 2009
"... We present a novel approach for training a multi-layered perceptron (MLP) in a semi-supervised fashion. Our objective function, when optimized, balances training set accuracy with fidelity to a graph-based manifold over all points. Additionally, the objective favors smoothness via an entropy regular ..."
Cited by 3 (1 self)
Abstract:
We present a novel approach for training a multi-layered perceptron (MLP) in a semi-supervised fashion. Our objective function, when optimized, balances training set accuracy with fidelity to a graph-based manifold over all points. Additionally, the objective favors smoothness via an entropy regularizer over classifier outputs as well as straightforward ℓ2 regularization. Our approach also scales well enough to enable large-scale training. The results demonstrate significant improvement on several phone classification tasks over baseline MLPs. Index Terms: semi-supervised learning, neural networks, phone classification
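One plausible rendering of this objective, using our own weights gamma, nu, kappa and a generic divergence d between output distributions (the paper's exact terms and signs may differ), is

J(\theta) = \sum_{i \in \mathcal{L}} \ell\big(y_i, p_\theta(x_i)\big) + \gamma \sum_{(i,j)} w_{ij}\, d\big(p_\theta(x_i), p_\theta(x_j)\big) - \nu \sum_i H\big(p_\theta(x_i)\big) + \kappa \lVert\theta\rVert_2^2,

where the first term rewards accuracy on the labeled set L, the second ties classifier outputs together along weighted graph edges, the entropy term H (entered with a negative sign, so it is maximized) keeps outputs from collapsing to overconfident predictions, and the last term is the ℓ2 penalty.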
MULTI-LAYER RATIO SEMI-DEFINITE CLASSIFIERS
"... We develop a novel extension to the Ratio Semi-definite Classifier, a discriminative model formulated as a ratio of semi-definite polynomials. By adding a hidden layer to the model, we can efficiently train the model, while achieving higher accuracy than the original version. Results on artificial 2 ..."
Cited by 3 (2 self)
Abstract:
We develop a novel extension to the Ratio Semi-definite Classifier, a discriminative model formulated as a ratio of semi-definite polynomials. By adding a hidden layer to the model, we can train it efficiently while achieving higher accuracy than the original version. Results on artificial 2-D data as well as two separate phone classification corpora show that our multi-layer model still avoids the overconfidence bias found in models based on ratios of exponentials, while remaining competitive with state-of-the-art techniques such as multi-layer perceptrons. Index Terms: Pattern recognition, Speech recognition
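Continuing the sketch given for the single-layer ratio classifier above, the multi-layer variant can be pictured as feeding a hidden representation into the same semi-definite ratio; the tanh layer is our hypothetical choice, not necessarily the paper's architecture.

```python
import numpy as np

def multilayer_rsc_posterior(x, V, Ls):
    """Hidden layer h = tanh(V x) feeds the ratio-of-semi-definite-
    polynomials posterior from the earlier sketch. Sketch only."""
    h = np.tanh(V @ x)
    scores = np.array([h @ (L @ L.T) @ h for L in Ls])
    return scores / scores.sum()
```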
Graphical Models for Integrating Syllabic Information
2009
"... We present graphical models that enhance a speech recognizer with information about syllabic segmentations. The segmentations are specified by locations of syllable nuclei, and the graphical models are able to use these locations to specify a “soft ” segmentation of the speech data. The graphs give ..."
Cited by 2 (0 self)
Abstract:
We present graphical models that enhance a speech recognizer with information about syllabic segmentations. The segmentations are specified by locations of syllable nuclei, and the graphical models are able to use these locations to specify a “soft” segmentation of the speech data. The graphs give improved discrimination between speech and noise when compared to a baseline model. When the nucleus locations are derived from oracle information we obtain an overall improvement, and augmenting the oracle syllable nuclei with lexical-stress information yields further gains over locations alone.
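The abstract does not spell out the graphical model, but the "soft" segmentation idea can be illustrated simply: turn nucleus frame indices into a smooth per-frame weight rather than hard boundaries. The Gaussian-bump construction below is purely our illustration.

```python
import numpy as np

def soft_segmentation(n_frames, nuclei, width=5.0):
    """Per-frame weight in [0, 1], peaking at each syllable nucleus and
    decaying smoothly between them; width (in frames) is a free choice."""
    t = np.arange(n_frames)[:, None]
    bumps = np.exp(-0.5 * ((t - np.asarray(nuclei)[None, :]) / width) ** 2)
    return bumps.max(axis=1)
```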
A semi-supervised learning algorithm for multi-layered perceptrons
2009
"... We address the issue of learning multi-layered perceptrons (MLPs) in a discriminative, inductive, multiclass, parametric, and semi-supervised fashion. We introduce a novel objective function that, when optimized, simultane-ously encourages 1) accuracy on the labeled points, 2) respect for an underly ..."
Cited by 1 (1 self)
Abstract:
We address the issue of learning multi-layered perceptrons (MLPs) in a discriminative, inductive, multiclass, parametric, and semi-supervised fashion. We introduce a novel objective function that, when optimized, simultaneously encourages 1) accuracy on the labeled points, 2) respect for an underlying graph-represented manifold on all points, 3) smoothness via an entropic regularizer of the classifier outputs, and 4) simplicity via an ℓ2 regularizer. Our approach provides a simple, elegant, and computationally efficient way to bring the benefits of semi-supervised learning (and what is typically an enormous amount of unlabeled training data) to MLPs, which are one of the most widely used pattern classifiers in practice. Our objective has the property that efficient learning is possible using stochastic gradient descent even on large datasets. Results demonstrate significant improvements compared both to a baseline supervised MLP, and also to a previous non-parametric manifold-regularized reproducing kernel Hilbert space classifier.
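To illustrate the SGD claim, here is a minimal PyTorch-style training step over the four terms, with squared distance standing in for whatever divergence the paper uses on graph edges. All hyperparameter names are ours, and the ℓ2 term is assumed to be handled by the optimizer's weight decay.

```python
import torch
import torch.nn.functional as F

def sgd_step(mlp, opt, xl, yl, xu, xv, w_uv, gamma=1.0, nu=0.1):
    """One stochastic step: labeled cross-entropy, plus graph smoothness
    over unlabeled neighbor pairs (xu[i], xv[i]) with edge weights
    w_uv[i], minus an entropy bonus that keeps outputs from collapsing.
    The l2 regularizer enters via the optimizer's weight_decay."""
    pu = F.softmax(mlp(xu), dim=-1)
    pv = F.softmax(mlp(xv), dim=-1)
    smooth = (w_uv * ((pu - pv) ** 2).sum(-1)).mean()
    entropy = -(pu * torch.log(pu + 1e-8)).sum(-1).mean()
    loss = F.cross_entropy(mlp(xl), yl) + gamma * smooth - nu * entropy
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

Each step touches only a minibatch of labeled points and a minibatch of graph edges, which is what makes the approach scale to large unlabeled corpora.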