Results 1–10 of 11
Discriminative learning of mixture of Bayesian Network Classifiers for sequence classification
In CVPR '06, 2006
"... A mixture of Bayesian Network Classifiers(BNC) has a potential to yield superior classification and generative performance to a single BNC model. We introduce novel discriminative learning methods for mixtures of BNCs. Unlike a single BNC model where the discriminative learning resorts to a gradient ..."
Abstract

Cited by 9 (1 self)
 Add to MetaCart
(Show Context)
A mixture of Bayesian Network Classifiers (BNCs) has the potential to yield superior classification and generative performance to a single BNC model. We introduce novel discriminative learning methods for mixtures of BNCs. Unlike a single BNC model, where discriminative learning resorts to a gradient search, we can exploit the properties of a mixture to alleviate the complex learning task. The proposed method adds mixture components recursively via functional gradient boosting while maximizing the conditional likelihood. This method is highly efficient, as it reduces to generative learning of a base BNC model on weighted data. The proposed approach is particularly suited to sequence classification problems, where the kernels in the base model are usually too complex for effective gradient search. We demonstrate the improved classification performance of the proposed methods in an extensive set of evaluations on time-series sequence data, including human motion classification problems.
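The key reduction in the abstract above, adding mixture components recursively by fitting the base generative model to weighted data, can be sketched in toy form. The following is an illustrative sketch, not the paper's algorithm: a 1-D two-class problem where each round fits a weighted Gaussian (standing in for the base BNC) to the points the current class mixture explains poorly. The weighting rule, the fixed step size, and the equal class priors are all simplifying assumptions; the weights here are only a crude surrogate for the functional gradient of the conditional log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

def weighted_gaussian(x, w):
    # generative learning of the base model on weighted data:
    # a weighted maximum-likelihood Gaussian fit
    w = w / w.sum()
    mu = float(np.sum(w * x))
    var = float(np.sum(w * (x - mu) ** 2)) + 1e-6
    return mu, var

def class_density(x, components):
    f = np.full_like(x, 1e-12)
    for pi, mu, var in components:
        f = f + pi * gauss_pdf(x, mu, var)
    return f

def boost_mixture(x, y, rounds=4, step=0.5):
    # start each class with one plain ML-fitted component
    mix = {c: [(1.0, *weighted_gaussian(x[y == c], np.ones(int((y == c).sum()))))]
           for c in (0, 1)}
    for _ in range(rounds):
        f0, f1 = class_density(x, mix[0]), class_density(x, mix[1])
        post = np.stack([f0, f1]) / (f0 + f1)   # p(c | x), equal priors assumed
        for c in (0, 1):
            # up-weight class-c points the current mixture under-explains
            # (an illustrative surrogate for the functional-gradient weights)
            w = (y == c) * (1.0 - post[c])
            mu, var = weighted_gaussian(x, w)
            mix[c] = [(pi * (1 - step), m, v) for pi, m, v in mix[c]]
            mix[c].append((step, mu, var))
    return mix

def predict(x, mix):
    return (class_density(x, mix[1]) > class_density(x, mix[0])).astype(int)

# class 0 is bimodal, so the recursively added components should help
x = np.concatenate([rng.normal(-4, 1, 100), rng.normal(4, 1, 100),
                    rng.normal(0, 1, 200)])
y = np.concatenate([np.zeros(200, dtype=int), np.ones(200, dtype=int)])
mix = boost_mixture(x, y)
acc = float(np.mean(predict(x, mix) == y))
```

Each round costs only one weighted base-model fit per class, which is the efficiency argument the abstract makes against direct gradient search over mixture parameters.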
DISCRIMINATIVE FEATURE SELECTION FOR HIDDEN MARKOV MODELS USING SEGMENTAL BOOSTING
"... We address the feature selection problem for hidden Markov models (HMMs) in sequence classification. Temporal correlation in sequences often causes difficulty in applying feature selection techniques. Inspired by segmental kmeans segmentation (SKS) [1], we propose Segmentally Boosted HMMs (SBHMMs), ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
(Show Context)
We address the feature selection problem for hidden Markov models (HMMs) in sequence classification. Temporal correlation in sequences often causes difficulty in applying feature selection techniques. Inspired by segmental k-means segmentation (SKS) [1], we propose Segmentally Boosted HMMs (SBHMMs), where state-optimized features are constructed in a segmental and discriminative manner. The contributions are twofold. First, we introduce a novel feature selection algorithm where the temporal dynamics are decoupled from the static learning procedure by assuming that the sequential data are piecewise independent and identically distributed. Second, we show that the SBHMM consistently improves traditional HMM recognition in various domains. The reduction in error compared to traditional HMMs ranges from 17% to 70% in American Sign Language recognition, human gait identification, lip reading, and speech recognition.
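The piecewise i.i.d. assumption above can be illustrated with a minimal sketch. This is not the SBHMM pipeline: equal-length segments stand in for segmental k-means state labels, and a Fisher-style between/within-state variance ratio stands in for the boosting-based feature construction; both substitutions are assumptions for the sake of brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def segment_labels(T, n_states):
    # crude stand-in for segmental k-means: split each sequence
    # into n_states equal-length segments and label frames by segment
    return np.repeat(np.arange(n_states), int(np.ceil(T / n_states)))[:T]

def fisher_scores(frames, states):
    # treat frames as i.i.d. given the state label (the piecewise i.i.d.
    # assumption) and rank features by a Fisher-style variance ratio
    mu = frames.mean(axis=0)
    between = np.zeros(frames.shape[1])
    within = np.zeros(frames.shape[1])
    for s in np.unique(states):
        fs = frames[states == s]
        between += len(fs) * (fs.mean(axis=0) - mu) ** 2
        within += ((fs - fs.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

# synthetic sequences: feature 0 tracks the hidden state, features 1-4 are noise
T, D, n_states = 60, 5, 3
seqs = []
for _ in range(20):
    states = segment_labels(T, n_states)
    x = rng.normal(0, 1, (T, D))
    x[:, 0] += states * 2.0           # only feature 0 is state-discriminative
    seqs.append((x, states))

frames = np.concatenate([x for x, _ in seqs])
states = np.concatenate([s for _, s in seqs])
scores = fisher_scores(frames, states)
best = int(np.argmax(scores))
```

Once the dynamics are decoupled this way, any static discriminative feature selector can be applied per state, which is the point the abstract makes.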
A Recursive Method for Discriminative Mixture Learning
"... We consider the problem of learning density mixture models for classification. Traditional learning of mixtures for density estimation focuses on models that correctly represent the density at all points in the sample space. Discriminative learning, on the other hand, aims at representing the dens ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
(Show Context)
We consider the problem of learning density mixture models for classification. Traditional learning of mixtures for density estimation focuses on models that correctly represent the density at all points in the sample space. Discriminative learning, on the other hand, aims at representing the density at the decision boundary. We introduce a novel discriminative learning method for mixtures of generative models. Unlike traditional discriminative learning methods, which often resort to computationally demanding gradient-search optimization, the proposed method is highly efficient, as it reduces to generative learning of individual mixture components on weighted data. Hence it is particularly suited to domains with complex component models, such as hidden Markov models or Bayesian networks in general, that are usually too complex for effective gradient search. We demonstrate the benefits of the proposed method in a comprehensive set of evaluations on time-series sequence classification problems.
Neural Systems with Numerically Matched Input-Output Statistic: Isotonic Bivariate Statistical Modeling
, 2007
"... Bivariate statistical modeling from incomplete data is a useful statistical tool that allows to discover the model underlying two data sets when the data in the two sets do not correspond in size nor in ordering. Such situation may occur when the sizes of the two data sets do not match (i.e., there ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Bivariate statistical modeling from incomplete data is a useful statistical tool that makes it possible to discover the model underlying two data sets when the data in the two sets correspond neither in size nor in ordering. Such a situation may occur when the sizes of the two data sets do not match (i.e., there are “holes” in the data) or when the data sets have been acquired independently. Also, statistical modeling is useful when the amount of available data is enough to reveal the relevant statistical features of the phenomenon underlying the data. We propose to tackle the problem of statistical modeling via a neural (nonlinear) system that is able to match its input-output statistic to the statistic of the available data sets. A key point of the new implementation proposed here is that it is based on look-up-table (LUT) neural systems, which guarantee a computationally advantageous way of implementing neural systems. A number of numerical experiments, performed on both synthetic and real-world data sets, illustrate the features of the proposed modeling procedure.
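One standard way to realize a monotone (isotonic) input-output statistic match with a lookup table is empirical quantile (CDF) matching. The sketch below illustrates that idea only; it is not the paper's neural LUT system, and the bin count and piecewise-linear interpolation are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def isotonic_lut(x_data, y_data, n_bins=64):
    # build a monotone lookup table mapping the distribution of x_data
    # onto the distribution of y_data via quantile matching; the two
    # data sets may differ in size and ordering
    q = np.linspace(0.0, 1.0, n_bins)
    xs = np.quantile(x_data, q)       # LUT input grid
    ys = np.quantile(y_data, q)       # LUT output values (non-decreasing)
    return xs, ys

def apply_lut(x, xs, ys):
    # piecewise-linear interpolation between LUT entries
    return np.interp(x, xs, ys)

# data sets of different sizes, acquired independently
x_data = rng.normal(0.0, 1.0, 5000)       # input statistic: N(0, 1)
y_data = rng.exponential(2.0, 3000)       # target statistic: Exp(mean 2)

xs, ys = isotonic_lut(x_data, y_data)
mapped = apply_lut(x_data, xs, ys)        # output statistic now matches y_data
```

Because quantiles are non-decreasing, the resulting transfer function is isotonic by construction, and the LUT makes evaluation a cheap table interpolation.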
ing of G
"... (GMMs) calle problems in o to increase th densities follo healthy infant mature infants sional MelFr method for tra based reestim ..."
Abstract
Semi-Supervised Recursive Learning of Discriminative Mixture Models for Time-Series Classification
"... We pose pattern classification as a density estimation problem where we consider mixtures of generative models under partially labeled data setups. Unlike traditional approaches that estimate density everywhere in data space, we focus on the density along the decision boundary that can yield more di ..."
Abstract
 Add to MetaCart
(Show Context)
We pose pattern classification as a density estimation problem in which we consider mixtures of generative models under partially labeled data setups. Unlike traditional approaches that estimate density everywhere in data space, we focus on the density along the decision boundary, which can yield more discriminative models with superior classification performance. We extend our earlier work on the recursive estimation method for discriminative mixture models to semi-supervised learning setups where some of the data points lack class labels. Our model exploits the mixture structure in the functional gradient framework: it searches for the base mixture component model in a greedy fashion, maximizing the conditional class likelihoods for the labeled data while at the same time minimizing the uncertainty of class label prediction for unlabeled data points. The objective can be effectively imposed as individual mixture component learning on weighted data; hence our mixture learning typically becomes highly efficient for popular base generative models like Gaussians or hidden Markov models. Moreover, compared to the expectation-maximization algorithm, the proposed recursive estimation has several advantages, including no need for a predetermined mixture order and robustness to the choice of initial parameters. We demonstrate the benefits of the proposed approach on a comprehensive set of evaluations consisting of diverse time-series classification problems in semi-supervised scenarios.
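The two-term objective described above, conditional class likelihood on labeled points plus a penalty on prediction uncertainty for unlabeled points, can be written down directly. A minimal sketch under assumptions: single Gaussian components per class, equal priors, and an entropy penalty with a hypothetical trade-off weight `lam`; the paper's greedy component search is not implemented here.

```python
import numpy as np

def gauss_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def posteriors(x, params):
    # p(c | x) under per-class Gaussian components with equal priors
    f = np.stack([gauss_pdf(x, mu, var) for mu, var in params]) + 1e-12
    return f / f.sum(axis=0)

def semi_supervised_objective(xl, yl, xu, params, lam=0.5):
    # labeled conditional log-likelihood minus lam times the entropy of
    # class predictions on unlabeled points; a greedy search for a new
    # mixture component would try to maximize this quantity
    pl = posteriors(xl, params)
    cond_ll = np.log(pl[yl, np.arange(len(yl))]).sum()
    pu = posteriors(xu, params)
    entropy = -(pu * np.log(pu)).sum()
    return cond_ll - lam * entropy

xl = np.array([-2.0, -1.5, 1.5, 2.0]); yl = np.array([0, 0, 1, 1])
xu = np.array([-1.8, 1.7, 2.2])                 # unlabeled points

sharp = [(-2.0, 0.5), (2.0, 0.5)]    # well-separated class components
blurry = [(-0.2, 4.0), (0.2, 4.0)]   # nearly indistinguishable components
better = semi_supervised_objective(xl, yl, xu, sharp) > \
         semi_supervised_objective(xl, yl, xu, blurry)
```

The sharp model scores higher on both terms: it explains the labels and is confident on the unlabeled points, which is exactly the behavior the objective rewards.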
Efficient Discriminative Learning of Mixture of Bayesian Networks for Sequence Classification
"... Recently, it has been shown that Bayesian Network Classifier (BNC), a generative model applied to a classification task, yields comparable performance to sophisticated discriminative methods such as SVMs and C4.5 [1]. Improved classification performance of BNCs is due, in part, to optimization of th ..."
Abstract
 Add to MetaCart
(Show Context)
Recently, it has been shown that the Bayesian Network Classifier (BNC), a generative model applied to a classification task, yields performance comparable to sophisticated discriminative methods such as SVMs and C4.5 [1]. The improved classification performance of BNCs is due, in part, to optimization of the conditional maximum likelihood (CML) [2]. Unfortunately, CML optimization, often implemented as a gradient search, is computationally demanding, which is especially prohibitive for sequence domains with complex models such as hidden Markov models. As an alternative to a single BNC model, one may benefit from an ensemble-based approach. In [3], AdaBoost [4] is successfully applied to parameter boosting of a set of ML-optimized BNCs to minimize the exponential loss. However, the resulting model is not a generative model, which may limit its domain of application to classification tasks only. In this paper, we propose a fully generative mixture of BNCs. A mixture model has the potential to yield superior classification performance to a single BNC model while remaining a rich density estimator. Its strength was previously proven for sequence clustering problems via recursive generative estimation of mixture models [5]. In this paper, we formulate a theoretically sound approach to discriminative mixture learning for (sequence) classification problems. Unlike a single BNC model, where discriminative learning resorts to a gradient search, the proposed method takes advantage of the recursive additivity property of mixtures to alleviate the complex learning task. The mixture …
Pose Embeddings: A Deep Architecture for Learning to Match Human Poses
"... We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding potential challenges of estimating body joint positions. Pose embedding learning is formulated under a ..."
Abstract
 Add to MetaCart
(Show Context)
We present a method for learning an embedding that places images of humans in similar poses nearby. This embedding can be used as a direct method of comparing images based on human pose, avoiding the potential challenges of estimating body joint positions. Pose embedding learning is formulated under a triplet-based distance criterion. A deep architecture is used to allow learning of a representation capable of making distinctions between different poses. Experiments on human pose matching and retrieval from video data demonstrate the potential of the method.
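The triplet-based distance criterion mentioned above has a standard form. The sketch below shows that criterion on toy 2-D vectors only; the paper's deep architecture, the embedding dimensionality, and the margin value are not reproduced here, and the squared Euclidean distance is an assumption.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    # the anchor embedding should be closer to the same-pose (positive)
    # embedding than to the different-pose (negative) embedding,
    # by at least `margin`; zero loss once the constraint is satisfied
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])        # anchor embedding
p = np.array([0.1, 0.0])        # similar pose: nearby embedding
n = np.array([2.0, 0.0])        # different pose: distant embedding

loss_good = triplet_loss(a, p, n)   # constraint satisfied, loss is zero
loss_bad = triplet_loss(a, n, p)    # constraint violated, loss is positive
```

Minimizing this loss over many triplets is what pushes images of similar poses together in the learned embedding space, without ever estimating joint positions.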