Results 1 - 10 of 1,754
Exact Train Pathing
"... Suppose we are given a schedule of train movements over a rail network into which a new train is to be included. The origin and the destination are specified for the new train; it is required that a schedule (including the path) be determined for it so as to minimize the time taken without affecting ..."
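The abstract describes inserting one new train into a fixed timetable so that its own journey time is minimized. As a toy illustration only (the network, the occupied departure slots, the one-minute time resolution and the function name below are all invented, and this is not the paper's algorithm), a time-expanded shortest-path search can route the new train around slots already taken by scheduled trains, trading waiting at a station against taking a longer free route:

```python
import heapq

# Hypothetical network: running times (minutes) between adjacent stations,
# and departure minutes at which each segment is already occupied by scheduled trains.
travel  = {("A", "B"): 3, ("B", "C"): 4, ("B", "D"): 2, ("D", "C"): 5}
blocked = {("A", "B"): {0, 1}, ("B", "C"): {4, 5, 6}, ("B", "D"): set(), ("D", "C"): set()}

def earliest_arrival(origin, destination, start, horizon=60):
    """Dijkstra over (time, station) states; waiting one minute at a station is allowed."""
    heap, seen = [(start, origin)], set()
    while heap:
        t, s = heapq.heappop(heap)
        if s == destination:
            return t
        if (t, s) in seen or t >= horizon:
            continue
        seen.add((t, s))
        heapq.heappush(heap, (t + 1, s))              # option 1: wait one minute
        for (u, v), run in travel.items():            # option 2: depart on a free segment
            if u == s and t not in blocked[(u, v)]:
                heapq.heappush(heap, (t + run, v))
    return None

print(earliest_arrival("A", "C", 0))   # -> 11: the new train waits out the occupied slots
```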
A Learning Algorithm for Continually Running Fully Recurrent Neural Networks
1989
"... The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have: (1) the advantage that they do not require a precis ..."
Cited by 534 (4 self)
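The algorithm referred to here is real-time recurrent learning: each weight carries a sensitivity p^k_ij = ∂y_k/∂w_ij that is propagated forward in time alongside the network state, so gradients are available at every step without unrolling. Below is a minimal numpy sketch of that recursion; the network size, learning rate and next-step-prediction toy task are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_units, eta = 1, 3, 0.05                     # toy sizes and learning rate
W = rng.normal(scale=0.5, size=(n_units, n_in + n_units + 1))  # weights on [input, state, bias]
y = np.zeros(n_units)                               # unit activations
P = np.zeros((n_units, n_units, W.shape[1]))        # sensitivities dy_k / dw_ij

def f(s):  return np.tanh(s)
def df(s): return 1.0 - np.tanh(s) ** 2

for t in range(200):
    x = np.array([np.sin(0.2 * t)])                 # toy input stream
    d = np.array([np.sin(0.2 * (t + 1))])           # target for unit 0: predict the next input
    z = np.concatenate([x, y, [1.0]])               # concatenated input, state and bias
    s = W @ z
    # Propagate sensitivities: dy_k/dw_ij at t+1 depends on all dy_l/dw_ij at t.
    rec = np.einsum("kl,lij->kij", W[:, n_in:n_in + n_units], P)
    delta = np.zeros_like(P)
    delta[np.arange(n_units), np.arange(n_units), :] = z
    P = df(s)[:, None, None] * (rec + delta)
    y = f(s)
    e = np.zeros(n_units)
    e[0] = d[0] - y[0]                              # error only on the unit with a target
    W += eta * np.einsum("k,kij->ij", e, P)         # RTRL weight update
```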
Real-time human pose recognition in parts from single depth images
- In CVPR, 2011
"... We propose a new method to quickly and accurately predict 3D positions of body joints from a single depth image, using no temporal information. We take an object recognition approach, designing an intermediate body parts representation that maps the difficult pose estimation problem into a simpler p ..."
Abstract
-
Cited by 568 (17 self)
- Add to MetaCart
per-pixel classification problem. Our large and highly varied training dataset allows the classifier to estimate body parts invariant to pose, body shape, clothing, etc. Finally we generate confidence-scored 3D proposals of several body joints by reprojecting the classification result and finding
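The per-pixel classification step is commonly driven by simple depth-difference features (pixel offsets scaled by 1/depth for depth invariance) fed to a decision-forest classifier. The sketch below uses a synthetic depth image, synthetic labels and made-up offsets purely to show the shape of such a pipeline, not the paper's trained system:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
H, W, n_feat = 64, 64, 16
depth = rng.uniform(1.0, 4.0, size=(H, W))           # stand-in for a real depth image (metres)
labels = rng.integers(0, 3, size=(H, W))              # stand-in per-pixel body-part labels
offsets = rng.uniform(-30, 30, size=(n_feat, 2, 2))   # random offset pairs (u, v)

def pixel_features(depth, y, x):
    """Depth-difference features: offsets are scaled by 1/depth for depth invariance."""
    d = depth[y, x]
    feats = np.empty(n_feat)
    for i, (u, v) in enumerate(offsets):
        p = np.clip([y + u[0] / d, x + u[1] / d], 0, [H - 1, W - 1]).astype(int)
        q = np.clip([y + v[0] / d, x + v[1] / d], 0, [H - 1, W - 1]).astype(int)
        feats[i] = depth[p[0], p[1]] - depth[q[0], q[1]]
    return feats

ys, xs = rng.integers(0, H, 500), rng.integers(0, W, 500)    # sample training pixels
X = np.array([pixel_features(depth, y, x) for y, x in zip(ys, xs)])
clf = RandomForestClassifier(n_estimators=20).fit(X, labels[ys, xs])
```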
CoNLL-X shared task on multilingual dependency parsing
- In Proc. of CoNLL, 2006
"... Each year the Conference on Computational Natural Language Learning (CoNLL) 1 features a shared task, in which participants train and test their systems on exactly the same data sets, in order to better compare systems. The tenth CoNLL (CoNLL-X) saw a shared task on Multilingual Dependency Parsing. ..."
Abstract
-
Cited by 344 (2 self)
- Add to MetaCart
Each year the Conference on Computational Natural Language Learning (CoNLL) 1 features a shared task, in which participants train and test their systems on exactly the same data sets, in order to better compare systems. The tenth CoNLL (CoNLL-X) saw a shared task on Multilingual Dependency Parsing
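Systems in this setting are usually compared by unlabeled and labeled attachment scores, i.e. the fraction of tokens whose head (or head and relation) is predicted correctly. A minimal scorer, with made-up gold and predicted trees, is sketched below:

```python
def attachment_scores(gold, predicted):
    """Unlabeled / labeled attachment scores, the usual dependency-parsing metrics.
    Each sentence is a list of (head_index, relation_label) pairs, one per token."""
    total = correct_head = correct_both = 0
    for g_sent, p_sent in zip(gold, predicted):
        for (g_head, g_rel), (p_head, p_rel) in zip(g_sent, p_sent):
            total += 1
            correct_head += g_head == p_head
            correct_both += (g_head == p_head) and (g_rel == p_rel)
    return correct_head / total, correct_both / total

# toy usage with made-up trees for two short sentences
gold = [[(2, "nsubj"), (0, "root"), (2, "obj")], [(2, "det"), (0, "root")]]
pred = [[(2, "nsubj"), (0, "root"), (1, "obj")], [(2, "nsubj"), (0, "root")]]
print(attachment_scores(gold, pred))   # (UAS, LAS) = (0.8, 0.6)
```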
Training structural SVMs when exact inference is intractable
- In Proc. Intl. Conf. on Machine Learning (ICML), 2008
"... While discriminative training (e.g., CRF, structural SVM) holds much promise for machine translation, image segmentation, and clustering, the complex inference these applications require make exact training intractable. This leads to a need for approximate training methods. Unfortunately, knowledge ..."
Cited by 138 (7 self)
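A common way to make such training approximate is to keep the structural-SVM subgradient update but replace the exact loss-augmented argmax with whatever approximate inference the application supports. The sketch below uses a crude greedy decoder on a toy sequence-labeling problem; it illustrates that idea only, and is not one of the specific algorithms analysed in the paper (all sizes, features and constants are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n_labels, n_feats, eta, C = 3, 5, 0.1, 1.0
W_unary = np.zeros((n_labels, n_feats))      # per-label emission weights
W_trans = np.zeros((n_labels, n_labels))     # transition weights

def greedy_decode(x, y_true=None):
    """Approximate (greedy, left-to-right) loss-augmented argmax; a crude stand-in
    for whatever approximate inference the application forces us to use."""
    y_hat, prev = [], None
    for t, feats in enumerate(x):
        scores = W_unary @ feats
        if prev is not None:
            scores = scores + W_trans[prev]
        if y_true is not None:                          # Hamming loss augmentation
            scores = scores + (np.arange(n_labels) != y_true[t])
        prev = int(np.argmax(scores))
        y_hat.append(prev)
    return y_hat

def subgradient_step(x, y):
    """One structural-SVM subgradient step using the approximate argmax above."""
    global W_unary, W_trans
    y_hat = greedy_decode(x, y_true=y)
    for t, feats in enumerate(x):
        W_unary[y[t]] += eta * C * feats
        W_unary[y_hat[t]] -= eta * C * feats
        if t > 0:
            W_trans[y[t - 1], y[t]] += eta * C
            W_trans[y_hat[t - 1], y_hat[t]] -= eta * C
    W_unary *= (1 - eta); W_trans *= (1 - eta)          # shrink from the regularizer

# toy usage: random feature sequences with labels tied to the strongest feature
for _ in range(200):
    x = rng.normal(size=(6, n_feats))
    y = list(np.argmax(x[:, :n_labels], axis=1))
    subgradient_step(x, y)
```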
Incremental and Decremental Support Vector Machine Learning
2000
"... An on-line recursive algorithm for training support vector machines, one vector at a time, is presented. Adiabatic increments retain the KuhnTucker conditions on all previously seen training data, in a number of steps each computed analytically. The incremental procedure is reversible, and decrement ..."
Cited by 251 (4 self)
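The paper's adiabatic bookkeeping does not fit in a few lines, but the invariant it maintains is easy to state: every training point must keep satisfying the Kuhn-Tucker conditions of the soft-margin dual as vectors are added or removed. The snippet below only fits an ordinary SVM with scikit-learn and checks those conditions, to make the invariant concrete; the data, kernel and tolerances are arbitrary:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=80, centers=2, random_state=0)
y = 2 * y - 1                                    # labels in {-1, +1}
C = 1.0
clf = SVC(C=C, kernel="rbf", gamma=0.5).fit(X, y)

alpha = np.zeros(len(X))                         # dual coefficients alpha_i >= 0
alpha[clf.support_] = np.abs(clf.dual_coef_[0])
g = y * clf.decision_function(X) - 1             # margin function g_i = y_i f(x_i) - 1

tol = 1e-2                                       # loose tolerance for the numerical solver
print("alpha = 0  =>  g >= 0 :", np.all(g[alpha < tol] >= -tol))
print("0 < a < C  =>  g == 0 :", np.allclose(g[(alpha > tol) & (alpha < C - tol)], 0, atol=tol))
print("alpha = C  =>  g <= 0 :", np.all(g[alpha > C - tol] <= tol))
```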
Semi-Markov conditional random fields for information extraction
- In Advances in Neural Information Processing Systems 17, 2004
"... We describe semi-Markov conditional random fields (semi-CRFs), a conditionally trained version of semi-Markov chains. Intuitively, a semi-CRF on an input sequence x outputs a “segmentation ” of x, in which labels are assigned to segments (i.e., subsequences) of x rather than to individual elements x ..."
Cited by 254 (10 self)
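What "labels assigned to segments" means can be made concrete with the semi-Markov analogue of Viterbi decoding: a dynamic program over the end position and label of the last segment. The segment scoring function and the toy input below are invented; only the decoding structure is the point (training a semi-CRF runs sums instead of maxes over the same segment lattice):

```python
import numpy as np

def semi_markov_decode(score_seg, n, labels, max_len):
    """Viterbi-style DP over segmentations: segments, not tokens, carry the labels."""
    NEG = -np.inf
    V = {(0, None): 0.0}        # best score of a segmentation of x[:i] ending in label l
    back = {}
    for i in range(1, n + 1):
        for l in labels:
            best, arg = NEG, None
            for d in range(1, min(max_len, i) + 1):          # length of the last segment
                for lp in ([None] if i - d == 0 else labels):
                    s = V.get((i - d, lp), NEG) + score_seg(i - d, i, l, lp)
                    if s > best:
                        best, arg = s, (i - d, lp)
            V[(i, l)], back[(i, l)] = best, arg
    end, lab = n, max(labels, key=lambda l: V[(n, l)])        # trace back the best segmentation
    segments = []
    while end > 0:
        start, lp = back[(end, lab)]
        segments.append((start, end, lab))
        end, lab = start, lp
    return segments[::-1]

# toy usage: segment a string into runs of the same character, with made-up scores
x = "aaabbaaaa"
def score_seg(i, j, label, prev_label):
    return sum(1.0 if c == label else -1.0 for c in x[i:j]) + 0.1 * (j - i)

print(semi_markov_decode(score_seg, len(x), labels=["a", "b"], max_len=5))
```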
Gaussian process latent variable models for visualisation of high dimensional data
- In Adv. in Neural Inf. Proc. Sys., 2004
"... We introduce a variational inference framework for training the Gaussian process latent variable model and thus performing Bayesian nonlinear dimensionality reduction. This method allows us to variationally integrate out the input variables of the Gaussian process and compute a lower bound on the ex ..."
Abstract
-
Cited by 230 (13 self)
- Add to MetaCart
on the exact marginal likelihood of the nonlinear latent variable model. The maximization of the variational lower bound provides a Bayesian training procedure that is robust to overfitting and can automatically select the dimensionality of the nonlinear latent space. We demonstrate our method on real world
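For contrast with the variational treatment described here, the original (non-Bayesian) GPLVM simply optimizes the latent coordinates to maximize the GP marginal likelihood; the variational bound is a Bayesian alternative to exactly this objective. A toy maximum-likelihood sketch follows, with random data, fixed kernel hyperparameters and finite-difference gradients, none of which come from the paper:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
N, D, Q = 25, 5, 2                    # toy sizes: N points, D observed dims, Q latent dims
Y = rng.normal(size=(N, D))
Y = Y - Y.mean(axis=0)

def neg_log_likelihood(x_flat, lengthscale=1.0, noise=0.1):
    """-log p(Y | X) for a GP mapping from latent X to each observed dimension."""
    X = x_flat.reshape(N, Q)
    K = np.exp(-0.5 * cdist(X, X, "sqeuclidean") / lengthscale**2) + noise * np.eye(N)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(K, Y)
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return 0.5 * D * logdet + 0.5 * np.sum(Y * alpha)

X0 = rng.normal(scale=0.1, size=N * Q)          # random initialisation (PCA is the usual choice)
res = minimize(neg_log_likelihood, X0, method="L-BFGS-B",
               options={"maxiter": 50})         # finite-difference gradients: toy use only
X_latent = res.x.reshape(N, Q)                  # 2-D coordinates for visualisation
```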
Semi-supervised support vector machines
- In Proc. NIPS, 1998
"... We introduce a semi-supervised support vector machine (S3yM) method. Given a training set of labeled data and a working set of unlabeled data, S3YM constructs a support vector machine us-ing both the training and working sets. We use S3YM to solve the transduction problem using overall risk minimiza ..."
Cited by 223 (6 self)
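The paper's S3VM is posed as an exact optimization over the labels of the working set, which does not fit in a snippet. The loop below is only a simple self-training heuristic in the same spirit (train on the labeled set, label the working set, refit on confident points), using scikit-learn and made-up two-moons data; it is not the paper's formulation:

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
rng = np.random.default_rng(0)
labeled = rng.choice(len(X), size=20, replace=False)        # small labeled training set
working = np.setdiff1d(np.arange(len(X)), labeled)          # unlabeled "working set"

X_l, y_l = X[labeled], y[labeled]
clf = SVC(kernel="rbf", gamma=2.0, probability=True).fit(X_l, y_l)

# Repeatedly label the working set and refit on everything the model is confident about.
for _ in range(5):
    proba = clf.predict_proba(X[working])
    confident = np.max(proba, axis=1) > 0.9
    X_aug = np.vstack([X_l, X[working][confident]])
    y_aug = np.concatenate([y_l, np.argmax(proba[confident], axis=1)])
    clf = SVC(kernel="rbf", gamma=2.0, probability=True).fit(X_aug, y_aug)

print("accuracy on the working set:", clf.score(X[working], y[working]))
```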
Blind Identification and Equalization Based on Second-Order Statistics: A Time Domain Approach
- IEEE Trans. Inform. Theory, 1994
"... A new blind channel identification and equalization method is proposed that exploits the cyclostationarity of oversampled communication signals to achieve identification and equalization of possibly nonminimum phase (multipath) channels without using training signals. Unlike most adaptive blind equa ..."
Cited by 208 (7 self)
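A full second-order-statistics identifier is too long to sketch here, but the setting the abstract describes (an oversampled channel output and no training signal) can be made concrete with the simpler cross-relation idea: the two polyphase subchannels seen by an oversampled receiver satisfy y1*h2 = y2*h1, so the stacked channel lies in the null space of a data matrix. This is a different, deterministic method shown only for illustration; all signals below are synthetic and noiseless:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N = 4, 200
h1, h2 = rng.normal(size=L), rng.normal(size=L)      # the two polyphase subchannels
s = rng.choice([-1.0, 1.0], size=N)                   # unknown symbol stream (no training)
y1, y2 = np.convolve(s, h1), np.convolve(s, h2)       # oversampled receiver's two phases

def conv_matrix(y, L):
    """Toeplitz matrix M such that M @ h == np.convolve(y, h)."""
    M = np.zeros((len(y) + L - 1, L))
    for k in range(L):
        M[k:k + len(y), k] = y
    return M

# Cross-relation: y2*h1 - y1*h2 = 0, so [conv(y2), -conv(y1)] @ [h1; h2] = 0.
A = np.hstack([conv_matrix(y2, L), -conv_matrix(y1, L)])
_, _, Vt = np.linalg.svd(A)
est = Vt[-1]                                          # null vector = stacked channel estimate
idx = np.argmax(np.abs(h1))
scale = h1[idx] / est[idx]                            # fix the unavoidable scale ambiguity
print(np.allclose(np.concatenate([h1, h2]), scale * est, atol=1e-6))   # True: channel recovered
```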