CiteSeerX

Results 1 - 10 of 30,385

Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms

by Michael Collins, 2002
"... We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modific ..."
Abstract - Cited by 660 (13 self)
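
The training loop the abstract describes is compact: Viterbi-decode each training sentence under the current weights, then add the gold sequence's features and subtract the predicted sequence's features. Below is a minimal Python sketch of that structured perceptron; the tag set, feature map, and toy data are illustrative assumptions, and exhaustive enumeration stands in for a real Viterbi dynamic program.

```python
from collections import defaultdict
from itertools import product

TAGS = ["D", "N", "V"]  # hypothetical tag set

def features(words, tags):
    """Emission and transition indicator features (an assumed design)."""
    feats = defaultdict(int)
    prev = "<s>"
    for w, t in zip(words, tags):
        feats[("emit", w, t)] += 1
        feats[("trans", prev, t)] += 1
        prev = t
    return feats

def decode(words, w):
    """Exact argmax over tag sequences by enumeration; a real tagger
    would use Viterbi dynamic programming for the same result."""
    best, best_score = None, float("-inf")
    for tags in product(TAGS, repeat=len(words)):
        score = sum(w[f] * v for f, v in features(words, tags).items())
        if score > best_score:
            best, best_score = list(tags), score
    return best

def train(data, epochs=5):
    w = defaultdict(float)
    for _ in range(epochs):
        for words, gold in data:
            pred = decode(words, w)
            if pred != gold:  # simple additive update
                for f, v in features(words, gold).items():
                    w[f] += v
                for f, v in features(words, pred).items():
                    w[f] -= v
    return w

data = [(["the", "dog", "runs"], ["D", "N", "V"])]  # toy training set
w = train(data)
print(decode(["the", "dog", "runs"], w))  # ['D', 'N', 'V']
```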

Coupled hidden Markov models for complex action recognition

by Matthew Brand, Nuria Oliver, Alex Pentland, 1996
"... We present algorithms for coupling and training hidden Markov models (HMMs) to model interacting processes, and demonstrate their superiority to conventional HMMs in a vision task classifying two-handed actions. HMMs are perhaps the most successful framework in perceptual computing for modeling and ..."
Abstract - Cited by 501 (22 self)

A gentle tutorial on the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models

by Jeff A. Bilmes, 1997
"... We describe the maximum-likelihood parameter estimation problem and how the Expectation-form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) fi ..."
Abstract - Cited by 693 (4 self)
) finding the parameters of a hidden Markov model (HMM) (i.e., the Baum-Welch algorithm) for both discrete and Gaussian mixture observation models. We derive the update equations in fairly explicit detail but we do not prove any convergence properties. We try to emphasize intuition rather than mathematical
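
The first of the tutorial's two applications is short enough to sketch end to end. The Python below runs EM for a two-component 1-D Gaussian mixture; the initialization, iteration count, and synthetic data are illustrative assumptions, and the Baum-Welch case layers forward-backward recursions over the same E-step/M-step pattern.

```python
import math
import random

def gauss(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(xs, iters=50):
    # crude initialization (an assumption, not from the tutorial)
    pi, mu, var = [0.5, 0.5], [min(xs), max(xs)], [1.0, 1.0]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        r = []
        for x in xs:
            p = [pi[k] * gauss(x, mu[k], var[k]) for k in (0, 1)]
            s = sum(p)
            r.append([pk / s for pk in p])
        # M-step: closed-form updates of weights, means, variances
        for k in (0, 1):
            nk = sum(ri[k] for ri in r)
            pi[k] = nk / len(xs)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
            var[k] = max(sum(ri[k] * (x - mu[k]) ** 2
                             for ri, x in zip(r, xs)) / nk, 1e-6)
    return pi, mu, var

random.seed(0)
xs = ([random.gauss(-2, 1) for _ in range(200)]
      + [random.gauss(3, 1) for _ in range(200)])
print(em_gmm(xs))  # expect means near -2 and 3
```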

Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm

by Yongyue Zhang, Michael Brady, Stephen Smith - IEEE Transactions on Medical Imaging, 2001
"... The finite mixture (FM) model is the most commonly used model for statistical segmentation of brain magnetic resonance (MR) images because of its simple mathematical form and the piecewise constant nature of ideal brain MR images. However, being a histogram-based model, the FM has an intrinsic limi ..."
Abstract - Cited by 639 (15 self)
-based methods produce unreliable results. In this paper, we propose a novel hidden Markov random field (HMRF) model, which is a stochastic process generated by an MRF whose state sequence cannot be observed directly but which can be indirectly estimated through observations. Mathematically, it can be shown

Regularization paths for generalized linear models via coordinate descent

by Jerome Friedman, Trevor Hastie, Rob Tibshirani, 2009
"... We develop fast algorithms for estimation of generalized linear models with convex penalties. The models include linear regression, twoclass logistic regression, and multinomial regression problems while the penalties include ℓ1 (the lasso), ℓ2 (ridge regression) and mixtures of the two (the elastic ..."
Abstract - Cited by 724 (15 self)
elastic net). The algorithms use cyclical coordinate descent, computed along a regularization path. The methods can handle large problems and can also deal efficiently with sparse features. In comparative timings we find that the new algorithms are considerably faster than competing methods.
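
The coordinate-wise step behind such solvers is a soft-thresholding update. Below is a minimal NumPy sketch for the lasso; it is illustrative rather than the glmnet implementation: features are assumed standardized, convergence checks are omitted, and the elastic-net variant only adds an ℓ2 term to the denominator of the update.

```python
import numpy as np

def soft_threshold(z, g):
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(X, y, lam, iters=100):
    """Cyclical coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y.astype(float).copy()  # residual for beta = 0
    for _ in range(iters):
        for j in range(p):
            xj = X[:, j]
            # correlation of x_j with the partial residual
            # (x_j's own contribution added back in)
            rho = xj @ resid + beta[j] * (xj @ xj)
            bj = soft_threshold(rho / n, lam) / (xj @ xj / n)
            resid += xj * (beta[j] - bj)  # keep residual in sync
            beta[j] = bj
    return beta

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([2.0, 0.0, -1.0, 0.0, 0.0]) + 0.1 * rng.standard_normal(100)
print(lasso_cd(X, y, lam=0.1).round(2))  # sparse estimate near [2, 0, -1, 0, 0]
```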

The Small-World Phenomenon: An Algorithmic Perspective

by Jon Kleinberg - in Proceedings of the 32nd ACM Symposium on Theory of Computing, 2000
"... Long a matter of folklore, the “small-world phenomenon ” — the principle that we are all linked by short chains of acquaintances — was inaugurated as an area of experimental study in the social sciences through the pioneering work of Stanley Milgram in the 1960’s. This work was among the first to m ..."
Abstract - Cited by 824 (5 self)
to explain the striking algorithmic component of Milgram’s original findings: that individuals using local information are collectively very effective at actually constructing short paths between two points in a social network. Although recently proposed network models are rich in short paths, we prove
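
The decentralized algorithm in question is simple to state: forward the message to whichever known contact is closest to the target in lattice distance. Below is a minimal Python sketch of greedy routing on the paper's grid model, with one long-range contact per node drawn with probability proportional to d^(-r); the grid size, seed, and single long-range link are illustrative assumptions (r = 2 is the exponent the paper shows is uniquely efficient).

```python
import random

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def kleinberg_grid(n, r=2.0, seed=0):
    """One long-range contact per node, chosen with prob ~ d^(-r)."""
    rng = random.Random(seed)
    nodes = [(i, j) for i in range(n) for j in range(n)]
    long_range = {}
    for u in nodes:
        others = [v for v in nodes if v != u]
        weights = [manhattan(u, v) ** (-r) for v in others]
        long_range[u] = rng.choices(others, weights=weights)[0]
    return long_range

def greedy_route(src, dst, n, long_range):
    """Hop count for purely local greedy forwarding."""
    u, hops = src, 0
    while u != dst:
        i, j = u
        candidates = [(x, y) for x, y in
                      [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
                      if 0 <= x < n and 0 <= y < n]
        candidates.append(long_range[u])
        u = min(candidates, key=lambda v: manhattan(v, dst))  # greedy step
        hops += 1
    return hops

n = 20
lr = kleinberg_grid(n)
print(greedy_route((0, 0), (n - 1, n - 1), n, lr))
```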

A fast learning algorithm for deep belief nets

by Geoffrey E. Hinton, Simon Osindero - Neural Computation, 2006
"... We show how to use “complementary priors ” to eliminate the explaining away effects that make inference difficult in densely-connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a ..."
Abstract - Cited by 970 (49 self)
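
Each greedy stage of the paper's procedure trains one layer as a restricted Boltzmann machine on the previous layer's activity. Below is a minimal NumPy sketch of that building block using one-step contrastive divergence (CD-1); the layer sizes, learning rate, epoch count, and toy data are illustrative assumptions, and stacking layers trained this way yields the deep belief net.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(V, n_hidden, lr=0.1, epochs=50):
    """CD-1 training of a binary RBM on data matrix V (n_samples x n_visible)."""
    n, d = V.shape
    W = 0.01 * rng.standard_normal((d, n_hidden))
    b, c = np.zeros(d), np.zeros(n_hidden)
    for _ in range(epochs):
        # positive phase: hidden probabilities and a sample, given data
        ph = sigmoid(V @ W + c)
        h = (rng.random(ph.shape) < ph).astype(float)
        # negative phase: one reconstruction step back through the model
        pv = sigmoid(h @ W.T + b)
        ph2 = sigmoid(pv @ W + c)
        # CD-1 update: data statistics minus reconstruction statistics
        W += lr * (V.T @ ph - pv.T @ ph2) / n
        b += lr * (V - pv).mean(axis=0)
        c += lr * (ph - ph2).mean(axis=0)
    return W, b, c

V = (rng.random((100, 6)) < 0.5).astype(float)  # toy binary data
W, b, c = train_rbm(V, n_hidden=4)
```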

Theoretical improvements in algorithmic efficiency for network flow problems

by Jack Edmonds, Richard M. Karp, 1972
"... This paper presents new algorithms for the maximum flow problem, the Hitchcock transportation problem, and the general minimum-cost flow problem. Upper bounds on ... the numbers of steps in these algorithms are derived, and are shown to compale favorably with upper bounds on the numbers of steps req ..."
Abstract - Cited by 560 (0 self)
required by earlier algorithms. First, the paper states the maximum flow problem, gives the Ford-Fulkerson labeling method for its solution, and points out that an improper choice of flow augmenting paths can lead to severe computational difficulties. Then rules of choice that avoid these difficulties
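
The rule of choice that avoids those difficulties is to always augment along a shortest path, found by breadth-first search; that variant is now known as the Edmonds-Karp algorithm. Below is a minimal Python sketch under an assumed dict-of-capacities graph encoding, illustrative rather than the paper's own presentation.

```python
from collections import deque

def max_flow(cap, s, t):
    """cap maps directed edge (u, v) -> capacity; returns the max flow value."""
    adj = {}
    for u, v in list(cap):
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
        cap.setdefault((v, u), 0)  # residual (reverse) edge
    flow = {e: 0 for e in cap}
    total = 0
    while True:
        # BFS finds a shortest augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] - flow[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return total  # no augmenting path remains
        # walk back from t to find the bottleneck, then augment
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        delta = min(cap[e] - flow[e] for e in path)
        for u, v in path:
            flow[(u, v)] += delta
            flow[(v, u)] -= delta
        total += delta

cap = {("s", "a"): 3, ("s", "b"): 2, ("a", "b"): 1,
       ("a", "t"): 2, ("b", "t"): 3}
print(max_flow(cap, "s", "t"))  # expected: 5
```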

Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms

by Michael L. Fredman, Robert Endre Tarjan, 1987
"... In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in qlogn) amortized tim ..."
Abstract - Cited by 739 (18 self)
time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges
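
One of those bounds is Dijkstra's algorithm in O(m + n log n) time, which hinges on the F-heap's O(1) amortized decrease-key. Python's standard library heap has no decrease-key, so the sketch below uses lazy deletion as a stand-in for it; the graph and weights are illustrative assumptions.

```python
import heapq

def dijkstra(adj, src):
    """adj maps node -> list of (neighbor, weight) pairs."""
    dist = {src: 0}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry, left behind in lieu of decrease-key
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

adj = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(adj, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```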

A comparative analysis of selection schemes used in genetic algorithms

by David E. Goldberg, Kalyanmoy Deb - Foundations of Genetic Algorithms, 1991
"... This paper considers a number of selection schemes commonly used in modern genetic algorithms. Specifically, proportionate reproduction, ranking selection, tournament selection, and Genitor (or «steady state") selection are compared on the basis of solutions to deterministic difference or d ..."
Abstract - Cited by 531 (31 self)
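
Of the schemes compared, tournament selection is the easiest to sketch: sample k individuals uniformly at random and keep the fittest, repeating until the mating pool is full. The Python below is illustrative; the population encoding, fitness function, and tournament size are assumptions.

```python
import random

def tournament_select(pop, fitness, k=2, seed=0):
    """Fill a mating pool of len(pop) winners of k-way tournaments."""
    rng = random.Random(seed)
    return [max(rng.sample(pop, k), key=fitness) for _ in pop]

pop = list(range(10))              # hypothetical individuals
fitness = lambda x: -abs(x - 7)    # toy fitness peaked at 7
print(tournament_select(pop, fitness))
```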