Results 1 - 10 of 57,946
Approximate Riemann Solvers, Parameter Vectors, and Difference Schemes
- J. COMP. PHYS, 1981
"... Several numerical schemes for the solution of hyperbolic conservation laws are based on exploiting the information obtained by considering a sequence of Riemann problems. It is argued that in existing schemes much of this information is degraded, and that only certain features of the exact solution ..."
Cited by 1010 (2 self)
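For orientation, a minimal sketch of a first-order finite-volume update that uses a Roe-type linearised interface flux for the inviscid Burgers equation. The grid size, CFL number, and initial data are illustrative choices, and no entropy fix is included; this is not the full scheme analysed in the paper.

```python
import numpy as np

# First-order finite-volume scheme for the inviscid Burgers equation
# u_t + (u^2/2)_x = 0 on a periodic grid.  The interface flux comes from a
# Roe-type linearisation: wave speed a = (uL + uR)/2 (no entropy fix here).
def roe_flux(uL, uR):
    f = lambda u: 0.5 * u * u
    a = 0.5 * (uL + uR)                              # Roe-averaged speed for Burgers
    return 0.5 * (f(uL) + f(uR)) - 0.5 * np.abs(a) * (uR - uL)

N, L, T = 200, 1.0, 0.3
dx = L / N
xc = (np.arange(N) + 0.5) * dx
u = np.where((xc > 0.25) & (xc < 0.75), 1.0, 0.0)    # square pulse initial data

t = 0.0
while t < T:
    dt = min(0.4 * dx / max(np.abs(u).max(), 1e-12), T - t)   # CFL-limited step
    F = roe_flux(u, np.roll(u, -1))                  # flux at interface i+1/2
    u = u - dt / dx * (F - np.roll(F, 1))            # conservative update
    t += dt

print("total 'mass' after the run:", round(u.sum() * dx, 6))
```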
Where the REALLY Hard Problems Are
- IN J. MYLOPOULOS AND R. REITER (EDS.), PROCEEDINGS OF 12TH INTERNATIONAL JOINT CONFERENCE ON AI (IJCAI-91), VOLUME 1, 1991
"... It is well known that for many NP-complete problems, such as K-Sat, etc., typical cases are easy to solve; so that computationally hard cases must be rare (assuming P != NP). This paper shows that NP-complete problems can be summarized by at least one "order parameter", and that the hard p ..."
Cited by 683 (1 self)
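A small experiment in the spirit of this abstract: generate random 3-SAT instances at several clause-to-variable ratios (the "order parameter") and check satisfiability by brute force at a toy problem size. The instance sizes and ratios below are illustrative, and this is not the search-cost study reported in the paper; the critical ratio near 4.3 is the commonly cited value from the literature.

```python
import numpy as np
from itertools import product

# Random 3-SAT at several clause-to-variable ratios alpha = m/n.  For a toy
# number of variables we can decide satisfiability by brute force and watch
# the fraction of satisfiable instances fall off sharply near alpha ~ 4.3.
def random_3sat(n_vars, n_clauses, rng):
    clauses = []
    for _ in range(n_clauses):
        vs = rng.choice(n_vars, size=3, replace=False)   # three distinct variables
        signs = rng.integers(0, 2, size=3)               # 1 = positive literal, 0 = negated
        clauses.append(list(zip(vs, signs)))
    return clauses

def satisfiable(n_vars, clauses):
    for assign in product((0, 1), repeat=n_vars):        # 2^n candidate assignments
        if all(any(assign[v] == s for v, s in c) for c in clauses):
            return True
    return False

rng = np.random.default_rng(0)
n, trials = 10, 20
for alpha in (2.0, 3.0, 4.0, 4.3, 5.0, 6.0):
    m = int(round(alpha * n))
    frac = np.mean([satisfiable(n, random_3sat(n, m, rng)) for _ in range(trials)])
    print(f"alpha = {alpha:3.1f}   fraction satisfiable ~ {frac:.2f}")
```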
A gentle tutorial on the EM algorithm and its application to parameter estimation for Gaussian mixture and hidden Markov models
1997
"... We describe the maximum-likelihood parameter estimation problem and how the Expectation-form of the EM algorithm as it is often given in the literature. We then develop the EM parameter estimation procedure for two applications: 1) finding the parameters of a mixture of Gaussian densities, and 2) fi ..."
Cited by 693 (4 self)
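A compact sketch of the first application the tutorial develops, EM for a Gaussian mixture, here in one dimension with two components on synthetic data. Component count, initialisation, and iteration count are illustrative.

```python
import numpy as np

# Batch EM for a two-component 1D Gaussian mixture.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.7, 300), rng.normal(1.5, 1.0, 200)])

K = 2
pi = np.full(K, 1.0 / K)                  # mixing weights
mu = rng.choice(x, K)                     # initial means picked from the data
var = np.full(K, x.var())                 # initial variances

def log_gauss(x, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (x - m) ** 2 / v)

for it in range(100):
    # E-step: responsibilities r[i, k] = P(component k | x_i)
    logr = np.log(pi)[None, :] + log_gauss(x[:, None], mu[None, :], var[None, :])
    logr -= logr.max(axis=1, keepdims=True)
    r = np.exp(logr)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances from the weighted data
    nk = r.sum(axis=0)
    pi = nk / x.size
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu[None, :]) ** 2).sum(axis=0) / nk

print("weights", pi.round(3), "means", mu.round(3), "vars", var.round(3))
```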
Choosing multiple parameters for support vector machines
- MACHINE LEARNING, 2002
"... The problem of automatically tuning multiple parameters for pattern recognition Support Vector Machines (SVMs) is considered. This is done by minimizing some estimates of the generalization error of SVMs using a gradient descent algorithm over the set of parameters. Usual methods for choosing para ..."
Cited by 470 (17 self)
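To illustrate tuning SVM hyperparameters by gradient descent on an error estimate, the sketch below descends on (log C, log gamma) of an RBF-kernel SVM using central finite differences of a cross-validation error. The paper instead differentiates smooth analytic estimates such as radius-margin or span bounds, so treat this only as a rough stand-in; scikit-learn is assumed available, and the step sizes and dataset are arbitrary.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Toy data and an error estimate to minimise over the hyperparameters.
X, y = make_classification(n_samples=300, n_features=10, n_informative=5, random_state=0)

def cv_error(log_params):
    C, gamma = np.exp(log_params)
    clf = SVC(C=C, gamma=gamma, kernel="rbf")
    return 1.0 - cross_val_score(clf, X, y, cv=5).mean()

theta = np.log(np.array([1.0, 0.1]))       # start at C = 1, gamma = 0.1
lr, h = 0.5, 0.2
for step in range(15):
    grad = np.zeros_like(theta)
    for j in range(theta.size):            # crude finite-difference gradient
        e = np.zeros_like(theta)
        e[j] = h
        grad[j] = (cv_error(theta + e) - cv_error(theta - e)) / (2 * h)
    theta -= lr * grad                     # gradient descent in log-parameter space
    print(f"step {step:2d}  C = {np.exp(theta[0]):8.3f}  gamma = {np.exp(theta[1]):8.4f}  "
          f"cv error = {cv_error(theta):.3f}")
```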
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
- IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2007
"... Many problems in signal processing and statistical inference involve finding sparse solutions to under-determined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a spa ..."
"... bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range ..."
Cited by 539 (17 self)
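A minimal sketch of the approach named in the abstract: split x into positive and negative parts to obtain a bound-constrained quadratic program, then take projected gradient steps with a Barzilai-Borwein step length. Problem sizes, the regularisation weight tau, and the stopping rule are illustrative, and the safeguards of the published algorithm are omitted.

```python
import numpy as np

# Sparse reconstruction  min_x 0.5*||A x - b||^2 + tau*||x||_1  via the split
# x = u - v with u, v >= 0, which turns the problem into a bound-constrained
# quadratic program; solved by projected gradient with a Barzilai-Borwein step.
rng = np.random.default_rng(0)
n, m, k = 60, 200, 8                       # measurements, unknowns, nonzeros
A = rng.normal(size=(n, m)) / np.sqrt(n)
x_true = np.zeros(m)
x_true[rng.choice(m, k, replace=False)] = rng.normal(0, 2, k)
b = A @ x_true + 0.01 * rng.normal(size=n)
tau = 0.05

def grad(z):
    u, v = z[:m], z[m:]
    g = A.T @ (A @ (u - v) - b)            # gradient of the quadratic part w.r.t. x
    return np.concatenate([g + tau, -g + tau])

z = np.zeros(2 * m)                        # z = [u; v]
g = grad(z)
alpha = 1.0
for it in range(300):
    z_new = np.maximum(z - alpha * g, 0.0)          # projected gradient step onto z >= 0
    g_new = grad(z_new)
    s, yd = z_new - z, g_new - g
    if s @ yd > 1e-12:                              # BB1 step length, crudely clamped
        alpha = float(np.clip((s @ s) / (s @ yd), 1e-3, 1e3))
    z, g = z_new, g_new

x_hat = z[:m] - z[m:]
print("relative recovery error:",
      round(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true), 4))
```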
Learning low-level vision
- International Journal of Computer Vision, 2000
"... We show a learning-based method for low-level vision problems. We set-up a Markov network of patches of the image and the underlying scene. A factorization approximation allows us to easily learn the parameters of the Markov network from synthetic examples of image/scene pairs, and to e ciently prop ..."
Cited by 579 (30 self)
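For flavour, a toy version of inference in such a Markov network: a one-dimensional chain of hidden scene values with discrete candidate states, noisy observations, and max-product message passing to obtain a MAP estimate. The real method works on networks of image/scene patches with learned compatibilities, so the signal, state set, and potentials here are illustrative stand-ins.

```python
import numpy as np

# Max-product belief propagation on a chain: hidden scene values take one of a
# few discrete levels, observations are the levels plus Gaussian noise, and a
# smoothness potential couples neighbouring nodes.
levels = np.linspace(0.0, 1.0, 11)                        # candidate scene values
rng = np.random.default_rng(1)
true = np.clip(np.sin(np.linspace(0, 3, 40)) * 0.5 + 0.5, 0, 1)
obs = true + rng.normal(0, 0.15, true.size)               # noisy observations

sigma_obs, sigma_sm = 0.15, 0.1
unary = -0.5 * ((obs[:, None] - levels[None, :]) / sigma_obs) ** 2   # (N, K) log evidence
pair = -0.5 * ((levels[:, None] - levels[None, :]) / sigma_sm) ** 2  # (K, K) log smoothness

N, K = unary.shape
fwd = np.zeros((N, K))
bwd = np.zeros((N, K))
for i in range(1, N):          # forward max-product messages
    fwd[i] = np.max(fwd[i - 1][:, None] + unary[i - 1][:, None] + pair, axis=0)
for i in range(N - 2, -1, -1): # backward max-product messages
    bwd[i] = np.max(bwd[i + 1][None, :] + unary[i + 1][None, :] + pair, axis=1)

belief = unary + fwd + bwd                                # max-marginals per node
map_scene = levels[np.argmax(belief, axis=1)]
print("mean abs error, noisy vs MAP:",
      np.abs(obs - true).mean().round(3), np.abs(map_scene - true).mean().round(3))
```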
Hierarchical mixtures of experts and the EM algorithm
1993
"... We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM’s). Learning is treated as a max-imum likelihood ..."
Abstract
-
Cited by 885 (21 self)
- Add to MetaCart
problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parame-ters of the architecture. We also develop an on-line learning algorithm in which the pa-rameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
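A simplified sketch related to this entry: EM for a flat (one-level) mixture of two linear-regression experts with a softmax gating network, rather than the full hierarchical architecture. The gating M-step is approximated by a few gradient steps instead of IRLS, and the data and learning rates are illustrative.

```python
import numpy as np

# EM for a flat mixture of two linear experts with an input-dependent gate.
rng = np.random.default_rng(0)
n = 400
x = rng.uniform(-3, 3, n)
y = np.where(x < 0, 1.5 * x + 1, -2.0 * x + 2) + rng.normal(0, 0.3, n)
X = np.column_stack([x, np.ones(n)])          # inputs with a bias term

K = 2
W = rng.normal(0, 1.0, (K, 2))                # expert regression weights
V = rng.normal(0, 0.1, (K, 2))                # gating weights
sig2 = np.full(K, 1.0)

def softmax(a):
    a = a - a.max(axis=1, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=1, keepdims=True)

for it in range(100):
    # E-step: responsibility of expert k for point i
    G = softmax(X @ V.T)                                       # gate probabilities (n, K)
    mean = X @ W.T                                             # expert predictions (n, K)
    loglik = -0.5 * (np.log(2 * np.pi * sig2) + (y[:, None] - mean) ** 2 / sig2)
    H = G * np.exp(loglik - loglik.max(axis=1, keepdims=True))
    H /= H.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per expert, gradient steps for the gate
    for k in range(K):
        Wk = np.diag(H[:, k])
        W[k] = np.linalg.solve(X.T @ Wk @ X, X.T @ (H[:, k] * y))
        resid = y - X @ W[k]
        sig2[k] = (H[:, k] * resid ** 2).sum() / H[:, k].sum()
    for _ in range(10):                                        # partial M-step for the gate
        G = softmax(X @ V.T)
        V += 0.1 / n * (H - G).T @ X

print("expert slopes/intercepts:\n", W.round(2))
```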
A View Of The Em Algorithm That Justifies Incremental, Sparse, And Other Variants
- Learning in Graphical Models, 1998
"... . The EM algorithm performs maximum likelihood estimation for data in which some variables are unobserved. We present a function that resembles negative free energy and show that the M step maximizes this function with respect to the model parameters and the E step maximizes it with respect to the d ..."
"... estimation problem. A variant of the algorithm that exploits sparse conditional distributions is also described, and a wide range of other variant algorithms are also seen to be possible. ... The Expectation-Maximization (EM) algorithm finds maximum likelihood parameter estimates in problems ..."
Cited by 993 (18 self)
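A small sketch of the incremental variant this view justifies: for a two-component Gaussian mixture, the E-step is applied to one data point at a time and running sufficient statistics are updated in place before the parameters are refreshed. The toy data, initialisation, and number of sweeps are illustrative.

```python
import numpy as np

# Incremental EM for a two-component 1D Gaussian mixture: swap one point's old
# responsibility out of the sufficient statistics, swap its new one in, then
# refresh the parameters from the running statistics.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 0.7, 300), rng.normal(1.5, 1.0, 200)])
n, K = x.size, 2

def log_gauss(xi, m, v):
    return -0.5 * (np.log(2 * np.pi * v) + (xi - m) ** 2 / v)

def respons(xi, pi, mu, var):
    lr = np.log(pi) + log_gauss(xi, mu, var)
    lr -= lr.max()
    r = np.exp(lr)
    return r / r.sum()

pi, mu, var = np.full(K, 0.5), np.array([-1.0, 1.0]), np.full(K, x.var())
R = np.array([respons(xi, pi, mu, var) for xi in x])   # initial responsibilities
S0, S1, S2 = R.sum(0), R.T @ x, R.T @ (x * x)          # counts, sums, sums of squares

for sweep in range(5):
    for i in rng.permutation(n):
        r_new = respons(x[i], pi, mu, var)             # E-step for a single point
        S0 += r_new - R[i]
        S1 += (r_new - R[i]) * x[i]
        S2 += (r_new - R[i]) * x[i] ** 2
        R[i] = r_new
        pi = S0 / n                                    # M-step from running statistics
        mu = S1 / S0
        var = S2 / S0 - mu ** 2

print("weights", pi.round(3), "means", mu.round(3), "vars", var.round(3))
```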
Reversible jump Markov chain Monte Carlo computation and Bayesian model determination
- Biometrika, 1995
"... Markov chain Monte Carlo methods for Bayesian computation have until recently been restricted to problems where the joint distribution of all variables has a density with respect to some xed standard underlying measure. They have therefore not been available for application to Bayesian model determi ..."
Cited by 1345 (23 self)
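A deliberately tiny reversible jump example, assuming Gaussian data with known unit variance and just two candidate models (a fixed zero mean versus an unknown mean with a Gaussian prior). The dimension-changing birth/death moves use an identity mapping, so the Jacobian is 1; all priors, proposal scales, and the data are illustrative.

```python
import numpy as np

# Reversible jump MCMC over two models for data x_1..x_n with unit variance:
#   M1: x_i ~ N(0, 1)            (no free parameters)
#   M2: x_i ~ N(theta, 1), theta ~ N(0, s0^2)
# Birth draws theta from a N(0, sp^2) proposal; death discards it.  Equal model
# priors and symmetric jump-proposal probabilities cancel in the acceptance ratio.
rng = np.random.default_rng(0)
x = rng.normal(0.4, 1.0, 50)                 # data with a small true mean
s0, sp, step = 1.0, 0.5, 0.3

def lognorm(z, m, s):
    return -0.5 * np.log(2 * np.pi * s * s) - 0.5 * ((z - m) / s) ** 2

loglik = lambda mu: lognorm(x, mu, 1.0).sum()

k, theta = 1, None                           # start in the no-parameter model
in_M2 = []
for it in range(20000):
    if rng.random() < 0.5:                   # propose a trans-dimensional jump
        if k == 1:                           # birth: M1 -> M2
            u = rng.normal(0, sp)
            logA = (loglik(u) + lognorm(u, 0, s0)) - loglik(0.0) - lognorm(u, 0, sp)
            if np.log(rng.random()) < logA:
                k, theta = 2, u
        else:                                # death: M2 -> M1
            logA = (loglik(0.0) + lognorm(theta, 0, sp)) \
                   - (loglik(theta) + lognorm(theta, 0, s0))
            if np.log(rng.random()) < logA:
                k, theta = 1, None
    elif k == 2:                             # ordinary within-model update of theta
        prop = theta + rng.normal(0, step)
        logA = (loglik(prop) + lognorm(prop, 0, s0)) \
               - (loglik(theta) + lognorm(theta, 0, s0))
        if np.log(rng.random()) < logA:
            theta = prop
    in_M2.append(k == 2)

print("estimated P(M2 | data):", round(float(np.mean(in_M2)), 3))
```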
A training algorithm for optimal margin classifiers
- PROCEEDINGS OF THE 5TH ANNUAL ACM WORKSHOP ON COMPUTATIONAL LEARNING THEORY, 1992
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classifiaction functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjust ..."
Cited by 1865 (43 self)
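A sketch of the margin-maximisation problem in its primal form for a linear classifier on separable toy data, written as a small quadratic program (cvxpy is assumed available as the QP solver; the toy data and class geometry are illustrative). The paper's algorithm works with the corresponding dual and extends to Perceptron, polynomial, and RBF decision functions.

```python
import numpy as np
import cvxpy as cp   # assumed available; any QP solver would do

# Hard-margin linear classifier on separable toy data: minimise ||w||^2 subject
# to y_i (w . x_i + b) >= 1, i.e. place every point outside the margin.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([-2.5, -2.5], 0.5, (25, 2)),
               rng.normal([2.5, 2.5], 0.5, (25, 2))])
y = np.hstack([-np.ones(25), np.ones(25)])

w = cp.Variable(2)
b = cp.Variable()
constraints = [cp.multiply(y, X @ w + b) >= 1]
problem = cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints)
problem.solve()

margin = 2.0 / np.linalg.norm(w.value)        # geometric margin of the separator
print("weights", np.round(w.value, 3),
      "bias", round(float(b.value), 3),
      "margin", round(margin, 3))
```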