Results 1-10 of 525
The PATH Solver: A Non-Monotone Stabilization Scheme for Mixed Complementarity Problems
 Optimization Methods and Software
, 1995
Abstract

Cited by 179 (35 self)
The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a path-generation procedure which is used to construct a piecewise-linear path from the current point to the Newton point; a step length acceptance criterion and a non-monotone path-search are then used to choose the next iterate. The algorithm is shown to be globally convergent under assumptions which generalize those required to obtain similar results in the smooth case. Several implementation issues are discussed, and extensive computational results obtained from problems commonly found in the literature are given.
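The non-monotone acceptance idea can be sketched in a few lines: a trial step is accepted if it improves on the worst of the last few merit values, not just the most recent one, which lets the iteration take aggressive Newton steps without strict monotone descent. The sketch below applies this criterion to a plain one-dimensional Newton iteration; the test function, the memory length `m`, and the constant `sigma` are illustrative, not the Path solver's actual merit function or path-search.

```python
# Hedged sketch of a non-monotone acceptance criterion wrapped around a
# 1-D Newton iteration. All names and constants here are illustrative.

def nonmonotone_newton(f, grad, hess, x0, m=5, sigma=1e-4, tol=1e-10, max_iter=100):
    x = x0
    history = [f(x)]                      # recent merit values
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:
            break
        d = -g / hess(x)                  # Newton direction (1-D)
        t = 1.0
        ref = max(history[-m:])           # non-monotone reference value
        while f(x + t * d) > ref + sigma * t * g * d:
            t *= 0.5                      # backtrack along the step
        x = x + t * d
        history.append(f(x))
    return x

# Usage: find a stationary point of f(x) = x**4 - 3*x**2 + x from x0 = 2.
root = nonmonotone_newton(lambda x: x**4 - 3*x**2 + x,
                          lambda x: 4*x**3 - 6*x + 1,
                          lambda x: 12*x**2 - 6,
                          x0=2.0)
```

Because the reference value is a maximum over a window of past merits, an occasional uphill step is tolerated while overall progress is still enforced.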
Locally Weighted Learning for Control
, 1996
Abstract

Cited by 172 (17 self)
Lazy learning methods provide useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of complex systems. This paper surveys ways in which locally weighted learning, a type of lazy learning, has been applied by us to control tasks. We explain various forms that control tasks can take, and how this affects the choice of learning paradigm. The discussion section explores the interesting impact that explicitly remembering all previous experiences has on the problem of learning to control.
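The core primitive underlying this line of work, locally weighted (linear) regression, fits a separate weighted least-squares model at each query point, with weights from a kernel centered on the query. A minimal sketch, with an illustrative Gaussian kernel and bandwidth:

```python
import numpy as np

# Minimal sketch of locally weighted linear regression: each query gets
# its own weighted least-squares fit. Kernel and bandwidth are illustrative.

def lwr_predict(X, y, x_query, bandwidth=0.25):
    # Gaussian kernel weights centered on the query point
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    # Augment with a bias column and solve the weighted normal equations
    A = np.hstack([X, np.ones((len(X), 1))])
    W = np.diag(w)
    beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return np.append(x_query, 1.0) @ beta

# Usage: samples of y = sin(x) are fit locally near the query x = 1.0.
X = np.linspace(0, 3, 50).reshape(-1, 1)
y = np.sin(X).ravel()
pred = lwr_predict(X, y, np.array([1.0]))
```

Nothing is precomputed here: all training points are simply remembered, and the fit happens at query time, which is what makes the method "lazy".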
Direct search methods: Once scorned, now respectable
 Numerical analysis 1995, Vol. 344, Pitman Research Notes
, 1996
Recursive Markov chains, stochastic grammars, and monotone systems of nonlinear equations
 IN STACS
, 2005
Abstract

Cited by 72 (11 self)
We define Recursive Markov Chains (RMCs), a class of finitely presented denumerable Markov chains, and we study algorithms for their analysis. Informally, an RMC consists of a collection of finite-state Markov chains with the ability to invoke each other in a potentially recursive manner. RMCs offer a natural abstract model for probabilistic programs with procedures. They generalize, in a precise sense, a number of well-studied stochastic models, including Stochastic Context-Free Grammars (SCFGs) and Multi-Type Branching Processes (MTBPs). We focus on algorithms for reachability and termination analysis for RMCs: what is the probability that an RMC started from a given state reaches another target state, or that it terminates? These probabilities are in general irrational, and they arise as (least) fixed point solutions to certain (monotone) systems of nonlinear equations associated with RMCs. We address both the qualitative problem of determining whether the probabilities are 0, 1, or in between, and ...
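The least-fixed-point characterization can be illustrated on a one-equation system. Consider a hypothetical branching process that dies with probability 1/4 or spawns two independent copies with probability 3/4; its termination probability satisfies x = 1/4 + (3/4) x^2, whose roots are 1/3 and 1, and the least nonnegative solution 1/3 is the right one. Iterating the map from 0 (Kleene iteration) converges to it from below:

```python
# Hedged sketch: termination probabilities arise as the LEAST fixed point
# of a monotone map x <- P(x); iterating from 0 converges to it from below.
# The example system x = 1/4 + 3/4 * x**2 is made up for illustration.

def least_fixed_point(P, x0=0.0, iters=200):
    x = x0
    for _ in range(iters):
        x = P(x)
    return x

p = least_fixed_point(lambda x: 0.25 + 0.75 * x * x)
# p converges to 1/3, not to the other fixed point 1.
```

Starting from 0 rather than 1 is essential: the map is monotone, so iteration from below can only reach the least solution, which is the probability the paper's equations characterize.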
Successive Overrelaxation for Support Vector Machines
 IEEE Transactions on Neural Networks
, 1998
Abstract

Cited by 71 (15 self)
Successive overrelaxation (SOR) for symmetric linear complementarity problems and quadratic programs [11, 12, 9] is used to train a support vector machine (SVM) [20, 3] for discriminating between the elements of two massive datasets, each with millions of points. Because SOR handles one point at a time, similar to Platt's sequential minimal optimization (SMO) algorithm [18], which handles two constraints at a time, it can process very large datasets that need not reside in memory. The algorithm converges linearly to a solution. Encouraging numerical results are presented on datasets with up to 10 million points. Such massive discrimination problems cannot be processed by conventional linear or quadratic programming methods, and to our knowledge have not been solved by other methods.
1 Introduction
Successive overrelaxation, originally developed for the solution of large systems of linear equations [16, 15], has been successfully applied to mathematical programming problems [4, 11, 12, 1...
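The classical SOR sweep the paper adapts can be sketched on an ordinary symmetric positive-definite linear system: each unknown is updated in turn by overshooting its Gauss-Seidel update by a relaxation factor omega in (0, 2). The matrix, right-hand side, and omega below are made up for illustration; they are not the paper's SVM formulation.

```python
import numpy as np

# Minimal sketch of successive overrelaxation (SOR) for A x = b with A
# symmetric positive definite. One unknown is touched at a time, which is
# the property that lets the SVM variant stream through huge datasets.

def sor(A, b, omega=1.2, iters=100):
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        for i in range(n):                       # one unknown at a time
            residual = b[i] - A[i] @ x
            x[i] += omega * residual / A[i, i]   # relaxed Gauss-Seidel step
    return x

# Usage: a small illustrative system whose exact solution is (1, 1, 1).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([5.0, 6.0, 5.0])
x = sor(A, b)
```

With omega = 1 this is plain Gauss-Seidel; values above 1 overrelax, which is where the method's speedup classically comes from.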
Approximate Solutions to Markov Decision Processes
, 1999
Abstract

Cited by 69 (9 self)
One of the basic problems of machine learning is deciding how to act in an uncertain world. For example, if I want my robot to bring me a cup of coffee, it must be able to compute the correct sequence of electrical impulses to send to its motors to navigate from the coffee pot to my office. In fact, since the results of its actions are not completely predictable, it is not enough just to compute the correct sequence; instead the robot must sense and correct for deviations from its intended path. In order for any machine learner to act reasonably in an uncertain environment, it must solve problems like the above one quickly and reliably. Unfortunately, the world is often so complicated that it is difficult or impossible to find the optimal sequence of actions to achieve a given goal. So, in order to scale our learners up to real-world problems, we usually must settle for approximate solutions. One representation for a learner's environment and goals is a Markov decision process or MDP. ...
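The exact method that approximate MDP solvers start from is value iteration: repeatedly apply the Bellman backup until the value function stops changing, then read off the greedy policy. The tiny two-state, two-action MDP below is made up for illustration; `P[a][s][s']` holds transition probabilities, `R[s][a]` expected rewards, and `gamma` is the discount factor.

```python
import numpy as np

# Hedged sketch of value iteration on a made-up 2-state, 2-action MDP.

def value_iteration(P, R, gamma=0.9, tol=1e-10):
    n_states, n_actions = R.shape
    V = np.zeros(n_states)
    while True:
        # Bellman backup: best one-step lookahead value for every state
        Q = R + gamma * np.einsum('ast,t->sa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)       # values and greedy policy
        V = V_new

P = np.array([[[0.8, 0.2], [0.1, 0.9]],          # transitions for action 0
              [[0.5, 0.5], [0.3, 0.7]]])         # transitions for action 1
R = np.array([[1.0, 0.0],                        # rewards R[state, action]
              [0.5, 2.0]])
V, policy = value_iteration(P, R)
```

The backup is a contraction with factor gamma, so the loop is guaranteed to terminate; the scaling problem the thesis addresses is that the state space, unlike this toy, is usually far too large to back up exhaustively.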
Solving Large-Scale Linear Programs by Interior-Point Methods Under the MATLAB Environment
 Optimization Methods and Software
, 1996
Abstract

Cited by 63 (1 self)
In this paper, we describe our implementation of a primal-dual infeasible-interior-point algorithm for large-scale linear programming under the MATLAB environment. The resulting software is called LIPSOL: Linear-programming Interior-Point SOLvers. LIPSOL is designed to take advantage of MATLAB's sparse-matrix functions and external interface facilities, and of existing Fortran sparse Cholesky codes. Under the MATLAB environment, LIPSOL inherits a high degree of simplicity and versatility in comparison to its counterparts in Fortran or C. More importantly, our extensive computational results demonstrate that LIPSOL also attains an impressive performance comparable with that of efficient Fortran or C codes in solving large-scale problems. In addition, we discuss in detail a technique for overcoming numerical instability in Cholesky factorization at the end-stage of iterations in interior-point algorithms.
Keywords: linear programming, primal-dual infeasible-interior-p...
On Asynchronous Iterations
, 2000
Abstract

Cited by 61 (11 self)
Asynchronous iterations arise naturally on parallel computers when one wants to minimize idle times. This paper reviews certain models of asynchronous iterations, using a common theoretical framework. The corresponding convergence theory and various domains of applications are presented. These include nonsingular linear systems, nonlinear systems, and initial value problems.
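The key phenomenon can be demonstrated in a few lines: for a fixed-point map that is a contraction (here the component-wise update of a strictly diagonally dominant linear system), convergence survives even when components are updated one at a time in an arbitrary order, each using whatever values happen to be current, which is what processors do when they do not wait for each other. The system and update schedule below are made up for illustration.

```python
import numpy as np
import random

# Hedged sketch of the asynchronous-iteration idea: update the unknowns
# of a strictly diagonally dominant system one at a time, in a random
# order, always using the most recently available values. The matrix,
# right-hand side, and schedule are illustrative.

A = np.array([[5.0, 1.0, 1.0],
              [1.0, 5.0, 1.0],
              [1.0, 1.0, 5.0]])
b = np.array([7.0, 7.0, 7.0])    # exact solution: x = (1, 1, 1)

random.seed(0)
x = np.zeros(3)
for _ in range(300):
    i = random.randrange(3)                        # arbitrary update order
    x[i] = (b[i] - A[i] @ x + A[i, i] * x[i]) / A[i, i]
```

A genuinely parallel run would interleave these updates across processors; the sequential random schedule above stands in for one admissible interleaving, and diagonal dominance is what guarantees convergence regardless of which interleaving occurs.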
Tree Consistency and Bounds on the Performance of the Max-Product Algorithm and Its Generalizations
, 2002
Abstract

Cited by 59 (5 self)
Finding the maximum a posteriori (MAP) assignment of a discrete-state distribution specified by a graphical model requires solving an integer program. The max-product algorithm, also known as the max-plus or min-sum algorithm, is an iterative method for (approximately) solving such a problem on graphs with cycles.
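On a cycle-free graph the max-product algorithm is exact; on a chain it reduces to the Viterbi algorithm, where each message records, for every state of the next node, the best achievable score so far. The sketch below runs it in the log domain (the max-sum form) on a made-up three-node chain with illustrative unary and pairwise potentials.

```python
import numpy as np

# Hedged sketch of max-product message passing on a chain (exact here;
# on graphs with cycles the same updates are only approximate).

def max_product_chain(unary, pairwise):
    T, K = unary.shape
    msg = np.zeros((T, K))                 # log-domain messages
    back = np.zeros((T, K), dtype=int)     # argmax bookkeeping
    msg[0] = np.log(unary[0])
    for t in range(1, T):
        # scores[i, j]: best log-score of reaching state j at node t from i
        scores = msg[t - 1][:, None] + np.log(pairwise) + np.log(unary[t])[None, :]
        back[t] = scores.argmax(axis=0)
        msg[t] = scores.max(axis=0)
    # backtrack the MAP assignment from the final node
    states = [int(msg[-1].argmax())]
    for t in range(T - 1, 0, -1):
        states.append(int(back[t, states[-1]]))
    return states[::-1]

unary = np.array([[0.9, 0.1], [0.2, 0.8], [0.6, 0.4]])   # node potentials
pairwise = np.array([[0.7, 0.3], [0.3, 0.7]])            # favors staying put
path = max_product_chain(unary, pairwise)
# path == [0, 1, 1]: the strong middle evidence pulls the chain to state 1.
```

The paper's subject is what happens when the same message updates are iterated on graphs with cycles, where the fixed point is no longer guaranteed to be the MAP assignment.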
Constrained iterative speech enhancement with application to automatic speech recognition
 in Proc. 1988 IEEE ICASSP
Abstract

Cited by 56 (24 self)
In this paper, an improved form of iterative speech enhancement for single-channel inputs is formulated. The basis of the procedure is sequential maximum a posteriori estimation of the speech waveform and its all-pole parameters as originally formulated by Lim and Oppenheim, followed by imposition of constraints upon the sequence of speech spectra. The new approaches impose intra-frame and inter-frame constraints on the input speech signal to ensure more speech-like formant trajectories, reduce frame-to-frame pole jitter, and effectively introduce a relaxation parameter to the iterative scheme. Recently discovered properties of the line spectral pair representation of speech allow for an efficient and direct procedure for applying many of the constraint requirements. Substantial improvement over the unconstrained method has been observed in a variety of domains. First, informal listener quality evaluation tests and objective speech quality measures demonstrate the technique's effectiveness for additive white Gaussian noise; a consistent terminating point for the iterative technique is also shown. Second, the algorithms have been generalized and successfully tested for noise which is non-white and slowly varying in characteristics; the current systems yield substantially improved speech quality and LPC parameter estimation in this context with only a minor increase in computational requirements. Third, the algorithms were evaluated with respect to improving automatic recognition of speech in the presence of additive noise, and shown to outperform other enhancement methods in this application.