Results 1–10 of 34
Efficient Memory-based Learning for Robot Control
, 1990
Abstract

Cited by 108 (2 self)
This dissertation is about the application of machine learning to robot control. A system which has no initial model of the robot/world dynamics should be able to construct such a model using data received through its sensors, an approach which is formalized here as the SAB (State-Action-Behaviour) control cycle. A method of learning is presented in which all the experiences in the lifetime of the robot are explicitly remembered. The experiences are stored in a manner which permits fast recall of the closest previous experience to any new situation, thus permitting very quick predictions of the effects of proposed actions and, given a goal behaviour, permitting fast generation of a candidate action. The learning can take place in high-dimensional non-linear control spaces with real-valued ranges of variables. Furthermore, the method avoids a number of shortcomings of earlier learning methods in which the controller can become trapped in inadequate performance which does not improve. Also considered is how the system is made resistant to noisy inputs and how it adapts to environmental changes. A well-founded mechanism for choosing actions is introduced which solves the experiment/perform dilemma for this domain with adequate computational efficiency, and with fast convergence to the goal behaviour. The dissertation explains in detail how the SAB control cycle can be integrated into both low- and high-complexity tasks. The methods and algorithms are evaluated with numerous experiments using both real and simulated robot domains. The final experiment also illustrates how a compound learning task can be structured into a hierarchy of simple learning tasks.
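The core mechanism described above — remember every experience, then predict by nearest-neighbour recall — can be sketched as follows. This is an illustrative reconstruction, not the dissertation's code: the class name `SABMemory`, the linear-scan lookup, and the distance combination are my own simplifications (the thesis stresses a data structure for *fast* recall, not a linear scan).

```python
import math

class SABMemory:
    """Sketch of memory-based learning: every experience
    (state, action, behaviour) is remembered, and the effect of a
    proposed action is predicted from the closest stored experience."""

    def __init__(self):
        self.experiences = []  # list of (state, action, behaviour) tuples

    def remember(self, state, action, behaviour):
        self.experiences.append((state, action, behaviour))

    def predict(self, state, action):
        # Return the behaviour of the nearest stored (state, action) pair;
        # summing the two distances is an arbitrary illustrative choice.
        def dist(exp):
            s, a, _ = exp
            return math.dist(state, s) + math.dist(action, a)
        return min(self.experiences, key=dist)[2]

mem = SABMemory()
mem.remember((0.0, 0.0), (1.0,), (0.1, 0.0))
mem.remember((1.0, 0.0), (0.0,), (0.0, 0.0))
print(mem.predict((0.1, 0.0), (0.9,)))  # nearest stored experience is the first one
```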
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
Abstract

Cited by 74 (11 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
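As a rough illustration of the problem class MINOS targets — a nonlinear objective restricted to a linear constraint set — here is a toy projected-gradient iteration. It is not the paper's revised-simplex/quasi-Newton machinery; the problem, step size, and iteration count are mine.

```python
def solve():
    """Toy linearly constrained nonlinear program (illustrative, not MINOS):
    minimize f(x, y) = (x-1)^2 + (y-2)^2  subject to  x + y = 1.
    The gradient is projected onto the constraint's null space, so every
    iterate stays feasible; the true solution is (x, y) = (0, 1)."""
    x, y = 0.5, 0.5                                # feasible start: x + y = 1
    for _ in range(1000):
        gx, gy = 2.0 * (x - 1.0), 2.0 * (y - 2.0) # gradient of f
        d = (gx - gy) / 2.0                       # component along null-space direction (1, -1)
        x -= 0.1 * d                              # this step leaves x + y
        y += 0.1 * d                              # unchanged, so it stays feasible
    return x, y

print(solve())
```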
Modifying a Sparse Cholesky Factorization
, 1997
Abstract

Cited by 42 (14 self)
Given a sparse symmetric positive definite matrix AA^T and an associated sparse Cholesky factorization LL^T, we develop sparse techniques for obtaining the new factorization associated with either adding a column to A or deleting a column from A. Our techniques are based on an analysis and manipulation of the underlying graph structure and on ideas of Gill, Golub, Murray, and Saunders for modifying a dense Cholesky factorization. Our algorithm involves a new sparse matrix concept, the multiplicity of an entry in L. The multiplicity is essentially a measure of the number of times an entry is modified during symbolic factorization. We show that our methods extend to the general case where an arbitrary sparse symmetric positive definite matrix is modified. Our methods are optimal in the sense that they take time proportional to the number of nonzero entries in L that change. This work was supported by National Science Foundation grants DMS-9404431 and DMS-9504974.
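The dense rank-1 update recurrence of Gill, Golub, Murray, and Saunders that the paper builds on can be sketched directly; the paper's actual contribution — exploiting sparsity so that only the changed entries of L are touched — is not reproduced here.

```python
import math

def chol_update(L, x):
    """In-place rank-1 update: given lower-triangular L with A = L L^T,
    overwrite L so that L L^T = A + x x^T (dense recurrence in the style of
    Gill, Golub, Murray, and Saunders)."""
    n = len(L)
    x = list(x)                       # work on a copy of the update vector
    for k in range(n):
        r = math.hypot(L[k][k], x[k])
        c, s = r / L[k][k], x[k] / L[k][k]
        L[k][k] = r
        for i in range(k + 1, n):
            L[i][k] = (L[i][k] + s * x[i]) / c
            x[i] = c * x[i] - s * L[i][k]

L = [[2.0, 0.0], [1.0, 1.0]]          # A = L L^T = [[4, 2], [2, 2]]
chol_update(L, [1.0, 1.0])            # now  L L^T = [[5, 3], [3, 3]]
```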
Mean-shift analysis using quasi-Newton methods
Proceedings of the International Conference on Image Processing 3 (2003), 447–450
, 2003
Abstract

Cited by 19 (1 self)
Mean-shift analysis is a general nonparametric clustering technique based on density estimation for the analysis of complex feature spaces. The algorithm consists of a simple iterative procedure that shifts each of the feature points to the nearest stationary point along the gradient directions of the estimated density function. It has been successfully applied to many applications such as segmentation and tracking. However, despite its promising performance, there are applications for which the algorithm converges too slowly to be practical. We propose and implement an improved version of the mean-shift algorithm using quasi-Newton methods to achieve higher convergence rates. Another benefit of our algorithm is its ability to achieve clustering even for very complex and irregular feature-space topography. Experimental results demonstrate the efficiency and effectiveness of our algorithm.
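The baseline fixed-point iteration the paper accelerates can be sketched in one dimension; the kernel, bandwidth, and data below are illustrative, and the quasi-Newton acceleration itself is not shown.

```python
import math

def mean_shift(points, x, bandwidth=1.0, iters=50):
    """Plain mean-shift iteration (1-D, Gaussian kernel): repeatedly move x
    to the kernel-weighted mean of the data, i.e. uphill on the estimated
    density, until it settles at a mode."""
    for _ in range(iters):
        w = [math.exp(-((p - x) / bandwidth) ** 2 / 2) for p in points]
        x = sum(wi * p for wi, p in zip(w, points)) / sum(w)
    return x

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]
print(round(mean_shift(data, 0.5), 2))   # converges to the mode near 1.0
```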
Design and Performance of Parallel and Distributed Approximation Algorithms for Max-Cut
, 1995
Abstract

Cited by 16 (0 self)
We develop and experiment with a new parallel algorithm to approximate the maximum weight cut in a weighted undirected graph. Our implementation starts with the recent (serial) algorithm of Goemans and Williamson for this problem. We consider several different versions of this algorithm, varying the interior-point part of the algorithm in order to optimize the parallel efficiency of our method. Our work aims for an efficient, practical formulation of the algorithm with close-to-optimal parallelization. We analyze our parallel algorithm in the LogP model and predict linear speedup for a wide range of the parameters. We have implemented the algorithm using the message passing interface (MPI) and run it on several parallel machines. In particular, we present performance measurements on the IBM SP2, the Connection Machine CM-5, and a cluster of workstations. We observe that the measured speedups are predicted well by our analysis in the LogP model. Finally, we test our implementation on s...
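The cheap final stage of the Goemans-Williamson algorithm — random-hyperplane rounding of the SDP solution — is easy to sketch. Here the unit vectors are supplied directly as a toy input (120° apart, as the SDP relaxation produces for a triangle) rather than computed by the interior-point solve the paper parallelizes.

```python
import random

def gw_round(vectors, weights, trials=200):
    """Random-hyperplane rounding step of Goemans-Williamson: given one unit
    vector per vertex, draw a random hyperplane, split the vertices by its
    sign, and keep the heaviest cut found over several trials."""
    dim = len(vectors[0])
    best = 0.0
    for _ in range(trials):
        r = [random.gauss(0, 1) for _ in range(dim)]
        side = [sum(ri * vi for ri, vi in zip(r, v)) >= 0 for v in vectors]
        cut = sum(w for (i, j), w in weights.items() if side[i] != side[j])
        best = max(best, cut)
    return best

# Toy triangle (3-cycle) with unit weights; its maximum cut has weight 2.
vecs = [(1.0, 0.0), (-0.5, 0.866), (-0.5, -0.866)]
w = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
print(gw_round(vecs, w))  # every hyperplane cuts the triangle, weight 2.0
```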
An overview of unconstrained optimization
[Online]. Available: citeseer.ist.psu.edu/fletcher93overview.html
, 1993
"... bundle filter method for nonsmooth nonlinear ..."
Application of a New Adjoint Newton Algorithm to the 3D ARPS Storm-Scale Model Using Simulated Data
, 1997
Abstract

Cited by 12 (1 self)
The adjoint Newton algorithm (ANA) is based on first- and second-order adjoint techniques, allowing one to obtain the "Newton line search direction" by integrating a "tangent linear model" backward in time (with negative time steps). Moreover, the ANA provides a new technique to find the "Newton line search direction" without using gradient information. The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in a quasi-Newton-type algorithm is thus completely eliminated, while the storage problem related to storing the Hessian no longer exists, since the explicit Hessian is not required in this algorithm. The ANA is applied here, for the first time, in the framework of 4D variational data assimilation to the adiabatic version of the Advanced Regional Prediction System (ARPS), a three-dimensional, compressible, non-hydrostatic storm-scale model. The purpose is to assess the feasibility and efficiency ...
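A hedged sketch of the underlying idea, in standard notation of my own choosing rather than the paper's: the Newton line-search direction d solves the linear system below, and second-order adjoint integrations supply the Hessian-vector products needed to solve it iteratively, so the Hessian is never formed or stored.

```latex
% Newton direction d at the current control vector x:
%   a second-order adjoint model returns \nabla^2 J(x)\,u for any vector u
%   (one tangent-linear run forward, one adjoint run backward), so d can be
%   found matrix-free, e.g. by conjugate-gradient iterations.
\nabla^2 J(x)\, d \;=\; -\nabla J(x)
```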
Newton and quasi-Newton methods for a class of nonsmooth equations and related problems
SIAM J. Optim.
, 1997
Abstract

Cited by 10 (5 self)
Abstract. The paper presents concrete realizations of quasi-Newton methods for solving several standard problems, including complementarity problems, special variational inequality problems, and the Karush–Kuhn–Tucker (KKT) system of nonlinear programming. A new approximation idea is introduced in this paper. The Q-superlinear convergence of the Newton method and the quasi-Newton method is established under suitable assumptions, in which the existence of F′(x∗) is not assumed. The new algorithms only need to solve a linear equation in each step. For complementarity problems, the QR factorization on the quasi-Newton method is discussed.
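In one dimension the quasi-Newton idea reduces to the secant iteration, which already shows why no derivative F′ is needed; this toy (the equation and starting points are mine) is far simpler than the paper's multidimensional updates for complementarity and KKT systems.

```python
def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """One-dimensional quasi-Newton (secant) iteration: the derivative is
    replaced by a finite-difference approximation built from the last two
    iterates, so the method applies even where f is nonsmooth."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if abs(f1) < tol:
            return x1
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        f0, f1 = f1, f(x1)
    return x1

# A nonsmooth equation: |x| + x - 2 = 0 has the root x = 1.
root = secant(lambda x: abs(x) + x - 2, 4.0, 3.0)
print(root)  # root of the nonsmooth equation: 1.0
```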
Second Order Information in Data Assimilation
, 2000
Abstract

Cited by 8 (6 self)
In variational data assimilation (VDA) for meteorological and/or oceanic models, the assimilated fields are deduced by combining the model and the gradient of a cost functional measuring the discrepancy between the model solution and observations, via a first-order optimality system. However, existence and uniqueness of the VDA problem, along with convergence of the algorithms for its implementation, depend on the convexity of the cost function. Properties of local convexity can be deduced by studying the Hessian of the cost function in the vicinity of the optimum; hence the necessity of second-order information to ensure a unique solution to the VDA problem. In this paper we present a comprehensive review of issues related to second-order analysis of the problem of VDA, along with many important issues closely connected to it. In particular, we study issues of existence, uniqueness and regularization through second-order properties. We then focus on second-order information related to statistical properties and on issues related to preconditioning, optimization methods and second-order VDA analysis. Predictability and its relation to the structure of the Hessian of the cost functional is then discussed, along with issues of sensitivity analysis in the presence of data being assimilated. Computational complexity issues are also addressed, including automatic differentiation issues related to second-order information and the cost of deriving the second-order adjoint. Finally ...
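One concrete way to look at the second-order information discussed above: approximate the Hessian of a toy VDA-style cost by finite differences and inspect it for positive definiteness (local convexity). Both the cost function and the finite-difference scheme are mine, a cheap stand-in for the second-order adjoint machinery the paper reviews.

```python
def hessian_fd(J, x, h=1e-5):
    """Central finite-difference Hessian of a scalar cost J at point x.
    Positive eigenvalues of this matrix indicate local convexity, and
    hence a locally unique minimizer of the VDA cost."""
    n = len(x)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def at(di, dj):
                y = list(x)
                y[i] += di
                y[j] += dj
                return J(y)
            H[i][j] = (at(h, h) - at(h, -h) - at(-h, h) + at(-h, -h)) / (4 * h * h)
    return H

# Toy 4D-Var-style cost: background term plus one observation of x[0] + x[1].
J = lambda x: 0.5 * (x[0] ** 2 + x[1] ** 2) + 0.5 * (x[0] + x[1] - 1.0) ** 2
H = hessian_fd(J, [0.0, 0.0])
print(H)  # ≈ [[2, 1], [1, 2]]: positive definite, so the cost is locally convex
```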
Application of the Quasi-Inverse Method for Data Assimilation
, 1999
Abstract

Cited by 8 (2 self)
Introduction. Using a theoretical relationship (i.e., physical laws), for given values of model parameters, a 'direct' (or forward) problem aims at predicting the values of some observable quantities. In contrast, for given measurements of observable quantities, an 'inverse' problem aims at obtaining the values of model parameters (Tarantola 1987). Over the past two decades, many inverse problems in meteorology have been solved using the adjoint models of corresponding meteorological prediction systems. These include the generation of singular vectors for ensemble prediction (e.g., Molteni et al. 1996); four-dimensional variational data assimilation (e.g., Lewis and Derber 1985; Le Dimet and Talagrand 1986; Courtier et al. 1994); forecast sensitivity to the initial conditions (Rabier et al. 1996; Pu et al. 1997a); and targeted observations (e.g., Rohaly et al. 1998; Pu et al. 1998). The four-dimensional variational data assimilation (4D-Var) using the adjoint model, in which ...
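The forward/adjoint pattern behind 4D-Var gradients can be shown with a scalar linear model; the model, cost, and every name below are illustrative, not taken from any of the systems cited above.

```python
def adjoint_gradient(M, x0, obs):
    """Gradient of a toy 4D-Var cost J(x0) = 0.5 * sum_k (x_k - y_k)^2 for the
    scalar linear model x_{k+1} = M * x_k, computed by one forward run that
    stores the trajectory followed by one backward (adjoint) sweep — the
    pattern full adjoint models implement for real prediction systems."""
    N = len(obs)
    traj = [x0]
    for _ in range(N - 1):                 # forward run: store the trajectory
        traj.append(M * traj[-1])
    lam = traj[N - 1] - obs[N - 1]         # adjoint variable at final time
    for k in range(N - 2, -1, -1):         # backward sweep toward t = 0
        lam = M * lam + (traj[k] - obs[k])
    return lam                             # dJ/dx0

g = adjoint_gradient(2.0, 1.0, [0.0, 0.0])
print(g)  # matches the analytic gradient (x0 - y0) + M*(M*x0 - y1) = 1 + 4 = 5.0
```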