Results 1-10 of 62
Efficient Memory-based Learning for Robot Control
, 1990
"... This dissertation is about the application of machine learning to robot control. A system which has no initial model of the robot/world dynamics should be able to construct such a model using data received through its sensorsan approach which is formalized here as the $AB (StateActionBehaviour) ..."
Abstract

Cited by 121 (3 self)
This dissertation is about the application of machine learning to robot control. A system which has no initial model of the robot/world dynamics should be able to construct such a model using data received through its sensors, an approach which is formalized here as the SAB (State-Action-Behaviour) control cycle. A method of learning is presented in which all the experiences in the lifetime of the robot are explicitly remembered. The experiences are stored in a manner which permits fast recall of the closest previous experience to any new situation, thus permitting very quick predictions of the effects of proposed actions and, given a goal behaviour, fast generation of a candidate action. The learning can take place in high-dimensional nonlinear control spaces with real-valued ranges of variables. Furthermore, the method avoids a number of shortcomings of earlier learning methods in which the controller can become trapped in inadequate performance which does not improve. Also considered is how the system is made resistant to noisy inputs and how it adapts to environmental changes. A well-founded mechanism for choosing actions is introduced which solves the experiment/perform dilemma for this domain with adequate computational efficiency and fast convergence to the goal behaviour. The dissertation explains in detail how the SAB control cycle can be integrated into both low- and high-complexity tasks. The methods and algorithms are evaluated with numerous experiments using both real and simulated robot domains. The final experiment also illustrates how a compound learning task can be structured into a hierarchy of simple learning tasks.
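The "store everything, recall the closest experience" idea in this abstract can be sketched as a nearest-neighbour lookup over stored (state, action, behaviour) triples. This is a minimal illustrative sketch, not the thesis's actual data structure (which uses a fast-recall index rather than a linear scan); the class and method names are assumptions.

```python
import numpy as np

class ExperienceMemory:
    """Illustrative memory-based model: remember every experience,
    predict a new situation's outcome from the closest stored one."""

    def __init__(self):
        self.keys = []        # concatenated (state, action) vectors
        self.behaviours = []  # observed outcome for each key

    def remember(self, state, action, behaviour):
        self.keys.append(np.concatenate([state, action]))
        self.behaviours.append(np.asarray(behaviour, dtype=float))

    def predict(self, state, action):
        # recall the closest previous experience to the new situation
        query = np.concatenate([state, action])
        dists = np.linalg.norm(np.vstack(self.keys) - query, axis=1)
        return self.behaviours[int(np.argmin(dists))]
```

In the thesis's setting, the linear scan in `predict` would be replaced by a structure permitting fast recall (e.g. a spatial tree), which is what makes the approach practical over a robot's whole lifetime of experiences.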
LARGE-SCALE LINEARLY CONSTRAINED OPTIMIZATION
, 1978
"... An algorithm for solving largescale nonlinear ' programs with linear constraints is presented. The method combines efficient sparsematrix techniques as in the revised simplex method with stable quasiNewton methods for handling the nonlinearities. A generalpurpose production code (MINOS) is ..."
Abstract

Cited by 108 (21 self)
An algorithm for solving large-scale nonlinear programs with linear constraints is presented. The method combines efficient sparse-matrix techniques as in the revised simplex method with stable quasi-Newton methods for handling the nonlinearities. A general-purpose production code (MINOS) is described, along with computational experience on a wide variety of problems.
Hybrid methods using genetic algorithms for global optimization
 IEEE Trans. Systems, Man, and Cybernetics
, 1996
"... AbstractThis paper discusses the tradeoff between accuracy, reliability and computing time in global optimization. Particular compromises provided by traditional methods (QuasiNewton and NelderMead’s Simplex methods) and Genetic Algorithms are addressed and illustrated by a particular applicatio ..."
Abstract

Cited by 64 (0 self)
This paper discusses the trade-off between accuracy, reliability and computing time in global optimization. Particular compromises provided by traditional methods (quasi-Newton and Nelder-Mead's simplex methods) and Genetic Algorithms are addressed and illustrated by a particular application in the field of nonlinear system identification. Subsequently, new hybrid methods are designed, combining principles from Genetic Algorithms and "hill-climbing" methods in order to find a better compromise on the trade-off. Inspired by biology, and especially by the manner in which living beings adapt themselves to their environment, these hybrid methods involve two interwoven levels of optimization, namely evolution (Genetic Algorithms) and individual learning (quasi-Newton), which cooperate in a global process of optimization. One of these hybrid methods appears to join the group of state-of-the-art global optimization methods: it combines the reliability properties of Genetic Algorithms with the accuracy of the quasi-Newton method, while requiring a computation time only slightly higher than the latter.
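The two interwoven levels described above can be sketched as an evolutionary outer loop with a local-refinement inner loop. In this illustrative sketch, a few plain gradient steps stand in for the paper's quasi-Newton individual-learning phase; all function names, population sizes, and step sizes are assumptions, not taken from the paper.

```python
import numpy as np

def hybrid_minimize(f, grad, dim, pop_size=20, generations=30,
                    learn_steps=5, lr=0.05, rng=None):
    """Toy two-level hybrid: evolution (selection + mutation) interleaved
    with individual learning (local gradient refinement of each candidate)."""
    rng = rng or np.random.default_rng(0)
    pop = rng.uniform(-5, 5, size=(pop_size, dim))
    for _ in range(generations):
        # Individual learning: refine every candidate locally.
        for _ in range(learn_steps):
            pop -= lr * np.array([grad(x) for x in pop])
        # Evolution: keep the best half, refill with mutated copies.
        order = np.argsort([f(x) for x in pop])
        elite = pop[order[:pop_size // 2]]
        children = elite + rng.normal(scale=0.5, size=elite.shape)
        pop = np.vstack([elite, children])
    return min(pop, key=f)

# Example: a simple quadratic with its minimum at (1, 1).
best = hybrid_minimize(lambda x: np.sum((x - 1) ** 2),
                       lambda x: 2 * (x - 1), dim=2)
```

The design point the paper makes is visible even in this toy: the population-level search supplies reliability (global coverage), while the per-individual local steps supply accuracy near a basin.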
Modifying a Sparse Cholesky Factorization
, 1997
"... Given a sparse symmetric positive definite matrix AA T and an associated sparse Cholesky factorization LL T , we develop sparse techniques for obtaining the new factorization associated with either adding a column to A or deleting a column from A. Our techniques are based on an analysis and mani ..."
Abstract

Cited by 51 (15 self)
Given a sparse symmetric positive definite matrix AA^T and an associated sparse Cholesky factorization LL^T, we develop sparse techniques for obtaining the new factorization associated with either adding a column to A or deleting a column from A. Our techniques are based on an analysis and manipulation of the underlying graph structure and on ideas of Gill, Golub, Murray, and Saunders for modifying a dense Cholesky factorization. Our algorithm involves a new sparse matrix concept, the multiplicity of an entry in L. The multiplicity is essentially a measure of the number of times an entry is modified during symbolic factorization. We show that our methods extend to the general case where an arbitrary sparse symmetric positive definite matrix is modified. Our methods are optimal in the sense that they take time proportional to the number of nonzero entries in L that change. This work was supported by National Science Foundation grants DMS-9404431 and DMS-9504974.
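As background for the dense case the paper generalizes: adding a column w to A changes AA^T to AA^T + ww^T, and the classical dense rank-1 update recomputes the Cholesky factor in O(n^2) without refactorizing. Below is a sketch of that standard dense update (the textbook recurrence, not the paper's sparse algorithm); the function name is an assumption.

```python
import numpy as np

def cholesky_rank1_update(L, w):
    """Given lower-triangular L with A = L @ L.T, return the Cholesky
    factor of A + w @ w.T using the classical dense rank-1 recurrence."""
    L = L.astype(float).copy()
    w = w.astype(float).copy()
    n = len(w)
    for k in range(n):
        r = np.hypot(L[k, k], w[k])     # new diagonal entry
        c = r / L[k, k]
        s = w[k] / L[k, k]
        L[k, k] = r
        # update the rest of column k, then fold the change into w
        L[k+1:, k] = (L[k+1:, k] + s * w[k+1:]) / c
        w[k+1:] = c * w[k+1:] - s * L[k+1:, k]
    return L
```

The paper's contribution is doing the analogous modification on a *sparse* L in time proportional only to the entries of L that actually change, rather than the O(n^2) this dense sweep costs.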
Lanczos-type solvers for nonsymmetric linear systems of equations
 Acta Numer
, 1997
"... Among the iterative methods for solving large linear systems with a sparse (or, possibly, structured) nonsymmetric matrix, those that are based on the Lanczos process feature short recurrences for the generation of the Krylov space. This means low cost and low memory requirement. This review article ..."
Abstract

Cited by 40 (11 self)
Among the iterative methods for solving large linear systems with a sparse (or, possibly, structured) nonsymmetric matrix, those that are based on the Lanczos process feature short recurrences for the generation of the Krylov space. This means low cost and low memory requirements. This review article introduces the reader not only to the basic forms of the Lanczos process and some of the related theory, but also describes in detail a number of solvers that are based on it, including those that are considered to be the most efficient ones. Possible breakdowns of the algorithms and ways to cure them by look-ahead are also discussed.
Mean-shift analysis using quasi-Newton methods
 Proceedings of the International Conference on Image Processing 3 (2003) 447 – 450
, 2003
"... Meanshift analysis is a general nonparametric clustering technique based on density estimation for the analysis of complex feature spaces. The algorithm consists of a simple iterative procedure that shifts each of the feature points to the nearest stationary point along the gradient directions of t ..."
Abstract

Cited by 26 (1 self)
Mean-shift analysis is a general nonparametric clustering technique based on density estimation for the analysis of complex feature spaces. The algorithm consists of a simple iterative procedure that shifts each of the feature points to the nearest stationary point along the gradient directions of the estimated density function. It has been successfully applied to many applications such as segmentation and tracking. However, despite its promising performance, there are applications for which the algorithm converges too slowly to be practical. We propose and implement an improved version of the mean-shift algorithm using quasi-Newton methods to achieve higher convergence rates. Another benefit of our algorithm is its ability to achieve clustering even for very complex and irregular feature-space topography. Experimental results demonstrate the efficiency and effectiveness of our algorithm.
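The baseline procedure this paper accelerates can be sketched as a fixed-point iteration: each point moves to the kernel-weighted mean of the data until it stops changing. This is the plain mean-shift step (the slow scheme the abstract mentions), not the paper's quasi-Newton variant; the Gaussian kernel and parameter names are assumptions.

```python
import numpy as np

def mean_shift_point(x, data, bandwidth=1.0, tol=1e-6, max_iter=500):
    """Shift a single point x to a stationary point (mode) of a
    Gaussian-kernel density estimate over `data` (shape: n_points x dim)."""
    for _ in range(max_iter):
        # kernel weight of every data point relative to the current x
        w = np.exp(-np.sum((data - x) ** 2, axis=1) / (2 * bandwidth ** 2))
        x_new = (w[:, None] * data).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

Each such fixed-point step is only linearly convergent, which is exactly why replacing it with a superlinearly convergent quasi-Newton step, as the paper proposes, can pay off.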
Newton and quasi-Newton methods for a class of nonsmooth equations and related problems
 SIAM J. Optim
, 1997
"... Abstract. The paper presents concrete realizations of quasiNewton methods for solving several standard problems including complementarity problems, special variational inequality problems, and the Karush–Kuhn–Tucker (KKT) system of nonlinear programming. A new approximation idea is introduced in th ..."
Abstract

Cited by 17 (6 self)
The paper presents concrete realizations of quasi-Newton methods for solving several standard problems, including complementarity problems, special variational inequality problems, and the Karush–Kuhn–Tucker (KKT) system of nonlinear programming. A new approximation idea is introduced in this paper. The Q-superlinear convergence of the Newton method and the quasi-Newton method is established under suitable assumptions, in which the existence of F′(x∗) is not assumed. The new algorithms only need to solve a linear equation in each step. For complementarity problems, the QR factorization on the quasi-Newton method is discussed.
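To make the setting concrete: a complementarity problem "x >= 0, F(x) >= 0, x . F(x) = 0" is commonly recast as a nonsmooth equation Phi(x) = 0 via the Fischer-Burmeister function and solved by Newton-type steps. The sketch below uses a finite-difference Jacobian; this is the generic textbook reformulation, not the paper's specific approximation scheme, and all names are illustrative.

```python
import numpy as np

def fischer_burmeister(a, b):
    # zero exactly when a >= 0, b >= 0, and a * b = 0 (elementwise)
    return np.sqrt(a**2 + b**2) - a - b

def solve_ncp(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton-type iteration on Phi(x) = fischer_burmeister(x, F(x))
    with a finite-difference Jacobian standing in for a generalized one."""
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for _ in range(max_iter):
        phi = fischer_burmeister(x, F(x))
        if np.linalg.norm(phi) < tol:
            break
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n); e[j] = h
            J[:, j] = (fischer_burmeister(x + e, F(x + e)) - phi) / h
        x = x - np.linalg.solve(J, phi)
    return x

# Example NCP: F(x) = x - 2 has the complementarity solution x = 2.
x_star = solve_ncp(lambda x: x - 2.0, np.array([5.0]))
```

The point of the quasi-Newton realizations in the paper is to avoid forming such Jacobians at all, updating an approximation instead while retaining Q-superlinear convergence.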
A PRACTICAL PROCEDURE FOR CALIBRATING MICROSCOPIC TRAFFIC SIMULATION MODELS
, 2002
"... As employment of simulation is becoming wide spread in traffic engineering practice, questions about the accuracy and reliability of its results need to be addressed convincingly. A major criticism related to this is proper calibration of the simulation parameters as well as validation which is ofte ..."
Abstract

Cited by 16 (0 self)
As employment of simulation is becoming widespread in traffic engineering practice, questions about the accuracy and reliability of its results need to be addressed convincingly. A major criticism concerns proper calibration of the simulation parameters, as well as validation, which is often not performed or is dealt with in an ad-hoc fashion. This paper presents a complete, systematic and general calibration methodology for obtaining the accuracy needed in high-performance situations. A technique for automating a significant part of the calibration process through an optimization process is also presented. The methodology is general and is implemented on a selected simulator to demonstrate its applicability. The results of the implementation in two freeway sections of reasonable size and complexity, in which detailed data were collected and compared to simulated results, demonstrate the effectiveness of the manual calibration methodology. For instance, through calibration the average volume correlation coefficient on 21 detecting stations improved from 0.78 to 0.96. Comparable results were obtained with the automated calibration procedure, with significant time savings and reduced ...
Design and Performance of Parallel and Distributed Approximation Algorithms for Max-Cut
, 1995
"... We develop and experiment with a new parallel algorithm to approximate the maximum weight cut in a weighted undirected graph. Our implementation starts with the recent (serial) algorithm of Goemans and Williamson for this problem. We consider several different versions of this algorithm, varying the ..."
Abstract

Cited by 15 (0 self)
We develop and experiment with a new parallel algorithm to approximate the maximum weight cut in a weighted undirected graph. Our implementation starts with the recent (serial) algorithm of Goemans and Williamson for this problem. We consider several different versions of this algorithm, varying the interior-point part of the algorithm in order to optimize the parallel efficiency of our method. Our work aims for an efficient, practical formulation of the algorithm with close-to-optimal parallelization. We analyze our parallel algorithm in the LogP model and predict linear speedup for a wide range of the parameters. We have implemented the algorithm using the Message Passing Interface (MPI) and run it on several parallel machines. In particular, we present performance measurements on the IBM SP2, the Connection Machine CM-5, and a cluster of workstations. We observe that the measured speedups are predicted well by our analysis in the LogP model. Finally, we test our implementation on s...
Application of a New Adjoint Newton Algorithm to the 3D ARPS Storm-Scale Model Using Simulated Data
, 1997
"... The adjoint Newton algorithm (ANA) is based on the first and secondorder adjoint techniques allowing one to obtain the "Newton line search direction" by integrating a "tangent linear model" backward in time (with negative time steps). Moreover, the ANA provides a new technique ..."
Abstract

Cited by 13 (1 self)
The adjoint Newton algorithm (ANA) is based on the first- and second-order adjoint techniques, allowing one to obtain the "Newton line search direction" by integrating a "tangent linear model" backward in time (with negative time steps). Moreover, the ANA provides a new technique to find the "Newton line search direction" without using gradient information. The error present in approximating the Hessian (the matrix of second-order derivatives) of the cost function with respect to the control variables in the quasi-Newton-type algorithm is thus completely eliminated, while the storage problem related to storing the Hessian no longer exists since the explicit Hessian is not required in this algorithm. The ANA is applied here, for the first time, in the framework of 4D variational data assimilation to the adiabatic version of the Advanced Regional Prediction System (ARPS), a 3-dimensional, compressible, non-hydrostatic storm-scale model. The purpose is to assess the feasibility and efficiency ...