Results 1–10 of 137
Locally weighted learning
Artificial Intelligence Review, 1997
Cited by 572 (53 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
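The core idea surveyed above, fitting a weighted linear model around each query point, can be sketched in a few lines (my own minimal illustration, not the authors' code; the Gaussian weighting function and `bandwidth` smoothing parameter are one common choice among those the survey discusses):

```python
import numpy as np

def lwr_predict(X, y, x_query, bandwidth=0.5):
    """Locally weighted linear regression: answer one query by fitting a
    weighted least-squares linear model centred on the query point."""
    # Gaussian weighting of training points by distance to the query.
    d = np.linalg.norm(X - x_query, axis=1)
    w = np.exp(-((d / bandwidth) ** 2))
    # Augment with an intercept column and solve the weighted normal equations.
    A = np.hstack([X, np.ones((len(X), 1))])
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return float(np.append(x_query, 1.0) @ beta)
```

On exactly linear data the local fit reproduces the global line; the benefit of re-fitting per query appears on curved data, where each query gets its own local model.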
SBA: a software package for generic sparse bundle adjustment
ACM Transactions on Mathematical Software, 2009
"... Foundation for Research and Technology—Hellas ..."
A SAS procedure based on mixture models for estimating developmental trajectories
Sociological Methods & Research 29:374–393, 2001
Cited by 73 (8 self)
This article introduces a new SAS procedure written by the authors that analyzes longitudinal data (developmental trajectories) by fitting a mixture model. The TRAJ procedure fits semiparametric (discrete) mixtures of censored normal, Poisson, zero-inflated Poisson, and Bernoulli distributions to longitudinal data. Applications to psychometric scale data, offense counts, and a dichotomous prevalence measure in violence research are illustrated. In addition, the use of the Bayesian information criterion to address the problem of model selection, including the estimation of the number of components in the mixture, is demonstrated.
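The model-selection step described above can be illustrated with a toy EM fit of a discrete Poisson mixture plus a BIC comparison (a hedged sketch of the general idea only, not the TRAJ procedure itself; the initialisation scheme and function names are my own):

```python
import math
import numpy as np

def poisson_logpmf(k, lam):
    """Elementwise log P(K = k) for a Poisson(lam) variate."""
    return k * math.log(lam) - lam - np.array([math.lgamma(x + 1.0) for x in k])

def fit_poisson_mixture(counts, n_components, n_iter=200):
    """EM for a discrete mixture of Poissons; returns the component rates,
    mixing weights, and the BIC of the fitted model (lower is better)."""
    counts = np.asarray(counts, dtype=float)
    # Crude initialisation: spread the rates across quantiles of the data.
    lams = np.quantile(counts, np.linspace(0.25, 0.75, n_components)) + 0.1
    pis = np.full(n_components, 1.0 / n_components)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each count.
        logp = np.stack([np.log(pis[j]) + poisson_logpmf(counts, lams[j])
                         for j in range(n_components)])
        logp -= logp.max(axis=0)            # stabilise before exponentiating
        resp = np.exp(logp)
        resp /= resp.sum(axis=0)
        # M-step: re-estimate mixing weights and rates.
        pis = resp.mean(axis=1)
        lams = (resp @ counts) / resp.sum(axis=1)
    loglik = np.sum(np.log(sum(pis[j] * np.exp(poisson_logpmf(counts, lams[j]))
                               for j in range(n_components))))
    n_params = 2 * n_components - 1         # rates plus free mixing weights
    return lams, pis, n_params * math.log(len(counts)) - 2.0 * loglik
```

Fitting with one and then two components and keeping the lower BIC mirrors the selection of the number of mixture components described in the abstract.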
Hooking Your Solver to AMPL, 1997
Cited by 36 (5 self)
This report tells how to make solvers work with AMPL's solve command. It describes an interface library, amplsolver.a, whose source is available from netlib. Examples include programs for listing LPs, automatic conversion to the LP dual (shell script as solver), solvers for various nonlinear problems (with first and sometimes second derivatives computed by automatic differentiation), and getting C or Fortran 77 for nonlinear constraints, objectives and their first derivatives. Drivers for various well-known linear, mixed-integer, and nonlinear solvers provide more examples.
Memory-Based Neural Networks for Robot Learning
Neurocomputing, 1995
Cited by 29 (8 self)
This paper explores a memory-based approach to robot learning, using memory-based neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network designs to explicitly store training data and do nearest neighbor lookup in the early 1960s. In this paper their nearest neighbor network is augmented with a local model network, which fits a local model to a set of nearest neighbors. This network design is equivalent to a statistical approach known as locally weighted regression, in which a local model is formed to answer each query, using a weighted regression in which nearby points (similar experiences) are weighted more than distant points (less relevant experiences). We illustrate this approach by describing how it has been used to enable a robot to learn a difficult juggling task.
Keywords: memory-based, robot learning, locally weighted regression, nearest neighbor, local models.
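The lookup-then-local-model design described above can be sketched as nearest neighbor retrieval over stored experiences followed by a distance-weighted combination (my own minimal illustration with a simple inverse-distance weighting; the network in the paper fits a full local model rather than a weighted average):

```python
import numpy as np

def knn_local_model(X, y, x_query, k=3):
    """Memory-based prediction: nearest neighbor lookup over stored
    experiences, then a distance-weighted local average of their outputs."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]          # the k stored experiences nearest the query
    w = 1.0 / (d[idx] + 1e-9)        # nearby points weighted more than distant ones
    return float(np.sum(w * y[idx]) / np.sum(w))
```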
Gate Sizing Using Lagrangian Relaxation Combined with a Fast Gradient-Based Pre-Processing Step
Proc. ICCAD, 2002
Cited by 27 (1 self)
In this paper, we present Forge, an optimal algorithm for gate sizing using the Elmore delay model. The algorithm utilizes Lagrangian relaxation with a fast gradient-based pre-processing step that provides an effective set of initial Lagrange multipliers. Compared to the previous Lagrangian-based approach, Forge is considerably faster and does not have the inefficiencies due to difficult-to-determine initial conditions and constant factors. We compared the two algorithms on 30 benchmark designs, on a Sun UltraSparc-60 workstation. On average Forge is 200 times faster than the previously published algorithm. We then improved Forge by incorporating a slew-rate-based convex delay model, which handles distinct rise and fall gate delays. We show that Forge is 15 times faster, on average, than the AMPS transistor-sizing tool from Synopsys, while achieving the same delay targets and using similar total transistor area.
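The Elmore delay model that Forge optimizes under has a simple closed form for an RC ladder: each capacitor is charged through the total resistance on its path back to the driver. A minimal sketch of that computation (my own illustration, not the paper's code):

```python
def elmore_delay(resistances, capacitances):
    """Elmore delay of an RC ladder: the delay is the sum over nodes of
    (resistance accumulated from the source) x (node capacitance)."""
    delay, r_path = 0.0, 0.0
    for r, c in zip(resistances, capacitances):
        r_path += r                  # total resistance on the path to this node
        delay += r_path * c
    return delay
```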
Derivative Convergence for Iterative Equation Solvers, 1993
Cited by 24 (16 self)
In this paper, we consider two approaches to computing the desired implicitly defined derivative x ...
The Solution of the Metric STRESS and SSTRESS Problems in Multidimensional Scaling Using Newton's Method, 1995
Cited by 23 (3 self)
This paper considers numerical algorithms for finding local minimizers of metric multidimensional scaling problems. Both the STRESS and SSTRESS criteria are considered, and the leading algorithms for each are carefully explicated. A new algorithm, based on Newton's method, is proposed. Translational and rotational indeterminacy is removed by a parametrization that has not previously been used in multidimensional scaling algorithms. In contrast to previous algorithms, a very pleasant feature of the new algorithm is that it can be used with either the STRESS or the SSTRESS criterion. Numerical results are presented.
Key words: metric multidimensional scaling, STRESS criterion, SSTRESS criterion, unconstrained optimization, Newton's method.
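The Newton iteration at the heart of the proposed algorithm can be sketched generically (a minimal illustration of the method only; the MDS-specific gradient and Hessian depend on the STRESS or SSTRESS criterion and are not reproduced here):

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Undamped Newton iteration for a stationary point: repeatedly solve
    H(x) step = -g(x) and update x until the step is negligible."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(hess(x), -grad(x))
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x
```

On a strictly convex quadratic this converges in one step; practical implementations add safeguards (line search, Hessian modification) near saddle points.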
Convergence theorems for least change secant update methods
SIAM Journal of Numerical Analysis, 1981
Cited by 20 (1 self)
The purpose of this paper is to present a convergence analysis of least change secant methods in which part of the derivative matrix being approximated is computed by other means. The theorems and proofs given here can be viewed as generalizations of those given by Broyden–Dennis–Moré [J. Inst. Math. Appl., 12 (1973), pp. 223–246] and by Dennis–Moré [Math. Comp., 28 (1974), pp. 549–560]. The analysis is done in the orthogonal projection setting of Dennis–Schnabel [SIAM Rev., 21 (1980), pp. 443–459], and many readers might feel that it is easier to understand. The theorems here readily imply local and q-superlinear convergence of all the standard methods, in addition to proving these results for the first time for the sparse symmetric method of Marwil and Toint and the nonlinear least-squares method of Dennis–Gay–Welsch.
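The simplest least change secant method is Broyden's update, which makes the minimal (smallest Frobenius-norm) rank-one change to the Jacobian approximation consistent with the secant equation B_new s = y. A hedged sketch of that baseline (my own illustration, not the paper's generalized projection setting):

```python
import numpy as np

def broyden_solve(F, x0, B0, tol=1e-10, max_iter=100):
    """Broyden's method for F(x) = 0: take quasi-Newton steps, and after
    each step apply the least change secant update to B."""
    x = np.asarray(x0, dtype=float)
    B = np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        s = np.linalg.solve(B, -Fx)                # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B = B + np.outer(y - B @ s, s) / (s @ s)   # rank-one secant update
        x, Fx = x_new, F_new
        if np.linalg.norm(Fx) < tol:
            break
    return x
```

In one dimension this reduces to the classical secant method.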
A well-posed shooting algorithm for optimal control problems with singular arcs, 2011
Cited by 20 (7 self)
In this article we establish for the first time the well-posedness of the shooting algorithm applied to optimal control problems for which all control variables enter linearly in the Hamiltonian. We start by investigating the case having only initial-final state constraints and free control variables, and afterwards we deal with control bounds. The shooting algorithm is well-posed if the derivative of its associated shooting function is injective at the optimal solution. The main result of this paper is to provide a sufficient condition for this injectivity, which is very close to the second order necessary condition. We prove that this sufficient condition guarantees the stability of the optimal solution under small perturbations and the well-posedness of the shooting algorithm for the perturbed problem. We present numerical tests that validate our method.
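A single-shooting iteration in its simplest form: guess the unknown initial data, integrate forward, and drive the terminal mismatch (the shooting function) to zero with Newton steps; well-posedness corresponds exactly to the shooting function's derivative being nonzero (injective). A toy sketch on a linear two-point boundary value problem (my own illustration, far simpler than the optimal-control setting of the paper):

```python
import math

def integrate(p, n=1000):
    """Explicit Euler for x'' = x on [0, 1] with x(0) = 0, x'(0) = p;
    returns the terminal value x(1)."""
    h = 1.0 / n
    x, v = 0.0, p
    for _ in range(n):
        x, v = x + h * v, v + h * x
    return x

def shoot(target, p0=1.0, tol=1e-10, max_iter=50):
    """Single shooting: adjust the unknown initial slope p until the
    shooting function S(p) = x(1; p) - target vanishes, using Newton steps
    with a finite-difference derivative. The iteration is well-posed here
    because S'(p) is nonzero."""
    p = p0
    for _ in range(max_iter):
        S = integrate(p) - target                   # shooting-function residual
        if abs(S) < tol:
            break
        dS = (integrate(p + 1e-6) - integrate(p)) / 1e-6
        p -= S / dS
    return p
```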