Results 1–10 of 159
Locally weighted learning
 Artificial Intelligence Review
, 1997
Abstract

Cited by 596 (53 self)
This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, assessing predictions, handling noisy data and outliers, improving the quality of predictions by tuning fit parameters, interference between old and new data, implementing locally weighted learning efficiently, and applications of locally weighted learning. A companion paper surveys how locally weighted learning can be used in robot learning and control.
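The core idea of locally weighted linear regression described in this abstract can be sketched in a few lines. This is a minimal illustration, not code from the paper: the Gaussian weighting function, the Euclidean distance, and the fixed bandwidth are illustrative assumptions; the survey itself discusses many alternatives for each.

```python
import numpy as np

def locally_weighted_regression(X, y, x_query, bandwidth=0.5):
    """Predict y at x_query by fitting an affine model with weights
    from a Gaussian kernel on distance to the query (illustrative choices)."""
    Xa = np.hstack([np.ones((len(X), 1)), X])      # bias column -> affine local model
    d2 = np.sum((X - x_query) ** 2, axis=1)        # squared Euclidean distances
    w = np.exp(-d2 / (2 * bandwidth ** 2))         # Gaussian weighting function
    W = np.diag(w)
    beta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ y)  # weighted normal equations
    return np.concatenate([[1.0], x_query]) @ beta

# Toy data: y = sin(x) plus noise; a single global line would underfit.
rng = np.random.default_rng(0)
X = np.linspace(0.0, 6.0, 80).reshape(-1, 1)
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(80)
pred = locally_weighted_regression(X, y, np.array([1.5]))
```

Because a fresh weighted model is solved for each query, nearby training points dominate the fit; no global model is ever trained, which is the lazy-learning aspect the abstract mentions.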
SBA: a software package for generic sparse bundle adjustment
 ACM Transactions on Mathematical Software
, 2009
"... Foundation for Research and Technology—Hellas ..."
A SAS procedure based on mixture models for estimating developmental trajectories
 Sociological Methods & Research 29:374–393
, 2001
Abstract

Cited by 107 (10 self)
This article introduces a new SAS procedure written by the authors that analyzes longitudinal data (developmental trajectories) by fitting a mixture model. The TRAJ procedure fits semiparametric (discrete) mixtures of censored normal, Poisson, zero-inflated Poisson, and Bernoulli distributions to longitudinal data. Applications to psychometric scale data, offense counts, and a dichotomous prevalence measure in violence research are illustrated. In addition, the use of the Bayesian information criterion to address the problem of model selection, including the estimation of the number of components in the mixture, is demonstrated.
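The abstract's use of the Bayesian information criterion to choose the number of mixture components can be illustrated with a toy version of the idea. The sketch below is not the TRAJ procedure: it fits a plain (uncensored, covariate-free) Poisson mixture by EM in numpy and picks the component count K by BIC; the initialization and iteration counts are illustrative assumptions.

```python
import numpy as np
from math import lgamma

def log_poisson_pmf(x, lam):
    """Elementwise log P(x | lam) for a Poisson distribution."""
    lg = np.array([lgamma(k + 1.0) for k in x])
    return x * np.log(lam) - lam - lg

def fit_poisson_mixture(x, K, iters=300):
    """EM for a K-component Poisson mixture; returns (weights, rates, loglik)."""
    lam = np.quantile(x, np.linspace(0.2, 0.8, K)).astype(float) + 1e-3
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities, stabilized in log space.
        logp = np.stack([np.log(pi[k]) + log_poisson_pmf(x, lam[k]) for k in range(K)])
        logp -= logp.max(axis=0)
        r = np.exp(logp)
        r /= r.sum(axis=0)
        # M-step: update mixing weights and rates (guarded against empty components).
        pi = np.clip(r.mean(axis=1), 1e-12, None)
        pi /= pi.sum()
        lam = (r @ x) / np.maximum(r.sum(axis=1), 1e-12)
    logp = np.stack([np.log(pi[k]) + log_poisson_pmf(x, lam[k]) for k in range(K)])
    ll = np.sum(np.log(np.exp(logp).sum(axis=0)))
    return pi, lam, ll

def bic(loglik, n_params, n):
    return -2.0 * loglik + n_params * np.log(n)

# Two well-separated latent groups of counts.
rng = np.random.default_rng(1)
x = np.concatenate([rng.poisson(2.0, 300), rng.poisson(9.0, 300)])
scores = {K: bic(fit_poisson_mixture(x, K)[2], 2 * K - 1, len(x)) for K in (1, 2, 3)}
best_K = min(scores, key=scores.get)
```

On this data BIC strongly prefers K = 2 over K = 1, while the small log-likelihood gain from K = 3 does not offset the penalty for two extra parameters, so the two-group structure is recovered.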
Modeling of the glottal flow derivative waveform with application to speaker identification
 IEEE Trans. Speech and Audio Processing
, 1999
Abstract

Cited by 89 (5 self)
Speech production has long been viewed as a linear filtering process, as described by Fant in the late 1950's [10]. The vocal tract, which acts as the filter, is the primary focus of most speech work. This thesis develops a method for estimating the source of speech, the glottal flow derivative. Models are proposed for the coarse and fine structure of the glottal flow derivative, accounting for nonlinear source-filter interaction, and techniques are developed for estimating the parameters of these models. The importance of the source is demonstrated through speaker identification experiments. The glottal flow derivative waveform is estimated from the speech signal by inverse filtering the speech with a vocal tract estimate obtained during the glottal closed phase. The closed phase is determined through a sliding covariance analysis with a very short time window and a one-sample shift. This allows calculation of formant motion within each pitch period, predicted by Ananthapadmanabha and Fant to be a result of nonlinear source-filter interaction during the glottal open phase [1].
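The inverse-filtering step the abstract describes (estimate a vocal-tract filter by covariance analysis, then remove it to expose the source) can be sketched on a synthetic signal. This is a toy illustration, not the thesis's closed-phase, pitch-synchronous procedure: it fits covariance-method linear prediction to a known two-pole process and recovers the excitation as the inverse-filter residual.

```python
import numpy as np

def covariance_lpc(s, p):
    """Covariance-method linear prediction of order p: least-squares fit of
    s[n] ~ sum_k a[k] * s[n-1-k], with no windowing of the analysis frame."""
    N = len(s)
    C = np.empty((p, p))
    b = np.empty(p)
    for i in range(p):
        for j in range(p):
            C[i, j] = np.dot(s[p - 1 - i:N - 1 - i], s[p - 1 - j:N - 1 - j])
        b[i] = np.dot(s[p:N], s[p - 1 - i:N - 1 - i])
    return np.linalg.solve(C, b)

def inverse_filter(s, a):
    """Residual of the predictor: e[n] = s[n] - sum_k a[k] * s[n-1-k]."""
    p = len(a)
    return np.array([s[n] - np.dot(a, s[n - 1::-1][:p]) for n in range(p, len(s))])

# Synthetic "speech": white excitation through a 2-pole resonator.
rng = np.random.default_rng(2)
exc = rng.standard_normal(2000)
s = np.zeros(2000)
for n in range(2, 2000):
    s[n] = 1.2 * s[n - 1] - 0.5 * s[n - 2] + exc[n]

a = covariance_lpc(s, 2)          # recovers approximately [1.2, -0.5]
residual = inverse_filter(s, a)   # approximates the excitation exc
```

In the thesis the analysis frame would be restricted to the glottal closed phase, where the all-pole vocal-tract model holds best, and the residual would approximate the glottal flow derivative rather than white noise.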
Hooking Your Solver to AMPL
, 1997
Abstract

Cited by 35 (5 self)
This report tells how to make solvers work with AMPL's solve command. It describes an interface library, amplsolver.a, whose source is available from netlib. Examples include programs for listing LPs, automatic conversion to the LP dual (shell script as solver), solvers for various nonlinear problems (with first and sometimes second derivatives computed by automatic differentiation), and getting C or Fortran 77 for nonlinear constraints, objectives, and their first derivatives. Drivers for various well-known linear, mixed-integer, and nonlinear solvers provide more examples.
MemoryBased Neural Networks For Robot Learning
 Neurocomputing
, 1995
Abstract

Cited by 31 (8 self)
This paper explores a memory-based approach to robot learning, using memory-based neural networks to learn models of the task to be performed. Steinbuch and Taylor presented neural network designs to explicitly store training data and do nearest neighbor lookup in the early 1960s. In this paper their nearest neighbor network is augmented with a local model network, which fits a local model to a set of nearest neighbors. This network design is equivalent to a statistical approach known as locally weighted regression, in which a local model is formed to answer each query, using a weighted regression in which nearby points (similar experiences) are weighted more than distant points (less relevant experiences). We illustrate this approach by describing how it has been used to enable a robot to learn a difficult juggling task. Keywords: memory-based, robot learning, locally weighted regression, nearest neighbor, local models.
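The nearest-neighbor-plus-local-model design described here differs slightly from kernel weighting over all stored data: it first looks up the k most similar stored experiences and then fits a model only to them. A minimal sketch, with k and the affine local model as illustrative assumptions:

```python
import numpy as np

def knn_local_model(X, y, x_query, k=12):
    """Nearest-neighbor lookup followed by a local affine fit to those neighbors."""
    d = np.linalg.norm(X - x_query, axis=1)       # distance to every stored experience
    idx = np.argsort(d)[:k]                       # indices of the k nearest neighbors
    Xa = np.hstack([np.ones((k, 1)), X[idx]])     # bias column -> affine model
    beta, *_ = np.linalg.lstsq(Xa, y[idx], rcond=None)
    return np.concatenate([[1.0], x_query]) @ beta

# Stored (state, outcome) pairs from a nonlinear task model.
X = np.linspace(0.0, 6.0, 200).reshape(-1, 1)
y = np.sin(X[:, 0])
pred = knn_local_model(X, y, np.array([2.0]))
```

In the robot-learning setting the rows of X would be observed states and commands and y the measured outcomes; each query fits its own local model, so the learned model improves wherever experience accumulates.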
Gate Sizing Using Lagrangian Relaxation Combined with a Fast GradientBased PreProcessing Step
 Proc. ICCAD, 2002
Abstract

Cited by 28 (1 self)
In this paper, we present Forge, an optimal algorithm for gate sizing using the Elmore delay model. The algorithm utilizes Lagrangian relaxation with a fast gradient-based pre-processing step that provides an effective set of initial Lagrange multipliers. Compared to the previous Lagrangian-based approach, Forge is considerably faster and does not have the inefficiencies due to difficult-to-determine initial conditions and constant factors. We compared the two algorithms on 30 benchmark designs, on a Sun UltraSparc-60 workstation. On average Forge is 200 times faster than the previously published algorithm. We then improved Forge by incorporating a slew-rate-based convex delay model, which handles distinct rise and fall gate delays. We show that Forge is 15 times faster, on average, than the AMPS transistor-sizing tool from Synopsys, while achieving the same delay targets and using similar total transistor area.
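The Lagrangian-relaxation structure the abstract relies on can be shown on a drastically simplified sizing problem. This is not Forge: assume a single path whose Elmore-style delay is a sum of R·C/x terms, minimize total size subject to a delay target, and note that the inner minimization over sizes has a closed form once the multiplier is fixed.

```python
import numpy as np

# Toy model: gate i with size x[i] contributes delay R[i]*C[i]/x[i] and area x[i].
R = np.array([2.0, 1.0, 3.0])   # drive resistances (illustrative numbers)
C = np.array([1.0, 2.0, 1.5])   # load capacitances
T = 4.0                         # delay target

# Lagrangian: sum(x) + lam * (sum(R*C/x) - T).
# For fixed lam the minimizing sizes are x[i] = sqrt(lam * R[i] * C[i]).
lam, step = 1.0, 0.5
for _ in range(200):
    x = np.sqrt(lam * R * C)
    delay = np.sum(R * C / x)
    lam = max(1e-9, lam + step * (delay - T))   # subgradient step on the multiplier

area = x.sum()   # at convergence the delay constraint is met with equality
```

Real sizers handle many paths and distinct rise and fall delays, but the pattern is the same: good initial multipliers, which the pre-processing step supplies, decide how fast the outer loop converges.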
Advances in groupbased trajectory modeling and an SAS procedure for estimating them
 Sociological Methods & Research
, 2007
Abstract

Cited by 25 (3 self)
This article is a follow-up to Jones, Nagin, and Roeder (2001), which described a SAS procedure for estimating group-based trajectory models. Group-based trajectory modeling is a specialized application of finite mixture modeling and is designed to identify clusters of individuals following similar progressions of some behavior or outcome over age or time. This article has two purposes. One is to summarize extensions of the methodology and of the SAS procedure that have been developed since Jones et al. (2001). The other is to illustrate how group-based trajectory modeling lends itself to presentation of findings in the form of easily understood graphical and tabular data summaries.
Derivative Convergence for Iterative Equation Solvers
, 1993
Abstract

Cited by 24 (16 self)
In this paper, we consider two approaches to computing the desired implicitly defined derivative x …
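The two approaches can be contrasted on a scalar fixed-point problem. This is an illustrative sketch, not the paper's analysis: one approach iterates a derivative recurrence alongside the solver (a "piggyback" derivative), the other differentiates the converged fixed point implicitly.

```python
import numpy as np

# Fixed-point problem x = g(x, p), here g(x, p) = cos(p * x), with |dg/dx| < 1.
def g(x, p):
    return np.cos(p * x)

def g_x(x, p):   # partial derivative of g in x
    return -p * np.sin(p * x)

def g_p(x, p):   # partial derivative of g in p
    return -x * np.sin(p * x)

p = 1.0
x, dx = 0.5, 0.0
for _ in range(200):
    # Approach 1: propagate the derivative through each solver iteration.
    x, dx = g(x, p), g_x(x, p) * dx + g_p(x, p)

# Approach 2: differentiate x = g(x, p) implicitly at the converged point:
# dx/dp = g_p / (1 - g_x).
dx_implicit = g_p(x, p) / (1.0 - g_x(x, p))
```

Both numbers agree; the question studied in the paper is when and how fast the iterated derivative converges, which in this toy case follows from |g_x| < 1 at the fixed point.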
Convergence theorems for least change secant update methods
 SIAM Journal on Numerical Analysis
, 1981
Abstract

Cited by 23 (1 self)
The purpose of this paper is to present a convergence analysis of least change secant methods in which part of the derivative matrix being approximated is computed by other means. The theorems and proofs given here can be viewed as generalizations of those given by Broyden-Dennis-Moré [J. Inst. Math. Appl., 12 (1973), pp. 223–246] and by Dennis-Moré [Math. Comp., 28 (1974), pp. 549–560]. The analysis is done in the orthogonal projection setting of Dennis-Schnabel [SIAM Rev., 21 (1980), pp. 443–459], and many readers might feel that it is easier to understand. The theorems here readily imply local and q-superlinear convergence of all the standard methods, in addition to proving these results for the first time for the sparse symmetric method of Marwil and Toint and the nonlinear least-squares method of Dennis-Gay-Welsch.
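Broyden's method is the simplest instance of the least change secant updates analyzed here: each step makes the minimal (Frobenius-norm) change to the Jacobian approximation that satisfies the new secant equation. A small sketch on a 2-by-2 system, with the starting matrix taken as the true Jacobian at the initial guess (an illustrative choice):

```python
import numpy as np

def F(v):
    """Toy nonlinear system: intersect the circle x^2 + y^2 = 4 with the line x = y."""
    x, y = v
    return np.array([x * x + y * y - 4.0, x - y])

def broyden(v, B, tol=1e-10, iters=50):
    """Broyden's update: least-change modification of B satisfying B_new @ s = dF."""
    for _ in range(iters):
        s = np.linalg.solve(B, -F(v))               # quasi-Newton step
        v_new = v + s
        dF = F(v_new) - F(v)
        B = B + np.outer(dF - B @ s, s) / (s @ s)   # rank-one secant update
        v = v_new
        if np.linalg.norm(F(v)) < tol:
            break
    return v

v0 = np.array([1.0, 0.5])
B0 = np.array([[2.0 * v0[0], 2.0 * v0[1]], [1.0, -1.0]])   # exact Jacobian at v0
root = broyden(v0, B0)
```

The paper's theorems cover exactly this kind of rank-one update and its sparse and symmetric relatives, establishing local q-superlinear convergence under standard assumptions.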