Results 1–10 of 656,784
Least-Squares Policy Iteration
Journal of Machine Learning Research, 2003
"... We propose a new approach to reinforcement learning for control problems which combines valuefunction approximation with linear architectures and approximate policy iteration. This new approach ..."
Cited by 461 (12 self)
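The evaluation step behind this combination is LSTD-Q: with a linear architecture Q(s, a) ≈ φ(s, a)ᵀw, the weights for a fixed policy solve a linear system accumulated from sampled transitions, and policy iteration alternates this step with greedy improvement. A minimal sketch of that step, assuming a user-supplied feature map phi and a batch of (s, a, r, s') samples (both illustrative conventions, not the paper's code):

```python
import numpy as np

def lstdq(samples, phi, policy, gamma=0.95):
    """One LSTD-Q step: fit weights w with Q(s, a) ~ phi(s, a) @ w for the
    given policy, from a batch of (s, a, r, s_next) transitions."""
    k = phi(samples[0][0], samples[0][1]).shape[0]
    A = np.zeros((k, k))
    b = np.zeros(k)
    for s, a, r, s_next in samples:
        f = phi(s, a)                          # features of the visited pair
        f_next = phi(s_next, policy(s_next))   # successor features under pi
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    # a least-squares solve is more robust than inverting A directly
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w
```

Policy improvement is then the greedy step π(s) = argmax over a of φ(s, a)ᵀw, and the two steps are iterated until the policy stabilizes.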
Least Median of Squares Regression
Journal of the American Statistical Association, 1984
"... ..."
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software, 1982
"... An iterative method is given for solving Ax ~ffi b and minU Ax b 112, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerica ..."
Cited by 649 (21 self)
gradient algorithms, indicating that LSQR is the most reliable algorithm when A is ill-conditioned. Categories and Subject Descriptors: G.1.2 [Numerical Analysis]: Approximation – least squares approximation; G.1.3 [Numerical Analysis]: Numerical Linear Algebra – linear systems (direct and ...
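SciPy ships an implementation of this algorithm as scipy.sparse.linalg.lsqr; a short usage sketch on a synthetic sparse system (the matrix, sizes, and tolerances here are illustrative):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = sparse_random(1000, 200, density=0.01, format="csr", random_state=0)
b = rng.standard_normal(1000)

# lsqr solves min ||Ax - b||_2 iteratively via Golub-Kahan bidiagonalization
x, istop, itn, r1norm = lsqr(A, b, atol=1e-10, btol=1e-10)[:4]
print(f"stop code {istop}, {itn} iterations, residual norm {r1norm:.3e}")
```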
Benchmarking Least Squares Support Vector Machine Classifiers
Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LSSVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 446 (46 self)
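That linear set of equations is, in the standard LS-SVM formulation, a single (n+1)×(n+1) KKT system in the bias b and dual weights α, replacing the QP of classical SVMs. A minimal NumPy sketch under those standard formulas, assuming an RBF kernel and illustrative hyperparameters γ and σ (not values from the paper):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Solve the LS-SVM KKT linear system instead of an SVM QP."""
    n = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
    # system: [[0, y^T], [y, Omega + I/gamma]] @ [b, alpha] = [0, 1]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]              # bias b, dual weights alpha

def lssvm_predict(X_train, y_train, b, alpha, X_test, sigma=1.0):
    K = rbf_kernel(X_test, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)

# toy usage: two Gaussian blobs with labels in {-1, +1}
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(+1, 1, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, y, b, alpha, X) == y).mean())
```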
Least Angle Regression
Ann. Statist.
"... The purpose of model selection algorithms such as All Subsets, Forward Selection and Backward Elimination is to choose a linear model on the basis of the same set of data to which the model will be applied. Typically we have available a large collection of possible covariates from which we hope to s ..."
Cited by 1308 (43 self)
implements the Lasso, an attractive version of ordinary least squares that constrains the sum of the absolute regression coefficients; the LARS modification calculates all possible Lasso estimates for a given problem, using an order of magnitude less computer time than previous methods. (2) A different LARS
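scikit-learn exposes this path computation as lars_path; a short sketch that traces every Lasso breakpoint as the constraint on the sum of absolute coefficients is relaxed (the diabetes dataset is a stand-in example):

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import lars_path

X, y = load_diabetes(return_X_y=True)

# method="lasso" gives the LARS modification that computes the
# entire Lasso solution path in one pass
alphas, active, coefs = lars_path(X, y, method="lasso")
print(coefs.shape)   # (n_features, n_steps): one column per breakpoint
```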
Regression Quantiles
Econometrica, 1978
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 870 (19 self)
Direct Least Square Fitting of Ellipses
1998
"... This work presents a new efficient method for fitting ellipses to scattered data. Previous algorithms either fitted general conics or were computationally expensive. By minimizing the algebraic distance subject to the constraint 4ac  b² = 1 the new method incorporates the ellipticity constraint ..."
Abstract

Cited by 421 (3 self)
 Add to MetaCart
This work presents a new efficient method for fitting ellipses to scattered data. Previous algorithms either fitted general conics or were computationally expensive. By minimizing the algebraic distance subject to the constraint 4ac  b² = 1 the new method incorporates the ellipticity constraint into the normalization factor. The proposed method combines several advantages: (i) It is ellipsespecific so that even bad data will always return an ellipse; (ii) It can be solved naturally by a generalized eigensystem and (iii) it is extremely robust, efficient and easy to implement.
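A minimal NumPy/SciPy sketch of that construction, assuming the usual scatter-matrix formulation of the algebraic distance (numerically refined variants in the later literature differ in detail):

```python
import numpy as np
from scipy.linalg import eig

def fit_ellipse(x, y):
    """Fit a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 subject to 4ac - b^2 = 1."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    S = D.T @ D                      # scatter matrix of the design matrix
    C = np.zeros((6, 6))             # constraint matrix: p.T @ C @ p = 4ac - b^2
    C[0, 2] = C[2, 0] = 2.0
    C[1, 1] = -1.0
    # generalized eigenproblem S p = lam * C p; the ellipse is the
    # eigenvector belonging to the single positive finite eigenvalue
    vals, vecs = eig(S, C)
    vals = np.real(vals)
    good = np.flatnonzero(np.isfinite(vals) & (vals > 0))
    return np.real(vecs[:, good[0]])   # conic coefficients (a, b, c, d, e, f)

# noisy samples of an ellipse still come back as an ellipse
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 100)
x = 3 * np.cos(t) + 0.05 * rng.standard_normal(100)
y = 2 * np.sin(t) + 0.05 * rng.standard_normal(100)
print(fit_ellipse(x, y))
```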
Compressed Least-Squares Regression
"... We consider the problem of learning, from K data, a regression function in a linear space of high dimension N using projections onto a random subspace of lower dimension M. From any algorithm minimizing the (possibly penalized) empirical risk, we provide bounds on the excess risk of the estimate com ..."
Cited by 30 (4 self)
controlled) approximation error. We apply the analysis to Least-Squares (LS) regression and discuss the excess risk and numerical complexity of the resulting “Compressed Least Squares Regression” (CLSR) in terms of N, K, and M. When we choose M = O(√K), we show that CLSR has an estimation error of order ...
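The estimator itself is two lines of linear algebra: draw a random M×N projection, compress the features, and run ordinary least squares in the M-dimensional image with M = O(√K). A toy sketch under those assumptions (Gaussian projection and synthetic data, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
K, N = 10_000, 1_000                 # K data points, high feature dimension N
M = int(np.sqrt(K))                  # projection dimension M = O(sqrt(K))

X = rng.standard_normal((K, N))      # illustrative feature matrix
w_true = rng.standard_normal(N) / np.sqrt(N)
y = X @ w_true + 0.1 * rng.standard_normal(K)

P = rng.standard_normal((M, N)) / np.sqrt(M)   # random projection
Z = X @ P.T                          # compressed features, shape (K, M)

# ordinary least squares in the M-dimensional projected space
w, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(f"train MSE in compressed space: {np.mean((Z @ w - y) ** 2):.4f}")
```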