Results 1–10 of 491,498

Least-Squares Policy Iteration
Journal of Machine Learning Research, 2003
"... We propose a new approach to reinforcement learning for control problems which combines value-function approximation with linear architectures and approximate policy iteration. This new approach ..."
Cited by 461 (12 self)

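The combination this entry describes, a least-squares temporal-difference solve (LSTD-Q) inside a policy-improvement loop, can be sketched as follows. The toy MDP, the one-hot features, and all parameter names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Minimal LSPI-style sketch on a hypothetical 2-state, 2-action MDP.
# One-hot features over (state, action) stand in for a general
# linear architecture.
n_states, n_actions, gamma = 2, 2, 0.9

def phi(s, a):
    """One-hot feature vector for a (state, action) pair."""
    f = np.zeros(n_states * n_actions)
    f[s * n_actions + a] = 1.0
    return f

def lstdq(samples, policy, reg=1e-6):
    """Solve A w = b for the Q-function weights of `policy` (LSTD-Q)."""
    k = n_states * n_actions
    A, b = reg * np.eye(k), np.zeros(k)
    for s, a, r, s2 in samples:
        f = phi(s, a)
        A += np.outer(f, f - gamma * phi(s2, policy[s2]))
        b += f * r
    return np.linalg.solve(A, b)

# Toy transitions: action 1 moves toward state 1, which pays reward 1.
samples = [(0, 0, 0.0, 0), (0, 1, 0.0, 1), (1, 0, 0.0, 0), (1, 1, 1.0, 1)]

policy = np.zeros(n_states, dtype=int)      # start with "always action 0"
for _ in range(10):                          # approximate policy iteration
    w = lstdq(samples, policy)               # policy evaluation
    policy = np.array([max(range(n_actions), key=lambda a: phi(s, a) @ w)
                       for s in range(n_states)])   # greedy improvement
```

On this toy problem the loop settles on the policy that always takes action 1, the action leading to the rewarding state.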
LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares
ACM Trans. Math. Software, 1982
"... An iterative method is given for solving Ax = b and min ||Ax − b||_2, where the matrix A is large and sparse. The method is based on the bidiagonalization procedure of Golub and Kahan. It is analytically equivalent to the standard method of conjugate gradients, but possesses more favorable numerical properties. Reliable stopping criteria are derived, along with estimates of standard errors for x and the condition number of A. These are used in the FORTRAN implementation of the method, subroutine LSQR. Numerical tests are described comparing LSQR with several other conjugate ..."
Cited by 649 (21 self)

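The conjugate-gradient iteration that LSQR is analytically equivalent to can be sketched in a few lines. This is plain CGLS (conjugate gradient on the normal equations), not the Golub–Kahan bidiagonalization recurrences of LSQR itself, which exist precisely because they behave better numerically; a production solve would use `scipy.sparse.linalg.lsqr` instead:

```python
import numpy as np

def cgls(A, b, iters=50, tol=1e-12):
    """Conjugate gradient on the normal equations (CGLS).

    Solves min ||Ax - b||_2 using only products with A and A^T,
    the same access pattern (and, in exact arithmetic, the same
    iterates) as LSQR, but without its stabilized recurrences.
    """
    x = np.zeros(A.shape[1])
    r = b.astype(float).copy()        # residual b - A x (x starts at 0)
    s = A.T @ r                       # negative gradient of 0.5*||Ax - b||^2
    p, gamma = s.copy(), s @ s
    for _ in range(iters):
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if np.sqrt(gamma_new) < tol:  # crude stopping rule; LSQR's are sharper
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))      # tall, well-conditioned test problem
b = rng.standard_normal(20)
x = cgls(A, b)
```

For this small dense problem the result matches a direct least-squares solve; the iterative formulation pays off when A is large, sparse, and available only through matrix–vector products.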
Goal-oriented A Posteriori Error Estimation for Finite Volume Methods
"... A general framework for goal-oriented a posteriori error estimation for finite volume methods is presented. The framework does not rely on recasting finite volume methods as special cases of finite element methods, but instead directly determines error estimators from the discretized finite volume ..."

Goal-Oriented Error Estimation and Adaptivity for the Finite Element Method
Comput. Math. Appl., 1999
"... this paper, we study a new approach in a posteriori error estimation, in which the numerical error of finite element approximations is estimated in terms of quantities of interest rather than the classical energy norm. These so-called quantities of interest are characterized by linear functionals on ..."
Cited by 75 (9 self)

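The quantity-of-interest idea behind this entry is usually summarized by the dual-weighted-residual identity; the generic notation below (bilinear form a(·,·), load ℓ, goal functional J) is standard duality bookkeeping, not taken from the paper itself:

```latex
% Goal-oriented error identity for a linear problem (sketch).
% Primal: find u   with a(u, v) = \ell(v)  for all test functions v.
% Dual:   find z   with a(v, z) = J(v)     for all test functions v.
\begin{align*}
  J(u) - J(u_h)
    &= a(u - u_h,\, z)
       && \text{(dual problem with } v = u - u_h\text{)} \\
    &= \ell(z) - a(u_h,\, z)
       && \text{(primal problem)} \\
    &= \ell(z - z_h) - a(u_h,\, z - z_h)
       && \text{(Galerkin orthogonality, any discrete } z_h\text{)}
\end{align*}
```

The last line is what makes the approach practical: the error in the goal functional is the residual of the computed solution weighted by how well the dual solution can be resolved, which localizes naturally into element-wise adaptivity indicators.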
Goal-Oriented A Posteriori Error Estimates for Transport Problems
"... Some aspects of goal-oriented a posteriori error estimation are addressed in the context of steady convection-diffusion equations. The difference between the exact and approximate values of a linear target functional is expressed in terms of integrals that depend on the solutions to the p ..."
Cited by 3 (1 self)

Benchmarking Least Squares Support Vector Machine Classifiers
Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of eq ..."
Cited by 446 (46 self)

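The "linear set of equations" that replaces the SVM QP can be sketched directly: with equality constraints and a least-squares loss, the KKT conditions form one dense linear system in the bias and the dual variables. The kernel choice, parameter names, and toy data below are illustrative assumptions:

```python
import numpy as np

def rbf(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Fit an LS-SVM classifier by solving a single linear system.

    Solves  [ 0    y^T        ] [b]     [0]
            [ y  Omega + I/g  ] [alpha] [1]
    where Omega_ij = y_i y_j K(x_i, x_j); a sketch of the
    Suykens-style formulation, not a benchmarked implementation.
    """
    n = len(y)
    Omega = np.outer(y, y) * rbf(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:], A[1:, 0] = y, y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda Xq: np.sign(rbf(Xq, X, sigma) @ (alpha * y) + b)

# Tiny separable toy problem, labels in {-1, +1}.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
predict = lssvm_fit(X, y)
```

The trade-off the benchmark paper examines is visible in the code: training reduces to one `solve` call, but the equality constraints mean every training point carries a nonzero multiplier, so sparseness of the support-vector set is lost.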
Locally weighted learning
Artificial Intelligence Review, 1997
"... This paper surveys locally weighted learning, a form of lazy learning and memory-based learning, and focuses on locally weighted linear regression. The survey discusses distance functions, smoothing parameters, weighting functions, local model structures, regularization of the estimates and bias, ass ..."
Cited by 594 (53 self)

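Locally weighted linear regression, the survey's focus, fits a fresh weighted least-squares model at each query point. The Gaussian weighting function and the bandwidth value below are one choice among the many the survey compares, picked here only for illustration:

```python
import numpy as np

def lwr_predict(xq, X, y, tau=0.2):
    """Locally weighted linear regression at one scalar query point xq.

    Gaussian distance weighting with bandwidth `tau` and a local
    affine model; both choices are illustrative, not canonical.
    """
    w = np.exp(-((X - xq) ** 2) / (2 * tau ** 2))   # distance-based weights
    Phi = np.column_stack([np.ones_like(X), X])     # local affine features
    W = np.diag(w)
    # Weighted least squares: (Phi^T W Phi) beta = Phi^T W y
    beta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)
    return beta[0] + beta[1] * xq

# Memorize noiseless samples of sin, then query near the peak.
X = np.linspace(0, 2 * np.pi, 50)
y = np.sin(X)
yq = lwr_predict(np.pi / 2, X, y)   # close to sin(pi/2) = 1
```

This is "lazy" in exactly the sense the survey means: nothing is fit until a query arrives, and every prediction solves its own small system against the stored training set.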
Finite element methods of least-squares type
1998
"... We consider the application of least-squares variational principles to the numerical solution of partial differential equations. Our main focus is on the development of least-squares finite element methods for elliptic boundary value problems arising in fields such as fluid flows, linear elasticit ..."
Cited by 51 (3 self)

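As a sketch of the least-squares variational principle for an elliptic problem, the Poisson equation can be recast as a first-order system whose L² residual is minimized over conforming finite element spaces. This particular functional is a common textbook choice, not necessarily the exact formulation of the paper:

```latex
% Least-squares FEM sketch for -\Delta u = f.
% Introduce the flux variable q = -\nabla u, so that
% \nabla \cdot q = f, and minimize the residual functional
\begin{equation*}
  J(u_h, q_h;\, f)
    \;=\; \|\,\nabla\!\cdot q_h - f\,\|_{0}^{2}
    \;+\; \|\,q_h + \nabla u_h\,\|_{0}^{2}
    \;\longrightarrow\; \min_{(u_h,\, q_h) \,\in\, V_h \times W_h}.
\end{equation*}
```

Minimizing J yields a symmetric positive definite discrete system regardless of the original operator's structure, which is the main attraction of the least-squares approach over mixed Galerkin formulations.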
Sequential data assimilation with a nonlinear quasi-geostrophic model using Monte Carlo methods to forecast error statistics
J. Geophys. Res., 1994
"... A new sequential data assimilation method is discussed. It is based on forecasting the error statistics using Monte Carlo methods, a better alternative than solving the traditional and computationally extremely demanding approximate error covariance equation used in the extended Kalman filter. The ..."
Cited by 782 (22 self)

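The Monte Carlo idea in this entry, forecasting error statistics with an ensemble instead of propagating a covariance equation, is what became the ensemble Kalman filter. A scalar-state analysis step can be sketched as follows; the perturbed-observation variant, the toy numbers, and all names are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def enkf_update(ensemble, obs, obs_var):
    """One analysis step of a perturbed-observation ensemble Kalman filter.

    The forecast error covariance is the sample variance of the Monte
    Carlo ensemble, replacing the extended Kalman filter's explicit
    covariance propagation. Scalar state observed directly; a toy
    illustration only.
    """
    P_f = np.var(ensemble, ddof=1)        # Monte Carlo forecast covariance
    K = P_f / (P_f + obs_var)             # Kalman gain
    # Each member assimilates its own perturbed copy of the observation,
    # which keeps the analysis ensemble spread statistically consistent.
    perturbed = obs + rng.normal(0.0, np.sqrt(obs_var), size=ensemble.shape)
    return ensemble + K * (perturbed - ensemble)

forecast = rng.normal(0.0, 1.0, size=200)   # forecast ensemble, spread ~1
analysis = enkf_update(forecast, obs=1.0, obs_var=0.01)
```

After assimilating an accurate observation, the ensemble mean shifts toward the observed value and the ensemble spread contracts, exactly the behavior the full quasi-geostrophic experiments in the paper rely on at model scale.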