Results 1 - 7 of 7
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
Mathematical Programming, 1993
Abstract

Cited by 62 (6 self)
A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(−αx)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method. Key Words: Smo...
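The smoothing described in this abstract has a simple closed form: integrating the sigmoid gives p(x, α) = x + (1/α)·log(1 + e^(−αx)). A minimal sketch, with the function name and the overflow-safe rearrangement being my own choices rather than the paper's:

```python
import math

def smooth_plus(x, alpha):
    """Smooth approximation to max(x, 0) obtained by integrating the
    sigmoid 1/(1 + exp(-alpha*x)).  Closed form x + log(1+e^(-alpha*x))/alpha,
    rearranged around |x| so that exp() never overflows."""
    return max(x, 0.0) + math.log1p(math.exp(-alpha * abs(x))) / alpha

# p majorizes the plus function, and the worst-case gap is log(2)/alpha,
# attained at x = 0 -- which is why accuracy improves as alpha grows
for alpha in (1.0, 10.0, 100.0):
    xs = [i / 10.0 for i in range(-50, 51)]
    gap = max(smooth_plus(x, alpha) - max(x, 0.0) for x in xs)
    print(alpha, gap, math.log(2.0) / alpha)
```

As α grows the approximation tightens, matching the abstract's requirement that α be "sufficiently large" for high accuracy.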
Choice of Norms for Data Fitting and Function Approximation, 2000
Abstract

Cited by 5 (0 self)
This article is, however, not concerned with interpolation, and thus in the data fitting context, it will be assumed that the data can be modelled by a function containing a number of free parameters, and minimizing a norm is appropriate. Perhaps the most commonly occurring criterion in such cases is the least squares norm. Its use has a long and distinguished history, it is relatively well understood, and there are good algorithms available. Yet there are often situations where it is not ideal. For example, a statistical justification for least squares requires certain assumptions about the error pattern in the data, and if these are not satisfied there may be bias in the estimate
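A tiny illustration of the bias the abstract warns about (my own hypothetical data, not the article's): with one gross error in the sample, the least squares location estimate (the mean) is dragged far from the bulk of the data, while the ℓ1 estimate (the median) is not.

```python
from statistics import mean, median

# five consistent observations plus one gross outlier (hypothetical data)
data = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]

ls_estimate = mean(data)    # minimizes the sum of squared residuals
l1_estimate = median(data)  # minimizes the sum of absolute residuals

print(ls_estimate)  # pulled toward 50 by the single bad point
print(l1_estimate)  # stays with the bulk of the data
```

When the Gaussian-error assumption behind least squares fails like this, a different norm (or a robust criterion such as the Huber M-estimator treated in the entries below) is the better choice.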
Approximation in Normed Linear Spaces, 2000
Abstract

Cited by 4 (0 self)
A historical account is given of the development of methods for solving approximation problems set in normed linear spaces. Approximation of both real functions and real data is considered, with particular reference to L_p (or l_p) and Chebyshev norms. As well as coverage of methods for the usual linear problems, an account is given of the development of methods for approximation by functions which are nonlinear in the free parameters, and special attention is paid to some particular nonlinear approximating families.
1 Introduction
The purpose of this paper is to give a historical account of the development of numerical methods for a range of problems in best approximation, that is, problems which involve the minimization of a norm. A treatment is given of approximation of both real functions and data. For the approximation of functions, the emphasis is on the use of the Chebyshev norm, while for data approximation, we consider a wider range of criteria, including the other l ...
Fitting Data With Errors In All Variables Using The Huber M-Estimator, 1999
Abstract

Cited by 2 (0 self)
This article is concerned with the problem of data fitting where the model is nonlinear in the free parameters, using the Huber M-estimator. Under the assumption that there are significant errors in all the variables, an efficient algorithm is developed. Some numerical examples are given.
Bound Constrained Quadratic Programming Via Piecewise Quadratic Functions
Mathematical Programming, 1999
Abstract
We consider the strictly convex quadratic programming problem with bounded variables. A dual problem is derived using Lagrange duality. The dual problem is the minimization of an unconstrained, piecewise quadratic function. It involves a lower bound of λ₁, the smallest eigenvalue of a symmetric, positive definite matrix, and is solved by Newton iteration with line search. The paper describes the algorithm and its implementation, including estimation of λ₁, how to get a good starting point for the iteration, and up- and downdating of the Cholesky factorization. Results of extensive testing and comparison with other methods for constrained QP are given. Key words. Bound constrained quadratic programming. Huber's M-estimator. Condition estimation. Newton iteration. Factorization update.
1. Introduction
The purpose of the present paper is to describe a finite, dual Newton algorithm for the bound constrained quadratic programming problem. Let c ∈ ℝⁿ and H ∈ ℝ^(n×n) be a given ve...
Fitting Data With Errors In All Variables Using The Huber M-Estimator, 1999
Abstract
This article is concerned with the problem of data fitting where the model is nonlinear in the free parameters, using the Huber M-estimator. Under the assumption that there are significant errors in all the variables, an efficient algorithm is developed. Some numerical examples are given. Key words. data fitting, errors in variables, Huber M-estimator AMS subject classifications. 65D10, 65K10, 65U05
1. Introduction.
A general data fitting problem may be described as follows. Let a set of points (x_i, y_i), i = 1, …, n be given with x_i ∈ ℝ^m, y_i ∈ ℝ, and assume an appropriate model for these data is given by y = f(x, β), where β ∈ ℝ^p is a vector of parameters. Because of observation errors and since n, the number of observations, usually exceeds p, it is generally not possible to fit the model exactly. The traditional approach is therefore to find parameters β that minimize in some sense the errors or residuals g_i(β) = y_i − f(x_i, β), i = 1, …
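For concreteness, here is a minimal sketch of the Huber M-estimator for the simplest possible model f(x, β) = β (a location fit), minimized by iteratively reweighted least squares. The threshold name gamma and the IRLS solver are my assumptions for illustration, not the authors' algorithm, which treats the much harder errors-in-all-variables case:

```python
def huber_rho(r, gamma):
    """Huber's loss: quadratic for |r| <= gamma, linear beyond."""
    a = abs(r)
    return 0.5 * r * r if a <= gamma else gamma * (a - 0.5 * gamma)

def huber_location(y, gamma, iters=100):
    """Minimize sum_i huber_rho(y_i - beta) over beta by IRLS:
    points inside the band get weight 1, outliers gamma/|residual|."""
    beta = sum(y) / len(y)  # start from the least squares fit
    for _ in range(iters):
        w = [1.0 if abs(yi - beta) <= gamma else gamma / abs(yi - beta)
             for yi in y]
        beta = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    return beta

data = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]  # one gross outlier
print(huber_location(data, gamma=1.0))    # stays near the bulk, not the mean
```

The outlier still contributes, but only through the bounded linear tail of the loss, so its influence on the fit is capped.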
Huber Approximation for the Nonlinear ℓ1 Problem, 1999
Abstract
The smooth Huber approximation to the nonlinear ℓ1 problem was proposed by Tishler and Zang (1982), and further developed in Yang (1995). In the present paper, we use the ideas of Gould (1989) to give a new algorithm with rate of convergence results for the smooth Huber approximation. The method is carefully implemented, and results of computational tests are reported.
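To see why a Huber-type function smooths the ℓ1 objective, note that one common form (used here as an assumed illustration, not necessarily the exact function of Tishler and Zang) replaces |r| by a quadratic of width gamma near zero; the resulting function is differentiable everywhere and its gap to |r| vanishes as gamma → 0:

```python
def smooth_abs(r, gamma):
    """Huber-type smoothing of |r|: r^2/(2*gamma) for |r| <= gamma,
    |r| - gamma/2 beyond, so the function is C^1 everywhere."""
    a = abs(r)
    return a * a / (2.0 * gamma) if a <= gamma else a - 0.5 * gamma

# the gap to |r| is at most gamma/2 (attained outside the quadratic band),
# so shrinking gamma recovers the l1 objective in the limit
for gamma in (1.0, 0.1, 0.01):
    rs = [i / 100.0 for i in range(-200, 201)]
    gap = max(abs(r) - smooth_abs(r, gamma) for r in rs)
    print(gamma, gap)
```

This uniform gamma/2 bound is what makes it possible to drive a smooth solver toward the nonsmooth ℓ1 solution by reducing gamma.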