Results 1 – 9 of 9
A training algorithm for optimal margin classifiers
 PROCEEDINGS OF THE 5TH ANNUAL ACM WORKSHOP ON COMPUTATIONAL LEARNING THEORY
, 1992
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classifiaction functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjust ..."
Abstract

Cited by 1279 (44 self)
A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
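The dual of this maximal-margin problem is a quadratic program over multipliers α_i, and the supporting patterns are exactly the training points with α_i > 0. Below is a minimal sketch of that dual, solved with SciPy's general-purpose SLSQP solver on invented toy data rather than the authors' dedicated QP solver:

```python
import numpy as np
from scipy.optimize import minimize

# Toy linearly separable data: two classes in the plane (invented for illustration).
X = np.array([[2.0, 2.0], [2.5, 3.0], [3.0, 2.5],
              [0.0, 0.0], [0.5, 1.0], [1.0, 0.5]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])

G = (y[:, None] * X) @ (y[:, None] * X).T   # Gram matrix: y_i y_j <x_i, x_j>

def neg_dual(a):
    # Negative of the dual objective; minimizing it maximizes the margin.
    return 0.5 * a @ G @ a - a.sum()

cons = ({'type': 'eq', 'fun': lambda a: a @ y},)   # sum_i alpha_i y_i = 0
bnds = [(0.0, None)] * len(y)                      # alpha_i >= 0 (hard margin)

res = minimize(neg_dual, np.zeros(len(y)), bounds=bnds, constraints=cons)
alpha = res.x

# Supporting patterns are those with non-negligible alpha_i.
sv = alpha > 1e-6
w = ((alpha * y)[:, None] * X).sum(axis=0)   # weight vector
b = y[sv][0] - X[sv][0] @ w                  # offset from any support vector

print("support vectors:", np.where(sv)[0], " w:", w, " b:", b)
```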
A Primal-Dual Algorithm for Minimizing a Non-Convex Function Subject to Bound and Linear Equality Constraints
, 1996
"... A new primaldual algorithm is proposed for the minimization of nonconvex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primaldual step and a Newtonlike step in order to ensure descent on a suitable merit function. Converge ..."
Abstract

Cited by 16 (0 self)
A new primal-dual algorithm is proposed for the minimization of nonconvex objective functions subject to simple bounds and linear equality constraints. The method alternates between a classical primal-dual step and a Newton-like step in order to ensure descent on a suitable merit function. Convergence of a well-defined subsequence of iterates is proved from arbitrary starting points. Algorithmic variants are discussed and preliminary numerical results presented. 1 IBM T.J. Watson Research Center, P.O. Box 218, Yorktown Heights, NY 10598, USA. Email: arconn@watson.ibm.com 2 Department for Computation and Information, Rutherford Appleton Laboratory, Chilton, Oxfordshire, OX11 0QX, England, EU. Email: nimg@letterbox.rl.ac.uk 3 Current reports available by anonymous ftp from joyousgard.cc.rl.ac.uk (internet 130.246.9.91) in the directory "pub/reports". 4 Department of Mathematics, Facultés Universitaires ND de la Paix, 61, rue de Bruxelles, B-5000 Namur, Belgium, EU. Email: pht@ma...
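For a sense of the problem class only (not the paper's primal-dual/Newton alternation), here is a minimal sketch that minimizes an invented nonconvex objective under simple bounds and one linear equality constraint with SciPy's SLSQP solver:

```python
import numpy as np
from scipy.optimize import minimize

# A nonconvex objective with simple bounds and one linear equality constraint:
#   min  sum_i cos(x_i) + 0.1 * ||x||^2   s.t.  sum(x) = 1,  0 <= x_i <= 2
# (objective and data invented for illustration)
def f(x):
    return np.sum(np.cos(x)) + 0.1 * np.dot(x, x)

n = 4
res = minimize(
    f,
    x0=np.full(n, 0.25),                  # feasible starting point
    bounds=[(0.0, 2.0)] * n,              # simple bounds
    constraints={'type': 'eq',            # linear equality  e^T x = 1
                 'fun': lambda x: x.sum() - 1.0},
    method='SLSQP',
)
# A local minimizer only: nonconvexity means no global guarantee.
print(res.x, f(res.x))
```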
Time-critical Multiresolution Scene Rendering
 IEEE Visualization
, 1999
"... We describe a framework for timecritical rendering of graphics scenes composed of a large number of objects having complex geometric descriptions. Our technique relies upon a scene description in which objects are represented as multiresolution meshes. We perform a constrained optimization at each ..."
Abstract

Cited by 14 (0 self)
We describe a framework for time-critical rendering of graphics scenes composed of a large number of objects having complex geometric descriptions. Our technique relies upon a scene description in which objects are represented as multiresolution meshes. We perform a constrained optimization at each frame to choose the resolution of each potentially visible object that generates the best quality image while meeting timing constraints. The technique provides smooth level-of-detail control and aims at guaranteeing a uniform, bounded frame rate even for widely changing viewing conditions. The optimization algorithm is independent of the particular data structure used to represent multiresolution meshes. The only requirements are the ability to represent a mesh with an arbitrary number of triangles and to traverse a mesh structure at an arbitrary resolution in a short, predictable time. A data structure satisfying these criteria is described and experimental results are discussed. Keyword...
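One simple way to realize such a per-frame constrained optimization is a greedy upgrade scheme: start every object at its coarsest mesh and repeatedly spend remaining frame time on the upgrade with the best quality gain per millisecond. The sketch below uses invented cost/quality tables and illustrates the idea only; it is not the paper's algorithm:

```python
import heapq

# Hypothetical per-object LOD tables: levels[name] = list of (render_cost_ms,
# quality) pairs, sorted from coarsest to finest. Values are invented.
levels = {
    'terrain': [(2.0, 1.0), (5.0, 2.5), (11.0, 3.2)],
    'statue':  [(0.5, 0.3), (1.5, 1.4), (4.0, 2.0)],
    'tree':    [(0.2, 0.1), (0.8, 0.6), (2.5, 1.1)],
}

def choose_lods(levels, budget_ms):
    """Greedy marginal benefit/cost upgrades under a frame-time budget."""
    choice = {name: 0 for name in levels}            # start at coarsest level
    spent = sum(lv[0][0] for lv in levels.values())
    heap = []
    for name, lv in levels.items():
        if len(lv) > 1:
            dc = lv[1][0] - lv[0][0]
            dq = lv[1][1] - lv[0][1]
            heapq.heappush(heap, (-dq / dc, name))   # max-heap on quality/cost
    while heap:
        _, name = heapq.heappop(heap)
        lv, k = levels[name], choice[name]
        dc = lv[k + 1][0] - lv[k][0]
        if spent + dc <= budget_ms:                  # upgrade if it still fits
            spent += dc
            choice[name] = k + 1
            if k + 2 < len(lv):                      # queue the next upgrade
                dc2 = lv[k + 2][0] - lv[k + 1][0]
                dq2 = lv[k + 2][1] - lv[k + 1][1]
                heapq.heappush(heap, (-dq2 / dc2, name))
    return choice, spent

print(choose_lods(levels, budget_ms=10.0))
```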
New Complexity Analysis of the Primal-Dual Newton Method for Linear Optimization
, 1998
"... We deal with the primaldual Newton method for linear optimization (LO). Nowadays, this method is the working horse in all efficient interior point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the pra ..."
Abstract

Cited by 11 (7 self)
We deal with the primal-dual Newton method for linear optimization (LO). Nowadays, this method is the workhorse in all efficient interior point algorithms for LO, and its analysis is the basic element in all polynomiality proofs of such algorithms. At present there is still a gap between the practical behavior of the algorithms and the theoretical performance results, in favor of the practical behavior. This is especially true for so-called large-update methods. We present some new analysis tools, based on a proximity measure introduced by Jansen et al. in 1994, that may help to close this gap. This proximity measure has not been used in the analysis of large-update methods before. Our new analysis not only provides a unified way to analyze both large-update and small-update methods, but also improves the known iteration bounds.
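The Newton step in question linearizes the perturbed central-path conditions Ax = b, Aᵀy + s = c, xs = μe. A dense-linear-algebra sketch of one such step follows; a practical interior point method would add step-length control and sparse normal-equations solves, and the example data are invented:

```python
import numpy as np

def newton_step(A, b, c, x, y, s, mu):
    """One primal-dual Newton step toward the mu-center of an LP in
    standard form: min c^T x  s.t.  A x = b, x >= 0 (dual slacks s > 0).

    Solves the linearized central-path conditions
        A dx          = b - A x
        A^T dy + ds   = c - A^T y - s
        S dx + X ds   = mu e - X S e
    by assembling the full KKT system (a dense solve; fine for a sketch).
    """
    m, n = A.shape
    X, S = np.diag(x), np.diag(s)
    K = np.block([
        [A,                np.zeros((m, m)), np.zeros((m, n))],
        [np.zeros((n, n)), A.T,              np.eye(n)],
        [S,                np.zeros((n, m)), X],
    ])
    r = np.concatenate([b - A @ x,
                        c - A.T @ y - s,
                        mu * np.ones(n) - x * s])
    d = np.linalg.solve(K, r)
    return d[:n], d[n:n + m], d[n + m:]   # dx, dy, ds

# Tiny illustrative problem (data invented): min x1 + 2*x2  s.t.  x1 + x2 = 1.
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y, s = np.array([0.5, 0.5]), np.array([0.0]), np.array([1.0, 2.0])
dx, dy, ds = newton_step(A, b, c, x, y, s, mu=0.1)
print(dx, dy, ds)
```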
Self-regular proximities and new search directions for linear and semidefinite optimization
 Mathematical Programming
, 2000
"... In this paper, we first introduce the notion of selfregular functions. Various appealing properties of selfregular functions are explored and we also discuss the relation between selfregular functions and the wellknown selfconcordant functions. Then we use such functions to define selfregular p ..."
Abstract

Cited by 8 (5 self)
In this paper, we first introduce the notion of self-regular functions. Various appealing properties of self-regular functions are explored, and we also discuss the relation between self-regular functions and the well-known self-concordant functions. Then we use such functions to define self-regular proximity measures for path-following interior point methods for solving linear optimization (LO) problems. Any self-regular proximity measure naturally defines a primal-dual search direction. In this way a new class of primal-dual search directions for solving LO problems is obtained. Using the appealing properties of self-regular functions, we prove that these new large-update path-following methods for LO enjoy a polynomial, $O\!\left(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon}\right)$, iteration bound, where $q \ge 1$ is the so-called barrier degree of the self-regular proximity measure underlying the algorithm. When $q$ increases, this bound approaches the best known complexity bound for interior point methods, namely $O\!\left(\sqrt{n}\log\frac{n}{\varepsilon}\right)$. Our unified analysis also provides the $O\!\left(\sqrt{n}\log\frac{n}{\varepsilon}\right)$ best known iteration bound of small-update IPMs. At each iteration, we need only solve one linear system. As a byproduct of our results, we remove some limitations of the algorithms presented in [24] and improve their complexity as well. An extension of these results to semidefinite optimization (SDO) is also discussed.
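To see why the large-update bound tends to the small-update bound as $q$ grows, note the exponent of $n$:

\[
\lim_{q\to\infty}\frac{q+1}{2q}=\frac{1}{2},
\qquad\text{so}\qquad
O\!\left(n^{\frac{q+1}{2q}}\log\frac{n}{\varepsilon}\right)\;\longrightarrow\;O\!\left(\sqrt{n}\,\log\frac{n}{\varepsilon}\right).
\]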
SURVEY OF DESCENT BASED METHODS FOR UNCONSTRAINED AND LINEARLY CONSTRAINED MINIMIZATION
Nonlinear Programming Problems
"... subject includes all optimization problems other than linear programming problems, it is not usually the case. Optimization problems involving discrete valued variables (i. e., those which are restricted to assume values from speci ed discrete sets, such as 01 variables) are not usually considered ..."
Abstract
subject includes all optimization problems other than linear programming problems, it is not usually the case. Optimization problems involving discrete valued variables (i.e., those which are restricted to assume values from specified discrete sets, such as 0-1 variables) are not usually considered under nonlinear programming; they are called discrete, or mixed-discrete, optimization problems and are studied separately. There are good reasons for this. To solve discrete optimization problems we normally need very special techniques (typically of some enumerative type) different from those needed to tackle continuous variable optimization problems. So, the term nonlinear program usually refers to an optimization problem in which the variables are continuous variables, and the problem is of the following general form:

\[
\begin{aligned}
\text{minimize}\quad & \theta(x) \\
\text{subject to}\quad & h_i(x) = 0, \quad i = 1, \dots, m \\
& g_p(x) \ge 0, \quad p = 1, \dots, t
\end{aligned}
\]
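For illustration, a small problem of exactly this form (θ, h_i, and g_p all invented) can be handed to SciPy's SLSQP solver, which accepts equality and inequality constraints directly:

```python
import numpy as np
from scipy.optimize import minimize

# Instance of the general form above (all functions invented for illustration):
#   minimize   theta(x) = (x1 - 1)^2 + (x2 - 2)^2
#   subject to h1(x) = x1 + x2 - 2 = 0           (equality)
#              g1(x) = x1 >= 0,  g2(x) = x2 >= 0 (inequalities)
theta = lambda x: (x[0] - 1.0)**2 + (x[1] - 2.0)**2

res = minimize(
    theta, x0=np.array([0.5, 0.5]), method='SLSQP',
    constraints=[{'type': 'eq',   'fun': lambda x: x[0] + x[1] - 2.0},
                 {'type': 'ineq', 'fun': lambda x: x[0]},
                 {'type': 'ineq', 'fun': lambda x: x[1]}],
)
print(res.x)   # expected near (0.5, 1.5)
```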
A Training Algorithm for Optimal Margin Classifiers
"... A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classifiaction functions, including Perceptions, polynomials, and Radial Basis Functions. The effective number of parameters is adjust ..."
Abstract
A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
Convergence analysis of a proximal Gauss-Newton method
, 2011
"... Abstract An extension of the GaussNewton algorithm is proposed to find local minimizers of penalized nonlinear least squares problems, under generalized Lipschitz assumptions. Convergence results of local type are obtained, as well as an estimate of the radius of the convergence ball. Some applicat ..."
Abstract
An extension of the Gauss-Newton algorithm is proposed to find local minimizers of penalized nonlinear least squares problems, under generalized Lipschitz assumptions. Local convergence results are obtained, as well as an estimate of the radius of the convergence ball. Some applications to solving constrained nonlinear equations are discussed, and the numerical performance of the method is assessed on some significant test problems.
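The flavor of such a method can be sketched for the special case of a smooth quadratic penalty, where the proximal subproblem reduces to a damped normal-equations solve. This is only a sketch under that assumption, not the paper's algorithm, which covers more general penalties:

```python
import numpy as np

def prox_gauss_newton(r, J, x0, lam=1e-1, tol=1e-8, max_iter=50):
    """Gauss-Newton with a quadratic proximal term for nonlinear least squares.

    Each step minimizes the local model
        0.5 * ||r(x) + J(x) d||^2 + 0.5 * lam * ||d||^2,
    i.e. solves (J^T J + lam I) d = -J^T r. The proximal term regularizes
    the step (Levenberg-style damping).
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        d = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -Jx.T @ rx)
        x = x + d
        if np.linalg.norm(d) < tol:
            break
    return x

# Toy residual (data invented): fit a, b in y = a * exp(b * t).
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.0 * t)
r = lambda p: p[0] * np.exp(p[1] * t) - y
J = lambda p: np.column_stack([np.exp(p[1] * t),            # d r / d a
                               p[0] * t * np.exp(p[1] * t)]) # d r / d b
print(prox_gauss_newton(r, J, x0=[1.0, 0.0]))   # expect approx [2.0, -1.0]
```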