Results 1 – 10 of 11
A tutorial on support vector machines for pattern recognition
 Data Mining and Knowledge Discovery
, 1998
Cited by 2272 (11 self)
The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and nonseparable data, working through a nontrivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector Machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.
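As a minimal illustration of the kernel mapping this abstract describes, the sketch below evaluates a Gaussian RBF-kernel SVM decision function. The support vectors, multipliers, and bias are made-up values for illustration, not the result of an actual training run:

```python
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian radial basis function kernel K(x, z) = exp(-gamma * ||x - z||^2)."""
    return np.exp(-gamma * np.sum((x - z) ** 2))

def svm_decision(x, support_vectors, labels, alphas, b, gamma=1.0):
    """Kernel SVM decision value f(x) = sum_i alpha_i y_i K(x_i, x) + b.
    The sign of f(x) gives the predicted class."""
    return sum(a * y * rbf_kernel(sv, x, gamma)
               for a, y, sv in zip(alphas, labels, support_vectors)) + b

# Toy data: one support vector per class (illustrative values only).
svs = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
ys = [-1.0, 1.0]
alphas = [0.5, 0.5]
b = 0.0

# A query point near the positive support vector gets a positive decision value.
print(svm_decision(np.array([1.9, 1.9]), svs, ys, alphas, b))
```

The kernel replaces an explicit (possibly infinite-dimensional) feature map, which is how the VC dimension discussed in the abstract can become very large.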
Engineering and economic applications of complementarity problems
 SIAM Review
, 1997
Cited by 127 (24 self)
This paper gives an extensive documentation of applications of finite-dimensional nonlinear complementarity problems in engineering and equilibrium modeling. For most applications, we describe the problem briefly, state the defining equations of the model, and give functional expressions for the complementarity formulations. The goal of this documentation is threefold: (i) to summarize the essential applications of the nonlinear complementarity problem known to date, (ii) to provide a basis for the continued research on the nonlinear complementarity problem, and (iii) to supply a broad collection of realistic complementarity problems for use in algorithmic experimentation and other studies.
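The nonlinear complementarity problem documented in this abstract has a compact definition that is easy to check numerically: find x ≥ 0 with F(x) ≥ 0 and xᵀF(x) = 0. The sketch below verifies these conditions on a small made-up linear example (the data are illustrative, not drawn from the paper):

```python
import numpy as np

def is_ncp_solution(x, F, tol=1e-8):
    """Check the defining conditions of NCP(F):
    x >= 0, F(x) >= 0, and x^T F(x) = 0 (complementarity)."""
    Fx = F(x)
    return (np.all(x >= -tol) and np.all(Fx >= -tol)
            and abs(np.dot(x, Fx)) <= tol)

# Hypothetical linear example F(x) = M x + q.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-3.0, 1.0])
F = lambda x: M @ x + q

x = np.array([1.5, 0.0])   # F(x) = (0, 2.5): each component is complementary
print(is_ncp_solution(x, F))
```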
MCPLIB: A Collection of Nonlinear Mixed Complementarity Problems
 Optimization Methods and Software
, 1994
Cited by 65 (27 self)
The origins and some motivational details of a collection of nonlinear mixed complementarity problems are given. This collection serves two purposes. Firstly, it gives a uniform basis for testing currently available and new algorithms for mixed complementarity problems. Function and Jacobian evaluations for the resulting problems are provided via a GAMS interface, making thorough testing of algorithms on practical complementarity problems possible. Secondly, it gives examples of how to formulate many popular problem formats as mixed complementarity problems and how to describe the resulting problems in GAMS format. We demonstrate the ease and power of formulating practical models in the MCP format. Given these examples, it is hoped that this collection will grow to include many problems that test complementarity algorithms more fully. The collection is available by anonymous ftp. Computational results using the PATH solver covering all of these problems are described.
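A standard way to measure how far a point is from solving a mixed complementarity problem of the kind this collection provides is the natural residual, which is zero exactly at a solution. The sketch below is a generic MCP check on a hypothetical box-constrained example, not MCPLIB's GAMS interface:

```python
import numpy as np

def mcp_residual(x, Fx, lower, upper):
    """Natural residual for MCP(F, lower, upper): the projection of
    x - F(x) onto the box, subtracted from x. Zero iff x solves the MCP."""
    return x - np.clip(x - Fx, lower, upper)

# Hypothetical 2-variable example: F(x) = x - a over the box [0, 1]^2.
a = np.array([2.0, -1.0])
lower = np.array([0.0, 0.0])
upper = np.array([1.0, 1.0])
F = lambda x: x - a

# x1 sits at its upper bound with F1 < 0; x2 at its lower bound with F2 > 0,
# so the residual vanishes and x solves the MCP.
x = np.array([1.0, 0.0])
print(np.linalg.norm(mcp_residual(x, F(x), lower, upper)))
```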
Super-Resolution from Multiple Views Using Learnt Image Models
 Proc. of IEEE International Conference on Computer Vision and Pattern Recognition
, 2001
Cited by 55 (3 self)
The objective of this work is the super-resolution restoration of a set of images, and we investigate the use of learnt image models within a generative Bayesian framework.
Conjugate-Gradient Preconditioning Methods for Shift-Variant PET Image Reconstruction
 IEEE Transactions on Image Processing
, 2002
Cited by 51 (21 self)
Gradient-based iterative methods often converge slowly for tomographic image reconstruction and image restoration problems, but can be accelerated by suitable preconditioners. Diagonal preconditioners offer some improvement in convergence rate, but do not incorporate the structure of the Hessian matrices in imaging problems. Circulant preconditioners can provide remarkable acceleration for inverse problems that are approximately shift-invariant, i.e. for those with approximately block-Toeplitz or block-circulant Hessians. However, in applications with nonuniform noise variance, such as arises from Poisson statistics in emission tomography and in quantum-limited optical imaging, the Hessian of the weighted least-squares objective function is quite shift-variant, and circulant preconditioners perform poorly. Additional shift-variance is caused by edge-preserving regularization methods based on nonquadratic penalty functions. This paper describes new preconditioners that approximate more accurately the Hessian matrices of shift-variant imaging problems. Compared to diagonal or circulant preconditioning, the new preconditioners lead to significantly faster convergence rates for the unconstrained conjugate-gradient (CG) iteration. We also propose a new efficient method for the line-search step required by CG methods. Applications to positron emission tomography (PET) illustrate the method.
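The diagonal preconditioning this abstract takes as a baseline fits in a few lines of preconditioned conjugate gradients. The small SPD matrix below stands in for an imaging Hessian and is purely illustrative:

```python
import numpy as np

def pcg(H, g, precond, x0=None, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for the SPD system H x = g.
    `precond(r)` applies M^{-1} to a residual; a diagonal (Jacobi)
    preconditioner simply divides by diag(H)."""
    x = np.zeros_like(g) if x0 is None else x0.copy()
    r = g - H @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Hp = H @ p
        alpha = rz / (p @ Hp)
        x += alpha * p
        r -= alpha * Hp
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Toy SPD matrix standing in for a (shift-variant) Hessian.
H = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
g = np.array([1.0, 2.0, 3.0])
x = pcg(H, g, precond=lambda r: r / np.diag(H))
print(np.allclose(H @ x, g))
```

A better preconditioner changes only the `precond` callback; the point of the paper is to build one that tracks the shift-variant structure that a diagonal or circulant approximation misses.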
The MINPACK-2 Test Problem Collection
, 1991
Cited by 47 (5 self)
The Army High Performance Computing Research Center at the University of Minnesota and the Mathematics and Computer Science Division at Argonne National Laboratory are collaborating on the development of the software package MINPACK-2. As part of the MINPACK-2 project we are developing a collection of significant optimization problems to serve as test problems for the package. This report describes the problems in the preliminary version of this collection.
Global Methods For Nonlinear Complementarity Problems
 MATH. OPER. RES
, 1994
Cited by 28 (1 self)
Global methods for nonlinear complementarity problems formulate the problem as a system of nonsmooth nonlinear equations, or use continuation to trace a path defined by a smooth system of nonlinear equations. We formulate the nonlinear complementarity problem as a bound-constrained nonlinear least squares problem. Algorithms based on this formulation are applicable to general nonlinear complementarity problems, can be started from any nonnegative starting point, and each iteration requires only the solution of systems of linear equations. Convergence to a solution of the nonlinear complementarity problem is guaranteed under reasonable regularity assumptions. The convergence rate is Q-linear, Q-superlinear, or Q-quadratic, depending on the tolerances used to solve the subproblems.
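One common least-squares merit function for the NCP is built from the componentwise minimum residual min(x, F(x)), which vanishes exactly at a solution; the paper's exact bound-constrained formulation may differ, so the sketch below is only illustrative:

```python
import numpy as np

def ncp_least_squares_objective(x, F):
    """Least-squares merit function 0.5 * ||min(x, F(x))||^2 for NCP(F).
    Zero exactly when x >= 0, F(x) >= 0, and x^T F(x) = 0 hold.
    (One common residual choice; not necessarily the paper's.)"""
    r = np.minimum(x, F(x))
    return 0.5 * np.dot(r, r)

# Hypothetical separable nonlinear example: F(x) = x^2 - (1, 4).
F = lambda x: x ** 2 - np.array([1.0, 4.0])

x_star = np.array([1.0, 2.0])   # F(x_star) = (0, 0), so the merit function is zero
print(ncp_least_squares_objective(x_star, F))
```

Minimizing this objective over x ≥ 0 turns the complementarity problem into exactly the kind of bound-constrained nonlinear least squares problem the abstract describes.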
GPCG: A Case Study in the Performance and Scalability of Optimization Algorithms
 ACM TRANSACTIONS ON MATHEMATICAL SOFTWARE
, 1999
Cited by 18 (7 self)
GPCG is an algorithm within the Toolkit for Advanced Optimization (TAO) for solving bound-constrained, convex quadratic problems. Originally developed by Moré and Toraldo [19], this algorithm was designed for large-scale problems but had been implemented only for a single processor. The TAO implementation is available for a wide range of high-performance architectures, and has been tested on up to 64 processors to solve problems with over 2.5 million variables.
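GPCG alternates gradient projection steps with conjugate-gradient steps on the free variables; the sketch below shows only the projection phase on a tiny made-up box-constrained quadratic, and is not the TAO implementation:

```python
import numpy as np

def gradient_projection_qp(Q, d, lower, upper, x0, step, iters=500):
    """Projected-gradient sketch for min 0.5 x^T Q x + d^T x over a box.
    Each iteration takes a gradient step and projects back onto the
    bounds with np.clip."""
    x = np.clip(x0, lower, upper)
    for _ in range(iters):
        grad = Q @ x + d
        x = np.clip(x - step * grad, lower, upper)
    return x

# Illustrative separable quadratic; the unconstrained minimizer (1, 3)
# lies outside the box, so the second bound is active at the solution.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
d = np.array([-2.0, -6.0])
lower = np.array([0.0, 0.0])
upper = np.array([1.0, 1.0])
x = gradient_projection_qp(Q, d, lower, upper, np.zeros(2), step=0.1)
print(x)
```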
Methods for nonlinear constraints in optimization calculations
 THE STATE OF THE ART IN NUMERICAL ANALYSIS
, 1996
An Infeasible Active Set Method for Convex Problems With Simple Bounds
, 2000
A primal-dual active set method for convex quadratic problems with bound constraints is presented. Based on a guess of the active set, a primal-dual pair (x, s) is computed that satisfies the first-order optimality condition and the complementarity condition. If (x, s) is not feasible, a new active set is determined, and the process is iterated. Sufficient conditions for the iterations to stop in a finite number of steps with an optimal solution are provided. Computational experience indicates that this approach often requires only a few (fewer than 10) iterations to find the optimal solution.
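The iteration this abstract describes can be sketched as follows for min ½xᵀQx + dᵀx subject to x ≥ b. The active-set estimate with unit weight is one standard choice and may differ in detail from the paper's rule; the data are illustrative:

```python
import numpy as np

def primal_dual_active_set(Q, d, b, max_iter=50):
    """Primal-dual active set sketch for min 0.5 x^T Q x + d^T x s.t. x >= b:
    guess an active set, compute a primal-dual pair (x, s) from the
    optimality and complementarity conditions, then re-estimate the active
    set until it stops changing."""
    n = len(d)
    active = np.zeros(n, dtype=bool)
    for _ in range(max_iter):
        x = np.empty(n)
        s = np.zeros(n)
        x[active] = b[active]          # active constraints hold with equality
        inactive = ~active
        if inactive.any():
            # Stationarity on the inactive set: (Q x + d)_I = 0
            x[inactive] = np.linalg.solve(
                Q[np.ix_(inactive, inactive)],
                -d[inactive] - Q[np.ix_(inactive, active)] @ b[active])
        s[active] = (Q @ x + d)[active]  # multipliers on the active set
        new_active = (s + b - x) > 0     # active-set estimate (unit weight)
        if np.array_equal(new_active, active):
            return x, s
        active = new_active
    return x, s

# Illustrative problem: unconstrained minimizer (1, 3) violates x2 >= 4,
# so the second bound is active at the solution.
Q = np.array([[2.0, 0.0], [0.0, 2.0]])
d = np.array([-2.0, -6.0])
b = np.array([0.0, 4.0])
x, s = primal_dual_active_set(Q, d, b)
print(x, s)
```

On this example the method stops after two active-set updates, consistent with the small iteration counts the abstract reports.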