Results 1-10 of 10
Successive Overrelaxation for Support Vector Machines
 IEEE Transactions on Neural Networks
, 1998
"... Successive overrelaxation (SOR) for symmetric linear complementarity problems and quadratic programs [11, 12, 9] is used to train a support vector machine (SVM) [20, 3] for discriminating between the elements of two massive datasets, each with millions of points. Because SOR handles one point at a t ..."
Abstract

Cited by 66 (14 self)
Successive overrelaxation (SOR) for symmetric linear complementarity problems and quadratic programs [11, 12, 9] is used to train a support vector machine (SVM) [20, 3] for discriminating between the elements of two massive datasets, each with millions of points. Because SOR handles one point at a time, similar to Platt's sequential minimal optimization (SMO) algorithm [18] which handles two constraints at a time, it can process very large datasets that need not reside in memory. The algorithm converges linearly to a solution. Encouraging numerical results are presented on datasets with up to 10 million points. Such massive discrimination problems cannot be processed by conventional linear or quadratic programming methods, and to our knowledge have not been solved by other methods.

1 Introduction

Successive overrelaxation, originally developed for the solution of large systems of linear equations [16, 15], has been successfully applied to mathematical programming problems [4, 11, 12, 1...
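The one-point-at-a-time update can be sketched as a projected SOR sweep over the box-constrained SVM dual. This is a hedged illustration, not the paper's exact formulation: the toy data, the values of C and the relaxation factor omega, and the bias-folding kernel x·x' + 1 are all assumptions made here for the sake of a runnable example.

```python
# Projected-SOR sketch for the box-constrained SVM dual
#   min 0.5*u'Hu - sum(u)  subject to  0 <= u_i <= C,
# where H_ij = y_i*y_j*(x_i.x_j + 1) folds the bias into the kernel
# (an assumption of this sketch, following the general SOR-for-SVM idea).

def train_svm_sor(X, y, C=10.0, omega=1.0, sweeps=200):
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    n = len(X)
    H = [[y[i] * y[j] * (dot(X[i], X[j]) + 1.0) for j in range(n)]
         for i in range(n)]
    u = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):  # one point at a time, Gauss-Seidel order
            g = sum(H[i][j] * u[j] for j in range(n)) - 1.0  # dual gradient component
            u[i] = min(C, max(0.0, u[i] - omega * g / H[i][i]))  # relax, then project
    return u

def decision(X, y, u, x):
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    return sum(u[i] * y[i] * (dot(X[i], x) + 1.0) for i in range(len(X)))

# Linearly separable toy problem: the trained surface classifies all points.
X = [(2.0, 2.0), (3.0, 3.0), (0.0, 0.0), (1.0, 0.0)]
y = [1, 1, -1, -1]
u = train_svm_sor(X, y)
```

Because each sweep touches one dual variable at a time, only one row of H is needed in memory at once, which is what makes the approach attractive for data that cannot reside in memory.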
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
 Mathematical Programming
, 1993
"... A smooth approximation p(x; ff) to the plus function: maxfx; 0g, is obtained by integrating the sigmoid function 1=(1 + e \Gammaffx ), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization probl ..."
Abstract

Cited by 62 (6 self)
A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(−αx)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method. Key Words: Smo...
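The smoothing can be written down directly: integrating the sigmoid 1/(1 + e^(−αx)) gives p(x, α) = x + (1/α)·log(1 + e^(−αx)), which overestimates max{x, 0} by at most (log 2)/α, the gap being largest at x = 0. A minimal sketch (the branch cutoffs at ±30 are numerical-stability assumptions of this sketch, not part of the method):

```python
import math

def smooth_plus(x, alpha):
    """p(x, alpha) = x + log(1 + exp(-alpha*x))/alpha, the integral of the
    sigmoid 1/(1 + exp(-alpha*x)); tends to max(x, 0) as alpha grows."""
    ax = alpha * x
    if ax > 30.0:        # exp(-ax) is negligible: p(x, alpha) ~ x
        return x
    if ax < -30.0:       # log(1 + exp(-ax)) ~ -ax, so p(x, alpha) ~ 0
        return 0.0
    return x + math.log1p(math.exp(-ax)) / alpha
```

Differentiating p recovers the sigmoid, so p is smooth and convex, which is what allows the inequality systems above to be recast as smooth unconstrained minimization problems.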
Mathematical Programming in Data Mining
 Data Mining and Knowledge Discovery
, 1996
"... Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. ..."
Abstract

Cited by 26 (3 self)
Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained m...
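The clustering step can be illustrated by a k-Median-style alternation in the 1-norm: assign each point to its nearest center in the 1-norm, then move each center to the coordinate-wise median of its cluster. This is a hedged sketch of the alternation only; the paper's mathematical-programming derivation and finite-termination argument are not reproduced here, and the toy data is an assumption.

```python
def k_median(points, centers, iters=20):
    """Alternate 1-norm assignment and coordinate-wise medians (a sketch)."""
    def l1(a, b):
        return sum(abs(p - q) for p, q in zip(a, b))
    def median(vals):
        return sorted(vals)[len(vals) // 2]  # one valid median of the list
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in points:
            j = min(range(len(centers)), key=lambda j: l1(x, centers[j]))
            clusters[j].append(x)
        new_centers = []
        for j, cluster in enumerate(clusters):
            if cluster:
                new_centers.append(tuple(median([x[d] for x in cluster])
                                         for d in range(len(cluster[0]))))
            else:
                new_centers.append(centers[j])  # keep a center that lost all points
        centers = new_centers
    return centers, clusters

# Two well-separated groups end up in distinct clusters.
pts = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, clusters = k_median(pts, [(0, 0), (9, 9)])
```

Each step (assignment and median update) can only decrease the total 1-norm cost, which is why the iteration settles in finitely many steps on finite data.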
Data Discrimination via Nonlinear Generalized Support Vector Machines
 Complementarity: Applications, Algorithms and Extensions
, 1999
"... The main purpose of this paper is to show that new formulations of support vector machines can generate nonlinear separating surfaces which can discriminate between elements of a given set better than a linear surface. The principal approach used is that of generalized support vector machines (GSVMs ..."
Abstract

Cited by 13 (8 self)
The main purpose of this paper is to show that new formulations of support vector machines can generate nonlinear separating surfaces which can discriminate between elements of a given set better than a linear surface. The principal approach used is that of generalized support vector machines (GSVMs) which employ possibly indefinite kernels [17]. The GSVM training procedure is carried out by either the simple successive overrelaxation (SOR) [18] iterative method or by linear programming. This novel combination of powerful support vector machines [24, 5] with the highly effective SOR computational algorithm [15, 16, 14] or with linear programming allows us to use a nonlinear surface to discriminate between elements of a dataset that belong to one of two categories. Numerical results on a number of datasets show improved testing set correctness, by as much as a factor of two, when comparing the nonlinear GSVM surface to a linear separating surface.

1 Introduction

A very simple convex qu...
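The kernel substitution can be sketched as follows: replace the dot product with a kernel when building the dual matrix, and the same one-variable-at-a-time projected SOR yields a nonlinear decision surface. The Gaussian kernel, the XOR-style data, and all parameter values here are illustrative assumptions, not the paper's experiments.

```python
import math

def gaussian_kernel(a, b, mu=2.0):
    return math.exp(-mu * sum((p - q) ** 2 for p, q in zip(a, b)))

def train_gsvm(X, y, K, C=10.0, omega=1.0, sweeps=200):
    """Projected SOR on min 0.5*u'Hu - sum(u), 0 <= u <= C,
    with H_ij = y_i*y_j*K(x_i, x_j) -- the dot product replaced by a kernel."""
    n = len(X)
    H = [[y[i] * y[j] * K(X[i], X[j]) for j in range(n)] for i in range(n)]
    u = [0.0] * n
    for _ in range(sweeps):
        for i in range(n):
            g = sum(H[i][j] * u[j] for j in range(n)) - 1.0
            u[i] = min(C, max(0.0, u[i] - omega * g / H[i][i]))
    return u

# XOR-style data: no linear surface separates it, but the kernel surface does.
X = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]
y = [1, 1, -1, -1]
u = train_gsvm(X, y, gaussian_kernel)
f = lambda x: sum(u[i] * y[i] * gaussian_kernel(X[i], x) for i in range(len(X)))
```

The point of the example is the factor-of-two claim in miniature: a linear surface misclassifies at least one of these four points, while the kernel surface classifies all of them.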
Optimal Equi-Partition of Rectangular Domains for Parallel Computation
 Journal of Global Optimization
, 1995
"... We present an efficient method for the partitioning of rectangular domains into equiarea subdomains of minimum total perimeter. For a variety of applications in parallel computation, this corresponds to a loadbalanced distribution of tasks that minimize interprocessor communication. Our method is ..."
Abstract

Cited by 12 (7 self)
We present an efficient method for the partitioning of rectangular domains into equi-area subdomains of minimum total perimeter. For a variety of applications in parallel computation, this corresponds to a load-balanced distribution of tasks that minimizes interprocessor communication. Our method is based on utilizing, to the maximum extent possible, a set of optimal shapes for subdomains. We prove that for a large class of these problems, we can construct solutions whose relative distance from a computable lower bound converges to zero as the problem size tends to infinity. PERIX-GA, a genetic algorithm employing this approach, has successfully solved to optimality million-variable instances of the perimeter-minimization problem and for a one-billion-variable problem has generated a solution within 0.32% of the lower bound. We report on the results of an implementation on a CM-5 supercomputer and make comparisons with other existing codes.

1 The Minimum Perimeter Problem

We consider...
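For intuition about the computable lower bound, a known closed form for the minimum perimeter of a single grid tile (polyomino) of area A is 2⌈2√A⌉, attained by near-square tiles; treating that formula as the per-tile bound (an assumption of this sketch, not a restatement of the paper's proof), P equal-area tiles need total perimeter at least P·2⌈2√(MN/P)⌉:

```python
import math

def min_tile_perimeter(area):
    """Smallest perimeter of a grid tile (polyomino) with `area` cells:
    2 * ceil(2 * sqrt(area)), attained by near-square tiles."""
    return 2 * math.ceil(2.0 * math.sqrt(area))

def equipartition_lower_bound(M, N, P):
    """Lower bound on total perimeter for P equal-area tiles of an M x N grid."""
    assert (M * N) % P == 0, "P must divide M*N for an exact equipartition"
    return P * min_tile_perimeter(M * N // P)
```

For a perfect-square area the bound is exact: a k × k square tile has area k² and perimeter 4k = 2⌈2√(k²)⌉, which is the sense in which near-square tiles are optimal shapes.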
Fast Equi-Partitioning of Rectangular Domains Using Stripe Decomposition
 Discrete Applied Mathematics
, 1996
"... This paper presents a fast algorithm that provides optimal or near optimal solutions to the minimum perimeter problem on a rectangular grid. The minimum perimeter problem is to partition a grid of size MN into P equal area regions while minimizing the total perimeter of the regions. The approach tak ..."
Abstract

Cited by 9 (1 self)
This paper presents a fast algorithm that provides optimal or near-optimal solutions to the minimum perimeter problem on a rectangular grid. The minimum perimeter problem is to partition a grid of size M × N into P equal-area regions while minimizing the total perimeter of the regions. The approach taken here is to divide the grid into stripes that can be filled completely with an integer number of regions. This striping method gives rise to a knapsack integer program that can be efficiently solved by existing codes. The solution of the knapsack problem is then used to generate the grid region assignments. An implementation of the algorithm partitioned a 1000 × 1000 grid into 1000 regions to a provably optimal solution in less than one second. With sufficient memory to hold the M × N grid array, extremely large minimum perimeter problems can be solved easily.

Introduction

The focus of the algorithm presented here is the Minimum Perimeter Equipartition problem, MPE(M, N, P). In this problem o...
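The stripe idea can be sketched as follows. This is a simplified illustration, not the paper's knapsack integer program: here stripe heights are chosen greedily so that each stripe holds a whole number of regions, and cells within a stripe are chunked into regions in column-major order, so region shapes are not optimized for perimeter.

```python
def stripe_partition(M, N, P):
    """Partition an M-row x N-column grid into P equal-area regions using
    horizontal stripes that each contain an integer number of regions."""
    A = (M * N) // P
    assert M * N == P * A, "P must divide M*N"
    heights, rows_left = [], M
    while rows_left > 0:
        # Smallest height h whose stripe area N*h is a multiple of A.
        # N*rows_left is always a multiple of A (the total area is P*A and
        # every accepted stripe removed a multiple of A), so the search succeeds.
        h = next(c for c in range(1, rows_left + 1) if (N * c) % A == 0)
        heights.append(h)
        rows_left -= h
    label = [[0] * N for _ in range(M)]
    region, row0 = 0, 0
    for h in heights:
        filled = 0
        for j in range(N):                 # column-major sweep inside the stripe
            for i in range(row0, row0 + h):
                label[i][j] = region
                filled += 1
                if filled == A:            # region complete, start the next one
                    region, filled = region + 1, 0
        row0 += h
    return label

label = stripe_partition(6, 6, 4)          # 6x6 grid, 4 regions of area 9
```

Replacing the greedy height choice with the knapsack solution over feasible stripe heights is what turns this sketch into the near-optimal method the paper describes.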
Optimization Methods In Massive Datasets
"... We describe the role of generalized support vector machines in separating massive and complex data using arbitrary nonlinear kernels. Feature selection that improves generalization is implemented via an effective procedure that utilizes a polyhedral norm or a concave function minimization. Massive d ..."
Abstract

Cited by 9 (1 self)
We describe the role of generalized support vector machines in separating massive and complex data using arbitrary nonlinear kernels. Feature selection that improves generalization is implemented via an effective procedure that utilizes a polyhedral norm or a concave function minimization. Massive data is separated using a linear programming chunking algorithm as well as a successive overrelaxation algorithm, each of which is capable of processing data with millions of points.

1. INTRODUCTION

We address here the problem of classifying data in n-dimensional real (Euclidean) space R^n into one of two disjoint finite point sets (i.e. classes). The support vector machine (SVM) approach to classification [57, 2, 25, 58, 13, 54, 55] attempts to separate points belonging to two given sets in R^n by a nonlinear surface, often only implicitly defined by a kernel function. Since the nonlinear surface in R^n is typically linear in its parameters, it can be represented as a linear func...
Minimum-Perimeter Domain Assignment
, 1995
"... For certain classes of problems defined over twodimensional domains with grid structure, optimization problems involving the assignment of grid cells to processors present a nonlinear network model for the problem of partitioning tasks among processors so as to minimize interprocessor communication ..."
Abstract

Cited by 8 (5 self)
For certain classes of problems defined over two-dimensional domains with grid structure, optimization problems involving the assignment of grid cells to processors present a nonlinear network model for the problem of partitioning tasks among processors so as to minimize interprocessor communication. Minimizing interprocessor communication in this context is shown to be equivalent to tiling the domain so as to minimize total tile perimeter, where each tile corresponds to the collection of tasks assigned to some processor. A tight lower bound on the perimeter of a tile as a function of its area is developed. We then show how to generate minimum-perimeter tiles. By using assignments corresponding to near-rectangular minimum-perimeter tiles, closed form solutions are developed for certain classes of domains. We conclude with computational results with parallel high-level genetic algorithms that have produced good (and sometimes provably optimal) solutions for very large perimeter minimiza...
Optimal and Asymptotically Optimal Equipartition of Rectangular Domains via Stripe Decomposition
 Applied Mathematics and Parallel Computing  Festschrift for Klaus Ritter
, 1996
"... We present an efficient method for assigning any number of processors to tasks associated with the cells of a rectangular uniform grid. Load balancing equipartition constraints are observed while approximately minimizing the total perimeter of the partition, which corresponds to the amount of inter ..."
Abstract

Cited by 2 (1 self)
We present an efficient method for assigning any number of processors to tasks associated with the cells of a rectangular uniform grid. Load-balancing equipartition constraints are observed while approximately minimizing the total perimeter of the partition, which corresponds to the amount of interprocessor communication. This method is based upon decomposition of the grid into stripes of "optimal" height. We prove that under some mild assumptions, as the problem size grows large in all parameters, the error bound associated with this feasible solution approaches zero. We also present computational results from a high-level parallel Genetic Algorithm that utilizes this method, and make comparisons with other methods. On a network of workstations, our algorithm solves within minutes instances of the problem that would require one billion binary variables in a Quadratic Assignment formulation.

1 Introduction

1.1 Problem Formulation

The Minimum Perimeter Equipartition problem (MPE) is...
Smoothing Methods in Mathematical Programming
"... sity function. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for sufficiently small positive value of the smoot ..."
Abstract

Cited by 1 (0 self)
sity function. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for a sufficiently small positive value of the smoothing parameter γ. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite γ. Speedup over the linear/nonlinear programming package MINOS 5.4 was as high as 1142 times for linear inequalities of size 2000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems (LCPs) were treated by converting them into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 10,000 variables, the proposed approach was as much as 63 times faster than Lemke's method. Our smooth approach can also be used to solve nonlinear and mixed complementarity problems (NCPs and MCPs) by converting them to classes of smooth parametric nonlinear equations. For any solvable NCP or MCP, existence of an arbitrarily accurate solution to the smooth nonlinear equation, as well as the NCP or MCP, is established for a sufficiently large value of a smoothing parameter c. An efficient smooth algorithm, based on the Newton-Armijo approach with an adjusted smoothing parameter, is also given and its global and local quadratic convergence is established. For NCPs, exact solutions of our smooth nonlinear equation for various values of the parameter c generate an interior path, which is different from the central path for the interior point method. Computational results for 52 test problems compare favorably with those for another Ne...
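As a hedged illustration of the smoothing idea applied to an LCP: rewrite 0 ≤ x ⊥ Mx + q ≥ 0 as the projection fixed point x = max(x − step·(Mx + q), 0), then replace the plus function by its smooth approximation. This sketch uses a plain fixed-point iteration, not the quadratically convergent Newton-Armijo method of the paper, and the diagonal test matrix, step size, and α value are assumptions.

```python
import math

def smooth_plus(x, alpha):
    """Smoothed plus function x + log(1 + exp(-alpha*x))/alpha -> max(x, 0)."""
    ax = alpha * x
    if ax > 30.0:
        return x
    if ax < -30.0:
        return 0.0
    return x + math.log1p(math.exp(-ax)) / alpha

def solve_lcp(M, q, alpha=1000.0, step=0.2, iters=500):
    """Approximate 0 <= x, Mx+q >= 0, x'(Mx+q) = 0 via the smoothed
    projection fixed point x <- p(x - step*(Mx + q), alpha)."""
    n = len(q)
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
        x = [smooth_plus(x[i] - step * r[i], alpha) for i in range(n)]
    return x

# Monotone 2x2 example whose exact solution is x = (1, 0):
x = solve_lcp([[2.0, 0.0], [0.0, 3.0]], [-2.0, 1.0])
```

The accuracy is limited by the smoothing gap (log 2)/α, which is why the fixed point approaches the exact LCP solution as α grows; the paper's Newton approach gets there far faster by solving the smooth nonlinear equation directly.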