Results 1-10 of 89
The PATH Solver: A Non-Monotone Stabilization Scheme for Mixed Complementarity Problems
Optimization Methods and Software, 1995
Cited by 149 (33 self)
Abstract:
The Path solver is an implementation of a stabilized Newton method for the solution of the Mixed Complementarity Problem. The stabilization scheme employs a path-generation procedure which is used to construct a piecewise-linear path from the current point to the Newton point; a step length acceptance criterion and a nonmonotone path-search are then used to choose the next iterate. The algorithm is shown to be globally convergent under assumptions which generalize those required to obtain similar results in the smooth case. Several implementation issues are discussed, and extensive computational results obtained from problems commonly found in the literature are given.
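The nonmonotone acceptance idea can be illustrated in the smooth, unconstrained setting: accept any step that improves on the worst of the last few objective values, rather than only on the most recent one. The sketch below is a generic nonmonotone (Grippo-style) Armijo backtracking search on a toy quadratic, assuming a descent direction; it is not the PATH pathsearch itself, which operates along a piecewise-linear path rather than a straight line, and all names here are ours.

```python
def nonmonotone_armijo(f, x, d, gdd, history, memory=5, sigma=1e-4, beta=0.5):
    """Backtracking search that accepts any step improving on the worst
    of the last `memory` objective values (nonmonotone Armijo).
    `history` holds past f-values; `gdd` is f'(x)^T d (negative for a
    descent direction).  Illustrative sketch only."""
    f_ref = max(history[-memory:])          # worst recent value, not f(x)
    t = 1.0
    while t > 1e-12 and \
            f([xi + t * di for xi, di in zip(x, d)]) > f_ref + sigma * t * gdd:
        t *= beta                           # backtrack
    return t

# Usage: minimize f(x) = x1^2 + x2^2, for which the Newton step is d = -x.
f = lambda x: x[0] ** 2 + x[1] ** 2
x = [3.0, -4.0]
history = [f(x)]
for _ in range(10):
    d = [-xi for xi in x]                               # Newton direction
    gdd = sum(2 * xi * di for xi, di in zip(x, d))      # gradient dot d
    t = nonmonotone_armijo(f, x, d, gdd, history)
    x = [xi + t * di for xi, di in zip(x, d)]
    history.append(f(x))
```

Because the reference value is a running maximum over a window, an iterate may temporarily increase the objective, which is what allows full Newton steps to be accepted far more often than under a monotone criterion.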
Algorithms for the Satisfiability (SAT) Problem: A Survey
DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 1996
Cited by 127 (3 self)
Abstract:
The satisfiability (SAT) problem is a core problem in mathematical logic and computing theory. In practice, SAT is fundamental in solving many problems in automated reasoning, computer-aided design, computer-aided manufacturing, machine vision, databases, robotics, integrated circuit design, computer architecture design, and computer network design. Traditional methods treat SAT as a discrete, constrained decision problem. In recent years, many optimization methods, parallel algorithms, and practical techniques have been developed for solving SAT. In this survey, we present a general framework (an algorithm space) that integrates existing SAT algorithms into a unified perspective. We describe sequential and parallel SAT algorithms including variable splitting, resolution, local search, global optimization, mathematical programming, and practical SAT algorithms. We give performance evaluation of some existing SAT algorithms. Finally, we provide a set of practical applications of the SAT...
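Of the families listed, local search is the easiest to sketch. The following is a minimal WalkSAT-style loop, mixing random-walk flips with greedy flips, under an assumed encoding of clauses as lists of signed integers; it illustrates the family only and is not taken from any specific algorithm in the survey.

```python
import random

def walksat(clauses, n_vars, p=0.5, max_flips=10000, seed=0):
    """WalkSAT-style local search.  A clause is a list of nonzero ints:
    literal v means variable |v| must be True if v > 0, False if v < 0.
    Returns a satisfying assignment (dict) or None on timeout."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                       # all clauses satisfied
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))       # random-walk move
        else:
            # greedy move: flip whichever variable satisfies most clauses
            def score(v):
                assign[v] = not assign[v]
                s = sum(any(sat(l) for l in c) for c in clauses)
                assign[v] = not assign[v]
                return s
            var = max((abs(l) for l in clause), key=score)
        assign[var] = not assign[var]
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
model = walksat([[1, 2], [-1, 3], [-2, -3]], 3)
```

The random-walk probability `p` is the usual noise parameter: with `p = 0` this degenerates to pure greedy search, which is easily trapped in local minima.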
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
Mathematical Programming, 1993
Cited by 62 (6 self)
Abstract:
A smooth approximation p(x; α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^(-αx)), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method.
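Concretely, integrating the sigmoid gives p(x; α) = x + (1/α) ln(1 + e^(-αx)), which overestimates max{x, 0} by at most ln(2)/α, the gap being largest at x = 0. A small sketch, written in a numerically stable form (the function names are ours, not the paper's):

```python
import math

def sigmoid(x, alpha):
    """The neural-network sigmoid 1 / (1 + e^(-alpha * x))."""
    return 1.0 / (1.0 + math.exp(-alpha * x))

def p(x, alpha):
    """Smooth approximation to max(x, 0): the antiderivative of the
    sigmoid.  Algebraically equal to x + ln(1 + e^(-alpha*x)) / alpha,
    but rearranged so the exponent is never large and positive."""
    return max(x, 0.0) + math.log1p(math.exp(-alpha * abs(x))) / alpha

# The gap p(0; alpha) - max(0, 0) = ln(2)/alpha shrinks as alpha grows:
gaps = [p(0.0, a) for a in (1.0, 10.0, 100.0)]
```

Note that p'(x; α) is exactly the sigmoid, so Newton-type methods applied to the smoothed problem have smooth, readily available derivatives.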
Feature Selection via Mathematical Programming
1997
Cited by 59 (22 self)
Abstract:
The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid or by a concave exponential on the nonnegative real line, or it can be treated exactly by considering the equivalent linear program with equilibrium constraints (LPEC). Computational tests of these three approaches on publicly available real-world databases have been carried out and compared with an adaptation of the optimal brain damage (OBD) method for reducing neural network complexity. One feature selection algorithm via concave minimization (FSV) reduced cross-validation error on a cancer prognosis database by 35.4% while reducing problem features from 32 to 4. Feature selection is an important problem in machine learning [18, 15, 1...
An Implementation Of Karmarkar's Algorithm For Linear Programming
Mathematical Programming, 1986
Cited by 57 (4 self)
Abstract:
This paper describes the implementation of power series dual affine scaling variants of Karmarkar's algorithm for linear programming. Based on a continuous version of Karmarkar's algorithm, two variants resulting from first and second order approximations of the continuous trajectory are implemented and tested. Linear programs are expressed in an inequality form, which allows for the inexact computation of the algorithm's direction of improvement, resulting in a significant computational advantage. Implementation issues particular to this family of algorithms, such as treatment of dense columns, are discussed. The code is tested on several standard linear programming problems and compares favorably with the simplex code MINOS 4.0.

1. Introduction
We describe in this paper a family of interior point power series affine scaling algorithms based on the linear programming algorithm presented by Karmarkar (1984). Two algorithms from this family, corresponding to first and second order pow...
Smooth Interpolation of Orientations with Angular Velocity Constraints using Quaternions
1992
Cited by 56 (2 self)
Abstract:
In this paper we present methods to smoothly interpolate orientations, given N rotational keyframes of an object along a trajectory. The methods allow the user to impose constraints on the rotational path, such as the angular velocity at the endpoints of the trajectory. We convert the rotations to quaternions, and then spline in that non-Euclidean space. Analogous to the mathematical foundations of flat-space spline curves, we minimize the net "tangential acceleration" of the quaternion path. We replace the flat-space quantities with curved-space quantities, and numerically solve the resulting equation with finite difference and optimization methods.

1 Introduction
The problem of using spline curves to smoothly interpolate mathematical quantities in flat Euclidean spaces is a well-studied problem in computer graphics [bartels et al 87], [kochanek&bartels 84]. Many quantities important to computer graphics, however, such as rotations, lie in non-Euclidean spaces. In 1985, a method to...
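The building block underneath such schemes is spherical linear interpolation (slerp), the constant-speed geodesic between two unit quaternions. A minimal single-segment sketch, with quaternions as (w, x, y, z) tuples; the paper's contribution is splining through N keyframes with minimized tangential acceleration, which this routine does not attempt:

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1
    at parameter t in [0, 1]:  q(t) = (sin((1-t)θ) q0 + sin(tθ) q1) / sin θ,
    where cos θ = q0 · q1."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # take the shorter great-circle arc
        q1, dot = tuple(-b for b in q1), -dot
    theta = math.acos(min(dot, 1.0))
    if theta < 1e-9:                    # nearly identical: q0 is fine
        return q0
    s0 = math.sin((1.0 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))

# Halfway from the identity to a 90-degree rotation about z is 45 degrees:
q_id = (1.0, 0.0, 0.0, 0.0)
q_z90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
q_mid = slerp(q_id, q_z90, 0.5)
```

The sign flip on q1 handles the double cover of rotations by quaternions (q and -q represent the same rotation), ensuring interpolation along the shorter arc.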
Massive Data Discrimination via Linear Support Vector Machines
Optimization Methods and Software, 1998
Cited by 48 (16 self)
Abstract:
A linear support vector machine formulation is used to generate a fast, finitely terminating linear-programming algorithm for discriminating between two massive sets in n-dimensional space, where the number of points can be orders of magnitude larger than n. The algorithm creates a succession of sufficiently small linear programs that separate chunks of the data at a time. The key idea is that a small number of support vectors, corresponding to linear programming constraints with positive dual variables, are carried over between the successive small linear programs, each of which contains a chunk of the data. We prove that this procedure is monotonic and terminates in a finite number of steps at an exact solution that leads to a globally optimal separating plane for the entire dataset. Numerical results on fully dense publicly available datasets, numbering 20,000 to 1 million points in 32-dimensional space, confirm the theoretical results and demonstrate the ability to handle very l...
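The chunking pattern itself is independent of the particular solver applied to each small problem. The sketch below illustrates carrying support vectors between successive small problems, but substitutes a simple subgradient hinge-loss trainer for the paper's linear programs; the function names and the tiny dataset are ours, for illustration only.

```python
def train_svm(points, epochs=300, lr=0.05, lam=0.01):
    """Subgradient descent on the regularized hinge loss in 2-d -- a
    stand-in for the small linear programs the paper actually solves."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in points:
            margin = y * (w[0] * x[0] + w[1] * x[1] + b)
            if margin < 1.0:            # margin violated: push toward y*x
                w = [wi + lr * (y * xi - lam * wi) for wi, xi in zip(w, x)]
                b += lr * y
            else:                       # only weight decay
                w = [wi * (1.0 - lr * lam) for wi in w]
    return w, b

def chunked_train(data, chunk_size=4):
    """Chunking loop: solve each small problem on (carried support
    vectors + next chunk), then carry forward only the points at or
    inside the margin, mirroring the paper's successive small LPs."""
    carried, w, b = [], [0.0, 0.0], 0.0
    for i in range(0, len(data), chunk_size):
        working = carried + data[i:i + chunk_size]
        w, b = train_svm(working)
        carried = [(x, y) for x, y in working
                   if y * (w[0] * x[0] + w[1] * x[1] + b) <= 1.0 + 1e-6]
    return w, b

# Small separable 2-d set, labelled by which side of x1 ~ 0.5 each lies on.
data = [((0.1, 0.2), -1), ((0.9, 0.1), 1), ((0.2, 0.8), -1), ((0.8, 0.7), 1),
        ((0.3, 0.5), -1), ((0.7, 0.3), 1), ((0.0, 0.9), -1), ((1.0, 0.5), 1),
        ((0.35, 0.1), -1), ((0.95, 0.9), 1), ((0.25, 0.4), -1), ((0.75, 0.6), 1)]
w, b = chunked_train(data)
```

The point of the pattern is memory, not speed on this toy: each small problem fits in core even when the full dataset does not, and the carried support vectors summarize everything the earlier chunks contributed to the separating plane.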
Interfaces to PATH 3.0: Design, Implementation and Usage
Computational Optimization and Applications, 1998
Cited by 48 (17 self)
Abstract:
Several new interfaces have recently been developed requiring PATH to solve a mixed complementarity problem. To overcome the necessity of maintaining a different version of PATH for each interface, the code was reorganized using object-oriented design techniques. At the same time, robustness issues were considered and enhancements made to the algorithm. In this paper, we document the external interfaces to the PATH code and describe some of the new utilities using PATH. We then discuss the enhancements made and compare the results obtained from PATH 2.9 to the new version.

1 Introduction
The PATH solver [12] for mixed complementarity problems (MCPs) was introduced in 1995 and has since become the standard against which new MCP solvers are compared. However, the main user group for PATH continues to be economists using the MPSGE preprocessor [36]. While developing the new PATH implementation, we had two goals: to make the solver accessible to a broad audience and to improve the effecti...
Computational experience with an interior point algorithm on the satisfiability problem
Annals of Operations Research, 1990
Cited by 43 (4 self)
Abstract:
We apply the zero-one integer programming algorithm described in Karmarkar [12] and Karmarkar, Resende and Ramakrishnan [13] to solve randomly generated instances of the satisfiability problem (SAT). The interior point algorithm is briefly reviewed and shown to be easily adapted to solve large instances of SAT. Hundreds of instances of SAT (having from 100 to 1,000 variables and 100 to 32,000 clauses) are randomly generated and solved. For comparison, we attempt to solve the problems via linear programming relaxation with MINOS.