Results 1-10 of 12
Fast Algorithms for Finding Randomized Strategies in Game Trees
, 1994
Abstract

Cited by 89 (11 self)
Interactions among agents can be conveniently described by game trees. In order to analyze a game, it is important to derive optimal (or equilibrium) strategies for the different players. The standard approach to finding such strategies in games with imperfect information is, in general, computationally intractable. The approach is to generate the normal form of the game (the matrix containing the payoff for each strategy combination), and then solve a linear program (LP) or a linear complementarity problem (LCP). The size of the normal form, however, is typically exponential in the size of the game tree, thus making this method impractical in all but the simplest cases. This paper describes a new representation of strategies that results in a practical linear formulation of the problem for two-player games with perfect recall (i.e., games where players never forget anything, which is a standard assumption). Standard LP or LCP solvers can then be applied to find optimal randomized strategies. The resulting algorithms are, in general, exponentially better than the standard ones, both in terms of time and in terms of space.
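As a toy illustration of the normal-form approach the abstract contrasts with, a 2x2 zero-sum game has a closed-form fully mixed equilibrium; larger games require solving an LP instead. This is a sketch, not the paper's sequence-form method, and the function names are illustrative.

```python
# Closed-form mixed equilibrium for a 2x2 zero-sum game, given the row
# player's payoff matrix [[a11, a12], [a21, a22]]. Illustrative only.

def row_mixed_strategy(a11, a12, a21, a22):
    """Probability of playing row 1 in a fully mixed equilibrium."""
    denom = a11 - a12 - a21 + a22
    if denom == 0:
        raise ValueError("game has no fully mixed equilibrium")
    return (a22 - a21) / denom

def game_value(a11, a12, a21, a22):
    """Expected payoff to the row player at the mixed equilibrium."""
    denom = a11 - a12 - a21 + a22
    return (a11 * a22 - a12 * a21) / denom

# Matching pennies: payoffs +1 / -1.
p = row_mixed_strategy(1, -1, -1, 1)   # -> 0.5
v = game_value(1, -1, -1, 1)           # -> 0.0
```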
Feature Selection via Mathematical Programming
, 1997
Abstract

Cited by 60 (22 self)
The problem of discriminating between two finite point sets in n-dimensional feature space by a separating plane that utilizes as few of the features as possible is formulated as a mathematical program with a parametric objective function and linear constraints. The step function that appears in the objective function can be approximated by a sigmoid or by a concave exponential on the nonnegative real line, or it can be treated exactly by considering the equivalent linear program with equilibrium constraints (LPEC). Computational tests of these three approaches on publicly available real-world databases have been carried out and compared with an adaptation of the optimal brain damage (OBD) method for reducing neural network complexity. One feature selection algorithm via concave minimization (FSV) reduced cross-validation error on a cancer prognosis database by 35.4% while reducing problem features from 32 to 4. Feature selection is an important problem in machine learning [18, 15, 1...
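The two smooth surrogates the abstract mentions can be sketched for a scalar argument: the step function counts a used feature, and both the sigmoid and the concave exponential approach it as a smoothing parameter grows. The parameter values below are illustrative, not from the paper.

```python
import math

# Step function s(v) = 1 if v > 0 else 0, and two smooth surrogates.
# alpha is a smoothing parameter; larger alpha -> closer to the step.

def step(v):
    return 1.0 if v > 0 else 0.0

def sigmoid_approx(v, alpha=5.0):
    return 1.0 / (1.0 + math.exp(-alpha * v))

def concave_exp_approx(v, alpha=5.0):
    # Concave on v >= 0; approaches the step as alpha grows.
    return 1.0 - math.exp(-alpha * v)

for alpha in (1.0, 10.0, 100.0):
    print(alpha, concave_exp_approx(1.0, alpha))  # approaches 1.0
```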
Clustering via Concave Minimization
 Advances in Neural Information Processing Systems 9
, 1997
Abstract

Cited by 49 (17 self)
The problem of assigning m points in the n-dimensional real space R^n to k clusters is formulated as that of determining k centers in R^n such that the sum of distances of each point to the nearest center is minimized. If a polyhedral distance is used, the problem can be formulated as that of minimizing a piecewise-linear concave function on a polyhedral set, which is shown to be equivalent to a bilinear program: minimizing a bilinear function on a polyhedral set. A fast finite k-Median Algorithm, consisting of solving a few linear programs in closed form, leads to a stationary point of the bilinear program. Computational testing on a number of real-world databases was carried out. On the Wisconsin Diagnostic Breast Cancer (WDBC) database, k-Median training set correctness was comparable to that of the k-Mean Algorithm; however, its testing set correctness was better. Additionally, on the Wisconsin Prognostic Breast Cancer (WPBC) database, distinct and clinically important survival curv...
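A minimal sketch of a k-Median iteration with the 1-norm (a polyhedral distance): assign each point to its nearest center, then update each center coordinate-wise to the median of its cluster, since the median minimizes a sum of absolute deviations. This is a toy illustration, not the paper's LP-based formulation, and the data below are made up.

```python
# Toy k-Median with the 1-norm distance, pure Python.

def l1(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def median(values):
    s = sorted(values)
    return s[len(s) // 2]  # upper median; fine for this sketch

def k_median(points, centers, iters=10):
    centers = list(centers)
    for _ in range(iters):
        # Assignment step: nearest center in the 1-norm.
        clusters = [[] for _ in centers]
        for p in points:
            i = min(range(len(centers)), key=lambda j: l1(p, centers[j]))
            clusters[i].append(p)
        # Update step: coordinate-wise median of each cluster.
        new_centers = []
        for i, c in enumerate(clusters):
            if c:
                dim = len(c[0])
                new_centers.append(tuple(median([p[d] for p in c])
                                         for d in range(dim)))
            else:
                new_centers.append(centers[i])  # keep empty cluster's center
        centers = new_centers
    return centers

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(k_median(pts, [(0, 0), (10, 10)]))
```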
Solution of General Linear Complementarity Problems via Nondifferentiable Concave Minimization
 Acta Mathematica Vietnamica
, 1997
Abstract

Cited by 26 (11 self)
Finite termination, at a point satisfying the minimum principle necessary optimality condition, is established for a stepless (no line search) successive linearization algorithm (SLA) for minimizing a nondifferentiable concave function on a polyhedral set. The SLA is then applied to the general linear complementarity problem (LCP), formulated as minimizing a piecewise-linear concave error function on the usual polyhedral feasible region defining the LCP. When the feasible region is nonempty, the concave error function always has a global minimum at a vertex, and the minimum is zero if and only if the LCP is solvable. The SLA terminates at a solution or stationary point of the problem in a finite number of steps. A special case of the proposed algorithm [8] solved without failure 80 consecutive cases of the LCP formulation of the knapsack feasibility problem, ranging in size between 10 and 3000.
1 Introduction
We consider the classical linear complementarity problem (LCP) [4, 12, 5] 0 ≤ x ⊥ ...
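The piecewise-linear concave error function described above can be sketched directly: for a feasible x (x >= 0, Mx + q >= 0), the residual r(x) = sum_i min(x_i, (Mx + q)_i) is nonnegative and vanishes exactly at an LCP solution. The example data are illustrative.

```python
# Piecewise-linear concave LCP error residual, pure Python.

def lcp_residual(M, q, x):
    """r(x) = sum_i min(x_i, (Mx + q)_i); zero iff x solves the LCP."""
    n = len(q)
    w = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    return sum(min(x[i], w[i]) for i in range(n))

# M = identity, q = (-1, 2): x = (1, 0) gives w = (0, 2), a solution.
M = [[1, 0], [0, 1]]
q = [-1, 2]
print(lcp_residual(M, q, [1, 0]))  # -> 0
print(lcp_residual(M, q, [2, 0]))  # -> 1 (feasible but not complementary)
```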
A Branch and Cut Algorithm for Nonconvex Quadratically Constrained Quadratic Programming
, 1999
Abstract

Cited by 24 (5 self)
We present a branch and cut algorithm that yields, in finite time, a globally ε-optimal solution (with respect to feasibility and optimality) of the nonconvex quadratically constrained quadratic programming problem. The idea is to estimate all quadratic terms by successive linearizations within a branching tree using Reformulation-Linearization Techniques (RLT). To do so, four classes of linearizations (cuts), depending on one to three parameters, are detailed. For each class, we show how to select the best member with respect to a precise criterion. The cuts introduced at any node of the tree are valid in the whole tree, and not only within the subtree rooted at that node. In order to enhance the computational speed, the structure created at any node of the tree is flexible enough to be used at other nodes. Computational results are reported. Some problems of the literature are solved for the first time with a proof of global optimality.
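One simple linearization class can be sketched with classical bound-factor products: for x in [xL, xU] and y in [yL, yU], multiplying nonnegative factors such as (x - xL)(y - yL) >= 0 and replacing xy by a new variable yields linear under- and over-estimators of the bilinear term (the McCormick envelope). This is a minimal sketch of the idea, not the paper's parameterized cut classes.

```python
# McCormick-style bounds on the bilinear term xy from bound-factor products.

def mccormick_bounds(x, y, xL, xU, yL, yU):
    """Linear under/over-estimators of xy evaluated at (x, y)."""
    lower = max(xL * y + x * yL - xL * yL,
                xU * y + x * yU - xU * yU)
    upper = min(xU * y + x * yL - xU * yL,
                xL * y + x * yU - xL * yU)
    return lower, upper

lo, hi = mccormick_bounds(0.5, 0.5, 0.0, 1.0, 0.0, 1.0)
print(lo, hi, 0.5 * 0.5)  # the true product lies between lo and hi
```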
The Linear Complementarity Problem as a Separable Bilinear Program
 Journal of Global Optimization
, 1995
Abstract

Cited by 18 (4 self)
The nonmonotone linear complementarity problem (LCP) is formulated as a bilinear program with separable constraints and an objective function that minimizes a natural error residual for the LCP. A linear-programming-based algorithm applied to the bilinear program terminates in a finite number of steps at a solution or stationary point of the problem. The bilinear algorithm solved 80 consecutive cases of the LCP formulation of the knapsack feasibility problem ranging in size between 10 and 3000, with an almost constant average number of major iterations equal to four.
Keywords: linear complementarity, bilinear programming, knapsack
1. Introduction
It is well known that the linear complementarity problem [4], [16]

0 ≤ x ⊥ Mx + q ≥ 0, (1)

for a given n × n real matrix M and a given n × 1 vector q, can be written as the bilinear program

min_{x,w} { x'w | w = Mx + q, x ≥ 0, w ≥ 0 }. (2)

For the case of a general M, considered here, the objective function of (2) is nonconvex and the cons...
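The bilinear objective of program (2) is easy to evaluate directly: with w = Mx + q and both x and w nonnegative, the objective x'w is zero exactly at an LCP solution. A plain-Python sketch with made-up data:

```python
# Bilinear objective x'w of program (2), with w = Mx + q.

def bilinear_objective(M, q, x):
    """Returns x'w; zero certifies that x solves the LCP."""
    n = len(q)
    w = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    assert all(xi >= 0 for xi in x) and all(wi >= 0 for wi in w), "infeasible"
    return sum(xi * wi for xi, wi in zip(x, w))

M = [[2, 1], [1, 2]]
q = [-3, -3]
print(bilinear_objective(M, q, [1, 1]))  # -> 0: x = (1, 1) solves this LCP
```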
MinimumSupport Solutions of Polyhedral Concave Programs
 OPTIMIZATION
, 1999
Abstract

Cited by 8 (1 self)
Motivated by the successful application of mathematical programming techniques to difficult machine learning problems, we seek solutions of concave minimization problems over polyhedral sets with a minimum number of nonzero components. We prove that if such problems have a solution, they have a vertex solution with a minimal number of nonzeros. This includes linear programs and general linear complementarity problems. A smooth concave exponential approximation to a step function solves the minimum-support problem exactly for a finite value of the smoothing parameter. A fast finite linear-programming-based iterative method terminates at a stationary point, which for many important real-world problems provides very useful answers. Utilizing the complementarity property of linear programs and linear complementarity problems, an upper bound on the number of nonzeros can be obtained by solving a single convex minimization problem on a polyhedral set.
Bilevel model selection for support vector machines
, 2007
Abstract

Cited by 6 (5 self)
Abstract. The successful application of Support Vector Machines (SVMs), kernel methods and other statistical machine learning methods requires selection of model parameters based on estimates of the generalization error. This paper presents a novel approach to systematic model selection through bilevel optimization. We show how modelling tasks for widely used machine learning methods can be formulated as bilevel optimization problems and describe how the approach can address a broad range of tasks, among which are parameter, feature and kernel selection. In addition, we also discuss the challenges in implementing these approaches and enumerate opportunities for future work in this emerging research area.
Complementarity Problems
 J. Comput. Appl. Math
, 2000
Abstract

Cited by 6 (0 self)
This paper provides an introduction to complementarity problems, with an emphasis on applications and solution algorithms. Various forms of complementarity problems are described along with a few sample applications, which provide a sense of what types of problems can be addressed effectively with complementarity problems. The most important algorithms are presented along with a discussion of when they can be used effectively. We also provide a brief introduction to the study of matrix classes and their relation to linear complementarity problems. Finally, we provide a brief summary of current research trends.
Key words: complementarity problems, variational inequalities, matrix classes
1 Introduction
The distinguishing feature of a complementarity problem is the set of complementarity conditions. Each of these conditions requires that the product of two or more nonnegative quantities should be zero. (Here, each quantity is either a decision variable, or a function of the decisi...
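The complementarity conditions just described can be checked numerically: a candidate z satisfies 0 <= z ⊥ F(z) >= 0 when both vectors are nonnegative and each pairwise product vanishes. A small checker sketch, with F supplied as a callable and an illustrative tolerance:

```python
# Checker for the complementarity conditions 0 <= z, F(z) >= 0, z_i * F_i(z) = 0.

def satisfies_complementarity(z, F, tol=1e-9):
    Fz = F(z)
    feasible = (all(zi >= -tol for zi in z) and
                all(v >= -tol for v in Fz))
    complementary = all(abs(zi * v) <= tol for zi, v in zip(z, Fz))
    return feasible and complementary

# Componentwise F(z) = z - 1: z = (1, 1) gives F(z) = (0, 0), so it holds.
print(satisfies_complementarity([1.0, 1.0], lambda z: [zi - 1 for zi in z]))
```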
Mathematical Programming Approaches To Machine Learning And Data Mining
, 1998
Abstract

Cited by 5 (0 self)
Machine learning problems of supervised classification, unsupervised clustering and parsimonious approximation are formulated as mathematical programs. The feature selection problem arising in the supervised classification task is effectively addressed by calculating a separating plane that minimizes both the separation error and the number of problem features utilized. The support vector machine approach is formulated using various norms to measure the margin of separation. The clustering problem of assigning m points in n-dimensional real space to k clusters is formulated as minimizing a piecewise-linear concave function over a polyhedral set. This problem is also formulated in a novel fashion by minimizing the sum of squared distances of data points to nearest cluster planes characterizing the k clusters. The problem of obtaining a parsimonious solution to a linear system whose right-hand side vector may be corrupted by noise is formulated as minimizing the system residual plus either the number of nonzero elements in the solution vector or the norm of the solution vector. The feature selection problem, the clustering problem and the parsimonious approximation problem can all be stated as the minimization of a concave function over a polyhedral region and are solved by a theoretically justifiable, fast and finite successive linearization algorithm. Numerical tests indicate the utility and efficiency of these formulations on real-world databases. In particular, the feature selection approach via concave minimization computes a separating-plane-based classifier that improves upon the generalization ability of a separating plane computed without feature suppression. This approach produces classifiers utilizing fewer original problem features than the support vector machin...