Results 1–10 of 33
Engineering and economic applications of complementarity problems
 SIAM Review
, 1997
Abstract

Cited by 172 (25 self)
Abstract. This paper gives an extensive documentation of applications of finite-dimensional nonlinear complementarity problems in engineering and equilibrium modeling. For most applications, we describe the problem briefly, state the defining equations of the model, and give functional expressions for the complementarity formulations. The goal of this documentation is threefold: (i) to summarize the essential applications of the nonlinear complementarity problem known to date, (ii) to provide a basis for the continued research on the nonlinear complementarity problem, and (iii) to supply a broad collection of realistic complementarity problems for use in algorithmic experimentation and other studies.
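The complementarity condition underlying the survey can be checked numerically: a point x solves a nonlinear complementarity problem (x ≥ 0, F(x) ≥ 0, x·F(x) = 0) exactly when the componentwise residual min(x, F(x)) vanishes. A minimal sketch, using a hypothetical linear instance (F(x) = Mx + q, the simplest NCP):

```python
import numpy as np

# Hypothetical LCP instance: F(x) = M x + q with symmetric positive
# definite M.  The solution x = (1/3, 1/3) makes F(x) = 0 exactly.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
F = lambda z: M @ z + q

x = np.array([1/3, 1/3])            # candidate solution of this instance
residual = np.minimum(x, F(x))      # zero componentwise at a solution
print(np.linalg.norm(residual))     # ~0 at a solution
```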
Semi-Supervised Support Vector Machines for Unlabeled Data Classification
 Optimization Methods and Software
, 2001
Abstract

Cited by 60 (3 self)
A concave minimization approach is proposed for classifying unlabeled data based on the following ideas: (i) A small representative percentage (5% to 10%) of the unlabeled data is chosen by a clustering algorithm and given to an expert or oracle to label. (ii) A linear support vector machine is trained using the small labeled sample while simultaneously assigning the remaining bulk of the unlabeled dataset to one of two classes so as to maximize the margin (distance) between the two bounding planes that determine the separating plane midway between them. This latter problem is formulated as a concave minimization problem on a polyhedral set for which a stationary point is quickly obtained by solving a few (5 to 7) linear programs. Such stationary points turn out to be very effective, as evidenced by our computational results, which show that clustered concave minimization yields: (a) Test set improvement as high as 20.4% over a linear support vector machine trained on a correspondingly sm...
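Step (i) of the scheme, choosing a small representative sample for the oracle to label, can be sketched with a plain k-means pass; the function below is a hypothetical illustration only (the paper's concave-minimization S3VM training itself is not shown):

```python
import numpy as np

def pick_representatives(X, k, iters=20):
    """Basic k-means with deterministic farthest-point initialization;
    returns the index of the data point nearest each final centroid.
    These representatives are the small sample one would hand to an
    oracle for labeling."""
    X = np.asarray(X, dtype=float)
    # farthest-point init: deterministic and spreads the seeds out
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.array(centers)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return [int(np.argmin(d[:, j])) for j in range(k)]
```

On two well-separated blobs this returns one index per blob, which is the behavior the sampling step needs.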
Clustering via Concave Minimization
 Advances in Neural Information Processing Systems 9
, 1997
Abstract

Cited by 56 (17 self)
The problem of assigning m points in the n-dimensional real space R^n to k clusters is formulated as that of determining k centers in R^n such that the sum of distances of each point to the nearest center is minimized. If a polyhedral distance is used, the problem can be formulated as that of minimizing a piecewise-linear concave function on a polyhedral set, which is shown to be equivalent to a bilinear program: minimizing a bilinear function on a polyhedral set. A fast finite k-Median Algorithm, consisting of solving a few linear programs in closed form, leads to a stationary point of the bilinear program. Computational testing on a number of real-world databases was carried out. On the Wisconsin Diagnostic Breast Cancer (WDBC) database, k-Median training set correctness was comparable to that of the k-Mean Algorithm; however, its testing set correctness was better. Additionally, on the Wisconsin Prognostic Breast Cancer (WPBC) database, distinct and clinically important survival curv...
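The closed-form alternation the abstract refers to, assign each point to the nearest center in the 1-norm, then move each center to the coordinate-wise median of its points, can be sketched directly; this is only an illustrative rendering of the k-Median step, not the paper's bilinear-programming derivation:

```python
import numpy as np

def k_median(X, centers, iters=50):
    """1-norm k-median: alternate nearest-center assignment (1-norm
    distance) with a coordinate-wise median update of each center."""
    X = np.asarray(X, dtype=float)
    centers = np.asarray(centers, dtype=float).copy()
    for _ in range(iters):
        # 1-norm distance from every point to every center
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(len(centers)):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)
    return centers, labels
```

The median update is the closed-form minimizer of the 1-norm distance sum within each cluster, which is why no general-purpose LP solver is needed per step.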
Misclassification Minimization
 JOURNAL OF GLOBAL OPTIMIZATION
, 1994
Abstract

Cited by 43 (13 self)
The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution. Novel aspects of the approach include: (i) A linear complementarity formulation of the step function that "counts" misclassifications, (ii) Exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions, (iii) An exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC and an explicitly exact solution for the LPEC with uncoupled constraints, and (iv) A parametric quadratic programming form...
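The step-function objective being minimized, the count of points strictly on the wrong side of a candidate plane w·x = γ, is easy to state numerically; the LPEC and penalty machinery for minimizing it are not reproduced in this hedged sketch:

```python
import numpy as np

def misclassification_count(A, d, w, gamma):
    """Count the points a candidate plane w.x = gamma misclassifies:
    point i with label d[i] in {+1, -1} is counted when
    d[i] * (A[i] . w - gamma) < 0, i.e. it lies strictly on the
    wrong side of the plane."""
    margins = d * (A @ w - gamma)
    return int(np.sum(margins < 0))
```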
Bilinear Separation of Two Sets in n-Space
 COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
, 1993
Abstract

Cited by 39 (17 self)
The NP-complete problem of determining whether two disjoint point sets in the n-dimensional real space R^n can be separated by two planes is cast as a bilinear program, that is, minimizing the scalar product of two linear functions on a polyhedral set. The bilinear program, which has a vertex solution, is processed by an iterative linear programming algorithm that terminates in a finite number of steps at a point satisfying a necessary optimality condition or at a global minimum. Encouraging computational experience on a number of test problems is reported.
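The finite alternation idea, fixing one variable block turns a bilinear objective into a linear program in the other block, can be illustrated on a toy uncoupled bilinear program (a generic sketch, not the paper's separation formulation):

```python
import numpy as np
from scipy.optimize import linprog

# Toy uncoupled bilinear program: minimize x.y with x on the unit
# simplex {x >= 0, x1 + x2 = 1} and y in {0 <= y <= 1, y1 + y2 >= 1}.
# Fixing one block makes the problem an LP in the other, so we
# alternate LPs until the iterates stop changing.
y = np.array([1.0, 0.5])          # arbitrary starting point in Y
for _ in range(10):
    # LP in x for fixed y: min y.x  s.t.  x1 + x2 = 1, x >= 0
    x = linprog(c=y, A_eq=[[1, 1]], b_eq=[1], bounds=[(0, None)] * 2).x
    # LP in y for fixed x: min x.y  s.t.  y1 + y2 >= 1, 0 <= y <= 1
    y = linprog(c=x, A_ub=[[-1, -1]], b_ub=[-1], bounds=[(0, 1)] * 2).x
print(x @ y)   # -> 0.0 for this instance
```

Each LP step weakly decreases the objective and lands on a vertex, which is the mechanism behind the finite termination claimed in the abstract.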
Data Selection for Support Vector Machine Classifiers
 In Proceedings of the Sixth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
, 2000
Abstract

Cited by 31 (4 self)
The problem of extracting a minimal number of data points from a large dataset, in order to generate a support vector machine (SVM) classifier, is formulated as a concave minimization problem and solved by a finite number of linear programs. This minimal set of data points, which is the smallest number of support vectors that completely characterize a separating plane classifier, is considerably smaller than that required by a standard 1-norm support vector machine with or without feature selection. The proposed approach also incorporates a feature selection procedure that results in a minimal number of input features used by the classifier. Tenfold cross-validation gives as good or better test results using the proposed minimal support vector machine (MSVM) classifier based on the smaller set of data points compared to a standard 1-norm support vector machine classifier. The reduction in data points used by an MSVM classifier over those used by a 1-norm SVM classifier averaged 66% on seven public datasets and was as high as 81%. This makes MSVM a useful incremental classification tool which maintains only a small fraction of a large dataset before merging and processing it with new incoming data.
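For reference, the 1-norm SVM baseline the abstract compares against can be written as a single linear program. The sketch below uses a common formulation with auxiliary variables s ≥ |w| and slacks y; the variable layout is my own, and MSVM itself is not implemented:

```python
import numpy as np
from scipy.optimize import linprog

def svm_1norm(A, d, nu=1.0):
    """1-norm SVM as an LP:
        min ||w||_1 + nu * sum(y)
        s.t. d_i * (A_i . w - gamma) + y_i >= 1,  y >= 0,
    with |w| handled by auxiliary variables s (|w| <= s).
    Decision variables stacked as z = [w (n), gamma (1), s (n), y (m)]."""
    m, n = A.shape
    c = np.r_[np.zeros(n), 0.0, np.ones(n), nu * np.ones(m)]
    # |w| <= s as two inequality blocks
    blk1 = np.hstack([np.eye(n), np.zeros((n, 1)), -np.eye(n), np.zeros((n, m))])
    blk2 = np.hstack([-np.eye(n), np.zeros((n, 1)), -np.eye(n), np.zeros((n, m))])
    # d_i*(A_i.w - gamma) + y_i >= 1  rewritten as  <= -1 form
    blk3 = np.hstack([-(d[:, None] * A), d[:, None], np.zeros((m, n)), -np.eye(m)])
    A_ub = np.vstack([blk1, blk2, blk3])
    b_ub = np.r_[np.zeros(2 * n), -np.ones(m)]
    bounds = [(None, None)] * (n + 1) + [(0, None)] * (n + m)
    z = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
    return z[:n], z[n]          # plane w . x = gamma
```

On separable data the optimal slacks vanish and the returned plane classifies every point correctly.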
Sparse eigen methods by d.c. programming
 In Proceedings of the 24th Annual International Conference on Machine Learning (ICML), pp. 831–838. Omnipress
, 2007
Abstract

Cited by 28 (7 self)
Eigenvalue problems are rampant in machine learning and statistics and appear in the context of classification, dimensionality reduction, etc. In this paper, we consider a cardinality-constrained variational formulation of the generalized eigenvalue problem with sparse principal component analysis (PCA) as a special case. Using an ℓ1-norm approximation to the cardinality constraint, previous methods have proposed both convex and nonconvex solutions to the sparse PCA problem. In contrast, we propose a tighter approximation that is related to the negative log-likelihood of a Student's t-distribution. The problem is then framed as a d.c. (difference of convex functions) program and is solved as a sequence of locally convex programs. We show that the proposed method not only explains more variance with sparse loadings on the principal directions but also has better scalability compared to other methods. We demonstrate these results on a collection of datasets of varying dimensionality, two of which are high-dimensional gene datasets where the goal is to find a few relevant genes that explain as much variance as possible.
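As rough intuition for sparsity-inducing eigen methods, a soft-thresholded power iteration (a simple sparse-PCA heuristic, not the paper's d.c. program) already produces sparse loadings:

```python
import numpy as np

def sparse_pc(S, lam, iters=100):
    """Soft-thresholded power iteration on a covariance matrix S:
    alternate the power step S @ v with soft-thresholding, which zeroes
    small loadings, then renormalize."""
    v = np.ones(S.shape[0]) / np.sqrt(S.shape[0])
    for _ in range(iters):
        u = S @ v
        u = np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)  # soft-threshold
        nrm = np.linalg.norm(u)
        if nrm == 0:
            break                                          # over-thresholded
        v = u / nrm
    return v
```

When one direction carries most of the variance, the iteration concentrates the loading vector on it and sets the remaining coordinates exactly to zero.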
Parsimonious Least Norm Approximation
, 1997
Abstract

Cited by 23 (7 self)
A theoretically justifiable fast finite successive linear approximation algorithm is proposed for obtaining a parsimonious solution to a corrupted linear system Ax = b + p, where the corruption p is due to noise or error in measurement. The proposed linear-programming-based algorithm finds a solution x by parametrically minimizing the number of nonzero elements in x and the error ‖Ax − b − p‖_1. Numerical tests on a signal-processing-based example indicate that the proposed method is comparable to a method that parametrically minimizes the 1-norm of the solution x and the error ‖Ax − b − p‖_1, and that both methods are superior, by orders of magnitude, to solutions obtained by least squares as well as by combinatorially choosing an optimal solution with a specific number of nonzero elements.

Keywords: minimal cardinality, least norm approximation
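With the usual 1-norm surrogate standing in for the nonzero count (the paper instead attacks cardinality directly, so this is only a simplified stand-in), the whole problem becomes one linear program:

```python
import numpy as np
from scipy.optimize import linprog

def l1_parsimonious(A, b, lam=10.0):
    """Solve min ||x||_1 + lam * ||A x - b||_1 as one LP.
    Variables: z = [x (n), u (n), v (m)] with |x| <= u and |Ax - b| <= v."""
    m, n = A.shape
    c = np.r_[np.zeros(n), np.ones(n), lam * np.ones(m)]
    A_ub = np.vstack([
        np.hstack([ np.eye(n), -np.eye(n), np.zeros((n, m))]),  #  x - u <= 0
        np.hstack([-np.eye(n), -np.eye(n), np.zeros((n, m))]),  # -x - u <= 0
        np.hstack([ A, np.zeros((m, n)), -np.eye(m)]),          #  Ax - v <= b
        np.hstack([-A, np.zeros((m, n)), -np.eye(m)]),          # -Ax - v <= -b
    ])
    b_ub = np.r_[np.zeros(2 * n), b, -b]
    bounds = [(None, None)] * n + [(0, None)] * (n + m)
    return linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x[:n]
```

With a large fit weight lam, a right-hand side generated by a sparse vector is recovered exactly on well-posed instances.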
The Linear Complementarity Problem as a Separable Bilinear Program
 Journal of Global Optimization
, 1995
Abstract

Cited by 20 (4 self)
The nonmonotone linear complementarity problem (LCP) is formulated as a bilinear program with separable constraints and an objective function that minimizes a natural error residual for the LCP. A linear-programming-based algorithm applied to the bilinear program terminates in a finite number of steps at a solution or stationary point of the problem. The bilinear algorithm solved 80 consecutive cases of the LCP formulation of the knapsack feasibility problem ranging in size between 10 and 3000, with an almost constant average number of major iterations equal to four.

Keywords: linear complementarity, bilinear programming, knapsack

1. Introduction. It is well known that the linear complementarity problem [4], [16]

    0 ≤ x ⊥ Mx + q ≥ 0,    (1)

for a given n × n real matrix M and a given n × 1 vector q, can be written as the bilinear program

    min_{x,w} { xᵀw | w = Mx + q, x ≥ 0, w ≥ 0 }.    (2)

For the case of a general M, considered here, the objective function of (2) is nonconvex and the cons...
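A small instance of the LCP can be checked with a classical projected Gauss-Seidel sweep, shown here only as a verification tool; it is not the paper's bilinear-programming algorithm, and it assumes a positive diagonal (e.g. symmetric positive definite M):

```python
import numpy as np

def pgs_lcp(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: find x >= 0 with
    Mx + q >= 0 and x.(Mx + q) = 0.  Each sweep clamps the
    unconstrained Gauss-Seidel update at zero."""
    x = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            x[i] = max(0.0, x[i] - (M[i] @ x + q[i]) / M[i, i])
    return x

M = np.array([[4.0, 1.0], [1.0, 3.0]])   # symmetric positive definite
q = np.array([-2.0, -3.0])
x = pgs_lcp(M, q)
w = M @ x + q                            # complementary slack variable
```

For this instance the solution is interior (x = (3/11, 10/11), w = 0), so both the nonnegativity and the complementarity condition can be verified directly.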
MinimumSupport Solutions of Polyhedral Concave Programs
 OPTIMIZATION
, 1999
Abstract

Cited by 12 (1 self)
Motivated by the successful application of mathematical programming techniques to difficult machine learning problems, we seek solutions of concave minimization problems over polyhedral sets with a minimum number of nonzero components. We prove that if such problems have a solution, they have a vertex solution with a minimal number of nonzeros. This includes linear programs and general linear complementarity problems. A smooth concave exponential approximation to a step function solves the minimum-support problem exactly for a finite value of the smoothing parameter. A fast finite linear-programming-based iterative method terminates at a stationary point, which for many important real-world problems provides very useful answers. Utilizing the complementarity property of linear programs and linear complementarity problems, an upper bound on the number of nonzeros can be obtained by solving a single convex minimization problem on a polyhedral set.
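The exponential approximation mentioned in the abstract is easy to see numerically: for t ≥ 0, 1 − exp(−αt) tends to the 0/1 indicator of t > 0 as α grows, so summing it over |x_i| approaches the nonzero count (the test vector and α values below are arbitrary):

```python
import numpy as np

# Smooth concave surrogate for the step function: as alpha grows,
# sum_i (1 - exp(-alpha * |x_i|)) approaches the number of nonzeros.
x = np.array([0.0, 0.0, 2.0, -3.0, 0.5])
for alpha in (1.0, 5.0, 50.0):
    approx = np.sum(1.0 - np.exp(-alpha * np.abs(x)))
    print(alpha, approx)        # tends to the true count, 3
```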