Results 1–10 of 143
The Extended Linear Complementarity Problem
, 1993
Cited by 539 (23 self)
We consider an extension of the horizontal linear complementarity problem, which we call the extended linear complementarity problem (XLCP). With the aid of a natural bilinear program, we establish various properties of this extended complementarity problem; these include the convexity of the bilinear objective function under a monotonicity assumption, the polyhedrality of the solution set of a monotone XLCP, and an error bound result for a nondegenerate XLCP. We also present a finite, sequential linear programming algorithm for solving the nonmonotone XLCP.
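As context for the complementarity condition, here is a minimal sketch (ours, in Python with NumPy, not from the paper) of verifying a solution of the standard LCP, the single-matrix special case that the horizontal and extended problems generalize:

```python
import numpy as np

def is_lcp_solution(M, q, z, tol=1e-9):
    """Check whether z solves the standard LCP:
    z >= 0,  w = M z + q >= 0,  and  w . z = 0  (complementarity).
    The horizontal/extended problems generalize this single-matrix form."""
    w = M @ z + q
    return bool((z >= -tol).all() and (w >= -tol).all() and abs(w @ z) <= tol)

# A small monotone instance: M is positive definite.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])
q = np.array([-1.0, -1.0])
print(is_lcp_solution(M, q, np.array([1/3, 1/3])))  # True: w = 0 here
print(is_lcp_solution(M, q, np.zeros(2)))           # False: w = q < 0
```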
Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems
 IEEE Journal of Selected Topics in Signal Processing
, 2007
Cited by 291 (15 self)
Many problems in signal processing and statistical inference involve finding sparse solutions to underdetermined, or ill-conditioned, linear systems of equations. A standard approach consists in minimizing an objective function which includes a quadratic (squared ℓ2) error term combined with a sparseness-inducing (ℓ1) regularization term. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution, and compressed sensing are a few well-known examples of this approach. This paper proposes gradient projection (GP) algorithms for the bound-constrained quadratic programming (BCQP) formulation of these problems. We test variants of this approach that select the line search parameters in different ways, including techniques based on the Barzilai-Borwein method. Computational experiments show that these GP approaches perform well in a wide range of applications, often being significantly faster (in terms of computation time) than competing methods. Although the performance of GP methods tends to degrade as the regularization term is deemphasized, we show how they can be embedded in a continuation scheme to recover their efficient practical performance.
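A minimal sketch of the BCQP setting with an unsafeguarded Barzilai-Borwein step (ours; the paper's GPSR variants add line searches and safeguards):

```python
import numpy as np

def gp_bb(B, c, x0, iters=100):
    """Gradient projection for  min 0.5 x^T B x + c^T x  s.t. x >= 0,
    with a (unsafeguarded) Barzilai-Borwein step length."""
    x = np.maximum(x0, 0.0)
    g = B @ x + c                                # gradient of the quadratic
    alpha = 1.0
    for _ in range(iters):
        x_new = np.maximum(x - alpha * g, 0.0)   # project onto x >= 0
        g_new = B @ x_new + c
        s, y = x_new - x, g_new - g
        sy = s @ y
        alpha = (s @ s) / sy if sy > 1e-12 else 1.0  # BB step length
        x, g = x_new, g_new
    return x

# min x1^2 - 2 x1 + 0.5 x2^2 + x2, x >= 0: minimizer is (1, 0).
B = 2.0 * np.eye(2)
c = np.array([-2.0, 1.0])
print(gp_bb(B, c, np.zeros(2)))  # [1. 0.]
```

Splitting the unknown into positive and negative parts turns the ℓ1-regularized least-squares problem into exactly such a nonnegatively constrained QP.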
Use of the Zero-Norm With Linear Models and Kernel Methods
, 2002
Cited by 115 (4 self)
We explore the use of the so-called zero-norm of the parameters of linear models in learning.
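The quantity in question is simply a count of nonzero parameters; a one-line illustration (ours, not the paper's):

```python
import numpy as np

def zero_norm(w, tol=1e-12):
    """The so-called zero-norm ||w||_0: the number of (numerically)
    nonzero components.  Despite the name it is not a true norm,
    since it is not positively homogeneous."""
    return int((np.abs(w) > tol).sum())

print(zero_norm(np.array([0.0, 3.0, 0.0, -1e-3])))  # 2
```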
Misclassification Minimization
 Journal of Global Optimization
, 1994
Cited by 40 (13 self)
The problem of minimizing the number of misclassified points by a plane, attempting to separate two point sets with intersecting convex hulls in n-dimensional real space, is formulated as a linear program with equilibrium constraints (LPEC). This general LPEC can be converted to an exact penalty problem with a quadratic objective and linear constraints. A Frank-Wolfe-type algorithm is proposed for the penalty problem that terminates at a stationary point or a global solution. Novel aspects of the approach include: (i) a linear complementarity formulation of the step function that "counts" misclassifications; (ii) an exact penalty formulation without boundedness, nondegeneracy or constraint qualification assumptions; (iii) an exact solution extraction from the sequence of minimizers of the penalty function for a finite value of the penalty parameter for the general LPEC, and an explicitly exact solution for the LPEC with uncoupled constraints; and (iv) a parametric quadratic programming form...
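The objective being minimized is the plain step-function count sketched below (ours; the paper's contribution is the LPEC formulation of this count, not the count itself). Here points lying exactly on the plane are treated as misclassified, which is one possible convention:

```python
import numpy as np

def misclassification_count(A, B, w, gamma):
    """Number of points misclassified by the plane w . x = gamma, where
    rows of A should satisfy w . x > gamma and rows of B should satisfy
    w . x < gamma.  Points on the plane count as errors."""
    return int((A @ w <= gamma).sum() + (B @ w >= gamma).sum())

A = np.array([[2.0, 0.0], [3.0, 1.0]])   # should fall on the > side
B = np.array([[0.0, 0.0], [1.0, 0.0]])   # should fall on the < side
w = np.array([1.0, 0.0])
print(misclassification_count(A, B, w, 1.5))  # 0
print(misclassification_count(A, B, w, 2.5))  # 1 (the point [2, 0])
```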
Arbitrary-Norm Separating Plane
 Operations Research Letters
, 1997
Cited by 38 (13 self)
A plane separating two point sets in n-dimensional real space is constructed such that it minimizes the sum of arbitrary-norm distances of misclassified points to the plane. In contrast to previous approaches that used surrogates for distance minimization, the present work is based on a precise norm-dependent explicit closed form for the projection of a point on a plane. This projection is used to formulate the separating-plane problem as a minimization of a convex function on a unit sphere in a norm dual to that of the arbitrary norm used. For the 1-norm, the problem can be solved in polynomial time by solving 2n linear programs or by solving a bilinear program. For a general p-norm, the minimization problem can be transformed via an exact penalty formulation to minimizing the sum of a convex function and a bilinear function on a convex set. For the one and infinity norms, a finite successive linearization algorithm can be used for solving the exact penalty formulation.
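The closed form referred to rests on the classical duality fact that the p-norm distance from a point x to the plane w . z = γ is |w . x − γ| / ‖w‖_q with 1/p + 1/q = 1. A sketch (ours):

```python
import numpy as np

def plane_distance(x, w, gamma, p=2.0):
    """p-norm distance from point x to the plane w . z = gamma, via the
    dual-norm formula |w . x - gamma| / ||w||_q with 1/p + 1/q = 1."""
    if p == 1.0:
        q = np.inf
    elif p == np.inf:
        q = 1.0
    else:
        q = p / (p - 1.0)
    return abs(w @ x - gamma) / np.linalg.norm(w, q)

x = np.array([0.0, 0.0])
w = np.array([3.0, 4.0])
print(plane_distance(x, w, 5.0))          # 1.0  (Euclidean case)
print(plane_distance(x, w, 5.0, p=1.0))   # 1.25 (dual is the inf-norm)
```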
Bilinear Separation of Two Sets in n-Space
 Computational Optimization and Applications
, 1993
Cited by 35 (17 self)
The NP-complete problem of determining whether two disjoint point sets in the n-dimensional real space R^n can be separated by two planes is cast as a bilinear program, that is, minimizing the scalar product of two linear functions on a polyhedral set. The bilinear program, which has a vertex solution, is processed by an iterative linear programming algorithm that terminates in a finite number of steps at a point satisfying a necessary optimality condition or at a global minimum. Encouraging computational experience on a number of test problems is reported.
Parallel Variable Distribution
 SIAM Journal on Optimization
, 1994
Cited by 34 (5 self)
We present an approach for solving optimization problems in which the variables are distributed among p processors. Each processor has primary responsibility for updating its own block of variables in parallel while allowing the remaining variables to change in a restricted fashion (e.g. along a steepest descent, quasi-Newton, or any arbitrary direction). This "forget-me-not" approach is a distinctive feature of our algorithm which has not been analyzed before. The parallelization step is followed by a fast synchronization step wherein the affine hull of the points computed by the parallel processors and the current point is searched for an optimal point. Convergence to a stationary point under continuous differentiability is established for the unconstrained case, as well as a linear convergence rate under the additional assumption of a Lipschitzian gradient and strong convexity. For problems constrained to lie in the Cartesian product of closed convex sets, convergence is established...
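A much-simplified serial sketch of one such step (ours: the parallel block updates are simulated in a loop, and the synchronization here merely keeps the best candidate point rather than searching the full affine hull as the paper does):

```python
import numpy as np

def pvd_step(f, grad, x, blocks, alpha=0.5):
    """One simplified parallel-variable-distribution step: each 'processor'
    owns one index block and takes a gradient step on it while freezing
    the other variables; synchronization keeps the best of the candidate
    points and the current point."""
    g = grad(x)
    candidates = [x]
    for idx in blocks:
        y = x.copy()
        y[idx] -= alpha * g[idx]       # update own block only
        candidates.append(y)
    return min(candidates, key=f)      # crude stand-in for affine-hull search

f = lambda x: float(x @ x)             # toy smooth convex objective
grad = lambda x: 2.0 * x
x = np.array([1.0, 1.0])
for _ in range(2):
    x = pvd_step(f, grad, x, blocks=[[0], [1]])
print(x)  # reaches the minimizer [0, 0] on this toy problem
```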
Generalized linear-quadratic problems of deterministic and stochastic optimal control in discrete time
 SIAM J. Control Opt
, 1990
Cited by 32 (7 self)
Two fundamental classes of problems in large-scale linear and quadratic programming are described. Multistage problems covering a wide variety of models in dynamic programming and stochastic programming are represented in a new way. Strong properties of duality are revealed which support the development of iterative approximate techniques of solution in terms of saddle points. Optimality conditions are derived in a form that emphasizes the possibilities of decomposition.