Results 11-20 of 85
The Linear Complementarity Problem as a Separable Bilinear Program
 Journal of Global Optimization
, 1995
"... . The nonmonotone linear complementarity problem (LCP) is formulated as a bilinear program with separable constraints and an objective function that minimizesa natural error residual for the LCP. A linearprogrammingbasedalgorithm applied to the bilinear program terminates in a finite number of ste ..."
Abstract

Cited by 18 (4 self)
The nonmonotone linear complementarity problem (LCP) is formulated as a bilinear program with separable constraints and an objective function that minimizes a natural error residual for the LCP. A linear-programming-based algorithm applied to the bilinear program terminates in a finite number of steps at a solution or stationary point of the problem. The bilinear algorithm solved 80 consecutive cases of the LCP formulation of the knapsack feasibility problem ranging in size between 10 and 3000, with an almost constant average number of major iterations equal to four. Keywords: linear complementarity, bilinear programming, knapsack. 1. Introduction It is well known that the linear complementarity problem [4], [16] $0 \le x \perp Mx + q \ge 0$ (1), for a given $n \times n$ real matrix $M$ and a given $n \times 1$ vector $q$, can be written as the bilinear program $\min_{x,w} \{x^T w \mid w = Mx + q,\ x \ge 0,\ w \ge 0\}$ (2). For the case of a general $M$, considered here, the objective function of (2) is nonconvex and the cons...
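A minimal sketch of the formulation (not the paper's LP-based algorithm): at a solution of the LCP, the bilinear objective $x^T w$ with $w = Mx + q$ attains its lower bound of zero. The matrix, vector, and candidate solution below are made-up example data.

```python
# Illustrative check: for the LCP 0 <= x _|_ Mx + q >= 0, a solution
# makes the bilinear objective x^T w (with w = Mx + q) equal zero.
# M, q, and the candidate x are made-up example data.
M = [[2.0, 1.0],
     [1.0, 2.0]]
q = [-4.0, -5.0]

# Candidate solution: Mx + q = 0 with both components positive,
# so w = 0 and complementarity holds trivially.
x = [1.0, 2.0]

w = [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]
objective = sum(x[i] * w[i] for i in range(2))

feasible = all(xi >= 0 for xi in x) and all(wi >= 0 for wi in w)
print(feasible, objective)  # True 0.0
```

For a general matrix M this objective is nonconvex, which is exactly why the paper resorts to a finitely terminating bilinear (alternating LP) scheme rather than convex programming.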
Anytime coordination using separable bilinear programs
 In AAAI
, 2007
"... Developing scalable coordination algorithms for multiagent systems is a hard computational challenge. One useful approach, demonstrated by the Coverage Set Algorithm (CSA), exploits structured interaction to produce significant computational gains. Empirically, CSA exhibits very good anytime perfor ..."
Abstract

Cited by 17 (9 self)
Developing scalable coordination algorithms for multiagent systems is a hard computational challenge. One useful approach, demonstrated by the Coverage Set Algorithm (CSA), exploits structured interaction to produce significant computational gains. Empirically, CSA exhibits very good anytime performance, but an error bound on the results has not been established. We reformulate the algorithm and derive both online and offline error bounds for approximate solutions. Moreover, we propose an effective way to automatically reduce the complexity of the interaction. Our experiments show that this is a promising approach to solve a broad class of decentralized decision problems. The general formulation used by the algorithm makes it both easy to implement and widely applicable to a variety of other AI problems.
Numerical Validation of Solutions of Linear Complementarity Problems
 Numer. Math
, 1997
"... This paper proposes a validation method for solutions of linear complementarity problems. The validation procedure consists in two sufficient conditions that can be tested on a digital computer. If the first condition is satisfied then a given multidimensional interval centered at an approximate sol ..."
Abstract

Cited by 14 (8 self)
This paper proposes a validation method for solutions of linear complementarity problems. The validation procedure consists of two sufficient conditions that can be tested on a digital computer. If the first condition is satisfied, then a given multidimensional interval centered at an approximate solution of the problem is guaranteed to contain an exact solution. If the second condition is satisfied, then the multidimensional interval is guaranteed to contain no exact solution. This study is based on the mean value theorem for absolutely continuous functions and the reformulation of linear complementarity problems as nonsmooth nonlinear systems of equations. 1 Introduction Linear complementarity problems (LCP) model many important problems in engineering, management, and economics. Furthermore, linear and quadratic programming problems can be written as LCP. Several algorithms have been developed for solving LCP [11, 21, 22, 25, 26, 31], but few validation methods have been studied to giv...
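A sketch of the nonsmooth reformulation such validation builds on (the example data is made up, and this is only the residual, not the paper's interval test): x solves the LCP $0 \le x \perp Mx + q \ge 0$ exactly when $H(x) = \min(x, Mx + q) = 0$ componentwise, so $\|H(x)\|$ is a computable residual around an approximate solution.

```python
# The componentwise min-function reformulation of the LCP:
# H(x) = min(x, Mx + q) vanishes exactly at LCP solutions.
def H(M, q, x):
    n = len(q)
    w = [sum(M[i][j] * x[j] for j in range(n)) + q[i] for i in range(n)]
    return [min(x[i], w[i]) for i in range(n)]

# Made-up example: solution is x = (1, 0) with w = (0, 2).
M = [[1.0, 0.0], [0.0, 1.0]]
q = [-1.0, 2.0]

exact = [1.0, 0.0]
approx = [1.001, 0.0]       # nearby approximate solution

print(H(M, q, exact))       # [0.0, 0.0]
print(max(abs(h) for h in H(M, q, approx)))  # small nonzero residual
```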
Weak Univalence and Connectedness of Inverse Images of Continuous Functions
, 1997
"... A continuous function f with domain X and range f(X) in R n is weakly univalent if there is a sequence of continuous onetoone functions on X converging to f uniformly on bounded subsets of X . In this article, we establish, under certain conditions, the connectedness of an inverse image f \Gamm ..."
Abstract

Cited by 14 (1 self)
A continuous function f with domain X and range f(X) in $\mathbb{R}^n$ is weakly univalent if there is a sequence of continuous one-to-one functions on X converging to f uniformly on bounded subsets of X. In this article, we establish, under certain conditions, the connectedness of an inverse image $f^{-1}(q)$. The univalence results of Radulescu and Radulescu, Moré and Rheinboldt, and Gale and Nikaido follow from our main result. We also show that the solution set of a nonlinear complementarity problem corresponding to a continuous $P_0$ function is connected if it contains a nonempty bounded clopen set; in particular, the problem will have a unique solution if it has a locally unique solution.
Some Generalizations Of The Criss-Cross Method For Quadratic Programming
 MATH. OPER. UND STAT. SER. OPTIMIZATION
, 1992
"... Three generalizations of the crisscross method for quadratic programming are presented here. Tucker's, Cottle's and Dantzig's principal pivoting methods are specialized as diagonal and exchange pivots for the linear complementarity problem obtained from a convex quadratic program. A finite criss ..."
Abstract

Cited by 13 (8 self)
Three generalizations of the criss-cross method for quadratic programming are presented here. Tucker's, Cottle's, and Dantzig's principal pivoting methods are specialized as diagonal and exchange pivots for the linear complementarity problem obtained from a convex quadratic program. A finite criss-cross method, based on least-index resolution, is constructed for solving the LCP. In proving finiteness, orthogonality properties of pivot tableaus and positive semidefiniteness of quadratic matrices are used. In the last section, some special cases and two further variants of the quadratic criss-cross method are discussed. If the matrix of the LCP has full rank, then a surprisingly simple algorithm follows, which coincides with Murty's 'Bard type schema' in the P-matrix case.
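This is not the criss-cross method itself, but a brute-force sketch of the object that principal pivoting methods search for efficiently: a complementary index set S with $x_S$ solving $M_{SS} x_S = -q_S$, $x_i = 0$ off S, and $w = Mx + q \ge 0$. All data below is made up, and the enumeration is exponential where pivoting is not.

```python
# Brute-force search over complementary bases of a tiny LCP.
from itertools import combinations

def solve_small(A, b):
    """Solve a 1x1 or 2x2 linear system by Cramer's rule."""
    if len(b) == 1:
        return [b[0] / A[0][0]]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

def lcp_by_enumeration(M, q, tol=1e-12):
    n = len(q)
    for k in range(n + 1):
        for S in combinations(range(n), k):
            x = [0.0] * n
            if S:
                sub = [[M[i][j] for j in S] for i in S]
                xs = solve_small(sub, [-q[i] for i in S])
                if any(v < -tol for v in xs):
                    continue          # basic variables must stay >= 0
                for idx, i in enumerate(S):
                    x[i] = xs[idx]
            w = [sum(M[i][j] * x[j] for j in range(n)) + q[i]
                 for i in range(n)]
            if all(v >= -tol for v in w):
                return x              # complementary and feasible
    return None

M = [[2.0, 1.0], [1.0, 2.0]]          # made-up example data
q = [-4.0, 1.0]
x = lcp_by_enumeration(M, q)
print(x)  # [2.0, 0.0]
```

The pivoting methods in the paper walk between such bases one index change at a time; the least-index rule is what guarantees the walk terminates.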
Linear Complementarity for Regularized Policy Evaluation and Improvement
, 2010
"... Recent work in reinforcement learning has emphasized the power of L1 regularization to perform feature selection and prevent overfitting. We propose formulating the L1 regularized linear fixed point problem as a linear complementarity problem (LCP). This formulation offers several advantages over th ..."
Abstract

Cited by 13 (2 self)
Recent work in reinforcement learning has emphasized the power of L1 regularization to perform feature selection and prevent overfitting. We propose formulating the L1-regularized linear fixed point problem as a linear complementarity problem (LCP). This formulation offers several advantages over the LARS-inspired formulation, LARS-TD. The LCP formulation allows the use of efficient off-the-shelf solvers, leads to a new uniqueness result, and can be initialized with starting points from similar problems (warm starts). We demonstrate that warm starts, as well as the efficiency of LCP solvers, can speed up policy iteration. Moreover, warm starts permit a form of modified policy iteration that can be used to approximate a “greedy” homotopy path, a generalization of the LARS-TD homotopy path that combines policy evaluation and optimization.
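A sketch of one standard way an L1-regularized problem becomes an LCP (the paper's construction for the linear fixed point may differ in detail): for lasso, $\min_b \tfrac12\|Ab - y\|^2 + \lambda\|b\|_1$, split $b = u - v$ with $u, v \ge 0$. The KKT conditions are the LCP with $M = \begin{bmatrix} G & -G \\ -G & G \end{bmatrix}$, $q = (\lambda\mathbf{1} - A^T y;\ \lambda\mathbf{1} + A^T y)$, $G = A^T A$. All data below is made up.

```python
# Build the lasso-as-LCP data and verify the easy case: when
# lam >= max|A^T y|, q >= 0, so z = 0 (i.e. b = 0) solves the LCP.
A = [[1.0, 0.0],
     [0.0, 2.0]]
y = [3.0, 1.0]
m, n = len(A), len(A[0])

G = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(n)]
     for i in range(n)]
Aty = [sum(A[k][i] * y[k] for k in range(m)) for i in range(n)]
lam = 4.0   # >= max|A^T y| = 3.0, so b = 0 is the lasso solution

M = [[G[i][j] for j in range(n)] + [-G[i][j] for j in range(n)]
     for i in range(n)]
M += [[-G[i][j] for j in range(n)] + [G[i][j] for j in range(n)]
      for i in range(n)]
q = [lam - g for g in Aty] + [lam + g for g in Aty]

z = [0.0] * (2 * n)
w = [sum(M[i][j] * z[j] for j in range(2 * n)) + q[i]
     for i in range(2 * n)]
print(q, all(wi >= 0.0 for wi in w))  # q >= 0, so z = 0 is complementary
```

Warm-starting, as the abstract notes, then amounts to handing an LCP solver the z from a nearby $\lambda$ or a similar policy instead of z = 0.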
Superlinear Convergence Of An Algorithm For Monotone Linear Complementarity Problems, When No Strictly Complementary Solution Exists
 Mathematics of Operations Research
, 1996
"... A new predictorcorrector interior point algorithm for solving monotone linear complementarity problems (LCP) is proposed, and it is shown to be superlinearly convergent with at least order 1.5, even if the LCP has no strictly complementary solution. Unlike Mizuno's recent algorithm [16], the fast ..."
Abstract

Cited by 12 (2 self)
A new predictor-corrector interior point algorithm for solving monotone linear complementarity problems (LCP) is proposed, and it is shown to be superlinearly convergent with at least order 1.5, even if the LCP has no strictly complementary solution. Unlike Mizuno's recent algorithm [16], the fast local convergence is attained without any need for estimating the optimal partition. In the special case that a strictly complementary solution does exist, the order of convergence becomes quadratic. The proof relies on an investigation of the asymptotic behavior of first- and second-order derivatives that are associated with trajectories of weighted centers for LCP. AMS 1991 subject classification: 90C33. Key words: monotone linear complementarity problem, primal-dual interior point method, superlinear convergence, central path. 1. Introduction Given $n \times n$ real matrices Q and R and a real vector b of order n, the horizontal linear complementarity problem (LCP) is the problem of fin...
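A rough sketch of the central-path machinery behind such methods: a plain damped Newton iteration with a fixed centering parameter, not the paper's predictor-corrector scheme, and all data is made up. The iterates keep $w = Mx + q$ exactly and drive each product $x_i w_i$ toward a shrinking $\mu$.

```python
# Basic feasible path-following iteration for a tiny monotone LCP.
def solve2(J, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(b[0] * J[1][1] - J[0][1] * b[1]) / det,
            (J[0][0] * b[1] - b[0] * J[1][0]) / det]

M = [[2.0, 1.0], [1.0, 2.0]]   # symmetric positive definite => monotone
q = [-4.0, -5.0]               # solution: x = (1, 2), w = 0

def w_of(x):
    return [sum(M[i][j] * x[j] for j in range(2)) + q[i] for i in range(2)]

x = [3.0, 3.0]                 # strictly feasible start: x > 0, w_of(x) > 0
for _ in range(100):
    w = w_of(x)
    mu = 0.5 * (x[0] * w[0] + x[1] * w[1]) / 2.0   # centering target
    # Newton step for X w = mu*1 with dw = M dx:
    #   (diag(w) + diag(x) M) dx = mu*1 - X w
    J = [[w[0] + x[0] * M[0][0], x[0] * M[0][1]],
         [x[1] * M[1][0], w[1] + x[1] * M[1][1]]]
    dx = solve2(J, [mu - x[0] * w[0], mu - x[1] * w[1]])
    t = 1.0                    # damp to keep x and w strictly positive
    while True:
        xt = [x[i] + t * dx[i] for i in range(2)]
        if all(v > 0.0 for v in xt) and all(v > 0.0 for v in w_of(xt)):
            break
        t *= 0.5
    x = xt

w = w_of(x)
gap = x[0] * w[0] + x[1] * w[1]
print(x, gap)   # x nears the solution; the complementarity gap shrinks to ~0
```

The paper's contribution lies in how the predictor step accelerates exactly this gap reduction, to order 1.5 or better, even without a strictly complementary limit point.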
Solution of Finite-Dimensional Variational Inequalities Using Smooth Optimization with Simple Bounds
, 1997
"... . The variational inequality problem is reduced to an optimization problem with a differentiable objective function and simple bounds. Theoretical results are proved, that relate stationary points of the minimization problem to solutions of the variational inequality problem. Perturbations of the or ..."
Abstract

Cited by 11 (5 self)
The variational inequality problem is reduced to an optimization problem with a differentiable objective function and simple bounds. Theoretical results are proved that relate stationary points of the minimization problem to solutions of the variational inequality problem. Perturbations of the original problem are studied, and an algorithm that uses the smooth minimization approach for solving monotone problems is defined. Key words: variational inequalities, box constrained optimization, complementarity. 1 Introduction Let $\Omega$ be a nonempty, closed, and convex subset of $\mathbb{R}^n$ and $F : \mathbb{R}^n \to \mathbb{R}^n$. The finite-dimensional variational inequality problem, denoted by VIP, is to find a vector $x \in \Omega$ such that $\langle F(x), w - x \rangle \ge 0$ for all $w \in \Omega$. (1) This problem has many interesting applications, and its solution using special techniques has been considered extensively in the literature; see, for example, (Ref. 1) and references therein. The linear and nonlinear comp...
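A sketch of the projection characterization that underlies such reductions (example data made up; this is the classical projected iteration, not the paper's smooth reformulation): when $\Omega$ is a box, $x^*$ solves the VIP iff $x^* = \mathrm{proj}_\Omega(x^* - tF(x^*))$, and for strongly monotone F with a small step t the fixed-point iteration converges.

```python
# Projected fixed-point iteration for a VIP over a box.
lo, hi = [0.0, 0.0], [10.0, 10.0]       # the box Omega (made-up bounds)

def F(x):
    # Made-up strongly monotone affine map: F(x) = Mx - b with M pos. def.
    return [2.0 * x[0] + x[1] - 4.0, x[0] + 2.0 * x[1] - 5.0]

def proj(x):
    # Componentwise projection onto the box.
    return [min(max(x[i], lo[i]), hi[i]) for i in range(2)]

x = [5.0, 5.0]
t = 0.3                                  # step below 2 / (max eigenvalue of M)
for _ in range(200):
    fx = F(x)
    x = proj([x[i] - t * fx[i] for i in range(2)])

print(x)   # approaches the VIP solution (here the interior point (1, 2))
```

The linear and nonlinear complementarity problems of the preceding entries are exactly the special case where the box is the nonnegative orthant.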