Results 1-10 of 84
The Extended Linear Complementarity Problem
, 1993
Abstract

Cited by 539 (23 self)
We consider an extension of the horizontal linear complementarity problem, which we call the extended linear complementarity problem (XLCP). With the aid of a natural bilinear program, we establish various properties of this extended complementarity problem; these include the convexity of the bilinear objective function under a monotonicity assumption, the polyhedrality of the solution set of a monotone XLCP, and an error bound result for a nondegenerate XLCP. We also present a finite, sequential linear programming algorithm for solving the nonmonotone XLCP.
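The ordinary monotone LCP (find x >= 0 with w = Mx + q >= 0 and x^T w = 0) is a special case of the XLCP, and the "natural bilinear program" idea can be illustrated on it: minimize the complementarity gap x^T(Mx + q) over the feasible polyhedron. This is only a toy sketch with made-up data M, q, not the paper's algorithm:

```python
# Toy illustration (not the paper's XLCP method): solve a small monotone LCP
# by minimizing the natural bilinear objective x^T (Mx + q) over the
# polyhedron {x >= 0, Mx + q >= 0}.  M and q are made-up example data.
import numpy as np
from scipy.optimize import minimize

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite => monotone LCP
q = np.array([-1.0, -1.0])

obj = lambda x: x @ (M @ x + q)          # complementarity gap; >= 0 when feasible
res = minimize(
    obj,
    x0=np.array([1.0, 1.0]),
    bounds=[(0, None)] * 2,                                      # x >= 0
    constraints=[{"type": "ineq", "fun": lambda x: M @ x + q}],  # Mx + q >= 0
    method="SLSQP",
)
x = res.x
print(x, obj(x))  # gap ~ 0 at a solution; here x is approximately [1/3, 1/3]
```

A zero objective value certifies an exact LCP solution, which is why the bilinear reformulation is attractive.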
On Projection Algorithms for Solving Convex Feasibility Problems
, 1996
Abstract

Cited by 142 (24 self)
Due to their extraordinary utility and broad applicability in many areas of classical mathematics and modern physical sciences (most notably, computerized tomography), algorithms for solving convex feasibility problems continue to receive great attention. To unify, generalize, and review some of these algorithms, a very broad and flexible framework is investigated. Several crucial new concepts which allow a systematic discussion of questions on behaviour in general Hilbert spaces and on the quality of convergence are brought out. Numerous examples are given. 1991 Mathematics Subject Classification. Primary 47H09, 49M45, 65-02, 65J05, 90C25; Secondary 26B25, 41A65, 46C99, 46N10, 47N10, 52A05, 52A41, 65F10, 65K05, 90C90, 92C55. Key words and phrases. Angle between two subspaces, averaged mapping, Cimmino's method, computerized tomography, convex feasibility problem, convex function, convex inequalities, convex programming, convex set, Fejér monotone sequence, firmly nonexpansive mapping, H...
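The simplest instance of the algorithms this framework unifies is the method of cyclic (von Neumann) projections: alternately project onto each convex set until the iterate lies in the intersection. A minimal sketch with two made-up sets, the unit ball and a halfspace in the plane:

```python
# Cyclic projections onto two convex sets in R^2: the unit ball and the
# halfspace {x : x1 + x2 >= 1}.  The sets are made-up examples; the limit
# is a point of their intersection.
import numpy as np

def proj_ball(x):
    """Projection onto {x : ||x|| <= 1}."""
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def proj_halfspace(x, a=np.array([1.0, 1.0]), b=1.0):
    """Projection onto {x : a.x >= b}."""
    v = a @ x - b
    return x if v >= 0.0 else x - v * a / (a @ a)

x = np.array([3.0, -2.0])
for _ in range(500):                 # alternate projections until convergence
    x = proj_halfspace(proj_ball(x))

print(x)  # (approximately) feasible for both sets
```

The quality-of-convergence questions the abstract mentions concern exactly how fast such iterates approach the intersection, e.g. as a function of the angle between the sets.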
Some Perturbation Theory for Linear Programming
 Mathematical Programming
, 1992
Abstract

Cited by 72 (2 self)
This paper examines a few relations between solution characteristics of an LP and the amount by which the LP must be perturbed to obtain either a primal infeasible LP or a dual infeasible LP. We consider such solution characteristics as the size of the optimal solution and the sensitivity of the optimal value to data perturbations. We show, for example, that an LP has a large optimal solution, or has a sensitive optimal value, only if the instance is nearly primal infeasible or dual infeasible. The results are not particularly surprising but they do formalize an interesting viewpoint which apparently has not been made explicit in the linear programming literature. The results are rather general. Several of the results are valid for linear programs defined in arbitrary real normed spaces. A Hahn-Banach Theorem is the main tool employed in the analysis; given a closed convex set in a normed vector space and a point in the space but not in the set, there exists a continuous linear functional strictly separating the set from the point. We introduce notation, then the results. Let X, Y denote real vector spaces, each with a norm. We use the same notation (i.e., ‖·‖) for all norms, it being clear from context which norm is referred to. Let X
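The viewpoint can be seen numerically in a made-up one-variable LP: the constraint eps*x >= 1 becomes infeasible (0 >= 1) at eps = 0, so for small eps the instance is nearly primal infeasible, and its optimal solution has size ~ 1/eps:

```python
# Made-up illustration: an LP within eps of a primal-infeasible LP has an
# optimal solution of size ~ 1/eps.
# minimize x  subject to  eps * x >= 1,  x >= 0
from scipy.optimize import linprog

for eps in (1e-1, 1e-3, 1e-6):
    res = linprog(c=[1.0], A_ub=[[-eps]], b_ub=[-1.0], bounds=[(0, None)])
    print(eps, res.x[0])  # optimal x = 1/eps: blows up as infeasibility nears
```

The optimal value 1/eps is also maximally sensitive to perturbations of eps, matching the abstract's claim that large or sensitive solutions occur only near infeasibility.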
Smoothing Methods for Convex Inequalities and Linear Complementarity Problems
 Mathematical Programming
, 1993
Abstract

Cited by 62 (6 self)
A smooth approximation p(x, α) to the plus function max{x, 0} is obtained by integrating the sigmoid function 1/(1 + e^{−αx}), commonly used in neural networks. By means of this approximation, linear and convex inequalities are converted into smooth, convex unconstrained minimization problems, the solution of which approximates the solution of the original problem to a high degree of accuracy for α sufficiently large. In the special case when a Slater constraint qualification is satisfied, an exact solution can be obtained for finite α. Speedup over MINOS 5.4 was as high as 515 times for linear inequalities of size 1000 × 1000, and 580 times for convex inequalities with 400 variables. Linear complementarity problems are converted into a system of smooth nonlinear equations and are solved by a quadratically convergent Newton method. For monotone LCPs with as many as 400 variables, the proposed approach was as much as 85 times faster than Lemke's method. Key Words: Smo...
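Integrating the sigmoid gives the closed form p(x, α) = x + (1/α) log(1 + e^{−αx}), a smooth convex upper bound on max{x, 0} whose gap is at most log(2)/α. A quick check of this approximation (a sketch, not the paper's code):

```python
# The smooth plus-function from the abstract: p(x, a) = x + log(1 + e^{-a x})/a,
# the antiderivative of the sigmoid 1/(1 + e^{-a x}).  Its gap to max(x, 0)
# is largest at x = 0, where it equals log(2)/a.
import numpy as np

def p(x, a):
    # logaddexp(0, -a*x) computes log(1 + e^{-a x}) without overflow
    return x + np.logaddexp(0.0, -a * x) / a

xs = np.linspace(-5.0, 5.0, 1001)
for a in (1.0, 10.0, 100.0):
    gap = np.max(p(xs, a) - np.maximum(xs, 0.0))
    print(a, gap)  # gap shrinks like log(2)/a as the smoothing parameter grows
```

Because p is smooth and convex, inequality systems g(x) <= 0 can be solved by minimizing sums of p(g_i(x), α) with standard unconstrained methods, which is the conversion the abstract describes.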
Existence and uniqueness of semimartingale reflecting Brownian motions in convex polyhedrons
 Theory of Probability and Its Applications
, 1995
Abstract

Cited by 51 (14 self)
We consider the problem of existence and uniqueness of semimartingale reflecting Brownian motions (SRBMs) in convex polyhedrons. Loosely speaking, such a process has a semimartingale decomposition such that in the interior of the polyhedron the process behaves like a Brownian motion with a constant drift and covariance matrix, and at each of the (d − 1)-dimensional faces that form the boundary of the polyhedron, the bounded variation part of the process increases in a given direction (constant for any particular face), so as to confine the process to the polyhedron. For historical reasons, this "pushing" at the boundary is called instantaneous reflection. For simple convex polyhedrons, we give a necessary and sufficient condition on the geometric data for the existence and uniqueness of an SRBM. For nonsimple convex polyhedrons, our condition is shown to be sufficient. It is an open question as to whether our condition is also necessary in the nonsimple case. From the uniqueness, it follows that an SRBM defines a strong Markov process. Our results have application to the study of diffusions arising as heavy traffic limits of multiclass queueing networks and in particular, the nonsimple case has application to multiclass fork and join networks. Our proof of weak existence uses a patchwork martingale problem introduced by T. G. Kurtz, whereas uniqueness hinges on an ergodic argument similar to that used by L. M. Taylor and R. J. Williams to prove uniqueness for SRBM's in an orthant.
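The decomposition the abstract describes is easiest to see in one dimension, where the half-line [0, ∞) is the simplest "polyhedron" and the Skorokhod construction makes the pushing term explicit. A toy simulation (parameters are arbitrary, and this is a discretized sketch, not the paper's construction):

```python
# One-dimensional toy: a reflected Brownian motion on [0, inf) via the
# Skorokhod map X = Z + L, where Z is Brownian motion with drift and
# L_t = max(0, -min_{s<=t} Z_s) is the minimal nondecreasing "pushing"
# process, which increases only when X is at the boundary 0.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 10_000, 1e-3
dZ = -0.5 * dt + np.sqrt(dt) * rng.standard_normal(n)  # drift -0.5, unit variance
Z = np.concatenate([[0.0], np.cumsum(dZ)])             # free (unreflected) path
L = np.maximum(0.0, -np.minimum.accumulate(Z))         # bounded-variation pusher
X = Z + L                                              # reflected path

print(X.min())  # X never leaves [0, inf)
```

In d dimensions the same idea applies face by face, with one pushing process per boundary face acting in that face's fixed reflection direction; the paper's condition on the geometric data governs when such a decomposition exists and is unique.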
Superlinear Convergence of a Symmetric Primal-Dual Path-Following Algorithm for Semidefinite Programming
 SIAM Journal on Optimization
, 1996
Abstract

Cited by 50 (5 self)
This paper establishes the superlinear convergence of a symmetric primal-dual path-following algorithm for semidefinite programming under the assumptions that the semidefinite program has a strictly complementary primal-dual optimal solution and that the size of the central path neighborhood tends to zero. The interior point algorithm considered here closely resembles the Mizuno-Todd-Ye predictor-corrector method for linear programming, which is known to be quadratically convergent. It is shown that when the iterates are well centered, the duality gap is reduced superlinearly after each predictor step. Indeed, if each predictor step is succeeded by r consecutive corrector steps, then the predictor reduces the duality gap superlinearly with order 2/(1 + 2^{−2r}). The proof relies on a careful analysis of the central path for semidefinite programming. It is shown that under the strict complementarity assumption, the primal-dual central path converges to the analytic center of the primal-d...
On the Convergence of the Exponential Multiplier Method for Convex Programming
 Mathematical Programming
, 1993
Abstract

Cited by 42 (3 self)
In this paper, we analyze the exponential method of multipliers for convex constrained minimization problems, which operates like the usual Augmented Lagrangian method, except that it uses an exponential penalty function in place of the usual quadratic. We also analyze a dual counterpart, the entropy minimization algorithm, which operates like the proximal minimization algorithm, except that it uses a logarithmic/entropy "proximal" term in place of a quadratic. We strengthen substantially the available convergence results for these methods, and we derive their convergence rate when applied to linear programs.
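The exponential multiplier method can be sketched on a made-up one-dimensional problem: the quadratic penalty of the usual Augmented Lagrangian is replaced by (λ/c)(e^{c g(x)} − 1), and the multiplier update becomes multiplicative, λ ← λ e^{c g(x_k)}. This is an illustrative toy, not the paper's analysis:

```python
# Exponential method of multipliers on a toy problem (made-up data):
#   minimize (x - 2)^2  subject to  x <= 1,   solution x* = 1, multiplier lam* = 2.
# Each step minimizes the exponential augmented Lagrangian, then updates the
# multiplier multiplicatively: lam <- lam * e^{c * g(x_k)}.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: (x - 2.0) ** 2
g = lambda x: x - 1.0           # constraint g(x) <= 0

lam, c = 1.0, 2.0
for _ in range(100):
    L = lambda x: f(x) + (lam / c) * (np.exp(c * g(x)) - 1.0)
    x = minimize_scalar(L, bounds=(-10.0, 10.0), method="bounded").x
    lam *= np.exp(c * g(x))     # exponential multiplier update

print(x, lam)  # approximately 1.0 and 2.0
```

Note the update keeps λ > 0 automatically (no projection onto the nonnegative orthant is needed), which is one practical appeal of the exponential penalty over the quadratic.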
Modifying SQP for degenerate problems
 Preprint ANL/MCS-P699-1097, Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Ill
, 1997
Abstract

Cited by 39 (5 self)
Abstract. Most local convergence analyses of the sequential quadratic programming (SQP) algorithm for nonlinear programming make strong assumptions about the solution, namely, that the active constraint gradients are linearly independent and that there are no weakly active constraints. In this paper, we establish a framework for variants of SQP that retain the characteristic superlinear convergence rate even when these assumptions are relaxed, proving general convergence results and placing some recently proposed SQP variants in this framework. We discuss the reasons for which implementations of SQP often continue to exhibit good local convergence behavior even when the assumptions commonly made in the analysis are violated. Finally, we describe a new algorithm that formalizes and extends standard SQP implementation techniques, and we prove convergence results for this method also. AMS subject classifications. 90C33, 90C30, 49M45 1. Introduction. We
Stabilized Sequential Quadratic Programming
 Computational Optimization and Applications
, 1998
Abstract

Cited by 38 (0 self)
Recently, Wright proposed a stabilized sequential quadratic programming algorithm for inequality constrained optimization. Assuming the Mangasarian-Fromovitz constraint qualification and the existence of a strictly positive multiplier (but possibly dependent constraint gradients), he proved a local quadratic convergence result. In this paper, we establish quadratic convergence in cases where neither strict complementarity nor the Mangasarian-Fromovitz constraint qualification holds. The constraints on the stabilization parameter are relaxed, and linear convergence is demonstrated when the parameter is kept fixed. We show that the analysis of this method can be carried out using recent results for the stability of variational problems. Key words. Sequential quadratic programming, quadratic convergence, superlinear convergence, degenerate optimization, stabilized SQP, error estimation. To appear in Computational Optimization and Applications. This paper is dedicated to Olvi L. Manga...
Stability in the Presence of Degeneracy and Error Estimation
 Mathematical Programming, Series A
, 1997
Abstract

Cited by 36 (3 self)
Given an approximation to a local minimizer of a nonlinear optimization problem and to the associated multipliers, we obtain a tight error estimate in terms of the violation of the first-order conditions. Our results apply to degenerate optimization problems where independence of the active constraint gradients and strict complementarity can be violated. Key words. Stability analysis, perturbation theory, degenerate optimization, error estimation, quadratic program stability, merit functions. January 1997, revised June 6, 1998. Mathematical Programming, 85 (1999), pp. 181-192. This work was supported by the National Science Foundation. 1. Introduction. We obtain estimates for the error in an approximation to the solution to an optimization problem. One of our main objectives is to establish error estimates that apply in situations where the Mangasarian-Fromovitz constraint qualification (MFCQ) does not necessarily hold, or where strict complementarity is violated. For a system o...
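The kind of computable estimate the abstract refers to can be sketched on a made-up degenerate example: a duplicated constraint makes the active gradients dependent (so LICQ fails and the multipliers are non-unique), yet the violation of the first-order (KKT) conditions is still a usable error measure. This is an illustrative toy, not the paper's estimate:

```python
# Made-up degenerate problem:
#   minimize (x1-1)^2 + (x2-1)^2  s.t.  g1 = x1+x2-1 <= 0,  g2 = 2*g1 <= 0.
# The constraint gradients are dependent and the multipliers non-unique,
# but the first-order (KKT) residual below still vanishes exactly at the
# solution x* = (0.5, 0.5) and stays small near it.
import numpy as np

def kkt_residual(x, lam):
    grad_f = 2.0 * (x - 1.0)
    g = np.array([x.sum() - 1.0, 2.0 * (x.sum() - 1.0)])
    G = np.array([[1.0, 1.0], [2.0, 2.0]])      # constraint gradients (dependent)
    stat = grad_f + G.T @ lam                   # stationarity
    return (np.linalg.norm(stat)
            + np.maximum(g, 0.0).sum()          # primal feasibility violation
            + np.maximum(-lam, 0.0).sum()       # dual feasibility violation
            + np.abs(lam * g).sum())            # complementarity violation

x_star = np.array([0.5, 0.5])
print(kkt_residual(x_star, np.array([1.0, 0.0])))  # 0: one valid multiplier pair
print(kkt_residual(x_star, np.array([0.0, 0.5])))  # 0: another valid pair
print(kkt_residual(x_star + 0.01, np.array([1.0, 0.0])))  # small near the solution
```

The paper's contribution is to show that, even in such degenerate situations, the distance to the solution is bounded by (a power of) this first-order residual.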