Results 1–10 of 109
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
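The classical view described in the opening sentence can be made concrete with a small sketch (the example problem is mine, not from the paper): the multiplier is the auxiliary variable that turns a constrained problem into a square system of first-order equations.

```python
import numpy as np

# Illustrative example: minimize f(x, y) = x^2 + y^2
# subject to g(x, y) = x + y - 1 = 0.
# Stationarity of L(x, y, lam) = f - lam * g gives the linear system
#   2x - lam = 0
#   2y - lam = 0
#   x + y    = 1
A = np.array([[2.0, 0.0, -1.0],
              [0.0, 2.0, -1.0],
              [1.0, 1.0,  0.0]])
b = np.array([0.0, 0.0, 1.0])
x, y, lam = np.linalg.solve(A, b)
print(x, y, lam)  # minimizer (1/2, 1/2) with multiplier lam = 1
```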
Optimization of Convex Risk Functions
, 2004
Cited by 52 (11 self)
We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions we develop new representation theorems for risk models, and optimality and duality theory for problems involving risk functions.
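A standard concrete instance of a convex risk function is Conditional Value-at-Risk; the sketch below (data and parameter values are mine, not the paper's) checks its Rockafellar-Uryasev variational representation, which is the kind of representation theorem the paper studies in generality.

```python
import numpy as np

# CVaR_a(X) = min_t { t + E[(X - t)^+] / (1 - a) }  (Rockafellar-Uryasev).
rng = np.random.default_rng(0)
losses = rng.normal(size=1000)
alpha = 0.9

def rockafellar_uryasev(t):
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

# The objective is piecewise linear in t with kinks at the sample points,
# so scanning the sample finds the exact minimum here.
cvar_min = min(rockafellar_uryasev(t) for t in losses)

# For an empirical distribution with alpha * n an integer, this equals the
# average of the worst (1 - alpha) fraction of the losses.
tail_mean = np.sort(losses)[-100:].mean()
print(cvar_min, tail_mean)  # the two values agree
```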
Nonlinear inverse scale space methods for image restoration
 Communications in Mathematical Sciences
, 2005
Cited by 47 (12 self)
In this paper we generalize the iterated refinement method, introduced by the authors in [8], to a time-continuous inverse scale-space formulation. The iterated refinement procedure yields a sequence of convex variational problems, evolving toward the noisy image. The inverse scale space method arises as a limit for a penalization parameter tending to zero, while the number of iteration steps tends to infinity. For the limiting flow, properties similar to those of the iterated refinement procedure hold. Specifically, when a discrepancy principle is used as the stopping criterion, the error between the reconstruction and the noise-free image decreases until termination, even if only the noisy image is available and a bound on the variance of the noise is known. The inverse flow is computed directly for one-dimensional signals, yielding high-quality restorations. In higher spatial dimensions, we introduce a relaxation technique using two evolution equations. These equations allow accurate, efficient and straightforward implementation.
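The iterated refinement idea can be sketched in one dimension. Note the substitution: the paper works with total variation, while this toy uses a quadratic smoothness penalty so each step is a linear solve; signal, noise level and parameters are all illustrative assumptions.

```python
import numpy as np

# Iterated refinement (Bregman-type): each step solves a convex variational
# problem and adds the residual back, so the iterates evolve toward the
# noisy data f.
n = 200
x = np.linspace(0.0, 1.0, n)
clean = (x > 0.5).astype(float)            # noise-free step signal
rng = np.random.default_rng(1)
f = clean + 0.1 * rng.normal(size=n)       # noisy observation

# Forward-difference operator D; penalty J(u) = (alpha/2) ||D u||^2
# stands in for the total variation used in the paper.
D = np.diff(np.eye(n), axis=0)
alpha, lam = 5.0, 1.0
A = lam * np.eye(n) + alpha * D.T @ D

v = np.zeros(n)
residuals = []
for _ in range(10):
    u = np.linalg.solve(A, lam * (f + v))  # argmin J(u) + (lam/2)||u - f - v||^2
    v = v + f - u
    residuals.append(np.linalg.norm(u - f))

# The discrepancy ||u_k - f|| decreases monotonically toward zero, which is
# what makes a discrepancy-principle stopping rule applicable.
print(residuals)
```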
Optimization Problems with Perturbations: A Guided Tour
 SIAM REVIEW
, 1996
Cited by 46 (10 self)
This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods allow one to compute expansions of the value function and approximate solutions in situations where the set of Lagrange multipliers may be unbounded, or even empty. We give rather complete results for nonlinear programming problems, and describe some partial extensions of the method to more general problems. We illustrate the results by computing the equilibrium position of a chain that is almost vertical or horizontal.
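The simplest instance of such a value-function expansion is the classical fact that, when a well-behaved multiplier exists, it is the first-order sensitivity of the optimal value to the perturbation. A toy check (example problem is mine, not from the paper):

```python
# v(eps) = min { x^2 : x >= 1 + eps }  =>  x*(eps) = 1 + eps,
# v(eps) = (1 + eps)^2, and the KKT multiplier is lam = 2(1 + eps),
# which should match dv/deps.
def v(eps):
    x_star = 1.0 + eps          # the constraint is active at the optimum
    return x_star ** 2

eps, h = 0.3, 1e-6
lam = 2.0 * (1.0 + eps)                    # multiplier from 2x - lam = 0
dv = (v(eps + h) - v(eps - h)) / (2 * h)   # finite-difference slope of v
print(lam, dv)  # the two agree to first order
```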
Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems
 Annals of Statistics
, 1988
Cited by 40 (1 self)
We study the asymptotic behavior of statistical estimators that maximize a not necessarily differentiable criterion function, possibly subject to side constraints (equalities and inequalities). The consistency results generalize those of Wald and Huber. Conditions are also given under which one is still able to obtain asymptotic normality. The analysis brings to the fore the relationship between the problem of finding statistical estimators and that of finding the optimal solutions of stochastic optimization problems with partial information. The last section is devoted to the properties of the saddle points of the associated Lagrangians.
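A familiar estimator of the kind covered here is the sample median, which maximizes the nondifferentiable criterion -sum |x_i - theta|. A Monte Carlo sketch of the consistency the results predict (distribution, sample size and grid are illustrative assumptions of mine):

```python
import numpy as np

# The sample median maximizes a nonsmooth criterion, so classical
# derivative-based M-estimation theory does not apply directly.
rng = np.random.default_rng(2)
sample = rng.laplace(loc=1.5, size=100_000)

# Maximize the nonsmooth criterion over a grid of candidate parameters.
grid = np.linspace(0.0, 3.0, 601)
crit = [-np.abs(sample - t).sum() for t in grid]
theta_hat = grid[int(np.argmax(crit))]

print(theta_hat)  # close to the true location 1.5, as consistency predicts
```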
A Minimax Method for Finding Multiple Critical Points and Its Applications to Semilinear PDE
 SIAM J. Sci. Comp
Cited by 20 (15 self)
Most minimax theorems in critical point theory require one to solve a two-level global optimization problem and are therefore not suitable for algorithm implementation. The objective of this research is to develop numerical algorithms and corresponding mathematical theory for finding multiple saddle points in a stable way. In this paper, inspired by the numerical work of Choi-McKenna and Ding-Costa-Chen and by the idea of defining a solution submanifold, some local minimax theorems are established which require solving only a two-level local optimization problem. Based on the local theory, a new local numerical minimax method for finding multiple saddle points is developed. The local theory is applied and the numerical method is implemented successfully to solve a class of semilinear elliptic boundary value problems for multiple solutions on some nonconvex, non-star-shaped and multi-connected domains. Numerical solutions are illustrated graphically. In a subsequent paper [20], we establish some convergence results for the algorithm.
Existence of Search Directions in Interior-Point Algorithms for the SDP and the Monotone SDLCP
, 1996
Cited by 19 (3 self)
Various search directions used in interior-point algorithms for the SDP (semidefinite program) and the monotone SDLCP (semidefinite linear complementarity problem) are characterized by the intersection of a maximal monotone affine subspace and a maximal and strictly antitone affine subspace. This observation provides a unified geometric view of the existence of those search directions. Key words: Interior-Point Algorithm, Semidefinite Program, Semidefinite Linear Complementarity Problem, Monotonicity.
Maximal monotonicity of dense type, local maximal monotonicity, and monotonicity of the conjugate are all the same for continuous linear operators
 PACIFIC J. MATH
, 1999
Cited by 17 (9 self)
The concept of a monotone operator — which covers both linear positive semidefinite operators and subdifferentials of convex functions — is fundamental in various branches of mathematics. Over the last few decades, several stronger notions of monotonicity have been introduced: Gossez’s maximal monotonicity of dense type, Fitzpatrick and Phelps’s local maximal monotonicity, and Simons’s monotonicity of type (NI). While these monotonicities are automatic for maximal monotone operators in reflexive Banach spaces and for subdifferentials of convex functions, their precise relationship is largely unknown. Here, it is shown — within the beautiful framework of Convex Analysis — that for continuous linear monotone operators, all these notions coincide and are equivalent to the monotonicity of the conjugate operator. This condition is further
Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces
 COMM. CONTEMP. MATH
, 2001
Cited by 17 (12 self)
The classical notions of essential smoothness, essential strict convexity, and Legendreness for convex functions are extended from Euclidean to Banach spaces. A pertinent duality theory is developed and several useful characterizations are given. The proofs rely on new results on the more subtle behavior of subdifferentials and directional derivatives at boundary points of the domain. In weak Asplund spaces, a new formula allows the recovery of the subdifferential from nearby gradients. Finally, it is shown that every Legendre function on a reflexive Banach space is zone consistent, a fundamental property in the analysis of optimization algorithms based on Bregman distances. Numerous illustrating examples are provided.
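The Bregman distances mentioned at the end can be made concrete. A minimal sketch (function and numbers are my illustrative choices, not the paper's): for the Legendre function f(x) = sum_i x_i log x_i (negative entropy), the induced Bregman distance is the generalized Kullback-Leibler divergence.

```python
import numpy as np

# Bregman distance induced by a differentiable convex function f:
#   D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>.
def f(x):
    return np.sum(x * np.log(x))

def grad_f(x):
    return np.log(x) + 1.0

def bregman(x, y):
    return f(x) - f(y) - grad_f(y) @ (x - y)

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.4, 0.4, 0.2])
d = bregman(x, y)

# For negative entropy this reduces to the generalized KL divergence.
kl = np.sum(x * np.log(x / y)) - x.sum() + y.sum()
print(d, kl)  # the two expressions agree; D_f(x, y) >= 0 by convexity
```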