Results 1–10 of 164
LAGRANGE MULTIPLIERS AND OPTIMALITY
, 1993
"... Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write firstorder optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions ..."
Abstract

Cited by 92 (7 self)
 Add to MetaCart
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
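The classical view sketched in this abstract, multipliers as the extra unknowns that turn the first-order optimality conditions into a system of equations, can be made concrete for an equality-constrained quadratic program. The following is an invented toy illustration, not taken from the paper:

```python
import numpy as np

# Toy equality-constrained QP (data invented for illustration):
#   minimize x1^2 + x2^2  subject to  x1 + x2 = 1.
# With multiplier lam, the first-order optimality conditions form the
# linear KKT system  [Q  A^T; A  0] [x; lam] = [c; b].
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # Hessian of the objective
A = np.array([[1.0, 1.0]])               # constraint row: x1 + x2
c = np.zeros(2)                          # linear term (none here)
b = np.array([1.0])                      # constraint right-hand side

K = np.block([[Q, A.T], [A, np.zeros((1, 1))]])
sol = np.linalg.solve(K, np.concatenate([c, b]))
x, lam = sol[:2], sol[2]
# Solution: x = (0.5, 0.5), lam = -1 (for the Lagrangian f + lam*(Ax - b))
```

The multiplier appears here exactly as the abstract describes: an auxiliary unknown solved for alongside x.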
Optimization of Convex Risk Functions
, 2004
"... We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions we develop new representation theorems for risk models, and optimality and duality theory for problems involving risk functio ..."
Abstract

Cited by 51 (11 self)
 Add to MetaCart
We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions we develop new representation theorems for risk models, and optimality and duality theory for problems involving risk functions.
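One widely used convex risk function, chosen here purely for illustration (the paper works in far greater generality, in vector spaces of measurable functions), is Conditional Value-at-Risk, which admits the Rockafellar-Uryasev variational formula CVaR_a(X) = min_t { t + E[(X - t)_+] / (1 - a) }. A Monte Carlo sketch with invented data:

```python
import numpy as np

# Illustrative only: CVaR of simulated losses via the variational formula
#   CVaR_a(X) = min_t { t + E[(X - t)_+] / (1 - a) }.
# The loss distribution, level alpha, and grid are invented.
rng = np.random.default_rng(1)
losses = rng.standard_normal(100_000)
alpha = 0.95

def cvar_objective(t):
    return t + np.mean(np.maximum(losses - t, 0.0)) / (1.0 - alpha)

ts = np.linspace(0.0, 3.0, 601)                # scan the auxiliary variable t
cvar = min(cvar_objective(t) for t in ts)
# For a standard normal, CVaR_0.95 is about 2.06
```

The auxiliary minimization over t is itself a convex problem, which is what makes such risk functions amenable to the duality theory the abstract mentions.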
Nonlinear inverse scale space methods for image restoration
 Communications in Mathematical Sciences
, 2005
"... Abstract. In this paper we generalize the iterated refinement method, introduced by the authors in [8], to a timecontinuous inverse scalespace formulation. The iterated refinement procedure yields a sequence of convex variational problems, evolving toward the noisy image. The inverse scale space m ..."
Abstract

Cited by 49 (12 self)
 Add to MetaCart
In this paper we generalize the iterated refinement method, introduced by the authors in [8], to a time-continuous inverse scale-space formulation. The iterated refinement procedure yields a sequence of convex variational problems, evolving toward the noisy image. The inverse scale space method arises as a limit for a penalization parameter tending to zero, while the number of iteration steps tends to infinity. For the limiting flow, properties similar to those of the iterated refinement procedure hold. Specifically, when a discrepancy principle is used as the stopping criterion, the error between the reconstruction and the noise-free image decreases until termination, even if only the noisy image is available and a bound on the variance of the noise is known. The inverse flow is computed directly for one-dimensional signals, yielding high-quality restorations. In higher spatial dimensions, we introduce a relaxation technique using two evolution equations. These equations allow accurate, efficient and straightforward implementation.
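A minimal one-dimensional caricature of the iterated refinement idea, with a quadratic smoothing penalty J(u) = 0.5*||Du||^2 standing in for the paper's variational model (the operator, signal, noise level, and penalization parameter are all invented for this sketch):

```python
import numpy as np

# 1D caricature of iterated refinement (Bregman-type iteration):
# each step solves a convex problem and adds the residual back,
# stopping by the discrepancy principle. All data invented.
rng = np.random.default_rng(0)
n = 200
t = np.linspace(0.0, 1.0, n)
clean = np.where(t < 0.5, 1.0, 0.0)           # piecewise-constant signal
sigma = 0.1
f = clean + sigma * rng.standard_normal(n)    # noisy observation

D = np.diff(np.eye(n), axis=0)                # forward-difference operator
lam = 5.0                                     # penalization parameter
M = lam * np.eye(n) + D.T @ D                 # normal equations of one step

v = np.zeros(n)
for _ in range(50):
    u = np.linalg.solve(M, lam * (f + v))     # one refinement step
    v += f - u                                # add the residual back
    if np.linalg.norm(u - f) <= sigma * np.sqrt(n):   # discrepancy principle
        break
```

The sequence of reconstructions u evolves toward the noisy image f, exactly the behavior the abstract describes, and the discrepancy principle halts it once the residual reaches the noise level.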
Optimization problems with perturbations: a guided tour
 SIAM REVIEW
, 1996
"... This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods allow to compute expansions of the value function and app ..."
Abstract

Cited by 46 (10 self)
 Add to MetaCart
This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods allow one to compute expansions of the value function and approximate solutions in situations where the set of Lagrange multipliers may be unbounded, or even empty. We give rather complete results for nonlinear programming problems, and describe some partial extensions of the method to more general problems. We illustrate the results by computing the equilibrium position of a chain that is almost vertical or horizontal.
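The kind of value-function expansion referred to above can be checked on a one-dimensional toy problem (invented here, not from the paper): for v(eps) = min{ x^2 : x >= 1 + eps }, the multiplier of the constraint at eps = 0 is lam = 2, and the first-order expansion v(0) + lam*eps is accurate to second order.

```python
# Toy sensitivity check (problem invented): v(eps) = min{ x^2 : x >= 1 + eps }.
# Closed form: v(eps) = (1 + eps)^2 for eps > -1, since the unconstrained
# minimizer x = 0 is infeasible, and the multiplier at eps = 0 is lam = 2.
def v(eps):
    x = max(1.0 + eps, 0.0)
    return x * x

lam = 2.0                                   # multiplier of x >= 1 at x = 1
eps = 1e-2
residual = v(eps) - (v(0.0) + lam * eps)    # exactly eps^2 for this problem
```

The multiplier thus acts as the derivative of the optimal value with respect to the perturbation, the interpretation the survey builds on.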
Asymptotic behavior of statistical estimators and of optimal solutions of stochastic optimization problems
 Annals of Statistics
, 1988
"... Abstract. We study the asymptotic behavior of the statistical estimators that maximize a not necessarily differentiable criterion function, possibly subject to side constraints (equalities and inequalities). The consistency results generalize those of Wald and Huber. Conditions are also given under ..."
Abstract

Cited by 42 (1 self)
 Add to MetaCart
We study the asymptotic behavior of the statistical estimators that maximize a not necessarily differentiable criterion function, possibly subject to side constraints (equalities and inequalities). The consistency results generalize those of Wald and Huber. Conditions are also given under which one is still able to obtain asymptotic normality. The analysis brings to the fore the relationship between the problem of finding statistical estimators and that of finding the optimal solutions of stochastic optimization problems with partial information. The last section is devoted to the properties of the saddle points of the associated Lagrangians.
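The simplest example of an estimator maximizing a nondifferentiable criterion is the sample median. The sketch below (data and grid invented for illustration) maximizes theta -> -sum_i |x_i - theta|, whose kink at every data point puts it exactly in the nonsmooth setting the paper studies:

```python
import numpy as np

# Toy M-estimation with a nondifferentiable criterion (data invented):
# the sample median maximizes theta -> -sum_i |x_i - theta|, and its
# consistency is visible for moderately large n.
rng = np.random.default_rng(2)
x = rng.standard_normal(10_000) + 3.0          # true location parameter: 3.0
grid = np.linspace(2.0, 4.0, 4001)             # candidate values of theta
criterion = np.array([-np.abs(x - g).sum() for g in grid])
est = grid[int(np.argmax(criterion))]
# est agrees with np.median(x) up to the grid resolution
```

Despite the lack of differentiability, the maximizer concentrates around the true location, the kind of consistency the abstract's Wald/Huber generalizations cover.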
M. Kojima, Existence of search directions in interior-point algorithms for the SDP and the monotone SDLCP
 SIAM Journal on Optimization
, 1998
"... ..."
A Minimax Method for Finding Multiple Critical Points and Its Applications to Semilinear PDE
 SIAM J. Sci. Comp
"... Most minimax theorems in critical point theory require one to solve a twolevel global optimization problem and therefore are not for algorithm implementation. The objective of this research is to develop numerical algorithms and corresponding mathematical theory for finding multiple saddle points i ..."
Abstract

Cited by 20 (15 self)
 Add to MetaCart
Most minimax theorems in critical point theory require one to solve a two-level global optimization problem and therefore are not suitable for algorithm implementation. The objective of this research is to develop numerical algorithms and corresponding mathematical theory for finding multiple saddle points in a stable way. In this paper, inspired by the numerical work of Choi-McKenna and Ding-Costa-Chen, and by the idea of defining a solution submanifold, some local minimax theorems are established, which require solving only a two-level local optimization problem. Based on the local theory, a new local numerical minimax method for finding multiple saddle points is developed. The local theory is applied and the numerical method is implemented successfully to solve a class of semilinear elliptic boundary value problems for multiple solutions on some nonconvex, non-star-shaped and multiply connected domains. Numerical solutions are illustrated by their graphics for visualization. In a subsequent paper [20], we establish some convergence results for the algorithm.
Essential smoothness, essential strict convexity, and Legendre functions in Banach spaces
 COMM. CONTEMP. MATH
, 2001
"... The classical notions of essential smoothness, essential strict convexity, and Legendreness for convex functions are extended from Euclidean to Banach spaces. A pertinent duality theory is developed and several useful characterizations are given. The proofs rely on new results on the more subtle beh ..."
Abstract

Cited by 18 (13 self)
 Add to MetaCart
The classical notions of essential smoothness, essential strict convexity, and Legendreness for convex functions are extended from Euclidean to Banach spaces. A pertinent duality theory is developed and several useful characterizations are given. The proofs rely on new results on the more subtle behavior of subdifferentials and directional derivatives at boundary points of the domain. In weak Asplund spaces, a new formula allows the recovery of the subdifferential from nearby gradients. Finally, it is shown that every Legendre function on a reflexive Banach space is zone consistent, a fundamental property in the analysis of optimization algorithms based on Bregman distances. Numerous illustrating examples are provided.
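The Bregman distances mentioned at the end of this abstract can be computed directly from the definition D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>. For the Legendre function f(x) = sum_i x_i log x_i (negative entropy) this recovers the generalized Kullback-Leibler divergence; a small check with invented data:

```python
import numpy as np

# Bregman distance D_f(x, y) = f(x) - f(y) - <grad f(y), x - y> for the
# Legendre function f(x) = sum_i x_i log x_i. Data invented for illustration.
def bregman_neg_entropy(x, y):
    f = lambda z: float(np.sum(z * np.log(z)))
    grad_y = np.log(y) + 1.0                    # gradient of f at y
    return f(x) - f(y) - float(grad_y @ (x - y))

x = np.array([0.2, 0.3, 0.5])
y = np.array([0.4, 0.4, 0.2])
d = bregman_neg_entropy(x, y)
kl = float(np.sum(x * np.log(x / y)) + np.sum(y - x))   # generalized KL form
# d equals kl up to rounding, and d > 0 since x != y
```

Essential smoothness of f is what makes grad f(y), and hence the distance, well defined on the interior of the domain; this is one reason the Legendre property matters for Bregman-distance algorithms.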
Algorithms And Visualization For Solutions Of Nonlinear Elliptic Equations
, 2000
"... this paper, we compute and visualize solutions of several major types of semilinear elliptic boundary value problems with a homogeneous Dirichlet boundary condition in 2D. We present the mountainpass algorithm (MPA), the scaling iterative algorithm (SIA), the monotone iteration and the direct i ..."
Abstract

Cited by 16 (9 self)
 Add to MetaCart
In this paper, we compute and visualize solutions of several major types of semilinear elliptic boundary value problems with a homogeneous Dirichlet boundary condition in 2D. We present the mountain-pass algorithm (MPA), the scaling iterative algorithm (SIA), and the monotone iteration and direct iteration algorithms (MIA and DIA). Semilinear elliptic equations are well known to be rich in their multiplicity of solutions. Many such physically significant solutions are also known to lack stability and, thus, are elusive to capture numerically. We compute and visualize the profiles of such multiple solutions, thereby exhibiting the geometrical effects of the domains on the multiplicity. Special emphasis is placed on SIA and MPA, by which multiple unstable solutions are computed. The domains include the disk, symmetric or nonsymmetric annuli, dumbbells, and dumbbells with cavities. The nonlinear partial differential equations include the Lane-Emden equation, Chandrasekhar's equation, Hénon's equation, a singularly perturbed equation, and equations with sublinear growth. Relevant numerical data of solutions are listed as possible benchmarks for other researchers. Commentaries from the existing literature concerning solution behavior are made wherever appropriate. Some further theoretical properties of the solutions obtained from visualization are also presented.
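In one dimension, the mountain-pass idea behind MPA reduces to something easily visualized: between the two minima of a double-well energy, every admissible path is the segment joining them, so the pass is simply the interior maximizer of the energy. A toy caricature (function and discretization invented, far simpler than the elliptic setting of the paper):

```python
import numpy as np

# 1D caricature of the mountain-pass principle (function invented):
# for f(x) = (x^2 - 1)^2 with minima at x = -1 and x = 1, the mountain
# pass over the segment [-1, 1] is the saddle at x = 0 with f(0) = 1.
f = lambda x: (x**2 - 1.0)**2
xs = np.linspace(-1.0, 1.0, 2001)      # discretized path between the minima
i = int(np.argmax(f(xs)))
saddle_x, saddle_val = float(xs[i]), float(f(xs[i]))
```

In the PDE setting, the "path" lives in a function space and the maximization must be done stably, which is why unstable solutions are elusive and why MPA and SIA are emphasized.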