Results 1–10 of 134
“Toward a Theory of Discounted Repeated Games with Imperfect Monitoring”
 Econometrica
, 1990
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 243 (2 self)
 Add to MetaCart
Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at
Lagrange Multipliers and Optimality
, 1993
"... Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write firstorder optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions ..."
Abstract

Cited by 89 (7 self)
Lagrange multipliers used to be viewed as auxiliary variables introduced in a problem of constrained minimization in order to write first-order optimality conditions formally as a system of equations. Modern applications, with their emphasis on numerical methods and more complicated side conditions than equations, have demanded deeper understanding of the concept and how it fits into a larger theoretical picture. A major line of research has been the nonsmooth geometry of one-sided tangent and normal vectors to the set of points satisfying the given constraints. Another has been the game-theoretic role of multiplier vectors as solutions to a dual problem. Interpretations as generalized derivatives of the optimal value with respect to problem parameters have also been explored. Lagrange multipliers are now being seen as arising from a general rule for the subdifferentiation of a nonsmooth objective function which allows black-and-white constraints to be replaced by penalty expressions. This paper traces such themes in the current theory of Lagrange multipliers, providing along the way a free-standing exposition of basic nonsmooth analysis as motivated by and applied to this subject.
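The sensitivity interpretation mentioned in this abstract (multipliers as generalized derivatives of the optimal value with respect to problem parameters) can be illustrated on a small equality-constrained quadratic program. The matrices, the sign convention, and the `solve` helper below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Illustrative equality-constrained QP: minimize (1/2) x^T Q x  subject to A x = b.
# The first-order (KKT) conditions  Q x + A^T lam = 0,  A x = b  form a linear
# system, and the multiplier lam measures how the optimal value reacts to b.
Q = np.diag([2.0, 2.0])            # objective: x1^2 + x2^2
A = np.array([[1.0, 1.0]])         # constraint: x1 + x2 = b

def solve(b):
    n, m = Q.shape[0], A.shape[0]
    K = np.block([[Q, A.T], [A, np.zeros((m, m))]])   # KKT matrix
    rhs = np.concatenate([np.zeros(n), [b]])
    sol = np.linalg.solve(K, rhs)
    x, lam = sol[:n], sol[n:]
    return x, lam, 0.5 * x @ Q @ x                    # minimizer, multiplier, value

x, lam, v = solve(1.0)             # x = (0.5, 0.5), lam = (-1.0,), v = 0.5
eps = 1e-6
dv_db = (solve(1.0 + eps)[2] - solve(1.0 - eps)[2]) / (2 * eps)
# With the Lagrangian L(x, lam) = (1/2) x^T Q x + lam^T (A x - b),
# the value function v(b) = b^2 / 2 satisfies dv/db = -lam here.
```

The finite-difference derivative `dv_db` matches `-lam`, which is the elementary version of the value-function interpretation the abstract refers to.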
Optimization of Convex Risk Functions
, 2004
"... We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions we develop new representation theorems for risk models, and optimality and duality theory for problems involving risk functio ..."
Abstract

Cited by 52 (11 self)
We consider optimization problems involving convex risk functions. By employing techniques of convex analysis and optimization theory in vector spaces of measurable functions we develop new representation theorems for risk models, and optimality and duality theory for problems involving risk functions.
First and Second Order Analysis of Nonlinear Semidefinite Programs
 Mathematical Programming
, 1997
"... In this paper we study nonlinear semidefinite programming problems. Convexity, duality and firstorder optimality conditions for such problems are presented. A secondorder analysis is also given. Secondorder necessary and sufficient optimality conditions are derived. Finally, sensitivity analysi ..."
Abstract

Cited by 47 (11 self)
In this paper we study nonlinear semidefinite programming problems. Convexity, duality and first-order optimality conditions for such problems are presented. A second-order analysis is also given. Second-order necessary and sufficient optimality conditions are derived. Finally, sensitivity analysis of such programs is discussed. Key words: semidefinite programming, cone constraints, convex programming, duality, second-order optimality conditions, tangent cones, optimal value function, sensitivity analysis. AMS subject classification: 90C25, 90C30, 90C31. 1 Introduction. In this paper we consider the following optimization problem (P): min_{x ∈ R^m} f(x) subject to G(x) ⪯ 0. Here G : R^m → S^n is a mapping from R^m into the space S^n of n × n symmetric matrices and, for A, B ∈ S^n, the notation A ⪰ B (the notation A ⪯ B) means that the matrix A − B is positive semidefinite (negative semidefinite). Consider the cone K ⊂ S^n of positive semidefinite matrices. Then the co...
Optimization Problems with Perturbations: A Guided Tour
 SIAM REVIEW
, 1996
"... This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods allow to compute expansions of the value function and app ..."
Abstract

Cited by 46 (10 self)
This paper presents an overview of some recent and significant progress in the theory of optimization with perturbations. We put the emphasis on methods based on upper and lower estimates of the value of the perturbed problems. These methods make it possible to compute expansions of the value function and approximate solutions in situations where the set of Lagrange multipliers may be unbounded, or even empty. We give rather complete results for nonlinear programming problems, and describe some partial extensions of the method to more general problems. We illustrate the results by computing the equilibrium position of a chain that is almost vertical or horizontal.
Polyhedral risk measures in stochastic programming
 SIAM JOURNAL ON OPTIMIZATION
, 2005
"... We consider stochastic programs with risk measures in the objective and study stability properties as well as decomposition structures. Thereby we place emphasis on dynamic models, i.e., multistage stochastic programs with multiperiod risk measures. In this context, we define the class of polyhedra ..."
Abstract

Cited by 36 (10 self)
We consider stochastic programs with risk measures in the objective and study stability properties as well as decomposition structures. We place particular emphasis on dynamic models, i.e., multistage stochastic programs with multiperiod risk measures. In this context, we define the class of polyhedral risk measures such that stochastic programs with risk measures taken from this class have favorable properties. Polyhedral risk measures are defined as optimal values of certain linear stochastic programs where the arguments of the risk measure appear on the right-hand side of the dynamic constraints. Dual representations for polyhedral risk measures are derived and used to deduce criteria for convexity and coherence. As examples of polyhedral risk measures we propose multiperiod extensions of the Conditional Value-at-Risk.
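The best-known one-period example of a risk measure defined as the optimal value of a linear program is the Conditional Value-at-Risk via the Rockafellar–Uryasev formula CVaR_α(Z) = min_t { t + (1/(1−α)) E[(Z − t)_+] }. The sketch below evaluates this formula over a finite scenario set; the `cvar` helper and the discrete setup are illustrative, not the paper's multiperiod construction:

```python
import numpy as np

def cvar(losses, alpha, probs=None):
    # Rockafellar-Uryasev representation: CVaR_alpha(Z) is the optimal value of
    #   min_t  t + (1 / (1 - alpha)) * E[max(Z - t, 0)],
    # a piecewise-linear convex program whose minimum is attained at a
    # scenario value (t = VaR_alpha(Z)), so scanning scenarios suffices.
    losses = np.asarray(losses, dtype=float)
    if probs is None:
        probs = np.full(losses.size, 1.0 / losses.size)  # equally likely scenarios
    candidates = np.unique(losses)
    vals = [t + (probs * np.maximum(losses - t, 0.0)).sum() / (1.0 - alpha)
            for t in candidates]
    return min(vals)

cvar([1.0, 2.0, 3.0, 4.0], alpha=0.75)  # -> 4.0 (mean loss over the worst 25%)
```

For equally likely scenarios, CVaR at level α reduces to the average of the worst (1 − α)-fraction of losses, which is what the scan recovers.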
A Newton-CG augmented Lagrangian method for semidefinite programming
 SIAM J. Optim
"... Abstract. We consider a NewtonCG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresp ..."
Abstract

Cited by 30 (6 self)
We consider a Newton-CG augmented Lagrangian method for solving semidefinite programming (SDP) problems from the perspective of approximate semismooth Newton methods. In order to analyze the rate of convergence of our proposed method, we characterize the Lipschitz continuity of the corresponding solution mapping at the origin. For the inner problems, we show that the positive definiteness of the generalized Hessian of the objective function in these inner problems, a key property for ensuring the efficiency of using an inexact semismooth Newton-CG method to solve the inner problems, is equivalent to the constraint nondegeneracy of the corresponding dual problems. Numerical experiments on a variety of large-scale SDPs with the matrix dimension n up to 4,110 and the number of equality constraints m up to 2,156,544 show that the proposed method is very efficient. We are also able to solve the SDP problem fap36 (with n = 4,110 and m = 1,154,467) in the Seventh DIMACS Implementation Challenge much more accurately than previous attempts.
A quadratically convergent Newton method for computing the nearest correlation matrix
 SIAM J. Matrix Anal. Appl
, 2006
"... The nearest correlation matrix problem is to find a correlation matrix which is closest to a given symmetric matrix in the Frobenius norm. The well studied dual approach is to reformulate this problem as an unconstrained continuously differentiable convex optimization problem. Gradient methods and q ..."
Abstract

Cited by 28 (9 self)
The nearest correlation matrix problem is to find a correlation matrix which is closest to a given symmetric matrix in the Frobenius norm. The well-studied dual approach is to reformulate this problem as an unconstrained continuously differentiable convex optimization problem. Gradient methods and quasi-Newton methods like BFGS have been used directly to obtain globally convergent methods. Since the objective function in the dual approach is not twice continuously differentiable, these methods converge at best linearly. In this paper, we investigate a Newton-type method for the nearest correlation matrix problem. Based on recent developments on strongly semismooth matrix-valued functions, we prove the quadratic convergence of the proposed Newton method. Numerical experiments confirm the fast convergence and the high efficiency of the method. AMS subject classifications: 49M45, 90C25, 90C33.
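For context, a classical linearly convergent approach to this problem is alternating projections with Dykstra's correction, in the style of Higham's method. The `nearest_correlation` function below is an illustrative sketch of that baseline, not the authors' semismooth Newton algorithm:

```python
import numpy as np

def nearest_correlation(A, tol=1e-8, max_iter=500):
    """Dykstra-corrected alternating projections between the PSD cone
    and the set of symmetric matrices with unit diagonal (Higham-style)."""
    Y = A.copy()
    dS = np.zeros_like(A)
    for _ in range(max_iter):
        R = Y - dS                          # apply Dykstra's correction
        w, V = np.linalg.eigh((R + R.T) / 2)
        X = (V * np.maximum(w, 0.0)) @ V.T  # project onto the PSD cone
        dS = X - R                          # update the correction term
        Y = X.copy()
        np.fill_diagonal(Y, 1.0)            # project onto unit-diagonal matrices
        if np.linalg.norm(Y - X, "fro") <= tol * max(np.linalg.norm(Y, "fro"), 1.0):
            break
    return Y

A = np.array([[ 1.0, 0.9, -0.9],
              [ 0.9, 1.0,  0.9],
              [-0.9, 0.9,  1.0]])   # symmetric, unit diagonal, but indefinite
X = nearest_correlation(A)          # nearby PSD matrix with unit diagonal
```

The iteration converges only linearly, which is exactly the gap the paper's quadratically convergent semismooth Newton method closes.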
Joint source-channel coding error exponent for discrete communication systems with Markovian memory
 IEEE Trans. Info. Theory
, 2007
"... Abstract—We investigate the computation of Csiszár’s bounds for the joint source–channel coding (JSCC) error exponent of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formula ..."
Abstract

Cited by 23 (9 self)
We investigate the computation of Csiszár's bounds for the joint source–channel coding (JSCC) error exponent of a communication system consisting of a discrete memoryless source and a discrete memoryless channel. We provide equivalent expressions for these bounds and derive explicit formulas for the rates where the bounds are attained. These equivalent representations can be readily computed for arbitrary source–channel pairs via Arimoto's algorithm. When the channel's distribution satisfies a symmetry property, the bounds admit closed-form parametric expressions. We then use our results to provide a systematic comparison between the JSCC error exponent E_J and the tandem coding error exponent E_T, which applies if the source and channel are separately coded. It is shown that E_J ≤ 2E_T. We establish conditions for which E_J > E_T and for which E_J = 2E_T. Numerical examples indicate that E_J is close to 2E_T for many source–channel pairs. This gain translates into a power saving larger than 2 dB for a binary source transmitted over additive white Gaussian noise (AWGN) channels and Rayleigh-fading channels with finite output quantization. Finally, we study the computation of the lossy JSCC error exponent under the Hamming distortion measure. Index Terms: discrete memoryless sources and channels, error exponent, Fenchel's duality, Hamming distortion measure, joint source–channel coding, random-coding exponent, reliability function, sphere-packing exponent, symmetric channels, tandem source and channel coding.