Optimization of Conditional Value-at-Risk
Journal of Risk, 2000
"... A new approach to optimizing or hedging a portfolio of nancial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional ValueatRisk (CVaR) rather than minimizing ValueatRisk (VaR), but portfolios with low CVaR necessarily have low VaR as well. CVaR ..."
Abstract

Cited by 201 (18 self)
 Add to MetaCart
A new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional Value-at-Risk (CVaR) rather than minimizing Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well. CVaR, also called Mean Excess Loss, Mean Shortfall, or Tail VaR, is in any case considered to be a more consistent measure of risk than VaR. Central to the new approach is a technique for portfolio optimization which calculates VaR and optimizes CVaR simultaneously. This technique is suitable for use by investment companies, brokerage firms, mutual funds, and any business that evaluates risks. It can be combined with analytical or scenario-based methods to optimize portfolios with large numbers of instruments, in which case the calculations often come down to linear programming or nonsmooth programming. The methodology can also be applied to the optimization of percentiles in contexts outside of finance.
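The reduction to tractable optimization rests on the Rockafellar-Uryasev observation that CVaR equals the minimum over a scalar t of t + E[max(loss - t, 0)]/(1 - alpha), which is piecewise linear on sampled scenarios. A minimal sketch of that formula on empirical loss data (the function name and the brute-force search over sample points are illustrative choices, not the paper's implementation):

```python
import numpy as np

def cvar_rockafellar_uryasev(losses, alpha=0.95):
    """Empirical CVaR via the Rockafellar-Uryasev formula:
    CVaR_alpha = min_t  t + E[max(loss - t, 0)] / (1 - alpha).
    A minimizing t is a value-at-risk (an alpha-quantile of the losses)."""
    losses = np.asarray(losses, dtype=float)
    n = losses.size

    def objective(t):
        return t + np.maximum(losses - t, 0.0).sum() / ((1.0 - alpha) * n)

    # The objective is piecewise linear in t, so some minimizer lies at a sample point.
    candidates = np.sort(losses)
    values = [objective(t) for t in candidates]
    k = int(np.argmin(values))
    return candidates[k], values[k]   # (VaR estimate, CVaR estimate)
```

On 100 equally likely losses 1..100 at alpha = 0.9, the minimizing t lands at the 90th-percentile loss and the CVaR is the average of the worst 10% of outcomes.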
Fast Linear Iterations for Distributed Averaging
Systems and Control Letters, 2003
"... We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear ..."
Abstract

Cited by 190 (12 self)
 Add to MetaCart
We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear iteration can be cast as a semidefinite program, and therefore efficiently and globally solved. These optimal linear iterations are often substantially faster than several common heuristics that are based on the Laplacian of the associated graph.
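The Laplacian-based heuristics the abstract compares against are easy to state concretely: take W = I - aL for a small enough constant a and iterate x ← Wx. A minimal numpy sketch (the max-degree stepsize and the function name are illustrative; the paper's point is that the SDP-optimal symmetric W converges faster than such heuristics):

```python
import numpy as np

def averaging_iteration(adjacency, x0, steps=200):
    """Distributed averaging x_{k+1} = W x_k with the constant-weight
    heuristic W = I - a*L, where L is the graph Laplacian and the
    stepsize a = 1/(deg_max + 1) guarantees all non-unit eigenvalues
    of W lie strictly inside the unit circle for a connected graph."""
    A = np.asarray(adjacency, dtype=float)
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                    # graph Laplacian
    a = 1.0 / (deg.max() + 1.0)
    W = np.eye(len(x0)) - a * L             # symmetric, rows sum to 1
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = W @ x                           # each node averages with its neighbors
    return x
```

On a connected graph every node's value converges to the average of the initial values; the convergence rate is governed by the second-largest eigenvalue modulus of W, which is what the semidefinite program in the paper minimizes.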
Incremental Subgradient Methods For Nondifferentiable Optimization, 2001
"... We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to p ..."
Abstract

Cited by 64 (12 self)
 Add to MetaCart
We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. In this paper, we establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems. A particularly interesting discovery is that by randomizing the order of selection of component functions for iteration, the convergence rate is substantially improved. (Research supported by NSF under Grant ACI-9873339.)
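As a toy instance of the scheme, take f(x) = sum_i |x - a_i|, whose minimizer is a median of the a_i; cycling through the components and stepping along each one's subgradient with a diminishing stepsize recovers it. A sketch under these assumptions (the cyclic order and 1/k stepsize are one simple choice among the variants the paper analyzes):

```python
import numpy as np

def incremental_subgradient(anchors, x0=0.0, passes=2000):
    """Minimize f(x) = sum_i |x - a_i| incrementally: take one
    subgradient step per component function, adjusting x after each,
    rather than one step per full sum.  The minimizer of this f is a
    median of the anchors, which makes convergence easy to check."""
    x = float(x0)
    k = 0
    for _ in range(passes):
        for a in anchors:            # one incremental step per component
            k += 1
            step = 1.0 / k           # diminishing stepsize: sum diverges, steps -> 0
            g = np.sign(x - a)       # a subgradient of |x - a| (0 at the kink)
            x -= step * g
    return x
```

With anchors 1, 2, 10 the iterates oscillate around the median 2 with amplitude shrinking like the stepsize, illustrating the convergence behavior the paper establishes rigorously.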
Convex Nondifferentiable Optimization: A Survey Focussed On The Analytic Center Cutting Plane Method, 1999
"... We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a selfcontained convergence analysis, that uses the formalism of the theory of selfconcordant functions, but for the main results, we give direct pr ..."
Abstract

Cited by 51 (2 self)
 Add to MetaCart
We present a survey of nondifferentiable optimization problems and methods, with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems including feasible sets partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.
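The cutting-plane idea underneath ACCPM can be illustrated in one dimension, where the localization set is an interval: each subgradient query cuts away the half that cannot contain a minimizer. A minimal sketch (ACCPM queries the analytic center of a higher-dimensional localization set; querying the midpoint in 1-D, as here, makes the scheme reduce to bisection, so this is an illustration of cutting-plane localization, not of ACCPM itself):

```python
def cutting_plane_1d(subgrad, lo, hi, tol=1e-8):
    """Minimize a convex function on [lo, hi] using only subgradient
    queries.  A query at the center of the current localization
    interval yields a cut: the minimizer lies on the side the
    subgradient points away from, so the other half is discarded."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        g = subgrad(mid)
        if g > 0:            # function increasing here: minimizer is to the left
            hi = mid
        elif g < 0:          # decreasing: minimizer is to the right
            lo = mid
        else:
            return mid       # zero subgradient: mid is optimal
    return 0.5 * (lo + hi)
```

For example, minimizing (x - 3)^2 on [0, 10] with subgradient 2(x - 3) localizes the minimizer 3 to within the tolerance after about 30 cuts.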
Credit risk optimization with Conditional Value-at-Risk criterion, 2001
"... This paper examines a new approach for credit risk optimization. The model is based on the Conditional ValueatRisk (CVaR) risk measure, the expected loss exceeding ValueatRisk. CVaR is also known as Mean Excess, Mean Shortfall, or Tail VaR. This model can simultaneously adjust all positions i ..."
Abstract

Cited by 26 (5 self)
 Add to MetaCart
This paper examines a new approach for credit risk optimization. The model is based on the Conditional Value-at-Risk (CVaR) risk measure, the expected loss exceeding Value-at-Risk. CVaR is also known as Mean Excess, Mean Shortfall, or Tail VaR. This model can simultaneously adjust all positions in a portfolio of financial instruments in order to minimize CVaR subject to trading and return constraints.
Structured and Simultaneous Lyapunov Functions for System Stability Problems, 2001
"... It is shown that many system stability and robustness problems can be reduced to the question of when there is a quadratic Lyapunov function of a certain structure which establishes stability of x = Ax for some appropriate A. The existence of such a Lyapunov function can be determined by solving a c ..."
Abstract

Cited by 26 (4 self)
 Add to MetaCart
It is shown that many system stability and robustness problems can be reduced to the question of whether there is a quadratic Lyapunov function of a certain structure which establishes stability of ẋ = Ax for some appropriate A. The existence of such a Lyapunov function can be determined by solving a convex program. We present several numerical methods for these optimization problems. A simple numerical example is given.
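For a single unstructured quadratic Lyapunov function the convex program degenerates to linear algebra: fix Q ≻ 0, solve the Lyapunov equation AᵀP + PA = -Q for P, and check P ≻ 0, which certifies stability of ẋ = Ax. A small numpy sketch using the column-major vec identity vec(AXB) = (Bᵀ ⊗ A)vec(X) (the structured and simultaneous cases the paper treats genuinely need convex programming):

```python
import numpy as np

def quadratic_lyapunov(A, Q=None):
    """Solve the Lyapunov equation A^T P + P A = -Q for P and report
    whether P is positive definite, certifying stability of xdot = A x.
    With column-major vec, vec(A^T P) = (I kron A^T) vec(P) and
    vec(P A) = (A^T kron I) vec(P), so the equation is a linear system."""
    n = A.shape[0]
    Q = np.eye(n) if Q is None else Q
    I = np.eye(n)
    M = np.kron(I, A.T) + np.kron(A.T, I)
    P = np.linalg.solve(M, -Q.flatten(order="F")).reshape((n, n), order="F")
    P = 0.5 * (P + P.T)                       # symmetrize against round-off
    stable = bool(np.all(np.linalg.eigvalsh(P) > 0))
    return P, stable
```

For the stable matrix A = [[-1, 1], [0, -2]] with Q = I this returns a positive definite P whose residual AᵀP + PA + I vanishes to machine precision.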
Penalty/barrier multiplier algorithm for semidefinite programming
Optimization Methods and Software
"... We present a generalization of the Penalty/Barrier Multiplier algorithm for the semidefinite programming, based on a matrix form of Lagrange multipliers. Our approach allows to use among others logarithmic, shifted logarithmic, exponential and a very effective quadraticlogarithmic penalty/barrier f ..."
Abstract

Cited by 19 (7 self)
 Add to MetaCart
We present a generalization of the Penalty/Barrier Multiplier algorithm for semidefinite programming, based on a matrix form of Lagrange multipliers. Our approach allows the use of, among others, logarithmic, shifted-logarithmic, exponential, and a very effective quadratic-logarithmic penalty/barrier functions. We present a dual analysis of the method, based on its correspondence to a proximal point algorithm with a nonquadratic distance-like function. We give computationally tractable dual bounds, which are produced by the Legendre transformation of the penalty function. Numerical results for large-scale problems from robust control, robust truss topology design, and free material design demonstrate the high efficiency of the algorithm.
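The scalar ancestor of the method is easy to demonstrate: with the exponential penalty φ(t) = e^t - 1, minimize f(x) + pμφ(g(x)/p) over x, then update the multiplier μ ← μ·e^{g(x)/p}. A toy sketch on min x² s.t. x ≥ 1, whose optimum is x* = 1 with multiplier μ* = 2; the problem, the penalty choice, and the Newton inner solver are illustrative stand-ins for the paper's matrix-multiplier semidefinite setting:

```python
import math

def exponential_multiplier_method(outer_iters=30, p=0.5):
    """Solve  min x^2  s.t.  x >= 1  (i.e. g(x) = 1 - x <= 0) with a
    penalty/barrier multiplier scheme: minimize the smooth penalized
    objective  x^2 + p*mu*(exp(g(x)/p) - 1),  then update
    mu <- mu * exp(g(x)/p).  KKT gives x* = 1 and mu* = 2."""
    mu, x = 1.0, 0.0
    for _ in range(outer_iters):
        # inner minimization by Newton's method (objective is strictly convex)
        for _ in range(50):
            e = math.exp((1.0 - x) / p)
            grad = 2.0 * x - mu * e
            hess = 2.0 + (mu / p) * e
            x -= grad / hess
            if abs(grad) < 1e-12:
                break
        mu *= math.exp((1.0 - x) / p)    # multiplier update
    return x, mu
```

The multiplier error contracts linearly (by roughly a factor of 3 per outer iteration on this instance), mirroring the proximal-point interpretation described in the abstract.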
Semidefinite Programming Relaxations and Algebraic Optimization in Control, 2003
"... We present an overview of the essential elements of semide nite programming as a computational tool for the analysis of systems and control problems. We make particular emphasis on general duality properties as providing suboptimality or infeasibility certi cates. Our focus is on the exciting d ..."
Abstract

Cited by 12 (4 self)
 Add to MetaCart
We present an overview of the essential elements of semidefinite programming as a computational tool for the analysis of systems and control problems. We place particular emphasis on general duality properties as providing suboptimality or infeasibility certificates. Our focus is on the exciting developments that have occurred in the last few years, including robust optimization, combinatorial optimization, and algebraic methods such as sum-of-squares. These developments are illustrated with examples of applications to control systems.