Results 1–10 of 25
The impact of imperfect scheduling on cross-layer congestion control in wireless networks
, 2005
"... In this paper, we study crosslayer design for congestion control in multihop wireless networks. In previous work, we have developed an optimal crosslayer congestion control scheme that jointly computes both the rate allocation and the stabilizing schedule that controls the resources at the under ..."
Abstract

Cited by 224 (16 self)
 Add to MetaCart
In this paper, we study cross-layer design for congestion control in multi-hop wireless networks. In previous work, we have developed an optimal cross-layer congestion control scheme that jointly computes both the rate allocation and the stabilizing schedule that controls the resources at the underlying layers. However, the scheduling component in this optimal cross-layer congestion control scheme has to solve a complex global optimization problem at each time, and is hence too computationally expensive for online implementation. In this paper, we study how the performance of cross-layer congestion control will be impacted if the network can only use an imperfect (and potentially distributed) scheduling component that is easier to implement. We study both the case when the number of users in the system is fixed and the case with dynamic arrivals and departures of the users, and we establish performance bounds of cross-layer congestion control with imperfect scheduling. Compared with a layered approach that does not design congestion control and scheduling together, our cross-layer approach has provably better performance bounds, and substantially outperforms the layered approach. The insights drawn from our analyses also enable us to design a fully distributed cross-layer congestion control and scheduling algorithm for a restrictive interference model.
Optimization of Conditional Value-at-Risk
 Journal of Risk
, 2000
"... A new approach to optimizing or hedging a portfolio of nancial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional ValueatRisk (CVaR) rather than minimizing ValueatRisk (VaR), but portfolios with low CVaR necessarily have low VaR as well. CVaR ..."
Abstract

Cited by 215 (18 self)
 Add to MetaCart
A new approach to optimizing or hedging a portfolio of financial instruments to reduce risk is presented and tested on applications. It focuses on minimizing Conditional Value-at-Risk (CVaR) rather than minimizing Value-at-Risk (VaR), but portfolios with low CVaR necessarily have low VaR as well. CVaR, also called Mean Excess Loss, Mean Shortfall, or Tail VaR, is moreover considered to be a more consistent measure of risk than VaR. Central to the new approach is a technique for portfolio optimization which calculates VaR and optimizes CVaR simultaneously. This technique is suitable for use by investment companies, brokerage firms, mutual funds, and any business that evaluates risks. It can be combined with analytical or scenario-based methods to optimize portfolios with large numbers of instruments, in which case the calculations often come down to linear programming or nonsmooth programming. The methodology can also be applied to the optimization of percentiles in contexts outside of finance.
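The device at the heart of this approach, the auxiliary function F(x, α) = α + (1/((1−β)N)) Σᵢ max(0, lossᵢ(x) − α), can be illustrated for a fixed portfolio: minimizing F over α alone yields VaR (a minimizer) and CVaR (the minimum value). A minimal sketch, assuming N equally likely discrete loss scenarios; the function name is mine, not the paper's:

```python
def var_cvar(losses, beta):
    """Minimize the Rockafellar-Uryasev auxiliary function
        F(a) = a + sum(max(0, l - a)) / ((1 - beta) * N)
    over a for a fixed sample of losses.  A minimizer estimates the
    VaR at level beta; the minimum value is the corresponding CVaR."""
    n = len(losses)

    def F(a):
        return a + sum(max(0.0, l - a) for l in losses) / ((1.0 - beta) * n)

    # F is piecewise linear, so its minimum is attained at a sample point.
    var = min(losses, key=F)
    return var, F(var)
```

For 100 equally likely losses 1, 2, …, 100 at β = 0.95, this gives CVaR = 98 (the mean of the five worst losses), with the minimizing α anywhere between 95 and 96.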
Fast Linear Iterations for Distributed Averaging
 Systems and Control Letters
, 2003
"... We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear ..."
Abstract

Cited by 203 (11 self)
 Add to MetaCart
We consider the problem of finding a linear iteration that yields distributed averaging consensus over a network, i.e., that asymptotically computes the average of some initial values given at the nodes. When the iteration is assumed symmetric, the problem of finding the fastest converging linear iteration can be cast as a semidefinite program, and therefore efficiently and globally solved. These optimal linear iterations are often substantially faster than several common heuristics that are based on the Laplacian of the associated graph.
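One simple heuristic of the kind the abstract compares against is the Metropolis weight rule, which builds a symmetric, doubly stochastic iteration matrix from local degree information only. A minimal sketch (this is the heuristic, not the SDP-optimal weights of the paper; function names are mine):

```python
def metropolis_weights(n, edges):
    """Build Metropolis averaging weights for an undirected graph:
    W[i][j] = 1 / (1 + max(deg_i, deg_j)) on edges, diagonal chosen so
    each row sums to one.  W is symmetric and doubly stochastic."""
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        w = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][j] = W[j][i] = w
    for i in range(n):
        W[i][i] = 1.0 - sum(W[i])  # remaining mass stays at node i
    return W


def average_consensus(x, W, iters):
    """Run the linear iteration x(t+1) = W x(t); every entry converges
    to the average of the initial values."""
    n = len(x)
    for _ in range(iters):
        x = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
    return x
```

On the path graph 0–1–2 with initial values (0, 3, 6), every node converges to the average 3.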
Incremental Subgradient Methods For Nondifferentiable Optimization
, 2001
"... We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large scale separable problems. The idea is to p ..."
Abstract

Cited by 62 (10 self)
 Add to MetaCart
We consider a class of subgradient methods for minimizing a convex function that consists of the sum of a large number of component functions. This type of minimization arises in a dual context from Lagrangian relaxation of the coupling constraints of large-scale separable problems. The idea is to perform the subgradient iteration incrementally, by sequentially taking steps along the subgradients of the component functions, with intermediate adjustment of the variables after processing each component function. This incremental approach has been very successful in solving large differentiable least squares problems, such as those arising in the training of neural networks, and it has resulted in a much better practical rate of convergence than the steepest descent method. In this paper, we establish the convergence properties of a number of variants of incremental subgradient methods, including some that are stochastic. Based on the analysis and computational experiments, the methods appear very promising and effective for important classes of large problems. A particularly interesting discovery is that by randomizing the order of selection of component functions for iteration, the convergence rate is substantially improved.
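The incremental idea can be shown on a tiny instance: minimize f(x) = Σᵢ |x − aᵢ|, whose minimizer is the median of the aᵢ, by stepping along the subgradient of one component at a time with a diminishing stepsize. A minimal sketch under those assumptions (a deterministic cyclic order, one of the variants the paper analyzes; names are mine):

```python
def incremental_subgradient(anchors, x0, cycles):
    """Minimize f(x) = sum_i |x - a_i| incrementally: in each cycle,
    take one subgradient step per component |x - a_i|, with a
    diminishing stepsize 1/k shared by the k-th cycle."""
    x = x0
    for k in range(1, cycles + 1):
        step = 1.0 / k
        for a in anchors:
            g = (x > a) - (x < a)  # a valid subgradient of |x - a|
            x -= step * g
    return x
```

With anchors (0, 1, 5) the iterates oscillate around the median 1 with an amplitude on the order of the current stepsize, so they home in on the minimizer as the stepsize shrinks.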
Convex Nondifferentiable Optimization: A Survey Focussed On The Analytic Center Cutting Plane Method.
, 1999
"... We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a selfcontained convergence analysis, that uses the formalism of the theory of selfconcordant functions, but for the main results, we give direct pr ..."
Abstract

Cited by 54 (2 self)
 Add to MetaCart
We present a survey of nondifferentiable optimization problems and methods with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results, we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems including feasible sets partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.
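The basic localization idea behind cutting-plane methods is easiest to see in one dimension, where each subgradient cut discards the half of the current interval on which the convex objective can only increase. This toy sketch queries the interval midpoint rather than the analytic center, so it is a caricature of the method surveyed, not ACCPM itself:

```python
def cutting_plane_1d(subgrad, lo, hi, iters):
    """Localize the minimizer of a convex function on [lo, hi]:
    a subgradient g at the midpoint gives the cut that the minimizer
    lies on the side where the function decreases, halving the
    localization set each iteration."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        g = subgrad(mid)
        if g > 0:
            hi = mid  # function increasing at mid: minimizer is left
        elif g < 0:
            lo = mid  # function decreasing at mid: minimizer is right
        else:
            return mid  # zero subgradient: mid is optimal
    return 0.5 * (lo + hi)
```

For f(x) = |x − 2| on [0, 10], sixty cuts localize the minimizer to machine precision. ACCPM replaces the midpoint query with the analytic center of the accumulated cuts, which is what gives the method its efficiency in higher dimensions.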
Credit risk optimization with Conditional Value-at-Risk criterion
, 2001
"... This paper examines a new approach for credit risk optimization. The model is based on the Conditional ValueatRisk (CVaR) risk measure, the expected loss exceeding ValueatRisk. CVaR is also known as Mean Excess, Mean Shortfall, or Tail VaR. This model can simultaneously adjust all positions i ..."
Abstract

Cited by 30 (6 self)
 Add to MetaCart
This paper examines a new approach for credit risk optimization. The model is based on the Conditional Value-at-Risk (CVaR) risk measure, the expected loss exceeding Value-at-Risk. CVaR is also known as Mean Excess, Mean Shortfall, or Tail VaR. This model can simultaneously adjust all positions in a portfolio of financial instruments in order to minimize CVaR subject to trading and return constraints.
Structured and Simultaneous Lyapunov Functions for System Stability Problems
, 2001
"... It is shown that many system stability and robustness problems can be reduced to the question of when there is a quadratic Lyapunov function of a certain structure which establishes stability of x = Ax for some appropriate A. The existence of such a Lyapunov function can be determined by solving a c ..."
Abstract

Cited by 28 (4 self)
 Add to MetaCart
It is shown that many system stability and robustness problems can be reduced to the question of when there is a quadratic Lyapunov function of a certain structure which establishes stability of ẋ = Ax for some appropriate A. The existence of such a Lyapunov function can be determined by solving a convex program. We present several numerical methods for these optimization problems. A simple numerical example is given.
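In the simplest unstructured case, a quadratic Lyapunov function V(x) = xᵀPx for ẋ = Ax can be found by solving the Lyapunov equation AᵀP + PA = −Q for a chosen Q ≻ 0 and checking that P ≻ 0. A minimal 2×2 sketch (the structured and simultaneous cases of the paper instead require a convex/LMI solver; all names here are mine):

```python
def solve3(M, r):
    """Solve a 3x3 linear system by Gaussian elimination with
    partial pivoting."""
    n = 3
    M = [row[:] + [r[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda i: abs(M[i][col]))
        M[col], M[piv] = M[piv], M[col]
        for i in range(col + 1, n):
            f = M[i][col] / M[col][col]
            for j in range(col, n + 1):
                M[i][j] -= f * M[col][j]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x


def lyapunov_2x2(A, Q):
    """Solve A^T P + P A = -Q for symmetric P = [[p11, p12], [p12, p22]]
    (2x2 case), by writing out the three scalar equations in
    (p11, p12, p22) entry by entry."""
    a, b = A[0]
    c, d = A[1]
    M = [[2 * a, 2 * c, 0],       # (1,1) entry of A^T P + P A
         [b, a + d, c],           # (1,2) entry
         [0, 2 * b, 2 * d]]       # (2,2) entry
    r = [-Q[0][0], -Q[0][1], -Q[1][1]]
    p11, p12, p22 = solve3(M, r)
    return [[p11, p12], [p12, p22]]
```

For the stable matrix A = [[0, 1], [−2, −3]] with Q = I, this gives P = [[1.25, 0.25], [0.25, 0.25]], which is positive definite (positive leading minors), certifying stability.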
Penalty/barrier multiplier algorithm for semidefinite programming
 Optimization Methods and Software
"... We present a generalization of the Penalty/Barrier Multiplier algorithm for the semidefinite programming, based on a matrix form of Lagrange multipliers. Our approach allows to use among others logarithmic, shifted logarithmic, exponential and a very effective quadraticlogarithmic penalty/barrier f ..."
Abstract

Cited by 17 (6 self)
 Add to MetaCart
We present a generalization of the Penalty/Barrier Multiplier algorithm for semidefinite programming, based on a matrix form of Lagrange multipliers. Our approach allows the use of, among others, logarithmic, shifted logarithmic, exponential, and a very effective quadratic-logarithmic penalty/barrier function. We present a dual analysis of the method, based on its correspondence to a proximal point algorithm with a nonquadratic distance-like function. We give computationally tractable dual bounds, which are produced by the Legendre transformation of the penalty function. Numerical results for large-scale problems from robust control, robust truss topology design, and free material design demonstrate the high efficiency of the algorithm.
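Penalty/barrier multiplier methods generalize the classical quadratic method of multipliers, whose mechanics can be shown on a scalar problem: minimize x² + y² subject to x + y = 1, alternating an inner minimization of the augmented Lagrangian with a multiplier update. A minimal sketch of the classical quadratic version only (not the paper's matrix penalty/barrier functions); by symmetry the inner minimization has the closed form x = y = (ρ − λ)/(2 + 2ρ):

```python
def method_of_multipliers(rho=1.0, iters=50):
    """Method of multipliers for: minimize x^2 + y^2 s.t. x + y = 1.
    Inner step: minimize x^2 + y^2 + lam*(x + y - 1) + (rho/2)*(x + y - 1)^2,
    which by symmetry gives x = y = (rho - lam) / (2 + 2*rho).
    Outer step: lam += rho * (constraint violation)."""
    lam = 0.0
    for _ in range(iters):
        x = (rho - lam) / (2.0 + 2.0 * rho)
        lam += rho * (2.0 * x - 1.0)
    return x, x, lam
```

The iterates converge linearly to the optimum x = y = 0.5 with multiplier λ = −1; unlike a pure quadratic penalty, the multiplier update reaches the exact solution without driving ρ to infinity.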