Results 1–10 of 10
Design of affine controllers via convex optimization
, 2008

Cited by 6 (1 self)
Abstract—We consider a discrete-time time-varying linear dynamical system, perturbed by process noise, with linear noise-corrupted measurements, over a finite horizon. We address the problem of designing a general affine causal controller, in which the control input is an affine function of all previous measurements, in order to minimize a convex objective, in either a stochastic or worst-case setting. This controller design problem is not convex in its natural form, but can be transformed to an equivalent convex optimization problem by a nonlinear change of variables, which allows us to efficiently solve the problem. Our method is related to the classical design procedure for time-invariant, infinite-horizon linear controller design, and the more recent purified output control method. We illustrate the method with applications to supply chain optimization and dynamic portfolio optimization, and show the method can be combined with model predictive control techniques when perfect state information is available. Index Terms—Affine controller, dynamical system, dynamic linear programming (DLP), linear exponential quadratic Gaussian (LEQG), linear quadratic Gaussian (LQG), model predictive control (MPC), proportional-integral-derivative (PID).
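The setting in this abstract (an affine causal controller acting on all past measurements of a noisy linear system) can be caricatured with a short Monte Carlo sketch. The scalar system, noise levels, and hand-picked proportional gain below are hypothetical illustrations; the convex synthesis described in the paper would require an optimization solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative scalar system: x_{t+1} = a*x_t + b*u_t + w_t, measurement y_t = x_t + v_t.
a, b, T = 1.0, 1.0, 5
sigma_w, sigma_v = 0.1, 0.1

def rollout(K, k0, n_trials=2000):
    """Average quadratic cost of the affine policy u_t = K[t] . y_{0..t} + k0[t]."""
    costs = []
    for _ in range(n_trials):
        x, ys, cost = 1.0, [], 0.0
        for t in range(T):
            y = x + sigma_v * rng.standard_normal()
            ys.append(y)
            u = float(np.dot(K[t][: t + 1], ys)) + k0[t]   # affine in all past measurements
            cost += x**2 + 0.1 * u**2
            x = a * x + b * u + sigma_w * rng.standard_normal()
        costs.append(cost + x**2)
    return float(np.mean(costs))

# A naive proportional policy (acts only on the latest measurement) vs. no control.
K_prop = [np.concatenate([np.zeros(t), [-0.8]]) for t in range(T)]
K_zero = [np.zeros(t + 1) for t in range(T)]
zeros = np.zeros(T)
print(rollout(K_prop, zeros), rollout(K_zero, zeros))
```

Even this crude feedback gain cuts the expected cost well below the uncontrolled rollout; the paper's contribution is choosing all the gains jointly and optimally via a convex reformulation.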
Robust Formulations for Handling Uncertainty in Kernel Matrices

Cited by 1 (1 self)
We study the problem of uncertainty in the entries of the kernel matrix arising in the SVM formulation. Using chance-constraint programming and a novel large-deviation inequality, we derive a formulation which is robust to such noise. The resulting formulation applies when the noise is Gaussian or has finite support. The formulation in general is non-convex, but in several cases of interest it reduces to a convex program. The problem of uncertainty in the kernel matrix is motivated by the real-world problem of classifying proteins when the structures are provided with some uncertainty. The formulation derived here naturally incorporates such uncertainty in a principled manner, leading to significant improvements over the state of the art.
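As a loose illustration of what "noise in the kernel matrix" means in practice (this is not the paper's chance-constrained formulation), a common post-hoc repair for a noise-corrupted kernel matrix is projection back onto the positive semidefinite cone by clipping negative eigenvalues:

```python
import numpy as np

def nearest_psd(K):
    """Project a symmetric matrix onto the PSD cone by zeroing negative
    eigenvalues -- a simple repair for a noise-corrupted kernel matrix."""
    Ks = 0.5 * (K + K.T)                      # symmetrize first
    w, V = np.linalg.eigh(Ks)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
K_clean = A @ A.T                             # a valid (PSD) kernel matrix
K_noisy = K_clean + 0.5 * rng.standard_normal((5, 5))  # Gaussian entry noise
K_fix = nearest_psd(K_noisy)
print(np.linalg.eigvalsh(K_fix).min())
```

The robust formulations in the paper instead build the uncertainty into the training problem itself, rather than repairing the matrix before training.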
Interval Data Classification under Partial Information: A Chance-Constraint Approach

Cited by 1 (1 self)
Abstract. This paper presents a novel methodology for constructing maximum-margin classifiers which are robust to interval-valued uncertainty in examples. The idea is to employ chance-constraints which ensure that the uncertain examples are classified correctly with high probability. The key novelty is in employing Bernstein bounding schemes to relax the resulting chance-constrained program as a convex second-order cone program. The Bernstein-based relaxations presented in the paper require knowledge of only the support and mean of the uncertain examples, and make no distributional assumptions regarding the underlying uncertainty. Classifiers built using the proposed methodology model interval-valued uncertainty in a less conservative fashion and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than the state of the art.
On the Power and Limitations of Affine Policies in Two-Stage Adaptive Optimization (submitted to Mathematical Programming)
, 2009
Abstract We consider a two-stage adaptive linear optimization problem under right-hand-side uncertainty with a min-max objective and give a sharp characterization of the power and limitations of affine policies (where the second-stage solution is an affine function of the right-hand-side uncertainty). In particular, we show that the worst-case cost of an optimal affine policy can be Ω(m^(1/2−δ)) times the worst-case cost of an optimal fully adaptable solution for any δ > 0, where m is the number of linear constraints. We also show that the worst-case cost of the best affine policy is O(√m) times the optimal cost when the first-stage constraint matrix has nonnegative coefficients. Moreover, if there are only k ≤ m uncertain parameters, we generalize the performance bound for affine policies to O(√k), which is particularly useful if only a few parameters are uncertain. We also provide an O(√k)-approximation algorithm for the general case without any restriction on the constraint matrix, although the solution is not an affine function of the uncertain parameters. We also give a tight characterization of the conditions under which an affine policy is optimal for the above model. In particular, we show that if the uncertainty set U ⊆ R^m_+ is a simplex, then an affine policy is optimal. However, an affine policy is suboptimal even if U is a convex combination of only (m + 3) extreme points (only two more extreme points than a simplex), and the worst-case cost of an optimal affine policy can be a factor (2 − δ) worse than the worst-case cost of an optimal fully adaptable solution for any δ > 0.
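One small concrete point behind results like these: for a fixed affine second-stage policy and a polytopal uncertainty set, the worst-case cost is attained at an extreme point, so on tiny instances it can be evaluated by vertex enumeration. The policy matrix, offset, cost vector, and simplex below are hypothetical toy data.

```python
import numpy as np

# Affine second-stage policy y(b) = P @ b + q, linear cost c^T y,
# uncertainty set = standard simplex in R^2 (vertices enumerated directly).
P = np.array([[1.0, 0.5],
              [0.0, 2.0]])
q = np.array([0.1, -0.2])
c = np.array([1.0, 1.0])

vertices = [np.zeros(2), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
# Worst case of an affine policy over a polytope occurs at a vertex.
worst = max(c @ (P @ v + q) for v in vertices)
print(worst)
```

The paper's harder question is the reverse direction: how much worse the best such affine policy can be than a fully adaptable second stage, as a function of m and k.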
Robust THP Transceiver Designs for Multiuser MIMO Downlink with Imperfect CSIT
We present robust joint nonlinear transceiver designs for the multiuser multiple-input multiple-output (MIMO) downlink in the presence of imperfections in the channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. The BS employs Tomlinson-Harashima precoding (THP) for inter-user interference pre-cancellation at the transmitter. We consider robust transceiver designs that jointly optimize the transmit THP filters and receive filter for two models of CSIT errors. The first model is a stochastic error (SE) model, where the CSIT error is Gaussian-distributed. This model is applicable when the CSIT error is dominated by channel estimation error. In this case, the proposed robust transceiver design seeks to minimize a stochastic function of the sum mean square error (SMSE) under a constraint on the total BS transmit power. We propose an iterative algorithm to solve this problem. The other model we consider is a norm-bounded error (NBE) model, where the CSIT error can be specified by an uncertainty set. This model is applicable when the CSIT error is dominated by quantization errors. In this case, we consider a worst-case design. For this model, we consider robust (i) minimum SMSE, (ii) MSE-constrained, and (iii) MSE-balancing transceiver designs. We propose iterative algorithms to solve these problems, wherein each iteration involves a pair of semidefinite programs (SDPs). Further, we consider an extension of the proposed algorithm to the case with per-antenna power constraints. We evaluate the robustness of the proposed algorithms to imperfections in CSIT through simulation, and show that the proposed robust designs outperform non-robust designs as well as robust linear transceiver designs reported in the recent literature.
allocation, Wireless networks.
, 2009
Draft, IEEE Transactions on Wireless Communications (revised). This paper considers a spectrum-sharing-based cognitive radio (CR) communication system, which consists of a secondary user (SU) having multiple transmit antennas and a single receive antenna and a primary user (PU) having a single receive antenna. The channel state information (CSI) on the link of the SU is assumed to be perfectly known at the SU transmitter (SU-Tx). However, due to loose cooperation between the SU and the PU, only partial CSI of the link between the SU-Tx and the PU is available at the SU-Tx. With the partial CSI and a prescribed transmit power constraint, our design objective is to determine the transmit signal covariance matrix that maximizes the rate of the SU while keeping the interference power to the PU below a threshold for all possible channel realizations within an uncertainty set. This problem, termed the robust cognitive beamforming problem, can be naturally formulated as a semi-infinite programming (SIP) problem with infinitely many constraints. This problem is first transformed into a second-order cone programming (SOCP) problem and then solved via a standard interior-point algorithm. Then, an analytical solution with much reduced
Robust Portfolio Optimization with Value-at-Risk Adjusted Sharpe Ratios
, 2012
We propose a robust portfolio optimization approach based on Value-at-Risk (VaR) adjusted Sharpe ratios. Traditional Sharpe ratio estimates using a limited series of historical returns are subject to estimation errors. Portfolio optimization based on traditional Sharpe ratios ignores this uncertainty and, as a result, is not robust. In this paper, we propose a robust portfolio optimization model that selects the portfolio with the largest worst-case-scenario Sharpe ratio. We show that this framework is equivalent to maximizing the Sharpe ratio reduced by the VaR of the Sharpe ratio, and highlight the relationship between the VaR-adjusted Sharpe ratios and other modified Sharpe ratios proposed in the literature. In addition, we present both numerical and empirical results comparing optimal portfolios generated by the approach advocated here with those generated by both the traditional and the alternative optimization approaches. Using out-of-sample backtests, we present evidence that the optimization approach advocated here is effective in mitigating market volatility without sacrificing realized returns.
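The flavor of "Sharpe ratio reduced by the VaR of the Sharpe ratio" can be sketched for a single return series using the normal-approximation standard error of the Sharpe estimator (Lo, 2002); this is an illustrative stand-in, not the paper's exact model, and the return series is synthetic.

```python
import numpy as np

def var_adjusted_sharpe(returns):
    """Sketch: sample Sharpe ratio minus a 95% normal-quantile multiple of the
    approximate standard error of the Sharpe estimate (Lo, 2002)."""
    r = np.asarray(returns, dtype=float)
    n = r.size
    sr = r.mean() / r.std(ddof=1)
    se = np.sqrt((1.0 + 0.5 * sr**2) / n)   # approx. std. error of the estimate
    z = 1.6449                               # 95% one-sided normal quantile
    return sr - z * se

rng = np.random.default_rng(1)
rets = rng.normal(0.01, 0.05, size=250)      # one year of hypothetical daily returns
print(var_adjusted_sharpe(rets))
```

The adjusted value is always below the raw sample Sharpe ratio, which is exactly the conservatism the robust optimization exploits when ranking portfolios.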
Robust Least Square Semidefinite Programming with Applications
, 2013
In this paper, we consider a least square semidefinite programming problem under ellipsoidal data uncertainty. We show that the robustification of this uncertain problem can be reformulated as a semidefinite linear programming problem with an additional second-order cone constraint. We then provide an explicit quantitative sensitivity analysis on how the solution under the robustification depends on the size/shape of the ellipsoidal data uncertainty set. Next, we prove that, under suitable constraint qualifications, the reformulation has zero duality gap with its dual problem, even when the primal problem itself is infeasible. The dual problem is equivalent to minimizing a smooth objective function over the Cartesian product of second-order cones and the Euclidean space, which is easy to project onto. Thus, we propose a simple variant of the spectral projected gradient method [7] to solve the dual problem. While it is well-known that any accumulation point of the sequence generated from the algorithm is a dual optimal solution, we show in addition that the dual objective value along the sequence generated converges to a finite value if and only if the primal problem is feasible, again under suitable constraint qualifications. This latter fact leads to a simple certificate for primal
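The second-order cone projection this abstract relies on ("easy to project onto") has a standard closed form, sketched here:

```python
import numpy as np

def project_soc(x, t):
    """Euclidean projection of (x, t) onto the second-order cone
    {(x, t) : ||x||_2 <= t}, via the standard three-case closed form."""
    nx = np.linalg.norm(x)
    if nx <= t:
        return x.copy(), t            # already inside the cone
    if nx <= -t:
        return np.zeros_like(x), 0.0  # polar-cone case: projects to the origin
    alpha = 0.5 * (nx + t)            # boundary case
    return alpha * x / nx, alpha

px, pt = project_soc(np.array([3.0, 4.0]), 0.0)
print(px, pt)
```

Because each coordinate block projects independently in closed form, projected-gradient-type methods on the dual stay cheap per iteration, which is what makes the spectral projected gradient variant attractive here.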
Environ Model Assess, DOI 10.1007/s10666-010-9229-z. Confronting Management Challenges in Highly Uncertain Natural Resource Systems: a Robustness–Vulnerability Tradeoff Approach
, 2009
Abstract This paper presents a framework for the study of policy implementation in highly uncertain natural resource systems in which uncertainty cannot be characterized by probability distributions. We apply the framework to parametric uncertainty in the traditional Gordon–Schaefer model of a fishery to illustrate how performance can be sacrificed (traded off) for reduced sensitivity, and hence increased robustness, with respect to model parameter uncertainty. With sufficient data, our robustness–vulnerability analysis provides tools to discuss policy options. When less data are available, it can be used to inform the early stages of a learning process. Several key insights emerge from this analysis: (1) the classic optimal control policy can be very sensitive to parametric uncertainty, (2) even mild robustness properties are difficult to achieve for the simple Gordon–Schaefer model, and (3) achieving increased robustness with respect to some parameters (e.g., biological parameters) necessarily results in increased
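The performance-versus-robustness tradeoff described above can be caricatured with a toy constant-effort harvest policy in a discrete-time logistic stock model (Gordon–Schaefer-style); the growth-rate range and the two effort levels below are hypothetical, not the paper's calibration.

```python
import numpy as np

def harvest_yield(effort, r, K=1.0, x0=0.5, T=200):
    """Cumulative harvest of a constant-effort policy in a discrete-time
    logistic stock model x' = x + r*x*(1 - x/K) - effort*x."""
    x, total = x0, 0.0
    for _ in range(T):
        h = effort * x
        x = max(x + r * x * (1 - x / K) - h, 0.0)
        total += h
    return total

r_values = np.linspace(0.2, 0.6, 9)   # uncertain intrinsic growth rate
aggressive, cautious = 0.3, 0.15      # hypothetical effort levels
worst_aggr = min(harvest_yield(aggressive, r) for r in r_values)
worst_caut = min(harvest_yield(cautious, r) for r in r_values)
print(worst_aggr, worst_caut)
```

The aggressive policy yields more when growth is fast but collapses the stock at the low end of the growth-rate range, so its worst-case yield falls below the cautious policy's: performance in the nominal case is traded for robustness across the uncertain parameter.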