Results 1–10 of 22
Cutting-set methods for robust convex optimization with pessimizing oracles
Department of Electrical and Computer Engineering, University of California, San Diego, 2011
Abstract

Cited by 11 (6 self)
We consider a general worst-case robust convex optimization problem, with arbitrary dependence on the uncertain parameters, which are assumed to lie in some given set of possible values. We describe a general method for solving such a problem, which alternates between optimization and worst-case analysis. With exact worst-case analysis, the method is shown to converge to a robust optimal point. With approximate worst-case analysis, which is the best we can do in many practical cases, the method seems to work very well in practice, subject to the errors in our worst-case analysis. We give variations on the basic method that can give enhanced convergence, reduce data storage, or improve other algorithm properties. Numerical simulations suggest that the method finds a quite robust solution within a few tens of steps; using warm-start techniques in the optimization steps reduces the overall effort to a modest multiple of solving a nominal problem, ignoring the parameter variation. The method is illustrated with several application examples.
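The alternating structure this abstract describes can be illustrated with a minimal sketch on a made-up one-dimensional problem (minimize the worst case of (x - u)^2 over u in [-1, 1]); the grids, tolerance, and problem are illustrative assumptions, not the paper's examples.

```python
# Cutting-set sketch on a toy robust problem (hypothetical, not from the paper):
# minimize over x the worst case of f(x, u) = (x - u)**2 with u in [-1, 1].
# Alternate an optimization step over a finite scenario set with a
# pessimization (worst-case analysis) step that appends the worst u found.

def f(x, u):
    return (x - u) ** 2

X = [i / 100.0 for i in range(-200, 201)]   # candidate decisions (coarse grid)
U = [i / 100.0 for i in range(-100, 101)]   # uncertainty set [-1, 1] (grid)

scenarios = [0.0]                            # initial finite scenario set
for _ in range(10):
    # optimization step: best x against the scenarios collected so far
    x = min(X, key=lambda x: max(f(x, u) for u in scenarios))
    # pessimization step: worst u for that x
    u_worst = max(U, key=lambda u: f(x, u))
    if f(x, u_worst) <= max(f(x, u) for u in scenarios) + 1e-9:
        break                                # no violated scenario: robustly optimal
    scenarios.append(u_worst)

print(round(x, 2))  # converges to the robust minimizer x = 0 in a few steps
```

In this toy run the scenario set grows to just {0, -1, 1} before the pessimization step finds nothing worse, mirroring the "few tens of steps" behavior the abstract reports on larger problems.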
Tractable approximate robust geometric programming
2005
Abstract

Cited by 9 (1 self)
The optimal solution of a geometric program (GP) can be sensitive to variations in the problem data. Robust geometric programming can systematically alleviate the sensitivity problem by explicitly incorporating a model of data uncertainty in a GP and optimizing for the worst-case scenario under this model. However, it is not known whether a general robust GP can be reformulated as a tractable optimization problem that interior-point or other algorithms can efficiently solve. In this paper we propose an approximation method that seeks a compromise between solution accuracy and computational efficiency. The method is based on approximating the robust GP as a robust linear program (LP), by replacing each nonlinear constraint function with a piecewise-linear (PWL) convex approximation. With a polyhedral or ellipsoidal description of the uncertain data, the resulting robust LP can be formulated as a standard convex optimization problem that interior-point methods can solve. The drawback of this basic method is that the number of terms in the PWL approximations required to obtain an acceptable approximation error can be very large. To overcome the “curse of dimensionality” that arises in directly approximating the nonlinear constraint functions in the original robust GP, we form a conservative approximation of the original robust GP, which contains only bivariate constraint functions. We show how to find globally optimal PWL approximations of these bivariate constraint functions.
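The piecewise-linear idea can be sketched on the two-term log-sum-exp function softplus(t) = log(1 + e^t), the convex form of a GP constraint. Using tangent lines as the PWL pieces (an under-approximation) is a simplification of the paper's globally optimal approximations, and the sample points are arbitrary assumptions.

```python
import math

def softplus(t):
    # log(exp(0) + exp(t)): a two-term log-sum-exp, convex in t
    return math.log1p(math.exp(t))

def tangent(p):
    # tangent line of softplus at p; tangents minorize a convex function
    slope = math.exp(p) / (1.0 + math.exp(p))
    value = softplus(p)
    return lambda t, s=slope, v=value, p=p: v + s * (t - p)

# a handful of tangents give a PWL convex approximation max_k (a_k * t + b_k)
pieces = [tangent(p) for p in (-4, -2, 0, 2, 4)]
pwl = lambda t: max(g(t) for g in pieces)

# maximum approximation error over [-6, 6]; adding pieces shrinks it,
# which is exactly the term-count/accuracy trade-off the abstract discusses
err = max(softplus(t / 10.0) - pwl(t / 10.0) for t in range(-60, 61))
print(0.0 < err < 0.1)
```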
Learning heuristic functions through approximate linear programming
International Conference on Automated Planning and Scheduling (ICAPS), 2008
Abstract

Cited by 4 (1 self)
Planning problems are often formulated as heuristic search. The choice of the heuristic function plays a significant role in the performance of planning systems, but a good heuristic is not always available. We propose a new approach to learning heuristic functions from previously solved problem instances in a given domain. Our approach is based on approximate linear programming, commonly used in reinforcement learning. We show that our approach can be used effectively to learn admissible heuristic estimates and provide an analysis of the accuracy of the heuristic. When applied to common heuristic search problems, this approach reliably produces good heuristic functions.
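A minimal sketch of the admissibility idea in the approximate-LP formulation: choose feature weights that maximize the heuristic subject to h(s) being no larger than the known optimal cost on solved training instances. With a single nonnegative feature the LP collapses to a closed form; the samples and feature below are hypothetical, not from the paper.

```python
# (feature value phi(s), optimal cost-to-go) pairs from solved instances
samples = [(1.0, 3.0), (2.0, 5.0), (4.0, 11.0)]

# one-feature ALP: maximize w subject to w * phi(s) <= cost*(s) for each sample,
# which reduces to the tightest ratio
w = min(cost / phi for phi, cost in samples)

h = lambda phi: w * phi                              # learned heuristic
print(all(h(phi) <= cost for phi, cost in samples))  # admissible on the samples
```

With several features the same constraints form a genuine linear program, solved with a standard LP solver rather than this closed form.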
A General Robust-Optimization Formulation for Nonlinear Programming
J. Optim. Theory Appl., 2004
Abstract

Cited by 4 (1 self)
Most research in robust optimization has so far been focused on inequality-only, convex conic programming with simple linear models for uncertain parameters.
Robust Portfolio Management
2004
Abstract

Cited by 3 (0 self)
In this paper we present robust models for index tracking and active portfolio management. The goal of these models is to control the effect of statistical errors in estimating market parameters on the performance of the portfolio. The proposed models allow one to impose additional side constraints such as bounds on the portfolio holdings, constraints on the portfolio beta, limits on cash exposure, etc. The optimal portfolios are computed by solving second-order cone programs. Since the complexity of solving a second-order cone program is comparable to that of solving a convex quadratic program, it follows that the effort required to compute the optimal robust portfolio is comparable to that of computing the Markowitz optimal portfolio. We report on the performance of our robust strategies in tracking the S&P 500 index over 1994–2003. We find that our robust strategy is able to track the index with a significantly smaller number of assets than a non-robust mean-variance index tracking strategy. We propose a simple strategy for managing the cost of the robust index tracking strategy in markets with transaction costs. Our computational results also suggest that the robust active portfolio management strategy significantly outperforms the S&P 500 index without a significant increase in volatility.
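The second-order-cone term behind such robust models can be illustrated directly: with the mean return confined to an ellipsoid {mu0 + P u : ||u|| <= 1}, the worst-case expected return of weights x is mu0'x - ||P'x||. The numbers below are toy values, not from the paper.

```python
import math

# worst-case expected return under an ellipsoidal estimate of the mean vector:
#   min over mu in {mu0 + P u : ||u|| <= 1} of mu . x  =  mu0 . x - ||P^T x||
# the norm term is exactly what makes the robust model a second-order cone program

mu0 = [0.10, 0.07]                 # nominal expected returns (toy values)
P = [[0.02, 0.0], [0.0, 0.01]]     # "shape" matrix of the uncertainty ellipsoid
x = [0.5, 0.5]                     # portfolio weights

nominal = sum(m * w for m, w in zip(mu0, x))
Ptx = [sum(P[i][j] * x[i] for i in range(2)) for j in range(2)]   # P^T x
worst = nominal - math.sqrt(sum(v * v for v in Ptx))
print(round(worst, 4))             # strictly below the nominal return 0.085
```

A robust portfolio optimizer would maximize this worst-case quantity (or constrain it) over x, subject to the side constraints the abstract mentions.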
Optimization-based Approximate Dynamic Programming
2010
Worst-case violation of sampled convex programs for optimization with uncertainty
Research Report B425, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2006. www.is.titech.ac.jp/research/researchreport/B/index.html
Abstract

Cited by 1 (1 self)
Uncertain programs have been developed to deal with optimization problems including inexact data, i.e., uncertainty. A deterministic approach called robust optimization is commonly applied to solve these problems. Recently, Calafiore and Campi have proposed a randomized approach based on sampling of constraints, where the number of samples is determined so that only a small portion of the original constraints is violated at the randomized solution. Our main concern is not only the probability of violation, but also the degree of violation, i.e., the worst-case violation. We derive an upper bound on the worst-case violation for the sampled convex programs and consider the relation between the probability of violation and the worst-case violation. The probability of violation and the degree of violation are simultaneously bounded by small values when the number of random samples is sufficiently large. Our method is applicable not only to a bounded uncertainty set but also to an unbounded one, such as Gaussian uncertain variables.
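The gap between sampled and robust solutions can be seen on a toy problem (an illustration, not the paper's setting): maximize x subject to a x <= 1 for all a in [0, 1]. Enforcing only sampled constraints overshoots the robust optimum x = 1, and the overshoot is exactly the worst-case violation studied here.

```python
import random

# toy sampled convex program: maximize x s.t. a * x <= 1 for uncertain a in [0, 1]
# robust optimum is x = 1; with N sampled constraints the optimizer takes
# x = 1 / max(sampled a), so some unsampled constraints are violated

random.seed(0)                                   # fixed seed for reproducibility
N = 500
a_samples = [random.random() for _ in range(N)]
x = 1.0 / max(a_samples)                         # sampled-program solution

# worst-case violation over the FULL uncertainty set: max_a (a * x - 1),
# attained at a = 1
worst_violation = x - 1.0
print(0.0 <= worst_violation < 0.05)             # small when N is large
```

Increasing N pushes max(a_samples) toward 1, shrinking both the probability and the degree of violation, which is the simultaneous bound the abstract describes.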
Second Order Cone Programming Formulations for Robust Multiclass Classification
Abstract

Cited by 1 (0 self)
Multiclass classification is an important and ongoing research subject in machine learning. Current support vector methods for multiclass classification implicitly assume that the parameters in the optimization problems are known exactly. However, in practice, the parameters have perturbations, since they are estimated from the training data, which are usually subject to measurement noise. In this paper, we propose linear and nonlinear robust formulations for multiclass classification based on the MSVM method. The preliminary numerical experiments confirm the robustness of the proposed method. Keywords: Multiclass classification; Support vector machine; Second-order cone program; Robust classifier. AMS Subject classification: 65K05, 68T10, 68Q32
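A sketch of the kind of robust margin constraint that leads to a second-order cone program: with feature noise of covariance Sigma, the nominal margin requirement is tightened to y(w.x + b) >= 1 + kappa * ||Sigma^(1/2) w||. The diagonal Sigma and all numbers below are hypothetical.

```python
import math

def robust_margin_ok(w, b, x, y, sigma_diag, kappa):
    # nominal signed margin y * (w . x + b)
    margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    # second-order-cone tightening: kappa * || Sigma^(1/2) w || (diagonal Sigma)
    penalty = kappa * math.sqrt(sum(s * wi * wi for s, wi in zip(sigma_diag, w)))
    return margin >= 1.0 + penalty

w, b = [1.0, -1.0], 0.0
print(robust_margin_ok(w, b, [2.0, -1.0], 1, [0.1, 0.1], 1.0))  # holds
print(robust_margin_ok(w, b, [2.0, -1.0], 1, [0.1, 0.1], 5.0))  # fails: noise too large
```

Collecting one such constraint per training point (or per class pair, in the multiclass case) and minimizing ||w|| yields the second-order cone programs of the title.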
Regular Analog/RF Integrated Circuits Design Using Optimization With Recourse Including Ellipsoidal Uncertainty
2008
Abstract

Cited by 1 (0 self)
Long design cycles due to the inability to predict silicon realities are a well-known problem that plagues analog/RF integrated circuit product development. As this problem worsens for nanoscale IC technologies, the high cost of design and multiple manufacturing spins causes fewer products to have the volume required to support full-custom implementation. Design reuse and analog synthesis make analog/RF design more affordable; however, the increasing process variability and lack of modeling accuracy remain extremely challenging for nanoscale analog/RF design. We propose a regular analog/RF IC design methodology using metal-mask configurability, Optimization with Recourse of Analog Circuits including Layout Extraction (ORACLE), which is a combination of reuse and shared-use, formulating the synthesis problem as an optimization with recourse problem. Using a two-stage geometric programming with recourse approach, ORACLE solves for both the globally optimal shared and application-specific variables. Furthermore, robust optimization is proposed to treat the design-with-variability problem, further enhancing the ORACLE methodology by providing a yield bound for each configuration of regular designs. The statistical variations of the process parameters are captured by a confidence ellipsoid. We demonstrate ORACLE for regular Low Noise Amplifier designs using metal-mask configurability, where a range of applications share a common underlying structure and application-specific customization is performed using the metal-mask layers. Two RF oscillator design examples are shown to achieve robust designs with a guaranteed yield bound. Index Terms—Configurable design, optimization with recourse, robustness, statistical optimization.
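The confidence-ellipsoid model of process variation mentioned above can be sketched as a simple membership test; robust sizing then asks the design constraints to hold for every parameter vector inside the ellipsoid. The diagonal covariance and all numbers below are hypothetical, not from the paper.

```python
# confidence-ellipsoid membership: p is within the modeled variation when
#   (p - p0)^T Sigma^{-1} (p - p0) <= r^2
# shown here for a diagonal Sigma (independent parameter variations)

def in_confidence_ellipsoid(p, p0, sigma_diag, r):
    q = sum((pi - ci) ** 2 / s for pi, ci, s in zip(p, p0, sigma_diag))
    return q <= r * r

p0 = [1.0, 0.5]        # nominal process parameters (toy values)
sigma = [0.04, 0.01]   # per-parameter variances
print(in_confidence_ellipsoid([1.1, 0.55], p0, sigma, 2.0))   # inside
print(in_confidence_ellipsoid([1.5, 0.9], p0, sigma, 2.0))    # outside
```

In the robust geometric-programming setting, each constraint is required to hold at the worst parameter vector inside this ellipsoid, which is what yields the per-configuration yield bound.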