Results 1–10 of 12
ROBUST PORTFOLIO SELECTION PROBLEMS
, 2003
"... In this paper we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce “u ..."
Abstract

Cited by 160 (8 self)
In this paper we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce “uncertainty structures” for the market parameters and show that the robust portfolio selection problems corresponding to these uncertainty structures can be reformulated as second-order cone programs and, therefore, the computational effort required to solve them is comparable to that required for solving convex quadratic programs. Moreover, we show that these uncertainty structures correspond to confidence regions associated with the statistical procedures employed to estimate the market parameters. Finally, we demonstrate a simple recipe for efficiently computing robust portfolios given raw market data and a desired level of confidence.
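The paper's second-order cone reformulations are beyond a short snippet, but the core idea of optimizing against the worst parameter values in an uncertainty set can be sketched with a toy two-asset example under a hypothetical box uncertainty on mean returns (all numbers invented for illustration; this is not the paper's SOCP formulation):

```python
# Toy robust portfolio sketch: two assets, box uncertainty on mean returns.
# With long-only weights, the worst-case mean return of a portfolio is
# attained at the lower end of each asset's return interval.

mu_hat = [0.10, 0.07]   # estimated mean returns (hypothetical)
delta  = [0.03, 0.01]   # uncertainty half-widths (hypothetical)
sigma2 = [0.05, 0.02]   # variances; correlations ignored for simplicity
lam = 2.0               # risk-aversion parameter

def robust_objective(w1):
    """Worst-case mean return minus a variance penalty."""
    w = [w1, 1.0 - w1]
    worst_mean = sum((m - d) * wi for m, d, wi in zip(mu_hat, delta, w))
    risk = sum(s * wi * wi for s, wi in zip(sigma2, w))
    return worst_mean - lam * risk

# Crude grid search over the two-asset simplex (an SOCP solver would
# handle the general multi-asset, ellipsoidal-uncertainty case).
best_w1 = max((i / 1000 for i in range(1001)), key=robust_objective)
print(f"robust weight on asset 1: {best_w1:.3f}")
```

Shrinking each estimated mean by its uncertainty half-width is exactly the worst case of the box set for nonnegative weights, which is why the robust problem stays as tractable as the nominal one here.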
Generalized Chebyshev bounds via semidefinite programming
, 2007
"... A sharp lower bound on the probability of a set defined by quadratic inequalities, given the first two moments of the distribution, can be efficiently computed using convex optimization. This result generalizes Chebyshev’s inequality for scalar random variables. Two semidefinite programming formul ..."
Abstract

Cited by 22 (1 self)
A sharp lower bound on the probability of a set defined by quadratic inequalities, given the first two moments of the distribution, can be efficiently computed using convex optimization. This result generalizes Chebyshev’s inequality for scalar random variables. Two semidefinite programming formulations are presented, with a constructive proof based on convex optimization duality and elementary linear algebra.
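As a baseline for the moment-based generalization described, the scalar Chebyshev inequality P(|X − μ| ≥ kσ) ≤ 1/k² can be checked empirically with a quick Monte Carlo sketch (standard library only; the distribution here is chosen arbitrarily):

```python
import random
import statistics

# Empirical check of the scalar Chebyshev bound P(|X - mu| >= k*sigma) <= 1/k^2
# for an arbitrary distribution (here: exponential with rate 1).
random.seed(0)
samples = [random.expovariate(1.0) for _ in range(100_000)]
mu = statistics.fmean(samples)
sigma = statistics.pstdev(samples)

for k in (1.5, 2.0, 3.0):
    tail = sum(abs(x - mu) >= k * sigma for x in samples) / len(samples)
    assert tail <= 1.0 / k**2, (k, tail)
    print(f"k={k}: empirical tail {tail:.4f} <= bound {1/k**2:.4f}")
```

The paper's semidefinite programs sharpen this picture: instead of a single scalar bound, they compute the tightest probability bound for a set cut out by quadratic inequalities, given the same first two moments.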
Robustness and the Internet: Theoretical Foundations
, 2002
"... While control and communications theory have played a crucial role throughout in designing aspects of the Internet, a unified and integrated theory of the Internet as a whole has only recently become a practical and achievable research objective. Dramatic progress has been made recently in analytica ..."
Abstract

Cited by 5 (1 self)
While control and communications theory have played a crucial role in designing aspects of the Internet, a unified and integrated theory of the Internet as a whole has only recently become a practical and achievable research objective. Dramatic progress has been made recently in analytical results that provide for the first time a nascent but promising foundation for a rigorous and coherent mathematical theory underpinning Internet technology. This new theory addresses directly the performance and robustness of both the “horizontal” decentralized and asynchronous nature of control in TCP/IP as well as the “vertical” separation into the layers of the TCP/IP protocol stack, from the application layer down to the link layer. These results generalize notions of source and channel coding from information theory as well as decentralized versions of robust control. The new theoretical insights gained about the Internet also combine with our understanding of its origins and evolution to provide a rich source of ideas about complex systems in general. Most surprisingly, our deepening understanding from genomics and molecular biology has revealed that, at the network and protocol level, cells and organisms are strikingly similar to technological networks, despite having completely different material substrates, evolution, and development/construction.
Identification of ARX Models with Time-Varying Bounded Parameters: A Semidefinite Programming Approach
, 2000
"... In this paper, we develop a new identification procedure for ARX models with uncertain, unknownbutbounded timevarying coefficients. The method seeks the smallest ellipsoid containing the coefficients, such that the resulting model in unfalsified by the observed data. The problem is formulated as ..."
Abstract

Cited by 2 (0 self)
In this paper, we develop a new identification procedure for ARX models with uncertain, unknown-but-bounded time-varying coefficients. The method seeks the smallest ellipsoid containing the coefficients, such that the resulting model is unfalsified by the observed data. The problem is formulated as a semidefinite program. We interpret our deterministic method in a probabilistic setting, and show how this interpretation can be used to effectively discard outliers. We generalize the method to multivariate models with unstructured matrix uncertainty. The resulting model is directly amenable to worst-case simulation and robust control techniques.
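The paper's smallest-ellipsoid computation is a semidefinite program, but the underlying set-membership idea, keeping exactly the parameters that the data cannot falsify under a noise bound, can be sketched in one dimension, where the unfalsified set is an interval (synthetic data and a hypothetical noise bound):

```python
import random

# 1-D set-membership sketch: model y_t = a * x_t + e_t with |e_t| <= EPS.
# Each sample constrains the scalar parameter a to an interval; the
# unfalsified set is the intersection of those intervals.  (The paper
# computes the analogous smallest ellipsoid via semidefinite programming.)
EPS = 0.1
A_TRUE = 0.8

random.seed(1)
lo, hi = float("-inf"), float("inf")
for _ in range(200):
    x = random.uniform(0.5, 2.0)              # keep the regressor away from zero
    y = A_TRUE * x + random.uniform(-EPS, EPS)
    lo = max(lo, (y - EPS) / x)               # tighten the lower bound
    hi = min(hi, (y + EPS) / x)               # tighten the upper bound

assert lo <= A_TRUE <= hi
print(f"unfalsified interval for a: [{lo:.4f}, {hi:.4f}]")
```

Because each noise realization lies inside the bound, the true parameter is guaranteed to remain in the intersection; more data can only shrink the set.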
Parameter Estimation with Expected and Residual-at-Risk Criteria
 TO APPEAR IN: SYSTEMS AND CONTROL LETTERS (ELSEVIER)
"... In this paper we study a class of uncertain linear estimation problems in which the data are affected by random uncertainty. In this setting, we consider two estimation criteria, one based on minimization of the expected ℓ1 or ℓ2 norm residual and one based on minimization of the level within which ..."
Abstract

Cited by 2 (0 self)
In this paper we study a class of uncertain linear estimation problems in which the data are affected by random uncertainty. In this setting, we consider two estimation criteria, one based on minimization of the expected ℓ1- or ℓ2-norm residual and one based on minimization of the level within which the ℓ1- or ℓ2-norm residual is guaranteed to lie with an a priori fixed probability (residual at risk). The random uncertainty affecting the data is characterized by means of its first two statistical moments, and the above criteria are intended in a worst-case probabilistic sense; that is, worst-case expectations and probabilities over all possible distributions having the specified moments are considered. The ensuing estimation problems can be solved efficiently via convex programming, yielding exact solutions in the ℓ2-norm case and upper bounds on the optimal solutions in the ℓ1-norm case.
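One reason the expected-residual criterion stays convex: under zero-mean data uncertainty, the expected squared residual equals the nominal one plus a regularization-like term. In the scalar case, E[((a + δ)x − b)²] = (ax − b)² + σ²x² when δ has mean 0 and variance σ², which a quick Monte Carlo sketch confirms (all numbers illustrative):

```python
import random
import statistics

# Scalar sketch: with data uncertainty delta ~ mean 0, variance sigma^2,
#   E[((a + delta) * x - b)^2] = (a*x - b)^2 + sigma^2 * x^2,
# so minimizing the expected squared residual behaves like a Tikhonov-style
# regularization of the nominal least-squares objective.
a, b, x, sigma = 1.0, 2.0, 0.7, 0.3
random.seed(2)
draws = [random.gauss(0.0, sigma) for _ in range(200_000)]
mc = statistics.fmean(((a + d) * x - b) ** 2 for d in draws)
closed_form = (a * x - b) ** 2 + sigma**2 * x**2
print(f"Monte Carlo {mc:.4f} vs closed form {closed_form:.4f}")
```

The cross term vanishes because δ has zero mean; only its variance survives, and that term is what penalizes large estimates.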
ITERATIVE METHODS FOR ROBUST ESTIMATION UNDER BIVARIATE DISTRIBUTIONAL UNCERTAINTY
"... We propose an iterative algorithm to approximate the solution to an optimization problem that arises in estimating the value of a performance metric in a distributionally robust manner. The optimization formulation seeks to find a bivariate distribution that provides the worstcase estimate within a ..."
Abstract

Cited by 1 (1 self)
We propose an iterative algorithm to approximate the solution to an optimization problem that arises in estimating the value of a performance metric in a distributionally robust manner. The optimization formulation seeks to find a bivariate distribution that provides the worst-case estimate within a specified statistical distance from a nominal distribution and satisfies a certain independence condition. This formulation is in general nonconvex, and no closed-form solution is known. We use recent results that characterize the local “sensitivity” of the estimate to the distribution used, and propose an iterative procedure on the space of probability distributions. We establish that the iterates are always feasible and that the sequence provably improves the estimate. We describe conditions under which this sequence can be shown to converge to a locally optimal solution. Numerical experiments illustrate the effectiveness of this approach for a variety of nominal distributions.
Discrete Chebyshev Classifiers
"... In large scale learning problems it is often easy to collect simple statistics of the data, but hard or impractical to store all the original data. A key question in this setting is how to construct classifiers based on such partial information. One traditional approach to the problem has been to us ..."
Abstract

Cited by 1 (0 self)
In large-scale learning problems it is often easy to collect simple statistics of the data, but hard or impractical to store all the original data. A key question in this setting is how to construct classifiers based on such partial information. One traditional approach to the problem has been to use maximum entropy arguments to induce a complete distribution on variables from statistics. However, this approach essentially makes conditional independence assumptions about the distribution and, furthermore, does not optimize prediction loss. Here we present a framework for discriminative learning given a set of statistics. Specifically, we address the case where all variables are discrete and we have access to various marginals. Our approach minimizes the worst-case hinge loss in this setting, which upper-bounds the generalization error. We show that for certain sets of statistics the problem is tractable, and in the general case it can be approximated using MAP LP relaxations. Empirical results show that the method is competitive with other approaches that use the same input.
Some Applications of Semidefinite Optimization from an Operations Research Viewpoint
, 2008
"... This survey paper is intended for the graduate students and researchers who are interested in Operations Research, have solid understanding of linear optimization but are not familiar with Semidefinite Programming (SDP). Here, I provide a very gentle introduction to SDP, some entry points for furthe ..."
Abstract
This survey paper is intended for graduate students and researchers who are interested in Operations Research, have a solid understanding of linear optimization, but are not familiar with Semidefinite Programming (SDP). Here, I provide a very gentle introduction to SDP, some entry points for further exploration of the SDP literature, and brief introductions to selected well-known applications which may be attractive to such an audience and, in turn, motivate them to learn more about semidefinite optimization.
Transductive Minimax Probability Machine
"... Abstract. The Minimax Probability Machine (MPM) is an elegant machine learning algorithm for inductive learning. It learns a classifier that minimizes an upper bound on its own generalization error. In this paper, we extend its celebrated inductive formulation to an equally elegant transductive le ..."
Abstract
The Minimax Probability Machine (MPM) is an elegant machine learning algorithm for inductive learning. It learns a classifier that minimizes an upper bound on its own generalization error. In this paper, we extend its celebrated inductive formulation to an equally elegant transductive learning algorithm. In the transductive setting, the label assignment of a test set is already optimized during training. This optimization problem is an intractable mixed-integer program, so we provide an efficient label-switching approach to solve it approximately. The resulting method scales naturally to large data sets and is very efficient to run. In comparison with nine competitive algorithms on eleven data sets, we show that the proposed Transductive MPM (TMPM) outperforms almost all the other algorithms in both accuracy and speed.
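The label-switching idea, flipping one test label at a time whenever the flip lowers the objective, can be sketched on a toy clustering-style cost (this is a generic local-search illustration, not the TMPM objective itself):

```python
import random

# Generic label-switching sketch: greedily flip one test label at a time
# whenever the flip lowers a surrogate cost, mirroring the approximate
# approach to an otherwise intractable mixed-integer label assignment.
random.seed(3)
points = ([random.gauss(-2, 1) for _ in range(20)]
          + [random.gauss(2, 1) for _ in range(20)])

def cost(labels):
    """Toy surrogate: within-class squared distance to the class mean."""
    total = 0.0
    for c in (0, 1):
        members = [p for p, l in zip(points, labels) if l == c]
        if members:
            m = sum(members) / len(members)
            total += sum((p - m) ** 2 for p in members)
    return total

labels = [random.randint(0, 1) for _ in points]   # random initial assignment
initial_cost = cost(labels)

improved = True
while improved:                                   # terminates: cost strictly
    improved = False                              # decreases on each accepted flip
    for i in range(len(labels)):
        trial = labels.copy()
        trial[i] ^= 1
        if cost(trial) < cost(labels):
            labels, improved = trial, True

print(f"cost: {initial_cost:.3f} -> {cost(labels):.3f}")
```

Each accepted flip strictly lowers a cost that is bounded below, so the loop reaches a local optimum in finitely many passes, which is the same reason the paper's label-switching procedure is cheap to run at scale.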