Results 1 – 4 of 4
Smoothed analysis: an attempt to explain the behavior of algorithms in practice
Commun. ACM, 2009
Abstract

Cited by 13 (0 self)
Many algorithms and heuristics work well on real data, despite having poor complexity under the standard worst-case measure. Smoothed analysis [36] is a step towards a theory that explains the behavior of algorithms in practice. It is based on the assumption that inputs to algorithms are subject to random perturbation and modification in their formation. A concrete example of such a smoothed analysis is a proof that the simplex algorithm for linear programming usually runs in polynomial time, when its input is subject to modeling or measurement noise.
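The gap between worst-case and smoothed behavior can be illustrated with a toy experiment (a sketch, not the paper's simplex analysis — the instance size and noise scale below are arbitrary illustration choices): deterministic first-pivot quicksort does ~n²/2 comparisons on an adversarial sorted input, but Gaussian perturbation of the same input destroys the adversarial structure and restores roughly n log n behavior.

```python
import random

def quicksort_comparisons(a):
    """Count comparisons made by quicksort with a first-element pivot (iterative)."""
    a = list(a)
    comps = 0
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        if lo >= hi:
            continue
        pivot = a[lo]
        i = lo + 1
        for j in range(lo + 1, hi + 1):   # Lomuto-style partition
            comps += 1
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[lo], a[i - 1] = a[i - 1], a[lo]
        stack.append((lo, i - 2))
        stack.append((i, hi))
    return comps

random.seed(0)
n = 400
worst = list(range(n))                               # adversarial: already sorted
smoothed = [x + random.gauss(0, n) for x in worst]   # same input under Gaussian noise

print(quicksort_comparisons(worst))      # ~n^2/2 on the adversarial input
print(quicksort_comparisons(smoothed))   # ~n log n once the noise destroys the order
```

The worst-case input triggers exactly n(n-1)/2 comparisons; the perturbed copy behaves like a typical (smoothed) instance despite being derived from the adversarial one.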
Approximation Algorithms for Offline Risk-averse Combinatorial Optimization, 2010
Abstract

Cited by 1 (0 self)
We consider generic optimization problems that can be formulated as minimizing the cost of a feasible solution w^T x over a combinatorial feasible set F ⊂ {0,1}^n. For these problems we describe a framework of risk-averse stochastic problems where the cost vector W has independent random components, unknown at the time of solution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic cost has a small tail or a small convex combination of mean and standard deviation. Our models can be equivalently reformulated as nonconvex programs for which no efficient algorithms are known. In this paper, we make progress on these hard problems. Our results are several efficient general-purpose approximation schemes. They use as a black box (exact or approximate) the solution to the underlying deterministic problem and thus immediately apply to arbitrary combinatorial problems. For example, from an available δ-approximation algorithm for the linear problem, we construct a δ(1 + ε)-approximation algorithm for the stochastic problem, which invokes the linear algorithm only a number of times logarithmic in the problem input (and polynomial in 1/ε), for any desired accuracy level ε > 0. The algorithms are based on a geometric analysis of the curvature and approximability of the nonlinear level sets of the objective functions.
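The mean-plus-standard-deviation objective can be made concrete with a toy instance (a brute-force sketch for intuition, not the paper's approximation scheme — the feasible set, means, variances, and risk weight below are all invented for illustration). Because the cost components are independent, the variance of a solution is additive, so the objective is computable in closed form for each feasible x:

```python
import itertools
import math

# Toy instance: choose exactly k of n items (a simple combinatorial feasible set F).
# Each item's cost is random with known mean mu[i] and variance var[i]; independence
# makes the solution's variance additive: Var(W^T x) = sum_i var[i] * x[i].
mu  = [4.0, 3.0, 5.0, 2.0, 6.0]
var = [0.1, 4.0, 0.2, 9.0, 0.3]
k, c = 2, 2.0   # pick 2 items; c weights the standard deviation (risk aversion)

def risk_cost(x):
    """Nonconvex risk-averse objective: mean + c * stddev of the solution's cost."""
    mean = sum(m for m, xi in zip(mu, x) if xi)
    variance = sum(v for v, xi in zip(var, x) if xi)
    return mean + c * math.sqrt(variance)

feasible = [x for x in itertools.product([0, 1], repeat=len(mu)) if sum(x) == k]
best = min(feasible, key=risk_cost)
print(best, risk_cost(best))
```

The risk-neutral minimizer would take the two cheapest means (items 1 and 3, total mean 5), but their large combined variance makes them worse under the risk-averse objective, which instead selects the low-variance items 0 and 2. Brute force works only because the toy set is tiny; the abstract's point is to avoid exactly this enumeration.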
An FPTAS for optimizing a class of low-rank functions over a polytope, 2011
Abstract

Cited by 1 (0 self)
We present a fully polynomial time approximation scheme (FPTAS) for optimizing a very general class of nonlinear functions of low rank over a polytope. Our approximation scheme relies on constructing an approximate Pareto-optimal front of the linear functions which constitute the given low-rank function. In contrast to existing results in the literature, our approximation scheme does not require the assumption of quasiconcavity on the objective function. For the special case of quasiconcave function minimization, we give an alternative FPTAS, which always returns a solution that is an extreme point of the polytope. Our technique can also be used to obtain an FPTAS for combinatorial optimization problems with nonlinear objective functions, for example when the objective is a product of a fixed number of linear functions. We also show that it is not possible to approximate the minimum of a general concave function over the unit hypercube to within any factor, unless P = NP. We prove this by showing a similar hardness-of-approximation result for supermodular function minimization, a result that may be of independent interest.
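The Pareto-front idea behind the scheme can be sketched in a discretized rank-2 setting (a toy over a finite candidate set — the paper works over a polytope, and the candidate points and objective below are invented for illustration): a low-rank objective f(x) = g(a₁·x, a₂·x) depends on x only through two linear forms, so when g is increasing in each argument it suffices to search the Pareto-optimal front of the pairs (a₁·x, a₂·x).

```python
def pareto_front(points):
    """Lower-left Pareto front for minimization in both coordinates."""
    front, best_y = [], float("inf")
    for x, y in sorted(points):
        if y < best_y:          # not dominated by any point with smaller first coord
            front.append((x, y))
            best_y = y
    return front

# Hypothetical values of the two linear forms (a1.x, a2.x) over feasible candidates.
candidates = [(3, 5), (2, 7), (5, 2), (4, 4), (6, 6), (2, 9), (7, 1)]
front = pareto_front(candidates)
print(front)

# With g increasing in each coordinate (here g(u, v) = u * v on positive values),
# the global minimizer is guaranteed to lie on the front, so only the front
# needs to be searched.
print(min(u * v for u, v in front))
```

Dominated points like (6, 6) can be discarded without evaluating g, which is what makes an approximate front a sufficient search space.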
Stochastic Combinatorial Optimization with Risk
Abstract
We consider general combinatorial optimization problems that can be formulated as minimizing the weight of a feasible solution w^T x over an arbitrary feasible set. For these problems we describe a broad class of corresponding stochastic problems where the weight vector W has independent random components, unknown at the time of solution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic weight has a small tail or a small linear combination of mean and standard deviation. Our models can be equivalently reformulated as deterministic nonconvex programs for which no efficient algorithms are known. In this paper, we make progress on these hard problems. Our results are several efficient general-purpose approximation schemes. They use as a black box (exact or approximate) the solution to the underlying deterministic combinatorial problem and thus immediately apply to arbitrary combinatorial problems. For example, from an available δ-approximation algorithm for the deterministic problem, we construct a δ(1 + ε)-approximation algorithm that invokes the deterministic algorithm only a number of times logarithmic in the input (and polynomial in 1/ε), for any desired accuracy level ε > 0. The algorithms are based on a geometric analysis of the curvature and approximability of the nonlinear level sets of the objective functions. Key words: approximation algorithms, combinatorial optimization, stochastic optimization, risk, nonconvex optimization
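The black-box reduction pattern described above can be sketched on a toy instance (a rough illustration under invented data, not the paper's actual scheme or its guarantees): for each multiplier λ on a geometric grid, the surrogate objective μᵀx + λ·(var)ᵀx is linear in x, so the deterministic oracle can minimize it directly; keeping the best surrogate solution under the true mean-plus-stddev objective needs only logarithmically many oracle calls across the grid.

```python
import itertools
import math

# Invented toy data: choose exactly k of n items; mu/var are per-item cost moments.
mu  = [4.0, 3.0, 5.0, 2.0, 6.0]
var = [0.1, 4.0, 0.2, 9.0, 0.3]
k, c = 2, 2.0
F = [x for x in itertools.product([0, 1], repeat=len(mu)) if sum(x) == k]

def objective(x):
    """True nonconvex risk-averse objective: mean + c * stddev."""
    mean = sum(m * xi for m, xi in zip(mu, x))
    variance = sum(v * xi for v, xi in zip(var, x))
    return mean + c * math.sqrt(variance)

def linear_oracle(w):
    """Stand-in for the black-box deterministic solver: min of a LINEAR cost over F."""
    return min(F, key=lambda x: sum(wi * xi for wi, xi in zip(w, x)))

# Geometric grid of multipliers: each surrogate mu^T x + lam * var^T x is linear,
# so the oracle handles it; the grid has only O(log(lam_max/lam_min)) points.
best, lam = None, 1e-3
while lam <= 1e3:
    x = linear_oracle([m + lam * v for m, v in zip(mu, var)])
    if best is None or objective(x) < objective(best):
        best = x
    lam *= 2
print(best, objective(best))
```

Here the enumeration inside `linear_oracle` is just a placeholder for whatever exact or δ-approximate deterministic algorithm is available; the stochastic layer only ever talks to it through linear weight vectors.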