Results 1 - 3 of 3
Smoothed analysis: an attempt to explain the behavior of algorithms in practice
Commun. ACM, 2009
Abstract

Cited by 12 (0 self)
Many algorithms and heuristics work well on real data, despite having poor complexity under the standard worst-case measure. Smoothed analysis [36] is a step towards a theory that explains the behavior of algorithms in practice. It is based on the assumption that inputs to algorithms are subject to random perturbation and modification during their formation. A concrete example of such a smoothed analysis is a proof that the simplex algorithm for linear programming usually runs in polynomial time when its input is subject to modeling or measurement noise.

1. MODELING REAL DATA

"My experiences also strongly confirmed my previous opinion that the best theory is inspired by practice and the best practice is inspired by theory." [Donald E. Knuth: "Theory and Practice", Theoretical Computer Science, 90 (1), 1–15, 1991.]

Algorithms are high-level descriptions of how computational tasks are performed. Engineers and experimentalists design and implement algorithms, and generally consider them a success if they work in practice. However, an algorithm that works well in one practical domain might perform poorly in another. Theorists also design and analyze algorithms, with the goal of providing provable guarantees about their performance. The traditional goal of theoretical computer science is to prove that an algorithm performs well ...
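To make the definition concrete: the smoothed cost of an algorithm is the maximum over inputs of the *expected* cost after small random perturbation. The sketch below uses a purely hypothetical cost model (an algorithm that is quadratic only on degenerate inputs with exact ties), not the simplex result from the abstract; it only illustrates how perturbation can wash out a measure-zero worst case.

```python
import math
import random

def toy_cost(xs):
    """Hypothetical cost model: an algorithm that degrades to quadratic
    time only on degenerate inputs (exact ties), and is n log n otherwise."""
    n = len(xs)
    has_ties = len(set(xs)) < n
    return n * n if has_ties else int(n * math.log2(n))

def smoothed_cost(xs, sigma=0.01, trials=50, seed=0):
    """Empirical smoothed cost: average cost over Gaussian perturbations
    of magnitude sigma applied to the input, per the smoothed-analysis setup."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perturbed = [x + rng.gauss(0.0, sigma) for x in xs]
        total += toy_cost(perturbed)
    return total / trials

worst = [1.0] * 64            # fully degenerate worst-case input (all ties)
print(toy_cost(worst))        # worst-case cost: 64 * 64 = 4096
print(smoothed_cost(worst))   # perturbation breaks all ties: 64 * log2(64) = 384.0
```

The worst case survives only on a measure-zero set of inputs, so any continuous noise makes the expected cost low; this is the same intuition behind the polynomial smoothed complexity of the simplex algorithm mentioned above.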
Approximation Algorithms for Offline Risk-averse Combinatorial Optimization
2010
Abstract
We consider generic optimization problems that can be formulated as minimizing the cost of a feasible solution w^T x over a combinatorial feasible set F ⊂ {0, 1}^n. For these problems we describe a framework of risk-averse stochastic problems where the cost vector W has independent random components, unknown at the time of solution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic cost has a small tail or a small convex combination of mean and standard deviation. Our models can be equivalently reformulated as nonconvex programs for which no efficient algorithms are known. In this paper, we make progress on these hard problems. Our results are several efficient general-purpose approximation schemes. They use as a black box (exact or approximate) the solution to the underlying deterministic problem and thus immediately apply to arbitrary combinatorial problems. For example, from an available δ-approximation algorithm for the linear problem, we construct a δ(1 + ε)-approximation algorithm for the stochastic problem, which invokes the linear algorithm only a number of times logarithmic in the problem input (and polynomial in 1/ε), for any desired accuracy level ε > 0. The algorithms are based on a geometric analysis of the curvature and approximability of the nonlinear level sets of the objective functions.
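The mean-plus-standard-deviation objective is easy to state concretely: because the components of W are independent, the variance of w^T x is additive over the chosen components, while the standard deviation (its square root) makes the objective nonconvex in x. The sketch below evaluates this objective by brute force on a tiny made-up instance (the means, variances, and feasible set are illustrative, not from the paper):

```python
import math

# Hypothetical per-component means and variances of independent costs W_i.
mu  = [3.0, 1.0, 2.0, 4.0]
var = [0.1, 9.0, 0.5, 0.2]

# A small explicit feasible set F ⊂ {0,1}^n: three candidate solutions.
F = [
    (1, 0, 1, 0),
    (0, 1, 1, 0),
    (1, 0, 0, 1),
]

def risk_objective(x, c=1.0):
    """Mean plus c standard deviations of the stochastic cost w^T x.
    Independence makes the variance additive over chosen components."""
    mean = sum(m * xi for m, xi in zip(mu, x))
    variance = sum(v * xi for v, xi in zip(var, x))
    return mean + c * math.sqrt(variance)

# Risk-neutral choice: minimize expected cost only.
risk_neutral = min(F, key=lambda x: sum(m * xi for m, xi in zip(mu, x)))

# Risk-averse choice: minimize mean + c * stddev.
risk_averse = min(F, key=risk_objective)
```

On this instance the risk-neutral minimizer is (0, 1, 1, 0), which has the lowest mean but a large variance, while the risk-averse objective instead selects (1, 0, 1, 0), a slightly costlier but far more predictable solution; that trade-off is exactly the tail-control motivation in the abstract.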
Stochastic Combinatorial Optimization with Risk
Abstract
We consider general combinatorial optimization problems that can be formulated as minimizing the weight of a feasible solution w^T x over an arbitrary feasible set. For these problems we describe a broad class of corresponding stochastic problems where the weight vector W has independent random components, unknown at the time of solution. A natural and important objective that incorporates risk in this stochastic setting is to look for a feasible solution whose stochastic weight has a small tail or a small linear combination of mean and standard deviation. Our models can be equivalently reformulated as deterministic nonconvex programs for which no efficient algorithms are known. In this paper, we make progress on these hard problems. Our results are several efficient general-purpose approximation schemes. They use as a black box (exact or approximate) the solution to the underlying deterministic combinatorial problem and thus immediately apply to arbitrary combinatorial problems. For example, from an available δ-approximation algorithm for the deterministic problem, we construct a δ(1 + ε)-approximation algorithm that invokes the deterministic algorithm only a number of times logarithmic in the input and polynomial in 1/ε, for any desired accuracy level ε > 0. The algorithms are based on a geometric analysis of the curvature and approximability of the nonlinear level sets of the objective functions.

Key words: approximation algorithms, combinatorial optimization, stochastic optimization, risk, nonconvex optimization
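One way to realize the black-box idea in the abstract is via scalarization: since the variance of independent components is additive, any weighted combination mean + λ·variance is *linear* in x, so the deterministic oracle can minimize it directly; searching λ over a geometric grid and keeping the best solution under the true nonconvex objective needs only logarithmically many oracle calls. The sketch below is a simplified illustration of that reduction on a hypothetical instance — the grid, constants, and brute-force oracle are assumptions, not the paper's exact scheme or guarantee:

```python
import math

# Hypothetical instance: per-component means and variances, small feasible set.
mu  = [3.0, 1.0, 2.0, 4.0]
var = [0.1, 9.0, 0.5, 0.2]
F = [(1, 0, 1, 0), (0, 1, 1, 0), (1, 0, 0, 1)]

def linear_oracle(weights):
    """Stand-in for the deterministic black box: returns the feasible
    solution minimizing a *linear* objective weights^T x. Brute force here;
    in the framework it is any exact or δ-approximate algorithm."""
    return min(F, key=lambda x: sum(w * xi for w, xi in zip(weights, x)))

def true_objective(x, c=1.0):
    """The nonconvex target: mean + c * stddev of the stochastic weight."""
    mean = sum(m * xi for m, xi in zip(mu, x))
    variance = sum(v * xi for v, xi in zip(var, x))
    return mean + c * math.sqrt(variance)

def scalarized_search(lams, c=1.0):
    """For each lam, call the oracle on the linear weights mu_i + lam * var_i,
    then keep the candidate that is best under the true nonconvex objective."""
    candidates = [
        linear_oracle([m + lam * v for m, v in zip(mu, var)]) for lam in lams
    ]
    return min(candidates, key=lambda x: true_objective(x, c))

lams = [2.0 ** k for k in range(-4, 5)]  # geometric grid: few oracle calls
best = scalarized_search(lams)
```

Small λ values reproduce the risk-neutral solution and large λ values penalize variance heavily, so the grid sweeps out candidates along the mean–variance trade-off; the final selection under the true objective picks the risk-averse optimum among them.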