Results 1–10 of 18
Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit
Submitted, 2007
Cited by 102 (10 self)
This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements – L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has the advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.
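As a rough illustration of the greedy skeleton that OMP and ROMP share, here is a minimal plain-OMP sketch in Python/NumPy. The matrix, sizes, and signal below are illustrative, not from the paper; ROMP itself selects a regularized *set* of comparable-magnitude coordinates per iteration rather than the single best one.

```python
import numpy as np

def omp(Phi, y, sparsity):
    """Plain Orthogonal Matching Pursuit: greedily add the column of Phi
    most correlated with the residual, then re-fit by least squares.
    (ROMP adds a regularized set of coordinates per iteration instead;
    the outer loop is the same.)"""
    support, residual = [], y.copy()
    coef = np.zeros(0)
    for _ in range(sparsity):
        correlations = np.abs(Phi.T @ residual)
        correlations[support] = 0.0          # never re-pick a chosen column
        support.append(int(np.argmax(correlations)))
        # Least-squares re-fit on the current support.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

# Toy instance: recover a 2-sparse signal of length 16 from 12 Gaussian measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((12, 16)) / np.sqrt(12)
x_true = np.zeros(16)
x_true[[3, 11]] = [1.5, -2.0]
x_hat = omp(Phi, Phi @ x_true, sparsity=2)
print(np.linalg.norm(x_hat - x_true))
```

With well-conditioned random measurements and such low sparsity, the greedy loop typically recovers the signal exactly up to floating-point error.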
Are stable instances easy?
2008
Cited by 15 (1 self)
We introduce the notion of a stable instance for a discrete optimization problem, and argue that in many practical situations only sufficiently stable instances are of interest. The question then arises whether stable instances of NP-hard problems are easier to solve – in particular, whether there exist algorithms that correctly solve, in polynomial time, all sufficiently stable instances of some NP-hard problem. The paper focuses on the Max-Cut problem, for which we show that this is indeed the case.
Smoothed analysis: an attempt to explain the behavior of algorithms in practice
Commun. ACM, 2009
Cited by 12 (0 self)
Many algorithms and heuristics work well on real data, despite having poor complexity under the standard worst-case measure. Smoothed analysis [36] is a step towards a theory that explains the behavior of algorithms in practice. It is based on the assumption that inputs to algorithms are subject to random perturbation and modification in their formation. A concrete example of such a smoothed analysis is a proof that the simplex algorithm for linear programming usually runs in polynomial time when its input is subject to modeling or measurement noise.
1. MODELING REAL DATA
“My experiences also strongly confirmed my previous opinion that the best theory is inspired by practice and the best practice is inspired by theory.” [Donald E. Knuth: “Theory and Practice”, Theoretical Computer Science, 90 (1), 1–15, 1991.]
Algorithms are high-level descriptions of how computational tasks are performed. Engineers and experimentalists design and implement algorithms, and generally consider them a success if they work in practice. However, an algorithm that works well in one practical domain might perform poorly in another. Theorists also design and analyze algorithms, with the goal of providing provable guarantees about their performance. The traditional goal of theoretical computer science is to prove that an algorithm performs well ...
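The perturbation model can be made concrete with a toy experiment (illustrative only, not from the article): measure an algorithm's cost on a worst-case input and on σ-Gaussian perturbations of it. Textbook quicksort with a first-element pivot costs Θ(n²) comparisons on an all-equal input, but close to n log n once the input is randomly perturbed:

```python
import random

def quicksort_comparisons(a):
    """Comparison count of quicksort with a first-element pivot."""
    if len(a) <= 1:
        return 0
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return len(rest) + quicksort_comparisons(left) + quicksort_comparisons(right)

def smoothed_cost(cost, worst_input, sigma, trials=200, seed=1):
    """Average cost over Gaussian perturbations x_i + sigma * g_i,
    mirroring the smoothed-analysis input model."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += cost([x + sigma * rng.gauss(0, 1) for x in worst_input])
    return total / trials

n = 100
worst = [0.0] * n                              # all-equal input: every element goes right
worst_cost = quicksort_comparisons(worst)      # n(n-1)/2 comparisons
avg_cost = smoothed_cost(quicksort_comparisons, worst, sigma=0.1)
print(worst_cost, avg_cost)
```

The perturbed runs average roughly 2n ln n comparisons, since the noisy keys arrive in uniformly random relative order; the smoothed cost is an expectation over exactly this kind of perturbation.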
k-means has polynomial smoothed complexity
In Proc. of the 50th FOCS (Atlanta, USA), 2009
Cited by 12 (3 self)
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory, as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
Keywords: k-means; clustering; smoothed analysis
Noisy Signal Recovery via Iterative Reweighted L1-Minimization
Cited by 11 (0 self)
Compressed sensing has shown that it is possible to reconstruct sparse high-dimensional signals from few linear measurements. In many cases, the solution can be obtained by solving an ℓ1-minimization problem, and this method is accurate even in the presence of noise. Recently, a modified version of this method, reweighted ℓ1-minimization, has been suggested. Although no provable results have yet been attained, empirical studies suggest that the reweighted version outperforms the standard method. Here we analyze the reweighted ℓ1-minimization method in the noisy case, and provide provable results showing an improvement in the error bound over the standard bounds.
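The reweighting loop described above can be sketched as follows. This is an illustrative implementation, not the authors' code: it casts each weighted ℓ1 problem as a linear program via scipy.optimize.linprog, and the problem sizes, update rule w_i = 1/(|x_i| + ε), and ε are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1(A, y, w):
    """Solve min sum_i w_i |x_i| s.t. A x = y as a linear program
    over (x, t) with the standard slack constraints -t <= x <= t."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])           #  x - t <= 0,  -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])
    bounds = [(None, None)] * n + [(0, None)] * n  # x free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y, bounds=bounds)
    return res.x[:n]

def reweighted_l1(A, y, iters=4, eps=1e-3):
    """Iteratively reweighted l1-minimization: re-solve with weights
    w_i = 1 / (|x_i| + eps), which penalizes small coordinates more."""
    w = np.ones(A.shape[1])
    for _ in range(iters):
        x = weighted_l1(A, y, w)
        w = 1.0 / (np.abs(x) + eps)
    return x

# Toy noiseless instance: a 3-sparse signal of length 20, 10 measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((10, 20))
x_true = np.zeros(20)
x_true[[2, 7, 14]] = [1.0, -2.0, 0.5]
x_hat = reweighted_l1(A, A @ x_true)
print(np.linalg.norm(x_hat - x_true))
```

Each pass solves a convex program; the reweighting makes the objective a better surrogate for the sparsity count on the coordinates already driven near zero.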
Smoothed Analysis of the k-Means Method
2010
Cited by 2 (1 self)
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory, as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
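A minimal Lloyd-iteration counter makes the bounded quantity concrete. This is a sketch, not the paper's analysis; the data set, σ, and initialization below are illustrative.

```python
import numpy as np

def kmeans_iterations(points, centers, max_iter=1000):
    """Run the k-means (Lloyd) method and count iterations until the
    assignment of points to centers stops changing. Smoothed analysis
    bounds the expectation of this count for sigma-perturbed inputs."""
    centers = centers.copy()
    labels = None
    for it in range(1, max_iter + 1):
        # Assignment step: nearest center for every point.
        dists = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return it
        labels = new_labels
        # Update step: move each center to the mean of its cluster.
        for j in range(len(centers)):
            members = points[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return max_iter

# A sigma-perturbed input: three tight Gaussian blobs in the plane.
rng = np.random.default_rng(0)
sigma = 0.05
base = np.repeat([[0.0, 0.0], [5.0, 5.0], [0.0, 5.0]], 40, axis=0)
points = base + sigma * rng.standard_normal(base.shape)
init = points[rng.choice(len(points), 3, replace=False)]
iters = kmeans_iterations(points, init)
print(iters)
```

On perturbed inputs like this, the iteration count is tiny; the paper's result is that its expectation is polynomial in n and 1/σ even when the unperturbed data is adversarial.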
Stochastic mean payoff games: Smoothed analysis and approximation schemes
In Proc. of the 38th Int. Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, 2011
Cited by 2 (1 self)
We consider two-player zero-sum stochastic mean payoff games with perfect information, modeled by a digraph with black, white, and random vertices. These BWR-games are polynomially equivalent with the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games, stochastic parity games, and Markov decision processes. They can also be used to model parlor games such as Chess or Backgammon. It is a long-standing open question whether a polynomial algorithm exists that solves BWR-games. In fact, a pseudo-polynomial algorithm for these games with an arbitrary number of random nodes would already imply their polynomial solvability. Currently, only two classes are known to have such a pseudo-polynomial algorithm: BW-games (the case with no random nodes) and ergodic BWR-games (in which the game’s value does not depend on the initial position) with a constant number of random nodes. We show that the existence of a pseudo-polynomial algorithm for BWR-games with a constant number of random vertices implies smoothed polynomial complexity and the existence of absolute and relative polynomial-time approximation schemes. In particular, we obtain smoothed polynomial complexity and derive absolute and relative approximation schemes for BW-games and ergodic BWR-games (assuming a technical requirement about the probabilities at the random nodes).
Some problems in asymptotic convex geometry and random matrices motivated by numerical algorithms
Proceedings of the conference on Banach Spaces and their applications in analysis (in honor of N. Kalton’s 60th birthday)
Cited by 2 (2 self)
The simplex method in Linear Programming motivates several problems of asymptotic convex geometry. We discuss some conjectures and known results in two related directions – computing the size of projections of high-dimensional polytopes and estimating the norms of random matrices and their inverses.
1. Asymptotic convex geometry and Linear Programming
Linear Programming studies the problem of maximizing a linear functional subject to linear constraints. Given an objective vector z ∈ R^d and constraint vectors a_1, ..., a_n ∈ R^d, we consider the linear program

(LP)  maximize ⟨z, x⟩  subject to  ⟨a_i, x⟩ ≤ 1,  i = 1, ..., n.

This linear program has d unknowns, represented by x, and n constraints. Every linear program can be reduced to this form by a simple interpolation argument [36]. The feasible set of the linear program is the polytope P := {x ∈ R^d : ⟨a_i, x⟩ ≤ 1, i = 1, ..., n}. The solution of (LP) is then a vertex of P. We can thus look at (LP) from a geometric viewpoint: for a polytope P in R^d given by n faces, and for a vector z, find the vertex that maximizes the linear functional ⟨z, x⟩. The oldest and still the most popular method to solve this problem is the simplex method. It starts at some vertex of P and generates a walk on the edges of P toward the solution vertex. At each step, a pivot rule determines the choice of the next vertex, so there are many variants of the simplex method with different pivot rules. (We are not concerned here with how to find the initial vertex, which is a nontrivial problem in itself.)