Results 1–10 of 18
Uniform Uncertainty Principle and signal recovery via Regularized Orthogonal Matching Pursuit, submitted
, 2007
Cited by 113 (6 self)
This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements – ℓ1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of ℓ1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle.
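For readers unfamiliar with the baseline algorithm, a minimal numpy sketch of plain OMP (the unregularized method that ROMP builds on; the paper's regularized selection step is not shown) might look as follows. The function name `omp` is illustrative, not code from the paper:

```python
import numpy as np

def omp(A, y, k):
    """Plain Orthogonal Matching Pursuit: greedily pick the column of A
    most correlated with the residual, then re-fit by least squares on
    the selected support.  (Baseline OMP, not the ROMP of the paper.)"""
    m, n = A.shape
    residual = y.copy()
    support = []
    x = np.zeros(n)
    for _ in range(k):
        # index of the column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares fit restricted to the selected columns
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        x[support] = coef
        residual = y - A @ x
    return x
```

When the columns of A are orthonormal, this greedy loop recovers any k-sparse signal exactly in k iterations; the paper's regularized selection rule is what extends uniform guarantees to general matrices satisfying the Uniform Uncertainty Principle.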
Are stable instances easy?
, 2008
Cited by 16 (1 self)
We introduce the notion of a stable instance for a discrete optimization problem, and argue that in many practical situations only sufficiently stable instances are of interest. The question then arises whether stable instances of NP-hard problems are easier to solve: in particular, whether there exist algorithms that solve correctly and in polynomial time all sufficiently stable instances of some NP-hard problem. The paper focuses on the Max-Cut problem, for which we show that this is indeed the case.
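The stability condition can be made concrete with a brute-force check on toy instances. The sketch below uses one common formalization (a cut is γ-stable if it stays optimal when each edge weight is scaled by any factor in [1, γ]); the helper `is_gamma_stable` and the reduction to a per-rival-cut inequality are illustrative, not taken from the paper:

```python
from itertools import combinations

def cut_value(edges, side_a):
    """Total weight of edges crossing the cut (side_a vs. the rest)."""
    return sum(w for (u, v), w in edges.items()
               if (u in side_a) != (v in side_a))

def is_gamma_stable(nodes, edges, gamma):
    """Brute-force gamma-stability check for Max-Cut: the optimal cut C*
    must stay optimal when every edge weight is scaled by a factor in
    [1, gamma].  For a fixed rival cut C, the adversary's best move is
    to scale the edges cut by C but not by C* up to gamma, so it
    suffices to check  w(C* \\ C) >= gamma * w(C \\ C*)  for every C."""
    node_list = sorted(nodes)
    anchor = node_list[0]
    # enumerate cuts as subsets containing the anchor (skips complements)
    cuts = [frozenset({anchor}) | frozenset(s)
            for r in range(len(node_list))
            for s in combinations(node_list[1:], r)]
    cuts = [c for c in cuts if len(c) < len(node_list)]  # drop trivial cut
    best = max(cuts, key=lambda c: cut_value(edges, c))
    star = {e for e in edges if (e[0] in best) != (e[1] in best)}
    for c in cuts:
        if c == best:
            continue
        rival = {e for e in edges if (e[0] in c) != (e[1] in c)}
        only_star = sum(edges[e] for e in star - rival)
        only_rival = sum(edges[e] for e in rival - star)
        if only_star < gamma * only_rival:
            return False
    return True
```

On a near-bipartite toy graph with heavy cross edges and one light intra edge, the optimal cut remains optimal under fairly large perturbations, so the check returns True for moderate γ.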
Smoothed analysis: an attempt to explain the behavior of algorithms in practice
 COMMUN. ACM
, 2009
Cited by 13 (0 self)
Many algorithms and heuristics work well on real data, despite having poor complexity under the standard worst-case measure. Smoothed analysis [36] is a step towards a theory that explains the behavior of algorithms in practice. It is based on the assumption that inputs to algorithms are subject to random perturbation and modification in their formation. A concrete example of such a smoothed analysis is a proof that the simplex algorithm for linear programming usually runs in polynomial time when its input is subject to modeling or measurement noise.
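The perturbation idea is easy to demonstrate on a toy example (not the simplex result itself): first-element-pivot quicksort hits its quadratic worst case on sorted input, and adding small Gaussian noise to the keys typically destroys that structure. The helper names below are illustrative:

```python
import random

def quicksort_comparisons(xs):
    """First-element-pivot quicksort; returns (sorted_list, #pivot comparisons).
    On already-sorted input this pivot rule does n(n-1)/2 comparisons."""
    if len(xs) <= 1:
        return list(xs), 0
    pivot, rest = xs[0], xs[1:]
    left = [v for v in rest if v < pivot]
    right = [v for v in rest if v >= pivot]
    ls, lc = quicksort_comparisons(left)
    rs, rc = quicksort_comparisons(right)
    return ls + [pivot] + rs, lc + rc + len(rest)

def smoothed_trial(n, sigma, seed=0):
    """Compare the worst-case input (sorted keys) with the same keys
    under Gaussian perturbation of standard deviation sigma, in the
    spirit of the smoothed-analysis input model."""
    rng = random.Random(seed)
    worst = list(range(n))
    perturbed = [v + rng.gauss(0.0, sigma) for v in worst]
    _, worst_cmp = quicksort_comparisons(worst)
    _, smooth_cmp = quicksort_comparisons(perturbed)
    return worst_cmp, smooth_cmp
```

Smoothed analysis formalizes this: the measure is the maximum over inputs of the expected cost after perturbation, interpolating between worst-case and average-case analysis.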
k-means has polynomial smoothed complexity
 IN PROC. OF THE 50TH FOCS (ATLANTA, USA)
, 2009
Cited by 12 (3 self)
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory, as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.
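The experimental setup behind this model can be sketched in a few lines of numpy: perturb an arbitrary input with Gaussian noise of standard deviation σ, then count Lloyd iterations until the assignment stabilizes. The helpers below are an illustration, not the paper's code:

```python
import numpy as np

def lloyd_iterations(points, k, seed=0, max_iter=1000):
    """Run Lloyd's k-means method; return (centers, #iterations until the
    cluster assignment stops changing)."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), size=k, replace=False)]
    labels = None
    for it in range(1, max_iter + 1):
        # assign each point to its nearest center
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            return centers, it
        labels = new_labels
        # move each center to the mean of its cluster
        for j in range(k):
            mask = labels == j
            if mask.any():
                centers[j] = points[mask].mean(axis=0)
    return centers, max_iter

def smoothed_instance(points, sigma, seed=0):
    """Gaussian perturbation of an arbitrary input, as in smoothed analysis."""
    rng = np.random.default_rng(seed)
    return points + rng.normal(0.0, sigma, size=points.shape)
```

The paper's result bounds the expected iteration count of exactly this loop, on any input, by a polynomial in n and 1/σ.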
Noisy Signal Recovery via Iterative Reweighted ℓ1-Minimization
Cited by 12 (0 self)
Compressed sensing has shown that it is possible to reconstruct sparse high-dimensional signals from few linear measurements. In many cases, the solution can be obtained by solving an ℓ1-minimization problem, and this method is accurate even in the presence of noise. Recently, a modified version of this method, reweighted ℓ1-minimization, has been proposed. Although no provable results have yet been attained, empirical studies suggest that the reweighted version outperforms the standard method. Here we analyze the reweighted ℓ1-minimization method in the noisy case, and provide provable results showing an improvement in the error bound over the standard bounds.
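The reweighting loop itself is simple to sketch. The paper analyzes a constrained ℓ1 program; the stand-in below instead solves a weighted lasso by proximal gradient (ISTA) and updates weights as w_i = 1/(|x_i| + ε), the usual reweighting rule. All names and parameter values are illustrative:

```python
import numpy as np

def weighted_ista(A, y, weights, lam, steps=500):
    """Proximal-gradient (ISTA) solver for the weighted lasso
        min_x 0.5 * ||A x - y||^2 + lam * sum_i weights_i * |x_i|,
    a stand-in for the exact constrained l1 solver analyzed in the paper."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the smooth part
        z = x - step * g
        thresh = lam * step * weights      # per-coordinate soft threshold
        x = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)
    return x

def reweighted_l1(A, y, lam=0.1, eps=0.1, rounds=4):
    """Iterative reweighting: small coefficients receive large weights and
    are pushed harder toward zero on the next round."""
    w = np.ones(A.shape[1])
    x = weighted_ista(A, y, w, lam)
    for _ in range(rounds - 1):
        w = 1.0 / (np.abs(x) + eps)
        x = weighted_ista(A, y, w, lam)
    return x
```

Intuitively, the reweighting makes the penalty behave more like the ℓ0 count: large entries are penalized less and less, while near-zero entries are suppressed, which is the empirical improvement the paper makes rigorous in the noisy setting.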
Computational Complexity of Kernel-Based Density-Ratio Estimation: A Condition Number Analysis
 MACHINE LEARNING, VOL. 90, NO. 3, PP. 431–460
, 2013
Cited by 4 (3 self)
In this study, the computational properties of a kernel-based least-squares density-ratio estimator are investigated from the viewpoint of condition numbers. The condition number of the Hessian matrix of the loss function is closely related to the convergence rate of optimization and to numerical stability. We use smoothed analysis techniques and theoretically demonstrate that the kernel least-squares method has a smaller condition number than other M-estimators. This implies that the kernel least-squares method has desirable computational properties. In addition, an alternative formulation of the kernel least-squares estimator that possesses an even smaller condition number is presented. The validity of the theoretical analysis is verified through numerical experiments.
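The role of regularization in this kind of analysis can be illustrated directly: for a positive semidefinite Gram matrix K, the condition number of K + λI equals (μ_max + λ)/(μ_min + λ) and shrinks monotonically as λ grows. A small numpy sketch (illustrative, not the paper's estimator):

```python
import numpy as np

def rbf_gram(X, gamma=1.0):
    """RBF (Gaussian) kernel Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq)

def regularized_condition(K, lam):
    """Condition number of the ridge-regularized Hessian K + lam * I.
    For PSD K this is (mu_max + lam) / (mu_min + lam), so it shrinks
    as lam grows -- faster, more stable optimization."""
    n = K.shape[0]
    return np.linalg.cond(K + lam * np.eye(n))
```

A smaller condition number means gradient-type solvers converge faster and linear solves are numerically more stable, which is the computational advantage the abstract refers to.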
Stochastic mean payoff games: Smoothed analysis and approximation schemes
 In Proc. of the 38th Int. Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science
, 2011
Cited by 3 (1 self)
We consider two-player zero-sum stochastic mean payoff games with perfect information, modeled by a digraph with black, white, and random vertices. These BWR-games are polynomially equivalent to the classical Gillette games, which include many well-known subclasses, such as cyclic games, simple stochastic games, stochastic parity games, and Markov decision processes. They can also be used to model parlor games such as Chess or Backgammon. It is a long-standing open question whether a polynomial algorithm exists that solves BWR-games. In fact, a pseudo-polynomial algorithm for these games with an arbitrary number of random nodes would already imply their polynomial solvability. Currently, only two classes are known to have such a pseudo-polynomial algorithm: BW-games (the case with no random nodes) and ergodic BWR-games (in which the game's value does not depend on the initial position) with a constant number of random nodes. We show that the existence of a pseudo-polynomial algorithm for BWR-games with a constant number of random vertices implies smoothed polynomial complexity and the existence of absolute and relative polynomial-time approximation schemes. In particular, we obtain smoothed polynomial complexity and derive absolute and relative approximation schemes for BW-games and ergodic BWR-games (assuming a technical requirement about the probabilities at the random nodes).
Smoothed Analysis of the k-Means Method
, 2010
Cited by 2 (1 self)
The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In order to close the gap between practical performance and theoretical analysis, the k-means method has been studied in the model of smoothed analysis. But even the smoothed analyses so far are unsatisfactory, as the bounds are still super-polynomial in the number n of data points. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in n and 1/σ, where σ is the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set.