Robust Uncertainty Principles: Exact Signal Reconstruction From Highly Incomplete Frequency Information
, 2006
Abstract

Cited by 1416 (45 self)
This paper considers the model problem of reconstructing an object from incomplete frequency samples. Consider a discrete-time signal f ∈ C^N and a randomly chosen set of frequencies Ω. Is it possible to reconstruct f from the partial knowledge of its Fourier coefficients on the set Ω? A typical result of this paper is as follows. Suppose that f is a superposition of |T| spikes f(t) = Σ_{τ∈T} f(τ)δ(t − τ) obeying |T| ≤ C_M · (log N)^{−1} · |Ω| for some constant C_M > 0. We do not know the locations of the spikes nor their amplitudes. Then with probability at least 1 − O(N^{−M}), f can be reconstructed exactly as the solution to the ℓ1 minimization problem min_g Σ_{t=0}^{N−1} |g(t)| s.t. ĝ(ω) = f̂(ω) for all ω ∈ Ω.
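The ℓ1 recovery described in this abstract can be tried numerically. The sketch below is an illustrative iterative soft-thresholding (ISTA) loop for the closely related lasso formulation, not the exact equality-constrained program in the paper; all names, sizes, and parameters (N = 64, 32 sampled frequencies, λ = 1e-3) are our own choices:

```python
import numpy as np

def recover_l1(y, omega, N, lam=1e-3, iters=3000):
    """Approximate ell-1 recovery from partial Fourier data by iterative
    soft-thresholding (ISTA) on min 0.5*||A x - y||^2 + lam*||x||_1,
    where A is the orthonormal DFT restricted to frequencies `omega`."""
    A = lambda x: np.fft.fft(x, norm="ortho")[omega]
    def At(r):                       # adjoint: zero-fill, then inverse DFT
        z = np.zeros(N, dtype=complex)
        z[omega] = r
        return np.fft.ifft(z, norm="ortho")
    x = np.zeros(N, dtype=complex)
    for _ in range(iters):           # step size 1 is safe since ||A|| <= 1
        g = x - At(A(x) - y)
        mag = np.abs(g)
        x = g * np.maximum(1 - lam / np.maximum(mag, 1e-12), 0)
    return x

rng = np.random.default_rng(0)
N, spikes = 64, {5: 1.0, 20: -0.8, 40: 0.6}     # 3 unknown spikes
f = np.zeros(N, dtype=complex)
for t, a in spikes.items():
    f[t] = a
omega = rng.choice(N, size=32, replace=False)   # observe half the spectrum
y = np.fft.fft(f, norm="ortho")[omega]
x_hat = recover_l1(y, omega, N)
support = set(np.argsort(np.abs(x_hat))[-3:])   # largest recovered entries
```

With 3 spikes and 32 of 64 Fourier samples this is well inside the recovery regime the paper describes, so the largest entries of `x_hat` land on the true spike locations.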
Quantitative Robust Uncertainty Principles and Optimally Sparse Decompositions
, 2004
Abstract

Cited by 127 (13 self)
In this paper, we develop a robust uncertainty principle for finite signals in C^N which states that for nearly all choices T, Ω ⊂ {0, ..., N − 1} such that |T| + |Ω| ≍ (log N)^{−1/2} · N, there is no signal f supported on T whose discrete Fourier transform f̂ is supported on Ω. In fact, we can make the above uncertainty principle quantitative in the sense that if f is supported on T, then only a small percentage of the energy (less than half, say) of f̂ is concentrated on Ω. As an application of this quantitative robust uncertainty principle (QRUP), we consider the problem of decomposing a signal into a sparse superposition of spikes and complex sinusoids f(s) = Σ_{t∈T} α1(t)δ(s − t) + Σ_{ω∈Ω} α2(ω)e^{i2πωs/N}/√N. We show that if a generic signal f has a decomposition (α1, α2) using spike and frequency locations in T and Ω respectively, and obeying |T| + |Ω| ≤ Const · (log N)^{−1/2} · N, then (α1, α2) is the unique sparsest possible decomposition (all other decompositions have more nonzero terms). In addition, if |T| + |Ω| ≤ Const · (log N)^{−1} · N, then the sparsest (α1, α2) can be found by solving a convex optimization problem. Underlying our results is a new probabilistic approach which insists on finding the correct uncertainty relation, or the optimally sparse solution, for nearly all subsets but not necessarily all of them, and allows us to considerably sharpen previously known results [9, 10]. In fact, we show that the fraction of sets (T, Ω) for which the above properties do not hold can be upper bounded by quantities like N^{−α} for large values of α. The QRUP (and the application to finding sparse representations) can be extended to general pairs of orthogonal bases Φ1, Φ2 of C^N. For nearly all choices Γ1, Γ2 ⊂ {0, ..., N − 1} obeying |Γ1| + |Γ2| ≍ µ(Φ1, Φ2)^{−2} · (log N)^{−m}, where m ≤ 6 and µ(Φ1, Φ2) is the mutual coherence between Φ1 and Φ2, there is no signal f such that Φ1f is supported on Γ1 and Φ2f is supported on Γ2.
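The mutual coherence µ(Φ1, Φ2) = max |⟨φ_i, ψ_j⟩| appearing at the end of this abstract is easy to compute directly. For the spike/Fourier pair treated in the paper it equals 1/√N, the smallest value possible for two orthonormal bases. A small numerical check (variable names are ours):

```python
import numpy as np

N = 16
Phi1 = np.eye(N)                          # spike (identity) basis
Phi2 = np.fft.fft(np.eye(N), norm="ortho")  # orthonormal DFT basis
# mutual coherence: largest modulus of an inner product between
# a column of one basis and a column of the other
mu = np.max(np.abs(Phi1.conj().T @ Phi2))
# every spike/sinusoid inner product has modulus exactly 1/sqrt(N)
```

For N = 16 this gives µ = 1/√16 = 0.25, so the sparsity bounds above are as generous as they can be for any basis pair.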
Computation in Noisy Radio Networks
 in Proc. 9th Ann. ACM-SIAM Symp. on Discrete Algorithms
Abstract

Cited by 30 (0 self)
In this paper we examine noisy radio (broadcast) networks in which every bit transmitted has a certain probability to be flipped. Each processor has some initial input bit, and the goal is to compute a function of the initial inputs. In this model we show a protocol to compute any threshold function using only a linear number of transmissions.

1 Introduction
The influence of noise (or faults) on the complexity of computation has been studied in many contexts, with particular interest in random noise. In a typical such scenario, it is assumed that the outcome of each operation is noisy with some fixed probability p and all the faults are independent. Usually, if t is the number of operations performed by the computation, then by repeating each operation O(log t) times and taking the majority of the results, one can ensure a constant probability of error at the cost of O(t log t) operations. It is desirable, however, to obtain a cost of O(t) (i.e., increase only by a constant fa...
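The O(log t)-repetition argument in this introduction can be checked exactly: if each copy of a bit is flipped independently with probability p < 1/2, the majority of r copies is wrong with probability Σ_{i > r/2} C(r, i) p^i (1 − p)^{r−i}, which decays exponentially in r. A stdlib-only illustration (our numbers):

```python
from math import comb

def majority_error(p, r):
    """Probability that the majority vote over r independent copies of a
    bit, each flipped with probability p, is wrong (r odd)."""
    return sum(comb(r, i) * p**i * (1 - p)**(r - i)
               for i in range(r // 2 + 1, r + 1))

p = 0.2
errs = [majority_error(p, r) for r in (1, 3, 5, 11)]
# monotonically decreasing; already below 2% at r = 11
```

The exponential decay is why O(log t) repetitions suffice for a constant overall error across t operations, and why beating the resulting O(t log t) total cost requires the more careful protocols the paper develops.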
FINDING STRUCTURE WITH RANDOMNESS: STOCHASTIC ALGORITHMS FOR CONSTRUCTING APPROXIMATE MATRIX DECOMPOSITIONS
, 2009
Abstract

Cited by 29 (2 self)
Low-rank matrix approximations, such as the truncated singular value decomposition and the rank-revealing QR decomposition, play a central role in data analysis and scientific computing. This work surveys recent research which demonstrates that randomization offers a powerful tool for performing low-rank matrix approximation. These techniques exploit modern computational architectures more fully than classical methods and open the possibility of dealing with truly massive data sets. In particular, these techniques offer a route toward principal component analysis (PCA) for petascale data. This paper presents a modular framework for constructing randomized algorithms that compute partial matrix decompositions. These methods use random sampling to identify a subspace that captures most of the action of a matrix. The input matrix is then compressed—either explicitly or implicitly—to this subspace, and the reduced matrix is manipulated deterministically to obtain the desired low-rank factorization. In many cases, this approach beats its classical competitors in terms of accuracy, speed, and robustness. These claims are supported by extensive numerical experiments and a detailed error analysis. The specific benefits of randomized techniques depend on the computational environment. Consider ...
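The sampling framework this abstract describes can be sketched in a few lines: multiply by a random Gaussian test matrix to sample the range, orthonormalize, and finish with a small deterministic SVD of the compressed matrix. A minimal sketch with our own naming (rank and oversampling values are illustrative):

```python
import numpy as np

def randomized_svd(A, k, p=5, seed=0):
    """Approximate rank-k SVD via the randomized range finder:
    Y = A @ Omega samples the action of A, Q = orth(Y) is an approximate
    range basis, and a small SVD of B = Q.T @ A is lifted back via Q."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)                    # orthonormal range basis
    B = Q.T @ A                                       # compressed matrix
    U_B, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ U_B[:, :k], s[:k], Vt[:k]

# on an exactly rank-5 matrix the sampled subspace captures the full
# range, so the reconstruction is exact up to floating-point error
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5)) @ rng.standard_normal((5, 100))
U, s, Vt = randomized_svd(A, k=5)
err = np.linalg.norm(A - (U * s) @ Vt)
```

For matrices with a decaying (rather than exactly truncated) spectrum, the paper's error analysis bounds how much the small oversampling parameter p buys.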
On the approximability of Dodgson and Young elections
, 2008
Abstract

Cited by 23 (10 self)
The voting rules proposed by Dodgson and Young are both designed to find the alternative closest to being a Condorcet winner, according to two different notions of proximity; the score of a given alternative is known to be hard to compute under either rule. In this paper, we put forward two algorithms for approximating the Dodgson score: an LP-based randomized rounding algorithm and a deterministic greedy algorithm, both of which yield an O(log m) approximation ratio, where m is the number of alternatives; we observe that this result is asymptotically optimal, and further prove that our greedy algorithm is optimal up to a factor of 2, unless problems in NP have quasi-polynomial time algorithms. Although the greedy algorithm is computationally superior, we argue that ...
Probabilistic bounds on the coefficients of polynomials with only real zeros
 J. Combin. Theory Ser. A
, 1997
PAC learning with nasty noise
 Theoretical Computer Science
, 1999
Abstract

Cited by 16 (0 self)
We introduce a new model for learning in the presence of noise, which we call the Nasty Noise model. This model generalizes previously considered models of learning with noise. The learning process in this model, which is a variant of the PAC model, proceeds as follows. Suppose that the learning algorithm during its execution asks for m examples. The examples that the algorithm gets are generated by a nasty adversary that works according to the following steps. First, the adversary chooses m examples independently according to the fixed (but unknown to the learning algorithm) distribution D, as in the PAC model. Then the powerful adversary, upon seeing the specific m examples that were chosen (and using his knowledge of the target function, the distribution D and the learning algorithm), is allowed to remove a fraction of the examples of its choice, and to replace them by the same number of arbitrary examples of its choice; the m modified examples are then given to the learning algorithm. The only restriction on the adversary is that the number of examples it is allowed to modify should be distributed according to a binomial distribution with parameters η (the noise rate) and m. On the negative side, we prove that no algorithm can achieve accuracy of ε < 2η in learning ...
The Random Paving Property for Uniformly Bounded Matrices
 Studia Mathematica
Abstract

Cited by 12 (2 self)
Abstract. This note presents a new proof of an important result due to Bourgain and Tzafriri that provides a partial solution to the Kadison–Singer problem. The result shows that every unit-norm matrix whose entries are relatively small in comparison with its dimension can be paved by a partition of constant size. That is, the coordinates can be partitioned into a constant number of blocks so that the restriction of the matrix to each block of coordinates has norm less than one half. The original proof of Bourgain and Tzafriri involves a long, delicate calculation. The new proof relies on the systematic use of symmetrization and Khintchine inequalities to estimate the norm of some random matrices.
Sample-efficient Strategies for Learning in the Presence of Noise
, 1999
Abstract

Cited by 11 (2 self)
In this paper we prove various results about PAC learning in the presence of malicious noise. Our main interest is the sample size behaviour of learning algorithms. We prove the first nontrivial sample complexity lower bound in this model by showing that on the order of ε/Δ² + d/Δ (up to logarithmic factors) examples are necessary for PAC learning any target class of {0, 1}-valued functions of VC dimension d, where ε is the desired accuracy, η is the malicious noise rate, and Δ = ε/(1 + ε) − η (it is well known that any nontrivial target class cannot be PAC learned with accuracy ε and malicious noise rate η ≥ ε/(1 + ε), irrespective of sample complexity). We also show that this result cannot be significantly improved in general by presenting efficient learning algorithms for the class of all subsets of d elements and the class of unions of at most d intervals on the real line. This is especially interesting as we can also show that the popular minimum disagreement strategy needs samples of size dε/Δ², hence is not optimal with respect to sample size. We then discuss the use of randomized hypotheses. For these, the bound ε/(1 + ε) on the noise rate is no longer true and is replaced by 2ε/(1 + 2ε). In fact, we present a generic algorithm using randomized hypotheses which can tolerate noise rates slightly larger than ε/(1 + ε) while using samples of size d/ε, as in the noise-free case. Again one observes a quadratic power law (in this case dε/Δ², with Δ = 2ε/(1 + 2ε) − η) as Δ goes to zero. We show upper and lower bounds of this order.
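As a worked instance of this lower bound (our numbers, purely illustrative): with accuracy ε = 0.1 the critical noise rate is ε/(1 + ε) ≈ 0.0909, so taking η = 0.08 gives Δ ≈ 0.0109, and for VC dimension d = 10 the ε/Δ² + d/Δ bound already demands a couple of thousand examples:

```python
eps, eta, d = 0.1, 0.08, 10          # accuracy, malicious noise rate, VC dim
delta = eps / (1 + eps) - eta        # gap to the critical rate eps/(1+eps)
bound = eps / delta**2 + d / delta   # sample-size lower bound (up to logs)
# delta ~ 0.0109, bound ~ 1757: the quadratic 1/delta^2 term dominates
# as eta approaches eps/(1+eps)
```

This makes the quadratic power law concrete: halving Δ roughly quadruples the first term of the required sample size.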
Reliable Broadcast in Wireless Networks with Probabilistic Failures
 In Proceedings of IEEE INFOCOM
, 2007