Results 1–10 of 18
COMBINING GEOMETRY AND COMBINATORICS: A UNIFIED APPROACH TO SPARSE SIGNAL RECOVERY
"... Abstract. There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constru ..."
Abstract

Cited by 73 (12 self)
 Add to MetaCart
There are two main algorithmic approaches to sparse signal recovery: geometric and combinatorial. The geometric approach starts with a geometric constraint on the measurement matrix Φ and then uses linear programming to decode information about x from Φx. The combinatorial approach constructs Φ and a combinatorial decoding algorithm to match. We present a unified approach to these two classes of sparse signal recovery algorithms. The unifying elements are the adjacency matrices of high-quality unbalanced expanders. We generalize the notion of the Restricted Isometry Property (RIP), crucial to compressed sensing results for signal recovery, from the Euclidean norm to the ℓp norm for p ≈ 1, and then show that unbalanced expanders are essentially equivalent to RIPp matrices. From known deterministic constructions for such matrices, we obtain new deterministic measurement matrix constructions and algorithms for signal recovery which, compared to previous deterministic algorithms, are superior in either the number of measurements or in noise tolerance.
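The upper-bound side of the RIP-1 property mentioned above is easy to see numerically. Below is a small self-contained sketch (my own illustration, not the authors' code; all dimensions are made up): the adjacency matrix A of a d-left-regular bipartite graph has exactly d ones per column, so ‖Ax‖1 ≤ d·‖x‖1 holds for every x by the triangle inequality; good expansion is what keeps the ratio close to 1 for sparse x.

```python
# Toy check of the RIP-1 upper bound for a d-left-regular binary matrix.
# (Hypothetical parameters; not a construction from the paper.)
import numpy as np

rng = np.random.default_rng(0)
n, m, d, k = 200, 60, 6, 5          # left/right sizes, left degree, sparsity

# Adjacency matrix: each left vertex (column) gets d distinct right neighbors.
A = np.zeros((m, n))
for j in range(n):
    A[rng.choice(m, size=d, replace=False), j] = 1.0

# A random k-sparse signal.
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

ratio = np.abs(A @ x).sum() / (d * np.abs(x).sum())
assert ratio <= 1 + 1e-12           # triangle inequality: always holds
print(f"||Ax||_1 / (d * ||x||_1) = {ratio:.3f}")
```

The lower bound (the ratio staying bounded away from 0 for all sparse x at once) is exactly what the expansion property buys, and is not certified by this one random trial.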
Sparse recovery using sparse random matrices
, 2008
"... We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a highdimensional vector x from its lowerdimensional sketch Ax. A popular way of performing this recovery is by finding x # such that Ax = Ax # , and �x # �1 is minimal. It is known that this approach ..."
Abstract

Cited by 41 (4 self)
 Add to MetaCart
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding a vector x* such that Ax = Ax* and ‖x*‖1 is minimal. It is known that this approach “works” if A is a random dense matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.
Deterministic Sampling and Range Counting in Geometric Data Streams
 In Proc. 20th ACM Sympos. Comput. Geom
, 2004
"... We present memoryefficient deterministic algorithms for constructing #nets and #approximations of streams of geometric data. Unlike probabilistic approaches, these deterministic samples provide guaranteed bounds on their approximation factors. We show how our deterministic samples can be used t ..."
Abstract

Cited by 26 (0 self)
 Add to MetaCart
We present memory-efficient deterministic algorithms for constructing ε-nets and ε-approximations of streams of geometric data. Unlike probabilistic approaches, these deterministic samples provide guaranteed bounds on their approximation factors. We show how our deterministic samples can be used to answer approximate online iceberg geometric queries on data streams. We use these techniques to approximate several robust statistics of geometric data streams, including Tukey depth, simplicial depth, regression depth, the Theil-Sen estimator, and the least median of squares. Our algorithms use only a polylogarithmic amount of memory, provided the desired approximation factors are inverse-polylogarithmic. We also include a lower bound for non-iceberg geometric queries.
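The core deterministic step behind such ε-approximations, in its simplest one-dimensional form, is halving: sort the points and keep every other one, doubling the weight of each survivor. This is a toy illustration of that step only (mine, not the paper's algorithm, which handles geometric range spaces in a streaming setting):

```python
# Deterministic halving for 1-D interval range counting: keeping every
# other point in sorted order bounds the error of any interval query.
def halve(points):
    """Keep every other point in sorted order; survivors get weight 2."""
    return sorted(points)[::2]

def range_count(points, lo, hi):
    return sum(lo <= p <= hi for p in points)

pts = list(range(100))
sample = halve(pts)                      # 50 points, each standing in for 2
est = 2 * range_count(sample, 10, 39)    # weighted estimate
true = range_count(pts, 10, 39)
print(est, true)                         # estimates differ by at most 2
assert abs(est - true) <= 2
```

Repeating this halving in a merge-reduce tree over the stream is what keeps the memory polylogarithmic while the per-query error stays controlled.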
Sparse Recovery Using Sparse Matrices
"... Abstract—We survey algorithms for sparse recovery problems that are based on sparse random matrices. Such matrices has several attractive properties: they support algorithms with low computational complexity, and make it easy to perform incremental updates to signals. We discuss applications to seve ..."
Abstract

Cited by 26 (7 self)
 Add to MetaCart
We survey algorithms for sparse recovery problems that are based on sparse random matrices. Such matrices have several attractive properties: they support algorithms with low computational complexity, and make it easy to perform incremental updates to signals. We discuss applications to several areas, including compressive sensing, data stream computing, and group testing.
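The incremental-update property is easy to demonstrate: if A is binary with d ones per column and is stored column-wise, changing one coordinate of the signal touches only d entries of the sketch, independent of the sketch length m and the signal length n. A minimal sketch of this idea (my own illustration; all names are hypothetical):

```python
# Column-wise storage of a sparse binary matrix makes sketch updates O(d).
import random
random.seed(1)

n, m, d = 1000, 50, 4
cols = [random.sample(range(m), d) for _ in range(n)]  # d row indices per column

def sketch(x):
    """Full recomputation of y = Ax from the column lists."""
    y = [0.0] * m
    for j, xj in enumerate(x):
        if xj:
            for i in cols[j]:
                y[i] += xj
    return y

def update(y, j, delta):
    """Apply x[j] += delta to the sketch: O(d) work, not O(m) or O(n)."""
    for i in cols[j]:
        y[i] += delta

x = [0.0] * n
y = sketch(x)
update(y, 7, 3.5);    x[7] += 3.5      # dyadic values, so sums are exact
update(y, 123, -1.25); x[123] += -1.25
assert y == sketch(x)                  # incremental path matches recomputation
```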
On the Utility of Privacy-Preserving Histograms
 In 21st Conference on Uncertainty in Artificial Intelligence (UAI)
, 2005
"... In a census, individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Note that this framework is inheren ..."
Abstract

Cited by 21 (5 self)
 Add to MetaCart
In a census, individual respondents give private information to a trusted party (the census bureau), who publishes a sanitized version of the data. There are two fundamentally conflicting requirements: privacy for the respondents and utility of the sanitized data. Note that this framework is inherently non-interactive. Recently, Chawla et al. (TCC 2005) initiated a theoretical study of the census problem and presented an intuitively appealing definition of privacy breach, called isolation, together with a formal specification of what is required from a data sanitization algorithm: access to the sanitized data should not increase an adversary’s ability to isolate any individual. They also showed that if the data are drawn uniformly from a high-dimensional hypercube then recursive histogram sanitization can preserve privacy with high probability. We extend these results in several ways. First, we develop a method for computing a privacy-preserving histogram sanitization of “round” distributions, such as the uniform distribution over a high-dimensional ball or sphere. This problem is quite challenging because, unlike for the hypercube, the natural histogram over such a distribution may have long and thin cells that hurt the proof of privacy. We then develop techniques for randomizing the histogram constructions both for the hypercube and the hypersphere. These permit us to apply known results for approximating various quantities of interest (e.g., cost of the minimum spanning tree, or the cost of an optimal solution to the facility location problem over the data points) from histogram counts, in a privacy-preserving fashion.
On the optimality of the dimensionality reduction method
 in Proc. 47th IEEE Symposium on Foundations of Computer Science (FOCS)
"... We investigate the optimality of (1+ɛ)approximation algorithms obtained via the dimensionality reduction method. We show that: • Any data structure for the (1 + ɛ)approximate nearest neighbor problem in Hamming space, which uses constant number of probes to answer each query, must use n Ω(1/ɛ2) sp ..."
Abstract

Cited by 20 (4 self)
 Add to MetaCart
We investigate the optimality of (1+ɛ)-approximation algorithms obtained via the dimensionality reduction method. We show that:
• Any data structure for the (1+ɛ)-approximate nearest neighbor problem in Hamming space, which uses a constant number of probes to answer each query, must use n^{Ω(1/ɛ²)} space.
• Any algorithm for the (1+ɛ)-approximate closest substring problem must run in time exponential in 1/ɛ^{2−γ} for any γ > 0 (unless 3-SAT can be solved in subexponential time).
Both lower bounds are (essentially) tight.
Efficient sketches for earth-mover distance, with applications
 in FOCS
, 2009
"... Abstract — We provide the first sublinear sketching algorithm for estimating the planar EarthMover Distance with a constant approximation. For sets living in the twodimensional grid [∆] 2, we achieve space ∆ ɛ for approximation O(1/ɛ), for any desired 0 < ɛ < 1. Our sketch has immediate applicati ..."
Abstract

Cited by 14 (7 self)
 Add to MetaCart
We provide the first sublinear sketching algorithm for estimating the planar Earth-Mover Distance with a constant approximation. For sets living in the two-dimensional grid [∆]², we achieve space ∆^ɛ for approximation O(1/ɛ), for any desired 0 < ɛ < 1. Our sketch has immediate applications to the streaming and nearest neighbor search problems.
Sequential Sparse Matching Pursuit
"... Abstract — We propose a new algorithm, called Sequential Sparse Matching Pursuit (SSMP), for solving sparse recovery problems. The algorithm provably recovers a ksparse approximation to an arbitrary ndimensional signal vector x from only O(k log(n/k)) linear measurements of x. The recovery process ..."
Abstract

Cited by 11 (5 self)
 Add to MetaCart
We propose a new algorithm, called Sequential Sparse Matching Pursuit (SSMP), for solving sparse recovery problems. The algorithm provably recovers a k-sparse approximation to an arbitrary n-dimensional signal vector x from only O(k log(n/k)) linear measurements of x. The recovery process takes time that is only near-linear in n. Preliminary experiments indicate that the algorithm works well on synthetic and image data, with the recovery quality often outperforming that of more complex algorithms, such as ℓ1 minimization.
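SSMP itself is not reproduced here, but the greedy-pursuit template it belongs to can be sketched with a simpler relative, orthogonal matching pursuit (a hypothetical illustration under made-up dimensions, not the authors' algorithm): repeatedly pick the column most correlated with the residual, then re-fit on the selected columns.

```python
# Orthogonal matching pursuit: a simple member of the greedy-pursuit family.
import numpy as np

rng = np.random.default_rng(3)
n, m, k = 30, 15, 2
A = rng.normal(size=(m, n))
A /= np.linalg.norm(A, axis=0)          # unit-norm columns
x = np.zeros(n); x[[4, 17]] = [2.0, -1.5]
y = A @ x                               # measurements of a k-sparse signal

S, r = [], y.copy()
for _ in range(k):
    j = int(np.argmax(np.abs(A.T @ r)))          # most correlated column
    S.append(j)
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)  # re-fit on support
    r = y - A[:, S] @ coef                       # updated residual

x_hat = np.zeros(n); x_hat[S] = coef
print("residual norm:", np.linalg.norm(r))       # shrinks at every step
```

In easy regimes (few nonzeros, well-conditioned columns) this often recovers the support exactly; only the monotone shrinkage of the residual is guaranteed unconditionally.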
Near-Optimal Sparse Recovery in the L1 norm
"... We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a highdimensional vector x ∈ Rn from its lowerdimensional sketch Ax ∈ Rm. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an appro ..."
Abstract

Cited by 8 (2 self)
 Add to MetaCart
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x ∈ R^n from its lower-dimensional sketch Ax ∈ R^m. Specifically, we focus on the sparse recovery problem in the L1 norm: for a parameter k, given the sketch Ax, compute an approximation x̂ of x such that the L1 approximation error ‖x − x̂‖1 is close to min_{x′} ‖x − x′‖1, where x′ ranges over all vectors with at most k terms. The sparse recovery problem has been subject to extensive research over the last few years. Many solutions to this problem have been discovered, achieving different tradeoffs between various attributes, such as the sketch length, encoding and recovery times. In this paper we provide a sparse recovery scheme which achieves close to optimal performance on virtually all attributes (see Figure 1). In particular, this is the first scheme that guarantees O(k log(n/k)) sketch length and near-linear O(n log(n/k)) recovery time simultaneously. It also features low encoding and update times, and is noise-resilient.
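The benchmark min_{x′} ‖x − x′‖1 in the abstract has a closed form: the best k-term approximation of x keeps its k largest-magnitude entries, so the optimal error is the ℓ1 mass of everything else. A small helper showing this (my own, not code from the paper):

```python
# Optimal k-term l1 approximation error: the l1 mass left after keeping
# the k largest-magnitude entries of x.
def best_k_term_error(x, k):
    mags = sorted(abs(v) for v in x)
    return sum(mags[:len(x) - k])   # drop the k largest, sum the rest

x = [5.0, -0.1, 0.0, 2.0, 0.05, -3.0]
err = best_k_term_error(x, 2)       # keep 5.0 and -3.0
print(err)                          # 0.1 + 0.0 + 2.0 + 0.05 = 2.15
```

Recovery schemes like the one in this paper guarantee ‖x − x̂‖1 within a constant factor of this quantity, which is 0 exactly when x is k-sparse.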
Sparse recovery using sparse matrices
, 2008
"... We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a highdimensional vector x from its lowerdimensional sketch Ax. A popular way of performing this recovery is by finding x # such that Ax = Ax # , and ‖x # ‖1 is minimal. It is known that this approach ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
We consider the approximate sparse recovery problem, where the goal is to (approximately) recover a high-dimensional vector x from its lower-dimensional sketch Ax. A popular way of performing this recovery is by finding a vector x* such that Ax = Ax* and ‖x*‖1 is minimal. It is known that this approach “works” if A is a random dense matrix, chosen from a proper distribution. In this paper, we investigate this procedure for the case where A is binary and very sparse. We show that, both in theory and in practice, sparse matrices are essentially as “good” as the dense ones. At the same time, sparse binary matrices provide additional benefits, such as reduced encoding and decoding time.