Results 1–10 of 41
Signal recovery from random measurements via Orthogonal Matching Pursuit
IEEE Trans. Inform. Theory, 2007
Cited by 292 (9 self)
Abstract. This technical report demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results for OMP, which require O(m²) measurements. The new results for OMP are comparable with recent results for another algorithm called Basis Pursuit (BP). The OMP algorithm is faster and easier to implement, which makes it an attractive alternative to BP for signal recovery problems.
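The greedy loop the abstract refers to is short enough to sketch. Below is a minimal pure-Python OMP, assuming the measurement matrix is given explicitly as a list of rows; the `solve` helper and the tiny orthonormal test case are illustrative choices, not the paper's experimental setup (which uses random measurement matrices).

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small square system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def omp(Phi, y, m):
    # Phi: n rows of length d; y: length-n measurement vector; m: sparsity level.
    n, d = len(Phi), len(Phi[0])
    support, residual = [], list(y)
    for _ in range(m):
        # 1. Greedy step: pick the column most correlated with the residual.
        corr = [abs(sum(Phi[i][j] * residual[i] for i in range(n))) for j in range(d)]
        support.append(max(range(d), key=lambda j: corr[j]))
        # 2. Least squares on the chosen support (normal equations).
        G = [[sum(Phi[i][a] * Phi[i][b] for i in range(n)) for b in support] for a in support]
        rhs = [sum(Phi[i][a] * y[i] for i in range(n)) for a in support]
        coef = solve(G, rhs)
        # 3. Subtract the current approximation to get the new residual.
        approx = [sum(coef[k] * Phi[i][support[k]] for k in range(len(support))) for i in range(n)]
        residual = [y[i] - approx[i] for i in range(n)]
    x = [0.0] * d
    for k, j in enumerate(support):
        x[j] = coef[k]
    return x
```

Each iteration adds one column, then re-fits all chosen coefficients by least squares; that re-fitting step is what distinguishes OMP from plain Matching Pursuit.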
Signal recovery from partial information via Orthogonal Matching Pursuit. Submitted to
IEEE Trans. Inform. Theory, 2005
Cited by 149 (8 self)
Abstract. This article demonstrates theoretically and empirically that a greedy algorithm called Orthogonal Matching Pursuit (OMP) can reliably recover a signal with m nonzero entries in dimension d given O(m ln d) random linear measurements of that signal. This is a massive improvement over previous results for OMP, which require O(m²) measurements. The new results for OMP are comparable with recent results for another algorithm called Basis Pursuit (BP). The OMP algorithm is much faster and much easier to implement, which makes it an attractive alternative to BP for signal recovery problems.
Searchable encryption revisited: Consistency properties, relation to anonymous IBE, and extensions. Full version of current paper. Available at IACR Cryptology ePrint Archive, http://eprint.iacr.org
Cited by 81 (3 self)
Abstract. We identify and fill some gaps with regard to consistency (the extent to which false positives are produced) for public-key encryption with keyword search (PEKS). We define computational and statistical relaxations of the existing notion of perfect consistency, show that the scheme of [7] is computationally consistent, and provide a new scheme that is statistically consistent. We also provide a transform of an anonymous IBE scheme to a secure PEKS scheme that, unlike the previous one, guarantees consistency. Finally we suggest three extensions of the basic notions considered here, namely anonymous HIBE, public-key encryption with temporary keyword search, and identity-based encryption with keyword search.
Indexing Information for Data Forensics
2005
Cited by 15 (5 self)
We introduce novel techniques for organizing the indexing structures of stored data so that alterations from an original version can be detected and the changed values specifically identified. We give forensic constructions for several fundamental data structures, including arrays, linked lists, binary search trees, skip lists, and hash tables. Some of our constructions are based on a new reduced-randomness construction for non-adaptive combinatorial group testing.
DESIGNING COMPRESSIVE SENSING DNA MICROARRAYS
Cited by 14 (3 self)
A Compressive Sensing Microarray (CSM) is a new device for DNA-based identification of target organisms that leverages the nascent theory of Compressive Sensing (CS). In contrast to a conventional DNA microarray, in which each genetic sensor spot is designed to respond to a single target organism, in a CSM each sensor spot responds to a group of targets. As a result, significantly fewer total sensor spots are required. In this paper, we study how to design group identifier probes that simultaneously account for both the constraints from CS theory and the biochemistry of probe-target DNA hybridization. We employ Belief Propagation as a CS recovery method to estimate target concentrations from the microarray intensities.
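As a toy illustration of the "each spot responds to a group of targets" idea, here is a made-up bit-indexing grouping; it is not one of the paper's probe designs, which must additionally satisfy CS and hybridization constraints.

```python
def spot_intensities(concentrations):
    # Hypothetical CSM-style grouped measurement: spot s responds to every
    # target whose index has bit s set, so 3 spots cover 8 targets.
    n_spots = 3
    y = [0.0] * n_spots
    for target, c in enumerate(concentrations):
        for s in range(n_spots):
            if (target >> s) & 1:
                y[s] += c
    return y
```

With linear, non-saturating spot responses, 3 grouped spots distinguish any single target among 8, since the intensity pattern is the binary encoding of the target's index.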
New constructions of non-adaptive and error-tolerance pooling designs
Discrete Math, 2002
Cited by 10 (3 self)
Ding-Zhu Du
We propose two new classes of non-adaptive pooling designs. The first is guaranteed to detect, and hence correct, a number of errors that grows linearly with d, the maximum number of defectives (or positives), where d is a positive integer. This construction also induces a binary code whose minimum Hamming distance grows linearly with d. The second design is the q-analogue of a known construction of d-disjunct matrices.
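For context, this is how such pooling designs are decoded: in a d-disjunct design, every non-defective item appears in at least one pool containing no defectives, so the naive "clear anything in a negative pool" decoder identifies the defectives exactly. A minimal sketch with a made-up 8-item design, not one of the paper's constructions:

```python
def decode_group_tests(pools, outcomes, n_items):
    # pools: list of sets of item indices tested together (non-adaptive design).
    # outcomes: parallel list of booleans (True = pool tested positive).
    # Any item that appears in a negative pool cannot be defective.
    cleared = set()
    for pool, positive in zip(pools, outcomes):
        if not positive:
            cleared |= pool
    return sorted(set(range(n_items)) - cleared)
```

The error tolerance the abstract describes comes from designs in which each item's pool pattern differs enough from every union of d others that a few flipped outcomes still leave the defective set uniquely decodable.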
DNA array decoding from nonlinear measurements by belief propagation
In IEEE SSP Workshop, 2007
Cited by 8 (4 self)
We propose a signal recovery method using Belief Propagation (BP) for nonlinear Compressed Sensing (CS) and demonstrate its utility in DNA array decoding. In a CS DNA microarray, the array spots identify DNA sequences that are shared between multiple organisms, thereby reducing the number of spots required. The sparsity in DNA sequence commonality between different organisms translates to conditions that render BP efficient for signal reconstruction. However, an excessively high concentration of target DNA molecules has a nonlinear effect on the measurements: it causes saturation of the measurement intensities at the array spots. We use a modified BP to estimate the target signal coefficients, since BP, unlike ℓ1 decoding or greedy algorithms, is flexible enough to handle the nonlinearity, and we show that the original signal coefficients can be recovered from saturated measurements of their linear combinations.
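The modified BP decoder is beyond a short sketch, but the nonlinearity itself is simple to state: measured intensities are the linear combinations clipped at a saturation level. A minimal model of that clipping (the saturation level here is an arbitrary illustrative value):

```python
def saturated_measurements(linear_intensities, sat_level):
    # The nonlinearity described in the abstract: spot intensities
    # saturate (clip) once concentrations drive them past sat_level.
    return [min(v, sat_level) for v in linear_intensities]
```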
Non-Adaptive Fault Diagnosis for All-Optical Networks via Combinatorial Group Testing on Graphs
Cited by 8 (0 self)
Abstract. We consider the fault diagnosis problem in all-optical networks, focusing on probing schemes to detect faults. Our work concentrates on non-adaptive probing schemes, in order to meet the stringent time requirements for fault recovery. This fault diagnosis problem motivates a new technical framework that we introduce: group testing with graph-based constraints. Using this framework, we develop several new probing schemes to detect network faults. The efficiency of our schemes often depends on the network topology; in many cases we can show that our schemes are near-optimal by providing tight lower bounds.
Fast Distributed Graph Coloring With ... Colors
Cited by 7 (0 self)
We consider the problem of deterministic distributed coloring of an n-vertex graph with maximum degree Δ, assuming that every vertex knows a priori only its own label and the parameters n and Δ. The aim is to get a fast algorithm using few colors. Linial [9] showed a vertex-coloring algorithm working in time O(log* n) and using O(Δ²) colors. We improve both the time and the number of colors simultaneously by showing an algorithm working in time O(log(n/Δ)) and using O(Δ) colors. This is the first known O(Δ)-vertex-coloring distributed algorithm which can work faster than in polylogarithmic time. Our method also gives an edge-coloring algorithm with the same number of colors and running time. On the other hand, it follows from Linial [9] that our time for O(Δ)-coloring cannot be improved in general.
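The paper's algorithm is more involved, but the flavor of fast distributed color reduction can be seen in the classic Cole–Vishkin bit trick on a directed cycle, which Linial-style algorithms build on: in one synchronous round, each node shrinks a proper coloring with k-bit colors down to at most 2k colors. A sketch, assuming the input coloring is already proper:

```python
def cole_vishkin_step(colors):
    # One synchronous round on a directed cycle: each node compares its color
    # with its successor's, finds the lowest bit position i where they differ,
    # and takes 2*i + (its own bit at position i) as its new color.
    n = len(colors)
    new = []
    for v in range(n):
        succ = colors[(v + 1) % n]
        diff = colors[v] ^ succ
        i = (diff & -diff).bit_length() - 1  # index of lowest differing bit
        new.append(2 * i + ((colors[v] >> i) & 1))
    return new
```

Adjacent nodes either pick different bit positions i or the same position with differing bits, so the new coloring stays proper; iterating this step is what gives the very fast (sub-polylogarithmic) round complexities discussed above.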
Comparing the strength of query types in property testing: the case of testing k-colorability
2007
Cited by 6 (3 self)
Abstract. We study the power of four query models in the context of property testing in general graphs, where our main case study is the problem of testing k-colorability. Two query types, which have been studied extensively in the past, are pair queries and neighbor queries. The former corresponds to asking whether there is an edge between a particular pair of vertices, and the latter to asking for the i-th neighbor of a particular vertex. We show that while for pair queries testing k-colorability requires a number of queries that is a monotone decreasing function of the average degree d, the query complexity in the case of neighbor queries remains roughly the same for every density and for large values of k. We also consider a combined model that allows both types of queries, and we propose a new, stronger query model related to the field of Group Testing. We give upper and lower bounds on the query complexity for one-sided error in all the models, where the bounds are nearly tight for three of the models. In some of the cases our lower bounds extend to two-sided error algorithms. The problem of testing k-colorability was previously studied in the contexts of dense graphs and of sparse graphs, and in our proofs we unify approaches from those cases, and also provide some new tools and techniques that may be of independent interest.
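The two classical query types are easy to pin down concretely. A minimal oracle sketch (the class name and adjacency-list representation are illustrative choices, not notation from the paper):

```python
class GraphOracle:
    def __init__(self, adj):
        # adj: dict mapping each vertex to a sorted list of its neighbors.
        self.adj = adj

    def pair_query(self, u, v):
        # Pair query: is (u, v) an edge?
        return v in self.adj[u]

    def neighbor_query(self, v, i):
        # Neighbor query: the i-th neighbor of v, or None if v has fewer.
        return self.adj[v][i] if i < len(self.adj[v]) else None
```

A tester in the combined model may issue both kinds of calls; the group-testing-inspired model the abstract proposes would add queries about sets of vertices rather than single pairs.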