Results 1-10 of 51
Are bitvectors optimal?
Cited by 54 (7 self)

Abstract: ... We show lower bounds that come close to our upper bounds (for a large range of n and ε): schemes that answer queries with just one bitprobe and error probability ε must use Ω((n / (ε log(1/ε))) log m) bits of storage; if the error is restricted to queries not in S, then the scheme must use Ω((n² / (ε² log(n/ε))) log m) bits of storage. We also ...
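A one-bitprobe membership scheme of the kind bounded above can be sketched as a single-hash filter. The toy Python version below is our illustration, not the paper's construction; `build_table`, `query`, and all parameters are hypothetical names.

```python
import random

# Minimal sketch (our illustration, not the paper's construction): a
# one-bitprobe membership scheme in the spirit of a single-hash filter.

def build_table(S, table_size, seed=0):
    """Store S in a bit table so that membership is read with one bit probe."""
    rng = random.Random(seed)
    cache = {}
    def hash_of(x):
        # One random hash function, modeled lazily as a lookup table.
        if x not in cache:
            cache[x] = rng.randrange(table_size)
        return cache[x]
    table = [0] * table_size
    for x in S:
        table[hash_of(x)] = 1
    return table, hash_of

def query(table, hash_of, x):
    return table[hash_of(x)] == 1   # exactly one bit is probed

S = set(range(50))
table, h = build_table(S, table_size=4_000)
assert all(query(table, h, x) for x in S)        # members are never rejected
false_pos = sum(query(table, h, x) for x in range(50, 10_000))
print(false_pos / 9_950)          # small one-sided false-positive rate
```

In this toy scheme, pushing the error ε down means growing the table roughly like n/ε, which is the regime the lower bounds above constrain.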
On The Upper Bound Of The Size Of The r-Cover-Free Families
, 1994
Cited by 46 (2 self)

Abstract: Let T(r, n) denote the maximum number of subsets of an n-set satisfying the condition in the title. It is proved, in a purely combinatorial way, that for n sufficiently large, (log₂ T(r, n)) / n ≤ 8 · (log₂ r) / r² holds.

1. Introduction. The notion of r-cover-free families was introduced by Kautz and Singleton in 1964 [17]. They initiated the investigation of binary codes with the property that the disjunctions of any r (r ≥ 2) codewords are distinct (UD_r codes). This led them to study binary codes with the property that no codeword is covered by the disjunction of r others (superimposed codes, ZFD_r codes; P. Erdős, P. Frankl and Z. Füredi called the corresponding set systems r-cover-free in [7]). Since then, many results have been proved about the maximum size of these codes. Various authors have studied these problems from essentially three different points of view, and these three lines of investigation were almost independent of each other. This is why many results were ...
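The cover-free condition can be made concrete with a brute-force checker. The code and examples below are ours, purely illustrative: codewords are small ints used as bit masks.

```python
from itertools import combinations

# Illustrative brute-force check of the r-cover-free property: no codeword
# may be covered by the bitwise OR (disjunction) of r other codewords.

def is_r_cover_free(codewords, r):
    for i, c in enumerate(codewords):
        others = codewords[:i] + codewords[i + 1:]
        for group in combinations(others, r):
            union = 0
            for g in group:
                union |= g
            if (c & ~union) == 0:   # every 1-bit of c lies in the union
                return False
    return True

# The "identity" code is 2-cover-free: each word has a private position.
code = [0b1000, 0b0100, 0b0010, 0b0001]
print(is_r_cover_free(code, 2))                  # True
# 0b1100 is covered by 0b1000 | 0b0100, so adding it breaks the property.
print(is_r_cover_free(code + [0b1100], 2))       # False
```

The bound in the abstract says that, per ground-set element, such a family can carry only about 8(log₂ r)/r² bits of information once r is large.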
Non-Adaptive Fault Diagnosis for All-Optical Networks via Combinatorial Group Testing on Graphs
Cited by 22 (0 self)

Abstract: We consider the fault diagnosis problem in all-optical networks, focusing on probing schemes to detect faults. Our work concentrates on non-adaptive probing schemes, in order to meet the stringent time requirements of fault recovery. This fault diagnosis problem motivates a new technical framework that we introduce: group testing with graph-based constraints. Using this framework, we develop several new probing schemes to detect network faults. The efficiency of our schemes often depends on the network topology; in many cases we can show that our schemes are near-optimal by providing tight lower bounds.
Group Testing with Probabilistic Tests: Theory, Design and Application
Cited by 9 (1 self)

Abstract: Identification of defective members of large populations has been widely studied in the statistics community under the name of group testing. It involves grouping subsets of items into different pools and detecting defective members based on the set of test results obtained for each pool. In the classical noiseless group testing setup, it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in the sense that the presence of a defective member in a pool makes that pool's test outcome positive. However, this may not always be a valid assumption. In particular, we consider the case where the defective items in a pool can become independently inactive with a certain probability; hence, a pool may yield a negative test result despite containing defective items. As a result, any sampling and reconstruction method must cope with two different types of uncertainty: the unknown set of defective items and the partially unknown, probabilistic testing procedure. In this work, motivated by the application of detecting infected people in viral epidemics, we design non-adaptive sampling procedures that allow successful identification of the defective items through a set of probabilistic tests. Our design requires only a small number of tests to single out the defective items.
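The probabilistic-test setup can be illustrated with a small simulation: a defective item in a pool is "active" only with probability p, so pools containing defectives can still test negative. The threshold decoder below is our illustrative heuristic, not the paper's design, and all names and parameters are hypothetical.

```python
import random

# Toy simulation of group testing with probabilistic (dilution) tests.

def simulate(n, defectives, num_pools, pool_rate, p, rng):
    pools = [[x for x in range(n) if rng.random() < pool_rate]
             for _ in range(num_pools)]
    # A pool is positive iff at least one defective member is active.
    outcomes = [any(x in defectives and rng.random() < p for x in pool)
                for pool in pools]
    return pools, outcomes

def decode(n, pools, outcomes, threshold):
    # Score each item by the fraction of its pools that tested negative;
    # true defectives should rarely appear in negative pools.
    neg, tot = [0] * n, [0] * n
    for pool, positive in zip(pools, outcomes):
        for x in pool:
            tot[x] += 1
            if not positive:
                neg[x] += 1
    return {x for x in range(n)
            if tot[x] > 0 and neg[x] / tot[x] < threshold}

rng = random.Random(1)
defectives = {3, 17, 42}
pools, outcomes = simulate(100, defectives, num_pools=400,
                           pool_rate=0.05, p=0.8, rng=rng)
print(sorted(decode(100, pools, outcomes, threshold=0.5)))
```

With enough pools per item, defectives concentrate well below the threshold and healthy items well above it, which is the separation a real design must guarantee with far fewer tests.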
New Bounds for the Language Compression Problem
, 2000
Cited by 9 (2 self)

Abstract: The CD complexity of a string x is the length of the shortest polynomial-time program which accepts only the string x. The language compression problem consists of giving an upper bound on the CD^{A^n} complexity of all strings x in some set A. The best known upper bound for this problem is 2 log(‖A^n‖) + O(log n), due to Buhrman and Fortnow. We show that the constant factor 2 in this bound is optimal. We also give new bounds for a certain kind of random sets R ⊆ {0,1}^n, for which we show an upper bound of log(‖R^n‖) + O(log n).

1. Introduction. Kolmogorov complexity is a notion that measures the amount of regularity in a finite string. It has turned out to be a very useful tool in theoretical computer science. A simple counting argument, showing that for each length there exist random strings, i.e. strings with no regularity, has had many applications (see [LV97]). Early in the history of computational complexity, resource-bounded notions of Kolmogorov complexity were ...
The quantum complexity of set membership
 In Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science
, 2000
Cited by 7 (2 self)

Abstract: We study the quantum complexity of the static set membership problem: given a subset S (|S| ≤ n) of a universe of size m (m ≫ n), store it as a table T : {0,1}^r → {0,1} of bits so that queries of the form "Is x in S?" can be answered. The goal is to use a small table and yet answer queries using few bit probes. This problem was considered recently by Buhrman, Miltersen, Radhakrishnan and Venkatesh [BMRV00], who showed lower and upper bounds for this problem in the classical deterministic and randomised models. In this paper, we formulate this problem in the "quantum bit probe model". We assume that access to the table T is provided by means of a black box (oracle) unitary transform O_T that takes the basis state |y, b⟩ to the basis state |y, b ⊕ T(y)⟩. The query algorithm is allowed to apply O_T on any superposition of basis states. We show tradeoff results between space (defined as 2^r) and number of probes (oracle calls) in this model. Our results show that the lower bounds shown in [BMRV00] for the classical model also hold (with minor differences) in the quantum bit probe model. These bounds almost match the classical upper bounds. Our lower bounds are proved using linear algebraic arguments.
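The oracle O_T described above just permutes computational basis states, which can be checked with a tiny classical simulation. States are modeled as dicts from basis labels to amplitudes; the 4-entry table T below is our illustrative example, not taken from the paper.

```python
# Classical simulation of the oracle O_T: it maps the basis state |y, b> to
# |y, b XOR T(y)>, a permutation of basis states and hence unitary.

def apply_oracle(state, T):
    """Apply |y, b> -> |y, b ^ T(y)> linearly to a superposition."""
    out = {}
    for (y, b), amp in state.items():
        key = (y, b ^ T[y])
        out[key] = out.get(key, 0.0) + amp
    return out

T = [0, 1, 1, 0]                          # bit table with r = 2 address bits
state = {(y, 0): 0.5 for y in range(4)}   # uniform query superposition
after = apply_oracle(state, T)
print(after)                              # amplitude now sits on |y, T(y)>

# O_T is self-inverse, since b ^ T(y) ^ T(y) == b.
assert apply_oracle(after, T) == state
```

Because a single call moves amplitude onto all table bits at once, quantum probes are more powerful per call than classical ones, which is why the matching lower bounds in the paper need linear algebraic arguments.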
On Greedy Algorithms in Coding Theory
, 1996
Cited by 6 (0 self)

Abstract: We study a wide class of problems in coding theory for which we consider two different formulations: in terms of incidence matrices and in terms of hypergraphs. These problems are dealt with using a greedy algorithm due to Stein and Lovász. Some examples, including the construction of covering codes, codes for conflict resolution, separating systems, source encoding with distortion, etc., are given a unified treatment. Under certain conditions derandomization can be performed, leading to an essential reduction in the complexity of the constructions.
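The Stein-Lovász argument is, at its core, greedy covering: repeatedly pick the block that covers the most still-uncovered points. The sketch below shows that generic step on a toy instance of our own making; the abstract applies the same argument to covering codes, conflict resolution, separating systems, and so on.

```python
# Generic greedy covering in the Stein-Lovász style.

def greedy_cover(universe, blocks):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the block covering the most still-uncovered points.
        best = max(blocks, key=lambda b: len(uncovered & b))
        if not uncovered & best:
            raise ValueError("blocks do not cover the universe")
        chosen.append(best)
        uncovered -= best
    return chosen

blocks = [frozenset({0, 1, 2}), frozenset({2, 3, 4}),
          frozenset({4, 5, 6}), frozenset({6, 7, 0}), frozenset({1, 3, 5})]
cover = greedy_cover(range(8), blocks)
print(len(cover))   # within a logarithmic factor of the optimum
```

The standard analysis shows the greedy choice shrinks the uncovered set geometrically, giving covers within a logarithmic factor of optimal, which is exactly the quantitative handle the unified treatment exploits.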
Noise-resilient group testing: Limitations and constructions
 In Proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT)
, 2009
Cited by 5 (0 self)

Abstract: We study combinatorial group testing schemes for learning d-sparse boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information-theoretic lower bound of Ω̃(d² log n) that is known for exact reconstruction of d-sparse vectors of length n via non-adaptive measurements, by a multiplicative factor Ω̃(d). Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with m = O(d log n) measurements, that allow efficient reconstruction of d-sparse vectors up to O(d) false positives even in the presence of δm false positives and O(m/d) false negatives within the measurement outcomes, for any constant δ < 1. We show that, information-theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized tradeoff but using m = O(d^{1+o(1)} log n) measurements. We also obtain explicit constructions that allow fast reconstruction in time poly(m), which would be sublinear in n for sufficiently sparse vectors. The main tool used in our constructions is the list-decoding view of randomness condensers and extractors. An immediate consequence of our result is an adaptive scheme that runs in only two non-adaptive rounds and exactly reconstructs any d-sparse vector using a total of O(d log n) measurements, a task that would be impossible in one round and fairly easy in O(log(n/d)) rounds.
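The two-round idea mentioned at the end of the abstract can be sketched directly: round 1 uses random pools to shrink the candidate set (here by noiseless COMP-style elimination, a simplification of the paper's approximate reconstruction), and round 2 tests each surviving candidate individually. All parameter choices below are ours and purely illustrative.

```python
import random

# Toy two-round group testing: approximate round, then individual tests.

def round1_candidates(n, defectives, num_pools, pool_rate, rng):
    candidates = set(range(n))
    for _ in range(num_pools):
        pool = {x for x in range(n) if rng.random() < pool_rate}
        if not pool & defectives:       # a negative pool clears its members
            candidates -= pool
    return candidates

rng = random.Random(7)
defectives = {5, 31, 64}
candidates = round1_candidates(100, defectives, num_pools=60,
                               pool_rate=1 / 3, rng=rng)
# Round 2: one individual test per remaining candidate.
recovered = {x for x in candidates if x in defectives}
print(recovered == defectives)   # True: round 1 never discards a defective
```

Round 1 can only over-approximate the defective set, so the second round's individual tests always finish the job; keeping the round-1 candidate list down to O(d) items is what lets the total measurement count reach O(d log n).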
Compressed Sensing with Probabilistic Measurements: A Group Testing Solution
Cited by 5 (5 self)

Abstract: Detection of defective members of large populations has been widely studied in the statistics community under the name "group testing", a problem which dates back to World War II, when it was suggested for syphilis screening. There, the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by the observation that in a viral sickness there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty: identification of the infected population and the partially unknown sampling procedure. In this work, using a natural probabilistic model for "viral infections", we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1 − o(1). We propose both probabilistic and explicit design procedures that require a "small" number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k²(log n)/p²) and m = O(k²(log² n)/p²), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).
Cover-Free Families and Superimposed Codes: Constructions, Bounds, and Applications to Cryptography and Group Testing
Cited by 5 (0 self)

Abstract: This paper deals with (s, ℓ)-cover-free families, or superimposed (s, ℓ)-codes. They generalize the concept of superimposed s-codes and have several applications in cryptography and group testing. We present a new asymptotic bound on the rate of optimal codes and develop some constructions.