Results 1–10 of 33
Are bitvectors optimal?
Cited by 54 (7 self)
Abstract:
... We show lower bounds that come close to our upper bounds (for a large range of n and ε): schemes that answer queries with just one bitprobe and error probability ε must use Ω((n/(ε log(1/ε))) log m) bits of storage; if the error is restricted to queries not in S, then the scheme must use Ω((n^2/(ε^2 log(n/ε))) log m) bits of storage. We also ...
An O(k^3 log n)-Approximation Algorithm for Vertex-Connectivity Survivable Network Design
, 2008
Cited by 21 (0 self)
Abstract:
In the Survivable Network Design problem (SNDP), we are given an undirected graph G(V, E) with costs on edges, along with a connectivity requirement r(u, v) for each pair u, v of vertices. The goal is to find a minimum-cost subset E* of edges that satisfies the given set of pairwise connectivity requirements. In the edge-connectivity version we need to ensure that there are r(u, v) edge-disjoint paths for every pair u, v of vertices, while in the vertex-connectivity version the paths are required to be vertex-disjoint. The edge-connectivity version of SNDP is known to have a 2-approximation. However, no non-trivial approximation algorithm has been known so far for the vertex version of SNDP, except for special cases of the problem. We present an extremely simple algorithm that achieves an O(k^3 log n)-approximation for this problem, where k denotes the maximum connectivity requirement and n denotes the number of vertices. We also give a simple proof of the recently discovered O(k^2 log n)-approximation result for the single-source version of vertex-connectivity SNDP. We note that in both cases our analysis in fact yields slightly better guarantees, in that the log n term in the approximation guarantee can be replaced with a log τ term, where τ denotes the number of distinct vertices that participate in one or more pairs with a positive connectivity requirement.
New Constructions for Perfect Hash Families and Related Structures using Combinatorial Designs
 J. Combin. Designs
, 1999
Cited by 16 (7 self)
Abstract:
In this paper, we consider explicit constructions of perfect hash families using combinatorial methods. We provide several direct constructions from combinatorial structures related to orthogonal arrays. We also simplify and generalize a recursive construction due to Atici, Magliveras, Stinson and Wei [3]. Using similar methods, we also obtain efficient constructions for separating hash families, which result in improved existence results for structures such as separating systems, key distribution patterns, group testing algorithms, cover-free families and secure frameproof codes.
Some New Bounds for Cover-Free Families
 J. Combin. Theory A
, 1999
Cited by 16 (3 self)
Abstract:
Let N((w, r); T) denote the minimum number of points in a (w, r)-cover-free family having T blocks. In this paper, we prove two new lower bounds on N.

1 Introduction

Cover-free families were first introduced in 1964 by Kautz and Singleton [9] to investigate superimposed binary codes. These structures have been discussed in several equivalent formulations in subjects such as information theory, combinatorics and group testing by numerous researchers (see, for example, [1, 2, 4, 5, 6, 7, 8, 12]). In 1988, Mitchell and Piper [10] defined the concept of key distribution patterns, which are in fact a generalized type of cover-free family. Some papers giving constructions and bounds for these objects include [3, 4, 11, 14]. Here is the definition of a cover-free family.

Definition 1.1 Let X be an n-set and let F be a set of subsets (blocks) of X. (X, F) is called a (w, r)-cover-free family (or (w, r)-CFF) provided that, for any w blocks B_1, ..., B_w ∈ F and any other ...
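The quoted definition is cut off above. Under the standard completion of this definition (for any w blocks and any r other blocks, the intersection of the w blocks is not contained in the union of the r blocks), the property can be verified by brute force on small families. The following sketch is our own illustration, not taken from the paper; all names are ours:

```python
from itertools import combinations

def is_cover_free(blocks, w, r):
    """Brute-force check of the (w, r)-cover-free property:
    for every choice of w blocks and every choice of r OTHER
    blocks, the intersection of the w blocks must not be
    contained in the union of the r blocks."""
    idx = range(len(blocks))
    for chosen in combinations(idx, w):
        rest = [i for i in idx if i not in chosen]
        inter = set.intersection(*(set(blocks[i]) for i in chosen))
        for others in combinations(rest, r):
            union = set().union(*(blocks[i] for i in others))
            if inter <= union:  # the w-intersection is covered
                return False
    return True
```

For example, the Sperner family {1,2}, {1,3}, {2,3} on three points is (1, 1)-cover-free, since no block contains another; adding a block that contains another block destroys the property.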
Generalized cover-free families
 Discrete Math
, 2002
Cited by 13 (4 self)
Abstract:
Cover-free families have been investigated by many researchers, and several variations of these set systems have been used in diverse applications. In this paper, we introduce a generalization of cover-free families which includes as special cases all of the previously used definitions. Then we give several bounds and some efficient constructions for these generalized cover-free families.
New Bounds for the Language Compression Problem
, 2000
Cited by 9 (2 self)
Abstract:
The CD complexity of a string x is the length of the shortest polynomial-time program which accepts only the string x. The language compression problem consists of giving an upper bound on the CD^{A^n} complexity of all strings x in some set A. The best known upper bound for this problem is 2 log(||A^n||) + O(log n), due to Buhrman and Fortnow. We show that the constant factor 2 in this bound is optimal. We also give new bounds for a certain kind of random sets R ⊆ {0, 1}^n, for which we show an upper bound of log(||R^n||) + O(log n).

1 Introduction

Kolmogorov complexity is a notion that measures the amount of regularity in a finite string. It has turned out to be a very useful tool in theoretical computer science. A simple counting argument, showing that for each length there exist random strings, i.e. strings with no regularity, has had many applications (see [LV97]). Early in the history of computational complexity, resource-bounded notions of Kolmogorov complexity were ...
Compressed Sensing with Probabilistic Measurements: A Group Testing Solution
Cited by 4 (4 self)
Abstract:
Detection of defective members of large populations has been widely studied in the statistics community under the name "group testing", a problem which dates back to World War II, when it was suggested for syphilis screening. There, the main interest is to identify a small number of infected people among a large population using collective samples. In viral epidemics, one way to acquire collective samples is by sending agents inside the population. While in classical group testing it is assumed that the sampling procedure is fully known to the reconstruction algorithm, in this work we assume that the decoder possesses only partial knowledge about the sampling process. This assumption is justified by the fact that in a viral sickness, there is a chance that an agent remains healthy despite having contact with an infected person. Therefore, the reconstruction method has to cope with two different types of uncertainty, namely identification of the infected population and the partially unknown sampling procedure. In this work, by using a natural probabilistic model for "viral infections", we design non-adaptive sampling procedures that allow successful identification of the infected population with overwhelming probability 1 − o(1). We propose both probabilistic and explicit design procedures that require a "small" number of agents to single out the infected individuals. More precisely, for a contamination probability p, the number of agents required by the probabilistic and explicit designs for identification of up to k infected members is bounded by m = O(k^2 (log n)/p^2) and m = O(k^2 (log^2 n)/p^2), respectively. In both cases, a simple decoder is able to successfully identify the infected population in time O(mn).
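The sampling model described in this abstract (an agent's contact with an infected individual infects the agent only with probability p) can be made concrete with a toy simulation. This is our own sketch of the model, not the paper's construction, and all function and parameter names are ours:

```python
import random

def agent_samples(infected, n, m, p, contact_prob, seed=0):
    """Toy simulation of the probabilistic sampling model:
    each of m agents contacts every one of n individuals
    independently with probability contact_prob; a contact
    with an infected individual infects the agent only with
    probability p (the contamination probability). Returns
    the contact pools and the binary outcome per agent."""
    rng = random.Random(seed)
    pools, outcomes = [], []
    for _ in range(m):
        pool = {i for i in range(n) if rng.random() < contact_prob}
        got_infected = any(
            i in infected and rng.random() < p for i in pool
        )
        pools.append(pool)
        outcomes.append(got_infected)
    return pools, outcomes
```

With p = 1 this degenerates to classical (noiseless) group testing with OR outcomes; for p < 1 the decoder must tolerate tests that touch infected individuals yet come back negative, which is exactly the partial knowledge the abstract refers to.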
Deterministic history-independent strategies for storing information on write-once memories
 in Proceedings of the 34th International Colloquium on Automata, Languages and Programming
, 2007
Cited by 4 (2 self)
Abstract:
Motivated by the challenging task of designing "secure" vote storage mechanisms, we deal with information storage mechanisms that operate in extremely hostile environments. In such ...
Noise-resilient group testing: Limitations and constructions
 In Proceedings of the 17th International Symposium on Fundamentals of Computation Theory (FCT)
, 2009
Cited by 3 (0 self)
Abstract:
We study combinatorial group testing schemes for learning d-sparse boolean vectors using highly unreliable disjunctive measurements. We consider an adversarial noise model that only limits the number of false observations, and show that any noise-resilient scheme in this model can only approximately reconstruct the sparse vector. On the positive side, we take this barrier to our advantage and show that approximate reconstruction (within a satisfactory degree of approximation) allows us to break the information-theoretic lower bound of Ω̃(d^2 log n) that is known for exact reconstruction of d-sparse vectors of length n via non-adaptive measurements, by a multiplicative factor Ω̃(d). Specifically, we give simple randomized constructions of non-adaptive measurement schemes, with m = O(d log n) measurements, that allow efficient reconstruction of d-sparse vectors up to O(d) false positives even in the presence of δm false positives and O(m/d) false negatives within the measurement outcomes, for any constant δ < 1. We show that, information-theoretically, none of these parameters can be substantially improved without dramatically affecting the others. Furthermore, we obtain several explicit constructions, in particular one matching the randomized trade-off but using m = O(d^{1+o(1)} log n) measurements. We also obtain explicit constructions that allow fast reconstruction in time poly(m), which would be sublinear in n for sufficiently sparse vectors. The main tool used in our construction is the list-decoding view of randomness condensers and extractors. An immediate consequence of our result is an adaptive scheme that runs in only two non-adaptive rounds and exactly reconstructs any d-sparse vector using a total of O(d log n) measurements, a task that would be impossible in one round and fairly easy in O(log(n/d)) rounds.
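As a concrete illustration of reconstruction from unreliable disjunctive (OR) measurements, here is a toy threshold decoder in the spirit of "noisy COMP". It is our own sketch and not the paper's construction (which is based on randomness condensers and extractors); all names are ours:

```python
def noisy_comp_decode(designs, outcomes, n, threshold=0):
    """Declare item i present unless it appears in more than
    `threshold` tests whose (possibly noisy) outcome is
    negative. With threshold=0 and noiseless OR outcomes this
    is the classical COMP decoder, which returns a superset
    of the defective items (false positives only)."""
    misses = [0] * n
    for pool, positive in zip(designs, outcomes):
        if not positive:
            for i in pool:
                misses[i] += 1
    return {i for i in range(n) if misses[i] <= threshold}
```

Raising `threshold` above 0 lets the decoder absorb a bounded number of false-negative outcomes, at the price of more false positives in the reconstruction, mirroring the trade-off between O(m/d) false negatives and O(d) false positives discussed in the abstract.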
Tracing many users with almost no rate penalty
 IEEE Transactions on Information Theory
, 2007
Cited by 3 (1 self)
Abstract:
For integers n, r ≥ 2 and 1 ≤ k ≤ r, a family F of subsets of [n] = {1, ..., n} is called k-out-of-r multiple user tracing if, given the union of any ℓ ≤ r sets from the family, one can identify at least min(k, ℓ) of them. This is a generalization of superimposed families (k = r) and of single user tracing families (k = 1). The study of such families is motivated by problems in molecular biology and communication. In this paper we study the maximum possible cardinality of such families, denoted by h(n, r, k), and show that there exist absolute constants c1, c2, c3, c4 > 0 such that min(c1/r, c2/k^2) ≤ log h(n, r, k)/n ≤ min(c3/r, c4 log k/k^2). In particular, for all k ≤ √r, log h(n, r, k)/n = Θ(1/r). This improves an estimate of Laczay and Ruszinkó.
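The identification requirement in this definition can be checked by brute force on small families: a set is identifiable from a union U if it belongs to every collection of at most r family members realizing U. The sketch below is our own illustration of the property, not from the paper:

```python
from itertools import combinations

def is_multiple_user_tracing(family, k, r):
    """Brute-force check of the k-out-of-r multiple user
    tracing property: for every union U realizable by some
    collection of at most r sets, the sets common to ALL
    collections realizing U (the identifiable ones) must
    number at least min(k, l) for each such l-collection."""
    fam = [frozenset(s) for s in family]
    idx = range(len(fam))
    by_union = {}  # union -> list of index-collections realizing it
    for l in range(1, r + 1):
        for coll in combinations(idx, l):
            u = frozenset().union(*(fam[i] for i in coll))
            by_union.setdefault(u, []).append(set(coll))
    for collections in by_union.values():
        identifiable = set.intersection(*collections)
        for coll in collections:
            if len(identifiable) < min(k, len(coll)):
                return False
    return True
```

For instance, a family of pairwise disjoint sets is k-out-of-r multiple user tracing for every k ≤ r, since each union determines its constituents uniquely, while a family containing {0}, {1} and {0, 1} is not even single user tracing (k = 1) for r = 2, because the union {0, 1} has several incompatible explanations.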