Results 1-10 of 52
Bit Commitment Using Pseudo-Randomness
Journal of Cryptology, 1991
Cited by 228 (15 self)
We show how a pseudorandom generator can provide a bit commitment protocol. We also analyze the number of bits communicated when parties commit to many bits simultaneously, and show that the assumption of the existence of pseudorandom generators suffices to assure amortized O(1) bits of communication per bit commitment.
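The commitment protocol from this abstract (Naor's scheme) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the SHA-256-counter-mode generator stands in for an arbitrary pseudorandom generator, and the parameter `N` and function names are our own. The receiver first sends a random 3n-bit string r; the sender then commits to bit b by sending G(s) if b = 0 and G(s) XOR r if b = 1.

```python
import hashlib
import secrets

N = 16  # seed length in bytes (illustrative; the scheme stretches an n-bit seed to 3n bits)

def prg(seed: bytes, out_len: int) -> bytes:
    """Stand-in pseudorandom generator: SHA-256 in counter mode.
    Any PRG works in the construction; this hash-based one is only for illustration."""
    out = b""
    counter = 0
    while len(out) < out_len:
        out += hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:out_len]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Receiver's first move: a random challenge string r of 3n bits.
r = secrets.token_bytes(3 * N)

def commit(bit: int, r: bytes):
    """Sender commits to `bit`: send G(s) if bit == 0, else G(s) XOR r."""
    s = secrets.token_bytes(N)
    g = prg(s, len(r))
    c = g if bit == 0 else xor(g, r)
    return c, s  # c goes to the receiver; s stays secret until the reveal phase

def verify(c: bytes, r: bytes, bit: int, s: bytes) -> bool:
    """Receiver checks an opened commitment (bit, s) against the stored c."""
    g = prg(s, len(r))
    return c == (g if bit == 0 else xor(g, r))
```

Hiding follows because G(s) is pseudorandom, and binding because equivocating would require finding seeds s, s' with G(s) XOR G(s') = r, which almost no r permits.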
Construction of Asymptotically Good, Low-Rate Error-Correcting Codes through Pseudorandom Graphs
IEEE Transactions on Information Theory, 1992
Cited by 117 (24 self)
A new technique, based on the pseudorandom properties of certain graphs known as expanders, is used to obtain new simple explicit constructions of asymptotically good codes. In one of the constructions, the expanders are used to enhance Justesen codes by replicating, shuffling, and then regrouping the code coordinates. For any fixed (small) rate, and for a sufficiently large alphabet, the codes thus obtained lie above the Zyablov bound. Using these codes as outer codes in a concatenated scheme, a second asymptotically good construction is obtained which applies to small alphabets (say, GF(2)) as well. Although these concatenated codes lie below the Zyablov bound, they are still superior to previously known explicit constructions in the zero-rate neighborhood.
List Decoding Algorithms for Certain Concatenated Codes
Proc. of the 32nd Annual ACM Symposium on Theory of Computing (STOC), 2000
Cited by 51 (20 self)
We give efficient (polynomial-time) list-decoding algorithms for certain families of error-correcting codes obtained by “concatenation”. Specifically, we give list-decoding algorithms for codes where the “outer code” is a Reed-Solomon or algebraic-geometric code and the “inner code” is a Hadamard code. Codes obtained by such concatenation are the best known constructions of error-correcting codes with very large minimum distance. Our decoding algorithms enhance their nice combinatorial properties with algorithmic ones, by decoding these codes up to the currently known bound on their list-decoding “capacity”. In particular, the number of errors that we can correct matches (exactly) the number of errors for which it is known that the list size is bounded by a polynomial in the length of the codewords.
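The inner code in this construction, the Hadamard code, is simple enough to sketch directly (the function name is ours): a k-bit message x is encoded as the list of inner products <x, y> mod 2 over all 2^k vectors y, so any two distinct codewords disagree in exactly half the positions.

```python
from itertools import product

def hadamard_encode(msg_bits):
    """Hadamard code: map k message bits to the 2^k inner products <x, y> mod 2,
    one per y in {0,1}^k. Every nonzero message yields a balanced codeword,
    so the relative distance is exactly 1/2."""
    return [sum(m * y for m, y in zip(msg_bits, ys)) % 2
            for ys in product([0, 1], repeat=len(msg_bits))]

cw0 = hadamard_encode([1, 0, 1])
cw1 = hadamard_encode([0, 1, 1])
dist = sum(a != b for a, b in zip(cw0, cw1))
# distinct codewords differ in exactly half of the 2^k positions
assert dist == len(cw0) // 2
```

The huge distance of the inner code is what lets the concatenated codes of the paper reach such a large minimum distance overall.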
Some Applications of Coding Theory in Computational Complexity
2004
Cited by 49 (2 self)
Error-correcting codes and related combinatorial constructs play an important role in several recent (and old) results in computational complexity theory. In this paper we survey results on locally testable and locally decodable error-correcting codes, and their applications to complexity theory and to cryptography.
Extracting Randomness via Repeated Condensing
Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science, 2000
Cited by 43 (16 self)
On an input probability distribution with some (min-)entropy, an extractor outputs a distribution with a (near-)maximum entropy rate (namely the uniform distribution). A natural weakening of this concept is a condenser, whose output distribution has a higher entropy rate than the input distribution (without losing much of the initial entropy). In this paper we construct efficient explicit condensers. The condenser constructions combine (variants or more efficient versions of) ideas from several works, including the block extraction scheme of [NZ96], the observation made in [SZ94, NT99] that a failure of the block extraction scheme is also useful, the recursive “win-win” case analysis of [ISW99, ISW00], and the error correction of random sources used in [Tre99]. As a natural byproduct (via repeated iterating of condensers), we obtain new extractor constructions. The new extractors give significant qualitative improvements over previous ones for sources of arbitrary min-entropy; they...
Coding for Interactive Communication
Proceedings of the 25th Annual Symposium on Theory of Computing, 1996
Cited by 37 (4 self)
Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol π be known by which, on any input, the processors can solve the problem using no more than T transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in fact the channel is noisy, what is the effect upon the number of transmissions needed in order to solve the computation problem reliably? Technologically this concern is motivated by the increasing importance of communication as a resource in computing, and by the tradeoff in communications equipment between bandwidth, reliability, and expense. We treat a model with random channel noise. We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slowdown. This is an analog for general interactive protocols of Shannon's coding theorem, which deals only with data transmission, ...
Algorithmic Complexity in Coding Theory and the Minimum Distance Problem
1997
Cited by 34 (2 self)
We start with an overview of algorithmic-complexity problems in coding theory. We then show that the problem of computing the minimum distance of a binary linear code is NP-hard, and the corresponding decision problem is NP-complete. This constitutes a proof of the conjecture of Berlekamp, McEliece, and van Tilborg, dating back to 1978. Extensions and applications of this result to other problems in coding theory are discussed.
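The NP-hardness result concerns the general problem; for tiny codes the minimum distance can still be found by exhausting all 2^k - 1 nonzero messages, as in this illustrative sketch (function name and example matrix are ours). The exponential cost in k is consistent with the hardness of the general problem.

```python
from itertools import product

def min_distance(generator_rows):
    """Minimum distance of a binary linear code = minimum Hamming weight over all
    nonzero codewords. Brute force over the 2^k - 1 nonzero message vectors."""
    n = len(generator_rows[0])
    best = n
    for msg in product([0, 1], repeat=len(generator_rows)):
        if not any(msg):
            continue  # skip the all-zero codeword
        cw = [sum(m * g[j] for m, g in zip(msg, generator_rows)) % 2
              for j in range(n)]
        best = min(best, sum(cw))
    return best

# Generator matrix of the [7,4,3] Hamming code in standard form [I | A]
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
assert min_distance(G) == 3
```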
The Complexity of Online Memory Checking
Proceedings of the 46th Annual IEEE Symposium on Foundations of Computer Science, 2005
Cited by 33 (3 self)
We consider the problem of storing a large file on a remote and unreliable server. To verify that the file has not been corrupted, a user could store a small private (randomized) “fingerprint” on his own computer. This is the setting for the well-studied authentication problem in cryptography, and the required fingerprint size is well understood. We study the problem of sublinear authentication: suppose the user would like to encode and store the file in a way that allows him to verify that it has not been corrupted, but without reading the entire file. If the user only wants to read q bits of the file, how large does the size s of the private fingerprint need to be? We define this problem formally, and show a tight lower bound on the relationship between s and q when the adversary is not computationally bounded, namely: s × q = Ω(n), where n is the file size. This is an easier case of the online memory checking problem, introduced by Blum et al. in 1991, and hence the same (tight) lower bound applies also to that problem. It was previously shown that when the adversary is computationally bounded, under the assumption that one-way functions exist, it is possible to construct much better online memory checkers. The same is also true for sublinear authentication schemes. We show that the existence of one-way functions is also a necessary condition: even slightly breaking the s × q = Ω(n) lower bound in a computational setting implies the existence of one-way functions.
Reducing the Complexity of Reductions
Computational Complexity, 1997
Cited by 27 (13 self)
We prove that the Berman-Hartmanis isomorphism conjecture is true under AC^0 reductions. More generally, we show three theorems that hold for any complexity class C closed under (uniform) TC^0-computable many-one reductions. Isomorphism: the sets complete for C under AC^0 reductions are all isomorphic under isomorphisms computable and invertible by AC^0 circuits of depth three. Gap: the sets that are complete for C under AC^0 and NC^0 reducibility coincide. Stop Gap: the sets that are complete for C under AC^0[mod 2] and AC^0 reducibility do not coincide. (These theorems hold both in the nonuniform and P-uniform settings.) To prove the second theorem for P-uniform settings, we show how to derandomize a version of the switching lemma, which may be of independent interest. (We have recently learned that this result is originally due to Ajtai and Wigderson, but it has not been published.)
Asymptotically Good Codes Correcting Insertions, Deletions, and Transpositions
IEEE Transactions on Information Theory, 1999
Cited by 16 (0 self)
We present simple, polynomial-time encodable and decodable codes which are asymptotically good for channels allowing insertions, deletions, and transpositions. As a corollary, they achieve exponential error probability in a stochastic model of insertion-deletion. Keywords: error-correcting codes, insertion, deletion, transposition, edit distance, asymptotically good, asynchronous communication.
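The edit-distance metric named in the keywords is the standard Levenshtein distance, computable with a classic dynamic program; this sketch handles insertions, deletions, and substitutions only, whereas the paper's channel model additionally allows transpositions.

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of single-character insertions,
    deletions, and substitutions transforming a into b.
    Rolling one-row DP, O(len(a) * len(b)) time, O(len(b)) space."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute (free on match)
        prev = cur
    return prev[-1]

assert edit_distance("kitten", "sitting") == 3
```

An asymptotically good code for this channel keeps both its rate and the fraction of correctable edit operations bounded away from zero as the block length grows.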