Results 1–10 of 47
Expander Codes
 IEEE Transactions on Information Theory
, 1996
"... We present a new class of asymptotically good, linear errorcorrecting codes based upon expander graphs. These codes have linear time sequential decoding algorithms, logarithmic time parallel decoding algorithms with a linear number of processors, and are simple to understand. We present both random ..."
Abstract

Cited by 275 (10 self)
We present a new class of asymptotically good, linear error-correcting codes based upon expander graphs. These codes have linear-time sequential decoding algorithms, logarithmic-time parallel decoding algorithms with a linear number of processors, and are simple to understand. We present both randomized and explicit constructions for some of these codes. Experimental results demonstrate the extremely good performance of the randomly chosen codes. 1. Introduction We present a new class of error-correcting codes derived from expander graphs. These codes have the advantage that they can be decoded very efficiently, which makes them particularly suitable for devices that must decode cheaply, such as compact disk players and remote satellite receivers. We hope that the connection we draw between expander graphs and error-correcting codes will stimulate research in both fields. 1.1. Error-correcting codes An error-correcting code is a mapping from messages to codewords such that the mappi...
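The abstract's definition of an error-correcting code as a mapping from messages to codewords can be illustrated with a toy example (not from the paper): a (3,1) repetition code, which repeats each message bit three times and corrects any single bit flip per block by majority vote.

```python
# Illustrative sketch, not the paper's construction: a (3,1) repetition code
# as a minimal example of a mapping from messages to codewords.
def encode(bits):
    """Map each message bit to three copies of itself."""
    return [b for b in bits for _ in range(3)]

def decode(codeword):
    """Majority-vote each block of three, correcting one flipped bit per block."""
    return [1 if sum(codeword[i:i + 3]) >= 2 else 0
            for i in range(0, len(codeword), 3)]

msg = [1, 0, 1]
cw = encode(msg)          # [1, 1, 1, 0, 0, 0, 1, 1, 1]
cw[4] = 1                 # flip one bit, simulating a channel error
assert decode(cw) == msg  # the error is corrected
```

Expander codes achieve far better rate and error tolerance than this toy code while keeping decoding cost linear in the block length.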
Checking Computations in Polylogarithmic Time
, 1991
"... . Motivated by Manuel Blum's concept of instance checking, we consider new, very fast and generic mechanisms of checking computations. Our results exploit recent advances in interactive proof protocols [LFKN92], [Sha92], and especially the MIP = NEXP protocol from [BFL91]. We show that every nondete ..."
Abstract

Cited by 261 (10 self)
Motivated by Manuel Blum's concept of instance checking, we consider new, very fast and generic mechanisms of checking computations. Our results exploit recent advances in interactive proof protocols [LFKN92], [Sha92], and especially the MIP = NEXP protocol from [BFL91]. We show that every nondeterministic computational task S(x; y), defined as a polynomial-time relation between the instance x, representing the input and output combined, and the witness y, can be modified to a task S′ such that: (i) the same instances remain accepted; (ii) each instance/witness pair becomes checkable in polylogarithmic Monte Carlo time; and (iii) a witness satisfying S′ can be computed in polynomial time from a witness satisfying S. Here the instance and the description of S have to be provided in error-correcting code (since the checker will not notice slight changes). A modification of the MIP proof was required to achieve polynomial time in (iii); the earlier technique yields N^{O(log log N)}...
Linear-time Encodable and Decodable Error-Correcting Codes
, 1996
"... We present a new class of asymptotically good, linear errorcorrecting codes. These codes can be both encoded and decoded in linear time. They can also be encoded by logarithmicdepth circuits of linear size and decoded by logarithmic depth circuits of size 0 (n log n). We present both randomized an ..."
Abstract

Cited by 118 (5 self)
We present a new class of asymptotically good, linear error-correcting codes. These codes can be both encoded and decoded in linear time. They can also be encoded by logarithmic-depth circuits of linear size and decoded by logarithmic-depth circuits of size O(n log n). We present both randomized and explicit constructions of these codes.
Faster Integer Multiplication
 STOC'07
, 2007
"... For more than 35 years, the fastest known method for integer multiplication has been the SchönhageStrassen algorithm running in time O(n log n log log n). Under certain restrictive conditions there is a corresponding Ω(n log n) lower bound. The prevailing conjecture has always been that the complex ..."
Abstract

Cited by 41 (0 self)
For more than 35 years, the fastest known method for integer multiplication has been the Schönhage-Strassen algorithm, running in time O(n log n log log n). Under certain restrictive conditions there is a corresponding Ω(n log n) lower bound. The prevailing conjecture has always been that the complexity of an optimal algorithm is Θ(n log n). We present a major step towards closing the gap from above by presenting an algorithm running in time n log n · 2^{O(log* n)}. The main result holds for boolean circuits as well as for multi-tape Turing machines, but it has consequences for other models of computation as well.
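As background for why splitting operands speeds up multiplication (this is not the paper's algorithm), Karatsuba's earlier method reduces the number of recursive multiplications from four to three per split, already beating the naive quadratic bound:

```python
# Illustrative sketch: Karatsuba multiplication, an O(n^{log2 3}) ≈ O(n^1.585)
# predecessor of Schönhage-Strassen. Splitting x = hi·10^m + lo for both
# operands needs only three recursive products instead of four.
def karatsuba(x, y):
    if x < 10 or y < 10:          # base case: single-digit factor
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    hi_x, lo_x = divmod(x, 10 ** m)
    hi_y, lo_y = divmod(y, 10 ** m)
    a = karatsuba(hi_x, hi_y)     # product of high parts
    b = karatsuba(lo_x, lo_y)     # product of low parts
    # one extra product recovers both cross terms:
    c = karatsuba(hi_x + lo_x, hi_y + lo_y) - a - b
    return a * 10 ** (2 * m) + c * 10 ** m + b

assert karatsuba(1234, 5678) == 1234 * 5678
```

Schönhage-Strassen and the algorithm of this paper push the same divide-and-conquer idea much further by multiplying via fast Fourier transforms.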
A NATURAL AXIOMATIZATION OF COMPUTABILITY AND PROOF OF CHURCH’S THESIS
"... Abstract. Church’s Thesis asserts that the only numeric functions that can be calculated by effective means are the recursive ones, which are the same, extensionally, as the Turingcomputable numeric functions. The Abstract State Machine Theorem states that every classical algorithm is behaviorally e ..."
Abstract

Cited by 21 (10 self)
Church’s Thesis asserts that the only numeric functions that can be calculated by effective means are the recursive ones, which are the same, extensionally, as the Turing-computable numeric functions. The Abstract State Machine Theorem states that every classical algorithm is behaviorally equivalent to an abstract state machine. This theorem presupposes three natural postulates about algorithmic computation. Here, we show that augmenting those postulates with an additional requirement regarding basic operations gives a natural axiomatization of computability and a proof of Church’s Thesis, as Gödel and others suggested may be possible. In a similar way, but with a different set of basic operations, one can prove Turing’s Thesis, characterizing the effective string functions, and, in particular, the effectively-computable functions on string representations of numbers.
Algorithms: A quest for absolute definitions
 Bulletin of the European Association for Theoretical Computer Science
, 2003
"... y Abstract What is an algorithm? The interest in this foundational problem is not only theoretical; applications include specification, validation and verification of software and hardware systems. We describe the quest to understand and define the notion of algorithm. We start with the ChurchTurin ..."
Abstract

Cited by 19 (9 self)
What is an algorithm? The interest in this foundational problem is not only theoretical; applications include specification, validation and verification of software and hardware systems. We describe the quest to understand and define the notion of algorithm. We start with the Church-Turing thesis and contrast Church's and Turing's approaches, and we finish with some recent investigations.
Pure versus Impure Lisp
, 1996
"... : The aspect of purity versus impurity that we address involves the absence versus presence of mutation: the use of primitives (RPLACA and RPLACD in Lisp, setcar! and setcdr! in Scheme) that change the state of pairs without creating new pairs. It is well known that cyclic list structures can be c ..."
Abstract

Cited by 17 (0 self)
The aspect of purity versus impurity that we address involves the absence versus presence of mutation: the use of primitives (RPLACA and RPLACD in Lisp, set-car! and set-cdr! in Scheme) that change the state of pairs without creating new pairs. It is well known that cyclic list structures can be created by impure programs, but not by pure ones. In this sense, impure Lisp is "more powerful" than pure Lisp. If the inputs and outputs of programs are restricted to be sequences of atomic symbols, however, this difference in computability disappears. We shall show that if the temporal sequence of input and output operations must be maintained (that is, if computations must be "online"), then a difference in complexity remains: for a pure program to do what an impure program does in n steps, O(n log n) steps are sufficient, and in some cases Ω(n log n) steps are necessary. * This research was partially supported by an NSERC Operating Grant. 1. Introduction The programming la...
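The cyclic-structure point can be illustrated outside Lisp (a hedged Python analogue, not the paper's code): rewriting a pair's second field in place, as RPLACD does, can make the structure refer back to itself, whereas purely constructive code can only ever point at pairs that already exist.

```python
# Illustrative sketch: mutation creates a cycle that pure construction cannot.
def cons(car, cdr):
    """A mutable two-field pair, playing the role of a Lisp cons cell."""
    return [car, cdr]

cell = cons(1, None)   # pure construction: cdr can only name existing values
cell[1] = cell         # impure step, analogous to RPLACD: rewire cdr in place

# The structure is now cyclic: following cdr returns to the same cell forever.
assert cell[1] is cell
assert cell[1][1][0] == 1
```

Without the in-place assignment, each `cons` call would have to receive its cdr before the new pair exists, so no chain of pure calls can close a loop.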
What is a "Pointer Machine"?
 Science of Computer Programming
, 1995
"... A "Pointer Machine" is many things. Authors who consider referring to this term are invited to read the following note first. 1 Introduction In a 1992 paper by Galil and the author we referred to a "pointer machine " model of computation. A subsequent survey of related literature has produced over ..."
Abstract

Cited by 16 (1 self)
A "Pointer Machine" is many things. Authors who consider referring to this term are invited to read the following note first. 1 Introduction In a 1992 paper by Galil and the author we referred to a "pointer machine " model of computation. A subsequent survey of related literature has produced over twenty references to papers having to do with "pointer machines", naturally containing a large number of crossreferences. These papers address a range of subjects that range from the model considered in the above paper to some other ones which are barely comparable. The fact that such different notions have been discussed under the heading of "pointer machines" has produced the regrettable effect that cross references are sometimes found to be misleading. Clearly, it is easy for a reader who does not follow a paper carefully to misinterpret its claims when a term that is so illdefined is used. This note is an attempt to rectify the situation. We start with a survey of the different notions...