Results 11–20 of 277
Proofreading tile sets: Error correction for algorithmic self-assembly
 In DNA Based Computers 9, volume 2943 of LNCS
, 2004
Abstract

Cited by 48 (10 self)
Abstract. For robust molecular implementation of tile-based algorithmic self-assembly, methods for reducing errors must be developed. Previous studies suggested that by control of physical conditions, such as temperature and the concentration of tiles, errors (ε) can be reduced to an arbitrarily low rate – but at the cost of reduced speed (r) for the self-assembly process. For tile sets directly implementing blocked cellular automata, it was shown that r ≈ βε² was optimal. Here, we show that an improved construction, which we refer to as proofreading tile sets, can in principle exploit the cooperativity of tile assembly reactions to dramatically improve the scaling behavior to r ≈ βε and better. This suggests that existing DNA-based molecular tile approaches may be improved to produce macroscopic algorithmic crystals with few errors. Generalizations and limitations of the proofreading tile set construction are discussed.
Compact Error-Resilient Computational DNA Tiling Assemblies
Abstract

Cited by 48 (10 self)
The self-assembly process for bottom-up construction of nanostructures is of key importance to the emergence of the new scientific discipline of nanoscience. For example, the self-assembly of DNA tile nanostructures into 2D and 3D lattices can be used to perform parallel universal computation and to manufacture patterned nanostructures from smaller unit components known as DNA tiles. However, self-assemblies at the molecular scale are prone to a quite high rate of error, and the key barrier to large-scale experimental implementation of DNA tiling is the high error rate in the self-assembly process. One major challenge to nanostructure self-assembly is to eliminate or limit these errors. The goals of this paper are to develop theoretical methods for compact error-resilient self-assembly and to analyze these by stochastic analysis and computer simulation (at a future date we also intend to demonstrate these error-resilient self-assembly methods by a series of laboratory experiments). Prior work by Winfree provided an innovative approach to decreasing tiling self-assembly errors without decreasing the intrinsic error rate ε of assembling a single tile; however, his technique resulted in a final structure that is four times the size of the original one. This paper describes various compact error-resilient tiling methods that do not increase the size of the tiling assembly. These methods apply to assembly of Boolean arrays which perform input-sensitive computations (among other computations). We first describe an error-resilient tiling using 2-way overlay redundancy such that a single pad mismatch between a tile and its immediate neighbor forces at least one further pad mismatch between a pair of adjacent tiles in the neighborhood of this tile. This drops the error rate from ε to appr...
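The error-squaring effect of a forced extra mismatch can be illustrated with a toy Monte Carlo model (a sketch only, not the paper's kinetic model; the function names and the independence assumption are ours): an incorrect block survives only if two independent pad mismatches, each occurring with probability ε, both happen, so the effective error rate drops from about ε to about ε².

```python
import random

def single_tile_error(eps, rng):
    """Toy model: a single pad mismatch occurs with probability eps."""
    return rng.random() < eps

def redundant_assembly_error(eps, rng):
    """Under the 2-way redundancy idea (toy version), a stable incorrect
    block requires two independent pad mismatches rather than one."""
    return single_tile_error(eps, rng) and single_tile_error(eps, rng)

rng = random.Random(2)
eps = 0.05
trials = 100_000
# Estimate both error rates empirically.
plain = sum(single_tile_error(eps, rng) for _ in range(trials)) / trials
redundant = sum(redundant_assembly_error(eps, rng) for _ in range(trials)) / trials
# plain is close to eps; redundant is close to eps**2, i.e. much smaller.
```

The point of the sketch is only the scaling: with ε = 0.05, the redundant rate lands near ε² = 0.0025 rather than 0.05.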
A System Architecture Solution for Unreliable Nanoelectronic Devices
 IEEE TRANSACTIONS ON NANOTECHNOLOGY
, 2002
Abstract

Cited by 46 (1 self)
Due to the manufacturing process, the shrinking of electronic devices will inevitably introduce a growing number of defects and even make these devices more sensitive to external influences. It is, therefore, likely that the emerging nanometer-scale devices will eventually suffer from more errors than classical silicon devices in large-scale integrated circuits. In order to make systems based on nanometer-scale devices reliable, the design of fault-tolerant architectures will be necessary. Initiated by von Neumann, the NAND multiplexing technique, based on a massive duplication of imperfect devices and randomized imperfect interconnects, was studied in the past using an extremely high degree of redundancy. In this paper, this NAND multiplexing is extended to a rather low degree of redundancy, and the stochastic Markov nature at the heart of the system is discovered and studied, leading to a comprehensive fault-tolerant theory. A system architecture based on NAND multiplexing is investigated by studying the problem of the random background charges in single-electron tunneling (SET) circuits. Our evaluation shows that it might be a system solution for an ultra-large integration of highly unreliable nanometer-scale devices.
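Von Neumann's scheme can be sketched with a small Monte Carlo simulation (a toy model under simplifying assumptions of independent gate faults and ideal wiring; the function names are ours, and this is not the architecture proposed in the paper): each logical signal is carried by a bundle of N wires, an executive stage NANDs randomly paired lines, and a restorative stage of two self-NAND stages pushes a degraded bundle back toward a clean logic level, so the fraction of erroneous lines stays bounded rather than accumulating.

```python
import random

def nand_multiplex_stage(bundle_a, bundle_b, p_fault, rng):
    """One multiplexed NAND stage: randomly permute one input bundle,
    NAND line pairs, and flip each gate's output with probability p_fault."""
    permuted = bundle_b[:]
    rng.shuffle(permuted)
    out = []
    for a, b in zip(bundle_a, permuted):
        y = 1 - (a & b)          # ideal NAND
        if rng.random() < p_fault:
            y ^= 1               # faulty gate flips its output
        out.append(y)
    return out

def restorative_unit(bundle, p_fault, rng):
    """Two NAND stages with the bundle fed to both inputs act as a
    restorative unit (NAND(x, x) = NOT x, applied twice)."""
    inv = nand_multiplex_stage(bundle, bundle, p_fault, rng)
    return nand_multiplex_stage(inv, inv, p_fault, rng)

rng = random.Random(0)
N, p = 1000, 0.005
# Both input bundles carry logical 1, so the correct NAND output is all 0s.
a = [1] * N
b = [1] * N
out = nand_multiplex_stage(a, b, p, rng)   # executive stage
out = restorative_unit(out, p, rng)        # restorative stage
error_fraction = sum(out) / N              # fraction of wrongly stimulated lines
```

With a small per-gate fault probability, the error fraction after the stage remains a small multiple of p instead of growing through the circuit, which is the qualitative behavior the multiplexing analysis formalizes.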
Reliable Cellular Automata With Self-Organization
 JOURNAL OF STATISTICAL PHYSICS
, 1998
Abstract

Cited by 39 (1 self)
In a probabilistic cellular automaton in which all local transitions have positive probability, the problem of keeping a bit of information indefinitely is nontrivial, even in an infinite automaton. Still, there is a solution in 2 dimensions, and this solution can be used to construct a simple 3-dimensional discrete-time universal fault-tolerant cellular automaton. This technique does not help much to solve the following problems: remembering a bit of information in 1 dimension; computing in dimensions lower than 3; computing in any dimension with nonsynchronized transitions. Our more ...
Coding for Interactive Communication
 IN PROCEEDINGS OF THE 25TH ANNUAL SYMPOSIUM ON THEORY OF COMPUTING
, 1996
Abstract

Cited by 37 (4 self)
Let the input to a computation problem be split between two processors connected by a communication link; and let an interactive protocol π be known by which, on any input, the processors can solve the problem using no more than T transmissions of bits between them, provided the channel is noiseless in each direction. We study the following question: if in fact the channel is noisy, what is the effect upon the number of transmissions needed in order to solve the computation problem reliably? Technologically this concern is motivated by the increasing importance of communication as a resource in computing, and by the tradeoff in communications equipment between bandwidth, reliability, and expense. We treat a model with random channel noise. We describe a deterministic method for simulating noiseless-channel protocols on noisy channels, with only a constant slowdown. This is an analog for general interactive protocols of Shannon's coding theorem, which deals only with data transmission, ...
Evolution of Corridor Following Behavior in a Noisy World
, 1994
Abstract

Cited by 34 (1 self)
Robust behavioral control programs for a simulated 2D vehicle can be constructed by artificial evolution. Corridor following serves here as an example of a behavior to be obtained through evolution. A controller's fitness is judged by its ability to steer its vehicle along a collision-free path through a simple corridor environment. The controller's inputs are noisy range sensors and its output is a noisy steering mechanism. Evolution determines the quantity and placement of sensors. Noise in fitness tests discourages brittle strategies and leads to the evolution of robust, noise-tolerant controllers. Genetic Programming is used to model evolution; the controllers are represented as deterministic computer programs.
Wavelets through a Looking Glass. The World of the Spectrum
, 2001
Abstract

Cited by 30 (20 self)
Computation in Noisy Radio Networks
 in Proc. 9th Ann. ACM-SIAM Symp. on Discrete Algorithms
Abstract

Cited by 30 (0 self)
In this paper we examine noisy radio (broadcast) networks in which every bit transmitted has a certain probability of being flipped. Each processor has some initial input bit, and the goal is to compute a function of the initial inputs. In this model we show a protocol to compute any threshold function using only a linear number of transmissions.

1 Introduction

The influence of noise (or faults) on the complexity of computation was studied in many contexts. In particular, people were interested in random noise. In a typical such scenario, it is assumed that the outcome of each operation is noisy with some fixed probability p and that all the faults are independent. Usually, if t is the number of operations performed by the computation, then by repeating each operation O(log t) times and taking the majority of the results, one can ensure a constant probability of error at the cost of O(t log t) operations. It is desirable, however, to obtain a cost of O(t) (i.e., increase only by a constant fa...
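The repetition-and-majority argument above can be made concrete with a short simulation (a generic sketch, not the protocol from the paper; the channel model, parameters, and function names are illustrative): each of t bits is sent O(log t) times over a binary symmetric channel and decoded by a majority vote over the received copies.

```python
import math
import random
from collections import Counter

def noisy_send(bit, p, rng):
    """Transmit one bit over a binary symmetric channel: flipped with prob p."""
    return bit ^ (1 if rng.random() < p else 0)

def send_with_majority(bit, p, reps, rng):
    """Repeat the transmission and decode by majority of the received copies."""
    received = [noisy_send(bit, p, rng) for _ in range(reps)]
    return Counter(received).most_common(1)[0][0]

rng = random.Random(1)
p = 0.2                                   # per-transmission flip probability
t = 1000                                  # number of bits to protect
reps = 2 * math.ceil(math.log2(t)) + 1    # O(log t) repetitions, odd for a clean majority
message = [rng.randint(0, 1) for _ in range(t)]
decoded = [send_with_majority(b, p, reps, rng) for b in message]
errors = sum(d != b for d, b in zip(decoded, message))
# For comparison: unprotected transmission corrupts about p * t bits.
baseline_errors = sum(noisy_send(b, p, rng) != b for b in message)
```

With O(log t) repetitions the per-bit error probability drops enough that the whole t-bit message survives with constant failure probability, at the O(t log t) total cost the paragraph describes; the paper's contribution is avoiding that logarithmic factor.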
Fault Tolerance Techniques for Wireless Ad Hoc Sensor Networks
 in IEEE Sensors
, 2002
Abstract

Cited by 29 (4 self)
An embedded sensor network is a system of nodes, each equipped with a certain amount of sensing, actuating, computation, communication, and storage resources. One of the key prerequisites for effective and efficient embedded sensor systems is the development of low-cost, low-overhead, highly resilient fault-tolerance techniques. Cost sensitivity implies that traditional double and triple redundancies are not adequate solutions for embedded sensor systems due to their high cost and high energy consumption.