Results 1-10 of 490
The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain
Psychological Review, 1958
Abstract

Cited by 1144 (0 self)
If we are eventually to understand the capability of higher organisms for perceptual recognition, generalization, recall, and thinking, we must first have answers to three fundamental questions: 1. How is information about the physical world sensed, or detected, by the biological system? 2. In what form is information stored, or remembered? 3. How does information contained in storage, or in memory, influence recognition and behavior? The first of these questions is in the ...
The Transaction Concept: Virtues and Limitations
1981
Abstract

Cited by 300 (0 self)
A transaction is a transformation of state which has the properties of atomicity (all or nothing), durability (effects survive failures) and consistency (a correct transformation). The transaction concept is key to the structuring of data management applications. The concept may have applicability to programming systems in general. This paper restates the transaction concepts and attempts to put several implementation approaches in perspective. It then describes some areas which require further study: (1) the integration of the transaction concept with the notion of abstract data type, (2) some techniques to allow transactions to be composed of sub ...
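The "all or nothing" atomicity property this abstract defines can be sketched with Python's standard-library sqlite3 module; the table, accounts, and amounts below are hypothetical, and this is only a minimal illustration of the semantics, not the paper's own mechanism.

```python
# Atomicity sketch: either both the debit and the credit take effect, or
# neither does. The accounts table here is purely illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 0)")
conn.commit()

try:
    with conn:  # opens a transaction: commit on success, rollback on exception
        conn.execute(
            "UPDATE accounts SET balance = balance - 50 WHERE name = 'alice'")
        # Simulate a crash before the matching credit is applied:
        raise RuntimeError("simulated failure mid-transaction")
except RuntimeError:
    pass

# The partial debit was rolled back: the state is either a correct
# transformation or unchanged, never something in between.
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 0}
```

The `with conn:` block is what makes the transformation atomic here: sqlite3's connection context manager commits on normal exit and rolls back when an exception escapes.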
Why Do Computers Stop And What Can Be Done About It?
1985
Abstract

Cited by 264 (0 self)
An analysis of the failure statistics of a commercially available fault-tolerant system shows that administration and software are the major contributors to failure. Various approaches to software fault-tolerance are then discussed, notably process-pairs, transactions and reliable storage. It is pointed out that faults in production software are often soft (transient) and that a transaction mechanism combined with persistent process-pairs provides fault-tolerant execution, the key to software fault-tolerance.
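The abstract's observation that production faults are often soft (transient) is why re-execution helps: an operation that failed once will usually succeed when retried from a clean state. The sketch below is a crude stand-in for restart via process-pairs; the function names and failure rate are hypothetical.

```python
# Retrying a transiently failing operation. The 50% failure rate and the
# names are illustrative only; the seed makes the run deterministic.
import random

random.seed(1)

def flaky_operation():
    """Fails transiently about half the time, like a soft (Heisenbug) fault."""
    if random.random() < 0.5:
        raise IOError("transient fault")
    return "ok"

def run_with_retry(op, attempts=5):
    """Re-execute from scratch on failure, a toy analogue of a process-pair
    takeover: a soft fault is unlikely to recur on a fresh execution."""
    for _ in range(attempts):
        try:
            return op()
        except IOError:
            continue
    raise RuntimeError("fault persisted across retries; likely a hard fault")

result = run_with_retry(flaky_operation)
print(result)  # 'ok' (first attempt fails under this seed, second succeeds)
```

If the fault were hard (deterministic), every retry would fail the same way, which is exactly why the distinction between soft and hard faults matters for this style of recovery.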
Fault-tolerant quantum computation
In Proc. 37th FOCS, 1996
Abstract

Cited by 264 (5 self)
It has recently been realized that use of the properties of quantum mechanics might speed up certain computations dramatically. Interest in quantum computation has since been growing. One of the main difficulties in realizing quantum computation is that decoherence tends to destroy the information in a superposition of states in a quantum computer, making long computations impossible. A further difficulty is that inaccuracies in quantum state transformations throughout the computation accumulate, rendering long computations unreliable. However, these obstacles may not be as formidable as originally believed. For any quantum computation with t gates, we show how to build a polynomial size quantum circuit that tolerates O(1/log^c t) amounts of inaccuracy and decoherence per gate, for some constant c; the previous bound was O(1/t). We do this by showing that operations can be performed on quantum data encoded by quantum error-correcting codes without decoding this data.
Reliable quantum computers
Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 1998
Abstract

Cited by 165 (3 self)
The new field of quantum error correction has developed spectacularly since its origin less than two years ago. Encoded quantum information can be protected from errors that arise due to uncontrolled interactions with the environment. Recovery from errors can work effectively even if occasional mistakes occur during the recovery procedure. Furthermore, encoded quantum information can be processed without serious propagation of errors. Hence, an arbitrarily long quantum computation can be performed reliably, provided that the average probability of error per quantum gate is less than a certain critical value, the accuracy threshold. A quantum computer storing about 10^6 qubits, with a probability of error per quantum gate of order 10^-6, would be a formidable factoring engine. Even a smaller, less-accurate quantum computer would be able to perform many useful tasks. This paper is based on a talk presented at the ITP Conference on Quantum Coherence ...
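A back-of-the-envelope calculation shows why a threshold matters at all: without error correction, a computation of N gates, each failing independently with probability p, succeeds with probability (1 - p)^N ≈ e^(-pN). The gate count below is a hypothetical illustration, not a figure from the paper.

```python
# Unencoded success probability for N independent gates at per-gate error p.
import math

p = 1e-6       # per-gate error probability, the order quoted in the abstract
N = 10**6      # a hypothetical million-gate computation
success = (1 - p) ** N
print(round(success, 3))  # ~0.368, i.e. about e^-1
```

Even at a per-gate error of 10^-6, a million-gate run succeeds only about 37% of the time unencoded, and the probability decays exponentially with further gates; error-corrected encoding below the accuracy threshold is what makes arbitrarily long computations reliable.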
Paxos made live: an engineering perspective
In Proc. of PODC, 2007
Abstract

Cited by 151 (0 self)
We describe our experience building a fault-tolerant database using the Paxos consensus algorithm. Despite the existing literature in the field, building such a database proved to be non-trivial. We describe selected algorithmic and engineering problems encountered, and the solutions we found for them. Our measurements indicate that we have built a competitive system.
Closure and Convergence: A Foundation of Fault-Tolerant Computing
IEEE Transactions on Software Engineering, 1993
Abstract

Cited by 135 (30 self)
We give a formal definition of what it means for a system to "tolerate" a class of "faults". The definition consists of two conditions: One, if a fault occurs when the system state is within a set of "legal" states, the resulting state is within some larger set and, if faults continue occurring, the system state remains within that larger set (Closure). And two, if faults stop occurring, the system eventually reaches a state within the legal set (Convergence). We demonstrate the applicability of our definition for specifying and verifying the fault-tolerance properties of a variety of digital and computer systems. Further, using the definition, we obtain a simple classification of fault-tolerant systems and discuss methods for their systematic design.
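The closure/convergence definition in this abstract can be made concrete with a toy system; the state sets and transition rules below are invented for illustration, not taken from the paper.

```python
# Toy instance of closure and convergence. Legal states: counters in [0, 10).
# Faults may perturb the state, but only within the larger span [0, 100)
# (closure); once faults stop, each fault-free step halves the excess, so the
# system re-enters the legal set (convergence).
LEGAL = range(0, 10)
FAULT_SPAN = range(0, 100)

def fault(state):
    """A fault from the tolerated class stays inside the larger set."""
    return min(state + 50, max(FAULT_SPAN))

def step(state):
    """One fault-free program transition; drives the state toward LEGAL."""
    return state // 2 if state not in LEGAL else state

state = 5                    # start in a legal state
state = fault(state)         # perturbed to 55: outside LEGAL, inside the span
assert state in FAULT_SPAN   # closure holds under the fault
while state not in LEGAL:    # faults have stopped; take program steps
    state = step(state)
assert state in LEGAL        # convergence: 55 -> 27 -> 13 -> 6
print(state)                 # 6
```

The two assertions mirror the two conditions of the definition: the fault never escapes the larger set, and fault-free execution eventually restores legality.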
Special Purpose Parallel Computing
Lectures on Parallel Computation, 1993
Abstract

Cited by 82 (6 self)
A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of various aspects of this work. A long, but by no means complete, bibliography is given.
1. Introduction
Turing [365] demonstrated that, in principle, a single general purpose sequential machine could be designed which would be capable of efficiently performing any computation which could be performed by a special purpose sequential machine. The importance of this universality result for subsequent practical developments in computing cannot be overstated. It showed that, for a given computational problem, the additional efficiency advantages which could be gained by designing a special purpose sequential machine for that problem would not be great. Around 1944, von Neumann produced a proposal [66, 389] for a general purpose stored-program sequential computer which captured the fundamental principles of ...