Results 1 – 7 of 7
Models of Computation: Exploring the Power of Computing
Abstract

Cited by 76 (6 self)
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter as computational complexity. The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although …
Parallel Algorithmic Techniques for Combinatorial Computation
 Ann. Rev. Comput. Sci.
, 1988
Abstract

Cited by 30 (3 self)
this paper and supplied many helpful comments. This research was supported in part by NSF grants DCR-8511713, CCR-8605353, and CCR-8814977, and by DARPA contract N00039-84-C-0165.
Computational Complexity of an Optical Model of Computation
, 2005
Abstract

Cited by 7 (7 self)
We investigate the computational complexity of an optically inspired model of computation. The model is called the continuous space machine and operates in discrete time steps over a number of two-dimensional complex-valued images of constant size and arbitrary spatial resolution. We define a number of optically inspired complexity measures and data representations for the model. We show the growth of each complexity measure under each of the model's operations. We characterise the power of an important discrete restriction of the model. Parallel time on this variant of the model is shown to correspond, within a polynomial, to sequential space on Turing machines, thus verifying the parallel computation thesis. We also give a characterisation of the class NC. As a result the model has computational power equivalent to that of many well-known parallel models. These characterisations give a method to translate parallel algorithms to optical algorithms and facilitate the application of the complexity theory toolbox to optical computers. Finally we show that another variation on the model is very powerful; …
Lower Bounds on the Computational Power of an Optical Model of Computation
, 2005
Abstract

Cited by 6 (5 self)
We present lower bounds on the computational power of an optical model of computation called the C2CSM.
unknown title
Abstract
carry-completion testing. The best known constant fan-in circuit for addition [10] employs a complicated variant of the carry-lookahead method, has linear size and Ω(log n) depth with no improvement for random inputs. In fact, for any of the above arithmetic operations over random input, the best known constant fan-in circuits have Ω(log n) depth. Recently, Chandra, Fortune and Lipton [11] gave an addition circuit with near-linear size and constant depth, but with Ω(n) nodes of unbounded fan-in, so they were not practical for VLSI applications.

1.2. Our Circuits for Probabilistic Prefix Computation and Arithmetic

The goal of this paper is to develop some fundamental techniques for the design of circuits which take random input. In Section 2 of this paper, we formulate a probabilistic version of the prefix computation problem with random input. This probabilistic prefix computation problem has important practical applications (see Section 3) to arithmetic operations on uniformly distributed random numbers, such as (i) addition or subtraction of two random n-bit binary numbers; (ii) multiplication or division of a random n-bit binary number by a constant.
Design, Analysis and Implementation of an Adder by Ladner and Fisher
, 1994
Abstract
An addition circuit by Ladner and Fisher is analysed. It has size bounded by (8 + 6 · 2^(−k)) n and depth bounded by 2⌈log2 n⌉ + 2k + 2, where 0 ≤ k ≤ ⌈log2 n⌉ and n is the bit length of the input numbers. Moreover, the implementation of an instance of this adder capable of adding two numbers of 8-bit length was timed. The average computing time of its logical design and its layout is 26.8 ns and 94.2 ns, respectively. The layout of this circuit was done on an FPGA by Atmel.

Contents: 1 Introduction (p. 2); 2 Complexity of addition circuits (p. 2): 2.1 Problem statement (p. 2), 2.2 Bounds on the complexity of the addition circuit (p. 3); 3 Design and analysis of the adder by Ladner and Fisher (p. 4): 3.1 Design of the adder (p. 4), 3.2 Parallel prefix computation (p. 6), 3.3 Complexity of Ladner's and Fisher's adder …
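The idea behind such a parallel-prefix adder can be sketched in a few lines: carries are the prefix products of (generate, propagate) pairs under an associative operator, which a Ladner-Fisher circuit evaluates in logarithmic depth. This is a minimal Python illustration, not the implementation analysed in the paper; the function names and the little-endian encoding are assumptions.

```python
# Minimal sketch of the carry computation behind a parallel-prefix
# (Ladner-Fisher style) adder. Names and encoding are illustrative.

def combine(lo, hi):
    """Associative operator on (generate, propagate) pairs; `lo` covers
    the less significant bit positions, `hi` the more significant ones."""
    g1, p1 = lo
    g2, p2 = hi
    return (g2 | (p2 & g1), p1 & p2)

def add(a_bits, b_bits):
    """Add two equal-length little-endian bit lists.
    Carries are prefix products of (generate, propagate) pairs; a real
    Ladner-Fisher circuit evaluates this scan in O(log n) depth."""
    n = len(a_bits)
    gp = [(a & b, a ^ b) for a, b in zip(a_bits, b_bits)]
    carries = [0] * (n + 1)          # carries[i] = carry into bit position i
    acc = (0, 1)                     # identity element of `combine`
    for i in range(n):               # sequential stand-in for the prefix tree
        acc = combine(acc, gp[i])
        carries[i + 1] = acc[0]
    sum_bits = [p ^ c for (_, p), c in zip(gp, carries)]
    return sum_bits + [carries[n]]   # n sum bits plus the carry-out
```

The trade-off parameter k in the quoted bounds corresponds to how aggressively the prefix tree is flattened: smaller depth costs more combine nodes.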
Probabilistic Parallel Prefix Computation
, 1993
Abstract
Given inputs x1, ..., xn, which are independent identically distributed random variables over a domain D, and an associative operation ∘, the probabilistic prefix computation problem is to compute the product x1 ∘ x2 ∘ ... ∘ xn and its n − 1 prefixes. Instances of this problem are finite state transductions on random inputs, the addition or subtraction of two random n-bit binary numbers, and the multiplication or division of a random n-bit binary number by a constant.
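The prefix computation the abstract defines can be illustrated with a short recursive-doubling scan; this is a sketch of the general technique, not the paper's probabilistic construction, and the function name and round structure are mine.

```python
def prefixes(xs, op):
    """All n prefixes x1, x1∘x2, ..., x1∘...∘xn of an associative
    operation `op`, by recursive doubling: O(log n) rounds, where a
    parallel machine would update every position in a round at once."""
    ys = list(xs)
    n = len(ys)
    step = 1
    while step < n:
        # Positions i >= step fold in the partial product ending `step`
        # places earlier; associativity makes this regrouping valid.
        ys = [ys[i] if i < step else op(ys[i - step], ys[i]) for i in range(n)]
        step *= 2
    return ys
```

With integer addition as `op` and random n-bit inputs this is the setting the abstract describes; the same scan over (generate, propagate) pairs yields the carries of binary addition.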