Results 1–10 of 86
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
Abstract

Cited by 466 (39 self)
A key challenge for neural modeling is to explain how a continuous stream of multimodal input from a rapidly changing environment can be processed by stereotypical recurrent circuits of integrate-and-fire neurons in real time. We propose a new computational model for real-time computing on time-varying input that provides an alternative to paradigms based on Turing machines or attractor neural networks. It does not require a task-dependent construction of neural circuits. Instead it is based on principles of high-dimensional dynamical systems in combination with statistical learning theory, and can be implemented on generic evolved or found recurrent circuitry. It is shown that the inherent transient dynamics of the high-dimensional dynamical system formed by a sufficiently large and heterogeneous neural circuit may serve as universal analog fading memory. Readout neurons can learn to extract in real time from the current state of such a recurrent neural circuit information about current and past inputs that may be needed for diverse tasks. Stable internal states are not required for giving a stable output, since transient internal states can be transformed by readout neurons into stable target outputs due to the high dimensionality of the dynamical system. Our approach is based on a rigorous ...
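The fading-memory readout idea in this abstract can be sketched with a miniature echo-state-style reservoir. The network size, spectral-radius scaling, and the delay-3 reconstruction task below are all illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical miniature "liquid": a fixed random recurrent network whose
# transient state is tapped by a trained linear readout.
N = 100                                     # reservoir neurons (illustrative)
W = rng.normal(size=(N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale so the dynamics fade
w_in = rng.normal(size=N)                   # input weights

def run_reservoir(u):
    """Drive the reservoir with input sequence u; collect the state trajectory."""
    x = np.zeros(N)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
        states.append(x.copy())
    return np.array(states)

# Fading-memory task: reconstruct the input from 3 steps ago.
u = rng.uniform(-1, 1, 500)
X = run_reservoir(u)
target = np.roll(u, 3)                      # target[t] = u[t-3]
X_tr, y_tr = X[100:400], target[100:400]    # discard warm-up, then train
w_out = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]  # linear readout
pred = X[400:] @ w_out
err = np.mean((pred - target[400:]) ** 2)   # test error on held-out steps
```

Only the linear readout is trained (a single least-squares solve); the random recurrent circuit itself is left untouched, which is the point of the framework.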
Two computational primitives for algorithmic self-assembly: Copying and counting
 Nano Letters
, 2005
Abstract

Cited by 70 (6 self)
Copying and counting are useful primitive operations for computation and construction. We have made DNA crystals that copy and crystals that count as they grow. For counting, 16 oligonucleotides assemble into four DNA Wang tiles that subsequently crystallize on a polymeric nucleating scaffold strand, arranging themselves in a binary counting pattern that could serve as a template for a molecular electronic demultiplexing circuit. Although the yield of counting crystals is low, and the per-tile error rate in such crystals is roughly 10%, this work demonstrates the potential of algorithmic self-assembly to create complex nanoscale patterns of technological interest. A subset of the tiles for counting form information-bearing DNA tubes that copy bit strings from layer to layer along their length. The challenge of engineering complex devices at the nanometer scale has been approached from two radically different directions. In top-down synthesis, information about the desired structure is imposed by an external apparatus, as in photolithography. In bottom-up synthesis, structure arises spontaneously due to chemical and physical forces intrinsic to the molecular components themselves. A significant challenge for bottom-up techniques is how to design ...
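The binary counting pattern described here can be mimicked in software: each crystal row is derived from the previous one by purely local increment-with-carry rules, which is the role the four counting tiles play. The row width and all-zero scaffold below are illustrative assumptions, not the paper's tile set:

```python
def next_row(row):
    """Increment a binary row (least-significant bit first),
    propagating the carry locally the way adjacent tiles would."""
    out, carry = [], 1
    for bit in row:
        out.append(bit ^ carry)   # sum bit
        carry = bit & carry       # carry bit passed to the next position
    return out

def grow(width, n_rows):
    """Grow n_rows of a counting 'crystal' from an all-zero scaffold row."""
    row = [0] * width
    rows = [row]
    for _ in range(n_rows - 1):
        row = next_row(row)
        rows.append(row)
    return rows

rows = grow(width=4, n_rows=6)
# Row k reads as the number k in binary (LSB first).
values = [sum(b << i for i, b in enumerate(r)) for r in rows]
```

Each output bit depends only on the bit below it and the carry arriving from the side, so the global counting pattern emerges from strictly local rules, as in the crystal.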
On the Computational Power of Winner-Take-All
, 2000
Abstract

Cited by 52 (9 self)
This article initiates a rigorous theoretical analysis of the computational power of circuits that employ modules for computing winner-take-all. Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in computational brain models, artificial neural networks, and analog VLSI. Our theoretical analysis shows that winner-take-all is a surprisingly powerful computational module in comparison with threshold gates (= McCulloch-Pitts neurons) and sigmoidal gates. We prove an optimal quadratic lower bound for computing winner-take-all in any feedforward circuit consisting of threshold gates. In addition we show that arbitrary continuous functions can be approximated by circuits employing a single soft winner-take-all gate as their only nonlinear operation. Our ...
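A soft winner-take-all gate can be sketched as a normalized competition in a few lines; the exponential form and the sharpness parameter below are illustrative assumptions, not the article's exact definition:

```python
import math

def soft_wta(inputs, sharpness=8.0):
    """Soft winner-take-all: outputs are positive and sum to 1,
    and the largest input dominates as sharpness grows."""
    exps = [math.exp(sharpness * x) for x in inputs]
    total = sum(exps)
    return [e / total for e in exps]

out = soft_wta([0.2, 0.9, 0.4])
winner = out.index(max(out))   # index of the dominating (winning) input
```

As sharpness tends to infinity this approaches a hard winner-take-all (all output mass on the maximal input), which is the competitive stage analyzed in the article.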
Computational aspects of feedback in neural circuits
 PLOS Computational Biology
, 2007
Abstract

Cited by 37 (7 self)
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a non-fading memory. We demonstrate these computational implications of feedback both theoretically, and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints.
Models of computation and languages for embedded system design
 IEE Proceedings on Computers and Digital Techniques
PEBBLE GAMES, PROOF COMPLEXITY AND TIME-SPACE TRADE-OFFS
, 2010
Abstract

Cited by 17 (6 self)
Pebble games were extensively studied in the 1970s and 1980s in a number of different contexts. The last decade has seen a revival of interest in pebble games coming from the field of proof complexity. Pebbling has proven to be a useful tool for studying resolution-based proof systems when comparing the strength of different subsystems, showing bounds on proof space, and establishing size-space trade-offs. This is a survey of research in proof complexity drawing on results and tools from pebbling, with a focus on proof space lower bounds and trade-offs between proof size and proof space.
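The black pebble game behind these results is easy to state: a pebble may be placed on a node of a DAG once all its predecessors carry pebbles (sources need nothing), any pebble may be removed at any time, and the cost of a strategy is the peak number of pebbles simultaneously on the graph. A toy sketch on complete binary trees, using the natural recursive strategy (illustrative only; the survey's results concern far subtler pebbling measures):

```python
def pebble(h, state):
    """Pebble the root of a complete binary tree of height h,
    leaving one pebble on it; state tracks current and peak counts."""
    if h == 0:
        state['cur'] += 1                           # pebble a leaf (a source)
        state['peak'] = max(state['peak'], state['cur'])
        return
    pebble(h - 1, state)                            # pebble left child, keep it
    pebble(h - 1, state)                            # pebble right child
    state['cur'] += 1                               # both children pebbled: pebble this node
    state['peak'] = max(state['peak'], state['cur'])
    state['cur'] -= 2                               # remove the two child pebbles

def pebbling_peak(h):
    state = {'cur': 0, 'peak': 0}
    pebble(h, state)
    return state['peak']

peaks = [pebbling_peak(h) for h in range(5)]   # peak pebbles used per tree height
```

This strategy peaks at h + 2 pebbles for trees of height h ≥ 1; trade-off results of the kind surveyed here ask how such space bounds interact with the number of moves (proof size) when space is restricted.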
General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
, 2003
Abstract

Cited by 16 (0 self)
We survey and summarize the literature on the computational aspects of neural network models by presenting a detailed taxonomy of the various models according to their complexity theoretic characteristics. The criteria of classification include the architecture of the network (feedforward versus recurrent), time model (discrete versus continuous), state type (binary versus analog), weight constraints (symmetric versus asymmetric), network size (finite nets versus infinite families), and computation type (deterministic versus probabilistic), among others. The underlying results concerning the computational power and complexity issues of perceptron, radial basis function, winner-take-all, and spiking neural networks are briefly surveyed, with pointers to the relevant literature. In our survey, we focus mainly on digital computation whose inputs and outputs are binary in nature, although their values are quite often encoded as analog neuron states. We omit the important learning issues.
Evaluation of design strategies for stochastically assembled nanoarray memories
 J. Emerg. Technol. Comput. Syst
, 2005
Abstract

Cited by 15 (7 self)
A key challenge facing nanotechnologies is learning to control uncertainty introduced by stochastic self-assembly. In this article, we explore architectural and manufacturing strategies to cope with this uncertainty when assembling nanoarrays, crossbars composed of two orthogonal sets of parallel nanowires (NWs) that are differentiated at their time of manufacture. NW deposition is a stochastic process and the NW encodings present in an array cannot be known in advance. We explore the reliable construction of memories from stochastically assembled arrays. This is accomplished by describing several families of NW encodings and developing strategies to map external binary addresses onto internal NW encodings using programmable circuitry. We explore a variety of different mapping strategies and develop probabilistic methods of analysis. This is the first article that makes clear the wide range of choices that are available.
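The kind of probabilistic analysis the abstract refers to can be illustrated with a toy model (not the article's encoding families): suppose each of k nanowire slots independently receives one of C equally likely code types during stochastic assembly, and that only wires whose code is unique in the array are individually addressable. A short simulation then estimates the expected number of addressable wires:

```python
import random

random.seed(1)  # deterministic for reproducibility

def addressable(k, C, trials=20000):
    """Monte Carlo estimate of the expected number of wires whose
    randomly assigned code type is unique among the k deposited wires."""
    total = 0
    for _ in range(trials):
        codes = [random.randrange(C) for _ in range(k)]
        counts = {}
        for c in codes:
            counts[c] = counts.get(c, 0) + 1
        total += sum(1 for c in codes if counts[c] == 1)
    return total / trials

est = addressable(k=10, C=100)
# Closed form for this toy model: each wire is unique with
# probability (1 - 1/C)^(k-1), so the expectation is:
exact = 10 * (1 - 1 / 100) ** 9
```

Analyses of this flavor (how many code types are needed so that most deposited wires remain addressable) are what drive the design choices among encoding families and address-mapping strategies.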
A unified model for multicore architectures
 In Proc. 1st International Forum on Next-Generation Multicore/Manycore Technologies
, 2008
Abstract

Cited by 14 (1 self)
With the advent of multicore and many-core architectures, we are facing a problem that is new to parallel computing, namely, the management of hierarchical parallel caches. One major limitation of all earlier models is their inability to model multicore processors with varying degrees of sharing of caches at different levels. We propose a unified memory hierarchy model that addresses these limitations and is an extension of the MHG model developed for a single processor with a multi-memory hierarchy. We demonstrate that our unified framework can be applied to a number of multicore architectures for a variety of applications. In particular, we derive lower bounds on memory traffic between different levels in the hierarchy for financial and scientific computations. We also give multicore algorithms for a financial ...
Network-Oblivious Algorithms
 In Proc. of 21st International Parallel and Distributed Processing Symposium
, 2007
Abstract

Cited by 14 (5 self)
The design of algorithms that can run unchanged yet efficiently on a variety of machines characterized by different degrees of parallelism and communication capabilities is a highly desirable goal. We propose a framework for network-obliviousness based on a model of computation where the only parameter is the problem’s input size. Algorithms are then evaluated on a model with two parameters, capturing parallelism and granularity of communication. We show that, for a wide class of network-oblivious algorithms, optimality in the latter model implies optimality in a block-variant of the Decomposable BSP model, which effectively describes a wide and significant class of parallel platforms. We illustrate our framework by providing optimal network-oblivious algorithms for a few key problems, and also establish some negative results.