Results 1–10 of 43
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
, 2001
Abstract
Cited by 270 (26 self)
This article shows that, on the basis of this new paradigm, one can now train a stereotypical recurrent network of integrate-and-fire neurons to carry out basically any real-time computation on spike trains, in fact several such real-time computations in parallel.
Two computational primitives for algorithmic self-assembly: Copying and counting
 Nano Letters
, 2005
Abstract
Cited by 53 (5 self)
Copying and counting are useful primitive operations for computation and construction. We have made DNA crystals that copy and crystals that count as they grow. For counting, 16 oligonucleotides assemble into four DNA Wang tiles that subsequently crystallize on a polymeric nucleating scaffold strand, arranging themselves in a binary counting pattern that could serve as a template for a molecular electronic demultiplexing circuit. Although the yield of counting crystals is low, and per-tile error rates in such crystals are roughly 10%, this work demonstrates the potential of algorithmic self-assembly to create complex nanoscale patterns of technological interest. A subset of the tiles for counting form information-bearing DNA tubes that copy bit strings from layer to layer along their length. The challenge of engineering complex devices at the nanometer scale has been approached from two radically different directions. In top-down synthesis, information about the desired structure is imposed by an external apparatus, as in photolithography. In bottom-up synthesis, structure arises spontaneously due to chemical and physical forces intrinsic to the molecular components themselves. A significant challenge for bottom-up techniques is how to design
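The binary counting pattern described above can be sketched in a few lines (my own illustration in Python, not the DNA tile implementation): each new crystal layer encodes the previous layer's bit string incremented by one, computed by the kind of local ripple-carry rule a small tile set can realize.

```python
def next_row(bits):
    """Increment a little-endian bit list via a ripple carry from bit 0.

    A carry past the last bit is dropped, i.e. the counter wraps,
    matching a fixed-width counting pattern.
    """
    out, carry = [], 1
    for b in bits:
        s = b + carry
        out.append(s % 2)
        carry = s // 2
    return out

# Grow five layers from an all-zero seed row, as a counting crystal would.
rows = [[0, 0, 0, 0]]
for _ in range(5):
    rows.append(next_row(rows[-1]))
```

Layer t then reads out the integer t in binary, which is what makes the grown pattern usable as a demultiplexer template.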
On the Computational Power of Winner-Take-All
, 2000
Abstract
Cited by 33 (7 self)
This article initiates a rigorous theoretical analysis of the computational power of circuits that employ modules for computing winner-take-all. Computational models that involve competitive stages have so far been neglected in computational complexity theory, although they are widely used in computational brain models, artificial neural networks, and analog VLSI. Our theoretical analysis shows that winner-take-all is a surprisingly powerful computational module in comparison with threshold gates (= McCulloch-Pitts neurons) and sigmoidal gates. We prove an optimal quadratic lower bound for computing winner-take-all in any feedforward circuit consisting of threshold gates. In addition, we show that arbitrary continuous functions can be approximated by circuits employing a single soft winner-take-all gate as their only nonlinear operation. Our
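The two competitive primitives contrasted above can be made concrete with a minimal sketch (my own illustration, not the paper's circuit model): a hard winner-take-all gate outputs 1 only at the position of the largest input, while a soft winner-take-all maps the inputs to a normalized competition, here realized by the common softmax choice.

```python
import math

def hard_wta(xs):
    """Hard winner-take-all: 1 at the position of the largest input, else 0."""
    k = max(range(len(xs)), key=lambda i: xs[i])
    return [1 if i == k else 0 for i in range(len(xs))]

def soft_wta(xs, beta=1.0):
    """Soft winner-take-all via softmax; larger beta sharpens the competition."""
    es = [math.exp(beta * x) for x in xs]
    z = sum(es)
    return [e / z for e in es]
```

As beta grows, `soft_wta` concentrates nearly all of the output mass on the winner, so the hard gate is the limiting case of the soft one.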
Models of Computation and Languages for Embedded System Design
, 2005
Abstract
Cited by 22 (1 self)
We review Models of Computation (MoC) and organize them with respect to the time abstraction they use. We distinguish between continuous time, discrete time, synchronous and untimed MoCs. System level models serve a variety of objectives with partially contradicting requirements. Consequently, we argue that different MoCs are necessary for the various tasks and phases in the design of an embedded system. Moreover, different MoCs have to be integrated to provide a coherent system modeling and analysis environment. We discuss the relation between some popular languages and the reviewed MoCs and find that a given MoC is offered by many languages and a single language can support multiple MoCs. We contend that which abstraction levels and which primitive operators are provided in a language is important for the quality of tools and overall design productivity. However, we also observe that there are various flexible ways to do this, e.g. by way of heterogeneous frameworks, coordination languages and embedding of different MoCs in the same language.
Computational aspects of feedback in neural circuits
 PLOS Computational Biology
, 2007
Abstract
Cited by 17 (4 self)
It has previously been shown that generic cortical microcircuit models can perform complex real-time computations on continuous input streams, provided that these computations can be carried out with a rapidly fading memory. We investigate the computational capability of such circuits in the more realistic case where not only readout neurons, but in addition a few neurons within the circuit, have been trained for specific tasks. This is essentially equivalent to the case where the output of trained readout neurons is fed back into the circuit. We show that this new model overcomes the limitation of a rapidly fading memory. In fact, we prove that in the idealized case without noise it can carry out any conceivable digital or analog computation on time-varying inputs. But even with noise, the resulting computational model can perform a large class of biologically relevant real-time computations that require a non-fading memory. We demonstrate these computational implications of feedback both theoretically and through computer simulations of detailed cortical microcircuit models that are subject to noise and have complex inherent dynamics. We show that the application of simple learning procedures (such as linear regression or perceptron learning) to a few neurons enables such circuits to represent time over behaviorally relevant long time spans, to integrate evidence from incoming spike trains over longer periods of time, and to process new information contained in such spike trains in diverse ways according to the current internal state of the circuit. In particular, we show that such generic cortical microcircuits with feedback provide a new model for working memory that is consistent with a large set of biological constraints.
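The trained-readout idea underlying this line of work can be illustrated with a toy echo-state-style sketch (my own NumPy construction, not the authors' cortical microcircuit model): a fixed random recurrent network turns an input stream into high-dimensional state trajectories, and only a linear readout is trained, here by ordinary least squares, to recover a delayed copy of the input from the fading memory of the circuit.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 500                             # circuit size, number of time steps
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # spectral radius < 1: fading memory
w_in = rng.normal(0.0, 0.5, N)

u = rng.uniform(-1.0, 1.0, T)              # random input stream
x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])       # untrained recurrent dynamics
    states[t] = x

delay = 3                                  # task: recall the input 3 steps back
target = np.roll(u, delay)
w_out, *_ = np.linalg.lstsq(states[delay:], target[delay:], rcond=None)
pred = states[delay:] @ w_out
```

Only `w_out` is learned; the recurrent weights stay fixed, which is exactly the setting that the feedback mechanism in the paper extends beyond fading memory.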
Evaluation of design strategies for stochastically assembled nanoarray memories
 J. Emerg. Technol. Comput. Syst
, 2005
Abstract
Cited by 13 (8 self)
A key challenge facing nanotechnologies is learning to control uncertainty introduced by stochastic self-assembly. In this article, we explore architectural and manufacturing strategies to cope with this uncertainty when assembling nanoarrays, crossbars composed of two orthogonal sets of parallel nanowires (NWs) that are differentiated at their time of manufacture. NW deposition is a stochastic process and the NW encodings present in an array cannot be known in advance. We explore the reliable construction of memories from stochastically assembled arrays. This is accomplished by describing several families of NW encodings and developing strategies to map external binary addresses onto internal NW encodings using programmable circuitry. We explore a variety of different mapping strategies and develop probabilistic methods of analysis. This is the first article that makes clear the wide range of choices that are available.
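The address-mapping problem described above can be sketched as follows (hypothetical codeword lengths and array sizes of my choosing, not the article's encoding families): nanowires carry randomly drawn codewords, so the mapping circuitry must discover which distinct codewords actually landed and assign dense external addresses to them.

```python
import random

def sample_array(codebook, n_wires, seed=0):
    """Stochastic assembly: draw n_wires codewords with replacement."""
    rng = random.Random(seed)
    return [rng.choice(codebook) for _ in range(n_wires)]

def build_address_map(wires):
    """Assign consecutive external addresses to the distinct codes present."""
    present = sorted(set(wires))
    return {ext: code for ext, code in enumerate(present)}

codebook = [format(i, "04b") for i in range(16)]   # 4-bit NW encodings
wires = sample_array(codebook, n_wires=10)
addr_map = build_address_map(wires)
```

Because deposition is random, some codewords repeat and others are absent, which is why the usable address space, and hence memory yield, is itself a random variable to be analyzed probabilistically.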
A unified model for multicore architectures
 In Proc. 1st International Forum on Next-Generation Multicore/Manycore Technologies
, 2008
Abstract
Cited by 9 (1 self)
With the advent of multicore and many-core architectures, we are facing a problem that is new to parallel computing, namely, the management of hierarchical parallel caches. One major limitation of all earlier models is their inability to model multicore processors with varying degrees of sharing of caches at different levels. We propose a unified memory hierarchy model that addresses these limitations and is an extension of the MHG model developed for a single processor with multimemory hierarchy. We demonstrate that our unified framework can be applied to a number of multicore architectures for a variety of applications. In particular, we derive lower bounds on memory traffic between different levels in the hierarchy for financial and scientific computations. We also give a multicore algorithm for a financial
Foundations for a Circuit Complexity Theory of Sensory Processing
 in: Advances in Neural Information Processing Systems 13, NIPS 2000 (The
, 2001
Abstract
Cited by 8 (2 self)
We introduce total wire length as a salient complexity measure for an analysis of the circuit complexity of sensory processing in biological neural systems and neuromorphic engineering. This new complexity measure is applied to a set of basic computational problems that apparently need to be solved by circuits for translation- and scale-invariant sensory processing.
On the computational power of circuits of spiking neurons
 J. of Physiology (Paris
, 2003
Abstract
Cited by 8 (0 self)
It is quite difficult to construct circuits of spiking neurons that can carry out complex computational tasks. On the other hand, even randomly connected circuits of spiking neurons can in principle be used for complex computational tasks such as time-warp invariant speech recognition. This is possible because such circuits have an inherent tendency to integrate incoming information in such a way that simple linear readouts can be trained to transform the current circuit activity into the target output for a very large number of computational tasks. Consequently, we propose to analyze circuits of spiking neurons in terms of their roles as analog fading memory and nonlinear kernels, rather than as implementations of specific computational operations and algorithms. This article is a sequel to [31], and contains new results about the performance of generic neural microcircuit models for the recognition of speech that is subject to linear
Optimum Binary Search Trees On The Hierarchical Memory Model
, 2001
Abstract
Cited by 8 (1 self)
The Hierarchical Memory Model (HMM) of computation is similar to the standard Random Access Machine (RAM) model except that the HMM has a nonuniform memory organized in a hierarchy of levels numbered 1 through h. The cost of accessing a memory location increases with the level number, and accesses to memory locations belonging to the same level cost the same. Formally, the cost of a single access to the memory location at address a is given by μ(a), where μ : N → N is the memory cost function, and the h distinct values of μ model the different levels of the memory hierarchy. We study the problem of constructing and storing a binary search tree (BST) of minimum cost, over a set of keys, with probabilities for successful and unsuccessful searches, on the HMM with an arbitrary number of memory levels, and for the special case h = 2. While the problem of constructing optimum binary search trees has been well studied for the standard RAM model, the additional parameter μ for the HMM inc...
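The HMM cost model can be made concrete with a small sketch (illustrative parameters of my choosing, for the two-level special case h = 2): the first m addresses cost 1 per access and all others cost c > 1, so the cost of a root-to-key search path in a stored BST is the sum of the cost function over the addresses visited.

```python
def mu(a, m=4, c=10):
    """Two-level (h = 2) memory cost function: level 1 holds addresses 0..m-1,
    costing 1 per access; every other address lies in level 2 and costs c."""
    return 1 if a < m else c

def path_cost(addresses, m=4, c=10):
    """Total HMM cost of touching the given sequence of addresses,
    e.g. the nodes visited on a root-to-key search path in a stored BST."""
    return sum(mu(a, m, c) for a in addresses)
```

Under this model, two layouts of the same tree can differ sharply in expected search cost depending on which nodes are placed in the cheap level, which is what makes the placement itself part of the optimization.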