Results 1–10 of 18
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization
 In
, 1993
Abstract

Cited by 35 (7 self)
A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error. A substantially faster learning speed is hence allowed. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks.

1 Background and Motivation

We address general optimization tasks that require finding a set of constant parameter values p_i that minimize a given error functional E(p). For supervised learning, the error functional consists of some quantitativ...
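The perturbative scheme this abstract describes can be sketched in software: perturb all parameters at once along a random direction, measure the resulting change in error, and move each parameter against its perturbation in proportion to that change. This is a minimal sketch in the Dembo-Kailath style; the paper's modified update rule is not reproduced here, and `error_fn`, the step sizes, and the quadratic example are all illustrative assumptions.

```python
import numpy as np

def stochastic_error_descent(error_fn, p, lr=0.1, sigma=0.01, steps=200, seed=0):
    """Model-free error descent: estimate the gradient by correlating a
    random parameter perturbation with the observed change in error.
    No knowledge of the network's internal structure is used -- only
    two evaluations of the error functional per step."""
    rng = np.random.default_rng(seed)
    p = p.astype(float).copy()
    for _ in range(steps):
        pi = sigma * rng.choice([-1.0, 1.0], size=p.shape)  # random perturbation
        delta_e = error_fn(p + pi) - error_fn(p)            # observed error change
        # Each parameter moves against its perturbation when the error rose:
        p -= lr * delta_e * pi / sigma**2
    return p

# Hypothetical quadratic error functional, purely for illustration.
target = np.array([1.0, -2.0, 0.5])
error = lambda p: float(np.sum((p - target) ** 2))
p_opt = stochastic_error_descent(error, np.zeros(3))
```

Because the update correlates a scalar error change with the full perturbation vector, it parallelizes over parameters while needing only global error measurements, which is what makes it attractive for analog hardware.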
A stochastic neural architecture that exploits dynamically reconfigurable FPGAs
 In IEEE Workshop on FPGAs for Custom Computing Machines
, 1993
Abstract

Cited by 20 (4 self)
In this paper we present an expandable digital architecture that provides an efficient real-time implementation platform for large neural networks. The architecture makes heavy use of the techniques of bit-serial stochastic computing to carry out the large number of required parallel synaptic calculations. In this design all real-valued quantities are encoded onto stochastic bit streams in which the '1' density is proportional to the given quantity. The actual digital circuitry is simple and highly regular, thus allowing very efficient space usage of fine-grained FPGAs. Another feature of the design is that the large number of weights required by a neural network are generated by circuitry tailored to each of their specific values, thus saving valuable cells. Whenever one of these values is required to change, the appropriate circuitry must be dynamically reconfigured. This may always be achieved in a fixed and minimum number of cells for a given bit stream resolution.

1 Introduction ...
A Device For Generating Binary Sequences For Stochastic Computing
 Electronics Letters
, 1992
Abstract

Cited by 13 (5 self)
A novel technique is presented for the generation of high-speed stochastic bit streams in which the '1' density is proportional to a given value. Bit streams of this type are particularly useful in bit-serial stochastic computing systems, such as digital stochastic neural networks. The proposed circuitry is highly suitable for VLSI fabrication.

INTRODUCTION

The use of stochastic bit streams allows a dramatic simplification of the circuitry required to implement many devices [1], since the multiplication of two values may be performed by computing the bitwise conjunction of two corresponding streams. The highly pipelined digital design described in this paper was developed for use with high-speed digital stochastic neural networks, as described in [2, 3, 4]. The proposed stochastic bit stream generator is highly pipelined and thus potentially extremely fast. The individual pipeline stages, known as modulators, are simple, and the number used defines the overall resolution of the generated bit strea...
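The core trick named here, multiplying two values by ANDing their bit streams, is easy to model in software. The sketch below encodes values in [0, 1] as Bernoulli bit streams and checks that the AND of two independent streams has a '1' density near the product. This is only an illustrative software model: the paper generates its streams with a pipeline of hardware modulator stages, not a software RNG, and the stream length `n` is an arbitrary choice.

```python
import random

def to_stream(x, n, rng):
    """Encode x in [0, 1] as a stochastic bit stream whose '1' density is x.
    (Software stand-in for the paper's hardware bit-stream generator.)"""
    return [1 if rng.random() < x else 0 for _ in range(n)]

def density(stream):
    """Fraction of '1' bits, i.e. the value the stream encodes."""
    return sum(stream) / len(stream)

rng = random.Random(42)
n = 100_000
a, b = 0.6, 0.5
sa, sb = to_stream(a, n, rng), to_stream(b, n, rng)
# Multiplication is just the bitwise AND of two independent streams:
product = [x & y for x, y in zip(sa, sb)]
```

The AND gate works because, for independent streams, P(bit_a = 1 and bit_b = 1) = a * b; the cost is that accuracy grows only with stream length, which is why the resolution of the generator matters.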
An Exact Hardware Implementation of the Boltzmann Machine
 92563 Rueil Malmaison Cedex
, 1992
Abstract

Cited by 9 (0 self)
We present a fast implementation of the Boltzmann Machine based on specialized hardware. This realization faithfully implements the machine; in particular, it avoids the excess parallelism which makes other fast implementations of it only approximate. The current prototype performs 505 million additions and multiplications (or mega-synapses) per second. It can emulate the fully connected Boltzmann Machine with up to 1438 variables. The Boltzmann Machine's weights are expressed as 16-bit two's-complement fixed-point numbers. Our implementation is built on top of DECperle1, a reconfigurable coprocessor board based on field-programmable gate arrays (FPGAs). DECperle1 is closely coupled with its host processor (an ordinary workstation). In our application, it only performs the simplest and most computing-intensive part of the Boltzmann Machine algorithm, namely multiplying matrices of numbers by vectors of bits. The other operations (which are complicated, but only require a modest amoun...
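The operation the coprocessor accelerates, multiplying a matrix of numbers by a vector of bits, needs no true multiplications: because each state is 0 or 1, the product reduces to summing the weight-matrix columns selected by the '1' bits. A small sketch (the weight values and network size here are arbitrary illustrations, not the paper's):

```python
import numpy as np

def weighted_sums(W, s):
    """Net input to each unit of a Boltzmann Machine: W times a vector of
    bits.  With binary states this is just a sum of the columns of W
    selected by the '1' bits -- additions only, no multiplications."""
    return W[:, s.astype(bool)].sum(axis=1)

rng = np.random.default_rng(0)
W = rng.integers(-5, 6, size=(4, 4)).astype(np.int16)  # 16-bit fixed-point weights
np.fill_diagonal(W, 0)                                 # no self-connections
s = np.array([1, 0, 1, 1], dtype=np.uint8)             # binary unit states
net = weighted_sums(W, s)
# Same result as the explicit matrix-vector product:
assert np.array_equal(net, W @ s)
```

This reduction is what makes the inner loop "simple but computing-intensive" and a natural fit for FPGA adder trees, while the host keeps the complicated but cheap control logic.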
Digital Neural Network Implementations
 in Neural Networks, Concepts, Applications, and Implementations, Vol III. Englewood Cliffs
, 1995
Abstract

Cited by 7 (0 self)
This chapter gives an overview of existing digital VLSI implementations and discusses techniques for implementing high-performance, high-capacity digital neural nets. It presents a set of techniques for estimating chip area, performance, and power consumption in the early stages of design to facilitate architectural exploration. It shows how technology scaling rules can be included in the estimation process. It presents a set of basic building blocks useful in implementing digital networks. It then uses the estimation techniques to predict capacity and performance of a variety of digital architectures. Finally, it discusses implementation strategies for very large networks.

1 Introduction

Neural network applications suitable for implementation in VLSI cover a wide spectrum, from dedicated feedforward nets for real-time feature detection to general-purpose engines for exploring learning algorithms. The DARPA Neural Network Study [76] contains a good discussion of the range of applicati...
A Learning Analog Neural Network Chip with Continuous-Time Recurrent Dynamics
 In
, 1994
Abstract

Cited by 6 (3 self)
We present experimental results on supervised learning of dynamical features in an analog VLSI neural network chip. The recurrent network, containing six continuous-time analog neurons and 42 free parameters (connection strengths and thresholds), is trained to generate time-varying outputs approximating given periodic signals presented to the network. The chip implements a stochastic perturbative algorithm, which observes the error gradient along random directions in the parameter space for error-descent learning. In addition to the integrated learning functions and the generation of pseudo-random perturbations, the chip provides for teacher forcing and long-term storage of the volatile parameters. The network learns a 1 kHz circular trajectory in 100 sec. The chip occupies 2 mm × 2 mm in a 2 µm CMOS process, and dissipates 1.2 mW.

1 Introduction

Exact gradient-descent algorithms for supervised learning in dynamic recurrent networks [13] are fairly complex and do not provide for a ...
Accurate and Precise Computation using Analog VLSI, with Applications to Computer Graphics and Neural Networks
, 1993
Abstract

Cited by 3 (1 self)
This thesis develops an engineering practice and design methodology to enable us to use CMOS analog VLSI chips to perform more accurate and precise computation. These techniques form the basis of an approach that permits us to build computer graphics and neural network applications using analog VLSI. The nature of the design methodology focuses on defining goals for circuit behavior to be met as part of the design process. To increase the accuracy of analog computation, we develop techniques for creating compensated circuit building blocks, where compensation implies the cancellation of device variations, offsets, and nonlinearities. These compensated building blocks can be used as components in larger and more complex circuits, which can then also be compensated. To this end, we develop techniques for automatically determining appropriate parameters for circuits, using constrained optimization. We also fabricate circuits that implement multidimensional gradient estimation for a grad...
Analog VLSI neural network with digital perturbative learning
 IEEE Transactions on Circuits and Systems II : Analog and Digital Signal Processing
, 2002
Cited by 3 (0 self)
Digital Neurochip Design
 In K. Wojtek Przytula and Viktor K. Prasanna, editors, Digital Parallel Implementations of Neural Networks
, 1991
Abstract

Cited by 2 (0 self)
Introduction

This chapter describes a methodology for designing digital VLSI neurochips which emphasizes area, power, and performance estimation to facilitate architectural exploration in the early stages of design. It first discusses some key aspects of mapping neural net algorithms onto VLSI architectures. It then introduces a set of circuit-level building blocks commonly used in constructing digital nets. It discusses how to estimate chip area, performance, and power consumption in architectures constructed from these blocks, showing how to include technology scaling rules in the estimation process. It concludes with a detailed discussion of a CMOS implementation of a digital Boltzmann machine.

2 Mapping algorithms to architectures

An algorithm is a set of tasks to be applied to data in a specified order to transform inputs and internal state to desired outputs. An architecture is a set of resources and interconnections. Mapping algorithms to architectures
Pulse-Based Circuits and Methods for Probabilistic Neural Computation
 IN MICRONEURO '99 IEEE CONFERENCE
, 1999
Abstract

Cited by 2 (1 self)
This work argues that it should be possible to combine pulse-based VLSI techniques with the relatively simple training rules of the Helmholtz Machine stochastic neural architecture, in order to build an analogue probabilistic hardware model of the latter. An overview of the necessary components is presented, as well as a design for a pulse-width modulation oscillator, capable of transforming a current input (which represents the squashed, post-synaptic signal processed by a particular neuron) into the probability associated with the binary state of that neuron. A CMOS hardware prototype has been designed and fabricated, and precautions were taken during the design and simulation stages in order to prevent the oscillators on the same chip from locking together. Apart from testing the hardware prototype, future plans involve the hardware implementation of other modules, such as the synapse, the squashing function and weight-changing circuitry.
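What the oscillator computes in analogue hardware, turning a squashed input into a firing probability and then a binary state, can be sketched in a few lines. This is only a behavioural software model under the usual stochastic-neuron assumptions (a logistic squashing function and independent draws); the paper's circuit realizes this with pulse-width modulation rather than an RNG.

```python
import math
import random

def neuron_state(net_input, rng):
    """Sample the binary state of a stochastic neuron: squash the summed
    input through a logistic function to obtain a firing probability,
    then draw the state from that probability."""
    p_fire = 1.0 / (1.0 + math.exp(-net_input))
    return 1 if rng.random() < p_fire else 0

rng = random.Random(1)
# Over many draws, the mean state approaches the logistic of the input.
samples = [neuron_state(1.0, rng) for _ in range(10_000)]
mean = sum(samples) / len(samples)
```

The hardware concern the abstract raises, oscillators locking together on one chip, corresponds in this model to the independence assumption between draws: correlated "random" sources would bias the sampled probabilities.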