Results 1–10 of 25
A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization
 In
, 1993
Abstract

Cited by 35 (7 self)
A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No explicit information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error. A substantially faster learning speed is hence allowed. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks. 1 Background and Motivation We address general optimization tasks that require finding a set of constant parameter values p_i that minimize a given error functional E(p). For supervised learning, the error functional consists of some quantitativ...
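The perturbative update described in this abstract can be sketched in a few lines: perturb all parameters at once along a random direction, observe the resulting change in error, and step against it. The quadratic error functional and all constants below are hypothetical illustrations, not values from the paper.

```python
import random

def stochastic_error_descent(E, p, eta=0.02, sigma=0.1, steps=3000, seed=0):
    """Model-free error descent: correlate the observed error change along
    a random perturbation pi with pi itself; no knowledge of the network's
    internal structure is required."""
    rng = random.Random(seed)
    for _ in range(steps):
        pi = [rng.gauss(0.0, sigma) for _ in p]      # random direction
        e0 = E(p)
        e1 = E([x + d for x, d in zip(p, pi)])       # error after perturbation
        scale = eta * (e1 - e0) / (sigma * sigma)    # gradient estimate along pi
        p = [x - scale * d for x, d in zip(p, pi)]   # descend along pi
    return p

# Hypothetical quadratic error with minimum at (1, -2).
E = lambda q: (q[0] - 1.0) ** 2 + (q[1] + 2.0) ** 2
p = stochastic_error_descent(E, [0.0, 0.0])
```

In expectation each step follows the true negative gradient, since E[pi pi^T] = sigma^2 I; the price of needing only two error evaluations per step is gradient noise that leaves the parameters jittering near the minimum.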
A stochastic neural architecture that exploits dynamically reconfigurable FPGAs
 Pocek (Eds.), IEEE Workshop on FPGAs for Custom Computing Machines, IEEE Computer Society Press, Los Alamitos, CA
, 1993
Abstract

Cited by 20 (4 self)
In this paper we present an expandable digital architecture that provides an efficient real-time implementation platform for large neural networks. The architecture makes heavy use of the techniques of bit-serial stochastic computing to carry out the large number of required parallel synaptic calculations. In this design all real-valued quantities are encoded on to stochastic bit streams in which the '1' density is proportional to the given quantity. The actual digital circuitry is simple and highly regular, thus allowing very efficient space usage of fine-grained FPGAs. Another feature of the design is that the large number of weights required by a neural network are generated by circuitry tailored to each of their specific values, thus saving valuable cells. Whenever one of these values is required to change, the appropriate circuitry must be dynamically reconfigured. This may always be achieved in a fixed and minimum number of cells for a given bit stream resolution.
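The encoding this architecture relies on, a real value in [0, 1] carried as the '1' density of a serial bit stream, can be sketched in software. This is a toy model only; the hardware uses dedicated bit-stream generator circuits, and the function names below are illustrative.

```python
import random

def encode(value, length, rng):
    """Encode value in [0, 1] as a bit stream whose '1' density tracks it."""
    return [1 if rng.random() < value else 0 for _ in range(length)]

def decode(stream):
    """Recover the value as the observed '1' density."""
    return sum(stream) / len(stream)

rng = random.Random(42)
stream = encode(0.7, 10_000, rng)
estimate = decode(stream)   # close to 0.7, within sampling noise
```

Resolution grows with stream length: the decoded estimate's standard error shrinks as 1/sqrt(length), which is one reason bit-stream resolution is a key design parameter in such systems.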
A Device For Generating Binary Sequences For Stochastic Computing
 Electronics Letters
, 1992
Abstract

Cited by 13 (5 self)
A novel technique is presented for the generation of high-speed stochastic bit streams in which the '1' density is proportional to a given value. Bit streams of this type are particularly useful in bit-serial stochastic computing systems, such as digital stochastic neural networks. The proposed circuitry is highly suitable for VLSI fabrication. INTRODUCTION The use of stochastic bit streams allows a dramatic simplification of the circuitry required to implement many devices [1], since the multiplication of two values may be performed by computing the bitwise conjunction of two corresponding streams. The highly pipelined digital design described in this paper was developed for use with high-speed digital stochastic neural networks, as described in [2, 3, 4]. The proposed stochastic bit stream generator is highly pipelined and thus potentially extremely fast. The individual pipeline stages, known as modulators, are simple, and the number used defines the overall resolution of the generated bit strea...
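The multiplication-by-conjunction trick mentioned in this abstract is easy to verify numerically: for two independent unipolar streams with '1' densities a and b, the ANDed stream has density close to a·b. A toy sketch, with all constants chosen for illustration:

```python
import random

rng = random.Random(1)
N = 20_000
a, b = 0.6, 0.5
stream_a = [rng.random() < a for _ in range(N)]          # density-a stream
stream_b = [rng.random() < b for _ in range(N)]          # independent density-b stream
product = [x and y for x, y in zip(stream_a, stream_b)]  # bitwise conjunction
density = sum(product) / N                               # close to a * b = 0.30
```

A single AND gate per multiplier is what makes the hardware so compact; the catch is that the two streams must be statistically independent, which is why dedicated stream generators matter.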
An Exact Hardware Implementation of the Boltzmann Machine
 92563 Rueil Malmaison Cedex
, 1992
Abstract

Cited by 10 (0 self)
We present a fast implementation of the Boltzmann Machine based on specialized hardware. This realization faithfully implements the machine; in particular, it avoids the excess parallelism which makes other fast implementations of it only approximate. The current prototype performs 505 million additions and multiplications (or megasynapses) per second. It can emulate the fully connected Boltzmann Machine with up to 1438 variables. The Boltzmann Machine's weights are expressed as 16-bit two's complement fixed-point numbers. Our implementation is built on top of DECPeRLe-1, a reconfigurable coprocessor board based on field-programmable gate arrays (FPGAs). DECPeRLe-1 is closely coupled with its host processor (an ordinary workstation). In our application, it only performs the simplest and most computing-intensive part of the Boltzmann Machine algorithm, namely multiplying matrices of numbers by vectors of bits. The other operations (which are complicated, but only require a modest amoun...
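The kernel the board accelerates, a matrix of fixed-point weights times a vector of bits, sits inside the standard Boltzmann Machine update, which can be sketched as follows. This is a plain software model, not the paper's hardware; the weight values and temperature are invented for illustration.

```python
import math
import random

def gibbs_sweep(W, bias, s, T, rng):
    """One sweep over all units: the net input is a matrix-of-numbers times
    vector-of-bits product; each unit then fires with probability
    sigmoid(net / T)."""
    n = len(s)
    for i in range(n):
        net = bias[i] + sum(W[i][j] * s[j] for j in range(n) if j != i)
        s[i] = 1 if rng.random() < 1.0 / (1.0 + math.exp(-net / T)) else 0
    return s

# Two strongly coupled units tend to agree at low temperature.
W = [[0.0, 5.0], [5.0, 0.0]]
bias = [0.0, 0.0]
rng = random.Random(7)
s = [0, 1]
agree = 0
for _ in range(500):
    s = gibbs_sweep(W, bias, s, T=1.0, rng=rng)
    agree += int(s[0] == s[1])
fraction = agree / 500   # most sweeps end with the units agreeing
```

Because the states s[j] are bits, the inner sum reduces to adding up a subset of the weights, which is exactly the matrix-by-bit-vector product the abstract says is offloaded to the FPGA board.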
Digital Neural Network Implementations
 in Neural Networks, Concepts, Applications, and Implementations, Vol III. Englewood Cliffs
, 1995
Abstract

Cited by 8 (0 self)
This chapter gives an overview of existing digital VLSI implementations and discusses techniques for implementing high-performance, high-capacity digital neural nets. It presents a set of techniques for estimating chip area, performance, and power consumption in the early stages of design to facilitate architectural exploration. It shows how technology scaling rules can be included in the estimation process. It presents a set of basic building blocks useful in implementing digital networks. It then uses the estimation techniques to predict capacity and performance of a variety of digital architectures. Finally, it discusses implementation strategies for very large networks. 1 Introduction Neural network applications suitable for implementation in VLSI cover a wide spectrum, from dedicated feedforward nets for real-time feature detection to general-purpose engines for exploring learning algorithms. The DARPA Neural Network Study [76] contains a good discussion of the range of applicati...
A Learning Analog Neural Network Chip with Continuous-Time Recurrent Dynamics
 In
, 1994
Abstract

Cited by 6 (3 self)
We present experimental results on supervised learning of dynamical features in an analog VLSI neural network chip. The recurrent network, containing six continuous-time analog neurons and 42 free parameters (connection strengths and thresholds), is trained to generate time-varying outputs approximating given periodic signals presented to the network. The chip implements a stochastic perturbative algorithm, which observes the error gradient along random directions in the parameter space for error-descent learning. In addition to the integrated learning functions and the generation of pseudorandom perturbations, the chip provides for teacher forcing and long-term storage of the volatile parameters. The network learns a 1 kHz circular trajectory in 100 sec. The chip occupies 2 mm × 2 mm in a 2 µm CMOS process, and dissipates 1.2 mW. 1 Introduction Exact gradient-descent algorithms for supervised learning in dynamic recurrent networks [13] are fairly complex and do not provide for a ...
Analog VLSI neural network with digital perturbative learning
 IEEE Transactions on Circuits and Systems II : Analog and Digital Signal Processing
, 2002
Abstract

Cited by 3 (0 self)
Accurate and Precise Computation using Analog VLSI, with Applications to Computer Graphics and Neural Networks
, 1993
Abstract

Cited by 3 (1 self)
This thesis develops an engineering practice and design methodology to enable us to use CMOS analog VLSI chips to perform more accurate and precise computation. These techniques form the basis of an approach that permits us to build computer graphics and neural network applications using analog VLSI. The design methodology focuses on defining goals for circuit behavior to be met as part of the design process. To increase the accuracy of analog computation, we develop techniques for creating compensated circuit building blocks, where compensation implies the cancellation of device variations, offsets, and nonlinearities. These compensated building blocks can be used as components in larger and more complex circuits, which can then also be compensated. To this end, we develop techniques for automatically determining appropriate parameters for circuits, using constrained optimization. We also fabricate circuits that implement multidimensional gradient estimation for a grad...
Digital Neurochip Design
 In K. Wojtek Przytula and Viktor K. Prasanna, editors, Digital Parallel Implementations of Neural Networks
, 1991
Abstract

Cited by 2 (0 self)
Introduction This chapter describes a methodology for designing digital VLSI neurochips which emphasizes area, power, and performance estimation to facilitate architectural exploration in the early stages of design. It first discusses some key aspects of mapping neural net algorithms onto VLSI architectures. It then introduces a set of circuit level building blocks commonly used in constructing digital nets. It discusses how to estimate chip area, performance, and power consumption in architectures constructed from these blocks, showing how to include technology scaling rules in the estimation process. It concludes with a detailed discussion of a CMOS implementation of a digital Boltzmann machine. 2 Mapping algorithms to architectures An algorithm is a set of tasks to be applied to data in a specified order to transform inputs and internal state to desired outputs. An architecture is a set of resources and interconnections. Mapping algorithms to architectures
Delta-sigma cellular automata for analog VLSI random vector generation
 IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, Vol.46, No.3
, 1999
Abstract

Cited by 2 (0 self)
Abstract—We present a class of analog cellular automata for parallel analog random vector generation, including theory on the randomness properties, scalable parallel very large scale integration (VLSI) architectures, and experimental results from an analog VLSI prototype with 64 channels. Linear congruential coupling between cells produces parallel channels of uniformly distributed random analog values, with statistics that are uncorrelated both across channels and over time. The cell for each random channel essentially implements a switched-capacitor delta–sigma modulator, and measures 100 µm × 120 µm in 2-µm CMOS technology. The 64 cells are connected as a MASH cascade in a chain or ring topology on a two-dimensional (2-D) grid, and can be rearranged for use in various VLSI applications that require a parallel supply of random analog vectors, such as analog encryption and secure communications, analog built-in self-test, stochastic neural networks, and simulated annealing optimization and learning. Index Terms—Random generation, noise, delta–sigma modulation, cellular automata, analog VLSI, neural networks, switched-capacitor circuits.
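The linear congruential coupling idea can be illustrated with a purely numerical toy model: each cell updates as an analog congruential map of its own state plus a ring neighbor's output, keeping everything in [0, 1). This sketch only mimics the coupling structure; the actual cells are switched-capacitor delta-sigma modulators, and the multiplier and offset below are invented for illustration.

```python
def random_vector_channels(n_channels=64, steps=500, a=5.0, c=0.381966):
    """Toy linear-congruential coupling on a ring of cells: each cell maps
    x -> frac(a*x + c + left_neighbor), yielding parallel channels of
    roughly uniform values in [0, 1)."""
    state = [(0.2 + 0.1 * i) % 1.0 for i in range(n_channels)]
    samples = []
    for _ in range(steps):
        # Synchronous update: the comprehension reads the old state throughout.
        state = [
            (a * state[i] + c + state[(i - 1) % n_channels]) % 1.0
            for i in range(n_channels)
        ]
        samples.append(list(state))
    return samples

samples = random_vector_channels()
flat = [v for vec in samples for v in vec]
mean = sum(flat) / len(flat)   # near 0.5 if the output is roughly uniform
```

The expanding map (multiplier greater than 1) is what decorrelates successive outputs, while the neighbor term couples the channels so that a single seed pattern spreads across the whole ring.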