Results 11–20 of 91
A Mean Field Learning Algorithm For Unsupervised Neural Networks
, 1999
Abstract

Cited by 11 (2 self)
We introduce a learning algorithm for unsupervised neural networks based on ideas from statistical mechanics. The algorithm is derived from a mean field approximation for large, layered sigmoid belief networks. We show how to (approximately) infer the statistics of these networks without resort to sampling. This is done by solving the mean field equations, which relate the statistics of each unit to those of its Markov blanket. Using these statistics as target values, the weights in the network are adapted by a local delta rule. We evaluate the strengths and weaknesses of these networks for problems in statistical pattern recognition.

1. Introduction
Multilayer neural networks trained by backpropagation provide a versatile framework for statistical pattern recognition. They are popular for many reasons, including the simplicity of the learning rule and the potential for discovering hidden, distributed representations of the problem space. Nevertheless, there are many issues that are...
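The two ingredients named in the abstract can be caricatured in a few lines: a damped fixed-point iteration of mean-field equations of the generic form μ_i = σ(Σ_j J_ij μ_j + b_i), followed by a local delta rule that uses the resulting means as targets. This is a minimal sketch, not the paper's exact algorithm; `J`, `b`, the damping factor, and the learning rate are all illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def mean_field_fixed_point(J, b, n_iter=200):
    """Iterate mu_i = sigmoid(sum_j J_ij mu_j + b_i) to a fixed point,
    with damping (0.5 old + 0.5 new) for stability."""
    mu = np.full(len(b), 0.5)
    for _ in range(n_iter):
        mu = 0.5 * mu + 0.5 * sigmoid(J @ mu + b)
    return mu

def delta_rule_step(W, pre, target, lr=0.1):
    """Local delta rule: nudge weights so each unit's output moves toward
    its mean-field target, using only locally available quantities."""
    post = sigmoid(W @ pre)
    return W + lr * np.outer(target - post, pre)
```

For weak couplings the damped iteration converges quickly, and each delta-rule step reduces the squared error between the unit outputs and their targets.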
Canonical Parameterization of Excess Motor Degrees of Freedom with Self-Organizing Maps
 IEEE Trans. Neural Networks
, 1994
Abstract

Cited by 10 (1 self)
The problem of sensorimotor control is underdetermined due to excess (or "redundant") degrees of freedom when there are more joint variables than the minimum needed for positioning an end-effector. A method is presented for solving the nonlinear inverse kinematics problem for a redundant manipulator by learning a natural parameterization of the inverse solution manifolds with self-organizing maps. The parameterization approximates the topological structure of the joint space, which is that of a fiber bundle. The fibers represent the "self-motion manifolds" along which the manipulator can change configuration while keeping the end-effector at a fixed location. The method is demonstrated for the case of the redundant planar manipulator. Data samples along the self-motion manifolds are selected from a large set of measured input-output data. This is done by taking points in the joint space corresponding to end-effector locations near "query points", which define small neighborhoods ...
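The core self-organizing-map update underlying such a parameterization is simple: find the unit whose weight vector best matches a sample, then pull it and its neighbours on the map toward that sample. A minimal 1-D sketch (the joint-space sampling and fiber-bundle parameterization in the paper are considerably more involved; the learning rate and neighbourhood width here are illustrative):

```python
import numpy as np

def som_step(weights, x, lr=0.2, sigma=1.0):
    """One SOM update on a 1-D chain of units: locate the best-matching
    unit (BMU), then move it and its map neighbours toward sample x."""
    bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
    idx = np.arange(len(weights))
    h = np.exp(-((idx - bmu) ** 2) / (2.0 * sigma ** 2))  # neighbourhood kernel
    return weights + lr * h[:, None] * (x - weights)
```

Iterating this over samples drawn from a low-dimensional manifold (here, a self-motion manifold) makes the chain of units trace out a parameterization of that manifold.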
Toward a Model of Consolidation: The Retention and Transfer of Neural Net Task Knowledge
 in: Proceedings of the INNS World Congress on Neural Networks, edited by
, 1995
Abstract

Cited by 10 (2 self)
This paper introduces an architecture of two feedforward backpropagation neural networks and associated software, which we collectively refer to as a consolidation system.
Perspectives of the high dimensional dynamics of neural microcircuits from the point of view of low dimensional readouts
 Complexity (Special Issue on Complex Adaptive Systems)
, 2003
Abstract

Cited by 9 (2 self)
We investigate generic models for cortical microcircuits, i.e., recurrent circuits of integrate-and-fire neurons with dynamic synapses. These complex dynamic systems subserve the amazing information processing capabilities of the cortex, but are at present very little understood. We analyze the transient dynamics of models for neural microcircuits from the point of view of one or two readout neurons that collapse the high-dimensional transient dynamics of a neural circuit into a one- or two-dimensional output stream. This stream may, for example, represent the information that is projected from such a circuit to some particular other brain area or actuators. It is shown that simple local learning rules enable a readout neuron to extract from the high-dimensional transient dynamics of a recurrent neural circuit quite different low-dimensional projections, which may even contain "virtual attractors" that are not apparent in the high-dimensional dynamics of the circuit itself. Furthermore, it is demonstrated that the information extraction capabilities of linear readout neurons are boosted by the computational operations of a sufficiently large preceding neural microcircuit. Hence a generic neural microcircuit may play a similar role for information processing as a kernel for support vector machines in machine learning. We demonstrate that the projection of time-varying inputs into a large recurrent neural circuit enables a linear readout neuron to classify the time-varying circuit inputs with the same power as complex nonlinear classifiers, such as a pool of perceptrons trained by the p-delta rule or a feedforward sigmoidal neural net trained by backprop, provided that the size of the ...
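The readout idea can be sketched with a rate-based caricature (the paper's circuits use spiking integrate-and-fire neurons with dynamic synapses): drive a fixed random recurrent network with a time-varying input, keep the resulting high-dimensional state, and fit a linear readout by least squares. The network size, input classes, and sample counts below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
W = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # fixed random recurrent weights
w_in = rng.normal(0.0, 1.0, N)                 # fixed input weights

def circuit_state(u):
    """Drive the recurrent circuit with input sequence u; return final state."""
    x = np.zeros(N)
    for u_t in u:
        x = np.tanh(W @ x + w_in * u_t)
    return x

def make_input(label):
    """Two classes of time-varying inputs: noisy sine waves of different frequency."""
    freq = 1.5 if label else 1.0
    return np.sin(freq * np.linspace(0, 4 * np.pi, 30)) + 0.05 * rng.normal(size=30)

labels = np.array([i % 2 for i in range(40)])
X = np.array([circuit_state(make_input(l)) for l in labels])  # circuit states
y = 2.0 * labels - 1.0
w_out, *_ = np.linalg.lstsq(X, y, rcond=None)  # linear readout, least squares
train_acc = np.mean(np.sign(X @ w_out) == y)
```

Only `w_out` is trained; the recurrent circuit itself is untrained and acts, loosely, like a kernel expansion of the input history.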
Dynamics of interacting neural networks
 J. Phys. A: Math. Gen.
, 2000
Abstract

Cited by 7 (4 self)
Complex bit sequences generated by a perceptron that learns the opposite of its own prediction are studied, and the success of a student perceptron trained on this sequence is calculated. A system of interacting perceptrons with a directed flow of information is solved analytically. A symmetry-breaking phase transition is found with increasing learning rate. A system of interacting perceptrons can develop a good strategy for the problem of adaptive competition known as the minority game. Simple models of neural networks describe a wide variety of phenomena in neurobiology and information theory. Neural networks are systems of elements interacting by adaptive couplings which are trained by a set of examples. After training they function as content-addressable associative memory, as classifiers or as prediction algorithms. Using methods of statistical physics, many of these phenomena have been elucidated analytically for infinitely large single neural networks [1]. Many phenomena in biology, social science and computer science may be modelled by a ...
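The generator described in the first sentence can be reproduced in miniature: a perceptron predicts the next bit from a window of previous bits, the sequence takes the opposite bit, and the perceptron is then trained toward that bit. A sketch under illustrative parameters (window size, learning rate, and seed are not from the paper):

```python
import numpy as np

def confused_bit_sequence(N=20, T=500, eta=1.0, seed=1):
    """Generate bits with a perceptron that learns the opposite of its
    own prediction: predict, emit the opposite bit, train toward it."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=N)
    x = rng.choice([-1.0, 1.0], size=N)      # window of the last N bits
    seq = []
    for _ in range(T):
        pred = 1.0 if w @ x >= 0 else -1.0   # perceptron's prediction
        bit = -pred                          # the sequence does the opposite
        w += (eta / N) * bit * x             # perceptron step toward the bit
        x = np.roll(x, 1)
        x[0] = bit
        seq.append(bit)
    return np.array(seq)
```

The anti-learning step keeps the sequence from settling into the perceptron's own predictable output, which is what makes it hard for a student perceptron to follow.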
Issues in Learning Global Properties of the Robot Kinematic Mapping
 Proc. IEEE Int'l Conf. Robotics & Automation, 205-212
, 1993
Abstract

Cited by 6 (3 self)
The robot kinematic mapping x = f(θ) generally has multiple distinct solution branches {θ | f(θ) = x_d} for a given end-effector location x_d, where each branch can have a nontrivial manifold structure (as in the case of a redundant manipulator). Learning techniques which exploit known topological properties of the mapping are used to determine the number and nature of these branches. Specifically, clustering of input-output data is used to map out the preimage branches. Also, topology preserving networks are used to learn and parameterize the topology of these branches for certain known classes of manipulators. As a practical consequence, the inverse kinematic mapping can be approximated for each branch separately.

1 Introduction
The forward kinematic function f : Θ^n → W^m ⊆ X^m, W^m = f(Θ^n), maps a set of n joint parameters to the m-dimensional task space (where typically m ≤ n), x = f(θ), x ∈ W^m. A classic problem in robot kinematics is to solve the in...
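For the non-redundant planar two-link arm the two solution branches (the elbow-up and elbow-down preimages of a target location) can be written in closed form, which makes the multi-branch structure concrete. A sketch with illustrative unit link lengths:

```python
import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end-effector."""
    t1, t2 = theta
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def ik_branches(x, y, l1=1.0, l2=1.0):
    """Both solution branches (elbow-up / elbow-down) of the inverse mapping."""
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)  # cos of elbow angle
    sols = []
    for sign in (+1.0, -1.0):
        t2 = sign * np.arccos(np.clip(c2, -1.0, 1.0))
        t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))
        sols.append((t1, t2))
    return sols
```

Each branch maps back to the same end-effector location under `fk`; a redundant arm generalizes each such discrete branch into a continuous self-motion manifold.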
On the Storage Capacity of Attractor Neural Networks With Depressing Synapses
, 2002
Abstract

Cited by 6 (3 self)
We compute the capacity of a binary neural network with dynamic depressing synapses to store and retrieve an infinite number of patterns. We use a biologically motivated model of synaptic depression and a standard mean-field approach. We find that at T = 0 the critical storage capacity decreases with the degree of the depression. We confirm the validity of our main mean-field results with numerical simulations.
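The baseline setting behind such capacity calculations is a binary attractor network with Hebbian couplings. A sketch without the synaptic-depression dynamics (which are the paper's contribution), at a load α = P/N well below the classical critical capacity, so a noisy cue is cleaned up into the stored pattern:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 200, 10                                  # alpha = P/N = 0.05
patterns = rng.choice([-1, 1], size=(P, N))
W = (patterns.T @ patterns) / N                 # Hebbian couplings
np.fill_diagonal(W, 0)

def retrieve(cue, steps=20):
    """Synchronous sign-dynamics until (approximate) convergence."""
    s = cue.copy().astype(float)
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0                         # break ties deterministically
    return s

cue = patterns[0].copy()
flip = rng.choice(N, size=20, replace=False)    # corrupt 10% of the bits
cue[flip] *= -1
overlap = retrieve(cue) @ patterns[0] / N       # 1.0 means perfect recall
```

Synaptic depression would make the couplings activity-dependent, which is what lowers the critical capacity in the paper's analysis.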
Prediction of Beta Sheets in Proteins
 Advances in Neural Information Processing Systems 8
, 1996
Abstract

Cited by 6 (0 self)
Most current methods for prediction of protein secondary structure use a small window of the protein sequence to predict the structure of the central amino acid. We describe a new method for prediction of the nonlocal structure called β-sheet, which consists of two or more β-strands that are connected by hydrogen bonds. Since β-strands are often widely separated in the protein chain, a network with two windows is introduced. After training on a set of proteins the network predicts the sheets well, but there are many false positives. By using a global energy function the β-sheet prediction is combined with a local prediction of the three secondary structures α-helix, β-strand and coil. The energy function is minimized using simulated annealing to give a final prediction.

1 INTRODUCTION
Proteins are long sequences of amino acids. There are 20 different amino acids with varying chemical properties, e.g., some are hydrophobic (dislike water) and some are hydrophilic [1]. It is...
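The "two windows" idea amounts to concatenating the encodings of two sequence neighbourhoods, one around each candidate strand partner, so the network sees both widely separated regions at once. A sketch with a hypothetical one-hot encoding and window size (the paper's actual input representation and window widths may differ):

```python
import numpy as np

AMINO = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def encode_two_windows(seq, i, j, w=3):
    """Concatenate one-hot encodings of two windows of width w, centred at
    residues i and j, the candidate hydrogen-bonded strand partners.
    Out-of-range positions encode as all zeros."""
    def one_hot(k):
        v = np.zeros(len(AMINO))
        if 0 <= k < len(seq):
            v[AMINO.index(seq[k])] = 1.0
        return v
    offsets = range(-(w // 2), w // 2 + 1)
    return np.concatenate([one_hot(i + d) for d in offsets] +
                          [one_hot(j + d) for d in offsets])
```

The resulting fixed-length vector can then be fed to an ordinary feedforward classifier that scores whether residues i and j are paired in a β-sheet.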
Fuzzy Astronomical Seeing Nowcasts with a Dynamical and Recurrent Connectionist Network
, 1994
Abstract

Cited by 5 (4 self)
We assess a neural-based method for fuzzy astronomical seeing prediction, based on known meteorological variables at the same time point. This multiple regression, termed nowcasting [10, 11], will allow modern telescopes to be preset, a few hours in advance, in the most suitable instrumental mode. The data used are extensive meteorological and seeing measurements partly made at Cerro Paranal in Chile, site of the Very Large Telescope (VLT). A fuzzy correspondence analysis is carried out to explore the internal relationships in the data. Then, a time- and space-recurrent network is used in combination with a fuzzy coding approach to capture the temporal regularities of the seeing series. Such a connectionist network is endowed with internal dynamics by means of arbitrary recurrent time-delayed connections. The performance of the model is appraised and the results are compared with the fuzzy k-nearest neighbors method.

Keywords: neural networks, nearest neighbor method, time series ...
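The k-nearest-neighbours baseline mentioned in the abstract reduces, in its crisp (non-fuzzy) form, to matching the current window of measurements against past windows and averaging the successors of the best matches. A sketch on a synthetic series (the lag, k, and data are illustrative, not the paper's setup):

```python
import numpy as np

def knn_nowcast(history, query, k=3, lag=4):
    """Crisp k-NN nowcast: find the k past windows of length `lag` most
    similar to `query` and average the values that followed them."""
    windows = np.array([history[t:t + lag] for t in range(len(history) - lag)])
    targets = np.array([history[t + lag] for t in range(len(history) - lag)])
    d = np.linalg.norm(windows - query, axis=1)   # distance to each past window
    nearest = np.argsort(d)[:k]
    return targets[nearest].mean()
```

A fuzzy variant would replace the hard top-k selection with membership-weighted averaging; the recurrent network in the paper instead learns the temporal regularities directly.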