Results 1–10 of 35
Biological constraints on connectionist modelling
Connectionism in Perspective, 1989
Abstract

Cited by 76 (8 self)
Many researchers interested in connectionist models accept that such models are "neurally inspired" but do not worry too much about whether their models are biologically realistic. While such a position may be perfectly justifiable, the present paper attempts to illustrate how biological information can be used to constrain connectionist models. Two particular areas are discussed. The first section deals with visual information processing in the primate and human visual system. It is argued that the speed with which visual information is processed imposes major constraints on the architecture and operation of the visual system. In particular, it seems that a great deal of processing must depend on a single bottom-up pass. The second section deals with biological aspects of learning algorithms. It is argued that although there is good evidence for certain coactivation-related synaptic modification schemes, other learning mechanisms, including backpropagation, are not currently supported by experimental data.
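The "coactivation-related synaptic modification" the abstract refers to can be illustrated with a plain Hebbian update; this is a generic textbook rule, not the paper's own model, and all names here are illustrative:

```python
import numpy as np

# Hebbian coactivation rule: the weight between two units grows in
# proportion to the product of pre- and postsynaptic activity.
def hebbian_step(w, pre, post, lr=0.1):
    return w + lr * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0])   # presynaptic activity pattern
post = np.array([1.0, 1.0])       # postsynaptic activity pattern
w = np.zeros((2, 3))
for _ in range(5):
    w = hebbian_step(w, pre, post)

# Only synapses whose pre- and postsynaptic units were coactive grow;
# the middle input unit was silent, so its weights stay at zero.
print(w)
```

Backpropagation, by contrast, requires a non-local error signal travelling backwards through the network, which is the part the abstract says lacks experimental support.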
Towards a unified brain theory
1981
Abstract

Cited by 29 (27 self)
An approach to collective aspects of the neocortical system is formulated by methods of modern nonequilibrium statistical mechanics. Microscopic neuronal synaptic interactions are first spatially averaged over columnar domains. These spatially ordered domains include well-formulated fluctuations that retain contact with the original physical synaptic parameters. They are also a suitable substrate for macroscopic spatial-temporal regions described by Fokker-Planck and Lagrangian formalisms. This development clarifies similarities and differences among previous studies, suggests new analytically supported insights into neocortical function, and permits future approximation or elaboration within current paradigms of collective systems.
Convergence-Zone Episodic Memory: Analysis and Simulations
NEURAL NETWORKS, 1997
Abstract

Cited by 24 (1 self)
Human episodic memory provides a seemingly unlimited storage for everyday experiences, and a retrieval system that allows us to access the experiences with partial activation of their components. The system is believed to consist of a fast, temporary storage in the hippocampus, and a slow, long-term storage within the neocortex. This paper presents a neural network model of the hippocampal episodic memory inspired by Damasio's idea of Convergence Zones. The model consists of a layer of perceptual feature maps and a binding layer. A perceptual feature pattern is coarse-coded in the binding layer, and stored on the weights between layers. A partial activation of the stored features activates the binding pattern, which in turn reactivates the entire stored pattern. For many configurations of the model, a theoretical lower bound for the memory capacity can be derived; it can be an order of magnitude or more higher than the number of all units in the model, and several orders of magnitude higher than the number of binding-layer units. Computational simulations further indicate that the average capacity is an order of magnitude larger than the theoretical lower bound, and making the connectivity between layers sparser causes an even further increase in capacity. Simulations also show that if more descriptive binding patterns are used, the errors tend to be more plausible (patterns are confused with other similar patterns), with a slight cost in capacity. The convergence-zone episodic memory therefore accounts for the immediate storage and associative retrieval capability and large capacity of the hippocampal memory, and shows why the memory encoding areas can be much smaller than the perceptual maps, consist of rather coarse computational units, and be only sparsely connected t...
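The store-and-retrieve cycle the abstract describes (feature pattern coarse-coded in a binding layer, partial cue reactivating the whole pattern) can be sketched in a few lines. The layer sizes, sparsity, and coding below are illustrative choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, n_bind, k = 100, 40, 5    # feature units, binding units, winners

# Store one perceptual feature pattern: coarse-code it as a sparse random
# binding pattern and connect the two layers with Hebbian weights.
pattern = np.zeros(n_feat)
pattern[:20] = 1.0
binding = np.zeros(n_bind)
binding[rng.choice(n_bind, size=k, replace=False)] = 1.0
W = np.outer(binding, pattern)    # weights between the two layers

# Retrieve from a partial cue: half of the stored features are missing.
cue = pattern.copy()
cue[10:20] = 0.0
b = W @ cue
winners = (b >= np.sort(b)[-k]).astype(float)  # k strongest binding units
recalled = (W.T @ winners > 0).astype(float)   # reactivate the full pattern

print(bool(np.array_equal(recalled, pattern)))  # the full pattern is recovered
```

The binding layer here is much smaller than the feature layer, which echoes the abstract's point that the encoding area can be far smaller than the perceptual maps it binds.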
Adaptive Perceptual Pattern Recognition by Self-Organizing Neural Networks: Context, Uncertainty, Multiplicity, and Scale
NEURAL NETWORKS, 1995
Abstract

Cited by 19 (9 self)
A new context-sensitive neural network, called an "EXIN" (excitatory + inhibitory) network, is described. EXIN networks self-organize in complex perceptual environments, in the presence of multiple superimposed patterns, multiple scales, and uncertainty. The networks use a new inhibitory learning rule, in addition to an excitatory learning rule, to allow superposition of multiple simultaneous neural activations (multiple winners), under strictly regulated circumstances, instead of forcing winner-take-all pattern classifications. The multiple activations represent uncertainty or multiplicity in perception and pattern recognition. Perceptual scission (breaking of linkages) between independent category groupings thus arises and allows effective global context-sensitive segmentation, constraint satisfaction, and exclusive credit attribution. A Weber Law neuron-growth rule lets the network learn and classify input patterns despite variations in their spatial scale. Applications of the new techn...
Hidden image separation from incomplete image mixtures by independent component analysis
In ICPR'96, 1996
Abstract

Cited by 11 (7 self)
It is known that independent component analysis (ICA) (also called blind source separation) can be applied only if the number of received signals (sensors) is at least equal to the number of mixed sources contained in the sensor signals. In this paper an application of ICA is proposed for hidden (secured) image transmission over communication channels. We assume that only a single image mixture is transmitted. A friendly receiver contains the remaining original sources and can therefore separate the hidden image of lowest energy. The influence of two non-lossless signal reduction stages, compression by principal component analysis and signal quantization, on the separation ability is tested. Constraints of the mixing process that make the hidden image separation impossible without the key images are discussed.
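The core idea (a receiver that already holds the other sources can pull a low-energy hidden source out of a single mixture) can be sketched with 1-D signals. As a toy stand-in for full ICA, the sketch projects the known "key" sources out of the mixture by least squares, which recovers the hidden source up to scale when sources are uncorrelated; the signals and mixing weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000
key1 = rng.laplace(size=n)            # "key" sources known to the receiver
key2 = rng.laplace(size=n)
hidden = rng.laplace(size=n)          # hidden source, lowest mixing energy
mixture = 1.0 * key1 + 0.8 * key2 + 0.1 * hidden   # single transmitted signal

# Receiver side: remove the key components from the mixture by least
# squares; the residual is (approximately) the hidden source up to scale.
K = np.stack([key1, key2], axis=1)
coef, *_ = np.linalg.lstsq(K, mixture, rcond=None)
residual = mixture - K @ coef

corr = np.corrcoef(residual, hidden)[0, 1]
print(round(corr, 3))  # close to 1: the hidden source is recovered
```

An eavesdropper who sees only `mixture` faces one sensor and three sources, exactly the under-determined case where, as the abstract notes, ICA cannot be applied.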
Adaptive learning algorithm for principal component analysis with partial data
In Thirteenth European Meeting on Cybernetics and Systems Research, 1996
Abstract

Cited by 11 (7 self)
In this paper a fast and efficient adaptive learning algorithm for estimation of the principal components is developed. It seems to be especially useful in applications with a changing environment, where the learning process has to be repeated in an on-line manner. The approach can be called the cascade recursive least square (CRLS) method, as it combines a cascade (hierarchical) neural network scheme for input signal reduction with an RLS (recursive least square) filter for adaptation of learning rates. A successful application of the CRLS method to 2-D image compression and reconstruction, and its performance in comparison to other known adaptive PCA algorithms, are also documented.
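The cascade idea (estimate one principal component, deflate the input, repeat for the next) can be sketched with Oja's rule standing in for the paper's RLS-based update, whose details are not given in the abstract; the data and parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
C = np.array([[3.0, 1.0],
              [1.0, 1.0]])             # covariance with a dominant direction
X = rng.standard_normal((5000, 2)) @ np.linalg.cholesky(C).T

# Cascade/deflation scheme (Oja's rule as a stand-in for the CRLS update):
# learn one component on-line, remove it from the data, then repeat.
def oja_component(data, lr=1e-3, epochs=2):
    w = rng.standard_normal(data.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in data:
            y = w @ x
            w += lr * y * (x - y * w)   # Oja's normalized Hebbian update
    return w / np.linalg.norm(w)

w1 = oja_component(X)
X_defl = X - np.outer(X @ w1, w1)       # deflate: remove the first component
w2 = oja_component(X_defl)

pc1 = np.linalg.eigh(C)[1][:, -1]       # exact leading eigenvector, for reference
print(round(abs(w1 @ pc1), 2))
```

Because each stage only ever tracks one component of the (deflated) input, the scheme re-adapts naturally when the input statistics drift, which is the on-line setting the abstract targets.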
Chaotic Neurodynamics for Autonomous Agents
2005
Abstract

Cited by 9 (6 self)
Mesoscopic-level neurodynamics studies the collective dynamical behavior of neural populations. Such models are becoming increasingly important in understanding large-scale brain processes. Brains exhibit aperiodic oscillations with much richer dynamical behavior than fixed-point and limit-cycle approximations allow. Here we present a discretized model inspired by Freeman's K-set mesoscopic-level population model. We show that this version is capable of replicating the important principles of aperiodic/chaotic neurodynamics while being fast enough for use in real-time autonomous agent applications. This simplification of the K model provides many advantages, not only in efficiency but also in simplicity and in the ease with which its dynamical properties can be analyzed. We study the discrete version using a multilayer, highly recurrent model of the neural architecture of perceptual brain areas. We use this architecture to develop example action selection mechanisms in an autonomous agent.
Static and Dynamic Attractors of Autoassociative Neural Networks
in Proc. Int. Conf. on Image Analysis and Processing (ICIAP'97), Vol. II (LNCS), 1997
Abstract

Cited by 6 (3 self)
In this paper we study the problem of the occurrence of cycles in autoassociative neural networks. We call these cycles dynamic attractors, show when and why they occur, and how they can be identified. Of particular interest is the pseudoinverse network with reduced self-connection. We prove that it has dynamic attractors, which occur with a probability proportional to the number of prototypes and the degree of weight reduction. We show how to predict and avoid them. Keywords: pattern recognition, neural network, pseudoinverse rule, stable state. Autoassociative neural networks, like those introduced by Amari [1], Kohonen [2], Hopfield [3], Personnaz [4], are intensively used for pattern recognition and low-level computer vision problems [5, 6, 7]. These problems include identification and categorization of faces, computation of optical flow, static and motion stereo, image restoration, and other problems that are ill-posed in the sense of Hadamard [8]. What makes these network...
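The dynamic attractors the abstract studies can be seen in a minimal example: under synchronous (parallel) updates, even a symmetric network with zero self-connection can cycle between two states instead of settling into a fixed point. This is a generic two-unit illustration, not the paper's pseudoinverse construction:

```python
import numpy as np

# A two-unit autoassociative network with symmetric weights and no
# self-connection, updated synchronously (all units at once). Parallel
# updates can produce a length-2 cycle -- a "dynamic attractor" --
# where asynchronous updates would settle into a fixed point.
W = np.array([[0.0, -1.0],
              [-1.0, 0.0]])            # mutual inhibition

def step(state):
    return np.sign(W @ state)          # synchronous threshold update

s = np.array([1.0, 1.0])
trajectory = [tuple(s)]
for _ in range(4):
    s = step(s)
    trajectory.append(tuple(s))

print(trajectory)  # alternates between (1, 1) and (-1, -1)
```

With asynchronous updates the same weights drive the network to one of the fixed points (1, -1) or (-1, 1), which is why the update schedule matters when predicting cycles.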
Stochastic resonance in continuous and spiking neuron models with Lévy noise
IEEE Transactions on Neural Networks, 2008
Abstract

Cited by 6 (6 self)
Lévy noise can help neurons detect faint or subthreshold signals. Lévy noise extends standard Brownian noise to many types of impulsive jump-noise processes found in real and model neurons as well as in models of finance and other random phenomena. Two new theorems and the Itô calculus show that white Lévy noise will benefit subthreshold neuronal signal detection if the noise process's scaled drift velocity falls inside an interval that depends on the threshold values. These results generalize earlier "forbidden interval" theorems of neuronal "stochastic resonance" (SR) or noise-injection benefits. Global and local Lipschitz conditions imply that additive white Lévy noise can increase the mutual information or bit count of several feedback neuron models that obey a general stochastic differential equation (SDE). Simulation results show that the same noise benefits still occur for some infinite-variance stable Lévy noise processes even though the theorems themselves apply only to finite-variance Lévy noise. The Appendix proves the two Itô-theoretic lemmas that underlie the new Lévy noise-benefit theorems. Index Terms: Lévy noise, jump diffusion, mutual information, neuron models, signal detection, stochastic resonance (SR). Stochastic resonance (SR) occurs when noise benefits a system rather than harms it. Small amounts of noise can often enhance some forms of nonlinear signal processing
Adaptive stochastic resonance in noisy neurons based on mutual information
IEEE Trans. Neural Netw., 2004
Abstract

Cited by 6 (5 self)
Noise can improve how memoryless neurons process signals and maximize their throughput information. Such favorable use of noise is the so-called "stochastic resonance" or SR effect at the level of threshold neurons and continuous neurons. This paper presents theoretical and simulation evidence that 1) lone noisy threshold and continuous neurons exhibit the SR effect in terms of the mutual information between random input and output sequences, 2) a new statistically robust learning law can find this entropy-optimal noise level, and 3) the adaptive SR effect is robust against highly impulsive noise with infinite variance. Histograms estimate the relevant probability density functions at each learning iteration. A theorem shows that almost all noise probability density functions produce some SR effect in threshold neurons even if the noise is impulsive and has infinite variance. The optimal noise level in threshold neurons also behaves nonlinearly as the input
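The SR effect in a threshold neuron, measured by mutual information as in the abstract, can be demonstrated in a few lines. This sketch uses Gaussian noise and a fixed grid of noise levels; the paper's adaptive learning law and impulsive-noise results are not reproduced, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 20000
theta = 1.0                                    # firing threshold
amp = 0.5                                      # subthreshold signal amplitude
bits = rng.integers(0, 2, n)                   # random input bit sequence

def mutual_info_bits(noise_std):
    # Threshold neuron: emits a spike when signal plus noise crosses theta.
    out = (amp * bits + noise_std * rng.standard_normal(n)) > theta
    mi = 0.0
    for b in (0, 1):                           # empirical mutual information
        for o in (False, True):
            p_joint = np.mean((bits == b) & (out == o))
            if p_joint > 0:
                p_marg = np.mean(bits == b) * np.mean(out == o)
                mi += p_joint * np.log2(p_joint / p_marg)
    return mi

mi = {s: mutual_info_bits(s) for s in (0.0, 0.4, 8.0)}
# Zero noise: the neuron never fires, so the output carries no information.
# Moderate noise pushes the subthreshold bits over the threshold often
# enough to transmit information; very large noise drowns them out again.
print(mi)
```

The non-monotonic profile (zero, then a rise, then a fall as noise grows) is the SR signature; the adaptive scheme in the paper searches for the peak of this curve on-line.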