Results 1 - 6 of 6
An information-maximization approach to blind separation and blind deconvolution
 NEURAL COMPUTATION
, 1995
Non-Linear Neurons in the Low-Noise Limit: A Factorial Code Maximizes Information Transfer
, 1994
Abstract

Cited by 163 (18 self)
We investigate the consequences of maximizing information transfer in a simple neural network (one input layer, one output layer), focussing on the case of non-linear transfer functions. We assume that both receptive fields (synaptic efficacies) and transfer functions can be adapted to the environment. The main result is that, for bounded and invertible transfer functions, in the case of a vanishing additive output noise and no input noise, maximization of information (Linsker's infomax principle) leads to a factorial code, hence to the same solution as required by the redundancy reduction principle of Barlow. We show also that this result is valid for linear, and more generally unbounded, transfer functions, provided optimization is performed under an additive constraint, that is, one which can be written as a sum of terms, each one specific to one output neuron. Finally we study the effect of a non-zero input noise. We find that, at first order in the input noise, assumed to be small ...
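The core claim of this abstract can be illustrated in one dimension: for a single noiseless unit with a bounded, invertible transfer function, output entropy is maximized when the transfer function matches the cumulative distribution of the input, making the output uniform. A minimal numerical sketch (the empirical-CDF transfer function and all names here are illustrative, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary input ensemble for a single noiseless unit.
x = rng.normal(loc=1.0, scale=2.0, size=100_000)

def cdf_transfer(x):
    """Empirical-CDF transfer function: monotone, invertible, bounded in [0, 1]."""
    ranks = np.argsort(np.argsort(x))
    return (ranks + 0.5) / len(x)

y = cdf_transfer(x)

# A uniform output maximizes entropy for a [0, 1]-bounded unit.
counts, _ = np.histogram(y, bins=10, range=(0.0, 1.0))
print(counts / len(y))  # each bin holds ~0.1 of the mass
```

With several such units, the joint output entropy of bounded units is maximal only when the outputs are also statistically independent, which is the factorial code the abstract refers to.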
Information Transmission By Networks Of Non-Linear Neurons
Abstract

Cited by 3 (0 self)
In this paper we considered the problem of maximizing information transfer with a network of neurons made of N inputs and p outputs, focussing on the case of non-linear transfer functions and arbitrary input distributions. We assumed that both the transfer functions and the synaptic efficacies could be adapted to the environment. The main consequence of our analysis is that, in the limit of small additive output noise (and an even smaller input noise), the infomax principle of Linsker implies the redundancy reduction ...
Information Processing by a Noisy Binary Channel
Abstract

Cited by 2 (0 self)
We study the information processing properties of a binary channel receiving data from a Gaussian source. A systematic comparison with linear processing is done. A remarkable property of the binary system is that, as the ratio α between the number of output and input units increases, binary processing becomes equivalent to linear processing with a quantization output noise that depends on α. In this regime, which holds up to O(α⁻⁴), information processing occurs as if populations of α binary units cooperate to represent one α-bit output unit. Unsupervised learning of a noisy environment by optimization of the parameters of the binary channel is also considered.
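The equivalence described above can be sketched numerically: encoding each Gaussian sample with α threshold units and decoding from the population count behaves like a linear channel whose quantization noise shrinks as α grows. A rough illustration (the quantile thresholds, midpoint decoder, and sample sizes are assumptions of this sketch, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50_000)  # gaussian source, one scalar per input

def binary_channel_mse(x, alpha):
    """Encode each scalar with `alpha` threshold units, decode from the bit count."""
    # Thresholds at the source's empirical alpha-quantiles.
    thresholds = np.quantile(x, (np.arange(alpha) + 0.5) / alpha)
    bits = x[:, None] > thresholds[None, :]   # alpha bits per input sample
    counts = bits.sum(axis=1)                 # population count in 0..alpha
    # Decode: map each count to a representative level of its quantile cell.
    levels = np.quantile(x, (np.arange(alpha + 1) + 0.5) / (alpha + 1))
    x_hat = levels[counts]
    return float(np.mean((x - x_hat) ** 2))

errs = [binary_channel_mse(x, a) for a in (2, 8, 32)]
print(errs)  # effective quantization noise shrinks as alpha grows
```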
The Partitioning Problem in Unsupervised Learning for Non-Linear Neurons
 J. Phys. A
, 1995
Abstract

Cited by 1 (1 self)
A closed-form solution is found for the learning dynamics of a non-linear Hebbian neuron presented with orthogonal patterns at different rates. The basins of attraction for each pattern are calculated as a function of the probability of the patterns being presented. A sample-independent probability for learning a pattern is found in the limit of a large number of patterns. This function is, to a good approximation, proportional to the probability of the pattern being presented raised to a power, which depends strongly on the total number of patterns and on the non-linearity of the response of the neuron to a stimulus. There is also a weak dependence on the distributions of the probabilities of the patterns being presented. The implications of this work for more realistic situations are discussed.

1 Introduction

Unsupervised learning is an important area of research in the field of neural networks. By using a Hebbian or Hebbian-like mechanism, and allowing simple interactions betwee...
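The setup in this abstract can be caricatured with a normalized non-linear Hebbian rule on orthogonal patterns presented at different rates; the simulation below (the cubic non-linearity, learning rate, and pattern probabilities are illustrative assumptions, not the paper's exact model) shows the winner-take-all dynamics that carve out a basin of attraction per pattern:

```python
import numpy as np

rng = np.random.default_rng(2)

patterns = np.eye(4)                         # 4 orthogonal patterns
probs = np.array([0.55, 0.25, 0.15, 0.05])   # unequal presentation rates
eta = 0.05

w = rng.normal(size=4)
w /= np.linalg.norm(w)
for _ in range(5_000):
    xi = patterns[rng.choice(4, p=probs)]    # sample a pattern by its rate
    h = w @ xi
    w += eta * (h ** 3) * xi                 # non-linear (cubic) Hebbian update
    w /= np.linalg.norm(w)                   # normalization keeps |w| = 1

overlaps = np.abs(patterns @ w)
print(overlaps.round(3))  # the weight vector aligns with a single pattern
```

Which pattern wins depends on both the random initial condition and the presentation probabilities; averaging over many initializations would estimate the pattern-learning probabilities the abstract discusses.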