Results 1 - 5 of 5
Non Linear Neurons in the Low Noise Limit: A Factorial Code Maximizes Information Transfer
, 1994
Abstract

Cited by 141 (18 self)
We investigate the consequences of maximizing information transfer in a simple neural network (one input layer, one output layer), focusing on the case of non-linear transfer functions. We assume that both receptive fields (synaptic efficacies) and transfer functions can be adapted to the environment. The main result is that, for bounded and invertible transfer functions, in the case of a vanishing additive output noise and no input noise, maximization of information (Linsker's infomax principle) leads to a factorial code, hence to the same solution as required by the redundancy reduction principle of Barlow. We show also that this result remains valid for linear, and more generally unbounded, transfer functions, provided optimization is performed under an additive constraint, that is, one which can be written as a sum of terms, each specific to one output neuron. Finally, we study the effect of a non-zero input noise. We find that, at first order in the input noise, assumed to be small ...
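The link between infomax and a factorial code can be illustrated numerically: in the noiseless, invertible limit, I(X;Y) = H(Y), and the joint output entropy is bounded by the sum of the marginal entropies, with equality exactly when the outputs are independent. The following sketch uses made-up joint distributions over two binary output neurons to show this bound (not taken from the paper itself):

```python
# Illustrative sketch: H(Y) <= H(Y1) + H(Y2), with equality exactly when
# the code is factorial (outputs independent). All numbers are invented
# for the demonstration.
import math
from itertools import product

def entropy(probs):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Two joint codes over a pair of binary output neurons.
correlated = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
factorial = {y: 0.25 for y in product((0, 1), repeat=2)}

def joint_vs_marginals(joint):
    h_joint = entropy(joint.values())
    h_marg = sum(
        entropy([sum(p for y, p in joint.items() if y[i] == v) for v in (0, 1)])
        for i in range(2)
    )
    return h_joint, h_marg

# correlated: H(Y) < H(Y1) + H(Y2); factorial: both equal 2 bits.
for name, joint in (("correlated", correlated), ("factorial", factorial)):
    h, hm = joint_vs_marginals(joint)
    print(f"{name}: H(Y) = {h:.3f} bits, sum of marginals = {hm:.3f} bits")
```

Maximizing the transmitted information thus pushes the joint entropy up toward the marginal sum, which is reached only by the factorial code.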
Information Processing by a Perceptron in an Unsupervised Learning Task
, 1993
Abstract

Cited by 15 (8 self)
We study the ability of a simple neural network (a perceptron architecture, no hidden units, binary outputs) to process information in the context of an unsupervised learning task. The network is asked to provide the best possible neural representation of a given input distribution, according to some criterion taken from Information Theory. We compare various optimization criteria that have been proposed: maximum information transmission, minimum redundancy, and closeness to a factorial code. We show that for the perceptron one can compute the maximal information that the code (the output neural representation) can convey about the input. We show that one can use Statistical Mechanics techniques, such as the replica method, to compute the typical mutual information between input and output distributions. More precisely, for a Gaussian input source with a given correlation matrix, we compute the typical mutual information when the couplings are chosen randomly. We determine the correl...
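The "maximal information the code can convey" has a simple upper bound in this setting: a noiseless binary unit y = sign(w·x) transmits I(X;Y) = H(Y), at most 1 bit per output, reached when P(y = 1) = 1/2. A Monte-Carlo sketch of that bound (not the paper's replica calculation; the weights, dimensions, and sample counts are arbitrary choices):

```python
# Hedged illustration: for a noiseless binary unit y = sign(w.x) on a
# symmetric Gaussian input, P(y=1) ~ 1/2, so H(Y) ~ 1 bit, the ceiling
# on the information each binary output can convey.
import math
import random

random.seed(0)
N = 10
w = [random.gauss(0, 1) for _ in range(N)]  # randomly chosen couplings

def output(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= 0 else 0

samples = [[random.gauss(0, 1) for _ in range(N)] for _ in range(20000)]
p1 = sum(output(x) for x in samples) / len(samples)
h_out = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))
print(f"P(y=1) ~ {p1:.3f}, H(Y) ~ {h_out:.3f} bits (upper bound on I(X;Y))")
```

The replica computation in the paper goes further, giving the typical mutual information actually achieved for correlated Gaussian inputs, not just this single-unit ceiling.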
Temporal filtering in retinal bipolar cells: Elements of an optimal computation? Biophys. J.
, 1990
Abstract

Cited by 10 (1 self)
Recent experiments indicate that the dark-adapted vertebrate visual system can count photons with a reliability limited by dark noise in the rod photoreceptors themselves. This suggests that subsequent layers of the retina, responsible for signal processing, add little if any excess noise and extract all the available information. Given the signal and noise characteristics of the photoreceptors, what is the structure of such an optimal processor? We show that optimal estimates of time-varying light intensity can be accomplished by a two-stage filter, and we suggest that the first stage should be identified with the filtering which occurs at the first anatomical stage in retinal signal processing, signal transfer from the rod photoreceptor to the bipolar cell. This leads to parameter-free predictions of the bipolar cell response, which are in excellent agreement with experiments comparing rod and bipolar cell dynamics in the same retina. As far as we know, this is the first case in which the computationally significant dynamics of a neuron could be predicted rather than modeled.
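The first stage of such an optimal estimator amounts to filtering the noisy rod output with the known single-photon response, i.e. a matched filter. A minimal sketch of that stage only (illustrative; the pulse shape, noise level, and timing below are invented, and the paper's second, SNR-dependent smoothing stage is omitted):

```python
# Toy matched filter: locate a single-photon event in a noisy rod trace
# by cross-correlating with the assumed single-photon response shape.
import random

random.seed(1)
pulse = [0.2, 0.6, 1.0, 0.6, 0.2]   # assumed single-photon response shape
T, arrival = 50, 20
rod = [0.0] * T
for i, p in enumerate(pulse):       # one photon arriving at t = arrival
    rod[arrival + i] += p
rod = [v + random.gauss(0, 0.05) for v in rod]  # small additive dark noise

def matched_filter(signal, kernel):
    """Cross-correlate the signal with the known pulse shape."""
    k = len(kernel)
    return [sum(signal[t + j] * kernel[j] for j in range(k))
            for t in range(len(signal) - k + 1)]

corr = matched_filter(rod, pulse)
estimate = corr.index(max(corr))
print("estimated arrival time:", estimate)
```

In the paper's picture, this correlation step is the computation performed at the rod-to-bipolar synapse, which is what makes the bipolar cell dynamics predictable without free parameters.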
Duality Between Learning Machines: A Bridge Between Supervised and Unsupervised Learning
, 1992
Abstract

Cited by 10 (4 self)
We exhibit a duality between two perceptrons which allows us to compare the theoretical analysis of supervised and unsupervised learning tasks. The first perceptron has one output and is asked to learn a classification of p patterns. The second (dual) perceptron has p outputs and is asked to transmit as much information as possible about a distribution of inputs. We show in particular that the maximum information that can be stored in the couplings for the supervised learning task is equal to the maximum information that can be transmitted by the dual perceptron. * Laboratoire associé au C.N.R.S. (U.R.A. 1306), à l'E.N.S. et aux Universités Paris VI et Paris VII. 1 Introduction. Supervised and unsupervised learning are the two main research themes in the study of formal neural networks. In the first case, one is given a set of input-output pairs which have to be learned by a neural network (usually of a given architecture). One may be interested in the performance of the network as ...
Information Transmission By Networks Of Non Linear Neurons
Abstract

Cited by 1 (0 self)
this paper we considered the problem of maximizing information transfer with a network of neurons made of N inputs and p outputs, focusing on the case of non-linear transfer functions and arbitrary input distributions. We assumed that both the transfer functions and the synaptic efficacies could be adapted to the environment. The main consequence of our analysis is that, in the limit of small additive output noise (and an even smaller input noise), the infomax principle of Linsker implies the redundancy reduction