## Non Linear Neurons in the Low Noise Limit: A Factorial Code Maximizes Information Transfer (1994)

Citations: 140 (18 self)

### BibTeX

    @MISC{Nadal94nonlinear,
      author = {Jean-Pierre Nadal and Nestor Parga},
      title  = {Non Linear Neurons in the Low Noise Limit: A Factorial Code Maximizes Information Transfer},
      year   = {1994}
    }

### Abstract

We investigate the consequences of maximizing information transfer in a simple neural network (one input layer, one output layer), focussing on the case of nonlinear transfer functions. We assume that both receptive fields (synaptic efficacies) and transfer functions can be adapted to the environment. The main result is that, for bounded and invertible transfer functions, in the case of a vanishing additive output noise and no input noise, maximization of information (Linsker's infomax principle) leads to a factorial code — hence to the same solution as required by the redundancy reduction principle of Barlow. We also show that this result is valid for linear, and more generally unbounded, transfer functions, provided optimization is performed under an additive constraint, i.e., one that can be written as a sum of terms, each specific to one output neuron. Finally, we study the effect of a non-zero input noise. We find that, at first order in the input noise, assumed to be small ...
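The core of the noiseless result can be illustrated in one dimension: for an invertible, bounded transfer function with vanishing output noise, mutual information between input and output equals the output entropy, which is maximized when the transfer function equals the input's cumulative distribution function (histogram equalization), making the output uniform. A minimal sketch (not the authors' derivation; the Gaussian input and bin count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)  # assumed input ensemble (Gaussian, for illustration)

# In the zero-noise limit, the entropy-maximizing transfer function is the
# empirical CDF of the input distribution: outputs become uniform on [0, 1].
xs = np.sort(x)

def transfer(v):
    """Empirical-CDF transfer function adapted to the input statistics."""
    return np.searchsorted(xs, v) / len(xs)

y = transfer(x)

# Empirical output entropy over a histogram; a uniform output attains the
# maximum value log(n_bins).
n_bins = 32
counts, _ = np.histogram(y, bins=n_bins, range=(0.0, 1.0))
p = counts / counts.sum()
entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))
print(f"output entropy = {entropy:.3f}, maximum = {np.log(n_bins):.3f}")
```

In the multi-neuron case, the analogous optimum additionally requires the joint output distribution to factorize across neurons — the factorial code of the abstract.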