Results 1–8 of 8
Homeostasis And Learning Through Spike-Timing-Dependent Plasticity
, 2004
Abstract

Cited by 14 (0 self)
Synaptic plasticity is thought to be the neuronal correlate of learning. Moreover, modification of synapses contributes to the activity-dependent homeostatic maintenance of neurons and neural networks. In this chapter, we review theories of synaptic plasticity and show that both homeostatic control of activity and detection of correlations in the presynaptic input can arise from spike-timing-dependent plasticity (STDP). Relations to classical rate-based Hebbian learning are discussed.
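The pair-based STDP rule this chapter reviews can be sketched as follows. This is a minimal illustration with exponential learning windows; the time constants and amplitudes below are assumed values for demonstration, not parameters from the chapter (a slight bias toward depression is one common way such rules keep firing rates homeostatically stable):

```python
import math

# Assumed illustrative constants (not from the reviewed chapter):
TAU_PLUS, TAU_MINUS = 20.0, 20.0   # learning-window time constants, ms
A_PLUS, A_MINUS = 0.01, 0.012      # depression slightly outweighs potentiation

def stdp_update(w, t_pre, t_post, w_max=1.0):
    """Return the new weight after one pre/post spike pairing.

    Pre before post (dt > 0) potentiates the synapse; post before pre
    depresses it. The weight is kept in [0, w_max].
    """
    dt = t_post - t_pre
    if dt > 0:      # causal pairing: long-term potentiation
        w += A_PLUS * math.exp(-dt / TAU_PLUS)
    else:           # anti-causal pairing: long-term depression
        w -= A_MINUS * math.exp(dt / TAU_MINUS)
    return min(max(w, 0.0), w_max)
```

With these constants a neuron firing at high rates accumulates more anti-causal pairings, so its input weights drift downward, which is the homeostatic effect the abstract refers to.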
Learning With Bounded Synapses Generates Synaptic Democracy and Balanced Neurons
, 2003
Abstract
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memory. Here we show that this forgetting can be avoided by additional simple constraints. We consider Hebbian plasticity of excitatory synapses which modifies a synapse only if the postsynaptic response does not match the desired output. With this learning rule the original memory capacity with unbounded weights is regained, provided there is (1) some global inhibition, (2) a small learning rate, and (3) a small neuronal threshold. We prove, in the form of a generalized perceptron convergence theorem, that under these constraints a neuron learns to classify any linearly separable set of patterns. The maximal storage capacity is also reestablished if the synapses are distributed over a spatially extended dendritic tree, provided that distal synapses are allowed to attain stronger weights. After successful learning, excitation will roughly balance inhibition. Moreover, learning a large number of patterns drives the synapses to acquire similar strengths when measured at the soma. The fact that synapses saturate has the additional benefit that non-separable patterns, e.g., similar patterns with contradicting outputs, eventually generate a subthreshold response, and therefore silence neurons that cannot provide any information.
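The three constraints in this abstract (global inhibition, small learning rate, small threshold) combined with the "modify only on mismatch" rule can be sketched as a perceptron-style training loop with bounded excitatory weights. All parameter values below (`eta`, `inhibition`, `theta`, the weight bound) are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_bounded_perceptron(patterns, targets, eta=0.01, w_max=1.0,
                             inhibition=0.5, theta=0.05, epochs=200):
    """Hebbian learning on bounded excitatory synapses.

    A synapse is modified only when the postsynaptic response does not
    match the desired output (learn only when necessary). Parameter
    values are illustrative, not taken from the paper.
    """
    n = patterns.shape[1]
    w = rng.uniform(0.0, w_max, n)            # bounded excitatory weights
    for _ in range(epochs):
        errors = 0
        for x, y in zip(patterns, targets):
            # excitation minus global inhibition, against a small threshold
            out = 1 if w @ x - inhibition * x.sum() > theta else 0
            if out != y:                      # update only on mismatch
                errors += 1
                w += eta * (2 * y - 1) * x    # Hebbian step, signed by target
                np.clip(w, 0.0, w_max, out=w) # synapses stay bounded
        if errors == 0:                        # all patterns classified
            break
    return w
```

Because the decision is made on the small difference between excitation and global inhibition, the learned weights end up roughly balancing the inhibition, which mirrors the balance the abstract describes.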
unknown title
Abstract
This paper is made available online in accordance with publisher policies. Please scroll down to view the document itself. Please refer to the repository record for this item and our policy information available from the repository home page for further information. To see the final version of this paper please visit the publisher’s website. Access to the published version may require a subscription.
Learning Only When Necessary: Better Memories of Correlated Patterns in Networks with Bounded Synapses (Letter, communicated by Misha Tsodyks)
Abstract
Learning in a neuronal network is often thought of as a linear superposition of synaptic modifications induced by individual stimuli. However, since biological synapses are naturally bounded, a linear superposition would cause fast forgetting of previously acquired memories. Here we show that this forgetting can be avoided by introducing additional constraints on the synaptic and neural dynamics. We consider Hebbian plasticity of excitatory synapses. A synapse is modified only if the postsynaptic response does not match the desired output. With this learning rule, the original memory performances with unbounded weights are regained, provided that (1) there is some global inhibition, (2) the learning rate is small, and (3) the neurons can discriminate small differences in the total synaptic input (e.g., by making the neuronal threshold small compared to the total postsynaptic input). We prove in the form of a generalized perceptron convergence theorem that under these constraints, a neuron learns to classify any linearly separable set of patterns, including a wide class of highly correlated random patterns. During the learning process, excitation becomes roughly balanced by inhibition, and the neuron classifies the patterns on the basis of small differences around this balance. The fact that synapses saturate has the additional benefit that nonlinearly separable patterns, such as similar patterns with contradicting outputs, eventually generate a subthreshold response, and therefore silence neurons that cannot provide any information.
Synaptic plasticity
, 2009
Abstract
Keywords: Information transfer; Neural computation; Learning algorithm; Dynamical system. ... update rules are sometimes used, but they are still typically local ... may also use spike-timing-dependent rules, but these are also ... on previous learning. We have proposed that the basic task of the