Results 1–10 of 142
Dynamics of learning and recall at excitatory recurrent synapses and cholinergic modulation in rat hippocampal region CA3
 J. Neurosci
, 1995
Abstract

Cited by 83 (10 self)
Hippocampal region CA3 contains strong recurrent excitation mediated by synapses of the longitudinal association fibers. These recurrent excitatory connections may play a dominant role in determining the information processing characteristics of this region. However, they result in feedback dynamics that may cause both runaway excitatory activity and runaway synaptic modification. Previous models of recurrent excitation have prevented unbounded activity using biologically unrealistic techniques. Here, the activation of feedback inhibition is shown to prevent unbounded activity, allowing stable activity states during recall and learning. In the model, cholinergic suppression of synaptic transmission at excitatory feedback synapses is shown to determine the extent to which activity depends upon new features of the afferent input versus components
The Role of Constraints in Hebbian Learning
 NEURAL COMPUTATION
, 1994
Abstract

Cited by 63 (4 self)
Models of unsupervised correlation-based (Hebbian) synaptic plasticity are typically unstable: either all synapses grow until each reaches the maximum allowed strength, or all synapses decay to zero strength. A common method of avoiding these outcomes is to use a constraint that conserves or limits the total synaptic strength over a cell. We study the dynamical effects of such constraints. Two methods of enforcing a constraint are distinguished, multiplicative and subtractive. For otherwise linear learning rules, multiplicative enforcement of a constraint results in dynamics that converge to the principal eigenvector of the operator determining unconstrained synaptic development. Subtractive enforcement, in contrast, typically leads to a final state in which almost all synaptic strengths reach either the maximum or minimum allowed value. This final state is often dominated by weight configurations other than the principal eigenvector of the unconstrained operator. Multiplica...
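The contrast between the two enforcement schemes can be sketched in a few lines of NumPy; the correlation matrix, learning rate, and weight bounds below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy input correlation matrix (illustrative): inputs 1-2 are strongly
# correlated, inputs 3-4 only weakly.
C = np.array([[1.0, 0.8, 0.0, 0.0],
              [0.8, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.4],
              [0.0, 0.0, 0.4, 1.0]])

def hebbian(w, C, eta=0.01, steps=5000, constraint="multiplicative"):
    """Linear Hebbian rule dw = eta * C @ w, with the total synaptic
    strength over the cell held fixed by the chosen constraint."""
    w = w.copy()
    s = w.sum()                                   # conserved total strength
    for _ in range(steps):
        dw = eta * C @ w
        if constraint == "multiplicative":
            w = (w + dw) * s / (w + dw).sum()     # rescale all weights
        else:                                     # subtractive
            w = np.clip(w + dw - dw.mean(), 0.0, 1.0)
    return w

w0 = rng.uniform(0.2, 0.4, size=4)
w_mult = hebbian(w0, C, constraint="multiplicative")
w_sub = hebbian(w0, C, constraint="subtractive")

# Multiplicative enforcement converges to the principal eigenvector of C...
v = np.linalg.eigh(C)[1][:, -1]                   # principal eigenvector
v = np.abs(v) * w0.sum() / np.abs(v).sum()        # same sign and total strength
print("multiplicative:", np.round(w_mult, 3))
# ...while subtractive enforcement saturates weights at the bounds.
print("subtractive:   ", np.round(w_sub, 3))
```

Running this shows the multiplicative weights matching the (sum-normalized) principal eigenvector, while the subtractive weights end pinned at the maximum or minimum allowed value.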
Statistically Efficient Estimation Using Population Coding
, 1998
Abstract

Cited by 57 (9 self)
Coarse codes are widely used throughout the brain to encode sensory and motor variables. Methods designed to interpret these codes, such as population vector analysis, are either inefficient (the variance of the estimate is much larger than the smallest possible variance) or biologically implausible, like maximum likelihood. Moreover, these methods attempt to compute a scalar or vector estimate of the encoded variable. Neurons are faced with a similar estimation problem. They must read out the responses of the presynaptic neurons, but, by contrast, they typically encode the variable with a further population code rather than as a scalar. We show how a nonlinear recurrent network can be used to perform estimation in a near-optimal way while keeping the estimate in a coarse code format. This work suggests that lateral connections in the cortex may be involved in cleaning up uncorrelated noise among neurons representing similar variables.
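As a toy illustration of coarse coding and its classic population-vector readout (the tuning curves, Poisson noise model, and population size below are assumptions for illustration, not the paper's recurrent network):

```python
import numpy as np

rng = np.random.default_rng(1)

# 64 neurons with half-wave-rectified cosine tuning to a circular variable,
# preferred angles tiling the circle, and Poisson spiking noise.
N = 64
pref = np.linspace(0.0, 2 * np.pi, N, endpoint=False)

def rates(theta, gain=20.0):
    """Mean firing rates of the population for stimulus angle theta."""
    return gain * np.maximum(np.cos(theta - pref), 0.0)

def population_vector(counts):
    """Population-vector readout: vector sum of preferred directions
    weighted by spike counts, collapsed to a single scalar angle."""
    return np.arctan2((counts * np.sin(pref)).sum(),
                      (counts * np.cos(pref)).sum())

theta_true = 1.3
estimates = np.array([population_vector(rng.poisson(rates(theta_true)))
                      for _ in range(2000)])
errors = estimates - theta_true
print(f"bias: {errors.mean():+.4f}  std: {errors.std():.4f}")
```

Note that the readout reduces the code to a scalar, which is exactly the step the abstract argues real neurons avoid by re-encoding the estimate in a further population code.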
An analysis of the war of attrition and the all-pay auction
 Journal of Economic Theory
, 1997
Abstract

Cited by 55 (1 self)
We study the war of attrition and the all-pay auction when players' signals are affiliated and symmetrically distributed. We (a) find sufficient conditions for the existence of symmetric monotonic equilibrium bidding strategies; and (b) examine the performance of these auction forms in terms of the expected revenue accruing to the seller. Under our conditions the war of attrition raises greater expected revenue than all other known sealed-bid auction forms.
Extracting and Representing Qualitative Behaviors of Complex Systems in Phase Spaces
, 1991
Abstract

Cited by 45 (16 self)
This paper describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology. Support for the Laboratory's artificial intelligence research is provided in part by the Advanced Research Projects Agency of the Department of Defense under Office of Naval Research contract N00014-89-J-3202, and in part by National Science Foundation grant MIP-9001651. The author is also supported by a G.Y. Chu Fellowship.
Uniqueness and Existence of Equilibrium in Auctions with a Reserve Price
 Games and Economic Behavior
, 2000
Abstract

Cited by 34 (1 self)
We prove existence and uniqueness of equilibrium for a general class of two-player bidding games. We apply our results to the first price auction, the combination of first and second price auctions, the war of attrition, the all-pay auction, as well as combinations of the latter two auction forms. We also treat the first price auction without risk neutrality. Our results deal with the asymmetric, affiliated common values environment. In the case where signals are independent our results apply to all equilibria. When signals are not independent, our uniqueness results hold in the class of nondecreasing strategy equilibria.
A Survey of Continuous-Time Computation Theory
 Advances in Algorithms, Languages, and Complexity
, 1997
Abstract

Cited by 29 (6 self)
Motivated partly by the resurgence of neural computation research, and partly by advances in device technology, there has been a recent increase of interest in analog, continuous-time computation. However, while special-case algorithms and devices are being developed, relatively little work exists on the general theory of continuous-time models of computation. In this paper, we survey the existing models and results in this area, and point to some of the open research questions.
1 Introduction
After a long period of oblivion, interest in analog computation is again on the rise. The immediate cause for this new wave of activity is surely the success of the neural networks "revolution", which has provided hardware designers with several new numerically based, computationally interesting models that are structurally sufficiently simple to be implemented directly in silicon. (For designs and actual implementations of neural models in VLSI, see e.g. [30, 45]). However, the more fundamental...
Geometric Integration Using Discrete Gradients
, 1998
Abstract

Cited by 28 (15 self)
This paper discusses the discrete analogue of the gradient of a function and shows how discrete gradients can be used in the numerical integration of ordinary differential equations (ODEs). Given an ODE and one or more first integrals (i.e., constants of the motion) and/or Lyapunov functions, it is shown that the ODE can be rewritten as a `linear-gradient system.' Discrete gradients are used to construct discrete approximations to the ODE which preserve the first integrals and Lyapunov functions exactly. The method applies to all Hamiltonian, Poisson, and gradient systems, and also to many dissipative systems (those with a known first integral or Lyapunov function).
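A minimal sketch of the construction for the simplest case, a quadratic first integral (the harmonic oscillator); the example system and step size are illustrative, not taken from the paper:

```python
import numpy as np

# Harmonic oscillator H(q, p) = (q^2 + p^2)/2 written as a linear-gradient
# system dz/dt = S grad H(z) with skew-symmetric S (a Hamiltonian system).
S = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def H(z):
    return 0.5 * (z @ z)

def step(z, h):
    """One step of a discrete-gradient scheme.  For quadratic H the
    mean-value discrete gradient is grad H at the midpoint, (z + z')/2,
    so the update is the linear solve (I - h S/2) z' = (I + h S/2) z."""
    I = np.eye(2)
    return np.linalg.solve(I - 0.5 * h * S, (I + 0.5 * h * S) @ z)

z = np.array([1.0, 0.0])    # start with H(z) = 0.5
h = 0.1
energies = []
for _ in range(1000):
    z = step(z, h)
    energies.append(H(z))

# The first integral H is preserved exactly (up to roundoff), independent
# of the step size h.
print("max energy drift:", max(abs(e - 0.5) for e in energies))
```

For this quadratic case the scheme coincides with the implicit midpoint rule; for general H a genuine discrete gradient (e.g. the mean-value or coordinate-increment form) replaces the midpoint evaluation and the update becomes a nonlinear solve.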
Language Evolution by Iterated Learning With Bayesian Agents
, 2007
Abstract

Cited by 25 (6 self)
Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
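The sampling case can be simulated directly. In the following toy model (three languages, a simple noisy-utterance likelihood, and an arbitrary non-uniform prior, all illustrative assumptions) the chain's language frequencies converge to the shared prior:

```python
import numpy as np

rng = np.random.default_rng(2)

# The learners' shared prior is deliberately non-uniform so that
# convergence to it is visible.
prior = np.array([0.6, 0.3, 0.1])
K, n, eps = 3, 1, 0.5       # languages, utterances per generation, noise

def produce(lang):
    """A teacher of language `lang` emits n utterances: faithful with
    probability 1-eps, otherwise uniform noise over all K languages."""
    out = rng.integers(0, K, size=n)
    out[rng.random(n) >= eps] = lang
    return out

def learn(data):
    """A Bayesian learner samples a language from its posterior."""
    post = prior.copy()
    for u in data:
        lik = np.full(K, eps / K)   # P(u | lang) when lang != u
        lik[u] += 1.0 - eps         # extra mass when the utterance matches
        post = post * lik
    post /= post.sum()
    return rng.choice(K, p=post)

# Iterated learning: each generation acquires its language from the
# previous generation's productions (a Gibbs sampler over languages).
lang, counts = 0, np.zeros(K)
for gen in range(50000):
    lang = learn(produce(lang))
    if gen >= 1000:                 # discard burn-in
        counts[lang] += 1

freq = counts / counts.sum()
print("stationary:", np.round(freq, 3), " prior:", prior)
```

Because sampling learners make the chain reversible with respect to the prior, the empirical frequencies match the prior regardless of which language the first teacher spoke.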
Numerical Analysis of Dynamical Systems
, 1995
Abstract

Cited by 24 (3 self)
This article reviews the application of various notions from the theory of dynamical systems to the analysis of numerical approximation of initial value problems over long time intervals. Standard error estimates comparing individual trajectories are of no direct use in this context since the error constant typically grows like the exponential of the time interval under consideration. Instead of comparing trajectories, the effect of discretization on various sets which are invariant under the evolution of the underlying differential equation is studied. Such invariant sets are crucial in determining long time dynamics. The particular invariant sets which are studied are equilibrium points, together with their unstable manifolds and local phase portraits, periodic solutions, quasiperiodic solutions and strange attractors. Particular attention is paid to the development of a unified theory and to the development of an existence theory for invariant sets of the underlying differential equation which may be used directly to construct an analogous existence theory (and hence a simple approximation theory) for the numerical method. To appear in Acta Numerica 1994, Cambridge University Press.