
CiteSeerX

Results 1 - 10 of 28,645

Application of the Hindmarsh-Rose Neural Model in Electronic Circuits

by Daniel Terence Debolt, 2011
"... in electronic circuits ..."

Desynchronization of Systems of Coupled Hindmarsh-Rose Oscillators

by A Gjurchinovski , V Urumov , Z Vasilkoski , 2011
"... Abstract. It is widely assumed that neural activity related to synchronous rhythms of large portions of neurons in specific locations of the brain is responsible for the pathology manifested in patients' uncontrolled tremor and other similar diseases. To model such systems Hindmarsh-Rose (HR) ..."

Biological Experimental Observations of an Unnoticed Chaos as Simulated by the Hindmarsh-Rose Model

by Huaguang Gu , 2013
"... An unnoticed chaotic firing pattern, lying between period-1 and period-2 firing patterns, has received little attention over the past 20 years since it was first simulated in the Hindmarsh-Rose (HR) model. In the present study, the rat sciatic nerve model of chronic constriction injury (CCI) was use ..."
Cited by 1 (0 self)

Techniques for temporal dynamics of neuronal systems: the Hindmarsh-Rose model

by Roberto Barrio , Andrey Shilnikov
"... Abstract A phenomenological system of ODEs proposed by Hindmarsh and Rose [2] for modeling bursting and spiking oscillatory activities in isolated neurons is given by x' = y − ax³ + bx² − z + I, y' = c − dx² − y, z' = r(s(x − x₀) − z); here, x is treated as the membrane potential, while y and z describe some fast and slow gating variables for ionic currents, resp ..."
, and x₀ being viewed as a control parameter delaying and advancing the activation of the slow current in the modeled neuron. In this talk we analyze metamorphoses of oscillatory activities, such as plateau-like and square-wave bursting in the Hindmarsh-Rose model by using a complementary suite
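The two snippets above refer to the standard three-variable Hindmarsh-Rose system. A minimal forward-Euler simulation sketch follows; the parameter values are the commonly cited ones for the bursting regime and are assumed here, not taken from the abstract:

```python
import numpy as np

# Commonly used Hindmarsh-Rose parameters (assumed, not given in the abstract)
a, b, c, d = 1.0, 3.0, 1.0, 5.0
r, s, x0, I = 0.001, 4.0, -1.6, 2.0

def hr_step(x, y, z, dt):
    """One forward-Euler step of the three-variable Hindmarsh-Rose system."""
    dx = y - a * x**3 + b * x**2 - z + I   # x: membrane potential
    dy = c - d * x**2 - y                  # y: fast gating variable
    dz = r * (s * (x - x0) - z)            # z: slow gating variable
    return x + dt * dx, y + dt * dy, z + dt * dz

# Integrate long enough for the slow variable z to produce bursts in x
x, y, z = -1.6, -10.0, 2.0
trace = []
for _ in range(200_000):
    x, y, z = hr_step(x, y, z, dt=0.01)
    trace.append(x)
trace = np.array(trace)
print(trace.min(), trace.max())
```

Plotting `trace` against time shows the alternation between quiescent intervals and spike bursts that the entries above analyze; a stiff ODE solver would be preferable to Euler for serious bifurcation work.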

Codimension-two homoclinic bifurcations underlying spike adding in the Hindmarsh-Rose burster

by Daniele Linaro, Alan Champneys, Mathieu Desroches, Marco Storace - SIAM J. Appl. Dyn. Syst
"... The well-studied Hindmarsh-Rose model of neural action potential is revisited from the point of view of global bifurcation analysis. This slow-fast system of three parameterised differential equations is arguably the simplest reduction of Hodgkin-Huxley models capable of exhibiting all qualitatively ..."
Cited by 2 (0 self)

ImageNet classification with deep convolutional neural networks.

by Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton - In Advances in Neural Information Processing Systems, 2012
"... Abstract We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0% which is considerably better than the pr ..."
Cited by 1010 (11 self)

Active Learning with Statistical Models

by David A. Cohn, Zoubin Ghahramani, Michael I. Jordan , 1995
"... For many types of learners one can compute the statistically "optimal" way to select data. We review how these techniques have been used with feedforward neural networks [MacKay, 1992; Cohn, 1994]. We then show how the same principles may be used to select data for two alternative, statist ..."
Cited by 679 (10 self)

A Model of Saliency-based Visual Attention for Rapid Scene Analysis

by Laurent Itti, Christof Koch, Ernst Niebur , 1998
"... A visual attention system, inspired by the behavior and the neuronal architecture of the early primate visual system, is presented. Multiscale image features are combined into a single topographical saliency map. A dynamical neural network then selects attended locations in order of decreasing salie ..."
Cited by 1748 (72 self)

A Neural Probabilistic Language Model

by Yoshua Bengio, Réjean Ducharme, Pascal Vincent, Christian Jauvin - Journal of Machine Learning Research, 2003
"... A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen ..."
Cited by 447 (19 self)
is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows taking advantage of longer contexts.

Learning to rank using gradient descent

by Chris Burges, Tal Shaked, Erin Renshaw, Matt Deeds, Nicole Hamilton, Greg Hullender - In ICML , 2005
"... We investigate using gradient descent methods for learning ranking functions; we propose a simple probabilistic cost function, and we introduce RankNet, an implementation of these ideas using a neural network to model the underlying ranking function. We present test results on toy data and on data f ..."
Cited by 534 (17 self)
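The abstract above describes a simple probabilistic pairwise cost over score differences. A minimal sketch of that logistic cost follows; the function name and the toy scores are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def pairwise_cost(s_i, s_j, p_target):
    """Cross-entropy cost on the score difference s_i - s_j.
    p_target is the known probability that item i should rank above item j;
    the modeled probability is the logistic function of the score difference."""
    p = 1.0 / (1.0 + np.exp(-(s_i - s_j)))
    return -p_target * np.log(p) - (1.0 - p_target) * np.log(1.0 - p)

# Toy example: when the label says i outranks j (p_target = 1.0),
# agreeing scores incur a small cost and contradicting scores a large one.
agree = pairwise_cost(2.0, 0.5, p_target=1.0)
disagree = pairwise_cost(0.5, 2.0, p_target=1.0)
print(agree, disagree)
```

In the RankNet setup this cost is differentiable in the scores, so a neural network producing the scores can be trained on it by ordinary gradient descent.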

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University