Results 1–10 of 3,087
Nonlinear EEG analysis based on a neural mass model
 Biol. Cybern. 81, 415–424
, 1999
"... Abstract. The well-known neural mass model described ..."
Loopy belief propagation for approximate inference: An empirical study
 In: Proceedings of Uncertainty in AI
, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" the use of Pearl's polytree algorithm in a Bayesian network with loops can perform well in the context of errorcorrecting codes. The most dramatic instance of this is the near Shannonlimit performanc ..."
Abstract

Cited by 676 (15 self)
Introduction. The task of calculating posterior marginals on nodes in an arbitrary Bayesian network is known to be NP-hard. In this paper we investigate the approximation performance of "loopy belief propagation". This refers to using the well-known Pearl polytree algorithm [12] on a Bayesian network
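The abstract above describes loopy belief propagation: running Pearl's sum-product polytree updates on a graph that may contain cycles and checking whether the messages converge to useful marginals. A minimal pure-Python sketch of the idea (the pairwise-MRF representation, potentials, and iteration count here are illustrative choices, not taken from the paper):

```python
def loopy_bp(unaries, pairwise, edges, n_iters=50):
    """Sum-product loopy belief propagation on a pairwise MRF.

    unaries:  dict node -> list of nonnegative unary potentials
    pairwise: dict (i, j) -> matrix psi, indexed psi[x_i][x_j]
    edges:    list of (i, j) tuples
    Returns approximate marginals: dict node -> distribution.
    """
    nbrs = {i: [] for i in unaries}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    k = {i: len(u) for i, u in unaries.items()}
    # msg[(i, j)][x_j]: message from node i to node j, start uniform
    msg = {}
    for i, j in edges:
        msg[(i, j)] = [1.0 / k[j]] * k[j]
        msg[(j, i)] = [1.0 / k[i]] * k[i]

    def psi(i, j, xi, xj):
        # look up the pairwise potential in whichever orientation it is stored
        if (i, j) in pairwise:
            return pairwise[(i, j)][xi][xj]
        return pairwise[(j, i)][xj][xi]

    for _ in range(n_iters):
        new = {}
        for (i, j) in msg:
            out = []
            for xj in range(k[j]):
                s = 0.0
                for xi in range(k[i]):
                    p = unaries[i][xi] * psi(i, j, xi, xj)
                    for n in nbrs[i]:
                        if n != j:           # all incoming messages except from j
                            p *= msg[(n, i)][xi]
                    s += p
                out.append(s)
            z = sum(out)
            new[(i, j)] = [v / z for v in out]
        msg = new                            # synchronous update

    beliefs = {}
    for i in unaries:
        b = []
        for xi in range(k[i]):
            p = unaries[i][xi]
            for n in nbrs[i]:
                p *= msg[(n, i)][xi]
            b.append(p)
        z = sum(b)
        beliefs[i] = [v / z for v in b]
    return beliefs
```

On a tree the fixed point of these updates gives exact marginals; on graphs with loops the beliefs are only approximations, and how good that approximation is in practice is what the paper studies empirically.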
Regularization Theory and Neural Networks Architectures
 Neural Computation
, 1995
"... We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Ba ..."
Abstract

Cited by 395 (32 self)
We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial
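As a concrete instance of the claim in this abstract, a regularization network with a Gaussian (radial basis) kernel amounts to one hidden layer of kernel units whose coefficients solve a single regularized linear system. A self-contained sketch (the kernel width, regularization strength, and naive elimination solver are illustrative choices, not the paper's):

```python
import math

def gauss_solve(A, b):
    """Naive Gaussian elimination with partial pivoting for A x = b."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_rbf(xs, ys, lam=1e-3, sigma=1.0):
    """Regularization network f(x) = sum_j c_j G(x - x_j), Gaussian G.
    The smoothness functional leads to coefficients solving
    (K + lam * I) c = y, with K the kernel (Gram) matrix."""
    n = len(xs)
    K = [[math.exp(-(xs[i] - xs[j]) ** 2 / (2 * sigma ** 2))
          for j in range(n)] for i in range(n)]
    for i in range(n):
        K[i][i] += lam                      # regularization term on the diagonal
    c = gauss_solve(K, list(ys))

    def f(x):
        return sum(c[j] * math.exp(-(x - xs[j]) ** 2 / (2 * sigma ** 2))
                   for j in range(n))
    return f
```

With lam near zero this interpolates the data; increasing lam trades data fit for smoothness, which is the regularization-theory view the paper develops.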
A Unifying Review of Linear Gaussian Models
, 1999
"... Factor analysis, principal component analysis, mixtures of gaussian clusters, vector quantization, Kalman filter models, and hidden Markov models can all be unified as variations of unsupervised learning under a single basic generative model. This is achieved by collecting together disparate observa ..."
Abstract

Cited by 351 (18 self)
that factor analysis and mixtures of gaussians can be implemented in autoencoder neural networks and learned using squared error plus the same regularization term. We introduce a new model for static data, known as sensible principal component analysis, as well as a novel concept of spatially adaptive
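The unifying point of this review can be made concrete in one dimension: factor analysis, PCA, and the Kalman filter all instantiate the same linear-Gaussian generative model, differing only in whether the latent state has dynamics. A scalar Kalman filter sketch (parameter names follow the standard state-space convention; none of the specific values come from the paper):

```python
def kalman_filter_1d(ys, a, c, q, r, mu0, p0):
    """Scalar linear-Gaussian state-space model:
        z_t = a * z_{t-1} + w,  w ~ N(0, q)   (dynamics)
        y_t = c * z_t     + v,  v ~ N(0, r)   (observation)
    Dropping the dynamics (a = 0) recovers the static factor-analysis
    case; this shared structure is the point of the unifying view.
    Returns the filtered means and variances of z_t given y_1..y_t.
    """
    mu, p = mu0, p0
    means, variances = [], []
    for y in ys:
        # predict step
        mu_pred = a * mu
        p_pred = a * a * p + q
        # update step
        k = p_pred * c / (c * c * p_pred + r)   # Kalman gain
        mu = mu_pred + k * (y - c * mu_pred)
        p = (1 - k * c) * p_pred
        means.append(mu)
        variances.append(p)
    return means, variances
```

The gain k interpolates between trusting the model prediction (large r) and trusting the observation (small r), exactly the inference computation that the static models in the review share.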
Sparse deep belief net model for visual area V2
 Advances in Neural Information Processing Systems 20
, 2008
"... Abstract 1 Motivated in part by the hierarchical organization of the neocortex, a number of recently proposed algorithms have tried to learn hierarchical, or “deep, ” structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed ..."
Abstract

Cited by 164 (19 self)
, similar to the Gabor functions known to model simple cell receptive fields in area V1. Further, the second layer in our model encodes various combinations of the first layer responses in the data. Specifically, it picks up both collinear (“contour”) features as well as corners and junctions. More
Speech sound acquisition, coarticulation, and rate effects in a neural network model of speech production
 Psychological Review
, 1995
"... This article describes a neural network model of speech motor skill acquisition and speech production that explains a wide range of data on variability, motor equivalence, coarticulation, and rate effects. Model parameters are learned during a babbling phase. To explain how infants learn languagesp ..."
Abstract

Cited by 121 (25 self)
control processes for the 2 sound types. Anticipatory coarticulation arises when targets are reduced in size on the basis of context; this generalizes the well-known look-ahead model of coarticulation. Computer simulations verify the model's properties. The primary goal of the modeling work described
A neural mass model for MEG/EEG: coupling and neuronal dynamics
 NeuroImage
, 2003
"... Although MEG/EEG signals are highly variable, systematic changes in distinct frequency bands are commonly encountered. These frequencyspecific changes represent robust neural correlates of cognitive or perceptual processes (for example, alpha rhythms emerge on closing the eyes). However, their func ..."
Abstract

Cited by 81 (21 self)
Although MEG/EEG signals are highly variable, systematic changes in distinct frequency bands are commonly encountered. These frequency-specific changes represent robust neural correlates of cognitive or perceptual processes (for example, alpha rhythms emerge on closing the eyes). However
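The basic building block of neural mass models like this one is a second-order linear synaptic response that turns presynaptic firing into a postsynaptic potential; coupling excitatory and inhibitory populations of such units is what produces the frequency-band dynamics the abstract mentions. A minimal Euler-integration sketch of that building block (the gain A and rate constant a are typical textbook values, not taken from the paper):

```python
def psp_response(u, A=3.25, a=100.0, dt=1e-4):
    """Second-order synaptic dynamics used in neural mass models:
        y'' = A * a * u(t) - 2 * a * y' - a**2 * y,
    equivalent to convolving the input u with the impulse response
    h(t) = A * a * t * exp(-a * t).
    Integrated with semi-implicit Euler at step dt; returns y sampled
    at each step. A (mV) and a (1/s) are illustrative constants.
    """
    y, dy = 0.0, 0.0
    out = []
    for ut in u:
        ddy = A * a * ut - 2 * a * dy - a * a * y
        dy += dt * ddy
        y += dt * dy
        out.append(y)
    return out
```

Driving this kernel with a discrete impulse reproduces h(t): the response rises to a peak of A/e at t = 1/a and then decays, which is the shape that sets the characteristic frequency of the coupled model.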
Bayesian computation in recurrent neural circuits
 Neural Computation
, 2004
"... A large number of human psychophysical results have been successfully explained in recent years using Bayesian models. However, the neural implementation of such models remains largely unclear. In this paper, we show that a network architecture commonly used to model the cerebral cortex can implem ..."
Abstract

Cited by 94 (4 self)
and correctly estimate stimulus orientation in the presence of significant noise. In the case of motion detection, we show that the resulting model network exhibits direction selectivity and correctly computes the posterior probabilities over motion direction and position. When used to solve the well-known
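A toy version of the computation this abstract describes: each unit in a population receives the log-likelihood of its preferred stimulus value, adds a log-prior bias, and divisive normalization across the population converts the activities into a posterior distribution. This is a generic probabilistic population readout sketched for illustration, not the paper's specific recurrent architecture:

```python
import math

def network_posterior(log_likelihoods, log_prior):
    """Each unit i sums its log-likelihood input and a log-prior bias;
    the population activity is then divisively normalized (a softmax),
    so the outputs form P(stimulus_i | input)."""
    activations = [ll + lp for ll, lp in zip(log_likelihoods, log_prior)]
    m = max(activations)                       # shift for numerical stability
    exps = [math.exp(a - m) for a in activations]
    z = sum(exps)                              # divisive normalization pool
    return [e / z for e in exps]
```

The point of the exercise is that Bayes' rule in the log domain is just summation followed by normalization, both operations a recurrent cortical circuit could plausibly implement.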
STATISTICAL LANGUAGE MODELS BASED ON NEURAL NETWORKS
, 2012
"... Statistical language models are crucial part of many successful applications, such as automatic speech recognition and statistical machine translation (for example wellknown ..."
Abstract

Cited by 49 (6 self)
Statistical language models are a crucial part of many successful applications, such as automatic speech recognition and statistical machine translation (for example well-known