Results 1–10 of 37
The Neural Code of the Retina
, 1999
"... this article. the mean light level and discards information about the absolute intensity in the image. The purpose of both The only effective remedy will be to study visual processing under conditions of natural stimulation. There operations would be to remove from the image behaviorhas been a gener ..."
Abstract

Cited by 50 (2 self)
... the mean light level and discards information about the absolute intensity in the image. The purpose of both operations would be to remove from the image behaviorally uninteresting aspects that are mostly dependent on the conditions of illumination or the average structure of the environment, while preserving and emphasizing the differences between objects in the visual scene. One can speculate further that each successive stage of the early visual system adapts to, and consequently discards, what appear to be constants in the neural representation from the previous stage (Barlow, 1990). The only effective remedy will be to study visual processing under conditions of natural stimulation. There has been a general reluctance to use natural images or movies in vision research, mostly due to the seemingly intractable complexity of natural scenes, the need to consider the animal's eye movements, and the obvious bias that results from choosing any one stimulus from such a large set. On the other hand, given the large uncertainties about what actually happens during natural vision, studying the response to even one or a few ...
Neural mechanisms for processing binocular information. II. Complex cells
 J. Neurophysiol
, 1999
"... mechanisms for processing binocular information. I. Simple cells. J. ..."
Abstract

Cited by 39 (4 self)
mechanisms for processing binocular information. I. Simple cells. J.
Fast and slow contrast adaptation in retinal circuitry
 Neuron
, 2002
"... The visual system adapts to the magnitude of intensity fluctuations, and this process begins in the retina. Following the switch from a lowcontrast environment to one of high contrast, ganglion cell sensitivity declines in two distinct phases: a fast change occurs in �0.1 s, and a slow decrease ov ..."
Abstract

Cited by 30 (1 self)
The visual system adapts to the magnitude of intensity fluctuations, and this process begins in the retina. Following the switch from a low-contrast environment to one of high contrast, ganglion cell sensitivity declines in two distinct phases: a fast change occurs in ~0.1 s, and a slow decrease over ~10 s. To examine where these modulations arise, we recorded intracellularly from every major cell type in the salamander retina. Certain bipolar and amacrine cells, and all ganglion cells, adapted to contrast. Generally, these neurons showed both fast and slow adaptation. Fast effects of a contrast increase included accelerated kinetics, decreased sensitivity, and a depolarization of the baseline membrane potential. Slow adaptation did not affect kinetics, but produced a gradual hyperpolarization. This hyperpolarization can account for slow adaptation in the spiking output of ganglion cells.
Blind Inversion of Wiener Systems
 IEEE Trans. on Signal Processing
, 1999
"... A system in which a linear dynamic part is followed by a nonlinear memoryless distortion, a Wiener system, is blindly inverted. This kind of systems can be modelised as a postnonlinear mixture, and using some results about these mixtures, an efficient algorithm is proposed. Results in a hard situati ..."
Abstract

Cited by 18 (7 self)
A system in which a linear dynamic part is followed by a nonlinear memoryless distortion, a Wiener system, is blindly inverted. This kind of system can be modeled as a post-nonlinear mixture, and using some results about these mixtures, an efficient algorithm is proposed. Results in a difficult setting are presented and illustrate the efficiency of this algorithm.
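As a minimal sketch of the cascade this abstract describes, the following simulates a Wiener system (an FIR filter followed by a memoryless distortion) and inverts it. The filter taps and the tanh distortion are invented for the example, and the inverse is applied with both stages known, whereas the paper's algorithm must estimate them blindly from the output alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Wiener system: linear dynamic (FIR) part followed by a memoryless distortion.
h = np.array([1.0, 0.5, 0.25])   # assumed minimum-phase impulse response
f = np.tanh                      # assumed invertible memoryless distortion

s = rng.standard_normal(1000)                 # source signal
v = np.convolve(s, h, mode="full")[:len(s)]   # hidden intermediate signal
x = f(v)                                      # observed output

# Inversion runs the cascade backwards: undo the distortion, then deconvolve.
v_hat = np.arctanh(x)                         # memoryless inverse
s_hat = np.zeros_like(v_hat)
for n in range(len(v_hat)):                   # stable inverse filter 1/H(z)
    s_hat[n] = (v_hat[n] - h[1] * s_hat[n - 1] - h[2] * s_hat[n - 2]) / h[0]

assert np.allclose(s_hat, s, atol=1e-6)
```

The inverse filter recursion is stable here because the assumed `h` is minimum-phase; a blind method cannot rely on that being known in advance.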
Source Separation: From Dusk Till Dawn
"... The first part of this paper is concerned by the history of source separation. It include our comments and those of a few other researchers on the development of this new research field. The second part is focused on recent developments of the separation in nonlinear mixtures. ..."
Abstract

Cited by 15 (4 self)
The first part of this paper is concerned with the history of source separation. It includes our comments and those of a few other researchers on the development of this new research field. The second part focuses on recent developments in the separation of nonlinear mixtures.
Inferring input nonlinearities in neural encoding models
, 2007
"... Draft to be submitted Abstract. We describe a class of models that can be used to predict how the instantaneous firing rate of a neuron varies in response to a dynamic stimulus. These models are based on learned pointwise nonlinear transforms of the stimulus, followed by a temporal linear filtering ..."
Abstract

Cited by 13 (7 self)
We describe a class of models that can be used to predict how the instantaneous firing rate of a neuron varies in response to a dynamic stimulus. These models are based on learned pointwise nonlinear transforms of the stimulus, followed by a temporal linear filtering operation on the transformed inputs. In one case, the transformation is the same for all lag times. Thus, this “input nonlinearity” converts the initial numerical representation of the stimulus (e.g. air pressure) to a new representation which is optimal as input to the subsequent linear model (e.g. decibels). We present algorithms for estimating both the input nonlinearity and the linear weights, including regularization techniques, and for quantifying the experimental uncertainty in these estimates. In another approach, the model is generalized to allow a potentially different nonlinear transform of the stimulus value at each lag time. Although more general, this model is algorithmically more straightforward to fit. However, it contains many more degrees of freedom, and thus requires considerably more data for accurate and precise estimation. The feasibility of these new methods is demonstrated both on synthetic data and on responses recorded from a neuron in rodent barrel cortex. The models are shown to predict responses to novel data accurately and to recover several important neuronal response properties.
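The shared-nonlinearity case in this abstract (one pointwise transform for all lag times, then a temporal filter) is bilinear in the two unknowns, which suggests a simple alternating least-squares fit. The sketch below assumes a log-like "true" input nonlinearity and a three-tap filter, both invented for illustration; it is one plausible estimation scheme, not necessarily the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "true" system: a pointwise input nonlinearity (shared across
# all lag times) followed by a temporal linear filter.
def true_g(s):
    return np.log1p(s)                 # e.g. a log-like compression of the stimulus

w_true = np.array([0.2, 0.5, 0.3])     # temporal filter over the transformed stimulus

s = rng.uniform(0, 4, 2000)            # raw stimulus values
G = true_g(s)
rate = sum(w_true[k] * np.roll(G, k) for k in range(3))   # noiseless firing rate

# Fit: expand g in a polynomial basis, g(s) ~ a1*s + a2*s^2 + a3*s^3, and
# alternate least squares between the basis coefficients a and the filter w
# (the model is bilinear in a and w).
B = np.stack([s, s**2, s**3], axis=-1)     # (T, 3) basis evaluations
a = np.array([1.0, 0.0, 0.0])              # start from the identity transform
w = np.full(3, 1.0 / 3)
for _ in range(50):
    g_hat = B @ a
    X = np.column_stack([np.roll(g_hat, k) for k in range(3)])
    w = np.linalg.lstsq(X, rate, rcond=None)[0]            # update filter
    Z = sum(w[k] * np.roll(B, k, axis=0) for k in range(3))
    a = np.linalg.lstsq(Z, rate, rcond=None)[0]            # update nonlinearity

pred = np.column_stack([np.roll(B @ a, k) for k in range(3)]) @ w
rel_err = np.mean((pred - rate) ** 2) / np.var(rate)
assert rel_err < 0.05
```

The product of `a` and `w` is identified only up to a scalar, which is why quality is judged on the predicted rate rather than on the individual parameter vectors.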
An efficient approximation to the quadratic Volterra filter and its application in realtime loudspeaker linearization
, 1995
"... Nonlinear filtering based on the Volterra series expansion a powerful and popular approach in signal processing. However, a serious problem is the increased filter complexity as compared to linear filtering. This paper presents an efficient approximation to the 2nd order Volterra filter. The propose ..."
Abstract

Cited by 11 (3 self)
Nonlinear filtering based on the Volterra series expansion is a powerful and popular approach in signal processing. However, a serious problem is the increased filter complexity as compared to linear filtering. This paper presents an efficient approximation to the 2nd-order Volterra filter. The proposed filter structure, called Multi Memory Decomposition (MMD), is composed of 3 linear FIR filters and one multiplier. Hence, the number of required filter operations is linear in the filter memory length. MMD coefficient determination with respect to a 2nd-order reference kernel is presented. Additionally, block-oriented and adaptive algorithms are proposed which calculate the filter weights from input and output measurements of an unknown system. The good performance of the MMD model is demonstrated by simulations and in a real-time application. To this end, a linearization scheme for the compensation of nonlinear distortions with a preprocessor is introduced. The preprocessor was implemented...
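The structure named in this abstract (3 linear FIR filters and one multiplier) can be sketched directly, and its equivalence to a dense 2nd-order Volterra kernel checked numerically. The tap values below are arbitrary examples, not coefficients from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

def fir(h, x):
    """Causal FIR filtering with zero initial conditions."""
    return np.convolve(x, h, mode="full")[:len(x)]

# MMD structure: two FIR filters whose outputs are multiplied pointwise,
# followed by a third FIR filter.
h1 = np.array([1.0, 0.6, 0.2])
h2 = np.array([0.8, -0.3, 0.1])
h3 = np.array([0.5, 0.25])

x = rng.standard_normal(500)
y_mmd = fir(h3, fir(h1, x) * fir(h2, x))   # O(memory) operations per sample

# The equivalent full 2nd-order Volterra kernel K is dense (O(memory^2)
# coefficients): K[i, j] = sum_m h3[m] * h1[i-m] * h2[j-m].
M = len(h1) + len(h3) - 1
K = np.zeros((M, M))
for m, c in enumerate(h3):
    K[m:m + len(h1), m:m + len(h2)] += c * np.outer(h1, h2)

y_full = np.array([
    sum(K[i, j] * x[n - i] * x[n - j]
        for i in range(min(M, n + 1)) for j in range(min(M, n + 1)))
    for n in range(len(x))
])
assert np.allclose(y_mmd, y_full)          # identical input-output behavior
```

This illustrates the complexity claim: the cascade needs three convolutions and one multiply per sample, while the expanded kernel grows quadratically with the memory length.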
A unifying view of Wiener and Volterra theory and polynomial kernel regression
 Neural Computation
, 2006
"... Volterra and Wiener series are perhaps the best understood nonlinear system representations in signal processing. Although both approaches have enjoyed a certain popularity in the past, their application has been limited to rather lowdimensional and weakly nonlinear systems due to the exponential gr ..."
Abstract

Cited by 10 (4 self)
Volterra and Wiener series are perhaps the best understood nonlinear system representations in signal processing. Although both approaches have enjoyed a certain popularity in the past, their application has been limited to rather low-dimensional and weakly nonlinear systems due to the exponential growth of the number of terms that have to be estimated. We show that Volterra and Wiener series can be represented implicitly as elements of a reproducing kernel Hilbert space by utilizing polynomial kernels. The estimation complexity of the implicit representation is linear in the input dimensionality and independent of the degree of nonlinearity. Experiments show performance advantages in terms of convergence, interpretability, and system sizes that can be handled.
Nonparametric Identification of Wiener Systems by Orthogonal Series
, 1994
"... ... a linear dynamic and a nonlinear memoryless subsystems connected in a cascade, is identified. Both the input signal and disturbance are random, white, and Gaussian. The unknown nonlinear characteristic is strictly monotonous and differentiable and, therefore, the problem of its recovering from i ..."
Abstract

Cited by 9 (1 self)
... a linear dynamic subsystem and a nonlinear memoryless subsystem connected in a cascade is identified. Both the input signal and the disturbance are random, white, and Gaussian. The unknown nonlinear characteristic is strictly monotonic and differentiable, and therefore the problem of recovering it from input-output observations of the whole system is nonparametric. It is shown that the inverse of the characteristic is a regression function, and a class of orthogonal series nonparametric estimates recovering the regression is then proposed and analyzed. The estimates employ the trigonometric, Legendre, and Hermite orthogonal functions. Pointwise consistency of all the algorithms is shown. Under some additional smoothness restrictions, their rates of convergence are examined and compared. An algorithm to identify the impulse response of the linear subsystem is also proposed.
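The central fact in this abstract, that the inverse of the characteristic is a regression function, can be checked numerically: for a white Gaussian input, E[x(t) | y(t)] is proportional to m^{-1}(y). The sketch below uses an invented filter and cubic characteristic, and a crude binned regression rather than the paper's orthogonal-series estimates.

```python
import numpy as np

rng = np.random.default_rng(4)

h = np.array([0.6, 0.3, 0.1])            # hypothetical linear dynamic (FIR) subsystem
m = lambda v: v + 0.5 * v ** 3           # hypothetical strictly monotone characteristic

x = rng.standard_normal(200_000)         # white Gaussian input
v = np.convolve(x, h, mode="full")[:len(x)]
y = m(v)                                 # noiseless output, for clarity

# Binned estimate of the regression E[x | y]:
n_bins = 40
edges = np.quantile(y, np.linspace(0, 1, n_bins + 1))
idx = np.clip(np.digitize(y, edges) - 1, 0, n_bins - 1)
m_inv_hat = np.array([x[idx == b].mean() for b in range(n_bins)])
y_mid = np.array([y[idx == b].mean() for b in range(n_bins)])

# True inverse (by interpolation) and the Gaussian-regression scale factor
# cov(x, v) / var(v) = h[0] / ||h||^2:
v_grid = np.linspace(-10, 10, 20001)
m_inv = lambda q: np.interp(q, m(v_grid), v_grid)
scale = h[0] / (h @ h)

inner = slice(5, n_bins - 5)             # skip the wide extreme bins
assert np.max(np.abs(m_inv_hat[inner] - scale * m_inv(y_mid[inner]))) < 0.05
```

The proportionality constant h[0] / ||h||^2 follows from linear regression of jointly Gaussian variables; the extreme quantile bins are excluded because they are too wide for the piecewise-constant estimate to track the curvature of m^{-1}.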
Statistical encoding model for a primary motor cortical brainmachine interface
 IEEE Trans Biomed Eng
"... Abstract—A number of studies of the motor system suggest that the majority of primary motor cortical neurons represent simple movementrelated kinematic and dynamic quantities in their timevarying activity patterns. An example of such an encoding relationship is the cosine tuning of firing rate with ..."
Abstract

Cited by 8 (3 self)
A number of studies of the motor system suggest that the majority of primary motor cortical neurons represent simple movement-related kinematic and dynamic quantities in their time-varying activity patterns. An example of such an encoding relationship is the cosine tuning of firing rate with respect to the direction of hand motion. We present a systematic development of statistical encoding models for movement-related motor neurons using multielectrode array recordings during a two-dimensional (2D) continuous pursuit-tracking task. Our approach avoids massive averaging of responses by utilizing 2D normalized occupancy plots, cascaded linear-nonlinear (LN) system models, and a method for describing variability in discrete random systems. We found that the expected firing rate of most movement-related motor neurons is related to the kinematic values by a linear transformation, with a significant nonlinear distortion in about 1/3 of the neurons. The measured variability of the neural responses is markedly non-Poisson in many neurons and is well captured by a “normalized-Gaussian” statistical model that is defined and introduced here. The statistical model is seamlessly integrated into a nearly optimal recursive method for decoding movement from neural responses based on a sequential Monte Carlo filter. Index Terms—Discrete distribution, LN model, neural decoding, neuroprosthetics, sequential Monte Carlo.
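The cosine-tuning example named in this abstract is linear in (1, cos θ, sin θ), so its parameters can be recovered by ordinary least squares, which is the basic step behind any such encoding fit. The tuning parameters and noise level below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

# Cosine tuning: expected firing rate as a function of hand-movement
# direction theta. Parameter values are hypothetical.
b0, depth, theta_pref = 20.0, 15.0, np.pi / 3   # baseline, modulation, preferred direction
rate = lambda theta: b0 + depth * np.cos(theta - theta_pref)

theta = rng.uniform(0, 2 * np.pi, 1000)
r = rate(theta) + rng.normal(0, 1.0, len(theta))   # noisy observed rates

# Expand cos(theta - theta_pref) into cos/sin regressors and solve by OLS:
A = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
c0, cx, cy = np.linalg.lstsq(A, r, rcond=None)[0]
theta_hat = np.arctan2(cy, cx)        # recovered preferred direction
depth_hat = np.hypot(cx, cy)          # recovered modulation depth

assert abs(theta_hat - theta_pref) < 0.05
assert abs(depth_hat - depth) < 1.0
assert abs(c0 - b0) < 1.0
```

The abstract's point is that about a third of neurons additionally show a nonlinear distortion of this linear relationship, which the cascaded LN model is there to capture.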