Results 1-10 of 13
Sparse coding with an overcomplete basis set: a strategy employed by V1
Vision Research, 1997
Abstract

Cited by 954 (12 self)
The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete, i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are nonorthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of nonlinearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in ...
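As a rough illustration of the strategy this abstract describes (not the paper's own algorithm), coefficients for a fixed overcomplete basis can be inferred by minimizing squared reconstruction error plus an L1 sparsity penalty; the sketch below uses ISTA, with all names, sizes, and parameter values chosen for illustration. Note that the inferred code is a nonlinear function of the input even though the generative model is linear, which is the nonlinearity the abstract points to.

```python
# Illustrative sketch: sparse coding inference over an overcomplete basis
# via ISTA (iterative shrinkage-thresholding). Names/parameters are assumptions.
import numpy as np

def infer_sparse_coefficients(x, Phi, lam=0.1, n_iter=200):
    """Minimize ||x - Phi a||^2 / 2 + lam * ||a||_1 over coefficients a."""
    a = np.zeros(Phi.shape[1])
    # Step size from the Lipschitz constant of the quadratic term.
    L = np.linalg.norm(Phi, 2) ** 2
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ a - x)                 # gradient of the data term
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
Phi = rng.standard_normal((16, 32))                  # overcomplete: 32 bases, 16-dim input
Phi /= np.linalg.norm(Phi, axis=0)                   # unit-norm basis functions
a_true = np.zeros(32)
a_true[[3, 17]] = [1.5, -2.0]                        # two active bases generate the input
x = Phi @ a_true
a_hat = infer_sparse_coefficients(x, Phi, lam=0.05)
print(np.count_nonzero(np.abs(a_hat) > 1e-2))        # number of recruited bases
```

Sparsification recruits only the bases needed for this particular input, so most coefficients come out exactly zero.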
Learning sparse codes with a mixture-of-Gaussians prior
In Advances in Neural Information Processing Systems, 2000
Abstract

Cited by 53 (2 self)
We describe a method for learning an overcomplete set of basis functions for the purpose of modeling sparse structure in images. The sparsity of the basis function coefficients is modeled with a mixture-of-Gaussians distribution. One Gaussian captures non-active coefficients with a small-variance distribution centered at zero, while one or more other Gaussians capture active coefficients with a large-variance distribution. We show that when the prior is in such a form, there exist efficient methods for learning the basis functions as well as the parameters of the prior. The performance of the algorithm is demonstrated on a number of test cases and also on natural images. The basis functions learned on natural images are similar to those obtained with other methods, but the sparse form of the coefficient distribution is much better described. Also, since the parameters of the prior are adapted to the data, no assumption about sparse structure in the images need be made a priori.
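The two-component form of this prior can be made concrete in a small sketch (illustrative only, not the paper's learning algorithm): with a small-variance Gaussian at zero for non-active coefficients and a large-variance Gaussian for active ones, the posterior mean of a coefficient given a noisy observation shrinks small values toward zero while leaving large values nearly intact. All parameter values below are assumptions.

```python
# Illustrative sketch: posterior mean of a coefficient a under a two-component
# mixture-of-Gaussians prior, given a noisy observation y = a + noise.
import numpy as np

def posterior_mean(y, sigma_n=0.2, pi_active=0.1, s_small=0.05, s_large=2.0):
    """Posterior mean of a; each Gaussian component has a closed form."""
    comps = [(1 - pi_active, s_small), (pi_active, s_large)]
    weights, means = [], []
    for w, s in comps:
        var = s**2 + sigma_n**2          # marginal variance of y under this component
        weights.append(w * np.exp(-y**2 / (2 * var)) / np.sqrt(2 * np.pi * var))
        means.append(y * s**2 / var)     # within-component posterior mean (Gaussian product rule)
    weights = np.array(weights) / np.sum(weights)
    return float(weights @ np.array(means))

print(posterior_mean(0.1))   # small observation: attributed to the inactive component
print(posterior_mean(3.0))   # large observation: attributed to the active component
```

Small inputs are shrunk almost to zero; large inputs pass through nearly unchanged, which is the sparsifying behavior the abstract describes.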
Learning sparse multiscale image representations
2003
Abstract

Cited by 41 (0 self)
We describe a method for learning sparse multiscale image representations using a sparse prior distribution over the basis function coefficients. The prior consists of a mixture of a Gaussian and a Dirac delta function, and thus encourages coefficients to have exact zero values. Coefficients for an image are computed by sampling from the resulting posterior distribution with a Gibbs sampler. The learned basis is similar to the Steerable Pyramid basis, and yields slightly higher SNR for the same number of active coefficients. Denoising using the learned image model is demonstrated for some standard test images, with results that compare favorably with other denoising methods.
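A Gaussian-plus-delta (spike-and-slab) prior admits a simple per-coefficient Gibbs update: first sample whether the coefficient is active, then, if so, draw its value from the Gaussian conditional; otherwise it is exactly zero. The sketch below is a generic spike-and-slab update, not the paper's implementation; names and parameters are illustrative.

```python
# Illustrative sketch: one Gibbs step for a single coefficient under a
# spike-and-slab prior (delta at zero with prob 1-p_active, else N(0, sigma_a^2)).
# r is the residual image with all other bases' contributions removed.
import numpy as np

def gibbs_step_coefficient(r, phi, sigma_n=0.1, sigma_a=1.0, p_active=0.2, rng=None):
    """Sample coefficient a for basis vector phi given residual r."""
    rng = np.random.default_rng() if rng is None else rng
    # Gaussian conditional posterior over a if active:
    precision = phi @ phi / sigma_n**2 + 1.0 / sigma_a**2
    mu = (phi @ r / sigma_n**2) / precision
    # Log evidence ratio, slab vs. spike (log domain for numerical stability):
    log_bayes = 0.5 * mu**2 * precision - 0.5 * np.log(sigma_a**2 * precision)
    log_odds = np.log(p_active / (1 - p_active)) + log_bayes
    p_on = 1.0 / (1.0 + np.exp(-log_odds))
    if rng.random() < p_on:
        return rng.normal(mu, 1.0 / np.sqrt(precision))  # active: draw from slab
    return 0.0                                           # inactive: exact zero

rng = np.random.default_rng(0)
phi = np.zeros(16)
phi[0] = 1.0
print(gibbs_step_coefficient(3.0 * phi, phi, rng=rng))   # strong evidence: nonzero draw
print(gibbs_step_coefficient(np.zeros(16), phi, rng=rng))  # no evidence: usually exactly 0
```

The exact zeros produced by the spike are what make the representation sparse, matching the abstract's "exact zero values".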
Learning Sparse Image Codes using a Wavelet Pyramid Architecture
In Advances in Neural Information Processing Systems 13
Abstract

Cited by 33 (1 self)
We show how a wavelet basis may be adapted to best represent natural images in terms of sparse coefficients. The wavelet basis, which may be either complete or overcomplete, is specified by a small number of spatial functions which are repeated across space and combined in a recursive fashion so as to be self-similar across scale. These functions are adapted to minimize the estimated code length under a model that assumes images are composed of a linear superposition of sparse, independent components. When adapted to natural images, the wavelet bases take on different orientations and they evenly tile the orientation domain, in stark contrast to the standard, non-oriented wavelet bases used in image compression. When the basis set is allowed to be overcomplete, it also yields higher coding efficiency than standard wavelet bases.
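The self-similar construction the abstract describes can be illustrated loosely in one dimension: a single mother function is replicated across positions and dilated across dyadic scales to populate a (possibly overcomplete) wavelet-style dictionary. The helper below is hypothetical and much simpler than the paper's recursive pyramid, but shows the repeat-and-dilate idea.

```python
# Illustrative 1-D sketch: build an overcomplete dictionary by shifting and
# dilating one mother function across dyadic scales (names are assumptions).
import numpy as np

def build_pyramid_dictionary(mother, signal_len=64, n_scales=3):
    """Stack shifted/dilated unit-norm copies of `mother` as dictionary columns."""
    atoms = []
    for s in range(n_scales):
        g = np.repeat(mother, 2**s)          # dilate by a factor of 2 per scale
        g = g / np.linalg.norm(g)
        step = max(1, len(g) // 2)           # shift by half the support
        for pos in range(0, signal_len - len(g) + 1, step):
            atom = np.zeros(signal_len)
            atom[pos:pos + len(g)] = g
            atoms.append(atom)
    return np.array(atoms).T                 # shape: (signal_len, n_atoms)

mother = np.array([1.0, -1.0])               # Haar-like edge function
D = build_pyramid_dictionary(mother)
print(D.shape)                               # more atoms than dimensions: overcomplete
```

In the paper's setting the mother functions themselves are learned from natural images; here the dictionary merely demonstrates the self-similar, translation-repeated structure.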
Learning linear, sparse, factorial codes
A.I. Memo 1580, MIT Artificial Intelligence Lab, 1996
Abstract

Cited by 27 (1 self)
In previous work (Olshausen & Field 1996), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed.
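The maximum-likelihood reading mentioned here is, in outline, the standard one: the sparse coding energy function can be identified with a negative log posterior under a Gaussian noise model and a factorial sparse prior. A sketch, with symbols chosen for illustration:

```latex
% Generative model: image x is a linear superposition of basis functions
% (columns of \Phi) with coefficients a, plus Gaussian noise of variance \sigma^2;
% the prior over coefficients factorizes (statistical independence).
\[
p(x \mid a) \propto \exp\!\left(-\frac{\|x - \Phi a\|^2}{2\sigma^2}\right),
\qquad
p(a) \propto \prod_i \exp\!\left(-\lambda\, S(a_i)\right)
\]
% Minimizing the sparse coding energy is then maximizing the log posterior:
\[
E(a) \;=\; \frac{\|x - \Phi a\|^2}{2\sigma^2} \;+\; \lambda \sum_i S(a_i)
\;=\; -\log p(a \mid x) + \text{const.}
\]
```

The factorial form of the prior is what makes the connection to independence (and hence to Bell and Sejnowski's ICA algorithm) explicit.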
Principles of image representation in visual cortex
In The Visual Neurosciences, 2003
Abstract

Cited by 26 (2 self)
The visual cortex is responsible for most of our conscious perception of the visual world, yet we remain largely ignorant of the principles underlying its function despite progress on many fronts of neuroscience.
Sparse Codes and Spikes
In Probabilistic Models of the Brain: Perception and Neural Function, 2001
Camera-Based Motion Estimation and Recognition for Human-Computer Interaction
Abstract

Cited by 1 (0 self)
Academic dissertation to be presented, with the assent of
Neural Activity in Macaque Parietal Cortex Reflects Temporal Integration of Visual Motion Signals during Perceptual Decision Making
Abstract
Decision-making often requires the accumulation and maintenance of evidence over time. Although the neural signals underlying sensory processing have been studied extensively, little is known about how the brain accrues and holds these sensory signals to guide later actions. Previous work has suggested that neural activity in the lateral intraparietal area (LIP) of the monkey brain reflects the formation of perceptual decisions in a random dot direction-discrimination task in which monkeys communicate their decisions with eye-movement responses. We tested the hypothesis that decision-related neural activity in LIP represents the time integral of the momentary motion "evidence." By briefly perturbing the strength of the visual motion stimulus during the formation of perceptual decisions, we tested whether this LIP activity reflected a persistent, integrated "memory" of these brief sensory events. We found that the responses of LIP neurons reflected substantial temporal integration. Brief pulses had persistent effects on both the monkeys' choices and the responses of neurons in LIP, lasting up to 800 ms after appearance. These results demonstrate that LIP is involved in neural time integration underlying the accumulation of evidence in this task. Additional analyses suggest that decision-related LIP responses, as well as behavioral choices and reaction times, can be explained by near-perfect time integration that stops when a criterion amount of evidence has been accumulated. Temporal integration may be a fundamental computation underlying higher cognitive functions that are dissociated from immediate sensory inputs or motor outputs. Key words: lateral intraparietal area; LIP; reaction time; visual motion; electrophysiology; vision
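The integration-to-bound account in the closing sentences can be sketched as a standard drift-diffusion simulation: noisy momentary evidence is accumulated near-perfectly until a criterion (bound) is reached, which jointly determines the choice and the decision time. This is a generic model, not the paper's fitted analysis; all parameter values are illustrative.

```python
# Illustrative sketch: bounded accumulation of noisy momentary evidence
# (drift-diffusion). Parameters are assumptions, not fitted values.
import numpy as np

def accumulate_to_bound(drift=0.5, bound=1.0, dt=0.001, noise=1.0,
                        max_t=2.0, rng=None):
    """Return (choice, decision_time) for one simulated trial."""
    rng = np.random.default_rng() if rng is None else rng
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        # Near-perfect integration: evidence increments simply add up.
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t

choices = [accumulate_to_bound(rng=np.random.default_rng(s))[0] for s in range(200)]
print(np.mean(choices))  # positive drift biases choices toward 1
```

A brief evidence pulse in this model shifts the accumulated total persistently, which is the signature the study tested for in LIP responses.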
13 Sparse Codes and Spikes
Abstract
In order to make progress toward understanding the sensory coding strategies employed by the cortex, it will be necessary to draw upon guiding principles that provide us with reasonable ideas for what to expect and what to look for in the neural circuitry. The unifying theme behind all of the chapters in this book is that probabilistic ...