Results 1–10 of 31
Sparse coding with an overcomplete basis set: a strategy employed by V1
Vision Research, 1997
"... The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and ban@ass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive f ..."
Abstract

Cited by 957 (12 self)
 Add to MetaCart
The spatial receptive fields of simple cells in mammalian striate cortex have been reasonably well described physiologically and can be characterized as being localized, oriented, and bandpass, comparable with the basis functions of wavelet transforms. Previously, we have shown that these receptive field properties may be accounted for in terms of a strategy for producing a sparse distribution of output activity in response to natural images. Here, in addition to describing this work in a more expansive fashion, we examine the neurobiological implications of sparse coding. Of particular interest is the case when the code is overcomplete, i.e., when the number of code elements is greater than the effective dimensionality of the input space. Because the basis functions are nonorthogonal and not linearly independent of each other, sparsifying the code will recruit only those basis functions necessary for representing a given input, and so the input-output function will deviate from being purely linear. These deviations from linearity provide a potential explanation for the weak forms of nonlinearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in …
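The strategy summarised in this abstract is commonly posed as minimising a reconstruction error plus an L1 sparsity penalty over an overcomplete dictionary. The sketch below is an illustrative stand-in, not the authors' procedure: the random dictionary, the penalty weight `lam`, and the ISTA solver are all assumed choices.

```python
import numpy as np

def sparse_code(x, Phi, lam=0.05, n_iter=300):
    """Infer coefficients a minimising ||x - Phi @ a||^2 + lam * ||a||_1
    by iterative soft-thresholding (ISTA). Phi may be overcomplete
    (more basis functions than input dimensions)."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the smooth part
    a = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        z = a - Phi.T @ (Phi @ a - x) / L        # gradient step on the residual
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

rng = np.random.default_rng(0)
Phi = rng.standard_normal((8, 16))               # 2x overcomplete dictionary
Phi /= np.linalg.norm(Phi, axis=0)               # unit-norm basis functions
a_true = np.zeros(16)
a_true[2], a_true[9] = 1.0, -0.5                 # input built from two atoms
x = Phi @ a_true
a_hat = sparse_code(x, Phi)
```

Because the L1 penalty recruits only the atoms needed for a given input, the inferred code stays sparse even though the basis is nonorthogonal and overcomplete.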
Independent Component Filters Of Natural Images Compared With Simple Cells In Primary Visual Cortex, 1998
"... this article we investigate to what extent the statistical properties of natural images can be used to understand the variation of receptive field properties of simple cells in the mammalian primary visual cortex. The receptive fields of simple cells have been studied extensively (e.g., Hubel & ..."
Abstract

Cited by 361 (0 self)
 Add to MetaCart
(Show Context)
In this article we investigate to what extent the statistical properties of natural images can be used to understand the variation of receptive field properties of simple cells in the mammalian primary visual cortex. The receptive fields of simple cells have been studied extensively (e.g., Hubel & Wiesel 1968, DeValois et al. 1982a, DeAngelis et al. 1993): they are localised in space and time, have bandpass characteristics in the spatial and temporal frequency domains, are oriented, and are often sensitive to the direction of motion of a stimulus. Here we will concentrate on the spatial properties of simple cells. Several hypotheses as to the function of these cells have been proposed. As the cells preferentially respond to oriented edges or lines, they can be viewed as edge or line detectors. Their joint localisation in both the spatial domain and the spatial frequency domain has led to the suggestion that they mimic Gabor filters, minimising uncertainty in both domains (Daugman 1980, Marcelja 1980). More recently, the match between the operations performed by simple cells and the wavelet transform has attracted attention (e.g., Field 1993). The approaches based on Gabor filters and wavelets basically consider processing by the visual cortex as a general image processing strategy, relatively independent of detailed assumptions about image statistics. On the other hand, the edge and line detector hypothesis is based on the intuitive notion that edges and lines are both abundant and important in images. This theme of relating simple cell properties with the statistics of natural images was explored extensively by Field (1987, 1994). He proposed that the cells are optimized specifically for coding natural images. He argued that one possibility for such a code, sparse coding...
Nonnegative sparse coding
Proc. IEEE Workshop on Neural Networks for Signal Processing (NNSP’2002), 2002
"... Abstract. Nonnegative sparse coding is a method for decomposing multivariate data into nonnegative sparse components. In this paper we briefly describe the motivation behind this type of data representation and its relation to standard sparse coding and nonnegative matrix factorization. We then gi ..."
Abstract

Cited by 168 (3 self)
 Add to MetaCart
(Show Context)
Nonnegative sparse coding is a method for decomposing multivariate data into nonnegative sparse components. In this paper we briefly describe the motivation behind this type of data representation and its relation to standard sparse coding and nonnegative matrix factorization. We then give a simple yet efficient multiplicative algorithm for finding the optimal values of the hidden components. In addition, we show how the basis vectors can be learned from the observed data. Simulations demonstrate the effectiveness of the proposed method.
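The abstract mentions a simple multiplicative algorithm for the hidden components. Below is a hedged sketch of such an update for minimising ||X - AS||² + lam·sum(S) with S ≥ 0 and the basis A held fixed; the data shapes, `lam`, and iteration count are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def nnsc_infer(X, A, lam=0.01, n_iter=2000):
    """Multiplicative update for the hidden components S in
    min ||X - A @ S||_F^2 + lam * S.sum(),  subject to S >= 0,
    with the basis A fixed. Starting from positive values, the
    elementwise multiplicative step can never create a negative entry."""
    rng = np.random.default_rng(1)
    S = rng.random((A.shape[1], X.shape[1]))  # positive initialisation
    for _ in range(n_iter):
        S *= (A.T @ X) / (A.T @ A @ S + lam + 1e-12)
    return S

rng = np.random.default_rng(0)
A = rng.random((6, 4))                        # nonnegative basis (assumed known)
S_true = np.array([[1.0, 0.0, 0.0, 2.0, 0.0],
                   [0.0, 1.0, 0.0, 0.0, 1.0],
                   [0.0, 0.0, 3.0, 0.0, 0.0],
                   [1.0, 0.0, 0.0, 0.0, 2.0]])  # sparse nonnegative components
X = A @ S_true
S_hat = nnsc_infer(X, A)
```

The sparsity penalty `lam` appears only in the denominator, so larger values push small components toward zero while the update remains nonnegative by construction.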
Independent Component Analysis Of Natural Image Sequences Yields Spatiotemporal Filters Similar To Simple Cells In Primary Visual Cortex
Proc. R. Soc. Lond. B, 1998
"... ..."
(Show Context)
Conditions for nonnegative independent component analysis
IEEE Signal Processing Letters, 2002
"... We consider the noiseless linear independent component analysis problem, in the case where the hidden sources s are nonnegative. We assume that the random variables s i s are wellgrounded in that they have a nonvanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = ..."
Abstract

Cited by 94 (11 self)
 Add to MetaCart
We consider the noiseless linear independent component analysis problem, in the case where the hidden sources s are nonnegative. We assume that the random variables s_i are well-grounded in that they have a nonvanishing pdf in the (positive) neighbourhood of zero. For an orthonormal rotation y = Wx of prewhitened observations x = QAs, under certain reasonable conditions we show that y is a permutation of the s (apart from a scaling factor) if and only if y is nonnegative with probability 1. We suggest that this may enable the construction of practical learning algorithms, particularly for sparse nonnegative sources.
Unsupervised analysis of polyphonic music by sparse coding
IEEE Transactions on Neural Networks, 2006
"... We investigate a datadriven approach to the analysis and transcription of polyphonic music, using a probabilistic model which is able to find sparse linear decompositions of a sequence of shortterm Fourier spectra. The resulting system represents each input spectrum as a weighted sum of a small nu ..."
Abstract

Cited by 43 (4 self)
 Add to MetaCart
(Show Context)
We investigate a data-driven approach to the analysis and transcription of polyphonic music, using a probabilistic model which is able to find sparse linear decompositions of a sequence of short-term Fourier spectra. The resulting system represents each input spectrum as a weighted sum of a small number of “atomic” spectra chosen from a larger dictionary; this dictionary is, in turn, learned from the data in such a way as to represent the given training set in an (information-theoretically) efficient way. When exposed to examples of polyphonic music, most of the dictionary elements take on the spectral characteristics of individual notes in the music, so that the sparse decomposition can be used to identify the notes in a polyphonic mixture. Our approach differs from other methods of polyphonic analysis based on spectral decomposition by combining all of the following: a) a formulation in terms of an explicitly given probabilistic model, in which the process estimating which notes are present corresponds naturally with the inference of latent variables in the model; b) a particularly simple generative model, motivated by very general considerations about efficient coding, that makes very few assumptions about the musical origins of the signals being processed; and c) the ability to learn a dictionary of atomic spectra (most of which converge to harmonic spectral profiles associated with specific notes) from polyphonic examples alone; no separate training on monophonic examples is required.
Index Terms: learning overcomplete dictionaries, polyphonic music, probabilistic modeling, redundancy reduction, sparse factorial coding, unsupervised learning.
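A much-simplified, non-probabilistic analogue of this decomposition can be sketched by alternating sparse coding (ISTA) with least-squares dictionary updates on synthetic nonnegative "spectra". The prototype data, sizes, penalty, and update scheme are all assumptions for illustration, not the paper's generative model.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "spectra": each column mixes a few of 5 fixed prototype atoms,
# standing in for the note spectra of the paper (purely illustrative data)
protos = rng.random((20, 5))
usage = (rng.random((5, 100)) < 0.3) * rng.random((5, 100))
X = protos @ usage

# Alternate sparse coding of the columns with a dictionary refit
D = rng.random((20, 5))
D /= np.linalg.norm(D, axis=0)                # unit-norm atoms
Acoef = np.zeros((5, 100))
lam = 0.02
for _ in range(30):
    L = np.linalg.norm(D, 2) ** 2             # step size for ISTA
    for _ in range(30):                       # codes: min ||X-DA||^2 + lam||A||_1
        Z = Acoef - D.T @ (D @ Acoef - X) / L
        Acoef = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)
    relative_error = np.linalg.norm(X - D @ Acoef) / np.linalg.norm(X)
    # dictionary: ridge-regularised least squares, then renormalise atoms
    D = X @ Acoef.T @ np.linalg.inv(Acoef @ Acoef.T + 1e-6 * np.eye(5))
    D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)
```

As in the paper's qualitative result, the learned atoms come to represent the recurring constituents of the data, and each input column is explained by only a few of them.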
A ‘nonnegative PCA’ algorithm for independent component analysis, 2002, submitted for publication
"... We consider the task of independent component analysis when the independent sources are known to be nonnegative and wellgrounded, so that they have a nonzero probability density function (pdf) in the region of zero. We propose the use of a "nonnegative principal component analysis (nonnegative ..."
Abstract

Cited by 37 (3 self)
 Add to MetaCart
We consider the task of independent component analysis when the independent sources are known to be nonnegative and well-grounded, so that they have a nonzero probability density function (pdf) in the region of zero. We propose the use of a “nonnegative principal component analysis” (nonnegative PCA) algorithm, which is a special case of the nonlinear PCA algorithm, but with a rectification nonlinearity, and we conjecture that this algorithm will find such nonnegative well-grounded independent sources, under reasonable initial conditions. While the algorithm has proved difficult to analyze in the general case, we give some analytical results that are consistent with this conjecture and some numerical simulations that illustrate its operation.
Index Terms: independent component analysis, learning (artificial intelligence), matrix decomposition, principal component analysis, nonlinear principal component analysis, nonnegative PCA algorithm, nonnegative matrix factorization, nonzero probability density function, rectification nonlinearity, subspace learning rule.
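A batch caricature of a nonlinear-PCA subspace rule with a rectification nonlinearity is sketched below on a whitened positive mixture. The mixing matrix, learning rate, iteration count, and starting rotation are illustrative assumptions; in keeping with the abstract's conjectural tone, the check only asks that the representation error decrease.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.abs(rng.standard_normal((2, 2000)))    # nonnegative, well-grounded sources
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])                    # mixing matrix (assumed)
x = A @ s
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x          # pre-whitened (mean retained)

def rect(y):
    return np.maximum(y, 0.0)                 # the rectification nonlinearity

def repr_error(W):
    # squared error of reconstructing z from the rectified outputs
    return np.linalg.norm(z - W.T @ rect(W @ z)) ** 2 / z.shape[1]

# Start from a rotation deliberately away from the separating solution
t = 0.5
W = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])
err0 = repr_error(W)

# Batch nonlinear-PCA subspace rule with rectification:
#   W <- W + eta * g(y) (z - W^T g(y))^T / N,   y = W z,  g = rect
eta = 0.03
for _ in range(2000):
    gy = rect(W @ z)
    W += eta * gy @ (z - W.T @ gy).T / z.shape[1]
err1 = repr_error(W)
```

When the outputs become nonnegative the rectifier is the identity on them, so with an orthonormal W the reconstruction is exact; driving the error down therefore pushes the rotation toward nonnegative outputs.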
Blind separation of positive sources by globally convergent gradient search
Neural Computation, 2004
"... The instantaneous noisefree linear mixing model in independent component analysis is largely a solved problem under the usual assumption of independent nongaussian sources and full column rank mixing matrix. However, with some prior information on the sources, like positivity, new analysis and perh ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
(Show Context)
The instantaneous noise-free linear mixing model in independent component analysis is largely a solved problem under the usual assumptions of independent non-Gaussian sources and a full column rank mixing matrix. However, with some prior information on the sources, such as positivity, new analysis and perhaps simplified solution methods may yet become possible. In this paper, we consider the task of independent component analysis when the independent sources are known to be nonnegative and well-grounded, which means that they have a nonzero pdf in the region of zero. It can be shown that in this case the solution method is basically very simple: an orthogonal rotation of the whitened observation vector into nonnegative outputs will give a positive permutation of the original sources. We propose a cost function whose minimum coincides with nonnegativity and derive the gradient algorithm under the whitening constraint, under which the separating matrix is orthogonal. We further prove that in the Stiefel manifold of orthogonal matrices, the cost function is a Lyapunov function for the matrix gradient flow, implying global convergence. Thus this algorithm is guaranteed to find the nonnegative well-grounded independent sources. The analysis is complemented by a numerical simulation which illustrates the algorithm.
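The cost-function idea can be sketched with a plain projected-gradient variant (not the paper's geodesic flow on the Stiefel manifold): minimise the energy of the negative output parts over orthogonal W, re-orthonormalising by polar decomposition after each step. The mixing matrix, step size, and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.abs(rng.standard_normal((2, 5000)))    # positive, well-grounded sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                    # mixing matrix (assumed)
x = A @ s
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x          # whitened observation vectors

def cost(W):
    # energy of the negative output parts; zero iff all outputs nonnegative
    return (np.minimum(W @ z, 0.0) ** 2).sum() / z.shape[1]

W = np.linalg.qr(rng.standard_normal((2, 2)))[0]   # random orthonormal start
eta = 0.2
for _ in range(1000):
    y_neg = np.minimum(W @ z, 0.0)
    grad = 2.0 * y_neg @ z.T / z.shape[1]     # gradient of cost w.r.t. W
    U, _, Vt = np.linalg.svd(W - eta * grad)  # polar projection back onto
    W = U @ Vt                                # the orthogonal matrices
y = W @ z                                     # ≈ positive permutation of s
```

Driving the negative-part energy to zero yields nonnegative outputs, which by the preceding nonnegative-ICA condition means y is a (scaled) permutation of the original sources.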
Natural Image Statistics and Visual Processing, 1998
"... This thesis focuses on the statistics of natural images. The first question that is to be
answered is: what are natural images and why do we study them. We start with our
definition, and then discuss the properties and uses of natural images. An image is a
projection of an environment, and natural i ..."
Abstract

Cited by 13 (0 self)
 Add to MetaCart
This thesis focuses on the statistics of natural images. The first question to be answered is: what are natural images, and why do we study them? We start with our definition, and then discuss the properties and uses of natural images. An image is a projection of an environment, and natural images are those that are taken from a natural environment, i.e., an environment that is commonly encountered by a particular organism. This means that these images represent the natural visual input (natural stimulus) of an eye. In general, images may include optical information extending over space, time (time-varying images), as well as wavelength (colour images). In this thesis, however, we restrict ourselves to images of light intensity (black and white images) that either extend exclusively over space (still images) or exclusively over time (time series).
The motivation for investigating natural images is to gain a better understanding of neural processing in visual systems. Natural images and visual processing in biological systems are linked by the hypothesis that evolution has optimised visual systems to process natural stimuli. The analysis of the optimal performance of biological visual systems may inspire the building of artificial visual systems.
Adaptive Lateral Inhibition for Non-Negative ICA
Proceedings of the International Conference on Independent Component Analysis and Blind Signal Separation (ICA 2001), 2001
"... We consider the problem of decomposing an observed input matrix (or vector sequence) into the product of a mixing ¡ matrix with a component ¢ matrix, ¡¥ ¢ i.e. where (a) the elements of the mixing matrix and the component matrix are nonnegative, and (b) the underlying components are considered to b ..."
Abstract

Cited by 9 (2 self)
 Add to MetaCart
We consider the problem of decomposing an observed input matrix (or vector sequence) into the product of a mixing matrix with a component matrix, i.e. X = AS, where (a) the elements of the mixing matrix A and the component matrix S are nonnegative, and (b) the underlying components are considered to be observations from independent sources. This is therefore a problem of nonnegative independent component analysis. Under certain reasonable conditions, it appears to be sufficient simply to ensure that the output matrix has diagonal covariance (in addition to the nonnegativity constraints) to find the independent basis. Neither higher-order statistics nor temporal correlations are required. The solution is implemented as a neural network with error-correcting forward/backward weights and linear anti-Hebbian lateral inhibition, and is demonstrated on small artificial data sets including a linear version of the Bars problem.
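The paper's lateral-inhibition network is not reproduced here; as a stand-in, the sketch below generates a linear "bars" data set of the kind mentioned and factorises it with standard multiplicative NMF updates, which enforce the same nonnegativity constraints on both factors (though not independence). Image size, bar probability, and rank are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Linear "bars" data: each 3x3 image is a nonnegative sum of horizontal
# and vertical bars, flattened to a 9-vector (one column per image)
bars = []
for i in range(3):
    h = np.zeros((3, 3)); h[i, :] = 1.0; bars.append(h.ravel())
    v = np.zeros((3, 3)); v[:, i] = 1.0; bars.append(v.ravel())
B = np.array(bars).T                          # 9 x 6 ground-truth basis
C = (rng.random((6, 200)) < 0.3).astype(float)  # random bar presences
X = B @ C

# Multiplicative-update NMF: X ≈ A @ S with A, S >= 0
r = 6
A = rng.random((9, r)) + 0.1
S = rng.random((r, 200)) + 0.1
for _ in range(1000):
    S *= (A.T @ X) / (A.T @ A @ S + 1e-9)     # update components
    A *= (X @ S.T) / (A @ S @ S.T + 1e-9)     # update mixing matrix
rel_err = np.linalg.norm(X - A @ S) / np.linalg.norm(X)
```

Because an exact nonnegative factorisation of X exists (the bars themselves), the alternating updates typically drive the reconstruction error close to zero, with the learned columns of A resembling individual bars.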