Results 1–10 of 47
A tutorial on Principal Component Analysis
 Systems Neurobiology Laboratory, Salk Institute for Biological Studies
, 2005
"... Principal component analysis (PCA) is a mainstay of modern data analysis a black box that is widely used but poorly understood. The goal of this paper is to dispel the magic behind this black box. This tutorial focuses on building a solid intuition for how and why principal component analysis works ..."
Abstract

Cited by 90 (0 self)
Principal component analysis (PCA) is a mainstay of modern data analysis: a black box that is widely used but poorly understood. The goal of this paper is to dispel the magic behind this black box. This tutorial focuses on building a solid intuition for how and why principal component analysis works; furthermore, it crystallizes this knowledge by deriving, from simple intuitions, the mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.
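The covariance-eigenvector derivation the tutorial builds up to can be sketched in a few lines of NumPy (an illustrative sketch, not code from the paper; the function name `pca` and the toy data are ours):

```python
import numpy as np

def pca(X, k):
    """Return the top-k principal directions and the projected data for X."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = Xc.T @ Xc / (X.shape[0] - 1)       # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]        # re-sort by descending variance
    W = eigvecs[:, order[:k]]                # top-k principal directions
    return W, Xc @ W                         # directions and scores

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
W, scores = pca(X, 2)
print(W.shape, scores.shape)                 # (5, 2) (200, 2)
```

Projecting onto the leading eigenvectors of the covariance is exactly the "directions of maximal variance" intuition the tutorial develops.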
Estimating Entropy Rates with Bayesian Confidence Intervals
 NEURAL COMPUTATION 17, 1531–1576 (2005)
, 2005
"... The entropy rate quantifies the amount of uncertainty or disorder produced by any dynamical system. In a spiking neuron, this uncertainty translates into the amount of information potentially encoded and thus the subject of intense theoretical and experimental investigation. Estimating this quantity ..."
Abstract

Cited by 21 (1 self)
... the information rates between sensory stimuli and neural responses in experimental data (Shlens, Kennel, Abarbanel, & Chichilnisky, 2004) ...
DeViSE: A deep visual-semantic embedding model
 In NIPS
, 2013
"... Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is ..."
Abstract

Cited by 60 (3 self)
Modern visual recognition systems are often limited in their ability to scale to large numbers of object categories. This limitation is in part due to the increasing difficulty of acquiring sufficient training data in the form of labeled images as the number of object categories grows. One remedy is to leverage data from other sources – such as text data – both to train visual models and to constrain their predictions. In this paper we present a new deep visual-semantic embedding model trained to identify visual objects using both labeled image data as well as semantic information gleaned from unannotated text. We demonstrate that this model matches state-of-the-art performance on the 1000-class ImageNet object recognition challenge while making more semantically reasonable errors, and also show that the semantic information can be exploited to make predictions about tens of thousands of image labels not observed during training. Semantic knowledge improves such zero-shot predictions, achieving hit rates of up to 18% across thousands of novel labels never seen by the visual model.
Estimating Information Rates with Confidence Intervals in Neural Spike Trains
, 2007
"... Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. A related work (Kennel, Shlens, Abarbanel, & Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate from a dis ..."
Abstract

Cited by 12 (0 self)
Information theory provides a natural set of statistics to quantify the amount of knowledge a neuron conveys about a stimulus. A related work (Kennel, Shlens, Abarbanel, & Chichilnisky, 2005) demonstrated how to reliably estimate, with a Bayesian confidence interval, the entropy rate from a ...
Notes on Kullback-Leibler divergence and likelihood theory
 Systems Neurobiology Laboratory, Salk Institute for Biological Studies
, 2007
"... iv ..."
Fast, Accurate Detection of 100,000 Object Classes on a Single Machine
"... Many object detection systems are constrained by the time required to convolve a target image with a bank of filters that code for different aspects of an object’s appearance, such as the presence of component parts. We exploit localitysensitive hashing to replace the dotproduct kernel operator in ..."
Abstract

Cited by 31 (0 self)
Many object detection systems are constrained by the time required to convolve a target image with a bank of filters that code for different aspects of an object’s appearance, such as the presence of component parts. We exploit locality-sensitive hashing to replace the dot-product kernel operator in the convolution with a fixed number of hash-table probes that effectively sample all of the filter responses in time independent of the size of the filter bank. To show the effectiveness of the technique, we apply it to evaluate 100,000 deformable-part models, requiring over a million (part) filters, on multiple scales of a target image in less than 20 seconds using a single multi-core processor with 20 GB of RAM. This represents a speedup of approximately 20,000 times (four orders of magnitude) when compared with performing the convolutions explicitly on the same hardware. While mean average precision over the full set of 100,000 object classes is around 0.16, due in large part to the challenges in gathering training data and collecting ground truth for so many classes, we achieve a mAP of at least 0.20 on a third of the classes and 0.30 or better on about 20% of the classes.
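The substitution the abstract describes, hash-table probes in place of exhaustive dot products, can be illustrated with generic random-hyperplane locality-sensitive hashing (an assumption for illustration; the paper's actual hash family may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_filters, n_bits = 64, 1000, 12

filters = rng.normal(size=(n_filters, d))  # stand-in for the bank of part filters
planes = rng.normal(size=(n_bits, d))      # random hyperplanes defining the hash

def code(v):
    """Sign pattern of v against the hyperplanes: an n_bits binary hash code."""
    return tuple(int(b) for b in (planes @ v > 0))

# Index every filter once, offline.
table = {}
for i, f in enumerate(filters):
    table.setdefault(code(f), []).append(i)

# At detection time, one hash probe per image patch replaces n_filters
# dot products; only the colliding filters need exact scoring.
patch = rng.normal(size=d)
candidates = table.get(code(patch), [])
print(len(table), len(candidates))
```

The probe cost depends on the code length, not on the number of filters, which is the source of the "time independent of the size of the filter bank" claim.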
A Light Discussion and Derivation of Entropy
, 2014
"... The expression for entropy sometimes appears mysterious – as it often is asserted without justification. This short manuscript contains a discussion of the underlying assumptions behind entropy as well as simple derivation of this ubiquitous quantity. The uncertainty in a set of discrete outcomes is ..."
Abstract
The expression for entropy sometimes appears mysterious, as it is often asserted without justification. This short manuscript contains a discussion of the underlying assumptions behind entropy as well as a simple derivation of this ubiquitous quantity. The uncertainty in a set of discrete outcomes is the entropy. In some textbooks the explanation for this assertion is simply another assertion: the entropy is the average minimum number of yes-no questions necessary to identify an item randomly drawn from a known, discrete probability distribution. It would be preferable to avoid these assertions and search for the heart of the matter: where does entropy arise from? This manuscript addresses this question by deriving an expression for entropy from three simple postulates. To gain some intuition for these postulates, we discuss the quintessential thought experiment: the uncertainty of rolling a die. How much uncertainty exists in the roll of a die? It is not hard to think of some simple intuitions which influence the level of uncertainty.

Postulate #1. A larger number of potential outcomes yields larger uncertainty. The more sides a die has, the harder it is to predict a roll and hence the greater the uncertainty. Conversely, there exists no uncertainty in rolling a single-sided die (a marble?). More precisely, this postulate requires that uncertainty grows monotonically with the number of potential outcomes.

Postulate #2. The relative likelihood of each outcome determines the uncertainty. For example, a die which rolls a 6 a majority of the time contains less uncertainty than a standard, unbiased die. The second postulate goes a long way because we can express the uncertainty H as a function of the probability distribution p = {p1, p2, ..., pA} dictating the frequency of all A outcomes or, in shorthand, H[p]. Thus, by the first postulate, dH/dA > 0, since the uncertainty grows monotonically as the number of outcomes increases. Strictly speaking, for the derivative to be positive it must exist in the first place, so we additionally assume that H is a continuous function.

Postulate #3. The weighted uncertainty of independent events must sum.
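The three postulates can be checked numerically against the familiar expression H[p] = -Σ p_i log2 p_i (quoted here as the standard result; the manuscript derives it from the postulates):

```python
import math

def entropy(p, base=2):
    """Shannon entropy H[p] = -sum_i p_i * log(p_i), in bits by default."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

# Postulate #1: more equally likely outcomes -> more uncertainty.
assert entropy([1.0]) == 0                       # single-sided die: no uncertainty
assert entropy([1/6] * 6) < entropy([1/8] * 8)   # an 8-sided die beats a 6-sided one

# Postulate #2: a biased die is more predictable than a fair one.
biased = [0.05] * 5 + [0.75]                     # rolls a 6 most of the time
assert entropy(biased) < entropy([1/6] * 6)

# Postulate #3: uncertainties of independent events add.
two_dice = [p * q for p in [1/6] * 6 for q in [1/6] * 6]
assert math.isclose(entropy(two_dice), 2 * entropy([1/6] * 6))
```

Each assertion mirrors one postulate, with the fair die as the reference case.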
Modeling the impact of common noise inputs on the network activity of retinal ganglion cells
 J COMPUT NEUROSCI
, 2011
"... ..."
Correlated firing among major ganglion cell types in primate retina
 The Journal of Physiology
, 2011
"... Nontechnical summary This paper examines the correlated firing among multiple ganglion cell types in the retina. For many years it has been known that ganglion cells exhibit a tendency to fire simultaneously more or less frequently than would be predicted by chance. However, the particular patterns ..."
Abstract

Cited by 8 (3 self)
Non-technical summary: This paper examines the correlated firing among multiple ganglion cell types in the retina. For many years it has been known that ganglion cells exhibit a tendency to fire simultaneously more or less frequently than would be predicted by chance. However, the particular patterns of correlated activity in the primate retina have been unclear. Here we reveal systematic, distance-dependent correlations between different ganglion cell types. For the most part, the patterns of activity are consistent with a model in which noise in cone photoreceptors propagates through common retinal circuitry, creating correlations among ganglion cell signals.

Abstract: Retinal ganglion cells exhibit substantial correlated firing: a tendency to fire nearly synchronously at rates different from those expected by chance. These correlations suggest that network interactions significantly shape the visual signal transmitted from the eye to the brain. This study describes the degree and structure of correlated firing among the major ganglion cell types in primate retina. Correlated firing among ON and OFF parasol, ON and OFF midget, and small bistratified cells, which together constitute roughly 75% of the input to higher visual areas, was studied using large-scale multi-electrode recordings. Correlated firing in the ...