Results 1-10 of 42
Neural coding and decoding: communication channels and quantization
 Network: Computation in Neural Systems
, 2001
"... We present a novel analytical approach for studying neural encoding. As a
first step we model a neural sensory system as a communication channel.
Using the method of typical sequence in this context, we show that a
coding scheme is an almost bijective relation between equivalence classes of
stimulus ..."
Abstract

Cited by 36 (8 self)
We present a novel analytical approach for studying neural encoding. As a
first step we model a neural sensory system as a communication channel.
Using the method of typical sequences in this context, we show that a
coding scheme is an almost bijective relation between equivalence classes of
stimulus/response pairs. The analysis allows a quantitative determination of the
type of information encoded in neural activity patterns and, at the same time,
identification of the code with which that information is represented. Due to the
high dimensionality of the sets involved, such a relation is extremely difficult
to quantify. To circumvent this problem, and to use whatever limited data set is
available most efficiently, we use another technique from information theory:
quantization. We quantize the neural responses to a reproduction set of small
finite size. Among many possible quantizations, we choose one which preserves
as much of the informativeness of the original stimulus/response relation as
possible, through the use of an information-based distortion function. This
method allows us to study coarse but highly informative approximations of a
coding scheme model, and then to refine them automatically when more data
become available.
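The quantization idea can be sketched with a greedy agglomerative scheme: starting from the finest partition of the responses, repeatedly merge the pair of response classes whose merge sacrifices the least mutual information with the stimulus. This is a minimal illustration, not the authors' algorithm; the joint stimulus/response table `p_sr` and target class count `n_classes` are assumed inputs.

```python
import numpy as np

def mutual_info(p):
    """Mutual information (bits) of a joint distribution p[s, r]."""
    ps = p.sum(axis=1, keepdims=True)
    pr = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log2(p[nz] / (ps * pr)[nz])))

def quantize_responses(p_sr, n_classes):
    """Greedily merge response columns, keeping as much I(S;Z) as possible."""
    groups = [[j] for j in range(p_sr.shape[1])]
    while len(groups) > n_classes:
        best = None
        for a in range(len(groups)):
            for b in range(a + 1, len(groups)):
                # candidate partition with classes a and b merged
                trial = [g for k, g in enumerate(groups) if k not in (a, b)]
                trial.append(groups[a] + groups[b])
                q = np.stack([p_sr[:, g].sum(axis=1) for g in trial], axis=1)
                mi = mutual_info(q)
                if best is None or mi > best[0]:
                    best = (mi, trial)
        groups = best[1]
    return groups
```

By the data-processing inequality, any such quantization can only lose information, so the retained mutual information bounds how informative the coarse code remains.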
Symmetrizing the Kullback-Leibler Distance
 IEEE Transactions on Information Theory
, 2000
"... We define a new distance measure the resistoraverage distance between two probability distributions that is closely related to the KullbackLeibler distance. While the KullbackLeibler distance is asymmetric in the two distributions, the resistoraverage distance is not. It arises from geometric ..."
Abstract

Cited by 24 (0 self)
We define a new distance measure, the resistor-average distance, between two probability distributions that is closely related to the Kullback-Leibler distance. While the Kullback-Leibler distance is asymmetric in the two distributions, the resistor-average distance is not. It arises from geometric considerations similar to those used to derive the Chernoff distance. Determining its relation to well-known distance measures reveals a new way to depict how commonly used distance measures relate to each other.

1 Introduction

The Kullback-Leibler distance [15, 16] is perhaps the most frequently used information-theoretic "distance" measure from a viewpoint of theory. If p_0, p_1 are two probability densities, the Kullback-Leibler distance is defined to be

    D(p_1 || p_0) = ∫ p_1(x) log [ p_1(x) / p_0(x) ] dx .    (1)

In this paper, log(·) has base two. The Kullback-Leibler distance is but one example of the Ali-Silvey class of information-theoretic distance measures [1], which are defined to ...
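The resistor-average distance takes its name from the parallel-resistor formula: its reciprocal is the sum of the reciprocals of the two directed Kullback-Leibler distances. A minimal sketch for discrete distributions (base-2 logs as in the paper; strictly positive pmfs `p` and `q` are assumed inputs):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler distance D(p || q) in bits, for strictly positive pmfs."""
    return float(np.sum(p * np.log2(p / q)))

def resistor_average(p, q):
    """Symmetrized distance: 1/R(p,q) = 1/D(p||q) + 1/D(q||p)."""
    d_pq, d_qp = kl(p, q), kl(q, p)
    return d_pq * d_qp / (d_pq + d_qp)
```

Like parallel resistances, the result is symmetric and never exceeds the smaller of the two directed distances.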
Representational Accuracy of Stochastic Neural Populations
, 2001
"... this article that the choice of a variability model has a major, nontrivial impact on the encoding properties of the neural population. The immense variability of individual response parameters, such as tuning widths or correlation coef#cients, has also been neglected in most previous work. Although ..."
Abstract

Cited by 23 (4 self)
this article that the choice of a variability model has a major, non-trivial impact on the encoding properties of the neural population. The immense variability of individual response parameters, such as tuning widths or correlation coefficients, has also been neglected in most previous work. Although these parameter variations are always found in empirical data, they were considered functionally insignificant, and hence theoretical studies have almost always assumed uniform parameters throughout the population. We will show here that this uniform case is unfavorable in the sense that the introduction of parameter variability improves the encoding performance
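A standard score for such encoding performance is the population Fisher information; for independent Poisson neurons with tuning curves f_i(s), J(s) = Σ_i f_i'(s)² / f_i(s). The sketch below evaluates it for a toy Gaussian-tuned population with uniform versus jittered tuning widths (all parameter values are illustrative assumptions, not values from the paper):

```python
import numpy as np

def fisher_info(s, centers, widths, fmax=30.0):
    """J(s) = sum_i f_i'(s)^2 / f_i(s) for independent Poisson neurons
    with Gaussian tuning curves f_i(s) = fmax * exp(-(s - c_i)^2 / (2 w_i^2))."""
    f = fmax * np.exp(-((s - centers) ** 2) / (2 * widths ** 2))
    f_prime = f * (centers - s) / widths ** 2
    return float(np.sum(f_prime ** 2 / f))

rng = np.random.default_rng(1)
centers = np.linspace(-5.0, 5.0, 50)
j_uniform = fisher_info(0.0, centers, np.full(50, 1.0))
j_jittered = fisher_info(0.0, centers, rng.uniform(0.5, 1.5, size=50))
```

Comparing `j_uniform` and `j_jittered` across stimuli and noise models is the kind of experiment the article pursues; whether jitter helps depends on the variability model, which is precisely the paper's point.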
Dynamic Analyses of Information Encoding in Neural Ensembles
 Neural Computation
, 2004
"... Neural spike train decoding algorithms and techniques to compute Shannon
mutual information are important methods for analyzing how neural
systems represent biological signals.Decoding algorithms are also one of
several strategies being used to design controls for brainmachine interfaces.
Developin ..."
Abstract

Cited by 22 (1 self)
Neural spike train decoding algorithms and techniques to compute Shannon
mutual information are important methods for analyzing how neural
systems represent biological signals. Decoding algorithms are also one of
several strategies being used to design controls for brain-machine interfaces.
Developing optimal strategies to design decoding algorithms and
compute mutual information is therefore an important problem in computational
neuroscience. We present a general recursive filter decoding
algorithm based on a point process model of individual neuron spiking
activity and a linear stochastic state-space model of the biological signal.
We derive from the algorithm new instantaneous estimates of the entropy,
entropy rate, and the mutual information between the signal and
the ensemble spiking activity. We assess the accuracy of the algorithm
by computing, along with the decoding error, the true coverage probability
of the approximate 0.95 confidence regions for the individual signal
estimates. We illustrate the new algorithm by reanalyzing the position
and ensemble neural spiking activity of CA1 hippocampal neurons from
two rats foraging in an open circular environment. We compare the performance
of this algorithm with a linear filter constructed by the widely
used reverse correlation method. The median decoding error for Animal
1 (2) during 10 minutes of open foraging was 5.9 (5.5) cm, the median
entropy was 6.9 (7.0) bits, the median information was 9.4 (9.4) bits, and
the true coverage probability for 0.95 confidence regions was 0.67 (0.75)
using 34 (32) neurons. These findings improve significantly on our previous
results and suggest an integrated approach to dynamically reading
neural codes, measuring their properties, and quantifying the accuracy
with which encoded information is extracted.
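In one dimension, a recursive point-process filter of this general family reduces to a predict/update recursion with a Gaussian posterior approximation. The sketch below follows that generic recipe rather than the paper's exact algorithm, and every model parameter (random-walk state, a single neuron with log-linear rate λ(x) = exp(α + βx)) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt = 2000, 0.001                # time steps and bin width (s)
sig_w = 0.02                       # random-walk state noise std per step
alpha, beta = np.log(20.0), 1.5    # conditional intensity: lambda(x) = exp(alpha + beta*x)

# simulate the latent signal and one neuron's spike train
x = np.cumsum(rng.normal(0.0, sig_w, T))
spikes = rng.random(T) < np.exp(alpha + beta * x) * dt

# recursive point-process filter
x_hat, var, est = 0.0, 1.0, np.empty(T)
for t in range(T):
    x_pred, v_pred = x_hat, var + sig_w**2                 # predict step
    lam = np.exp(alpha + beta * x_pred)
    var = 1.0 / (1.0 / v_pred + beta**2 * lam * dt)        # update posterior precision
    x_hat = x_pred + var * beta * (spikes[t] - lam * dt)   # innovation on spike counts
    est[t] = x_hat
```

The innovation term (observed spikes minus expected spikes, `spikes[t] - lam * dt`) plays the role the prediction error plays in a Kalman filter; the posterior variance at each step is what yields the instantaneous confidence regions and entropy estimates discussed in the abstract.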
An information-theoretic approach to detecting changes in multidimensional data streams
 In Proc. Symp. on the Interface of Statistics, Computing Science, and Applications
, 2006
"... Abstract An important problem in processing large data streams is detecting changes in the underlying distribution that generates the data. The challenge in designing change detection schemes is making them general, scalable, and statistically sound. In this paper, we take a general,informationthe ..."
Abstract

Cited by 22 (1 self)
An important problem in processing large data streams is detecting changes in the underlying distribution that generates the data. The challenge in designing change detection schemes is making them general, scalable, and statistically sound. In this paper, we take a general, information-theoretic approach to the change detection problem, which works for multidimensional as well as categorical data. We use relative entropy, also called the Kullback-Leibler distance, to measure the difference between two given distributions. The KL-distance is known to be related to the optimal error in determining whether the two distributions are the same, and draws on fundamental results in hypothesis testing. The KL-distance also generalizes traditional distance measures in statistics, and has invariance properties that make it ideally suited for comparing distributions. Our scheme is general; it is nonparametric and requires no assumptions on the underlying distributions. It employs a statistical inference procedure based on the theory of bootstrapping, which allows us to determine whether our measurements are statistically significant. The scheme is also quite flexible from a practical perspective; it can be implemented using any spatial partitioning scheme that scales well with dimensionality. In addition to providing change detections, our method generalizes Kulldorff's spatial scan statistic, allowing us to quantitatively identify specific regions in space where large changes have occurred. We provide a detailed experimental study that demonstrates the generality and efficiency of our approach with different kinds of multidimensional datasets, both synthetic and real.

1 Introduction

We are collecting and storing data in unprecedented quantities and varieties: streams, images, audio, text, metadata descriptions, and even simple numbers. Over time, these data streams change as the underlying processes that generate them change. Some changes are spurious and pertain to glitches in the data. Some are genuine, caused by changes in the underlying distributions. Some changes are gradual and some are more precipitous. We would like to detect changes in a variety of settings:
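A minimal one-dimensional sketch of such a bootstrap-calibrated KL test follows. Binned histograms stand in for the paper's spatial partitioning, and the window sizes, bin count, smoothing constant, and significance level are illustrative assumptions:

```python
import numpy as np

def kl_hist(a, b, edges, eps=1e-6):
    """KL distance (bits) between smoothed histograms of two sample windows."""
    pa, _ = np.histogram(a, bins=edges)
    pb, _ = np.histogram(b, bins=edges)
    pa = (pa + eps) / (pa + eps).sum()
    pb = (pb + eps) / (pb + eps).sum()
    return float(np.sum(pa * np.log2(pa / pb)))

def change_detected(win0, win1, n_boot=200, alpha=0.05, seed=0):
    """Permutation-bootstrap test: is KL(win0, win1) larger than expected by chance?"""
    rng = np.random.default_rng(seed)
    edges = np.histogram_bin_edges(np.concatenate([win0, win1]), bins=16)
    observed = kl_hist(win0, win1, edges)
    pooled = np.concatenate([win0, win1])
    null = []
    for _ in range(n_boot):
        s = rng.permutation(pooled)       # resample under "no change"
        null.append(kl_hist(s[:len(win0)], s[len(win0):], edges))
    # flag a change if the observed KL exceeds the (1 - alpha) null quantile
    return bool(observed > np.quantile(null, 1 - alpha))
```

The permutation step is what makes the measurement statistically sound: it calibrates the KL statistic against its sampling fluctuations instead of using an arbitrary fixed threshold.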
Color learning on a mobile robot: Towards full autonomy under changing illumination
 In The International Joint Conference on Artificial Intelligence (IJCAI)
, 2007
"... A central goal of robotics and AI is to be able to deploy an agent to act autonomously in the real world over an extended period of time. It is commonly asserted that in order to do so, the agent must be able to learn to deal with unexpected environmental conditions. However an ability to learn is n ..."
Abstract

Cited by 19 (10 self)
A central goal of robotics and AI is to be able to deploy an agent to act autonomously in the real world over an extended period of time. It is commonly asserted that in order to do so, the agent must be able to learn to deal with unexpected environmental conditions. However, an ability to learn is not sufficient. For true extended autonomy, an agent must also be able to recognize when to abandon its current model in favor of learning a new one, and how to learn in its current situation. This paper presents a fully implemented example of such autonomy in the context of color map learning on a vision-based mobile robot for the purpose of image segmentation. Past research established the ability of a robot to learn a color map in a single fixed lighting condition when manually given a "curriculum," an action sequence designed to facilitate learning. This paper introduces algorithms that enable a robot to i) devise its own curriculum; and ii) recognize when the lighting conditions have changed sufficiently to warrant learning a new color map.
A Nearest-Neighbor Approach to Estimating Divergence between Continuous Random Vectors
, 2006
"... A method for divergence estimation between multidimensional distributions based on nearest neighbor distances is proposed. Given i.i.d. samples, both the bias and the variance of this estimator are proven to vanish as sample sizes go to infinity. In experiments on highdimensional data, the nearest ..."
Abstract

Cited by 15 (1 self)
A method for divergence estimation between multidimensional distributions based on nearest neighbor distances is proposed. Given i.i.d. samples, both the bias and the variance of this estimator are proven to vanish as sample sizes go to infinity. In experiments on high-dimensional data, the nearest neighbor approach generally exhibits faster convergence compared to previous algorithms based on partitioning.
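A common 1-nearest-neighbor estimator of this kind compares, for each sample from p, its nearest-neighbor distance within its own sample set against its nearest-neighbor distance to the other sample set. The brute-force pairwise distances below are an illustrative simplification of what a spatial index would do at scale:

```python
import numpy as np

def knn_divergence(x, y):
    """1-nearest-neighbor estimate of D(p || q) in nats, from samples
    x ~ p (shape (n, d)) and y ~ q (shape (m, d))."""
    n, d = x.shape
    m = y.shape[0]
    # rho_i: distance from x_i to its nearest OTHER sample in x
    dxx = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(dxx, np.inf)
    rho = dxx.min(axis=1)
    # nu_i: distance from x_i to its nearest sample in y
    dxy = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    nu = dxy.min(axis=1)
    return float(d / n * np.log(nu / rho).sum() + np.log(m / (n - 1)))
```

Intuitively, where q places little mass near the samples of p, the cross distances nu grow relative to the within-sample distances rho, driving the estimate up.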
Toward a Theory of Information Processing
 IEEE Trans. Signal Processing
, 2002
"... Information processing theory endeavors to quantify how well signals encode information and how well systems, by acting on signals, process information. We use informationtheoretic distance measures, the KullbackLeibler distance in particular, to quantify how well signals represent information. ..."
Abstract

Cited by 12 (5 self)
Information processing theory endeavors to quantify how well signals encode information and how well systems, by acting on signals, process information. We use information-theoretic distance measures, the Kullback-Leibler distance in particular, to quantify how well signals represent information. The ratio of distances between a system's output and input quantifies the system's information processing properties.
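For a concrete toy case (a hypothetical linear-Gaussian system, not an example from the paper), this distance ratio can be computed in closed form, and by the data-processing inequality it cannot exceed one:

```python
import numpy as np

def kl_gauss(m0, v0, m1, v1):
    """Kullback-Leibler distance D(N(m0,v0) || N(m1,v1)) in bits."""
    return 0.5 * (np.log2(v1 / v0)
                  + (v0 + (m0 - m1) ** 2) / (v1 * np.log(2))
                  - 1.0 / np.log(2))

# system y = a*x + noise; two input hypotheses x ~ N(0,1) vs x ~ N(mu,1)
a, sig_n, mu = 2.0, 1.0, 1.5

d_in = kl_gauss(0.0, 1.0, mu, 1.0)
# corresponding outputs: N(0, a^2 + sig_n^2) vs N(a*mu, a^2 + sig_n^2)
vy = a**2 + sig_n**2
d_out = kl_gauss(0.0, vy, a * mu, vy)

gamma = d_out / d_in   # information transfer ratio, <= 1
```

Here the ratio collapses to a²/(a² + σₙ²): the system passes information through perfectly as the noise vanishes, and destroys it as the noise dominates.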
Quantifying Statistical Interdependence by Message Passing on Graphs - Part II: Multi-Dimensional Point Processes
, 2009
"... Stochastic event synchrony is a technique to quantify the similarity of pairs of signals. First, “events” are extracted from the two given time series. Next, one tries to align events from one time series with events from the other. The better the alignment, the more similar the two time series are ..."
Abstract

Cited by 12 (10 self)
Stochastic event synchrony is a technique to quantify the similarity of pairs of signals. First, "events" are extracted from the two given time series. Next, one tries to align events from one time series with events from the other. The better the alignment, the more similar the two time series are considered to be. Part I considered one-dimensional events; this paper (Part II) concerns multi-dimensional events. Although the basic idea is similar, the extension to multi-dimensional point processes involves a significantly harder combinatorial problem and is therefore non-trivial. In the multi-dimensional case too, the problem of jointly computing the pairwise alignment and SES parameters is cast as a statistical inference problem. This problem is solved by coordinate descent, more specifically, by alternating the following two steps: (i) one estimates the SES parameters from a given pairwise alignment; (ii) with the resulting estimates, one refines the pairwise alignment. The SES parameters are computed by maximum a posteriori (MAP) estimation (Step 1), in ...
From pixels to multi-robot decision-making: A study in uncertainty
 Robotics and Autonomous Systems: Special issue on Planning Under Uncertainty in Robotics
, 2006
"... Mobile robots must cope with uncertainty from many sources along the path from interpreting raw sensor inputs to behavior selection to execution of the resulting primitive actions. This article identifies several such sources and introduces methods for i) reducing uncertainty and ii) making decision ..."
Abstract

Cited by 11 (9 self)
Mobile robots must cope with uncertainty from many sources along the path from interpreting raw sensor inputs to behavior selection to execution of the resulting primitive actions. This article identifies several such sources and introduces methods for i) reducing uncertainty and ii) making decisions in the face of uncertainty. We present a complete vision-based robotic system that includes several algorithms for learning models that are useful and necessary for planning, and then place particular emphasis on the planning and decision-making capabilities of the robot. Specifically, we present models for autonomous color calibration, autonomous sensor and actuator modeling, and an adaptation of particle filtering for improved localization on legged robots. These contributions enable effective planning under uncertainty for robots engaged in goal-oriented behavior within a dynamic, collaborative and adversarial environment. Each of our algorithms is fully implemented and tested on a commercial off-the-shelf vision-based quadruped robot.