Results 1–10 of 42
Sequential ideal-observer analysis of visual discriminations
Psychological Review, 1989
Abstract

Cited by 62 (4 self)
Visual stimuli contain a limited amount of information that could potentially be used to perform a given visual task. At successive stages of visual processing, some of this information is lost and some is transmitted to higher stages. This article describes a new analysis, based on the concept of the ideal observer in signal detection theory, that allows one to trace the flow of discrimination information through the initial physiological stages of visual processing, for arbitrary spatiochromatic stimuli. This ideal-observer analysis provides a rigorous means of measuring the information content of visual stimuli and of assessing the contribution of specific physiological mechanisms to discrimination performance. Here, the analysis is developed for the physiological mechanisms up to the level of the photoreceptor. It is shown that many psychophysical phenomena previously attributed to neural mechanisms may be explained by variations in the information content of the stimuli and by preneural mechanisms. The purpose of vision is to extract and represent information about the physical environment from the light that is emitted, transmitted, or reflected by objects and surfaces. In order to extract useful information, a visual system must be able to encode
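The core computation behind such an ideal-observer analysis can be illustrated with a small Monte-Carlo sketch (the mean photon counts and trial numbers here are hypothetical, and the function names are mine): an observer that knows the Poisson photon-count distributions produced by two stimuli decides by the sign of the log-likelihood ratio, and its percent correct is an upper bound on what any later processing stage can achieve.

```python
import math
import random

def ideal_observer_pc(mean_a, mean_b, n_trials=50_000, seed=0):
    """Monte-Carlo percent correct of an ideal observer discriminating two
    stimuli whose photon-absorption counts are Poisson with the given means."""
    rng = random.Random(seed)

    def choose_b(k):
        # log-likelihood ratio for count k: k*ln(b/a) - (b - a); pick B if > 0
        return k * math.log(mean_b / mean_a) - (mean_b - mean_a) > 0

    def poisson(lam):
        # Knuth's multiplicative algorithm (adequate for modest rates)
        threshold, k, p = math.exp(-lam), 0, 1.0
        while True:
            p *= rng.random()
            if p <= threshold:
                return k
            k += 1

    correct = 0
    for _ in range(n_trials):
        if rng.random() < 0.5:
            correct += not choose_b(poisson(mean_a))
        else:
            correct += choose_b(poisson(mean_b))
    return correct / n_trials
```

A larger separation between the two mean counts yields a higher ideal percent correct, which is the sense in which stimulus information content alone, before any neural processing, constrains discrimination performance.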
Estimating a State-Space Model from Point Process Observations
2003
Abstract

Cited by 39 (4 self)
A widely used signal processing paradigm is the state-space model. The state-space model is defined by two equations: an observation equation that describes how the hidden state or latent process is observed and a state equation that defines the evolution of the process through time. Inspired by neurophysiology experiments in which neural spiking activity is induced by an implicit (latent) stimulus, we develop an algorithm to estimate a state-space model observed through point process measurements. We represent the latent process modulating the neural spiking activity as a Gaussian autoregressive model driven by an external stimulus. Given the latent process, neural spiking activity is characterized as a general point process defined by its conditional intensity function. We develop an approximate expectation-maximization (EM) algorithm to estimate the unobservable state-space process, its parameters, and the parameters of the point process. The EM algorithm combines a point process recursive nonlinear filter algorithm, the fixed-interval smoothing algorithm, and the state-space covariance algorithm to compute the complete data log likelihood efficiently. We use a Kolmogorov-Smirnov test based on the time-rescaling theorem to evaluate agreement between the model and point process data. We illustrate the model with two simulated data examples: an ensemble of Poisson neurons driven by a common stimulus and a single neuron whose conditional intensity function is approximated as a local Bernoulli process.
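The generative side of this model can be sketched in a few lines (all parameter values below are made up for illustration, not taken from the paper): a Gaussian AR(1) latent state evolves by the state equation, drives spiking through a log-linear conditional intensity, and spikes are drawn as a local Bernoulli approximation of the point process.

```python
import math
import random

def simulate_state_space_spikes(n_bins=5000, dt=0.001, rho=0.98,
                                sigma=0.1, beta0=3.0, beta1=1.0, seed=7):
    """Simulate a latent AR(1) state x_k and a spike train observed from it.

    State equation:        x_k = rho * x_{k-1} + eps_k,  eps_k ~ N(0, sigma^2)
    Conditional intensity: lambda_k = exp(beta0 + beta1 * x_k)   (spikes/s)
    Observation:           spike in bin k with prob ~ lambda_k * dt (Bernoulli)
    """
    rng = random.Random(seed)
    x = 0.0
    states, spikes = [], []
    for _ in range(n_bins):
        x = rho * x + rng.gauss(0.0, sigma)      # state equation
        lam = math.exp(beta0 + beta1 * x)        # conditional intensity
        p = min(1.0, lam * dt)                   # local Bernoulli approximation
        spikes.append(1 if rng.random() < p else 0)
        states.append(x)
    return states, spikes
```

An EM fit of the kind the abstract describes would then treat `states` as unobserved and recover the state and intensity parameters from `spikes` alone; the sketch above only covers the forward model.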
Optimal Stack Filtering and the Estimation and Structural Approaches to Image Processing
1989
Abstract

Cited by 30 (11 self)
Rank-order based filters such as stack filters, multilevel and multistage median filters, morphological filters, and order statistic filters have all proven to be very effective at enhancing and restoring images. Perhaps the primary reason for their success is that they can suppress noise without destroying important image details such as edges and lines. Two approaches have been used in the past to design rank-order based nonlinear filters to enhance or restore images. They may be called the structural approach and the estimation approach. The first approach requires structural descriptions of the image and the process which has altered it, while the second requires statistical descriptions. The many different classes of rank-order based filters that have been developed over the last few decades are reviewed in the context of these two approaches. One of these filter classes, stack filters, then becomes the focus of the rest of the paper. These filters, which are defined by a weak superposition property and an ordering property, contain all compositions of 2D rank-order operations. The recently developed theory of minimum mean absolute error (MMAE) stack filtering is reviewed and extended to two dimensions. Then, a theory of optimal stack filtering under structural constraints and goals is developed for the structural approach to image processing. These two optimal stack filtering theories are then combined into a single design theory for rank-order based filters.
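A minimal illustration of the rank-order idea, using a plain square-window median filter (the simplest member of this family, not the optimal stack filters the paper designs): the median discards isolated impulse-noise outliers while leaving a step edge in place, which is exactly the edge-preserving behavior the abstract credits these filters with.

```python
def median_filter_2d(img, radius=1):
    """Apply a (2*radius+1)-square median filter to a 2D list of numbers.
    Border pixels use the clipped (smaller) window."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # collect the rank-order window around (i, j), clipped at borders
            window = [img[ii][jj]
                      for ii in range(max(0, i - radius), min(h, i + radius + 1))
                      for jj in range(max(0, j - radius), min(w, j + radius + 1))]
            window.sort()
            out[i][j] = window[len(window) // 2]   # the median rank
    return out
```

On an image that is 0 on the left, 100 on the right, with one 255-valued salt pixel, the filter removes the impulse yet keeps the 0/100 edge sharp, where a linear averaging filter would blur both.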
A Method for Selecting the Bin Size of a Time Histogram
2007
Abstract

Cited by 24 (3 self)
The time histogram method is the most basic tool for capturing a time-dependent rate of neuronal spikes. Generally in the neurophysiological literature, the bin size that critically determines the goodness of the fit of the time histogram to the underlying spike rate has been subjectively selected by individual researchers. Here, we propose a method for objectively selecting the bin size from the spike count statistics alone, so that the resulting bar or line graph time histogram best represents the unknown underlying spike rate. For a small number of spike sequences generated from a modestly fluctuating rate, the optimal bin size may diverge, indicating that any time histogram is likely to capture a spurious rate. Given a paucity of data, the method presented here can nevertheless suggest how many experimental trials should be added in order to obtain a meaningful time-dependent histogram with the required accuracy.
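The spike-count statistics the method relies on can be sketched as follows, using the cost function commonly stated for this bin-size selection rule, C(Δ) = (2k̄ − v)/Δ², where k̄ and v are the mean and (biased) variance of the spike counts across bins of width Δ; the bin size minimizing C best approximates the underlying rate. Function and variable names below are mine, not the paper's.

```python
def histogram_cost(spike_times, t_start, t_end, n_bins):
    """Cost C(delta) = (2*mean - var) / delta**2 of a time histogram with
    n_bins bins, computed from the per-bin spike counts alone."""
    delta = (t_end - t_start) / n_bins
    counts = [0] * n_bins
    for t in spike_times:
        idx = min(int((t - t_start) / delta), n_bins - 1)
        counts[idx] += 1
    mean = sum(counts) / n_bins
    var = sum((c - mean) ** 2 for c in counts) / n_bins  # biased variance
    return (2 * mean - var) / delta ** 2

def best_bin_count(spike_times, t_start, t_end, candidates=range(2, 60)):
    """Pick the candidate bin count that minimizes the cost."""
    return min(candidates, key=lambda n: histogram_cost(spike_times, t_start, t_end, n))
```

For perfectly regular spikes the count variance vanishes and the cost grows with the number of bins, so the coarsest candidate wins; for a genuinely fluctuating rate the variance term rewards finer binning, and the divergence the abstract mentions shows up as the minimum running off to ever-coarser bins.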
Remarks concerning graphical models for time series and point processes
Revista de Econometria, 1996
Abstract

Cited by 21 (3 self)
A statistical network is a collection of nodes representing random variables and a set of edges that connect the nodes. A probabilistic model for such a network is called a graphical model. These models, graphs, and networks are particularly useful for examining statistical dependencies based on conditioning, as often occurs in economics and statistics. In this paper the nodal random variables will be time series or point processes. The cases of undirected and directed graphs are the focus.
Translated Poisson mixture model for stratification learning
Int. J. Comput. Vision, 2000
Abstract

Cited by 16 (2 self)
A framework for the regularized and robust estimation of non-uniform dimensionality and density in high-dimensional noisy data is introduced in this work. This leads to learning stratifications, that is, mixtures of manifolds representing different characteristics and complexities in the data set. The basic idea relies on modeling the high-dimensional sample points as a process of Translated Poisson mixtures, with regularizing restrictions, leading to a model which includes the presence of noise. The Translated Poisson distribution is useful to model a noisy counting process, and it is derived from the noise-induced translation of a regular Poisson distribution. By maximizing the log-likelihood of the process counting the points falling into a local ball, we estimate the local dimension and density. We show that ...
Diagnosability of Stochastic Discrete-Event Systems
IEEE Trans. Autom. Control, 2005
Abstract

Cited by 14 (0 self)
We investigate diagnosability of stochastic discrete-event systems. We define the notions of A- and AA-diagnosability for stochastic automata; these notions are weaker than the corresponding notion of diagnosability for logical automata introduced by Sampath et al. Through the construction of a stochastic diagnoser, we determine offline conditions necessary and sufficient to guarantee A-diagnosability and sufficient to guarantee AA-diagnosability. We also show how the stochastic diagnoser can be used for online diagnosis of failure events. We illustrate the results through two examples from HVAC systems. Index Terms: Discrete-event systems, failure detection, fault diagnosis, probabilistic models, stochastic automata.
Stratification learning: Detecting mixed density and dimensionality in high dimensional point clouds
Advances in NIPS 19, 2006
Abstract

Cited by 10 (2 self)
The study of point cloud data sampled from a stratification, a collection of manifolds with possibly different dimensions, is pursued in this paper. We present a technique for simultaneous soft clustering and estimation of the mixed dimensionality and density of such structures. The framework is based on a maximum likelihood estimation of a Poisson mixture model. The presentation of the approach is completed with artificial and real examples demonstrating the importance of extending manifold learning to stratification learning.
Forecasting earthquakes and earthquake risk
International Journal of Forecasting, 1995
Abstract

Cited by 8 (2 self)
This paper reviews issues, models, and methodologies arising out of the problems of predicting earthquakes and forecasting earthquake risk. The emphasis is on statistical methods which attempt to quantify the probability of an earthquake occurring within specified time, space, and magnitude windows. One recurring theme is that such probabilities are best developed from models which specify a time-varying conditional intensity (conditional probability per unit time, area or volume, and magnitude interval) for every point in the region under study. The paper comprises three introductory sections and three substantive sections. The former outline the current state of earthquake prediction, earthquakes and their parameters, and the point process background. The latter cover the estimation of background risk, the estimation of time-varying risk, and some specific examples of models and prediction algorithms. The paper concludes with some brief comments on the links between forecasting earthquakes and other forecasting problems.
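A point process with a time-varying conditional intensity can be simulated directly; the sketch below uses Lewis-Shedler thinning with a made-up exponentially decaying intensity (the rate function and its parameters are hypothetical illustrations, not a calibrated seismicity model).

```python
import math
import random

def thinning_sample(intensity, t_end, lam_max, seed=0):
    """Lewis-Shedler thinning: sample event times on (0, t_end] from a
    time-varying intensity lambda(t) bounded above by lam_max.
    Candidates are drawn from a homogeneous process of rate lam_max and
    each is accepted with probability lambda(t) / lam_max."""
    rng = random.Random(seed)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(lam_max)              # next candidate time
        if t > t_end:
            return events
        if rng.random() < intensity(t) / lam_max:  # accept w.p. lambda(t)/lam_max
            events.append(t)

def aftershock_rate(t):
    # hypothetical aftershock-like decaying intensity (events per unit time)
    return 5.0 * math.exp(-0.5 * t)

events = thinning_sample(aftershock_rate, 10.0, 5.0)
```

The same machinery extends to space and magnitude by thinning in those coordinates as well, which is how conditional-intensity risk models of the kind the paper discusses are typically simulated.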
A recipe for optimizing a time-histogram
Advances in Neural Information Processing Systems 19, 2007
Abstract

Cited by 8 (0 self)
The time-histogram method is a handy tool for capturing the instantaneous rate of spike occurrence. In most of the neurophysiological literature, the bin size that critically determines the goodness of the fit of the time-histogram to the underlying rate has been selected by individual researchers in an unsystematic manner. We propose an objective method for selecting the bin size of a time-histogram from the spike data, so that the time-histogram best approximates the unknown underlying rate. The resolution of the histogram increases, or the optimal bin size decreases, with the number of spike sequences sampled. It is notable that the optimal bin size diverges if only a small number of experimental trials are available from a moderately fluctuating rate process. In this case, any attempt to characterize the underlying spike rate will lead to spurious results. Given a paucity of data, our method can also suggest how many more trials are needed until the set of data can be analyzed with the required resolution.