Results 1–8 of 8
Detection of an Anomalous Cluster in a Network, 2010
Abstract

Cited by 17 (3 self)
We consider the problem of detecting whether or not in a given sensor network, there is a cluster of sensors which exhibit an “unusual behavior.” Formally, suppose we are given a set of nodes and attach a random variable to each node. We observe a realization of this process and want to decide between the following two hypotheses: under the null, the variables are i.i.d. standard normal; under the alternative, there is a cluster of variables that are i.i.d. normal with positive mean and unit variance, while the rest are i.i.d. standard normal. We also address surveillance settings where each sensor in the network collects information over time. The resulting model is similar, now with a time series attached to each node. We again observe the process over time and want to decide between the null, where all the variables are i.i.d. standard normal; and the alternative, where there is an emerging cluster of i.i.d. normal variables with positive mean and unit variance. The growth models used to represent the emerging cluster are quite general, and in particular include cellular automata used in modelling epidemics. In both settings, we consider classes of clusters that are quite general, for which we obtain a lower bound on their respective minimax detection rate, and show that some form of scan statistic, by far the most popular method in practice, achieves that same rate within a logarithmic factor. Our results are not limited to the normal location model, but generalize to any one-parameter exponential family when the anomalous clusters are large enough.
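As a rough illustration of the scan-statistic idea described above, the sketch below scans a 1-D sequence over candidate window sizes and reports the largest standardized window sum. This is a hypothetical simplification, not the paper's construction (the paper's cluster classes and growth models are far more general); all names and parameters here are illustrative.

```python
import numpy as np

def scan_statistic(x, window_sizes):
    """Largest standardized window sum over all candidate windows.

    Under the null (i.i.d. N(0,1)) each standardized sum is N(0,1);
    a window covering a cluster with positive mean inflates the maximum.
    """
    c = np.concatenate(([0.0], np.cumsum(x)))
    best = -np.inf
    for w in window_sizes:
        sums = c[w:] - c[:-w]              # all length-w window sums
        best = max(best, sums.max() / np.sqrt(w))
    return best

rng = np.random.default_rng(0)
background = rng.standard_normal(1000)     # pure-noise sample (the null)
anomalous = background.copy()
anomalous[400:430] += 1.5                  # planted cluster with mean 1.5
```

With the cluster planted, the statistic on `anomalous` clearly dominates the one on `background`, reflecting the detection-rate story of the abstract.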
False Discovery Rates for Random Fields, 2003
Abstract

Cited by 4 (0 self)
This paper extends False Discovery Rates to random fields, where there are uncountably many hypothesis tests. This provides a method for finding local regions in the field where there is a significant signal while controlling either the proportion of area or the number of clusters in which false rejections occur. We develop confidence envelopes for the proportion of false discoveries as a function of the rejection threshold. This yields algorithms for constructing a confidence superset for the locations of the true nulls. From this we derive rejection thresholds that control the mean and quantiles of the proportion of false discoveries. We apply this method to scan statistics and functional neuroimaging.
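The paper's contribution is the continuum extension (uncountably many tests, confidence envelopes); as a hedged finite-grid illustration only, the sketch below applies the ordinary Benjamini–Hochberg step-up rule to a discretized p-value field with one elevated block. This is not the paper's envelope or superset construction.

```python
import math
import numpy as np

def bh_threshold(pvals, q=0.05):
    """Benjamini-Hochberg step-up threshold for a (flattened) field of p-values."""
    p = np.sort(np.asarray(pvals, dtype=float).ravel())
    m = p.size
    ok = np.nonzero(p <= q * np.arange(1, m + 1) / m)[0]
    return p[ok[-1]] if ok.size else 0.0

# toy discretized field: N(0,1) background with one elevated 10x10 block
rng = np.random.default_rng(1)
z = rng.standard_normal((50, 50))
z[10:20, 10:20] += 5.0
pvals = 0.5 * np.vectorize(math.erfc)(z / math.sqrt(2.0))  # one-sided p-values
rejected = pvals <= bh_threshold(pvals, q=0.05)            # boolean rejection map
```

The rejection map concentrates on the elevated block while keeping false rejections in the background sparse, which is the finite-grid analogue of controlling the proportion of falsely flagged area.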
A Semi-Local Paradigm for Wavelet Denoising
Abstract

Cited by 2 (0 self)
Wavelet denoising methods have been proven useful for many one- and two-dimensional problems. Most existing methods can in principle be carried over to three-dimensional problems, such as the denoising of volumetric positron emission tomography (PET) images, but they may not be sufficiently flexible in allowing some regions of an image to be denoised more aggressively than others. In this paper, we propose a semi-local paradigm for wavelet denoising. The semi-local paradigm involves the division of an image into suitable blocks, which are then individually denoised. To denoise the blocks, we use our modification of the generalized cross validation (GCV) technique of Jansen and Bultheel [1] to choose thresholding parameters; we also present risk estimators to guide some of the other choices involved in the implementation. Experiments with phantom PET images show that the semi-local paradigm provides superior denoising compared to standard application of the GCV technique. An asymptotic analysis demonstrates that, under some regularity conditions, semi-local denoising is asymptotically consistent on the logarithmic scale. The paper concludes with a discussion on the nature of semi-local denoising and some topics for future research. Index Terms: imaging, logarithmic consistency, positron emission tomography, thresholding. EDICS: 2-WAVP.
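The abstract's per-block GCV thresholding can be sketched as follows. This is a minimal illustration using the standard Jansen–Bultheel GCV form applied directly to a block of coefficients; the paper's *modified* GCV and its risk estimators are not reproduced here, and all function names are illustrative.

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: shrink each coefficient toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def gcv(coeffs, t):
    """Standard GCV score for soft threshold t (Jansen-Bultheel form):
    GCV(t) = n * ||y - soft(y, t)||^2 / n0(t)^2,
    where n0(t) counts the coefficients set to zero."""
    n0 = np.count_nonzero(np.abs(coeffs) <= t)
    resid = coeffs - soft(coeffs, t)
    return coeffs.size * np.sum(resid ** 2) / max(n0, 1) ** 2

def denoise_block(coeffs, grid):
    """Pick the grid threshold minimizing GCV, then threshold the block."""
    t_best = min(grid, key=lambda t: gcv(coeffs, t))
    return soft(coeffs, t_best), t_best
```

In the semi-local scheme each block gets its own data-driven threshold, which is what lets some regions be denoised more aggressively than others.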
Statistical Inference and Visualization in Scale-Space for Spatially Dependent Images, 2010
Abstract
SiZer (SIgnificant ZERo crossing of the derivatives) is a graphical scale-space visualization tool that allows for statistical inferences. In this paper we develop a spatial SiZer for finding significant features and conducting goodness-of-fit tests for spatially dependent images. The spatial SiZer utilizes a family of kernel estimates of the image and provides not only exploratory data analysis but also statistical inference with spatial correlation taken into account. It is also capable of comparing the observed image with a specific null model being tested by adjusting the statistical inference using an assumed covariance structure. Pixel locations having statistically significant differences between the image and a given null model are highlighted by arrows. The spatial SiZer is compared with the existing independent SiZer via the analysis of simulated data with and without signal on both planar and spherical domains. We apply the spatial SiZer method to the decadal temperature change over some regions of the Earth.
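A bare-bones version of the underlying idea, kernel-smooth an image at one scale and flag pixels whose smoothed value is large relative to its standard error, can be sketched as below. Note this assumes i.i.d. noise; the spatial SiZer's actual contribution is adjusting this inference for spatial correlation, which is not modeled here, and all names are illustrative.

```python
import numpy as np

def gauss_kernel(h):
    """Normalized 1-D Gaussian kernel at bandwidth (scale) h."""
    r = int(3 * h)
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2.0 * h ** 2))
    return k / k.sum()

def significant_pixels(img, h, sigma=1.0, zcut=3.0):
    """Flag pixels whose Gaussian-smoothed value at scale h exceeds
    zcut standard errors, assuming i.i.d. N(0, sigma^2) pixel noise."""
    k = gauss_kernel(h)
    # separable smoothing: rows, then columns
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, img)
    smooth = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, rows)
    se = sigma * np.sum(k ** 2)   # sd of separably smoothed i.i.d. noise
    return np.abs(smooth) > zcut * se
```

Running this over a family of bandwidths `h` gives the scale-space family of significance maps that SiZer-style tools visualize.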
Summary, 2009
Abstract
Under a general loss function, we develop a hypothesis test to determine whether a significant difference in the spatial forecasts produced by two competing models exists on average across the entire spatial domain of interest. The null hypothesis is that of no difference, and a spatial loss differential is created based on the observed data, the two sets of forecasts, and the loss function chosen by the researcher. The test assumes only isotropy and short-range spatial dependence of the loss differential but does allow it to be non-Gaussian, nonzero mean, and spatially correlated. Constant and non-constant spatial trends in the loss differential are treated in two separate cases. Monte Carlo simulations illustrate the size and power properties of this test, and an example based on daily average wind speeds in Oklahoma is used for illustration. The test is also compared to a wavelet-based method presented by Shen et al. (2002) that is designed to test for a spatial signal at every location in the domain.
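The loss-differential construction can be sketched as follows: form the pointwise loss differential on a grid, then test whether its spatial mean is zero, with the variance estimated from block means so that short-range dependence is tolerated. This block-based variance estimate is a stand-in for the paper's actual variance estimator; names and parameters are illustrative.

```python
import numpy as np

def loss_differential_test(obs, fcst1, fcst2, loss=lambda e: e ** 2, b=5):
    """z-statistic for H0: E[loss(obs - fcst1) - loss(obs - fcst2)] = 0.

    The spatial mean's variance is estimated from non-overlapping b x b
    block means, which tolerates short-range spatial dependence in the
    loss differential."""
    d = loss(obs - fcst1) - loss(obs - fcst2)
    nI, nJ = d.shape
    blocks = np.array([d[i:i + b, j:j + b].mean()
                       for i in range(0, nI - b + 1, b)
                       for j in range(0, nJ - b + 1, b)])
    se = blocks.std(ddof=1) / np.sqrt(blocks.size)
    return blocks.mean() / se
```

A strongly negative statistic means the first forecast incurs systematically lower loss; swapping the two forecasts flips the sign.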
MICE – Multiple-peak Identification, Characterization and Estimation, 2005
Abstract
MICE is a general procedure for estimating a lower bound for the number of components and for estimating their parameters in an additive regression model. The method consists of a series of steps: a preliminary step for separating the signal from the background followed by identification of local maxima up to a noise level-dependent threshold, estimation of the component parameters using an iterative algorithm, and detection of mixtures of components within one local maximum using hypothesis testing. The leading example is a nuclear magnetic resonance (NMR) experiment for protein structure determination. After applying a Fourier transform to the NMR signals, NMR frequency data are multiple-peak data, where each peak corresponds to one component in the additive regression model. In this example, the primary objective is accurate estimation of the location parameters. Key words and phrases: mixture regression model, tensor-product wavelet decomposition, noise level-dependent threshold, backfitting, mixture detection, nuclear magnetic resonance, protein structure determination.
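The "local maxima above a noise level-dependent threshold" step can be sketched as below. The universal threshold sigma * sqrt(2 log n) is used here as a plausible stand-in; the paper's actual threshold, iterative parameter estimation, and mixture-detection steps are not reproduced.

```python
import numpy as np

def find_peaks_above(y, sigma):
    """Indices of strict local maxima exceeding a noise-level-dependent
    threshold (universal threshold sigma * sqrt(2 log n) as a stand-in)."""
    n = len(y)
    thr = sigma * np.sqrt(2.0 * np.log(n))
    idx = [i for i in range(1, n - 1)
           if y[i] > y[i - 1] and y[i] > y[i + 1] and y[i] > thr]
    return idx, thr
```

On a spectrum with two well-separated Gaussian-shaped peaks, the procedure returns one candidate location per peak, which then seeds the component-parameter estimation.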
unknown title, 2004
Abstract
A comparative evaluation of wavelet-based methods for hypothesis testing of brain activation maps