Results 1–10 of 98
Distributed detection in sensor networks with packet losses and finite capacity links
 IEEE Transactions on Signal Processing, 2006
Abstract

Cited by 35 (1 self)
We consider a multi-object detection problem over a sensor network (SNET) with limited-range multimodal sensors. A limited-range sensing environment arises in a sensing field prone to signal attenuation and path losses. The general problem complements the widely considered decentralized detection problem, in which all sensors observe the same object. In this paper we develop a distributed detection approach based on recent developments in the false discovery rate (FDR) and the associated Benjamini–Hochberg (BH) test procedure. The BH procedure is based on rank ordering of scalar test statistics. We first develop scalar test statistics for multidimensional data to handle multimodal sensor observations and establish their optimality with respect to the BH procedure. We then propose a distributed algorithm, in the ideal case of infinite attenuation, for identifying sensors that are in the immediate vicinity of an object. We demonstrate communication-message scalability to large SNETs by showing that the upper bound on the communication message complexity scales linearly with the number of sensors in the vicinity of objects and is independent of the total number of sensors in the SNET. This brings forth an important principle for evaluating the performance of an SNET: communications and performance should scale with the number of objects or events in the SNET, irrespective of network size. We then account for finite attenuation by modeling sensor observations as corrupted by uncertain interference arising from distant objects and develop robust extensions to our idealized distributed scheme. The robustness properties ensure that both the error performance and the communication message complexity degrade gracefully with interference.
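The BH procedure referenced in this abstract is the standard Benjamini–Hochberg step-up test on ranked p-values. A minimal sketch in plain Python (the function name and example values are illustrative, not from the paper):

```python
def bh_procedure(pvals, q=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns the indices of the rejected null hypotheses while
    controlling the FDR at level q (valid under independence and
    certain positive dependence).
    """
    m = len(pvals)
    # Rank the p-values, remembering their original positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * q.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    # Reject the k_max hypotheses with the smallest p-values.
    return sorted(order[:k_max])

# With five p-values at q = 0.05, the four smallest fall under the
# step-up line, so hypotheses 0-3 are rejected.
print(bh_procedure([0.001, 0.01, 0.02, 0.04, 0.5]))  # [0, 1, 2, 3]
```

The paper's contribution is upstream of this step: constructing valid scalar test statistics from multimodal, multidimensional observations so that a rank-ordering test of this form applies.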
A Bayesian mixture model for differential gene expression
 Journal of the Royal Statistical Society C, 2005
Abstract

Cited by 29 (4 self)
We propose model-based inference for differential gene expression, using a nonparametric Bayesian probability model for the distribution of gene intensities under different conditions. The probability model is essentially a mixture of normals. The resulting inference is similar to the empirical Bayes approach proposed in Efron et al. (2001). The use of fully model-based inference mitigates some of the necessary limitations of the empirical Bayes method. However, the increased generality of our method comes at a price: computation is not as straightforward as in the empirical Bayes scheme. We argue, though, that inference is no more difficult than posterior simulation in traditional nonparametric mixture-of-normals models. We illustrate the proposed method in two examples, including a simulation study and a microarray experiment to screen for genes with differential expression in colon cancer versus normal tissue (Alon et al., 1999).
An evaluation of thresholding techniques in fMRI analysis
2004
Abstract

Cited by 25 (10 self)
This paper reviews and compares individual voxel-wise thresholding methods for identifying active voxels in single-subject fMRI datasets. Different error rates are described which may be used to calibrate activation thresholds. We discuss methods that control each of the error rates at a pre-specified level α, including simple procedures that ignore spatial correlation among the test statistics as well as more elaborate ones that incorporate this correlation information. The operating characteristics of the methods are shown through a simulation study, indicating that the error rate used has an important impact on the sensitivity of the thresholding method, but that accounting for correlation has little impact. The simple procedures described therefore work well for thresholding most single-subject fMRI experiments and are recommended. The methods are illustrated with a real bilateral finger-tapping experiment.
Sample size for FDR control in microarray data analysis
 Bioinformatics, 2005
Abstract

Cited by 22 (0 self)
We consider identifying differentially expressed genes between two patient groups using microarray experiments. We propose a sample size calculation method for a specified number of true rejections while controlling the false discovery rate at a desired level. Input parameters for the sample size calculation include the allocation proportion in each group, the number of genes in each array, the number of differentially expressed genes, and the effect sizes among the differentially expressed genes. We obtain a closed-form sample size formula if the projected effect sizes are equal among the differentially expressed genes; otherwise, our method requires a numerical method to solve an equation. Simulation studies show that the calculated sample sizes are accurate in practical settings. The proposed method is demonstrated with a real study. Key words: Block compound symmetry, Family-wise error rate, Prognostic gene, True rejection, Two-sample t-test.
False discovery control with p-value weighting
2006
Abstract

Cited by 22 (2 self)
We present a method for multiple hypothesis testing that maintains control of the false discovery rate while incorporating prior information about the hypotheses. The prior information takes the form of p-value weights. If the assignment of weights is positively associated with the null hypotheses being false, the procedure improves power, except in cases where power is already near one. Even if the assignment of weights is poor, power is reduced only slightly, as long as the weights are not too large. We also provide a similar method for controlling false discovery exceedance.
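The weighting idea in this abstract can be sketched as the BH step-up test applied to reweighted p-values p_i / w_i, assuming positive weights that average to one (the function name and example values are illustrative, not from the paper):

```python
def weighted_bh(pvals, weights, q=0.05):
    """BH step-up procedure applied to weighted p-values p_i / w_i.

    Weights are assumed positive with mean one, which preserves
    FDR control at level q.
    """
    m = len(pvals)
    assert abs(sum(weights) / m - 1.0) < 1e-9, "weights must average to 1"
    wp = [p / w for p, w in zip(pvals, weights)]
    # Standard BH step-up applied to the weighted p-values.
    order = sorted(range(m), key=lambda i: wp[i])
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if wp[i] <= rank / m * q:
            k_max = rank
    return sorted(order[:k_max])

# Up-weighting a hypothesis believed to be non-null can rescue a
# borderline p-value: with uniform weights only index 1 is rejected,
# but doubling the weight on index 0 rejects it as well.
print(weighted_bh([0.03, 0.01, 0.2, 0.9], [1.0, 1.0, 1.0, 1.0]))  # [1]
print(weighted_bh([0.03, 0.01, 0.2, 0.9], [2.0, 1.0, 0.5, 0.5]))  # [0, 1]
```

The example illustrates the power gain the abstract describes when the weights are positively associated with the false nulls; a poor weight assignment merely shuffles the thresholds and costs little power.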
False discovery and false nondiscovery rates in single-step multiple testing procedures
 Annals of Statistics, 2006
Abstract

Cited by 19 (4 self)
Results on the false discovery rate (FDR) and the false nondiscovery rate (FNR) are developed for single-step multiple testing procedures. In addition to verifying desirable properties of FDR and FNR as measures of error rates, these results extend previously known results, providing further insight, particularly under dependence, into the notions of FDR and FNR and related measures. First, considering fixed configurations of true and false null hypotheses, inequalities are obtained that explain how an FDR- or FNR-controlling single-step procedure, such as a Bonferroni or Šidák procedure, can potentially be improved. Two families of procedures are then constructed, one that modifies the FDR-controlling and the other that modifies the FNR-controlling Šidák procedure. These are proved to control the FDR or FNR under independence less conservatively than the corresponding families that modify the FDR- or FNR-controlling Bonferroni procedure. Results of numerical investigations of the performance of the modified Šidák FDR procedure over its competitors are presented. Second, considering a mixture model in which different configurations of true and false null hypotheses are assumed to have certain probabilities, results are also derived that extend some of Storey's work to the dependence case.
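For context, the two single-step baselines this abstract starts from reject H_i whenever p_i falls below a fixed per-hypothesis cut-off; the Šidák cut-off is exact under independence and always slightly more liberal than Bonferroni. A sketch of the baselines only, not of the paper's modified procedures:

```python
def bonferroni_threshold(alpha, m):
    # Single-step Bonferroni: reject H_i when p_i <= alpha / m.
    # Controls the family-wise error rate under arbitrary dependence.
    return alpha / m

def sidak_threshold(alpha, m):
    # Single-step Sidak: reject H_i when p_i <= 1 - (1 - alpha)^(1/m).
    # Exact FWER control under independence; slightly less conservative.
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

# For alpha = 0.05 and m = 10 hypotheses, Sidak's cut-off
# (~0.005116) sits just above Bonferroni's (0.005).
print(bonferroni_threshold(0.05, 10))
print(sidak_threshold(0.05, 10))
```

The gap between the two cut-offs is exactly the slack the paper exploits: since Šidák is already less conservative than Bonferroni under independence, procedures built by modifying Šidák can control the FDR or FNR less conservatively than their Bonferroni-based counterparts.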
Exceedance Control of the False Discovery Proportion
Abstract

Cited by 18 (1 self)
Multiple testing methods that control the False Discovery Rate (FDR, the expected proportion of falsely rejected null hypotheses among all rejections) have received much attention. It can be valuable instead to control not the mean of this false discovery proportion (FDP) but the probability that the FDP exceeds a specified bound. In this paper, we construct a general class of methods for exceedance control of the FDP based on inverting tests of uniformity. The method also produces a confidence envelope for the FDP as a function of the rejection threshold. We discuss how …
Oracle and adaptive compound decision rules for false discovery rate control
 Journal of the American Statistical Association, 2007
Abstract

Cited by 18 (4 self)
We develop a compound decision theory framework for multiple-testing problems and derive an oracle rule based on z-values that minimizes the false nondiscovery rate (FNR) subject to a constraint on the false discovery rate (FDR). We show that many commonly used multiple-testing procedures, which are p-value-based, are inefficient, and propose an adaptive procedure based on z-values. The z-value-based adaptive procedure asymptotically attains the performance of the z-value oracle procedure and is more efficient than the conventional p-value-based methods. We investigate the numerical performance of the adaptive procedure using both simulated and real data. In particular, we demonstrate our method in an analysis of microarray data from a human immunodeficiency virus study that involves testing a large number of hypotheses simultaneously.
On the false discovery rate and an asymptotically optimal rejection curve
Abstract

Cited by 16 (2 self)
In this paper we introduce and investigate a new rejection curve for asymptotic control of the false discovery rate (FDR) in multiple hypotheses testing problems. We first give a heuristic motivation for this new curve and propose some procedures related to it. Then we introduce a set of possible assumptions and give a unifying short proof of FDR control for procedures based on Simes' critical values, whereby certain types of dependency are allowed. This methodology of proof is then applied to other fixed rejection curves, including the proposed new curve. Among other things, we investigate the problem of finding least favorable parameter configurations such that the FDR becomes largest. We then derive a series of results concerning asymptotic FDR control for procedures based on the new curve and discuss several example procedures in more detail. A main result is an asymptotic optimality statement for various procedures based on the new curve within the class of fixed rejection curves. Finally, we briefly discuss strict FDR control for a finite number of hypotheses.
Estimation and confidence sets for sparse normal mixtures
2005
Abstract

Cited by 16 (9 self)
For high-dimensional statistical models, researchers have begun to focus on situations that can be described as having relatively few moderately large coefficients. Such situations lead to some very subtle statistical problems. In particular, Ingster, and Donoho and Jin, have considered a sparse normal means testing problem, in which they described the precise demarcation, or detection boundary. Meinshausen and Rice have shown that it is even possible to estimate consistently the fraction of nonzero coordinates on a subset of the detectable region, but leave unanswered the question of exactly which parts of the detectable region allow consistent estimation. In the present paper we develop a new approach for estimating the fraction of nonzero means in problems where the nonzero means are moderately large. We show that the detection region described by Ingster and by Donoho and Jin turns out to be the region where it is possible to consistently estimate the expected fraction of nonzero coordinates. This theory is developed further, and minimax rates of convergence are derived. A procedure is constructed that attains the optimal rate of convergence in this setting. Furthermore, the procedure also provides an honest lower bound for confidence intervals while minimizing the expected length of such an interval. Simulations are used to enable comparison with the work of Meinshausen and Rice, where a procedure is given but rates of convergence are not discussed. Extensions to more general Gaussian mixture models are also given.