Results 11–20 of 157
Truncated product method for combining P-values
 Genetic Epidemiol, 2002
Abstract

Cited by 11 (1 self)
We present a new procedure for combining p-values from a set of L hypothesis tests. Our procedure is to take the product of only those p-values less than some specified cutoff value and to evaluate the probability of such a product, or a smaller value, under the overall hypothesis that all L hypotheses are true. We give an explicit formulation for this p-value, and find by simulation that it can provide high power for detecting departures from the overall hypothesis. We extend the procedure to situations when tests are not independent. We present both real and simulated examples where the method is especially useful. These include exploratory analyses when L is large, such as genome-wide scans for marker-trait associations, and meta-analytic applications that combine information from published studies, with potential for dealing with the “publication bias” phenomenon. Once the overall hypothesis is rejected, an adjustment procedure with strong familywise error protection is available for smaller subsets of hypotheses, down to the individual tests. Key words: meta-analysis, multiple tests, genome-wide scans, microarrays, Bonferroni.
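The statistic itself is easy to compute; the paper's explicit closed-form p-value is not reproduced in this abstract, so the minimal sketch below (function names are ours, not the paper's) evaluates it by Monte Carlo under the overall null of L independent, uniformly distributed p-values:

```python
import numpy as np

def truncated_product_stat(pvalues, tau=0.05):
    """W = product of the p-values that fall below the cutoff tau
    (W = 1 when no p-value is below tau)."""
    p = np.asarray(pvalues, dtype=float)
    below = p[p < tau]
    return below.prod() if below.size else 1.0

def truncated_product_pvalue(pvalues, tau=0.05, n_sim=100_000, seed=0):
    """Monte Carlo estimate of P(W' <= W) under the overall null that all
    p-values are independent Uniform(0, 1); the paper instead gives an
    explicit formula and an extension to dependent tests."""
    rng = np.random.default_rng(seed)
    w_obs = truncated_product_stat(pvalues, tau)
    u = rng.uniform(size=(n_sim, len(pvalues)))
    # p-values at or above tau contribute a factor of 1 to the product
    w_sim = np.where(u < tau, u, 1.0).prod(axis=1)
    return (w_sim <= w_obs).mean()
```

Unlike Fisher's method, p-values above the cutoff cannot dilute a few strong signals, which is what gives the procedure power when L is large.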
Bayesian Maximum a Posteriori Multiple Testing Procedure
 Sankhya, 2006
Abstract

Cited by 10 (4 self)
We consider a Bayesian approach to multiple hypothesis testing. A hierarchical prior model is based on imposing a prior distribution π(k) on the number of hypotheses arising from alternatives (false nulls). We then apply the maximum a posteriori (MAP) rule to find the most likely configuration of null and alternative hypotheses. The resulting MAP procedure and its closely related step-up and step-down versions compare ordered Bayes factors of individual hypotheses with a sequence of critical values depending on the prior. We discuss the relations between the proposed MAP procedure and the existing frequentist and Bayesian counterparts. A more detailed analysis is given for the normal data, where we show, in particular, that by choosing a specific π(k), the MAP procedure can mimic several known familywise error (FWE) and false discovery rate (FDR) controlling procedures. The performance of MAP procedures is illustrated on a simulated example. AMS (2000) subject classification. Primary 62F15, 62F03.
Interdependency of brassinosteroid and auxin signaling in Arabidopsis
 PLoS Biol, 2004
Abstract

Cited by 10 (0 self)
How growth regulators provoke context-specific signals is a fundamental question in developmental biology. In plants, both auxin and brassinosteroids (BRs) promote cell expansion, and it was thought that they activated this process through independent mechanisms. In this work, we describe a shared auxin:BR pathway required for seedling growth. Genetic, physiological, and genomic analyses demonstrate that response from one pathway requires the function of the other, and that this interdependence does not act at the level of hormone biosynthetic control. Increased auxin levels saturate the BR-stimulated growth response and greatly reduce BR effects on gene expression. Integration of these two pathways is downstream from BES1 and Aux/IAA proteins, the last known regulatory factors acting downstream of each hormone, and is likely to occur directly on the promoters of auxin:BR target genes. We have developed a new approach to identify potential regulatory elements acting in each hormone pathway, as well as in the shared auxin:BR pathway. We show that one element highly overrepresented in the promoters of auxin- and BR-induced genes is responsive to both hormones and requires BR biosynthesis for normal expression. This work fundamentally alters our view of BR and auxin signaling and describes a powerful new approach to identify regulatory elements required for response to specific stimuli.
Nonparametric Hypothesis Testing for a Spatial Signal
, 2001
Abstract

Cited by 10 (0 self)
In this article, we propose a procedure called Enhanced FDR (EFDR), which is based on controlling the false discovery rate (FDR) and a concept known as generalized degrees of freedom (GDF). EFDR differs from the standard FDR procedure in that it reduces the number of hypotheses tested. This is done in two ways: first, the model is represented more parsimoniously in the wavelet domain, and second, an optimal selection of hypotheses is made using a criterion based on generalized degrees of freedom. Not only does the EFDR procedure tell us whether a spatial signal is present, but if a signal is deemed present, it can also indicate its location and magnitude. We examine EFDR's operating characteristics, and in simulations we show that it outperforms the standard FDR and conventional testing procedures. Finally, the EFDR procedure is applied to an air-temperature data set generated from the Climate System Model (CSM) of the National Center for Atmospheric Research (NCAR), where air temperatures in the 1980s are compared to those in the 1990s. We conclude that temperature change has occurred between the two decades, mostly warming in the central part of the USA and in coastal regions of South America at about 20°S. Key words: Denoising, false discovery rate, generalized degrees of freedom, pixel, power, signal detection, wavelets
Step-up procedures controlling generalized FWER and generalized FDR
, 2007
Abstract

Cited by 8 (1 self)
In many applications of multiple hypothesis testing where more than one false rejection can be tolerated, procedures controlling error rates measuring at least k false rejections, instead of at least one, for some fixed k ≥ 1 can potentially increase the ability of a procedure to detect false null hypotheses. The k-FWER, a generalized version of the usual familywise error rate (FWER), is such an error rate that has recently been introduced in the literature, and procedures controlling it have been proposed. A further generalization of a result on the k-FWER is provided in this article. In addition, an alternative and less conservative notion of error rate, the k-FDR, is introduced in the same spirit as the k-FWER by generalizing the usual false discovery rate (FDR). A k-FWER procedure is constructed given any set of increasing constants by utilizing the kth-order joint null distributions of the p-values, without assuming any specific form of dependence among the p-values. Procedures controlling the k-FDR are also developed by using the kth-order joint null distributions of the p-values, first assuming that the sets of null and non-null p-values are mutually independent or jointly positively dependent in the sense of being multivariate totally positive of order two (MTP2), and then discarding that assumption about the overall dependence among the p-values.
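For intuition about k-FWER control, the simplest member of this family is the single-step generalized Bonferroni rule of Lehmann and Romano, which rejects H_i whenever p_i ≤ kα/m; the article's step-up procedures built from kth-order joint null distributions are more powerful and are not attempted here. A minimal illustration (function name is ours):

```python
def generalized_bonferroni(pvalues, k=1, alpha=0.05):
    """Single-step k-FWER control: reject H_i when p_i <= k * alpha / m.
    With k = 1 this reduces to the ordinary Bonferroni correction;
    larger k tolerates up to k - 1 false rejections and so rejects more."""
    m = len(pvalues)
    cutoff = k * alpha / m
    return [i for i, p in enumerate(pvalues) if p <= cutoff]
```

Raising k from 1 to 2 doubles the per-test cutoff, which is exactly the sense in which tolerating more false rejections buys power.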
Feature Significance for Multivariate Kernel Density Estimation
Abstract

Cited by 8 (1 self)
Multivariate kernel density estimation provides information about structure in data. Feature significance is a technique for deciding whether features – such as local extrema – are statistically significant. This paper proposes a framework for feature significance in d-dimensional data which combines kernel density derivative estimators and hypothesis tests for modal regions. For the gradient and curvature estimators, distributional properties are given, and pointwise test statistics are derived. The hypothesis tests extend the two-dimensional feature significance ideas of Godtliebsen et al. (2002). The theoretical framework is complemented by novel visualisation for three-dimensional data. Applications to real data sets show that tests based on the kernel curvature estimators perform well in identifying modal regions. These results can be enhanced by corresponding tests with kernel gradient estimators.
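To make the underlying objects concrete, the sketch below (one-dimensional only, for brevity; the paper works in d dimensions and adds significance tests for the detected features, which are not attempted here) estimates a Gaussian kernel density and locates candidate modes as sign changes of the estimated derivative:

```python
import numpy as np

def gaussian_kde_1d(x, data, h):
    """Kernel density estimate f_hat(x) with a Gaussian kernel and bandwidth h."""
    z = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(data) * h * np.sqrt(2.0 * np.pi))

def kde_modes(data, h, n_grid=512):
    """Candidate modal locations: grid points where the estimated density
    derivative changes sign from positive to non-positive."""
    grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, n_grid)
    f = gaussian_kde_1d(grid, data, h)
    df = np.gradient(f, grid)
    return grid[:-1][(df[:-1] > 0) & (df[1:] <= 0)]
```

The bandwidth h governs which bumps survive smoothing; the paper's contribution is precisely a test for which of these candidate modes are statistically significant rather than sampling noise.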
Robustness of multiple testing procedures against dependence
 The Annals of Statistics, 2009
Abstract

Cited by 8 (1 self)
An important aspect of multiple hypothesis testing is controlling the significance level, or the level of Type I error. When the test statistics are not independent it can be particularly challenging to deal with this problem, without resorting to very conservative procedures. In this paper we show that, in the context of contemporary multiple testing problems, where the number of tests is often very large, the difficulties caused by dependence are less serious than in classical cases. This is particularly true when the null distributions of test statistics are relatively light-tailed, for example, when they can be based on Normal or Student’s t approximations. There, if the test statistics can fairly be viewed as being generated by a linear process, an analysis founded on the incorrect assumption of independence is asymptotically correct as the number of hypotheses diverges. In particular, the point process representing the null distribution of the indices at which statistically significant test results occur is approximately Poisson, just as in the case of independence. The Poisson process also has the same mean as in the independence case, and of course exhibits no clustering of false discoveries. However, this result can fail if the null distributions are particularly heavy-tailed. There, clusters of statistically significant results can occur, even when the null hypothesis is correct. We give an intuitive explanation for these disparate properties in light- and heavy-tailed cases, and provide rigorous theory underpinning the intuition.
Gonzalez-Lima F: Energy Hypometabolism in Posterior Cingulate Cortex of Alzheimer’s Patients: Superficial Laminar Cytochrome Oxidase Associated with Disease Duration
 Journal of Neuroscience
Abstract

Cited by 7 (0 self)
Among brain regions affected in Alzheimer’s disease (AD), the posterior cingulate shows the earliest and largest decrement in energy metabolism. Positron emission tomography (PET) studies have shown that these decrements appear before the onset of memory deficits or other symptoms in persons at genetic risk for AD. This study compares in vivo imaging results and in situ postmortem analyses by examining the posterior cingulate (area 23) in 15 AD patients and 13 age-matched nondemented controls using quantitative cytochrome oxidase histochemistry as an intracellular measure of oxidative energy metabolic capacity. Each of the six layers of the posterior cingulate demonstrated a decline in cytochrome oxidase activity in AD relative to controls, whereas adjacent motor cortex showed no significant differences. This decrement did not appear to be mainly secondary to nonspecific decrement in mitochondrial enzymes …
Controlling error in multiple comparisons, with special attention to the National Assessment of Educational Progress
 Journal of Educational and Behavioral Statistics, 1999
Abstract

Cited by 7 (0 self)
Three alternative procedures to adjust significance levels for multiplicity are the traditional Bonferroni technique, a sequential Bonferroni technique developed by Hochberg (1988), and a sequential approach for controlling the false discovery rate proposed by Benjamini and Hochberg (1995). These procedures are illustrated and compared using examples from the National Assessment of Educational Progress (NAEP). A prominent advantage of the Benjamini and Hochberg (BH) procedure, as demonstrated in these examples, is the greater invariance of statistical significance for given comparisons over alternative family sizes. Simulation studies show that all three procedures maintain a false discovery rate bounded above, often grossly, by α (or α/2). For both uncorrelated and pairwise families of comparisons, the BH technique is shown to have greater power than the Hochberg or Bonferroni procedures, and its power remains relatively stable as the number of comparisons becomes large, giving it an increasing advantage when many comparisons are involved. We recommend that results from NAEP State Assessments be reported using the BH technique rather than the Bonferroni procedure. Two questions often asked about each of a set of observed comparisons are: (a) should we be confident about the direction or the sign of the corresponding underlying population comparison, and (b) for what interval of values should we be confident that it contains the value for the population comparison?
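The BH step-up rule compared above is short to state: with m ordered p-values p_(1) ≤ … ≤ p_(m), find the largest k such that p_(k) ≤ (k/m)α and reject the k hypotheses with the smallest p-values. A minimal sketch (illustrative only, not the NAEP analysis code):

```python
import numpy as np

def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg (1995) step-up procedure; controls the FDR at
    level alpha for independent (or positively dependent) test statistics.
    Returns the indices of the rejected hypotheses."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= np.arange(1, m + 1) / m * alpha
    if not below.any():
        return []
    k = np.nonzero(below)[0].max() + 1  # largest k with p_(k) <= (k/m) * alpha
    return sorted(order[:k].tolist())
```

Because the cutoff (k/m)α grows with k, a comparison can stay significant when the family grows, which is the "invariance over alternative family sizes" the abstract highlights relative to Bonferroni's fixed α/m.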
Statistical Inference, The Bootstrap, And Neural Network Modeling With Application To Foreign Exchange Rates
 IEEE Trans. on Neural Networks, 2000
Abstract

Cited by 6 (0 self)
In this paper we propose tests for individual and joint irrelevance of network inputs. Such tests can be used to determine whether an input or group of inputs "belong" in a particular model, thus permitting valid statistical inference based on estimated feedforward neural network models. The approaches employ well-known statistical resampling techniques. We conduct a small Monte Carlo experiment showing that our tests have reasonable level and power behavior, and we apply our methods to examine whether there are predictable regularities in foreign exchange rates. We find that exchange rates do appear to contain information that is exploitable for enhanced point prediction, but the nature of the predictive relations evolves through time.
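The resampling idea can be sketched in miniature. The code below is an assumption-laden stand-in, not the paper's method: a linear least-squares fit replaces the feedforward network, and an input is flagged relevant when the 95% bootstrap interval for its coefficient excludes zero (function name and interface are ours):

```python
import numpy as np

def bootstrap_input_relevance(X, y, j, n_boot=500, seed=0):
    """Bootstrap check of whether input column j 'belongs' in a model.
    Resample (x_i, y_i) pairs with replacement, refit by least squares,
    and collect the bootstrap distribution of input j's coefficient."""
    rng = np.random.default_rng(seed)
    n = len(y)
    coefs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        coefs[b] = beta[j]
    lo, hi = np.percentile(coefs, [2.5, 97.5])
    return not (lo <= 0.0 <= hi)  # True -> input j appears relevant
```

The paper's tests work on the fitted network's sensitivity to each input rather than on a coefficient, but the resample-refit-compare loop is the same shape.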