Results 1–10 of 14
An exploration of aspects of Bayesian multiple testing
Journal of Statistical Planning and Inference, 2005
"... There has been increased interest of late in the Bayesian approach to multiple testing (often called the multiple comparisons problem), motivated by the need to analyze DNA microarray data in which it is desired to learn which of potentially several thousand genes are activated by a particular stimu ..."
Abstract

Cited by 31 (6 self)
There has been increased interest of late in the Bayesian approach to multiple testing (often called the multiple comparisons problem), motivated by the need to analyze DNA microarray data in which it is desired to learn which of potentially several thousand genes are activated by a particular stimulus. We study the issue of prior specification for such multiple tests, computation of key posterior quantities, and useful ways to display these quantities. A decision-theoretic approach is also considered.
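The two-groups view underlying such Bayesian multiple testing can be sketched with a toy computation. Everything specific below, the N(0,1) null, the N(0, σ²) alternative, and the prior null mass p0, is an illustrative assumption, not the model studied in the paper:

```python
import math

def posterior_null_prob(z, p0=0.9, sigma_alt=3.0):
    """Posterior probability that a z-statistic came from the null in a
    simple two-groups normal mixture:
      null:        z ~ N(0, 1)            with prior mass p0
      alternative: z ~ N(0, sigma_alt^2)  with prior mass 1 - p0
    (toy model for illustration only)."""
    f0 = math.exp(-z * z / 2.0) / math.sqrt(2.0 * math.pi)
    s2 = sigma_alt ** 2
    f1 = math.exp(-z * z / (2.0 * s2)) / math.sqrt(2.0 * math.pi * s2)
    return p0 * f0 / (p0 * f0 + (1.0 - p0) * f1)
```

For a statistic near zero the posterior strongly favors the null, while an extreme statistic drives it toward zero; such per-hypothesis posterior probabilities are exactly the kind of quantity one would compute and display across thousands of genes.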
Bayesian Maximum a Posteriori Multiple Testing Procedure
Sankhya, 2006
"... We consider a Bayesian approach to multiple hypothesis testing. A hierarchical prior model is based on imposing a prior distribution π(k) on the number of hypotheses arising from alternatives (false nulls). We then apply the maximum a posteriori (MAP) rule to find the most likely configuration of nu ..."
Abstract

Cited by 10 (4 self)
We consider a Bayesian approach to multiple hypothesis testing. A hierarchical prior model is based on imposing a prior distribution π(k) on the number of hypotheses arising from alternatives (false nulls). We then apply the maximum a posteriori (MAP) rule to find the most likely configuration of null and alternative hypotheses. The resulting MAP procedure and its closely related step-up and step-down versions compare ordered Bayes factors of individual hypotheses with a sequence of critical values depending on the prior. We discuss the relations between the proposed MAP procedure and the existing frequentist and Bayesian counterparts. A more detailed analysis is given for normal data, where we show, in particular, that by choosing a specific π(k), the MAP procedure can mimic several known familywise error (FWE) and false discovery rate (FDR) controlling procedures. The performance of MAP procedures is illustrated on a simulated example. AMS (2000) subject classification: Primary 62F15, 62F03.
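The step-down comparison of ordered Bayes factors against a critical sequence can be sketched generically. The critical values below are placeholder arguments, not the prior-dependent sequence derived from π(k) in the paper:

```python
def map_style_stepdown(bayes_factors, critical):
    """Generic step-down selection: sort per-hypothesis Bayes factors
    (evidence for the alternative) in decreasing order and reject
    hypotheses while they exceed the supplied critical sequence.
    Returns the indices of rejected (declared non-null) hypotheses."""
    order = sorted(range(len(bayes_factors)),
                   key=lambda i: bayes_factors[i], reverse=True)
    rejected = []
    for rank, i in enumerate(order):
        if bayes_factors[i] > critical[rank]:
            rejected.append(i)
        else:
            break  # step-down: stop at the first non-rejection
    return rejected
```

A step-up variant would instead scan from the smallest ordered Bayes factor upward and reject everything above the first exceedance; the prior enters only through the critical sequence.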
Bayesian selection and clustering of polymorphisms in functionally related genes
J. Am. Statist. Assoc., 2006
"... In epidemiologic studies, there is often interest in assessing the relationship between polymorphisms in functionallyrelated genes and a health outcome. For each candidate gene, single nucleotide polymorphism (SNP) data are collected at a number of locations, resulting in a large number of possi ..."
Abstract

Cited by 9 (4 self)
In epidemiologic studies, there is often interest in assessing the relationship between polymorphisms in functionally related genes and a health outcome. For each candidate gene, single nucleotide polymorphism (SNP) data are collected at a number of locations, resulting in a large number of possible genotypes. Because analyses that include all the SNPs can be unstable, dimensionality is typically reduced by conducting single-SNP analyses or attempting to identify haplotypes. This article proposes an alternative Bayesian approach for reducing dimensionality. A multilevel Dirichlet process prior is used for the distribution of the SNP-specific regression coefficients within genes, incorporating a variable selection-type mixture structure in the base measure to allow SNPs with no effect. This structure allows simultaneous selection of important SNPs and clustering of SNPs having similar impact on the health outcome. The methods are illustrated using data from a study
Nonparametric Bayes applications to biostatistics, in Bayesian Nonparametrics: Principles and Practice, 2010
"... Biomedical research has clearly evolved at a dramatic rate in the past decade, with improvements in technology leading to a fundamental shift in the way in which data are collected and analyzed. Before this paradigm shift, studies were most commonly designed to be simple and to focus on relationship ..."
Abstract

Cited by 9 (0 self)
Biomedical research has clearly evolved at a dramatic rate in the past decade, with improvements in technology leading to a fundamental shift in the way in which data are collected and analyzed. Before this paradigm shift, studies were most commonly designed to be simple and to focus on relationships among a few variables of primary interest. For example, in
Bayesian data analysis
2009
"... Bayesian methods have garnered huge interest in cognitive science as an approach to models of cognition and perception. On the other hand, Bayesian methods for data analysis have not yet made much headway in cognitive science against the institutionalized inertia of 20th century null hypothesis sign ..."
Abstract

Cited by 9 (5 self)
Bayesian methods have garnered huge interest in cognitive science as an approach to models of cognition and perception. On the other hand, Bayesian methods for data analysis have not yet made much headway in cognitive science against the institutionalized inertia of 20th century null hypothesis significance testing (NHST). Ironically, specific Bayesian models of cognition and perception may not long endure the ravages of empirical verification, but generic Bayesian methods for data analysis will eventually dominate. It is time that Bayesian data analysis became the norm for empirical methods in cognitive science. This article reviews a fatal flaw of NHST and introduces the reader to some benefits of Bayesian data analysis. The article presents illustrative examples of multiple comparisons in Bayesian ANOVA and Bayesian approaches to statistical power.
Bayesian Analysis of Factorial Experiments By Mixture Modelling
2000
"... this paper we try our hands at it. One version of the classical theory of factorial experiments, going back to Fisher and further developed by Kempthorne (1955), completely avoids distributional assumptions, assuming only additivity, and uses randomisation to derive the standard tests of hypotheses ..."
Abstract

Cited by 5 (1 self)
In this paper we try our hands at it. One version of the classical theory of factorial experiments, going back to Fisher and further developed by Kempthorne (1955), completely avoids distributional assumptions, assuming only additivity, and uses randomisation to derive the standard tests of hypotheses about treatment effects. Here, we are interested in the more familiar classical approach via linear modelling and normal distribution theory. The corresponding Bayesian analysis has been developed mainly in the pioneering works of Box & Tiao (1973) and Lindley & Smith (1972). Box & Tiao (1973, Chapter 6) discuss Bayesian analysis of cross-classified designs, including fixed, random and mixed effects models. They point out that in a Bayesian approach the appropriate inference procedure for fixed and random effects "depends upon the nature of the prior distribution used to represent the behavior of the factors". They also show (Chapter 7) that shrinkage estimates of specific effects may result when a random effects model is assumed. Lindley & Smith (1972) use a hierarchically structured linear model built on multivariate normal components (special cases of the model are considered by Lindley, 1972 and Smith, 1973), with the focus on estimation of treatment effects. These are authoritative and attractive approaches, albeit with modest compromises to the Bayesian paradigm, in respect of the estimation of the variance components, necessitated by the computational limitations of the time. Nevertheless, the inference is almost entirely estimative: questions about the indistinguishability of factor levels, or more general hypotheses about contrasts, are answered indirectly through their joint posterior distribution, e.g. by checking whether the hypothesis falls in a highest posterior...
Adaptive evolution of conserved noncoding elements in mammals. PLoS Genet. 3: e147, 2007. doi:10.1371/journal.pgen.0030147
"... Conserved noncoding elements (CNCs) are an abundant feature of vertebrate genomes. Some CNCs have been shown to act as cisregulatory modules, but the function of most CNCs remains unclear. To study the evolution of CNCs, we have developed a statistical method called the ‘‘shared rates test’ ’ to id ..."
Abstract

Cited by 5 (0 self)
Conserved noncoding elements (CNCs) are an abundant feature of vertebrate genomes. Some CNCs have been shown to act as cis-regulatory modules, but the function of most CNCs remains unclear. To study the evolution of CNCs, we have developed a statistical method called the "shared rates test" to identify CNCs that show significant variation in substitution rates across branches of a phylogenetic tree. We report an application of this method to alignments of 98,910 CNCs from the human, chimpanzee, dog, mouse, and rat genomes. We find that ~68% of CNCs evolve according to a null model where, for each CNC, a single parameter models the level of constraint acting throughout the phylogeny linking these five species. The remaining ~32% of CNCs show departures from the basic model, including speedups and slowdowns on particular branches and occasionally multiple rate changes on different branches. We find that a subset of the significant CNCs have evolved significantly faster than the local neutral rate on a particular branch, providing strong evidence for adaptive evolution in these CNCs. The distribution of these signals on the phylogeny suggests that adaptive evolution of CNCs occurs in occasional short bursts of evolution. Our analyses suggest a large set of promising targets for future functional studies of adaptation. Citation: Kim SY, Pritchard JK (2007) Adaptive evolution of conserved noncoding elements in mammals. PLoS Genet 3(9): e147. doi:10.1371/journal.pgen.0030147
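The idea of testing a single shared rate against branch-specific rates can be illustrated with a deliberately simplified Poisson likelihood-ratio version. The paper's shared rates test uses full phylogenetic substitution models; the per-branch substitution counts and branch lengths here are hypothetical inputs:

```python
import math

def shared_rate_lrt(counts, lengths):
    """Toy likelihood-ratio statistic in the spirit of a shared-rates test:
    model the substitution count on each branch as Poisson(rate * length)
    and compare one shared rate (null) against free per-branch rates
    (alternative). Under the null the statistic is approximately
    chi-square with len(counts) - 1 degrees of freedom."""
    def loglik(n, mu):
        # Poisson log-likelihood, dropping log(n!) which cancels in the ratio
        if mu > 0:
            return n * math.log(mu) - mu
        return 0.0 if n == 0 else float("-inf")
    shared = sum(counts) / sum(lengths)  # MLE of the single shared rate
    ll0 = sum(loglik(n, shared * t) for n, t in zip(counts, lengths))
    ll1 = sum(loglik(n, n) for n in counts)  # per-branch MLE gives mu = n
    return 2.0 * (ll1 - ll0)
```

With equal observed rates on every branch the statistic is zero; strongly unequal rates push it far into the chi-square tail, flagging the element for branch-specific speedups or slowdowns.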
Bayesian Isotonic Regression for Discrete Outcomes
"... This article proposes a semiparametric Bayesian approach for inference on an unknown isotonic regression function, f(x), characterizing the relationship between a continuous predictor, X, and a response variable, Y, adjusting for covariates, Z. A novel prior formulation is used, which avoids parame ..."
Abstract

Cited by 3 (0 self)
This article proposes a semiparametric Bayesian approach for inference on an unknown isotonic regression function, f(x), characterizing the relationship between a continuous predictor, X, and a response variable, Y, adjusting for covariates, Z. A novel prior formulation is used, which avoids parametric assumptions on f(x), while enforcing the nondecreasing constraint and assigning positive prior probability to the null hypothesis of no association between X and Y conditional on Z. Through the use of carefully tailored hyperprior distributions, we allow for borrowing of information across different regions of X in estimating f(x) and in assessing hypotheses about local increases in the function. Due to conjugacy properties, posterior computation is straightforward in a variety of settings, including log-linear models for Poisson data and logistic regression for binary outcomes. The methods are illustrated using a series of simulated data examples.
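The nondecreasing constraint at the heart of isotonic regression can be illustrated with the classical pool-adjacent-violators algorithm, a frequentist least-squares projection rather than the article's Bayesian prior formulation:

```python
def pava(y, w=None):
    """Pool-adjacent-violators: least-squares fit of a non-decreasing
    sequence to y with optional weights w. The article enforces the same
    monotonicity constraint through a prior instead of by projection."""
    if w is None:
        w = [1.0] * len(y)
    blocks = []  # each block: [fitted level, total weight, count of points]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # merge adjacent blocks while the monotone constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            l2, w2, c2 = blocks.pop()
            l1, w1, c1 = blocks.pop()
            blocks.append([(l1 * w1 + l2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
    fit = []
    for level, _, count in blocks:
        fit.extend([level] * count)
    return fit
```

Any local decrease in the input is pooled into a weighted average, so the output is always nondecreasing and equals the input wherever the input already satisfies the constraint.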
Analysis and algorithms design for the partition of large-scale adaptive ...
2007
"... Analysis and algorithms design for the partition of largescale ..."
Abstract

Cited by 2 (0 self)
Analysis and algorithms design for the partition of large-scale ...