Results 1 – 5 of 5
Sparse graphical models for exploring gene expression data
Journal of Multivariate Analysis, 2004
Cited by 132 (22 self)
Reference analysis
In Handbook of Statistics 25, 2005
Abstract

Cited by 13 (2 self)
This chapter describes reference analysis, a method to produce Bayesian inferential statements which depend only on the assumed model and the available data. Statistical information theory is used to define the reference prior function as a mathematical description of the situation where data would best dominate prior knowledge about the quantity of interest. Reference priors are not descriptions of personal beliefs; they are proposed as formal consensus prior functions to be used as standards for scientific communication. Reference posteriors are obtained by formal use of Bayes' theorem with a reference prior. Reference prediction is achieved by integration with respect to a reference posterior. Reference decisions are derived by minimizing a reference posterior expected loss. An information-theory-based loss function, the intrinsic discrepancy, may be used to derive reference procedures for conventional inference problems in scientific investigation, such as point estimation, region estimation, and hypothesis testing.
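The chain from reference prior to reference posterior can be sketched in the simplest case. For a single Bernoulli parameter the reference prior coincides with the Jeffreys prior Beta(1/2, 1/2), so the reference posterior follows in closed form. A minimal sketch, not from the chapter itself; the function names are invented for illustration:

```python
# Sketch (assumption: the one-parameter Bernoulli case, where the
# reference prior equals the Jeffreys prior Beta(1/2, 1/2)).
# The reference posterior after s successes in n trials is then
# Beta(1/2 + s, 1/2 + n - s), by formal use of Bayes' theorem.

def reference_posterior(successes: int, trials: int):
    """Return (alpha, beta) of the Beta reference posterior."""
    return 0.5 + successes, 0.5 + trials - successes

def beta_mean(alpha: float, beta: float) -> float:
    """Posterior mean, a natural reference estimate under squared loss."""
    return alpha / (alpha + beta)

a, b = reference_posterior(successes=7, trials=10)
print(a, b)             # 7.5 3.5
print(beta_mean(a, b))  # ~0.682
```

Note how the reference posterior mean 7.5/11 shrinks the raw frequency 0.7 slightly toward 1/2, reflecting the prior's minimal influence.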
Learning Essential Graph Markov Models from Data
First European Workshop on Probabilistic Graphical Models, 2002
Abstract

Cited by 4 (0 self)
In a model selection procedure where many models are to be compared, computational efficiency is critical. For acyclic digraph (ADG) Markov models (aka DAG models or Bayesian networks), each ADG Markov equivalence class can be represented by a unique chain graph, called an essential graph (EG). This parsimonious representation might be used to facilitate selection among ADG models. Because EGs combine features of decomposable graphs and ADGs, a scoring metric can be developed for EGs with categorical (multinomial) data. This metric may permit the characterization of local computations directly for EGs, which in turn would yield a learning procedure that does not require transformation to representative ADGs at each step for scoring purposes, nor is the scoring metric constrained by Markov equivalence.
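The equivalence classes that essential graphs represent can be made concrete via the classical Verma–Pearl criterion: two ADGs are Markov equivalent iff they have the same skeleton and the same v-structures. A minimal sketch of that check (helper names are invented; this is not the paper's EG scoring procedure):

```python
from itertools import combinations

def skeleton(edges):
    """Undirected version of a directed edge set {(parent, child), ...}."""
    return {frozenset(e) for e in edges}

def v_structures(edges):
    """Colliders a -> c <- b with a and b non-adjacent."""
    parents = {}
    for a, c in edges:
        parents.setdefault(c, set()).add(a)
    skel = skeleton(edges)
    vs = set()
    for c, ps in parents.items():
        for a, b in combinations(sorted(ps), 2):
            if frozenset((a, b)) not in skel:
                vs.add((frozenset((a, b)), c))
    return vs

def markov_equivalent(e1, e2):
    """Verma-Pearl criterion: same skeleton and same v-structures."""
    return skeleton(e1) == skeleton(e2) and v_structures(e1) == v_structures(e2)

chain = {("a", "b"), ("b", "c")}      # a -> b -> c
fork = {("b", "a"), ("b", "c")}       # a <- b -> c
collider = {("a", "c"), ("b", "c")}   # a -> c <- b

print(markov_equivalent(chain, fork))      # True: same class, same EG
print(markov_equivalent(chain, collider))  # False: collider is protected
```

The chain and the fork share one essential graph (the undirected path a — b — c), while the collider's arrows are compelled and survive in its EG; this is exactly the redundancy that scoring EGs directly would avoid.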
Objective Priors for the Bivariate Normal Model with Multivariate Generalizations
, 2006
Abstract

Cited by 3 (2 self)
Study of the bivariate normal distribution raises the full range of issues involving objective Bayesian inference, including the different types of objective priors (e.g., Jeffreys, invariant, reference, matching), the different modes of inference (e.g., Bayesian, frequentist, fiducial), and the criteria involved in deciding on optimal objective priors (e.g., ease of computation, frequentist performance, marginalization paradoxes). Summary recommendations as to optimal objective priors are made for a variety of inferences involving the bivariate normal distribution. In the course of the investigation, a variety of surprising results were found, including the availability of objective priors that yield exact frequentist inferences for many functions of the bivariate normal parameters, including the correlation coefficient. Several generalizations to the multivariate normal distribution are given. Key words: reference priors, matching priors, Jeffreys priors, right-Haar prior, fiducial inference, frequentist coverage, marginalization paradox, rejection sampling, constructive posterior distributions.
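The "frequentist coverage" criterion used to compare these priors can be illustrated with a small Monte Carlo check. A sketch only, not the paper's objective-Bayes intervals: it uses the classical Fisher z interval for the correlation as a stand-in, and the sample size and replication count are arbitrary choices:

```python
import math
import random

# Sketch (assumption: the Fisher z interval stands in for a posterior
# credible interval). Simulate bivariate normal data with known rho and
# count how often a nominal 95% interval for rho covers the truth.
random.seed(1)
rho, n, reps, z95 = 0.5, 50, 4000, 1.959963984540054
s = math.sqrt(1.0 - rho * rho)

def corr(xs, ys):
    """Sample (Pearson) correlation coefficient."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

hits = 0
for _ in range(reps):
    xs, ys = [], []
    for _ in range(n):
        z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
        xs.append(z1)
        ys.append(rho * z1 + s * z2)   # correlated pair with corr = rho
    z = math.atanh(corr(xs, ys))       # Fisher z-transform of sample corr
    half = z95 / math.sqrt(n - 3)
    lo, hi = math.tanh(z - half), math.tanh(z + half)
    hits += lo < rho < hi
coverage = hits / reps
print(coverage)  # typically close to the nominal 0.95
```

Intervals whose empirical coverage matches the nominal level across the parameter space are what the matching-prior criterion in the abstract formalizes, and what an "exact frequentist" objective prior achieves without asymptotics.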
BAYESIAN ANALYSIS OF CONTINGENCY TABLES WITH POSSIBLY ZERO-PROBABILITY CELLS
Abstract
In this paper we consider a Bayesian analysis of contingency tables allowing for the possibility that cells may have probability zero. In this sense we depart from traditional analyses, such as log-linear modeling, that implicitly assume a positivity constraint. This leads us to consider mixture models for contingency tables, where the components of the mixture, which we call model-instances, have distinct support due to the presence of zero-probability cells. We rely on ideas from polynomial algebra to identify the components of the mixture. We also provide some indications on how to assign prior weights to each instance of the model, as well as describing methods for constructing priors on the parameter space of each instance. We illustrate our methodology by considering the problem of testing for independence in a two-way contingency table using Bayes factors. We conclude the paper with a real application involving a 5 × 2 table with two structural zeros.
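For the positive-probability special case, the Bayes-factor test of independence that the paper's mixture construction generalizes has a classical closed form. A sketch, not the paper's method: uniform Dirichlet(1, ..., 1) priors are an assumption here, and multinomial coefficients cancel in the ratio:

```python
from math import lgamma, exp

def log_dm(counts, alpha=1.0):
    """Log Dirichlet-multinomial marginal likelihood, symmetric Dirichlet
    prior with concentration alpha per cell (multinomial coeff. omitted)."""
    a_tot = alpha * len(counts)
    n_tot = sum(counts)
    out = lgamma(a_tot) - lgamma(a_tot + n_tot)
    for c in counts:
        out += lgamma(alpha + c) - lgamma(alpha)
    return out

def bf_independence(table):
    """BF10: saturated multinomial vs. row-by-column independence."""
    cells = [c for row in table for c in row]
    rows = [sum(row) for row in table]
    cols = [sum(col) for col in zip(*table)]
    log_m1 = log_dm(cells)                # saturated model
    log_m0 = log_dm(rows) + log_dm(cols)  # independence factorizes over margins
    return exp(log_m1 - log_m0)

print(bf_independence([[30, 10], [10, 30]]))  # >> 1: strong association
print(bf_independence([[20, 20], [20, 20]]))  # < 1: favors independence
```

A structural zero breaks this computation, since a Dirichlet prior puts mass on every cell; handling such cells is exactly what motivates the paper's mixture of model-instances with distinct supports.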