Results 1 - 4 of 4
Cumulative distribution networks and the derivative-sum-product algorithm
Abstract

Cited by 12 (6 self)
We introduce a new type of graphical model called a ‘cumulative distribution network’ (CDN), which expresses a joint cumulative distribution as a product of local functions. Each local function can be viewed as providing evidence about possible orderings, or rankings, of variables. Interestingly, we find that the conditional independence properties of CDNs are quite different from those of other graphical models. We also describe a message-passing algorithm that efficiently computes conditional cumulative distributions. Due to the unique independence properties of the CDN, these messages do not in general have a one-to-one correspondence with messages exchanged in standard algorithms, such as belief propagation. We demonstrate the application of CDNs to structured ranking learning using a previously studied multiplayer gaming dataset.
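The core construction in this abstract, a joint CDF written as a product of local functions that are each themselves CDFs over their scope, can be sketched in a few lines. This is a minimal illustration only, assuming a three-variable chain and Gumbel's bivariate logistic CDF as the local function; neither choice comes from the paper itself.

```python
import math

def local_cdf(x, y, s=1.0):
    # Hypothetical local function: Gumbel's bivariate logistic CDF,
    # F(x, y) = 1 / (1 + exp(-x/s) + exp(-y/s)), a valid CDF on its scope.
    return 1.0 / (1.0 + math.exp(-x / s) + math.exp(-y / s))

def cdn_joint_cdf(x1, x2, x3):
    # CDN over the chain x1 - x2 - x3: the joint cumulative distribution
    # is the product of the local functions (the defining CDN property).
    return local_cdf(x1, x2) * local_cdf(x2, x3)

# The product is nondecreasing in each argument and tends to 1 as all
# arguments grow large, as a joint CDF must.
print(cdn_joint_cdf(0.0, 0.0, 0.0))    # 1/3 * 1/3 = 1/9 at the origin
print(cdn_joint_cdf(10.0, 10.0, 10.0))  # close to 1
```

Note that multiplying CDFs always yields a function that is 1 at +infinity in all arguments, which is why products of local CDFs are a natural parameterization here.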
Mixed Cumulative Distribution Networks
Abstract

Cited by 3 (1 self)
Directed acyclic graphs (DAGs) are a popular framework for expressing multivariate probability distributions. Acyclic directed mixed graphs (ADMGs) are generalizations of DAGs that can succinctly capture much richer sets of conditional independencies, and are especially useful for implicitly modeling the effects of latent variables. Unfortunately, there are currently no parameterizations of general ADMGs. In this paper, we apply recent work on cumulative distribution networks and copulas to propose one general construction for ADMG models. We consider a simple parameter estimation approach and report some encouraging experimental results. Reading off independence constraints from an ADMG can be done with a procedure essentially identical to d-separation (Pearl, 1988; Richardson and Spirtes, 2002). Given a graphical structure, the challenge is to provide a procedure for parameterizing models that correspond to the independence constraints of the graph, as illustrated below. Example 1: Bidirected edges correspond to some hidden common parent that has been marginalized. In the Gaussian case, this has an easy interpretation as constraints on the marginal covariance matrix of the remaining variables. Consider the two graphs below.
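The Gaussian interpretation mentioned in this abstract, a bidirected edge as the footprint of a marginalized common parent, can be checked numerically. The linear-Gaussian model below is an illustrative assumption, not the paper's construction: a latent h feeds two observed variables, and marginalizing h leaves a nonzero entry in their covariance.

```python
import random

random.seed(0)
n = 100_000

# Hypothetical linear-Gaussian model: latent common parent h with unit
# variance, observed children x1 = h + e1 and x2 = h + e2. Marginalizing
# h corresponds to drawing the bidirected edge x1 <-> x2 in the ADMG.
samples = []
for _ in range(n):
    h = random.gauss(0.0, 1.0)
    samples.append((h + random.gauss(0.0, 1.0), h + random.gauss(0.0, 1.0)))

# Sample marginal covariance. Analytically Cov(x1, x2) = Var(h) = 1,
# while Var(x1) = Var(x2) = 2: the latent parent shows up only as a
# constraint on the marginal covariance of the remaining variables.
m1 = sum(a for a, _ in samples) / n
m2 = sum(b for _, b in samples) / n
cov12 = sum((a - m1) * (b - m2) for a, b in samples) / n
print(round(cov12, 2))  # near 1
```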
Maximum-likelihood learning of cumulative distribution functions on graphs
 13th International Conference on Artificial Intelligence and Statistics, AISTATS
, 2010
Abstract

Cited by 2 (1 self)
For many applications, a probability model can be more easily expressed as a cumulative distribution function (CDF) than as a probability density or mass function (PDF/PMF). One advantage of CDF models is the simplicity of representing multivariate heavy-tailed distributions. Examples of fields that can benefit from the use of graphical models for CDFs include climatology and epidemiology, where data follow heavy-tailed distributions and exhibit spatial correlations, so that dependencies between model variables must be accounted for. However, in most cases the problem of learning from data consists of optimizing the log-likelihood function with respect to model parameters, where we are required to optimize a log-PDF/PMF and not a log-CDF. Given a CDF defined on a graph, we present a message-passing algorithm called the gradient-derivative-product (GDP) algorithm that allows us to learn the model in terms of the log-likelihood function, whereby messages correspond to local gradients of the likelihood with respect to model parameters. We demonstrate the GDP algorithm on real-world rainfall and H1N1 mortality data, and we show that the heavy-tailed multivariate distributions that arise in these problems can both be naturally parameterized and tractably estimated from data using our algorithm.
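The gap this abstract highlights, that the log-likelihood needs a PDF while the model specifies a CDF, comes down to differentiating the CDF once in each variable. The sketch below illustrates that step only, using a stand-in bivariate CDF (Gumbel's bivariate logistic, not the paper's model) and a finite-difference mixed partial rather than the GDP message-passing scheme.

```python
import math

def F(x, y, s=1.0):
    # Stand-in bivariate CDF with one parameter s (an assumption for
    # illustration): Gumbel's bivariate logistic distribution.
    return 1.0 / (1.0 + math.exp(-x / s) + math.exp(-y / s))

def density(x, y, s=1.0, h=1e-4):
    # The log-likelihood needs the PDF, i.e. the mixed partial derivative
    # of the CDF in both variables; approximate it with central
    # finite differences. GDP computes such derivatives (and their
    # parameter gradients) exactly via local messages on the graph.
    return (F(x + h, y + h, s) - F(x + h, y - h, s)
            - F(x - h, y + h, s) + F(x - h, y - h, s)) / (4.0 * h * h)

# The exact mixed partial at the origin with s = 1 is 2/27.
print(density(0.0, 0.0))
```

For a CDF that factors over a graph, expanding this mixed partial by the product rule is what makes a sum-product-style message-passing scheme possible instead of brute-force differentiation.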
Exact inference and learning of cumulative distribution functions on loopy graphs
 Neural Information Processing Systems
, 2010