### Citations

1789 | Factor graphs and the sum-product algorithm
- Kschischang, Frey, et al.
Citation Context ...s to trees and junction trees. The generalization is implemented as a message passing algorithm by [8, 10], named the derivative-sum-product algorithm. Although [8] represents CDNs using factor graphs [13], neither the usual independence model associated with factor graphs holds in this case (instead, the model is equivalent to other already existing notations, such as the bi-directed graphs used in [4]), nor...

1290 | An Introduction to Copulas
- Nelsen
- 2007
Citation Context ...d the design of joint distributions where the degree of dependence among variables changes at extreme values of the sample space. For a more detailed overview of copulas and their uses, please refer to [11, 19, 6]. A multivariate copula can in theory be derived from any joint distribution with continuous marginals: if F(X_1, . . . , X_p) is a joint CDF and F_i(·) is the respective marginal CDF of X_i, then F(F_1^{-1}(...
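The construction quoted here (apply each marginal CDF to its variable; the joint law of the transformed variables is the copula) can be checked numerically. A minimal Python sketch, assuming a toy dependent Gaussian pair as the joint distribution (the choice of example distribution is illustrative, not from the paper):

```python
import math
import random

def norm_cdf(x, mu=0.0, sigma=1.0):
    # Normal CDF via the error function.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

random.seed(0)
n = 50000
us, vs = [], []
for _ in range(n):
    z = random.gauss(0, 1)
    w = random.gauss(0, 1)
    x1, x2 = z, z + w  # dependent pair with normal marginals
    # Probability integral transform of each marginal: (U, V) is a draw
    # from the copula of (X1, X2); each coordinate is Uniform[0, 1].
    us.append(norm_cdf(x1))
    vs.append(norm_cdf(x2, sigma=math.sqrt(2.0)))

mean_u = sum(us) / n
cov_uv = sum(u * v for u, v in zip(us, vs)) / n - mean_u * (sum(vs) / n)
print(round(mean_u, 2))  # ≈ 0.5: the marginal is uniform
print(cov_uv > 0.03)     # True: the dependence survives the transform
```

The transformed sample has uniform marginals but retains the positive dependence of the original pair, which is exactly what the copula isolates.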

772 | Probabilistic Networks and Expert Systems
- Cowell, Dawid, et al.
- 1999
Citation Context ...t i s.t. z_i = j are the indices of the variables in z assigned the value j within the particular term in the summation. From this, we interpret the function p_c(u, z) ≡ ∏_{j=1}^K φ_j(u, z) (3) as a joint density/mass function over the space [0,1]^p × {1, 2, . . . , K}^p for a set of random variables U ∪ Z. This interpretation is warranted by the fact that p_c(·) is...^3 (Footnote 3: Please notice that [10] also pr...)

126 | Ancestral graph Markov models.
- Richardson, Spirtes
- 2002
Citation Context ...se variables a parent of all variables in S_i. If hidden variables assigned to different cliques are independent, it follows that the independence constraints among the observed variables of G and G′ [21] are the same, as defined by standard graphical separation criteria^4. See Figure 2 for examples. (Footnote 4: Known as Global Markov conditions, as described by e.g. [21].)

103 | The pseudo-marginal approach for efficient Monte Carlo computations
- Andrieu, Roberts
Citation Context ... even dynamic programming is not a scalable solution for calculating likelihood functions in many models. 1 Introduction: Copula functions are cumulative distribution functions (CDFs) in the unit cube [0,1]^p with uniform marginals. Copulas allow for the construction of multivariate distributions with arbitrary marginals – a result directly related to the fact that F(X) is uniformly distributed in [0,1],...
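The fact invoked at the end of this snippet, that F(X) is uniformly distributed on [0,1] (the probability integral transform), is easy to verify empirically. A sketch assuming an Exponential(1) variable, chosen only because its CDF has a closed form:

```python
import math
import random

random.seed(1)
n = 100000
# X ~ Exponential(1), with CDF F(x) = 1 - exp(-x); transform each draw.
u = [1.0 - math.exp(-random.expovariate(1.0)) for _ in range(n)]

mean = sum(u) / n
var = sum((x - mean) ** 2 for x in u) / n
print(round(mean, 2))                  # ≈ 0.5, the mean of Uniform[0, 1]
print(abs(var - 1.0 / 12.0) < 0.005)   # True: variance ≈ 1/12
```

Any continuous CDF works in place of the exponential one; this is the mechanism that lets copulas decouple marginals from dependence.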

101 | Automatic choice of dimensionality for PCA
- Minka
Citation Context ...oblem with 100 data points, the method failed spectacularly. That is, the chain hardly ever moved. Far more sophisticated importance distributions will be necessary here. Expectation-propagation (EP) [16] approaches can in principle be developed as alternatives. A particularly interesting feature of this problem is that marginal CDFs can be read off easily, and as such energy functions for generalized E...

100 | Families of multivariate distributions
- Marshall, Olkin
- 1988
Citation Context ...latent variable, and exchangeable over the observations. A detailed account of Archimedean copulas is given by textbooks such as [11, 19], and their relation to exchangeable latent variable models in [15, 7]. Here we provide as an example a latent variable description of the Clayton copula, a popular copula in domains such as finance for allowing stronger dependencies at the lower quantiles of the sample...
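The latent variable description alluded to here is, for Archimedean families, the Marshall–Olkin mixture representation: conditional on a shared latent V, the observations are independent and exchangeable. A hedched sketch for the Clayton case (θ and the sample size are arbitrary choices for illustration):

```python
import random

def clayton_pair(theta, rng):
    # Marshall-Olkin representation: a shared latent V ~ Gamma(1/theta, 1)
    # mixed with independent exponentials yields Clayton-dependent uniforms,
    # via the generator inverse psi(t) = (1 + t)^(-1/theta).
    v = rng.gammavariate(1.0 / theta, 1.0)
    return tuple((1.0 + rng.expovariate(1.0) / v) ** (-1.0 / theta)
                 for _ in range(2))

rng = random.Random(42)
theta = 2.0
sample = [clayton_pair(theta, rng) for _ in range(2000)]

# Marginals are Uniform[0, 1]:
mean_u = sum(u for u, _ in sample) / len(sample)
print(round(mean_u, 1))  # ≈ 0.5

# Kendall's tau of a Clayton copula is theta / (theta + 2) = 0.5 here.
conc = sum((a - c) * (b - d) > 0
           for i, (a, b) in enumerate(sample)
           for (c, d) in sample[:i])
pairs = len(sample) * (len(sample) - 1) // 2
tau = 2.0 * conc / pairs - 1.0
print(abs(tau - theta / (theta + 2.0)) < 0.06)  # True
```

The same recipe samples any Archimedean copula whose generator inverse is the Laplace transform of a tractable latent distribution, which is the connection the cited works exploit.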

89 | Vines: A new graphical model for dependent random variables
- Bedford, Cooke
- 2002
Citation Context ...ts of z. Finally, let I(·) be the indicator function, where I(x) = 1 if x is a true statement, and zero otherwise. The chain rule states that ∂^p p_C(u_1, . . . , u_p) / (∂u_1 · · · ∂u_p) = Σ_{z ∈ Z} ∏_{j=1}^K φ_j(u, z) (2), where φ_j(u, z) ≡ ∂^{Σ_i I(z_i = j)} C_j(u_1^{a_1j}, . . . , u_p^{a_pj}) / ∏_{i s.t. z_i = j} ∂u_i. To clarify, the set i s.t. z_i = j contains the indices of the variables in z which are assigned the value j within t...

79 | MCMC for doubly-intractable distributions
- Murray, Ghahramani, et al.
- 2006
Citation Context ...factorization has a high treewidth, making junction tree inference intractable [3]. In particular, in the latter case Bayesian inference is doubly-intractable (following the terminology introduced by [17]), since the likelihood function cannot be computed. Neither the task of writing new software nor that of deriving new approximations is easy, with the full junction tree algorithm of [10] being considerably ...

52 | Multivariate Models and Dependence Concepts. Chapman & Hall/CRC
- Joe
- 1997
Citation Context ...d the design of joint distributions where the degree of dependence among variables changes at extreme values of the sample space. For a more detailed overview of copulas and their uses, please refer to [11, 19, 6]. A multivariate copula can in theory be derived from any joint distribution with continuous marginals: if F(X_1, . . . , X_p) is a joint CDF and F_i(·) is the respective marginal CDF of X_i, then F(F_1^{-1}(...

25 | Binary models for marginal independence
- Drton, Richardson
- 2008
Citation Context ...ce of some arguments in the factors (corresponding in (1) to setting some exponents a_ij to zero). Independence constraints from such models include those arising from models of marginal independence [4, 5]. Example 1: We first adopt the graphical notation of [4] to describe the factor structure of the cumulative distribution network (CDN) models of Huang and Frey, where a bi-directed edge U_m ↔ U_n is incl...

23 | Probabilistic Reasoning in Expert Systems: Networks of Plausible Inference
- Pearl
- 1988
Citation Context ...ing in a tractable subgraph of the original graph. Only a subset will be sampled. This can be done in a way analogous to the classic cutset conditioning approach for inference in Markov random fields [20]. In effect, any machinery used to sample from discrete Markov random fields can be imported to the task of sampling Z. Since the method in Section 3 is basically the result of marginalizing Z analyti...

22 | Learning with tree-averaged densities and distributions
- Kirshner
- 2007
Citation Context ...s being able to compute F_i^{-1}(·), which in many cases is not a tractable problem. Specialized constructions exist, particularly for recipes which use small-dimensional copulas as building blocks. See [2, 12] for examples. In this paper, we provide algorithms for performing Bayesian inference using the product of copulas framework of Liebscher [14]. Constructing copulas by multiplying functions of small d...

19 | An algorithm for maximum likelihood estimation in Gaussian graphical models for marginal independence
- Drton, Richardson
- 2003
Citation Context ...ce of some arguments in the factors (corresponding in (1) to setting some exponents a_ij to zero). Independence constraints from such models include those arising from models of marginal independence [4, 5]. Example 1: We first adopt the graphical notation of [4] to describe the factor structure of the cumulative distribution network (CDN) models of Huang and Frey, where a bi-directed edge U_m ↔ U_n is incl...

19 | Construction of asymmetric multivariate copulas
- Liebscher
- 2008
Citation Context ... use small-dimensional copulas as building blocks. See [2, 12] for examples. In this paper, we provide algorithms for performing Bayesian inference using the product of copulas framework of Liebscher [14]. Constructing copulas by multiplying functions of small-dimensional copulas is a conceptually simple construction, and does not require the definition of a hierarchy among observed variables as in [2...

17 | Cumulative distribution networks and the derivative-sum-product algorithm: Models and inference for cumulative distribution functions on graphs
- Huang, Frey
- 2011
Citation Context ... , u_p) ≡ ∏_{j=1}^K C_j(u_1^{a_1j}, . . . , u_p^{a_pj}) (1), where a_i1 + . . . + a_iK = 1 and a_ij ≥ 0 for all 1 ≤ i ≤ p, 1 ≤ j ≤ K, with each C_j(·, . . . , ·) being a copula function. Independently, Huang and Frey [8, 9] derived a product of CDFs model from the point of view of graphical models, where independence constraints arise due to the absence of some arguments in the factors (corresponding in (1) to setting s...
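Equation (1) is straightforward to evaluate once the factor copulas and exponents are fixed. An illustrative sketch with two bivariate Clayton factors over three variables (the factorization and θ are invented for the example, not taken from the paper); splitting the unit exponent of the shared variable keeps the marginals uniform:

```python
def clayton(u, v, theta=2.0):
    # Bivariate Clayton copula CDF.
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def p_C(u1, u2, u3):
    # Product-of-copulas CDF with factors over {U1, U2} and {U2, U3}.
    # U2 appears in both factors, so its exponent 1 is split 0.5 + 0.5;
    # U1 and U3 each appear once with exponent 1 (each row sums to 1).
    return clayton(u1, u2 ** 0.5) * clayton(u2 ** 0.5, u3)

# The construction preserves uniform marginals: fixing the other
# arguments at 1 recovers the identity, since C(u, 1) = u for any copula.
print(round(p_C(0.3, 1.0, 1.0), 6))   # 0.3
print(round(p_C(1.0, 0.49, 1.0), 6))  # 0.49
```

The second print works because the two square-root powers of u2 multiply back to u2 once the other arguments are set to 1, which is exactly the role of the constraint a_i1 + . . . + a_iK = 1.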

14 | The hidden life of latent variables: Bayesian learning with mixed graph models.
- Silva, Ghahramani
- 2009
Citation Context ...rameters are mutually dependent in the posterior, since (2) does not factorize in general. This mirrors the behaviour of MCMC algorithms for the Gaussian model of marginal independence as described by [24]. Unlike the Gaussian model, there are no hard constraints on the parameters across different factors. Unlike the Gaussian model, however, factor...

13 | Slice sampling. The Annals of Statistics 31
- Neal
- 2003
Citation Context ...y 10 latent variables, with each observation U_i having only two parents. We used a Metropolis-Hastings method where each θ_i is sampled in turn, conditioned on all other parameters, using slice sampling [18]. Latent variables are sampled one by one using a simple random walk proposal. A Gamma(2,2) prior is assigned to each copula parameter independently. Figure 3 illustrates the trace obtained by initia...
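Slice sampling as cited here needs no tuning beyond an initial interval width. A self-contained sketch of the stepping-out/shrinkage variant, targeting the Gamma(2,2) density that serves as the prior in this experiment (the target choice is the only connection to the text; the sampler itself is generic):

```python
import math
import random

def slice_sample(x0, log_f, rng, w=1.0, n_steps=5000):
    # Univariate slice sampler with stepping-out and shrinkage (Neal, 2003).
    xs, x = [], x0
    for _ in range(n_steps):
        log_y = log_f(x) + math.log(rng.random())  # auxiliary slice level
        # Stepping out: place an interval of width w at random around x,
        # then expand each end until it leaves the slice {x': log_f(x') > log_y}.
        left = x - w * rng.random()
        right = left + w
        while log_f(left) > log_y:
            left -= w
        while log_f(right) > log_y:
            right += w
        # Shrinkage: sample uniformly in [left, right], shrinking towards
        # the current point on each rejection.
        while True:
            x1 = left + (right - left) * rng.random()
            if log_f(x1) > log_y:
                x = x1
                break
            if x1 < x:
                left = x1
            else:
                right = x1
        xs.append(x)
    return xs

def log_gamma_2_2(x):
    # Log-density of Gamma(shape=2, rate=2), up to an additive constant.
    return math.log(x) - 2.0 * x if x > 0 else -math.inf

rng = random.Random(7)
xs = slice_sample(1.0, log_gamma_2_2, rng)
mean = sum(xs) / len(xs)
print(abs(mean - 1.0) < 0.15)  # True: Gamma(2,2) has mean 1
```

Unlike a random walk proposal, the interval adapts to the local scale of the target, which is why the snippet above can use it per-parameter without tuning each θ_i separately.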

12 | Sampling Archimedean copulas
- Hofert
- 2008
Citation Context ...latent variable, and exchangeable over the observations. A detailed account of Archimedean copulas is given by textbooks such as [11, 19], and their relation to exchangeable latent variable models in [15, 7]. Here we provide as an example a latent variable description of the Clayton copula, a popular copula in domains such as finance for allowing stronger dependencies at the lower quantiles of the sample...

4 | Exact inference and learning for cumulative distribution functions on loopy graphs
- Huang, Jojic, et al.
Citation Context ... copulas is also a CDF, we need to be able to calculate the likelihood function if Bayesian inference is to take place^1. The structure of our contribution is as follows: i. we simplify the results of [10] by reducing them to standard message passing algorithms as found in the graphical models literature [3] (Section 3); ii. for intractable likelihood problems, an alternative latent variable repres...

2 | Latent composite likelihood learning for the structured canonical correlation model
- Silva
- 2012
Citation Context ...e case where dynamic programming by itself is possible, a modification of (1) using differences instead of differentiation leads to a similar discrete latent variable formulation (see the Appendix of [22]) without the need for any further set of latent variables. However, the corresponding function is not a joint distribution over Z ∪ U anymore, since differences can generate negative numbers...

2 | Posterior sampling when the normalising constant is unknown
- Walker
- 2011
Citation Context ...graphical models [3] (Section 3); ii. for intractable likelihood problems, an alternative latent variable representation for the likelihood function is introduced, following in spirit the approach of [25] for solving doubly-intractable Bayesian inference problems by auxiliary variable sampling (Section 4). We start with Section 2, where we discuss in more detail the product of copulas represent...

1 | A MCMC approach for learning the structure of Gaussian acyclic directed mixed graphs
- Silva
- 2013
Citation Context ...H(1), . . . , H(d)}} given observed data with a sample size of d. We do not consider estimating the shape of the factorization (i.e., the respective graphical model structure learning task), as done in [23]. 5 Illustration: We discuss two examples to show the possibilities and difficulties of performing MCMC inference in dense and sparse cumulative distribution fields. For simplicity we treat the exponen...