Results 1–10 of 21
Normal Linear Regression Models with Recursive Graphical Markov Structure
 J. Multivariate Anal.
, 1998
Cited by 13 (5 self)
A multivariate normal statistical model defined by the Markov property determined by an acyclic digraph admits a recursive factorization of its likelihood function (LF) into the product of conditional LFs, each factor having the form of a classical multivariate linear regression model (≡ MANOVA model). Here these models are extended in a natural way to normal linear regression models whose LFs continue to admit such recursive factorizations, from which maximum likelihood estimators and likelihood ratio (LR) test statistics can be derived by classical linear methods. The central distribution of the LR test statistic for testing one such multivariate normal linear regression model against another is derived, and the relation of these regression models to block-recursive normal linear systems is established. It is shown how a collection of nonnested dependent normal linear regression models (≡ seemingly unrelated regressions) can be combined into a single multivariate normal linear recursive regression model by imposing a parsimonious set of graphical Markov (≡ conditional independence) restrictions.
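The recursive factorization described here means that maximum likelihood estimation for the whole DAG model reduces to one ordinary least-squares regression per vertex. A minimal sketch of this idea on a hypothetical three-variable chain DAG (the coefficients and simulated data are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical DAG: X1 -> X2 -> X3, with made-up coefficients.
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(scale=0.5, size=n)
x3 = -0.6 * x2 + rng.normal(scale=0.7, size=n)

def ols(y, covariates):
    """Least-squares fit with intercept; returns coefficient vector."""
    X = np.column_stack([np.ones_like(y)] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# The joint likelihood factorizes along the DAG, so each conditional
# regression is fitted separately by classical linear methods.
b2 = ols(x2, [x1])
b3 = ols(x3, [x2])
print(b2[1], b3[1])  # slopes near the true 0.8 and -0.6
```

Because the factors share no parameters, maximizing each conditional LF separately maximizes the joint LF.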
Covariance Chains
 Bernoulli
, 2006
Cited by 12 (8 self)
Covariance matrices which can be arranged in tridiagonal form are called covariance chains. They are used to clarify some issues of parameter equivalence and of independence equivalence for linear models in which a set of latent variables influences a set of observed variables. For this purpose, orthogonal decompositions for covariance chains are derived first in explicit form. Covariance chains are also contrasted to concentration chains, for which estimation is explicit and simple. For this purpose, maximum-likelihood equations are derived first for exponential families when some parameters satisfy zero value constraints. From these equations explicit estimates are obtained, which are asymptotically efficient, and they are applied to covariance chains. Simulation results confirm the satisfactory behaviour of the explicit covariance chain estimates also in moderate-size samples.
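To make the contrast concrete, here is a small numerical illustration (the matrix entries are made up) of a four-variable covariance chain, whose tridiagonal covariance matrix encodes marginal independences, versus a concentration chain, where instead the inverse covariance is tridiagonal and encodes conditional independences:

```python
import numpy as np

# Illustrative covariance chain: covariances are nonzero only between
# chain neighbours, i.e. the covariance matrix is tridiagonal.
Sigma = np.array([[2.0, 0.8, 0.0, 0.0],
                  [0.8, 1.5, 0.6, 0.0],
                  [0.0, 0.6, 1.2, 0.4],
                  [0.0, 0.0, 0.4, 1.0]])
assert np.all(np.linalg.eigvalsh(Sigma) > 0)  # a valid covariance matrix

# A covariance chain states MARGINAL independences (zero covariances).
# Its inverse, the concentration matrix, is generally NOT tridiagonal,
# so the same model is not a concentration chain.
K = np.linalg.inv(Sigma)
print(np.round(K, 3))
```

The nonzero entry `K[0, 2]` shows that variables 1 and 3, though marginally independent, are conditionally dependent given the remaining variables.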
Multiple testing and error control in Gaussian graphical model selection
 Statistical Science
Cited by 12 (2 self)
Abstract. Graphical models provide a framework for exploration of multivariate dependence patterns. The connection between graph and statistical model is made by identifying the vertices of the graph with the observed variables and translating the pattern of edges in the graph into a pattern of conditional independences that is imposed on the variables' joint distribution. Focusing on Gaussian models, we review classical graphical models. For these models the defining conditional independences are equivalent to vanishing of certain (partial) correlation coefficients associated with individual edges that are absent from the graph. Hence, Gaussian graphical model selection can be performed by multiple testing of hypotheses about vanishing (partial) correlation coefficients. We show and exemplify how this approach allows one to perform model selection while controlling error rates for incorrect edge inclusion. Key words and phrases: Acyclic directed graph, Bayesian network, bidirected graph, chain graph, concentration graph, covariance graph, DAG, graphical model, multiple testing, undirected graph.
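A rough illustration of the edge-testing idea on simulated data (the Fisher z-based test below is one standard choice for testing a vanishing partial correlation, not necessarily the exact procedure of the paper):

```python
import math
import numpy as np

rng = np.random.default_rng(1)
n = 400

# Simulated Gaussian data whose concentration graph is the chain 1-2-3:
# variables 1 and 3 are conditionally independent given variable 2.
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)
x3 = 0.7 * x2 + rng.normal(size=n)
X = np.column_stack([x1, x2, x3])

# Partial correlations from the scaled inverse sample covariance.
K = np.linalg.inv(np.cov(X, rowvar=False))
d = np.sqrt(np.diag(K))
pcor = -K / np.outer(d, d)

def fisher_z_pvalue(r, n, k):
    """Two-sided p-value for a sample partial correlation r, with n
    observations and k conditioning variables, via Fisher's z-transform."""
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - k - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

p12 = fisher_z_pvalue(pcor[0, 1], n, k=1)  # true edge 1-2
p13 = fisher_z_pvalue(pcor[0, 2], n, k=1)  # absent edge 1-3 (true null)
print(p12, p13)
```

In a full selection procedure these p-values would be fed into a multiple-testing correction so that the rate of incorrectly included edges is controlled.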
Compatible Prior Distributions for DAG models
, 2002
Cited by 6 (1 self)
The application of certain Bayesian techniques, such as the Bayes factor and model averaging, requires the specification of prior distributions on the parameters of alternative models. We propose a new method for constructing compatible priors on the parameters of models nested in a given DAG (Directed Acyclic Graph) model, using a conditioning approach. We define a class of parameterisations consistent with the modular structure of the DAG and derive a procedure, invariant within this class, which we name reference conditioning.
Identifying Small Mean Reverting Portfolios
, 2008
Cited by 5 (0 self)
Given multivariate time series, we study the problem of forming portfolios with maximum mean reversion while constraining the number of assets in these portfolios. We show that it can be formulated as a sparse canonical correlation analysis and study various algorithms to solve the corresponding sparse generalized eigenvalue problems. After discussing penalized parameter estimation procedures, we study the sparsity versus predictability tradeoff and the impact of predictability in various markets.
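Setting the sparsity constraint aside for a moment, the underlying mean-reversion criterion can be sketched as a generalized eigenvalue problem in the style of Box–Tiao predictability (the simulation and all quantities below are illustrative; the paper's contribution is solving this under a cardinality constraint):

```python
import numpy as np

rng = np.random.default_rng(2)
T, p = 1000, 4

# Four hypothetical assets sharing one common random walk plus stationary
# noise, so some linear combination of them is strongly mean reverting.
walk = np.cumsum(rng.normal(size=(T, 1)), axis=0)
S = walk + rng.normal(size=(T, p))

# VAR(1) fit S_t ~ S_{t-1} C by least squares.
C, *_ = np.linalg.lstsq(S[:-1], S[1:], rcond=None)
Gamma = np.cov(S, rowvar=False)

# Predictability of portfolio x is (x' C' Gamma C x) / (x' Gamma x);
# minimising it over x is a generalized eigenvalue problem, solved here
# densely, i.e. without the sparsity constraint.
vals, vecs = np.linalg.eig(np.linalg.solve(Gamma, C.T @ Gamma @ C))
x = np.real(vecs[:, np.argmin(np.real(vals))])

def lag1_autocorr(y):
    y = y - y.mean()
    return float(y[1:] @ y[:-1] / (y @ y))

# The selected portfolio should revert far faster than any single asset.
print(lag1_autocorr(S @ x), lag1_autocorr(S[:, 0]))
```

Constraining the number of nonzero entries of `x` turns this into the sparse generalized eigenvalue problem studied in the paper.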
Sequences of regressions and their independences
, 2012
Cited by 4 (1 self)
Ordered sequences of univariate or multivariate regressions provide statistical models for analysing data from randomized, possibly sequential interventions, from cohort or multi-wave panel studies, but also from cross-sectional or retrospective studies. Conditional independences are captured by what we name regression graphs, provided the generated distribution shares some properties with a joint Gaussian distribution. Regression graphs extend purely directed, acyclic graphs by two types of undirected graph, one type for components of joint responses and the other for components of the context vector variable. We review the special features and the history of regression graphs, prove criteria for Markov equivalence and discuss the notion of simpler statistical covering models. Knowledge of Markov equivalence provides alternative interpretations of a given sequence of regressions, is essential for machine learning strategies and permits the use of the simple graphical criteria of regression graphs on graphs for which the corresponding criteria are in general more complex. Under the known conditions that a Markov equivalent directed acyclic graph exists for any given regression graph, we give a polynomial time algorithm to find one such graph.
Clique Matrices for Statistical Graph Decomposition and Parameterising Restricted Positive Definite Matrices
Cited by 3 (2 self)
We introduce Clique Matrices as an alternative representation of undirected graphs, being a generalisation of the incidence matrix representation. Here we use clique matrices to decompose a graph into a set of possibly overlapping clusters, defined as well-connected subsets of vertices. The decomposition is based on a statistical description which encourages clusters to be well connected and few in number. Inference is carried out using a variational approximation. Clique matrices also play a natural role in parameterising positive definite matrices under zero constraints on elements of the matrix. We show that clique matrices can parameterise all positive definite matrices restricted according to a decomposable graph and form a structured Factor Analysis approximation in the non-decomposable case.
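A minimal sketch of the parameterisation role of clique matrices, for the decomposable chain graph 1-2-3 with cliques {1,2} and {2,3} (weights and names below are illustrative assumptions, not the paper's inference scheme):

```python
import numpy as np

# Clique matrix Z for the graph 1-2-3: one column per clique,
# columns {1,2} and {2,3}, rows indexed by vertices.
Z = np.array([[1, 0],
              [1, 1],
              [0, 1]])

# Parameterise a positive semidefinite matrix whose zeros sit exactly at
# the non-edges of the graph: weight the clique columns and form C C^T.
rng = np.random.default_rng(3)
C = Z * rng.normal(size=Z.shape)  # zero pattern inherited from Z
Sigma = C @ C.T

print(Sigma[0, 2])  # vertices 1 and 3 share no clique, so this prints 0.0
```

Since `Sigma = C @ C.T` is automatically positive semidefinite, the zero constraints are enforced by the clique structure rather than by an explicit constrained optimisation.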
A Characterization of Moral Transitive Directed Acyclic Graph Markov models as trees and its properties
, 2000
Cited by 3 (1 self)
It follows from the known relationships among the different classes of graphical Markov models for conditional independence that the intersection of the classes of moral directed acyclic graph models (or decomposable (DEC) models) and transitive directed acyclic graph (TDAG) models (or lattice conditional independence (LCI) models) is nonempty. This paper shows that the conditional independence models in the intersection can be characterized as labeled trees, where every vertex on the tree corresponds to a single random variable. This fact leads to the definition of a specific Markov property for trees and therefore to the introduction of trees as part of the family of graphical Markov models.