Results 1–10 of 23
Stratified exponential families: Graphical models and model selection
Annals of Statistics, 2001
A New Discriminative Kernel from Probabilistic Models
2002
Abstract

Cited by 72 (6 self)
Recently, Jaakkola and Haussler proposed a method for constructing kernel functions from probabilistic models. Their so-called "Fisher kernel" has been combined with discriminative classifiers such as the SVM and applied successfully in, e.g., DNA and protein analysis. Whereas the Fisher kernel (FK) is calculated from the marginal log-likelihood, we propose the TOP kernel, derived from Tangent vectors Of Posterior log-odds. Furthermore, we develop a theoretical framework on feature extractors from probabilistic models and use it for analyzing the TOP kernel. In experiments, our new discriminative TOP kernel compares favorably to the Fisher kernel.
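A minimal sketch of the Fisher-kernel construction this abstract builds on, for a one-parameter Bernoulli model (an illustrative toy, not code from the paper; the function names are made up here): the kernel pairs score vectors u(x) = d/dθ log p(x|θ) weighted by the inverse Fisher information.

```python
def fisher_score(x, theta):
    # Score u(x) = d/dtheta log p(x | theta) for a Bernoulli(theta) model:
    # log p = x*log(theta) + (1-x)*log(1-theta)
    return x / theta - (1 - x) / (1 - theta)

def fisher_kernel(x, y, theta):
    # Bernoulli Fisher information: I(theta) = 1 / (theta * (1 - theta))
    info = 1.0 / (theta * (1.0 - theta))
    # Fisher kernel K(x, y) = u(x) * I(theta)^(-1) * u(y)
    return fisher_score(x, theta) * fisher_score(y, theta) / info
```

The TOP kernel of the abstract follows the same template but replaces the marginal log-likelihood with the posterior log-odds before taking tangent vectors.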
Instrumentality Tests Revisited
In Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, 2001
Abstract

Cited by 17 (0 self)
An instrument is a random variable that is uncorrelated with certain (unobserved) error terms and thus allows the identification of structural parameters in linear models. In nonlinear models, instrumental variables are useful for deriving bounds on causal effects. A few years ago, Pearl introduced a necessary test for instruments which permits researchers to identify variables that could not serve as instruments. In this paper, we extend Pearl's result in several directions. In particular, we answer in the affirmative an open conjecture about the nontestability of instruments in models with unrestricted variables, and we devise new tests for models with discrete and continuous variables.
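Pearl's necessary test mentioned in the abstract is the instrumental inequality: for a discrete instrument Z, treatment X, and outcome Y, any distribution compatible with the instrumental model must satisfy, for every x, Σ_y max_z P(x, y | z) ≤ 1. A sketch of the check (the array layout and function name are this listing's own illustration, not from the paper):

```python
import numpy as np

def instrumental_inequality_holds(P, tol=1e-9):
    """P[z, x, y] = P(X=x, Y=y | Z=z).  Returns True iff Pearl's
    instrumental inequality sum_y max_z P(x, y | z) <= 1 holds for all x."""
    P = np.asarray(P, dtype=float)
    # max over z (axis 0), then sum over y (last axis), then worst case over x
    worst = P.max(axis=0).sum(axis=1).max()
    return bool(worst <= 1.0 + tol)
```

A distribution that ignores Z always passes; a strong flip of Y with Z, as below, is ruled out.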
Population Markov Chain Monte Carlo
Machine Learning, 2003
Abstract

Cited by 15 (2 self)
Stochastic search algorithms inspired by physical and biological systems are applied to the problem of learning directed graphical probability models in the presence of missing observations and hidden variables. For this class of problems, deterministic search algorithms tend to halt at local optima, requiring random restarts to obtain solutions of acceptable quality. We compare three stochastic search algorithms: a Metropolis-Hastings Sampler (MHS), an Evolutionary Algorithm (EA), and a new hybrid algorithm called Population Markov Chain Monte Carlo, or popMCMC. PopMCMC uses statistical information from a population of MHSs to inform the proposal distributions for individual samplers in the population. Experimental results show that popMCMC and EAs learn more efficiently than the MHS with no information exchange. Populations of MCMC samplers exhibit more diversity than populations evolving according to EAs not satisfying physics-inspired local reversibility conditions. KEY WORDS: Markov Chain Monte Carlo, Metropolis-Hastings Algorithm, Graphical Probabilistic Models, Bayesian Networks, Bayesian Learning, Evolutionary Algorithms
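The single-chain building block that popMCMC runs as a population can be sketched as a random-walk Metropolis-Hastings sampler (a generic illustration, not the paper's implementation; popMCMC additionally shares proposal information across the population of chains):

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings on a 1-D target given by its
    unnormalized log density."""
    rng = np.random.default_rng(seed)
    x, logp = x0, log_target(x0)
    samples = np.empty(n_steps)
    for t in range(n_steps):
        prop = x + step * rng.normal()            # symmetric proposal
        logp_prop = log_target(prop)
        if np.log(rng.random()) < logp_prop - logp:  # accept/reject
            x, logp = prop, logp_prop
        samples[t] = x                            # chain state after step t
    return samples
```

Run on a standard-normal log density (`lambda x: -0.5 * x * x`) the chain's sample mean and standard deviation approach 0 and 1.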
Transelliptical graphical models
In Advances in Neural Information Processing Systems, 2012
Abstract

Cited by 12 (7 self)
We advocate the use of a new distribution family, the transelliptical, for robust inference of high-dimensional graphical models. The transelliptical family is an extension of the nonparanormal family proposed by Liu et al. (2009). Just as the nonparanormal extends the normal by transforming the variables using univariate functions, the transelliptical extends the elliptical family in the same way. We propose a nonparametric rank-based regularization estimator which achieves the parametric rates of convergence for both graph recovery and parameter estimation. Such a result suggests that the extra robustness and flexibility obtained by the semiparametric transelliptical modeling incurs almost no efficiency loss. We also discuss the relationship between this work and the transelliptical component analysis proposed by Han and Liu (2012).
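The rank-based estimation referred to in the abstract can be illustrated with the standard Kendall's-tau transform sin(πτ/2), which estimates a latent correlation while being invariant to monotone marginal transforms (an illustrative sketch, not the paper's estimator code):

```python
import numpy as np

def kendall_tau(x, y):
    # O(n^2) Kendall's tau: average concordance sign over all ordered pairs
    x, y = np.asarray(x, float), np.asarray(y, float)
    dx = np.sign(x[:, None] - x[None, :])
    dy = np.sign(y[:, None] - y[None, :])
    n = len(x)
    return (dx * dy).sum() / (n * (n - 1))

def rank_corr(x, y):
    # Rank-based latent correlation estimate: sin(pi/2 * tau);
    # unchanged under monotone transforms of either marginal
    return np.sin(np.pi / 2 * kendall_tau(x, y))
```

Because only ranks enter, `rank_corr(x, y)` equals `rank_corr(x, exp(y))` exactly, which is the robustness property the family exploits.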
Inequality constraints in causal models with hidden variables
In Proceedings of the Seventeenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-06), 2006
Abstract

Cited by 12 (5 self)
We present a class of inequality constraints on the set of distributions induced by local interventions on variables governed by a causal Bayesian network in which some of the variables remain unmeasured. We derive bounds on causal effects that are not directly measured in randomized experiments, and we derive instrumental-inequality-type constraints on nonexperimental distributions. The results have applications in testing causal models with observational or experimental data.
A Semi-Algebraic Description of Discrete Naive Bayes Models with Two Hidden Classes
In Proc. Ninth International Symposium on Artificial Intelligence and Mathematics, Fort, 2006
Abstract

Cited by 7 (1 self)
Discrete Bayesian network models with hidden variables define an important class of statistical models. These models are usually defined parametrically, but can also be described semi-algebraically as the solutions in the probability simplex of a finite set of polynomial equations and inequations. In this paper we present a semi-algebraic description of discrete Naive Bayes models with two hidden classes and a finite number of observable variables. The identifiability of the parameters is also studied. Our derivations are based on an alternative parametrization of the Naive Bayes models with an arbitrary number of hidden classes.
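One concrete example of the semi-algebraic constraints such models satisfy: for a naive Bayes model with two hidden classes, every flattening of the joint probability tensor that splits the observables into two groups is a sum of two rank-one matrices, so it has matrix rank at most 2 (all 3×3 minors vanish). A sketch with made-up parameter values (not taken from the paper):

```python
import numpy as np

def naive_bayes_joint(prior, cond):
    """Joint distribution of a naive Bayes model with 2 hidden classes.
    prior: (2,) class probabilities; cond[i][c, x] = P(X_i = x | C = c).
    Returns a tensor of shape (2,) * len(cond)."""
    joint = np.zeros((2,) * len(cond))
    for c in range(2):
        term = np.array(1.0)
        for table in cond:
            term = np.multiply.outer(term, table[c])  # product of factors
        joint = joint + prior[c] * term
    return joint

# Hypothetical parameters for four binary observables
prior = np.array([0.4, 0.6])
cond = [np.array([[0.2, 0.8], [0.7, 0.3]]),
        np.array([[0.9, 0.1], [0.4, 0.6]]),
        np.array([[0.3, 0.7], [0.6, 0.4]]),
        np.array([[0.5, 0.5], [0.1, 0.9]])]
P = naive_bayes_joint(prior, cond)
M = P.reshape(4, 4)  # flatten {X1,X2} x {X3,X4}: rank <= 2 by construction
```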
Polynomial constraints in causal Bayesian networks
In Proceedings of the Seventeenth Annual Conference on Uncertainty in Artificial Intelligence (UAI-07)
Abstract

Cited by 5 (2 self)
We use the implicitization procedure to generate polynomial equality constraints on the set of distributions induced by local interventions on variables governed by a causal Bayesian network with hidden variables. We show how to reduce the complexity of the implicitization problem and make the problem tractable in certain causal Bayesian networks. We also present some preliminary results on the algebraic structure of the polynomial constraints. The results have applications in distinguishing between causal models and in testing causal models with combined observational and experimental data.
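Implicitization can be illustrated on a tiny example: eliminating the parameters of a two-binary-variable independence model with a lexicographic Gröbner basis recovers the implicit polynomial constraints on the distribution (sum-to-one and the vanishing 2×2 determinant). This is a generic computer-algebra sketch, not the paper's procedure:

```python
from sympy import symbols, groebner

# Parametric model: p_xy = q_x * r_y for two independent binary variables,
# with q1 = 1 - q0 and r1 = 1 - r0.
q, r = symbols('q r')
p00, p01, p10, p11 = symbols('p00 p01 p10 p11')

ideal = [p00 - q * r,
         p01 - q * (1 - r),
         p10 - (1 - q) * r,
         p11 - (1 - q) * (1 - r)]

# Lex order with the parameters listed first performs elimination:
# basis elements free of q and r are the implicit constraints.
G = groebner(ideal, q, r, p00, p01, p10, p11, order='lex')
constraints = [g for g in G.exprs if not (g.free_symbols & {q, r})]
```

The elimination ideal contains p00 + p01 + p10 + p11 - 1 and the determinant p00*p11 - p01*p10, exactly the equality constraints characterizing independence.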
The geometry of conditional independence tree models with hidden variables, arXiv:0904.1980
Department of Mathematics, University of California, Berkeley, CA 94720, USA; Department of Mathematics, Stanford University
Abstract

Cited by 3 (1 self)
In this paper we investigate the geometry of undirected graphical models of trees when all the variables in the system are binary and some of them are hidden. We obtain a full description of those models, given by polynomial equations and inequalities, and give exact formulas for their parameters in terms of the marginal probability over the observed variables. We also show how correlations link to the tree metrics considered in phylogenetics. Finally, a new system of coordinates is given that is intrinsically related to phylogenetic tree models and allows us to classify phylogenetic invariants.
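The link between correlations and tree metrics mentioned in the abstract rests on the four-point condition characterizing tree metrics: for every quartet of leaves, the two largest of the three pairwise distance sums coincide. A small sketch of the check (the data layout is this listing's own illustration):

```python
from itertools import combinations

def four_point_holds(d, tol=1e-9):
    """d maps frozenset pairs of leaves to distances.  Returns True iff
    the four-point condition holds for every quartet: among the three
    sums d(i,j)+d(k,l), d(i,k)+d(j,l), d(i,l)+d(j,k), the two largest
    are equal -- the signature of a tree metric."""
    points = sorted({p for pair in d for p in pair})
    for i, j, k, l in combinations(points, 4):
        s = sorted([d[frozenset((i, j))] + d[frozenset((k, l))],
                    d[frozenset((i, k))] + d[frozenset((j, l))],
                    d[frozenset((i, l))] + d[frozenset((j, k))]])
        if abs(s[2] - s[1]) > tol:
            return False
    return True
```

A quartet tree with unit edge lengths passes, while the Euclidean distances of a unit square do not.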