Results 1–10 of 99
Graphical models, exponential families, and variational inference
 Foundations and Trends in Machine Learning
, 2008
Abstract

Cited by 428 (27 self)
The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fields, including bioinformatics, communication theory, statistical physics, combinatorial optimization, signal and image processing, information retrieval and statistical machine learning. Many problems that arise in specific instances, including the key problems of computing marginals and modes of probability distributions, are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms, among them sum-product, cluster variational methods, expectation propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations, can all be understood in terms of exact or approximate forms of these variational representations. The variational approach provides a complementary alternative to Markov chain Monte Carlo as a general source of approximation methods for inference in large-scale statistical models.
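The sum-product algorithm listed among these variational methods computes exact marginals whenever the graph is a tree. As a minimal illustration (not the authors' code; the chain topology and the potentials are chosen purely for the example), a sum-product pass over a chain MRF:

```python
import numpy as np

def chain_sum_product(node_pot, edge_pot):
    """Exact marginals for a chain MRF via the sum-product algorithm.

    node_pot: list of length-K arrays (unnormalized node potentials)
    edge_pot: list of KxK arrays; edge_pot[i] couples nodes i and i+1
    """
    n = len(node_pot)
    # forward messages: fwd[i] summarizes everything left of node i
    fwd = [np.ones_like(node_pot[0]) for _ in range(n)]
    for i in range(1, n):
        m = edge_pot[i - 1].T @ (node_pot[i - 1] * fwd[i - 1])
        fwd[i] = m / m.sum()            # normalize for numerical stability
    # backward messages: bwd[i] summarizes everything right of node i
    bwd = [np.ones_like(node_pot[0]) for _ in range(n)]
    for i in range(n - 2, -1, -1):
        m = edge_pot[i] @ (node_pot[i + 1] * bwd[i + 1])
        bwd[i] = m / m.sum()
    # beliefs: product of local potential and both incoming messages
    marg = []
    for i in range(n):
        b = node_pot[i] * fwd[i] * bwd[i]
        marg.append(b / b.sum())
    return marg
```

On a chain each message is computed once, so the marginals are exact; on loopy graphs the same updates give the approximate "loopy" belief propagation discussed in the survey.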
Learning depth from single monocular images
 In NIPS 18
, 2005
Abstract

Cited by 82 (28 self)
We consider the task of depth estimation from a single monocular image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured outdoor environments which include forests, trees, buildings, etc.) and their corresponding ground-truth depth maps. Then, we apply supervised learning to predict the depth map as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a discriminatively trained Markov random field (MRF) that incorporates multiscale local and global image features, and models both the depths at individual points and the relation between depths at different points. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depth maps.
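The paper's full MRF is discriminatively trained on multiscale image features; as a toy analogue only, a Gaussian MRF over a single scanline of depths reduces MAP estimation to one linear solve. The local predictions `d_local` and the smoothness weight `lam` below are hypothetical stand-ins for the learned feature terms:

```python
import numpy as np

def smooth_depth_map(d_local, lam=5.0):
    """MAP estimate for a toy Gaussian MRF over a 1-D scanline of depths.

    The data term ties each depth to a local (feature-based) prediction
    d_local; the pairwise term penalizes differences between neighbors.
    Minimizing  sum_i (d_i - d_local_i)^2 + lam * sum_i (d_i - d_{i+1})^2
    yields the linear system (I + lam * L) d = d_local, where L is the
    graph Laplacian of the chain.
    """
    n = len(d_local)
    L = np.zeros((n, n))
    for i in range(n - 1):                  # build the chain Laplacian
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    return np.linalg.solve(np.eye(n) + lam * L, d_local)
```

Because the rows of L sum to zero, the estimate preserves the total (hence mean) depth of the scanline while smoothing out noise in the local predictions.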
3D depth reconstruction from a single still image
, 2006
Abstract

Cited by 63 (16 self)
We consider the task of 3-D depth estimation from a single still image. We take a supervised learning approach to this problem, in which we begin by collecting a training set of monocular images (of unstructured indoor and outdoor environments which include forests, sidewalks, trees, buildings, etc.) and their corresponding ground-truth depth maps. Then, we apply supervised learning to predict the value of the depth map as a function of the image. Depth estimation is a challenging problem, since local features alone are insufficient to estimate depth at a point, and one needs to consider the global context of the image. Our model uses a hierarchical, multiscale Markov random field (MRF) that incorporates multiscale local and global image features, and models both the depths and the relation between depths at different points in the image. We show that, even on unstructured scenes, our algorithm is frequently able to recover fairly accurate depth maps. We further propose a model that incorporates both monocular cues and stereo (triangulation) cues, to obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone.
Embedded Trees: Estimation of Gaussian Processes on Graphs with Cycles
 IEEE Transactions on Signal Processing
, 2002
Abstract

Cited by 36 (13 self)
Graphical models provide a powerful general framework for encoding the structure of large-scale estimation problems. However, the graphs describing typical real-world phenomena contain many cycles, making direct estimation procedures prohibitively costly. In this paper, we develop an iterative inference algorithm for general Gaussian graphical models. It operates by exactly solving a series of modified estimation problems on spanning trees embedded within the original cyclic graph. When these subproblems are suitably chosen, the algorithm converges to the correct conditional means. Moreover, and in contrast to many other iterative methods, the tree-based procedures we propose can also be used to calculate exact error variances. Although the conditional mean iteration is effective for quite densely connected graphical models, the error variance computation is most efficient for sparser graphs. In this context, we present a modeling example which suggests that very sparsely connected graphs with cycles may provide significant advantages relative to their tree-structured counterparts, thanks both to the expressive power of these models and to the efficient inference algorithms developed herein.
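The embedded-trees idea, solving a loopy Gaussian model through a sequence of exact tree solves, can be sketched as a matrix splitting of the information matrix. This is a simplified single-tree version (the paper also cycles through several trees and computes error variances); the `tree_mask` marking the chosen spanning tree is an assumption of the sketch:

```python
import numpy as np

def embedded_tree_solve(J, h, tree_mask, iters=200):
    """Embedded-trees-style iteration for the Gaussian mean x = J^{-1} h.

    J is the (loopy) information matrix; tree_mask is a boolean matrix
    selecting the entries of J that lie on a chosen spanning tree plus
    the diagonal.  Each sweep solves the tree-structured system exactly
    (a dense solve here for brevity; on a real tree one would use
    Gaussian belief propagation) and feeds the cut edges back in as a
    correction:  J_T x_{k+1} = h + (J_T - J) x_k.
    """
    J_T = np.where(tree_mask, J, 0.0)    # tree part: off-tree edges cut
    K = J_T - J                          # contribution of the cut edges
    x = np.zeros_like(h)
    for _ in range(iters):
        x = np.linalg.solve(J_T, h + K @ x)
    return x
```

When the cut edges are weak relative to the tree part, the iteration converges to the exact conditional mean of the loopy model.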
Log-Determinant Relaxation for Approximate Inference in Discrete Markov Random Fields
, 2006
Abstract

Cited by 27 (3 self)
Graphical models are well suited to capture the complex and non-Gaussian statistical dependencies that arise in many real-world signals. A fundamental problem common to any signal processing application of a graphical model is that of computing approximate marginal probabilities over subsets of nodes. This paper proposes a novel method, applicable to discrete-valued Markov random fields (MRFs) on arbitrary graphs, for approximately solving this marginalization problem. The foundation of our method is a reformulation of the marginalization problem as the solution of a low-dimensional convex optimization problem over the marginal polytope. Exactly solving this problem for general graphs is intractable; for binary Markov random fields, we describe how to relax it by using a Gaussian bound on the discrete entropy and a semidefinite outer bound on the marginal polytope. This combination leads to a log-determinant maximization problem that can be solved efficiently by interior point methods, thereby providing approximations to the exact marginals. We show how a slightly weakened log-determinant relaxation can be solved even more efficiently by a dual reformulation. When applied to denoising problems in a coupled mixture-of-Gaussian model defined on a binary MRF with cycles, we find that the performance of this log-determinant relaxation is comparable or superior to the widely used sum-product algorithm over a range of experimental conditions.
Multiscale Poisson intensity and density estimation
 IEEE Transactions on Information Theory
, 2005
Abstract

Cited by 25 (12 self)
The nonparametric Poisson intensity and density estimation methods studied in this paper offer near-minimax convergence rates for broad classes of densities and intensities with arbitrary levels of smoothness. The methods and theory presented here share many of the desirable features associated with wavelet-based estimators: computational speed, spatial adaptivity, and the capability of detecting discontinuities and singularities with high resolution. Unlike traditional wavelet-based approaches, which impose an upper bound on the degree of smoothness to which they can adapt, the estimators studied here guarantee nonnegativity and do not require any a priori knowledge of the underlying signal’s smoothness to guarantee near-optimal performance. At the heart of these methods lie multiscale decompositions based on free-knot, free-degree piecewise-polynomial functions and penalized likelihood estimation. The degrees as well as the locations of the polynomial pieces can be adapted to the observed data, resulting in near-minimax optimal convergence rates. For piecewise analytic signals, in particular, the error of this estimator converges at nearly the parametric rate. These methods can be further refined in two dimensions, and it is demonstrated that platelet-based estimators in two dimensions exhibit similar near-optimal error convergence rates for images consisting of smooth surfaces separated by smooth boundaries.
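The piecewise-constant (degree-0) special case of these penalized-likelihood estimators can be sketched as a recursive dyadic partition: each interval keeps a single constant Poisson intensity unless splitting it in half raises the penalized log-likelihood. The penalty value and the pruning rule below are illustrative simplifications, not the paper's exact criterion:

```python
import numpy as np

def rdp_intensity(counts, pen=2.0):
    """Toy multiscale Poisson intensity estimate on a dyadic interval.

    Each interval is either fit by one constant intensity or split in
    half, whichever gives the larger penalized Poisson log-likelihood
    (`pen` is the cost per additional piece).  This is the
    piecewise-constant special case of free-knot, free-degree fitting.
    """
    def loglik(seg):
        s, n = seg.sum(), len(seg)
        lam = max(s / n, 1e-12)
        return s * np.log(lam) - n * lam    # Poisson log-lik, up to const

    def fit(seg):
        if len(seg) == 1:
            return loglik(seg), np.array([max(seg[0], 1e-12)])
        mid = len(seg) // 2
        ll_l, est_l = fit(seg[:mid])
        ll_r, est_r = fit(seg[mid:])
        if ll_l + ll_r - pen > loglik(seg):          # splitting pays off
            return ll_l + ll_r - pen, np.concatenate([est_l, est_r])
        lam = max(seg.sum() / len(seg), 1e-12)       # keep one constant
        return loglik(seg), np.full(len(seg), lam)

    return fit(np.asarray(counts, dtype=float))[1]
```

On data with a sharp intensity jump, the partition splits exactly at the jump and fits each side with its own rate, illustrating the spatial adaptivity the abstract describes.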
Learning Multiscale Representations of Natural Scenes Using Dirichlet Processes
Abstract

Cited by 21 (3 self)
We develop nonparametric Bayesian models for multiscale representations of images depicting natural scene categories. Individual features or wavelet coefficients are marginally described by Dirichlet process (DP) mixtures, yielding the heavy-tailed marginal distributions characteristic of natural images. Dependencies between features are then captured with a hidden Markov tree, and Markov chain Monte Carlo methods are used to learn models whose latent state space grows in complexity as more images are observed. By truncating the potentially infinite set of hidden states, we are able to exploit efficient belief propagation methods when learning these hierarchical Dirichlet process hidden Markov trees (HDP-HMTs) from data. We show that our generative models capture interesting qualitative structure in natural scenes, and more accurately categorize novel images than models which ignore spatial relationships among features.
Extended Message Passing Algorithm for Inference in Loopy Gaussian Graphical Models
, 2002
Abstract

Cited by 19 (0 self)
We consider message passing for probabilistic inference in undirected Gaussian graphical models. We show that for singly connected graphs, message passing yields an algorithm that is equivalent to the application of Gaussian elimination to the solution of a particular system of equations. This relation provides a natural way of extending message passing to arbitrary graphs with loops by first studying the operations required by Gaussian elimination. We thus obtain a finite-time convergent algorithm that solves the inference problem exactly and whose complexity grows gradually with the "distance" of the graph to a tree. This algorithm can be implemented in a distributed fashion at nodes through message passing, as for example in sensor networks.
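The stated equivalence between message passing on singly connected graphs and Gaussian elimination is easiest to see on a chain: the forward messages are exactly the Schur-complement updates of elimination, and the backward pass is back-substitution. A sketch, assuming a tridiagonal information matrix J and potential vector h (an example setup, not the paper's general algorithm):

```python
import numpy as np

def chain_gaussian_bp(J, h):
    """Gaussian message passing on a chain = Gaussian elimination.

    J is a tridiagonal information matrix, h the potential vector.
    The forward sweep eliminates x_0, x_1, ... exactly as in Gaussian
    elimination; the backward sweep back-substitutes to recover every
    mean.  Returns the exact mean vector J^{-1} h.
    """
    n = len(h)
    # forward sweep: (dJ[i], dh[i]) is the message summarizing x_0..x_{i-1}
    dJ = np.zeros(n)
    dh = np.zeros(n)
    for i in range(1, n):
        a = J[i - 1, i]                        # coupling to the next node
        pivot = J[i - 1, i - 1] + dJ[i - 1]
        dJ[i] = -a * a / pivot                 # Schur-complement update
        dh[i] = -a * (h[i - 1] + dh[i - 1]) / pivot
    # backward sweep: back-substitution for the means
    mu = np.zeros(n)
    mu[n - 1] = (h[n - 1] + dh[n - 1]) / (J[n - 1, n - 1] + dJ[n - 1])
    for i in range(n - 2, -1, -1):
        mu[i] = (h[i] + dh[i] - J[i, i + 1] * mu[i + 1]) / (J[i, i] + dJ[i])
    return mu
```

Each message carries only the scalar pair (dJ, dh), which is what makes the distributed, sensor-network implementation mentioned in the abstract natural.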
Overview of methods for image reconstruction from projections in emission computed tomography
 Proceedings of the IEEE
, 2003
Abstract

Cited by 17 (1 self)
Emission computed tomography (ECT) is a technology for medical imaging whose importance is increasing rapidly. There is a growing appreciation for the value of the functional (as opposed to anatomical) information that is provided by ECT, and there are significant advancements taking place, both in the instrumentation for data collection and in the computer methods for generating images from the measured data. These computer methods are designed to solve the inverse problem known as “image reconstruction from projections.” This paper uses the various models of the data collection process as the framework for presenting an overview of the wide variety of methods that have been developed for image reconstruction in the major subfields of ECT, which are positron emission tomography (PET) and single-photon emission computed tomography (SPECT). The overall sequence of the major sections in the paper, and the presentation within each major section, both proceed from the more realistic and general models to those that are idealized and application specific. For most of the topics, the description proceeds from the three-dimensional case to the two-dimensional case. The paper presents a broad overview of algorithms for PET and SPECT, giving references to the literature where these algorithms and their applications are described in more detail.
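Among the iterative methods such an overview covers, the classic maximum-likelihood EM (MLEM) update for Poisson emission data is perhaps the best known. A minimal sketch; the system matrix A and the counts y below are illustrative toy inputs, not clinical data:

```python
import numpy as np

def mlem(A, y, iters=100):
    """Maximum-likelihood EM (MLEM) reconstruction sketch for emission CT.

    A is the system matrix (A[i, j] = probability that a photon emitted
    in voxel j is detected in bin i) and y the measured counts.  The
    classic multiplicative EM update for the Poisson likelihood is
        x <- x / (A^T 1) * A^T (y / (A x)).
    """
    x = np.ones(A.shape[1])                  # flat initial image
    sens = A.sum(axis=0)                     # sensitivity image A^T 1
    for _ in range(iters):
        proj = A @ x                         # forward projection
        x = x / sens * (A.T @ (y / np.maximum(proj, 1e-12)))
    return x
```

The update is multiplicative, so a positive initial image stays positive, one of the reasons MLEM displaced filtered back-projection for low-count emission data.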
Hierarchical stochastic image grammars for classification and segmentation
 IEEE Transactions on Image Processing
, 2006
Abstract

Cited by 16 (3 self)
We develop a new class of hierarchical stochastic image models called spatial random trees (SRTs), which admit polynomial-complexity exact inference algorithms. Our framework of multitree dictionaries is the starting point for this construction. SRTs are stochastic hidden tree models whose leaves are associated with image data. The states at the tree nodes are random variables, and, in addition, the structure of the tree is random and is generated by a probabilistic grammar. We describe an efficient recursive algorithm for obtaining the maximum a posteriori estimate of both the tree structure and the tree states given an image. We also develop an efficient procedure for performing one iteration of the expectation-maximization algorithm and use it to estimate the model parameters from a set of training images. We address other inference problems arising in applications, such as maximization of posterior marginals and hypothesis testing. Our models and algorithms are illustrated through several image classification and segmentation experiments, ranging from the segmentation of synthetic images to the classification of natural photographs and the segmentation of scanned documents. In each case, we show that our method substantially improves accuracy over a variety of existing methods.
Index Terms: dictionary, estimation, grammar, hierarchical model, image classification, probabilistic context-free grammar, segmentation, statistical image model, stochastic context-free grammar, tree model.