Inducing Features of Random Fields
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 1997
Cited by 553 (14 self)
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word classifica...
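The weight-training step described in this abstract is typically carried out with generalized iterative scaling (GIS). A minimal sketch on an invented four-state toy problem; the features, empirical distribution, and slack-feature construction below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Toy maximum-entropy model over four states (2-bit strings).
states = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
C = 2.0
# Raw binary features plus a slack feature so every state has feature sum C,
# the constant-sum condition GIS requires.
F = np.column_stack([states[:, 0], states[:, 1], C - states.sum(axis=1)]).astype(float)

emp = np.array([0.1, 0.2, 0.3, 0.4])   # empirical distribution (invented)
target = emp @ F                        # empirical feature expectations

lam = np.zeros(3)
for _ in range(2000):
    p = np.exp(F @ lam)
    p /= p.sum()                        # model distribution p_lam(x)
    model = p @ F                       # model feature expectations
    lam += np.log(target / model) / C   # GIS update

p = np.exp(F @ lam)
p /= p.sum()
print(np.round(p @ F, 4))               # converges to the empirical expectations
```

At convergence the model matches the empirical feature expectations exactly, even though (as here) it need not match the full empirical distribution.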
Approximate Solutions to Markov Decision Processes
, 1999
Cited by 66 (9 self)
One of the basic problems of machine learning is deciding how to act in an uncertain world. For example, if I want my robot to bring me a cup of coffee, it must be able to compute the correct sequence of electrical impulses to send to its motors to navigate from the coffee pot to my office. In fact, since the results of its actions are not completely predictable, it is not enough just to compute the correct sequence; instead the robot must sense and correct for deviations from its intended path. In order for any machine learner to act reasonably in an uncertain environment, it must solve problems like the above one quickly and reliably. Unfortunately, the world is often so complicated that it is difficult or impossible to find the optimal sequence of actions to achieve a given goal. So, in order to scale our learners up to real-world problems, we usually must settle for approximate solutions. One representation for a learner's environment and goals is a Markov decision process or MDP. ...
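For an MDP small enough to represent exactly, the baseline solution method is value iteration. The sketch below uses a made-up 2-state, 2-action MDP; the transition tensor `P`, rewards `R`, and discount `gamma` are all illustrative assumptions:

```python
import numpy as np

# P[a, s, s']: probability of moving s -> s' under action a (invented numbers)
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.0, 1.0]],   # action 1
])
R = np.array([[1.0, 0.0],       # R[a, s]: expected immediate reward
              [2.0, -1.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * (P @ V)      # Q[a, s] = R[a,s] + gamma * sum_s' P[a,s,s'] V[s']
    V_new = Q.max(axis=0)        # greedy Bellman backup over actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

policy = Q.argmax(axis=0)        # greedy policy w.r.t. the converged values
print(V, policy)
```

Because the backup is a gamma-contraction, the loop converges to the unique fixed point of the Bellman optimality equation; the approximate methods the thesis surveys replace the exact table `V` when the state space is too large for this.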
Gibbs sampling, exponential families and orthogonal polynomials
 Statistical Science
, 2008
Cited by 19 (6 self)
Abstract. We give families of examples where sharp rates of convergence to stationarity of the widely used Gibbs sampler are available. The examples involve standard exponential families and their conjugate priors. In each case, the transition operator is explicitly diagonalizable with classical orthogonal polynomials as eigenfunctions. Key words and phrases: Gibbs sampler, running time analyses, exponential families, conjugate priors, location families, orthogonal polynomials, singular value decomposition. 1.
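A concrete instance of the exponential-family/conjugate-prior setting analyzed here is the two-component Gibbs sampler for a binomial likelihood with a beta prior, where both full conditionals are standard and each Gibbs step is exact. The hyperparameters `n`, `a`, `b` below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, a, b = 10, 2.0, 3.0          # x | p ~ Binomial(n, p),  p ~ Beta(a, b)

p, xs = 0.5, []
for _ in range(20000):
    x = rng.binomial(n, p)               # draw x | p
    p = rng.beta(a + x, b + n - x)       # draw p | x  (conjugate update)
    xs.append(x)

xs = np.array(xs[1000:])                 # drop burn-in
# The stationary marginal of x is beta-binomial with mean n*a/(a+b) = 4.
print(xs.mean())
```

For exactly this kind of chain the paper's point is that the transition operator can be diagonalized with classical orthogonal polynomials, giving sharp (not just empirical) convergence rates.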
An Iterative Monte Carlo Method for Nonconjugate Bayesian Analysis
 Statistics and Computing
, 1991
Cited by 17 (0 self)
The Gibbs sampler has been proposed as a general method for Bayesian calculation in Gelfand and Smith (1990). However, the predominance of experience to date resides in applications assuming conjugacy where implementation is reasonably straightforward. This paper describes a tailored approximate rejection method approach for implementation of the Gibbs sampler when nonconjugate structure is present. Several challenging applications are presented for illustration.
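The idea of a rejection step inside a Gibbs sweep, for a full conditional with no standard form, can be sketched as follows. The target and envelope below are invented for illustration and are not the tailored envelopes the paper constructs:

```python
import numpy as np

# Unnormalized target:  f(x) ∝ exp(-x^2/2 - sin(x)^2) on the real line.
# Envelope: the standard normal g(x) ∝ exp(-x^2/2) with constant M = 1,
# valid because f(x)/g(x) = exp(-sin(x)^2) <= 1 everywhere.
rng = np.random.default_rng(1)

def sample_full_conditional():
    """One exact draw from the nonconjugate 'full conditional' by rejection."""
    while True:
        x = rng.normal()                          # propose from the envelope
        if rng.random() < np.exp(-np.sin(x) ** 2):  # accept w.p. f(x)/(M g(x))
            return x

draws = np.array([sample_full_conditional() for _ in range(5000)])
print(draws.mean())                               # target is symmetric about 0
```

Within a Gibbs sampler, one such rejection draw replaces the closed-form conditional update at each coordinate where conjugacy fails; the efficiency hinges on how tightly the envelope hugs the target.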
Flexible covariance estimation in graphical Gaussian models
 ANNALS OF STATISTICS
, 2008
Cited by 16 (3 self)
In this paper, we propose a class of Bayes estimators for the covariance matrix of graphical Gaussian models Markov with respect to a decomposable graph G. Working with the WPG family defined by Letac and Massam [Ann. Statist. 35 (2007) 1278–1323], we derive closed-form expressions for Bayes estimators under the entropy and squared-error losses. The WPG family includes the classical inverse of the hyper inverse Wishart but has many more shape parameters, thus allowing for flexibility in differentially shrinking various parts of the covariance matrix. Moreover, using this family avoids recourse to MCMC, often infeasible in high-dimensional problems. We illustrate the performance of our estimators through a collection of numerical examples where we explore frequentist risk properties and the efficacy of graphs in the estimation of high-dimensional covariance structures.
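The simplest conjugate analogue of such closed-form Bayes estimation uses a full (unstructured) inverse-Wishart prior; it ignores the graph and the extra WPG shape parameters entirely, so it is only a sketch of the general mechanism (closed-form posterior mean, no MCMC), with all hyperparameters invented:

```python
import numpy as np

rng = np.random.default_rng(3)

p, n = 5, 40
Sigma_true = np.eye(p)
Y = rng.multivariate_normal(np.zeros(p), Sigma_true, size=n)
S = Y.T @ Y                              # scatter matrix of the data

# Prior: Sigma ~ InverseWishart(nu, Psi)  (hyperparameters assumed for the sketch)
nu, Psi = p + 4.0, np.eye(p)
# Conjugacy gives Sigma | Y ~ InverseWishart(nu + n, Psi + S), whose mean
# (the Bayes estimator under squared-error loss) is available in closed form:
Sigma_hat = (Psi + S) / (nu + n - p - 1)

print(np.round(Sigma_hat.diagonal(), 2))
```

The WPG family plays the same role but attaches separate shape parameters to different cliques of the graph, which is what allows differential shrinkage across blocks of the covariance matrix.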
Wishart distributions for decomposable graphs
 Ann. Statist
Cited by 14 (1 self)
When considering a graphical Gaussian model NG Markov with respect to a decomposable graph G, the parameter space of interest for the precision parameter is the cone PG of positive definite matrices with fixed zeros corresponding to the missing edges of G. The parameter space for the scale parameter of NG is the cone QG, dual to PG, of incomplete matrices with submatrices corresponding to the cliques of G being positive definite. In this paper we construct on the cones QG and PG two families of Wishart distributions, namely the Type I and Type II Wisharts. They can be viewed as generalizations of the hyper Wishart and the inverse of the hyper inverse Wishart as defined by Dawid and Lauritzen [Ann. Statist. 21 (1993) 1272–1317]. We show that the Type I and II Wisharts have properties similar to those of the hyper and hyper inverse Wishart. Indeed, the inverse of the Type II Wishart forms a conjugate family of priors for the covariance parameter of the graphical Gaussian model and is ...
Optimal Inspection Decisions For The Block Mats of the Eastern-Scheldt Barrier
, 2000
Cited by 12 (3 self)
To prevent the southwest of The Netherlands from flooding, the Eastern-Scheldt storm-surge barrier was constructed; it has to be inspected and, when necessary, repaired. Therefore, one is interested in obtaining optimal rates of inspection for which the expected maintenance costs are minimal and the barrier is safe. For optimisation purposes, a maintenance model has been developed for part of the seabed protection of the Eastern-Scheldt barrier, namely the block mats. This model enables optimal inspection decisions to be determined on the basis of the uncertainties in the process of occurrence of scour holes and, given that a scour hole has occurred, of the process of current-induced scour erosion. The stochastic processes of scour-hole initiation and scour-hole development have been regarded as a Poisson process and a gamma process, respectively. Engineering knowledge has been used to estimate their parameters.
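The two-stage degradation model (Poisson initiation of scour holes, gamma-process growth of each hole) lends itself to simple Monte-Carlo evaluation. In the sketch below every rate and threshold is invented for illustration and bears no relation to the barrier's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

horizon = 50.0        # years (assumed)
init_rate = 0.3       # Poisson rate of scour-hole initiation per year (assumed)
shape_c = 1.2         # gamma-process shape per unit time (assumed)
scale = 0.5           # gamma-process scale, metres of erosion (assumed)
failure_depth = 10.0  # critical scour depth (assumed)

def max_depth_at_horizon():
    """Deepest scour hole at the end of the horizon, one realization."""
    n_holes = rng.poisson(init_rate * horizon)          # Poisson initiation
    births = rng.uniform(0.0, horizon, size=n_holes)    # initiation times
    ages = horizon - births
    # Stationary gamma process: depth after age t ~ Gamma(shape_c * t, scale)
    depths = rng.gamma(shape_c * ages, scale)
    return depths.max() if n_holes else 0.0

sims = np.array([max_depth_at_horizon() for _ in range(2000)])
p_fail = (sims > failure_depth).mean()
print("P(max depth > failure threshold):", p_fail)
```

The optimisation in the paper then trades the cost of inspecting at a given rate against such exceedance probabilities.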
Score and Information for Recursive Exponential Models with Incomplete Data.
Cited by 10 (2 self)
Recursive graphical models usually underlie the statistical modelling concerning probabilistic expert systems based on Bayesian networks. This paper defines a version of these models, denoted as recursive exponential models, which have evolved from the desire to impose sophisticated domain knowledge onto local fragments of a model. Besides the structural knowledge, as specified by a given model, the statistical modelling may also include expert opinion about the values of parameters in the model. It is shown how to translate imprecise expert knowledge into approximately conjugate prior distributions. Based on possibly incomplete data, the score and the observed information are derived for these models. This accounts for both the traditional score and observed information, derived as derivatives of the log-likelihood, and the posterior score and observed information, derived as derivatives of the log-posterior distribution. Throughout the paper the specialization int...
Identifiability, Improper Priors and Gibbs Sampling for Generalized Linear Models
 J. Statist. Planning and Inference
, 1998
Cited by 9 (0 self)
Markov chain Monte Carlo algorithms are widely used in the fitting of generalized linear models (GLMs). Such model fitting is somewhat of an art form, requiring suitable trickery and tuning to obtain results one can have confidence in. A wide range of practical issues arise. The focus here is on parameter identifiability and posterior propriety. In particular, we clarify that non-identifiability arises for usual GLMs and discuss its implications for simulation-based model fitting. Since some part of the prior specification is often vague, we consider whether the resulting posterior is proper, providing rather general and easy-to-check results for GLMs. We also show that if a Gibbs sampler is run with an improper posterior, it may be possible to use the output to obtain meaningful inference for certain model unknowns. Key words and phrases: Convergence; Embedded posterior; Estimability; Integrability; Non-full-rank models.
1 Introduction
Currently, simulation-based methods offer the be...
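The kind of non-identifiability at issue is already visible in the linear predictor: an intercept together with a full set of dummy variables for a factor makes the design matrix rank-deficient, so distinct coefficient vectors produce identical fits. A small synthetic example:

```python
import numpy as np

# Intercept + full dummy coding of a 3-level factor (2 observations per level).
# The dummy columns sum to the intercept column, so the matrix has rank 3, not 4.
X = np.array([
    [1, 1, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 0, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
], dtype=float)

print(np.linalg.matrix_rank(X))   # 3: beta is not identifiable

# The linear predictor is invariant under beta -> beta + c*(1, -1, -1, -1):
beta = np.array([0.5, 1.0, -1.0, 2.0])
shift = 0.7 * np.array([1.0, -1.0, -1.0, -1.0])
print(np.allclose(X @ beta, X @ (beta + shift)))   # True
```

With a flat prior on `beta` this flat direction makes the posterior improper, which is exactly the propriety question the paper addresses; yet identifiable functions such as `X @ beta` can still be estimated from Gibbs output, matching the paper's final point.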