Results 1–8 of 8
Inducing Features of Random Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997
Abstract

Cited by 554 (14 self)
We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques introduced in this paper differ from those common to much of the computer vision literature in that the underlying random fields are non-Markovian and have a large number of parameters that must be estimated. Relations to other learning approaches, including decision trees, are given. As a demonstration of the method, we describe its application to the problem of automatic word classifica...
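As a toy illustration of the iterative scaling step mentioned above, here is a minimal generalized iterative scaling (GIS) loop for a log-linear model on a small finite space. The states, features, and constant C below are illustrative assumptions; this sketches only the weight-estimation step, not the paper's greedy feature-induction algorithm.

```python
import math

def gis(samples, features, states, iters=200):
    """Generalized iterative scaling for a log-linear model p(x) ∝ exp(Σ_i w_i f_i(x)).

    GIS strictly assumes Σ_i f_i(x) is constant across states; here we use the
    weaker bound C = max_x Σ_i f_i(x), a common simplification for a sketch."""
    C = max(sum(f(x) for f in features) for x in states)
    w = [0.0] * len(features)
    # empirical feature expectations from the training samples
    emp = [sum(f(x) for x in samples) / len(samples) for f in features]
    for _ in range(iters):
        # model distribution under the current weights
        scores = [math.exp(sum(wi * f(x) for wi, f in zip(w, features))) for x in states]
        Z = sum(scores)
        p = [s / Z for s in scores]
        mod = [sum(p[j] * f(x) for j, x in enumerate(states)) for f in features]
        # multiplicative update: move each model expectation toward the empirical one
        for i in range(len(w)):
            if emp[i] > 0 and mod[i] > 0:
                w[i] += math.log(emp[i] / mod[i]) / C
    return w
```

After convergence, the fitted model's feature expectations match the empirical ones, which is the fixed point the abstract's Kullback-Leibler minimization targets.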
Stochastic Approximation in Monte Carlo Computation
2006
Abstract

Cited by 23 (13 self)
The Wang-Landau algorithm is an adaptive Markov chain Monte Carlo algorithm to calculate the spectral density for a physical system. A remarkable feature of the algorithm is that it is not trapped by local energy minima, which is very important for systems with rugged energy landscapes. This feature has led to many successful applications of the algorithm in statistical physics and biophysics. However, no rigorous theory exists to support its convergence, and the estimates produced by the algorithm can only reach a limited statistical accuracy. In this paper, we propose the stochastic approximation Monte Carlo (SAMC) algorithm, which overcomes the shortcomings of the Wang-Landau algorithm. We establish a theorem concerning its convergence. The estimates produced by SAMC can be improved continuously as the simulation goes on. SAMC also extends applications of the Wang-Landau algorithm to continuum systems. The potential uses of SAMC in statistics are discussed through two classes of applications, importance sampling and model selection. The results show that SAMC can work as a general importance
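The SAMC update described above can be sketched on a finite toy landscape: sample from a reweighted working density, then adjust the log-weights θ of the energy bins with a decaying Robbins-Monro gain. The gain sequence t0/max(t0, t) and uniform desired frequencies are standard SAMC choices, but the concrete setup below is an assumption for illustration.

```python
import math, random

def samc(energy, states, n_regions, n_iter=50000, t0=1000, seed=0):
    """Stochastic approximation Monte Carlo on a finite state space (toy sketch).

    The energy range is split into n_regions bins; theta[i] converges (up to an
    additive constant) to the log Boltzmann mass of bin i, so low-probability
    bins are visited about as often as high-probability ones."""
    rng = random.Random(seed)
    e_min = min(energy(x) for x in states)
    e_max = max(energy(x) for x in states)
    width = (e_max - e_min) / n_regions + 1e-12
    bin_of = lambda x: min(int((energy(x) - e_min) / width), n_regions - 1)
    theta = [0.0] * n_regions
    pi = 1.0 / n_regions            # desired (uniform) sampling frequencies
    x = rng.choice(states)
    for t in range(1, n_iter + 1):
        gamma = t0 / max(t0, t)     # decaying gain, Robbins-Monro style
        y = rng.choice(states)      # symmetric independent proposal
        # Metropolis ratio for the working density exp(-U(x) - theta[bin(x)])
        log_r = (-energy(y) - theta[bin_of(y)]) - (-energy(x) - theta[bin_of(x)])
        if math.log(rng.random() + 1e-300) < log_r:
            x = y
        # push theta of the visited bin up, all bins down by the target pi
        for i in range(n_regions):
            theta[i] += gamma * ((1.0 if i == bin_of(x) else 0.0) - pi)
    return theta
```

Differences theta[i] - theta[j] estimate log-ratios of bin masses, which is how the algorithm escapes local energy minima: once a bin is over-visited, its weight rises and the chain is pushed elsewhere.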
Multisensor Image Segmentation Using Dempster-Shafer Fusion in Markov Fields Context
IEEE Trans. Geosci. Remote Sens., 2001
Abstract

Cited by 19 (5 self)
This paper deals with the statistical segmentation of multisensor images. In a Bayesian context, the interest of using hidden Markov random fields, which allow one to take contextual information into account, has been well known for about 20 years. In other situations, the Bayesian framework is insufficient and one must make use of the theory of evidence. The aim of our work is to propose evidential models that can take contextual information into account via Markovian fields. We define a general evidential Markovian model and show that it is usable in practice. The simulation results presented show the interest of evidential Markovian field model-based segmentation algorithms. Furthermore, an original variant of generalized mixture estimation, making unsupervised evidential fusion possible in a Markovian context, is described. It is applied to the unsupervised segmentation of real radar and SPOT images, showing the relevance of the proposed models and corresponding segmentation methods in real situations.
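The evidential machinery this abstract builds on rests on Dempster's rule of combination. A minimal version over frozenset focal elements (not the paper's Markovian-context model) looks like:

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets of hypotheses; mass assigned to conflicting
    (disjoint) pairs is discarded and the remainder renormalized."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}
```

For example, fusing one sensor that mostly supports hypothesis A (mass 0.6 on {A}, 0.4 on {A,B}) with another that mostly supports B (0.7 on {B}, 0.3 on {A,B}) redistributes the conflicting mass 0.42 and yields a combined belief dominated by {B}.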
Learning in Markov Random Fields Using Tempered Transitions
 In Advances in Neural Information Processing Systems
Abstract

Cited by 18 (2 self)
Markov random fields (MRFs), or undirected graphical models, provide a powerful framework for modeling complex dependencies among random variables. Maximum likelihood learning in MRFs is hard due to the presence of the global normalizing constant. In this paper we consider a class of stochastic approximation algorithms of the Robbins-Monro type that use Markov chain Monte Carlo to do approximate maximum likelihood learning. We show that using MCMC operators based on tempered transitions enables the stochastic approximation algorithm to better explore highly multimodal distributions, which considerably improves parameter estimates in large, densely connected MRFs. Our results on the MNIST and NORB datasets demonstrate that we can successfully learn good generative models of high-dimensional, richly structured data that perform well on digit and object recognition tasks.
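A minimal sketch of the Robbins-Monro-type stochastic approximation the abstract refers to: a persistent Metropolis chain supplies the model expectation in the noisy gradient of the log-likelihood. Plain Metropolis transitions on a toy finite space are used here; the paper's contribution is to replace them with tempered transitions for multimodal models.

```python
import math, random

def sa_mle(samples, features, states, n_iter=30000, seed=0):
    """Robbins-Monro stochastic approximation for maximum likelihood in a
    log-linear model p(x) ∝ exp(Σ_i w_i f_i(x)), using a persistent MCMC
    chain to estimate the model expectation (toy sketch, finite space)."""
    rng = random.Random(seed)
    w = [0.0] * len(features)
    emp = [sum(f(x) for x in samples) / len(samples) for f in features]
    x = rng.choice(states)          # persistent chain state, kept across updates
    for t in range(1, n_iter + 1):
        # one Metropolis step; leaves the *current* model distribution invariant
        y = rng.choice(states)
        log_r = sum(wi * (f(y) - f(x)) for wi, f in zip(w, features))
        if math.log(rng.random() + 1e-300) < log_r:
            x = y
        eta = 1.0 / (50 + t)        # step sizes with Σ eta = ∞, Σ eta² < ∞
        for i, f in enumerate(features):
            # noisy gradient: empirical expectation minus a one-sample estimate
            w[i] += eta * (emp[i] - f(x))
    return w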
Stochastic approximation algorithms for partition function estimation of Gibbs random fields
IEEE Trans. Inform. Theory, 1997
Abstract

Cited by 16 (0 self)
We present an analysis of recently proposed Monte Carlo algorithms for estimating the partition function of a Gibbs random field. We show that this problem reduces to estimating one or more expectations of suitable functionals of the Gibbs states with respect to properly chosen Gibbs distributions. As expected, the resulting estimators are consistent. Certain generalizations are also provided. We study computational complexity with respect to grid size and show that Monte Carlo partition function estimation algorithms can be classified into two categories: E-Type algorithms, which are of exponential complexity, and P-Type algorithms, which are of polynomial complexity, Turing reducible to the problem of sampling from the Gibbs distribution. E-Type algorithms require estimating a single expectation, whereas P-Type algorithms require estimating a number of expectations with respect to Gibbs distributions chosen to be sufficiently “close” to each other. In the latter case, the required number of expectations is of polynomial order with respect to grid size. We compare computational complexity by using both theoretical results and simulation experiments. We determine the most efficient E-Type and P-Type algorithms and conclude that P-Type algorithms are more appropriate for partition function estimation. We finally suggest a practical and efficient P-Type algorithm for this task. Index Terms—Computational complexity, Gibbs random fields, importance sampling, Monte Carlo simulations, partition function estimation, stochastic approximation.
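A toy version of the P-Type idea: chain the ratios Z(β_{k+1})/Z(β_k) across nearby Gibbs distributions, each ratio estimated as an expectation under the current distribution, E_{β_k}[exp(-(β_{k+1}-β_k)U)]. Exact sampling is used here because the state space is finite; the energy function, temperature ladder, and sample size are illustrative assumptions, not the paper's recommended algorithm.

```python
import math, random

def ratio_chain_logZ(energy, states, betas, n_samples=5000, seed=0):
    """Estimate log Z(betas[-1]) by chaining importance-sampling ratios
    between 'close' Gibbs distributions (toy, exact sampling on a finite space)."""
    rng = random.Random(seed)
    log_z = math.log(len(states))           # Z at beta = 0 counts the states
    for b0, b1 in zip(betas, betas[1:]):
        # sample from the Gibbs distribution at inverse temperature b0
        weights = [math.exp(-b0 * energy(x)) for x in states]
        draws = rng.choices(states, weights=weights, k=n_samples)
        # E_{b0}[exp(-(b1 - b0) U)] = Z(b1) / Z(b0)
        ratio = sum(math.exp(-(b1 - b0) * energy(x)) for x in draws) / n_samples
        log_z += math.log(ratio)
    return log_z
```

The variance of each ratio stays small precisely because consecutive distributions are chosen close, which is the mechanism behind the polynomial complexity of P-Type algorithms.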
On the Convergence and the Applications of the Generalized Simulated Annealing
SIAM J. Control Optim.
Abstract

Cited by 5 (0 self)
The convergence of the generalized simulated annealing with time-inhomogeneous communication cost functions is discussed. This study is based on the use of log-Sobolev inequalities and semigroup techniques in the spirit of a previous article by one of the authors. We also propose a natural test set approach to study the global minima of the virtual energy. The second part of the paper is devoted to the application of these results. First we propose two general Markovian models of genetic algorithms and give a simple proof of the convergence toward the global minima of the fitness function. Finally we introduce a stochastic algorithm which converges to the set of global minima of a given mean cost optimization problem. Introduction: Let E be a finite state space and q an irreducible Markov kernel. The main purpose of this paper is to study the limiting behavior of a large class of time-inhomogeneous Markov processes controlled by two parameters (γ, β) ∈ R^2_+ and associated to a f...
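For orientation, a plain (non-generalized) simulated annealing loop with a logarithmic cooling schedule T_t = c / log(1 + t); the paper's setting, with time-inhomogeneous communication costs and virtual energy, is considerably more general than this sketch.

```python
import math, random

def simulated_annealing(cost, neighbors, x0, n_iter=20000, c=1.0, seed=0):
    """Minimize `cost` over a discrete space by simulated annealing.

    `neighbors(x)` returns candidate moves; uphill moves are accepted with
    probability exp(-delta / T_t), where T_t decays logarithmically."""
    rng = random.Random(seed)
    x, best = x0, x0
    for t in range(1, n_iter + 1):
        temp = c / math.log(1.0 + t)            # logarithmic cooling schedule
        y = rng.choice(neighbors(x))
        delta = cost(y) - cost(x)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            x = y
        if cost(x) < cost(best):                # track the best state visited
            best = x
    return best
```

Logarithmic schedules are the classical regime in which convergence to the set of global minima can be proven; faster schedules trade that guarantee for speed.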
Regularization, Maximum Entropy and Probabilistic Methods in Mass Spectrometry Data Processing Problems
2002
Abstract

Cited by 4 (2 self)
This paper is a synthetic overview of regularization, maximum entropy and probabilistic methods for some inverse problems, such as the deconvolution and Fourier synthesis problems which arise in mass spectrometry. First we present a unified description of such problems and discuss the reasons why simple naïve methods cannot give satisfactory results. Then we briefly present the main classical deterministic regularization methods, maximum entropy-based methods and the probabilistic Bayesian estimation framework for such problems. The main idea is to show how all these different frameworks converge to the optimization of a compound criterion with a data adequation part and an a priori part. We will, however, see that the Bayesian inference framework naturally gives more tools for inferring the uncertainty of the computed solutions, for the estimation of the hyperparameters, and for handling the myopic or blind inversion problems. Finally, based on Bayesian inference, we present a few advanced methods particularly designed for some mass spectrometry data processing problems. Some simulation results illustrate mainly the effect of the prior laws, or equivalently the regularization functionals, on the results one can obtain in typical deconvolution or Fourier synthesis problems arising in different mass spectrometry techniques. (Int J Mass Spectrom 215 (2002) 175–193) © 2002 Elsevier Science B.V. All rights reserved.
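The compound criterion described above, a data adequation term plus an a priori term, can be sketched for a toy 1-D deconvolution with a quadratic (Tikhonov) prior minimized by plain gradient descent. The kernel, λ, and step size below are illustrative assumptions, not values from the paper.

```python
def tikhonov_deconvolve(y, h, lam=0.001, n_iter=2000, step=0.1):
    """Minimize J(x) = ||y - h*x||^2 + lam*||x||^2 by gradient descent:
    a data adequation term plus a quadratic a priori term (toy 1-D
    deconvolution; h*x is zero-padded, same-size discrete convolution)."""
    n = len(y)

    def conv(x):                    # (H x)[i] = Σ_k h[k] x[i-k]
        return [sum(h[k] * x[i - k] for k in range(len(h)) if 0 <= i - k < n)
                for i in range(n)]

    def corr(r):                    # adjoint: (H^T r)[j] = Σ_k h[k] r[j+k]
        return [sum(h[k] * r[j + k] for k in range(len(h)) if 0 <= j + k < n)
                for j in range(n)]

    x = [0.0] * n
    for _ in range(n_iter):
        r = [hx - yi for hx, yi in zip(conv(x), y)]     # residual H x - y
        g = corr(r)                                     # half-gradient of the data term
        x = [xi - step * 2.0 * (gi + lam * xi) for xi, gi in zip(x, g)]
    return x
```

Replacing the quadratic prior ||x||² with an entropy or sparsity functional changes only the a priori part of the criterion, which is exactly the unifying view the abstract describes.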
Inconsistent parameter estimation in Markov random fields: Benefits in the computation-limited setting
2006
Abstract
Consider the problem of joint parameter estimation and prediction in a Markov random field: i.e., the model parameters are estimated on the basis of an initial set of data, and then the fitted model is used to perform prediction (e.g., smoothing, denoising, interpolation) on a new noisy observation. Working under the restriction of limited computation, we analyze a joint method in which the same convex variational relaxation is used to construct an M-estimator for fitting parameters, and to perform approximate marginalization for the prediction step. The key result of this paper is that in the computation-limited setting, using an inconsistent parameter estimator (i.e., an estimator that returns the “wrong” model even in the infinite data limit) can be provably beneficial, since the resulting errors can partially compensate for errors made by using an approximate prediction technique. En route to this result, we analyze the asymptotic properties of M-estimators based on convex variational relaxations, and establish a Lipschitz stability property that holds for a broad class of variational methods. We show that joint estimation/prediction based on the reweighted sum-product algorithm substantially outperforms a commonly used heuristic based on ordinary sum-product.