## Learning in Gibbsian Fields: How Accurate and How Fast Can It Be? (2002)

Citations: 14 (2 self)

### BibTeX

@MISC{Zhu02learningin,
  author = {Song Chun Zhu and Xiuwen Liu},
  title = {Learning in Gibbsian Fields: How Accurate and How Fast Can It Be?},
  year = {2002}
}


### Abstract

Gibbsian fields, or Markov random fields, are widely used in Bayesian image analysis, but learning Gibbs models is computationally expensive. The computational complexity is especially pronounced in the recent minimax entropy (FRAME) models, which use large neighborhoods and hundreds of parameters [22]. In this paper, we present a common framework for learning Gibbs models. We identify two key factors that determine the accuracy and speed of learning Gibbs models: the efficiency of the likelihood functions and the variance in approximating partition functions using Monte Carlo integration. We propose three new algorithms. In particular, we are interested in a maximum satellite likelihood estimator, which makes use of a set of pre-computed Gibbs models called "satellites" to approximate likelihood functions. This algorithm can approximately estimate the minimax entropy model for textures in seconds on an HP workstation. The performances of the various learning algorithms are compared in our experiments.
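The satellite estimator named in the abstract rests on importance-sampling estimates of partition-function ratios between a model and a pre-computed reference model. A minimal sketch of that idea, assuming a toy one-dimensional spin chain (all model details, parameter values, and function names below are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "field": +/-1 spins with one nearest-neighbor coupling theta,
#   p(x; theta) ∝ exp(theta * s(x)),  where s(x) = sum_i x_i * x_{i+1}.
def neighbor_stat(x):
    return int(np.sum(x[:-1] * x[1:]))

def gibbs_sample(theta, n=24, sweeps=100):
    """Single-site Gibbs sampler for the toy chain model."""
    x = rng.choice([-1, 1], size=n)
    for _ in range(sweeps):
        for i in range(n):
            left = x[i - 1] if i > 0 else 0
            right = x[i + 1] if i < n - 1 else 0
            # p(x_i = +1 | neighbors) = 1 / (1 + exp(-2*theta*(left+right)))
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * theta * (left + right)))
            x[i] = 1 if rng.random() < p_plus else -1
    return x

# Satellite idea in miniature: samples drawn once at a reference parameter
# theta0 give an importance-sampling estimate of the log partition-function
# ratio at any nearby theta:
#   Z(theta)/Z(theta0) = E_{x ~ p(.; theta0)}[ exp((theta - theta0) * s(x)) ]
theta0, theta = 0.3, 0.5
stats = np.array([neighbor_stat(gibbs_sample(theta0)) for _ in range(100)])
log_ratio = np.log(np.mean(np.exp((theta - theta0) * stats)))
print(f"estimated log Z({theta}) - log Z({theta0}) = {log_ratio:.3f}")
```

The variance of this estimator grows rapidly as theta moves away from theta0, which is precisely the second factor the abstract identifies and the reason a satellite scheme keeps a set of reference models rather than a single one.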

### Citations

1282 |
Spatial interaction and the statistical analysis of lattice systems
- Besag
- 1974
Citation Context ...2. The diameter of foreground lattices and, thus, efficiency of the likelihood. 3. Computational complexity. The 10 algorithms are: 1. Stochastic gradient MLE [22], 2. Maximum pseudolikelihood (MPLE) [3], [4], 3. MCMCMLE [12], [13], [9], [16], 4. Maximum patch likelihood, 5. Maximum partial likelihood, 6. Maximum satellite likelihood, 7. Minuteman minimax [6], 8. Variational method [2], [1], 9. Learn... |

630 | A stochastic approximation method - Robbins, Monro - 1951 |

324 | A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries - Jerrum, Sinclair, et al. - 2004 |

254 |
Modeling and segmentation of noisy and textured images using Gibbs random fields
- Derin, Elliot
- 1987
Citation Context ...ood estimator (MCMCMLE) [12], [13], [9]. The second and third methods approximate partition functions by importance sampling techniques, a topic studied extensively in the literature [15], [16], [17], [8]. There are also methods [10], [11] that estimate Gibbs parameters with precomputed derivatives of log-partition functions. These algorithms were used primarily for learning MRF models with pair cliqu... |

227 |
Constrained Monte Carlo maximum likelihood for dependent data
- Geyer, Thompson
- 1992
Citation Context ... first is a maximum pseudolikelihood estimator (MPLE) [4]. The second is a stochastic gradient algorithm [18], [21], [22]. The third is Markov chain Monte Carlo maximum-likelihood estimator (MCMCMLE) [12], [13], [9]. The second and third methods approximate partition functions by importance sampling techniques, a topic studied extensively in the literature [15], [16], [17], [8]. There are also methods ... |

204 | Minimax entropy principle and Its Applications to Texture Modeling
- Zhu, Wu, et al.
- 1997
Citation Context ...ut learning Gibbs models is computationally expensive. The computational complexity is pronounced by the recent minimax entropy (FRAME) models which use large neighborhoods and hundreds of parameters [22]. In this paper, we present a common framework for learning Gibbs models. We identify two key factors that determine the accuracy and speed of learning Gibbs models: The efficiency of likelihood funct... |

170 |
Statistical Methods for Tomographic Image Reconstruction, Bulletin of the International Statistical Institute
- Geman, McClure
- 1987
Citation Context ..., [9]. The second and third methods approximate partition functions by importance sampling techniques, a topic studied extensively in the literature [15], [16], [17], [8]. There are also methods [10], [11] that estimate Gibbs parameters with precomputed derivatives of log-partition functions. These algorithms were used primarily for learning MRF models with pair cliques, such as Ising models and Potts ... |

122 |
Markov Random Fields: Theory and Application
- Chellappa, Jain
- 1993
Citation Context ...learning accurate Gibbs models is computationally expensive. In the literature, many learning algorithms are focused on auto-models, such as auto-binomial models [7] and Gaussian Markov random fields [5] whose parameters can be computed analytically. For MRF models beyond the auto-model families, there are three algorithms. The first is a maximum pseudolikelihood estimator (MPLE) [4]. The second is a... |

111 |
Efficiency of pseudo-likelihood estimation for simple Gaussian fields
- BESAG
- 1977
Citation Context ...ov random fields [5] whose parameters can be computed analytically. For MRF models beyond the auto-model families, there are three algorithms. The first is a maximum pseudolikelihood estimator (MPLE) [4]. The second is a stochastic gradient algorithm [18], [21], [22]. The third is Markov chain Monte Carlo maximum-likelihood estimator (MCMCMLE) [12], [13], [9]. The second and third methods approximate... |
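The pseudolikelihood (MPLE) discussed in this entry's context replaces the joint likelihood with a product of local conditionals, each of which has only a local normalizer. A minimal sketch under the same toy spin-chain assumption (the model, data, and grid-search fit below are illustrative, not Besag's or the paper's implementation):

```python
import numpy as np

# Negative log-pseudolikelihood for a chain of +/-1 spins: each term is a
# conditional p(x_i | neighbors; theta) with closed-form local normalizer
# 2*cosh(theta*(left+right)), so no global partition function appears.
def neg_log_pseudolikelihood(theta, x):
    nll = 0.0
    n = len(x)
    for i in range(n):
        left = x[i - 1] if i > 0 else 0
        right = x[i + 1] if i < n - 1 else 0
        field = theta * (left + right)
        # log p(x_i | neighbors) = x_i * field - log(2 * cosh(field))
        nll -= x[i] * field - np.log(2.0 * np.cosh(field))
    return nll

# Fit by a coarse grid search (a sketch; gradient methods work equally well).
rng = np.random.default_rng(1)
x = np.where(rng.random(200) < 0.5, 1, -1)  # hypothetical observed spins
grid = np.linspace(-1.0, 1.0, 41)
theta_hat = grid[np.argmin([neg_log_pseudolikelihood(t, x) for t in grid])]
print(f"MPLE estimate: theta_hat = {theta_hat:.2f}")
```

Because the observed spins here are independent coin flips, the fitted coupling should land near zero; the point of the sketch is only that the objective is evaluated without ever touching the global partition function.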

107 |
Bayesian image analysis: an application to single photon emission tomography
- Geman, McClure
- 1985
Citation Context ..., [13], [9]. The second and third methods approximate partition functions by importance sampling techniques, a topic studied extensively in the literature [15], [16], [17], [8]. There are also methods [10], [11] that estimate Gibbs parameters with precomputed derivatives of log-partition functions. These algorithms were used primarily for learning MRF models with pair cliques, such as Ising models and ... |

69 | On the convergence of Monte Carlo maximum likelihood calculations
- Geyer
- 1994
Citation Context ... is a maximum pseudolikelihood estimator (MPLE) [4]. The second is a stochastic gradient algorithm [18], [21], [22]. The third is Markov chain Monte Carlo maximum-likelihood estimator (MCMCMLE) [12], [13], [9]. The second and third methods approximate partition functions by importance sampling techniques, a topic studied extensively in the literature [15], [16], [17], [8]. There are also methods [10], ... |

23 |
Consistency of maximum likelihood and pseudo-likelihood estimators for Gibbs Distributions
- Gidas
- 1988
Citation Context ...or satellite likelihood), (c) pseudolikelihood, and (d) partial likelihood. different patches. It is straightforward to prove that maximizing G(Θ) leads to a consistent estimator for all four choices [14]. The flexibility of the likelihood function distinguishes Gibbs learning from the problem of estimating partition functions [15], [16], [17]. The latter computes the "pressure" on a large lattice in orde... |

16 | Stochastic approximation algorithms for partition function estimation of Gibbs random fields
- Potamianos, Goutsias
- 1997
Citation Context ...ikelihood estimator (MCMCMLE) [12], [13], [9]. The second and third methods approximate partition functions by importance sampling techniques, a topic studied extensively in the literature [15], [16], [17], [8]. There are also methods [10], [11] that estimate Gibbs parameters with precomputed derivatives of log-partition functions. These algorithms were used primarily for learning MRF models with pair ... |

14 | Markov random texture models - Cross, Jain - 1983 |

9 | Exploring the Julesz ensemble by efficient Markov chain Monte Carlo - Zhu, Liu, et al. - 1999 |

7 |
Partition Function Estimation of Gibbs Random Field Images Using Monte Carlo Simulation
- Potamianos, Goutsias
- 1993
Citation Context ...imum-likelihood estimator (MCMCMLE) [12], [13], [9]. The second and third methods approximate partition functions by importance sampling techniques, a topic studied extensively in the literature [15], [16], [17], [8]. There are also methods [10], [11] that estimate Gibbs parameters with precomputed derivatives of log-partition functions. These algorithms were used primarily for learning MRF models with... |


5 |
A variational method for estimating the parameters of MRF from complete or incomplete data
- Almeida, Gidas
- 1993
Citation Context ...umber of parameters (e.g., 100-200). Recently, four algorithms were proposed to speed up minimax entropy learning with mixed success: 1. A minuteman minimax algorithm [6], 2. A variational method [2], [1], 3. A method of learning by diffusion [19], and 4. A generalized MPLE (private communication with Y.N. Wu). Besides the new algorithms, an ensemble equivalence theorem enables the separation of the m... |


3 | Maximum likelihood estimation of Markov random field parameters using Markov chain Monte Carlo algorithms - Descombes, Morris, et al. - 1997 |

2 |
Statistical Models of Image Texture, unpublished preprint
- Anderson, Langer
- 1996
Citation Context ...rge number of parameters (e.g., 100-200). Recently, four algorithms were proposed to speed up minimax entropy learning with mixed success: 1. A minuteman minimax algorithm [6], 2. A variational method [2], [1], 3. A method of learning by diffusion [19], and 4. A generalized MPLE (private communication with Y.N. Wu). Besides the new algorithms, an ensemble equivalence theorem enables the separation of ... |




1 |
Minutemax: A Fast Approximation for Minimax Learning
- Coughlan, Yuille
- 1998
Citation Context ...sizes of 33 × 33 pixels) and large number of parameters (e.g., 100-200). Recently, four algorithms were proposed to speed up minimax entropy learning with mixed success: 1. A minuteman minimax algorithm [6], 2. A variational method [2], [1], 3. A method of learning by diffusion [19], and 4. A generalized MPLE (private communication with Y.N. Wu). Besides the new algorithms, an ensemble equivalence theor... |


1 |
Polynomial-Time Approximation Algorithms for the Ising Model
- Jerrum, Sinclair
- 1993
Citation Context ...lo maximum-likelihood estimator (MCMCMLE) [12], [13], [9]. The second and third methods approximate partition functions by importance sampling techniques, a topic studied extensively in the literature [15], [16], [17], [8]. There are also methods [10], [11] that estimate Gibbs parameters with precomputed derivatives of log-partition functions. These algorithms were used primarily for learning MRF model... |


1 |
Minimax Entropy and Learning by Diffusion
- Shah
- 1998
Citation Context ...tly, four algorithms were proposed to speed up minimax entropy learning with mixed success: 1. A minuteman minimax algorithm [6], 2. A variational method [2], [1], 3. A method of learning by diffusion [19], and 4. A generalized MPLE (private communication with Y.N. Wu). Besides the new algorithms, an ensemble equivalence theorem enables the separation of the model selection procedure from the parameter... |

1 |
Estimation and Annealing for Gibbsian Fields, Annales de l'Institut Henri Poincaré
- Younes
- 1988
Citation Context ...lytically. For MRF models beyond the auto-model families, there are three algorithms. The first is a maximum pseudolikelihood estimator (MPLE) [4]. The second is a stochastic gradient algorithm [18], [21], [22]. The third is Markov chain Monte Carlo maximum-likelihood estimator (MCMCMLE) [12], [13], [9]. The second and third methods approximate partition functions by importance sampling techniques, a t... |



1 | Equivalence of Gibbs and Julesz ensembles - Wu, Zhu, et al. - 1999 |
