Results 1–10 of 115
Markov Logic Networks
Machine Learning, 2006
"... Abstract. We propose a simple approach to combining firstorder logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a firstorder knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects ..."
Abstract

Cited by 569 (34 self)
We propose a simple approach to combining first-order logic and probabilistic graphical models in a single representation. A Markov logic network (MLN) is a first-order knowledge base with a weight attached to each formula (or clause). Together with a set of constants representing objects in the domain, it specifies a ground Markov network containing one feature for each possible grounding of a first-order formula in the KB, with the corresponding weight. Inference in MLNs is performed by MCMC over the minimal subset of the ground network required for answering the query. Weights are efficiently learned from relational databases by iteratively optimizing a pseudo-likelihood measure. Optionally, additional clauses are learned using inductive logic programming techniques. Experiments with a real-world database and knowledge base in a university domain illustrate the promise of this approach.
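To make the grounding step concrete, here is a minimal Python sketch (not from the paper; the Smokes/Friends formula, the two constants, and the weight are illustrative assumptions) showing how one weighted first-order formula yields one binary feature per grounding, so that a world's unnormalized probability is exp(weight × number of true groundings):

```python
import itertools
import math

# Hypothetical toy domain: ground one weighted MLN formula over constants.
# Formula (weight w): Smokes(x) & Friends(x, y) => Smokes(y)
constants = ["Anna", "Bob"]
w = 1.5  # illustrative weight, not taken from the paper

# A "world" assigns a truth value to every ground atom.
world = {
    ("Smokes", ("Anna",)): True,
    ("Smokes", ("Bob",)): False,
    ("Friends", ("Anna", "Bob")): True,
    ("Friends", ("Bob", "Anna")): True,
    ("Friends", ("Anna", "Anna")): False,
    ("Friends", ("Bob", "Bob")): False,
}

def formula_true(x, y):
    # Smokes(x) & Friends(x, y) => Smokes(y), i.e. not(antecedent) or consequent.
    antecedent = world[("Smokes", (x,))] and world[("Friends", (x, y))]
    return (not antecedent) or world[("Smokes", (y,))]

# One binary feature per grounding; the world's unnormalized log-probability
# contribution from this formula is w times the count of true groundings.
n_true = sum(formula_true(x, y) for x, y in itertools.product(constants, repeat=2))
print("true groundings:", n_true, "unnormalized weight:", math.exp(w * n_true))
```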
Slice sampling
Annals of Statistics, 2000
"... Abstract. Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain th ..."
Abstract

Cited by 147 (5 self)
Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal ‘slice’ defined by the current vertical position, or more generally, with some update that leaves the uniform distribution over this slice invariant. Variations on such ‘slice sampling’ methods are easily implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and more efficient than simple Metropolis updates, due to the ability of slice sampling to adaptively choose the magnitude of changes made. It is therefore attractive for routine and automated use. Slice sampling methods that update all variables simultaneously are also possible. These methods can adaptively choose the magnitudes of changes made to each variable, based on the local properties of the density function. More ambitiously, such methods could potentially allow the sampling to adapt to dependencies between variables by constructing local quadratic approximations. Another approach is to improve sampling efficiency by suppressing random walks. This can be done using ‘overrelaxed’ versions of univariate slice sampling procedures, or by using ‘reflective’ multivariate slice sampling methods, which bounce off the edges of the slice.
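As a concrete illustration of the univariate case, the following is a minimal Python sketch of slice sampling with the stepping-out and shrinkage procedures described in the paper; the function names, bracket width w, and step cap are illustrative choices, not the paper's:

```python
import math
import random

def slice_sample(log_f, x0, w=1.0, n=1000, max_steps=50):
    """Univariate slice sampler with stepping-out and shrinkage.

    log_f: log of an (unnormalized) density; x0: starting point;
    w: initial slice-bracket width. Parameter names are illustrative.
    """
    xs, x = [], x0
    for _ in range(n):
        # 1. Vertical step: draw a height uniformly under f(x); equivalently
        #    log y = log f(x) - E with E ~ Exponential(1).
        log_y = log_f(x) - random.expovariate(1.0)
        # 2. Stepping out: position a width-w bracket randomly around x, then
        #    expand each end until it leaves the slice {x : f(x) > y}.
        left = x - w * random.random()
        right = left + w
        steps = max_steps
        while steps > 0 and log_f(left) > log_y:
            left -= w
            steps -= 1
        steps = max_steps
        while steps > 0 and log_f(right) > log_y:
            right += w
            steps -= 1
        # 3. Shrinkage: sample uniformly from the bracket, shrinking it toward
        #    x whenever the proposal falls outside the slice.
        while True:
            x1 = left + (right - left) * random.random()
            if log_f(x1) > log_y:
                x = x1
                break
            if x1 < x:
                left = x1
            else:
                right = x1
        xs.append(x)
    return xs

# Example: sample from a standard normal (unnormalized log-density).
samples = slice_sample(lambda x: -0.5 * x * x, x0=0.0)
print(sum(samples) / len(samples))  # should be near 0
```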
The multivariate Tutte polynomial (alias Potts model) for graphs and matroids
In Surveys in Combinatorics 2005, volume 327 of London Mathematical Society Lecture Notes, 2005
"... and matroids ..."
The stochastic random-cluster process and the uniqueness of random-cluster measures
1995
"... The randomcluster model is a generalisation of percolation and ferromagnetic Potts models, due to Fortuin and Kasteleyn (see [29]). Not only is the randomcluster model a worthwhile topic for study in its own right, but also it provides much information about phase transitions in the associated phy ..."
Abstract

Cited by 88 (14 self)
The random-cluster model is a generalisation of percolation and ferromagnetic Potts models, due to Fortuin and Kasteleyn (see [29]). Not only is the random-cluster model a worthwhile topic for study in its own right, but it also provides much information about phase transitions in the associated physical models. This paper serves two functions. First, we introduce and survey random-cluster measures from the probabilist’s point of view, giving clear statements of some of the many open problems. Secondly, we present new results for such measures, as follows. We discuss the relationship between weak limits of random-cluster measures and measures satisfying a suitable DLR condition. Using an argument based on the convexity of pressure, we prove the uniqueness of random-cluster measures for all but (at most) countably many values of the parameter p. Related results concerning phase transition in two or more dimensions are included, together with various stimulating conjectures. The uniqueness of the infinite cluster is employed in an intrinsic way in part of these arguments. In the second part of this paper, a Markov process is constructed whose level-sets are reversible Markov processes with random-cluster measures as unique equilibrium measures. This construction enables a coupling of random-cluster measures for all values of p. Furthermore, it leads to a proof of the semicontinuity of the percolation probability and provides a heuristic probabilistic justification for the widely held belief that there is a first-order phase transition if and only if the cluster-weighting factor q is sufficiently large.
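For context, the random-cluster measure on a finite graph is usually written as below; the notation is the conventional one, not quoted from this paper:

```latex
% Random-cluster measure on a finite graph G = (V, E), for edge
% configurations \omega \in \{0,1\}^E, where k(\omega) is the number of
% open clusters (connected components of the open subgraph):
\varphi_{p,q}(\omega) \;=\; \frac{1}{Z_{p,q}}
  \Bigl\{ \prod_{e \in E} p^{\omega(e)} (1-p)^{1-\omega(e)} \Bigr\}\, q^{k(\omega)}
% q = 1 recovers bond percolation; integer q \ge 2 couples to the q-state
% ferromagnetic Potts model via the Fortuin--Kasteleyn coupling.
```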
Auxiliary Variable Methods for Markov Chain Monte Carlo with Applications
Journal of the American Statistical Association, 1997
"... Suppose one wishes to sample from the density ß(x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution ß(ujx) can be defined, giving the joint distribution ß(x; u) = ß(x)ß(ujx). A MCMC scheme which samples over this joint distribution can lead to substanti ..."
Abstract

Cited by 63 (1 self)
Suppose one wishes to sample from the density π(x) using Markov chain Monte Carlo (MCMC). An auxiliary variable u and its conditional distribution π(u|x) can be defined, giving the joint distribution π(x, u) = π(x)π(u|x). An MCMC scheme which samples over this joint distribution can lead to substantial gains in efficiency compared to standard approaches. The revolutionary algorithm of Swendsen and Wang (1987) is one such example. In addition to reviewing the Swendsen-Wang algorithm and its generalizations, this paper introduces a new auxiliary variable method called partial decoupling. Two applications in Bayesian image analysis are considered. The first is a binary classification problem in which partial decoupling outperforms SW and single-site Metropolis. The second is a PET reconstruction which uses the gray level prior of Geman and McClure (1987). A generalized Swendsen-Wang algorithm is developed for this problem, which reduces the computing time to the point that MCMC is a viabl...
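For orientation, here is a minimal Python sketch of the Swendsen-Wang algorithm the paper reviews, applied to a ferromagnetic Ising model on a small grid; the grid size, coupling J = 1, and temperature are illustrative assumptions, and the auxiliary variables are the bond indicators:

```python
import math
import random

def sw_sweep(spins, L, beta):
    """One Swendsen-Wang sweep for an Ising model on an L x L grid
    with free boundaries; beta is the inverse temperature (J = 1)."""
    parent = list(range(L * L))  # union-find forest over sites

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    p_bond = 1.0 - math.exp(-2.0 * beta)
    # Auxiliary step: open a bond between equal neighbouring spins w.p. p_bond.
    for x in range(L):
        for y in range(L):
            i = x * L + y
            for j in ((x + 1) * L + y if x + 1 < L else None,
                      x * L + y + 1 if y + 1 < L else None):
                if j is not None and spins[i] == spins[j] and random.random() < p_bond:
                    union(i, j)
    # Conditional on the bonds, each cluster's spin is uniform: flip w.p. 1/2.
    flip = {}
    for i in range(L * L):
        r = find(i)
        if r not in flip:
            flip[r] = random.random() < 0.5
        if flip[r]:
            spins[i] = -spins[i]
    return spins

L, beta = 16, 0.3
spins = [random.choice((-1, 1)) for _ in range(L * L)]
for _ in range(100):
    sw_sweep(spins, L, beta)
print("magnetization:", sum(spins) / (L * L))
```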
Data clustering using a model granular magnet
Neural Computation, 1997
"... We present a new approach to clustering, based on the physical properties of an inhomogeneous ferromagnet. No assumption is made regarding the underlying distribution of the data. We assign a Potts spin to each data point and introduce an interaction between neighboring points, whose strength is a d ..."
Abstract

Cited by 57 (2 self)
We present a new approach to clustering, based on the physical properties of an inhomogeneous ferromagnet. No assumption is made regarding the underlying distribution of the data. We assign a Potts spin to each data point and introduce an interaction between neighboring points, whose strength is a decreasing function of the distance between the neighbors. This magnetic system exhibits three phases. At very low temperatures, it is completely ordered; all spins are aligned. At very high temperatures, the system does not exhibit any ordering, and in an intermediate regime, clusters of relatively strongly coupled spins become ordered, whereas different clusters remain uncorrelated. This intermediate phase is identified by a jump in the order parameters. The spin-spin correlation function is used to partition the spins and the corresponding data points into clusters. We demonstrate on three synthetic and three real data sets how the method works. Detailed comparison to the performance of other techniques clearly indicates the relative success of our method.
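The two quantities the abstract alludes to can be sketched as follows; the notation and the 1/2 threshold follow common presentations of this family of methods and are assumptions here, not quotations from the paper:

```latex
% Coupling between data points x_i, x_j (assumed notation): a short-range
% ferromagnetic interaction decaying with distance,
J_{ij} \;=\; \frac{1}{\hat{K}} \exp\!\Bigl( -\frac{\lVert x_i - x_j \rVert^2}{2a^2} \Bigr),
% with a a local length scale and \hat{K} the average number of neighbours.
% Points are grouped by thresholding the spin-spin correlation of the
% q-state Potts spins s_i in the intermediate (superparamagnetic) phase:
G_{ij} \;=\; \langle \delta_{s_i, s_j} \rangle \;>\; \tfrac{1}{2}.
```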
Convergence of slice sampler Markov chains
1998
"... In this paper, we analyse theoretical properties of the slice sampler. We find that the algorithm has extremely robust geometric ergodicity properties. For the case of just one auxiliary variable, we demonstrate that the algorithm is stochastic monotone, and deduce analytic bounds on the total varia ..."
Abstract

Cited by 55 (10 self)
In this paper, we analyse theoretical properties of the slice sampler. We find that the algorithm has extremely robust geometric ergodicity properties. For the case of just one auxiliary variable, we demonstrate that the algorithm is stochastically monotone, and deduce analytic bounds on the total variation distance from stationarity of the method using Foster-Lyapunov drift condition methodology.
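For readers unfamiliar with the methodology invoked, a geometric Foster-Lyapunov drift condition has the standard form below; this is textbook notation, not quoted from the paper:

```latex
% Geometric Foster--Lyapunov drift condition for a Markov kernel P:
% there exist V \ge 1, \lambda < 1, b < \infty and a small set C with
(P V)(x) \;=\; \int V(y)\, P(x, \mathrm{d}y) \;\le\; \lambda V(x) + b\,\mathbf{1}_C(x),
% which, combined with a minorization condition on C, yields geometric
% ergodicity and quantitative total-variation bounds to stationarity.
```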
Generalizing Swendsen-Wang to Sampling Arbitrary Posterior Probabilities
PAMI, 2005
"... Many vision tasks can be formulated as graph partition problems that minimize energy functions. For such problems, the Gibbs... ..."
Abstract

Cited by 53 (12 self)
Many vision tasks can be formulated as graph partition problems that minimize energy functions. For such problems, the Gibbs...
Bounds On The Complex Zeros Of (Di)Chromatic Polynomials And Potts-Model Partition Functions
 Chromatic Roots Are Dense In The Whole Complex Plane, Combinatorics, Probability and Computing
"... I show that there exist universal constants C(r) < ∞ such that, for all loopless graphs G of maximum degree ≤ r, the zeros (real or complex) of the chromatic polynomial PG(q) lie in the disc q  < C(r). Furthermore, C(r) ≤ 7.963907r. This result is a corollary of a more general result on the zeros ..."
Abstract

Cited by 47 (11 self)
I show that there exist universal constants C(r) < ∞ such that, for all loopless graphs G of maximum degree ≤ r, the zeros (real or complex) of the chromatic polynomial P_G(q) lie in the disc |q| < C(r). Furthermore, C(r) ≤ 7.963907r. This result is a corollary of a more general result on the zeros of the Potts-model partition function Z_G(q, {v_e}) in the complex antiferromagnetic regime |1 + v_e| ≤ 1. The proof is based on a transformation of the Whitney–Tutte–Fortuin–Kasteleyn representation of Z_G(q, {v_e}) to a polymer gas, followed by verification of the Dobrushin–Kotecký–Preiss condition for nonvanishing of a polymer-model partition function. I also show that, for all loopless graphs G of second-largest degree ≤ r, the zeros of P_G(q) lie in the disc |q| < C(r) + 1. KEY WORDS: Graph, maximum degree, second-largest degree, chromatic polynomial,
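The Whitney-Tutte-Fortuin-Kasteleyn representation referred to here is, in standard notation (the notation is an assumption of this note, not a quotation):

```latex
% Whitney--Tutte--Fortuin--Kasteleyn (subgraph expansion) form of the
% Potts-model partition function for G = (V, E), with k(A) the number of
% connected components of the spanning subgraph (V, A):
Z_G\bigl(q, \{v_e\}\bigr) \;=\; \sum_{A \subseteq E} q^{k(A)} \prod_{e \in A} v_e
% The chromatic polynomial is the special case v_e = -1 for all e:
% P_G(q) = Z_G(q, \{-1\}).
```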
Markov Chain Monte Carlo Methods Based on ‘Slicing’ the Density Function
1997
"... . One way to sample from a distribution is to sample uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal `sl ..."
Abstract

Cited by 46 (0 self)
One way to sample from a distribution is to sample uniformly from the region under the plot of its density function. A Markov chain that converges to this uniform distribution can be constructed by alternating uniform sampling in the vertical direction with uniform sampling from the horizontal ‘slice’ defined by the current vertical position. Variations on such ‘slice sampling’ methods can easily be implemented for univariate distributions, and can be used to sample from a multivariate distribution by updating each variable in turn. This approach is often easier to implement than Gibbs sampling, and may be more efficient than easily constructed versions of the Metropolis algorithm. Slice sampling is therefore attractive in routine Markov chain Monte Carlo applications, and for use by software that automatically generates a Markov chain sampler from a model specification. One can also easily devise overrelaxed versions of slice sampling, which sometimes greatly improve sampling effici...