Results 1–10 of 16
Learning the structure of Markov logic networks
In Proceedings of the 22nd International Conference on Machine Learning, 2005
Cited by 94 (18 self)
Abstract
Markov logic networks (MLNs) combine logic and probability by attaching weights to first-order clauses and viewing these as templates for features of Markov networks. In this paper we develop an algorithm for learning the structure of MLNs from relational databases, combining ideas from inductive logic programming (ILP) and feature induction in Markov networks. The algorithm performs a beam or shortest-first search of the space of clauses, guided by a weighted pseudo-likelihood measure. This requires computing the optimal weights for each candidate structure, but we show how this can be done efficiently. The algorithm can be used to learn an MLN from scratch, or to refine an existing knowledge base. We have applied it in two real-world domains, and found that it outperforms using off-the-shelf ILP systems to learn the MLN structure, as well as pure ILP, purely probabilistic and purely knowledge-based approaches.
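The clause search described above can be sketched as a generic beam search. This is only a structural skeleton: the refinement operator and score function below are hypothetical stand-ins, whereas the paper's learner refines first-order clauses and scores each candidate by weighted pseudo-likelihood after optimizing clause weights.

```python
def beam_search(initial, refine, score, beam_width=4, max_steps=10):
    """Keep the `beam_width` best candidates at each step; return the best seen."""
    beam = [initial]
    best = initial
    for _ in range(max_steps):
        candidates = [c for s in beam for c in refine(s)]
        if not candidates:
            break
        candidates.sort(key=score, reverse=True)
        beam = candidates[:beam_width]
        if score(beam[0]) > score(best):
            best = beam[0]
        else:
            break  # no improving refinement: stop, as a greedy structure learner would
    return best

# Toy usage: "clauses" are tuples of integers, refined by appending a literal,
# scored by a made-up function that peaks at one particular structure.
if __name__ == "__main__":
    target = (1, 2, 3)
    refine = lambda s: [s + (i,) for i in range(5)] if len(s) < 4 else []
    score = lambda s: -sum((a - b) ** 2 for a, b in zip(s, target)) - abs(len(s) - 3)
    print(beam_search((), refine, score))
```

The early-exit on non-improvement mirrors the greedy flavor of structure learners: once no refinement beats the incumbent, the search terminates rather than exhausting the step budget.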
A Chernoff bound for random walks on expander graphs
SIAM Journal on Computing, 1998
An Optimal Algorithm for Monte Carlo Estimation, 1995
Cited by 54 (4 self)
Abstract
A typical approach to estimating an unknown quantity μ is to design an experiment that produces a random variable Z distributed in [0, 1] with E[Z] = μ, run this experiment independently a number of times, and use the average of the outcomes as the estimate. In this paper, we consider the case when no a priori information about Z is known except that it is distributed in [0, 1]. We describe an approximation algorithm AA which, given ε and δ, when running independent experiments with respect to any Z, produces an estimate that is within a factor 1 + ε of μ with probability at least 1 − δ. We prove that the expected number of experiments run by AA (which depends on Z) is optimal to within a constant factor for every Z. An announcement of these results appears in P. Dagum, R. Karp, M. Luby, S. Ross, "An optimal algorithm for Monte Carlo estimation (extended abstract)", Proceedings of the Thirty-sixth IEEE Symposium on Foundations of Computer Science, 1995, pp. 142–149 [3]. Section ...
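For contrast with the paper's optimal AA algorithm, here is the naive fixed-sample-size baseline it improves on: average n independent draws, with n chosen from a Chernoff-style bound. The constant 3 and the assumed lower bound `mu_lower` on μ are illustrative assumptions, not the paper's method; AA avoids needing any such lower bound by choosing its sample size adaptively.

```python
import math
import random

def estimate_mean(sample, eps, delta, mu_lower=0.05):
    """Estimate mu = E[Z] for Z in [0, 1] by plain averaging.

    A Chernoff-style bound says O(log(1/delta) / (eps^2 * mu)) samples
    suffice for a (1 + eps)-factor estimate with probability >= 1 - delta;
    since mu is unknown, we plug in an assumed lower bound `mu_lower`.
    """
    n = math.ceil(3 * math.log(2 / delta) / (eps ** 2 * mu_lower))
    return sum(sample() for _ in range(n)) / n

random.seed(0)
# Bernoulli(0.3) experiment: the estimate should land near 0.3.
est = estimate_mean(lambda: random.random() < 0.3, eps=0.1, delta=0.05, mu_lower=0.1)
```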
The Complexity of Query Reliability
In PODS, 1998
Cited by 46 (2 self)
Abstract
The reliability of database queries on databases with uncertain information is studied, on the basis of a probabilistic model for unreliable databases. While it was already known that the reliability of quantifier-free queries is computable in polynomial time, we show here that already for conjunctive queries the reliability may become highly intractable. We exhibit a conjunctive query whose reliability problem is complete for FP^#P. We further show that FP^#P is the typical complexity level for the reliability problems of a very large class of queries, including all second-order queries. We study approximation algorithms and prove that the reliabilities of all polynomial-time evaluable queries can be efficiently approximated by randomized algorithms. Finally, we discuss the extension of our approach to the more general meta-finite database model, where finite relational structures are endowed with functions into an infinite interpreted domain; in addition, queries may use aggregate ...
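The #P-hardness above comes from summing over exponentially many possible worlds. The sketch below makes that concrete for a tuple-independent model: each tuple is present independently with its own probability, and reliability is the total probability of worlds where the query holds. The relations, probabilities, and query are invented for illustration, not taken from the paper.

```python
import itertools

# Tuple-independent probabilistic database: value -> probability of presence.
R = {1: 0.5, 2: 0.9}
S = {2: 0.5, 3: 0.8}

def query_holds(r_world, s_world):
    # Conjunctive query  q() :- R(x), S(x)  ("do R and S share a value?")
    return bool(r_world & s_world)

def reliability_exact():
    """Sum the probability of every possible world in which the query holds.

    Enumerates all 2^|R| * 2^|S| worlds -- exactly the exponential blow-up
    that makes exact reliability intractable for general conjunctive queries.
    """
    total = 0.0
    for r_bits in itertools.product([0, 1], repeat=len(R)):
        for s_bits in itertools.product([0, 1], repeat=len(S)):
            p, r_world, s_world = 1.0, set(), set()
            for (v, pr), b in zip(R.items(), r_bits):
                p *= pr if b else 1 - pr
                if b:
                    r_world.add(v)
            for (v, pr), b in zip(S.items(), s_bits):
                p *= pr if b else 1 - pr
                if b:
                    s_world.add(v)
            if query_holds(r_world, s_world):
                total += p
    return total
```

Here the only shared value is 2, so the query holds exactly when both R(2) and S(2) are present, giving reliability 0.9 × 0.5 = 0.45; the randomized approximation the paper proves possible would instead sample worlds and average.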
Approximating the Permanent of Graphs with Large Factors
Proceedings of the 29th IEEE Symposium on Foundations of Computer Science, 1992
Cited by 25 (2 self)
Abstract
Let G = (U, V, E) be a bipartite graph with |U| = |V| = n. The factor size of G, f, is the maximum number of edge-disjoint perfect matchings in G. We characterize the complexity of counting the number of perfect matchings in classes of graphs parameterized by factor size. We describe the simple algorithm, an approximation algorithm for the permanent that is a natural simplification of the algorithm suggested in [Broder 86] and analyzed in [Jerrum, Sinclair 88a, 88b]. Compared to the algorithm in [Jerrum, Sinclair 88a, 88b], the simple algorithm achieves a polynomial speed-up in the running time to compute the permanent. A combinatorial lemma is used to prove that the simple algorithm runs in time n^O(n/f). Thus: (1) for all constants α > 0, ...
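The quantity being approximated has a simple exact (but exponential) definition: the permanent of the biadjacency matrix of G equals the number of perfect matchings. A brute-force sketch, usable only for small n, makes the connection explicit:

```python
from itertools import permutations

def permanent(A):
    """Permanent of a square 0,1-matrix by summing over all n! permutations.

    For a bipartite graph with biadjacency matrix A, each permutation p with
    A[i][p[i]] = 1 for all i is a perfect matching, so this count equals the
    number of perfect matchings. Exponential time -- approximation algorithms
    like the one above exist precisely because this does not scale.
    """
    n = len(A)
    return sum(
        1
        for p in permutations(range(n))
        if all(A[i][p[i]] for i in range(n))
    )
```

For example, the complete bipartite graph K_{3,3} (all-ones 3×3 matrix) has permanent 3! = 6, while the identity matrix has permanent 1.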
Approximate counting by dynamic programming
Proceedings of the 35th ACM Symposium on Theory of Computing, 2003
Cited by 15 (3 self)
Abstract
We give efficient algorithms to sample uniformly, and count approximately, solutions to the zero-one knapsack problem. The algorithm is based on using dynamic programming to provide a deterministic relative approximation. Then "dart throwing" techniques are used to give arbitrary approximation ratios. We extend this approach to several related problems: the m-constraint zero-one knapsack, the general integer knapsack (including its m-constraint version) and contingency tables with constantly many rows. We also indicate how further improvements can be obtained using randomized rounding.
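When the weights and capacity are small integers, the count the paper approximates can be computed exactly by a standard pseudo-polynomial dynamic program; the paper's contribution is needed precisely when the capacity is too large for this table. A minimal exact version:

```python
def count_knapsack_solutions(weights, capacity):
    """Count subsets of `weights` with total weight <= capacity.

    Classic 0-1 knapsack counting DP: dp[c] is the number of subsets of the
    items seen so far with total weight exactly c. Runs in O(n * capacity),
    i.e. pseudo-polynomial -- exponential in the bit-length of the capacity.
    """
    dp = [0] * (capacity + 1)
    dp[0] = 1  # the empty subset
    for w in weights:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] += dp[c - w]
    return sum(dp)
```

For weights [1, 2, 3] and capacity 3 the qualifying subsets are {}, {1}, {2}, {3}, and {1, 2}, so the count is 5.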
Lower bounds on two-terminal network reliability, 1985
Cited by 14 (0 self)
Abstract
One measure of two-terminal network reliability, termed probabilistic connectedness, is the probability that two specified communication centers can communicate. A standard model of a network is a graph in which nodes represent communication centers and edges represent links between them. Edges are assumed to have statistically independent probabilities of failing, and nodes are assumed to be perfectly reliable. Exact calculation of two-terminal reliability for general networks has been shown to be #P-complete. As a result, it is desirable to compute upper and lower bounds that avoid the exponential computation likely required by exact algorithms. Two methods are considered for computing lower bounds on two-terminal reliability.
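The quantity being bounded is easy to estimate by crude Monte Carlo simulation, which is not one of the paper's bounding methods but makes the model concrete: sample a world by failing each edge independently, then check s-t connectivity.

```python
import random

def two_terminal_reliability(nodes, edges, s, t, trials=20000, seed=0):
    """Monte Carlo estimate of Pr[s and t remain connected].

    Each edge (u, v, p_fail) fails independently with probability p_fail;
    nodes are perfectly reliable. Plain sampling sketch, not a rigorous bound.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        alive = [(u, v) for (u, v, p_fail) in edges if rng.random() >= p_fail]
        # Depth-first search from s over the surviving edges.
        adj = {n: [] for n in nodes}
        for u, v in alive:
            adj[u].append(v)
            adj[v].append(u)
        seen, stack = {s}, [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    stack.append(v)
        hits += t in seen
    return hits / trials

# Two parallel s-t links, each failing with probability 0.5:
# true reliability is 1 - 0.5 * 0.5 = 0.75.
r = two_terminal_reliability([0, 1], [(0, 1, 0.5), (0, 1, 0.5)], 0, 1)
```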
A Mildly Exponential Approximation Algorithm for the Permanent
© 1996 Springer-Verlag New York Inc.
Abstract
A new approximation algorithm for the permanent of an n × n 0,1-matrix is presented. The algorithm is shown to have worst-case time complexity exp(O(n^(1/2) log^2 n)). Asymptotically, this represents a considerable improvement over the best existing algorithm, which has worst-case time complexity exp(Θ(n)).