Results 1–10 of 211
Constraint Networks
, 1992
"... Constraintbased reasoning is a paradigm for formulating knowledge as a set of constraints without specifying the method by which these constraints are to be satisfied. A variety of techniques have been developed for finding partial or complete solutions for different kinds of constraint expression ..."
Abstract

Cited by 1149 (43 self)
Constraint-based reasoning is a paradigm for formulating knowledge as a set of constraints without specifying the method by which these constraints are to be satisfied. A variety of techniques have been developed for finding partial or complete solutions for different kinds of constraint expressions. These have been successfully applied to diverse tasks such as design, diagnosis, truth maintenance, scheduling, spatiotemporal reasoning, logic programming and user interfaces. Constraint networks are graphical representations used to guide strategies for solving constraint satisfaction problems (CSPs).
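For illustration (this sketch is not from the paper above), a small map-coloring problem can be written as a constraint network of variables, domains and binary not-equal constraints, and solved by plain backtracking search:

# A minimal constraint-network sketch (illustrative only): variables, domains,
# and binary not-equal constraints for a 4-region map-coloring CSP, solved by backtracking.

variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}
constraints = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]   # adjacent regions differ

def consistent(var, value, assignment):
    """Check the candidate value against all constraints touching already-assigned variables."""
    for x, y in constraints:
        other = y if x == var else x if y == var else None
        if other is not None and other in assignment and assignment[other] == value:
            return False
    return True

def backtrack(assignment):
    if len(assignment) == len(variables):
        return assignment                      # complete, consistent assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if consistent(var, value, assignment):
            result = backtrack({**assignment, var: value})
            if result is not None:
                return result
    return None                                # dead end: no value works

print(backtrack({}))   # e.g. {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}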
Dynamic Bayesian Networks: Representation, Inference and Learning
, 2002
"... Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have bee ..."
Abstract

Cited by 759 (3 self)
Modelling sequential data is important in many areas of science and engineering. Hidden Markov models (HMMs) and Kalman filter models (KFMs) are popular for this because they are simple and flexible. For example, HMMs have been used for speech recognition and biosequence analysis, and KFMs have been used for problems ranging from tracking planes and missiles to predicting the economy. However, HMMs and KFMs are limited in their “expressive power”. Dynamic Bayesian Networks (DBNs) generalize HMMs by allowing the state space to be represented in factored form, instead of as a single discrete random variable. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian. In this thesis, I will discuss how to represent many different kinds of models as DBNs, how to perform exact and approximate inference in DBNs, and how to learn DBN models from sequential data.
In particular, the main novel technical contributions of this thesis are as follows: a way of representing Hierarchical HMMs as DBNs, which enables inference to be done in O(T) time instead of O(T^3), where T is the length of the sequence; an exact smoothing algorithm that takes O(log T) space instead of O(T); a simple way of using the junction tree algorithm for online inference in DBNs; new complexity bounds on exact online inference in DBNs; a new deterministic approximate inference algorithm called factored frontier; an analysis of the relationship between the BK algorithm and loopy belief propagation; a way of applying Rao-Blackwellised particle filtering to DBNs in general, and the SLAM (simultaneous localization and mapping) problem in particular; a way of extending the structural EM algorithm to DBNs; and a variety of different applications of DBNs. However, perhaps the main value of the thesis is its catholic presentation of the field of sequential data modelling.
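As a toy illustration (made-up numbers, not code from the thesis), the simplest DBN is an HMM, and filtering reduces to one predict-and-weight recursion per time step:

import numpy as np

# Forward (filtering) recursion for a 2-state HMM, the simplest DBN.
# All numbers below are made-up toy values for illustration only.
T = np.array([[0.7, 0.3],      # transition matrix: T[i, j] = P(x_t = j | x_{t-1} = i)
              [0.2, 0.8]])
E = np.array([[0.9, 0.1],      # emission matrix: E[i, y] = P(y_t = y | x_t = i)
              [0.3, 0.7]])
prior = np.array([0.5, 0.5])

def filter_hmm(observations):
    """Return P(x_t | y_1..y_t) for each t; O(len(observations) * K^2) time for K states."""
    belief = prior.copy()
    beliefs = []
    for y in observations:
        belief = E[:, y] * (T.T @ belief)   # predict one step, then weight by the evidence
        belief /= belief.sum()              # normalize
        beliefs.append(belief)
    return beliefs

print(filter_hmm([0, 1, 1]))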
A Partial k-Arboretum of Graphs With Bounded Treewidth
 J. Algorithms
, 1998
"... The notion of treewidth has seen to be a powerful vehicle for many graph algorithmic studies. This survey paper wants to give an overview of many classes of graphs that can be seen to have a uniform upper bound on the treewidth of graphs in the class. Also, some mutual relations between such classes ..."
Abstract

Cited by 335 (34 self)
The notion of treewidth has proven to be a powerful vehicle for many graph algorithmic studies. This survey gives an overview of many classes of graphs that have a uniform upper bound on the treewidth of their members, and discusses some mutual relations between such classes.
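As a self-contained sketch (not taken from the survey), a greedy min-degree elimination order yields an upper bound on treewidth; on a cycle it returns 2, which is tight:

# Greedy min-degree elimination: an upper bound on treewidth (illustrative sketch).
# Eliminating a vertex connects its remaining neighbors into a clique; the largest
# neighborhood met along the way bounds the width of the induced tree decomposition.

def treewidth_upper_bound(adj):
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    width = 0
    while adj:
        v = min(adj, key=lambda u: len(adj[u]))   # min-degree heuristic
        neighbors = adj.pop(v)
        width = max(width, len(neighbors))
        for a in neighbors:                       # turn the neighborhood into a clique
            adj[a].discard(v)
            adj[a] |= neighbors - {a}
    return width

# A 5-cycle: the true treewidth is 2, and the greedy bound matches here.
cycle = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
print(treewidth_upper_bound(cycle))   # -> 2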
Bucket Elimination: A Unifying Framework for Reasoning
"... Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problemsolving and reasoning tasks. Algorithms such as directionalresolution for propositional satisfiability, adaptiveconsistency for constraint satisfaction, Fourier and Gaussian elimination ..."
Abstract

Cited by 316 (64 self)
Bucket elimination is an algorithmic framework that generalizes dynamic programming to accommodate many problem-solving and reasoning tasks. Algorithms such as directional resolution for propositional satisfiability, adaptive consistency for constraint satisfaction, Fourier and Gaussian elimination for solving linear equalities and inequalities, and dynamic programming for combinatorial optimization can all be accommodated within the bucket-elimination framework. Many probabilistic inference tasks can likewise be expressed as bucket-elimination algorithms. These include belief updating, finding the most probable explanation, and expected utility maximization. These algorithms share the same performance guarantees; all are time and space exponential in the induced width of the problem's interaction graph. While elimination strategies place extensive demands on memory, a contrasting class of algorithms called "conditioning search" requires only linear space. Algorithms in this class split a problem into subproblems by instantiating a subset of variables, called a conditioning set, or a cutset. Typical examples of conditioning search algorithms are backtracking (in constraint satisfaction) and branch and bound (for combinatorial optimization). The paper presents the bucket-elimination framework as a unifying theme across probabilistic and deterministic reasoning tasks and shows how conditioning search can be augmented to systematically trade space for time.
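A minimal sketch of the bucket operation (toy factors, not the paper's notation): multiply all factors that mention a variable, then sum that variable out. The chain A -> B -> C below computes P(C) by eliminating A and then B:

from itertools import product

# Bucket/variable elimination on a tiny chain A -> B -> C with binary variables.
# A factor is (scope, table): scope is a tuple of variable names, table maps
# value tuples over that scope to numbers. All probabilities are made up.

def multiply(f1, f2):
    """Pointwise product of two factors over the union of their scopes."""
    scope1, table1 = f1
    scope2, table2 = f2
    scope = tuple(dict.fromkeys(scope1 + scope2))        # ordered union of scopes
    table = {}
    for vals in product([0, 1], repeat=len(scope)):
        asg = dict(zip(scope, vals))
        table[vals] = (table1[tuple(asg[v] for v in scope1)]
                       * table2[tuple(asg[v] for v in scope2)])
    return scope, table

def sum_out(factor, var):
    """Eliminate `var` by summing it out of the factor."""
    scope, table = factor
    new_scope = tuple(v for v in scope if v != var)
    new_table = {}
    for vals, p in table.items():
        key = tuple(v for v, name in zip(vals, scope) if name != var)
        new_table[key] = new_table.get(key, 0.0) + p
    return new_scope, new_table

# P(A), P(B|A), P(C|B) as factors.
pA  = (("A",),     {(0,): 0.6, (1,): 0.4})
pBA = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
pCB = (("B", "C"), {(0, 0): 0.7, (0, 1): 0.3, (1, 0): 0.1, (1, 1): 0.9})

bucket_A = sum_out(multiply(pA, pBA), "A")         # bucket of A -> factor over B
bucket_B = sum_out(multiply(bucket_A, pCB), "B")   # bucket of B -> factor over C
print(bucket_B)   # P(C): {(0,): 0.472, (1,): 0.528}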
Bucket Elimination: A Unifying Framework for Probabilistic Inference
, 1996
"... Probabilistic inference algorithms for belief updating, finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility are reformulated within the bucket elimination framework. This emphasizes the principles common to many of the algorithms appearing in ..."
Abstract

Cited by 313 (32 self)
Probabilistic inference algorithms for belief updating, finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility are reformulated within the bucket elimination framework. This emphasizes the principles common to many of the algorithms appearing in the probabilistic inference literature and clarifies the relationship of such algorithms to nonserial dynamic programming algorithms. A general method for combining conditioning and bucket elimination is also presented. For all the algorithms, bounds on complexity are given as a function of the problem's structure.
A Scalable Method for Multiagent Constraint Optimization
"... We present in this paper a new, complete method for distributed constraint optimization, based on dynamic programming. It is a utility propagation method, inspired by the sumproduct algorithm, which is correct only for treeshaped constraint networks. In this paper, we show how to extend that algor ..."
Abstract

Cited by 180 (18 self)
We present in this paper a new, complete method for distributed constraint optimization, based on dynamic programming. It is a utility propagation method, inspired by the sum-product algorithm, which is correct only for tree-shaped constraint networks. In this paper, we show how to extend that algorithm to arbitrary topologies using a pseudo-tree arrangement of the problem graph. Our algorithm requires a linear number of messages, whose maximal size depends on the induced width along the particular pseudo-tree chosen. We compare our algorithm with backtracking algorithms, and present experimental results. For some problem types we report orders of magnitude fewer messages, and the ability to deal with arbitrarily large problems. Our algorithm is formulated for optimization problems, but can be easily applied to satisfaction problems as well.
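A much-simplified sketch of the utility-propagation idea (toy utilities; the real method handles non-tree edges via a pseudo-tree and larger messages): each node sends its parent, for every parent value, the best utility achievable in its subtree, so one message per edge suffices.

# Simplified utility propagation on a tree-shaped constraint optimization problem.
# Illustrative only: domains, tree, and utilities below are made up.

DOMAIN = [0, 1]                       # every agent picks a value from {0, 1}
children = {"root": ["x1", "x2"], "x1": [], "x2": ["x3"], "x3": []}

def edge_util(parent_val, child_val):
    """Utility on the (parent, child) edge; toy function."""
    return 2 if parent_val != child_val else 1

def util_message(node):
    """For each possible parent value, the best utility achievable in node's subtree."""
    child_msgs = [util_message(c) for c in children[node]]   # one UTIL message per child
    msg = {}
    for pv in DOMAIN:
        best = float("-inf")
        for cv in DOMAIN:                                     # node's own value
            sub = sum(m[cv] for m in child_msgs)              # children index on *our* value
            best = max(best, sub + edge_util(pv, cv))
        msg[pv] = best
    return msg

# The root has no parent, so it just maximizes over its own value.
root_msgs = [util_message(c) for c in children["root"]]
best_util, best_val = max((sum(m[rv] for m in root_msgs), rv) for rv in DOMAIN)
print(best_util, best_val)   # best total utility and a root value achieving it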
On problems without polynomial kernels
 Lect. Notes Comput. Sci.
, 2007
"... Kernelization is a strong and widelyapplied technique in parameterized complexity. In a nutshell, a kernelization algorithm, or simply a kernel, is a polynomialtime transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size and parame ..."
Abstract

Cited by 147 (16 self)
Kernelization is a strong and widely applied technique in parameterized complexity. In a nutshell, a kernelization algorithm, or simply a kernel, is a polynomial-time transformation that transforms any given parameterized instance to an equivalent instance of the same problem, with size and parameter bounded by a function of the parameter in the input. A kernel is polynomial if the size and parameter of the output are polynomially bounded by the parameter of the input. In this paper we develop a framework which allows showing that a wide range of FPT problems do not have polynomial kernels. Our evidence relies on hypotheses made in the classical world (i.e., non-parametric complexity), and revolves around a new type of algorithm for classical decision problems, called a distillation algorithm, which might be of independent interest. Using the notion of distillation algorithms, we develop a generic lower-bound engine which allows us to show that a variety of FPT problems, fulfilling certain criteria, cannot have polynomial kernels unless the polynomial hierarchy collapses. These problems include k-Path, k-Cycle, k-Exact Cycle, k-Short Cheap Tour, k-Graph Minor Order Test, k-Cutwidth, k-Search Number, k-Pathwidth, k-Treewidth, k-Branchwidth, and several optimization problems parameterized by treewidth or cliquewidth.
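As a textbook example of what a polynomial kernel looks like (Buss's rule for k-Vertex Cover, not one of the paper's lower-bound targets): high-degree vertices are forced into the cover, after which any yes-instance has at most k^2 edges.

# Buss's kernel for k-Vertex Cover: a classic polynomial kernel (illustrative sketch).
# Input: edge list and parameter k. Output: an equivalent instance with at most
# k'^2 edges (k' <= k), or a definite "NO" answer.

def buss_kernel(edges, k):
    edges = set(map(frozenset, edges))
    forced = set()
    while True:
        degree = {}
        for e in edges:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        high = [v for v, d in degree.items() if d > k]
        if not high:
            break
        v = high[0]                     # v has more than k incident edges, so any
        forced.add(v)                   # cover of size <= k must contain it
        edges = {e for e in edges if v not in e}
        k -= 1
        if k < 0:
            return "NO"
    if len(edges) > k * k:              # max degree is now <= k, so k vertices
        return "NO"                     # can cover at most k^2 edges
    return edges, k, forced             # kernel: equivalent and polynomially bounded

print(buss_kernel([(1, 2), (1, 3), (1, 4), (2, 3), (4, 5)], k=2))   # -> NO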
Decomposable negation normal form
 Journal of the ACM
, 2001
"... Abstract. Knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. According to this approach, the reasoning process is split into two phases: an offline compilation phase and an online quer ..."
Abstract

Cited by 133 (19 self)
Knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. According to this approach, the reasoning process is split into two phases: an offline compilation phase and an online query-answering phase. In the offline phase, the propositional theory is compiled into some target language, which is typically a tractable one. In the online phase, the compiled target is used to efficiently answer a (potentially) exponential number of queries. The main motivation behind knowledge compilation is to push as much of the computational overhead as possible into the offline phase, in order to amortize that overhead over all online queries. Another motivation behind compilation is to produce very simple online reasoning systems, which can be embedded cost-effectively into primitive computational platforms, such as those found in consumer electronics. One of the key aspects of any compilation approach is the target language into which the propositional theory is compiled. Previous target languages included Horn theories, prime implicates/implicants and ordered binary decision diagrams (OBDDs). We propose in this paper a new target compilation language, known as decomposable negation normal form (DNNF), and present a number of its properties that make it of interest to the broad community. Specifically, we
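A toy sketch (not Darwiche's implementation) of why decomposability matters: because the conjuncts of every AND node mention disjoint sets of variables, satisfiability of a DNNF circuit can be decided bottom-up in time linear in its size.

# Satisfiability of a circuit in decomposable negation normal form (DNNF).
# Node encoding is illustrative: ("lit", "x") / ("lit", "-x"), ("true",), ("false",),
# ("and", ...children), ("or", ...children). AND-children share no variables.

def satisfiable(node):
    kind = node[0]
    if kind in ("lit", "true"):
        return True                                    # a lone literal is always satisfiable
    if kind == "false":
        return False
    children = node[1:]
    if kind == "and":
        return all(satisfiable(c) for c in children)   # sound only thanks to decomposability
    if kind == "or":
        return any(satisfiable(c) for c in children)
    raise ValueError(f"unknown node type: {kind}")

# (x AND y) OR (NOT x AND z): each AND ranges over disjoint variables, so this is DNNF.
circuit = ("or",
           ("and", ("lit", "x"), ("lit", "y")),
           ("and", ("lit", "-x"), ("lit", "z")))
print(satisfiable(circuit))        # -> True

# Conditioning on x = False replaces ("lit", "x") by ("false",) and ("lit", "-x") by ("true",).
conditioned = ("or",
               ("and", ("false",), ("lit", "y")),
               ("and", ("true",), ("lit", "z")))
print(satisfiable(conditioned))    # -> True (the second branch survives)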