Results

### Tractable structures for constraint satisfaction problems

- Handbook of Constraint Programming, Part I, Chapter 7, 2006

"... ..."

### Generalized Hypertree ...

- 2007

"... The generalized hypertree width GHW (H) of a hypergraph H is a measure of its cyclicity. Classes of conjunctive queries or constraint satisfaction problems whose associated hypergraphs have bounded GHW are known to be solvable in polynomial time. However, it has been an open problem for several yea ..."

Abstract

The generalized hypertree width GHW(H) of a hypergraph H is a measure of its cyclicity. Classes of conjunctive queries or constraint satisfaction problems whose associated hypergraphs have bounded GHW are known to be solvable in polynomial time. However, it has been an open problem for several years whether, for a fixed constant k and input hypergraph H, it can be determined in polynomial time whether GHW(H) ≤ k. Here, this problem is settled by proving that even for k = 3 the problem is already NP-hard. On the way to this result, another long-standing open problem, originally raised by Goodman and Shmueli in 1984 in the context of join optimization, is solved: it is proven that determining whether a hypergraph H admits a tree projection with respect to a hypergraph G is NP-complete. Our intractability results on generalized hypertree width motivate further research on more restrictive tractable hypergraph decomposition methods that approximate general hypertree decomposition (GHD). We show that each such method is dominated by a tractable decomposition method definable through a function that associates a set of partial edges to a hypergraph. By using one particular such function, we define the new Component Hypertree Decomposition method, which is tractable and strictly more general than other approximations to GHD published so far.
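The tractable base case behind these results is GHW(H) = 1, which coincides with α-acyclicity and can be tested by the classical GYO ear-removal reduction. A minimal sketch in Python (hyperedges as sets of vertex names; the representation and function name are illustrative, not from the paper):

```python
def is_alpha_acyclic(hyperedges):
    """GYO reduction: H is alpha-acyclic (GHW = 1) iff repeatedly
    removing 'ears' eliminates every hyperedge."""
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        # Rule 1: a vertex occurring in exactly one hyperedge can be dropped.
        for e in edges:
            for v in list(e):
                if sum(v in f for f in edges) == 1:
                    e.discard(v)
                    changed = True
        # Rule 2: an empty edge, or an edge contained in another edge,
        # can be dropped (one per pass, then re-scan).
        for i, e in enumerate(edges):
            if not e or any(j != i and e <= edges[j] for j in range(len(edges))):
                del edges[i]
                changed = True
                break
    return not edges
```

For example, a path of two edges {a,b}, {b,c} reduces away completely, while the triangle {a,b}, {b,c}, {c,a} is irreducible and hence not α-acyclic.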

### Chapter 7: Combining Search and Inference

"... Trading space for time As we noted at the introduction, search and inference have complementary properties. Inference exploit the graph structure and therefore allows structure-based time guarantees but require substantial memory. Brute-force Search, does not posses good complexity time bounds but a ..."

Abstract

Trading space for time. As we noted in the introduction, search and inference have complementary properties. Inference exploits the graph structure and therefore allows structure-based time guarantees, but requires substantial memory. Brute-force search does not possess good time-complexity bounds, but AND/OR search does, as we will show in the forthcoming last two chapters. The main virtue of search is that it can operate in linear space. Therefore, a hybrid of search and inference allows a structure-driven tradeoff of space and time. Two such hybrids are presented next. We demonstrate the principles of the hybrids in the context of tree-search; however, all these ideas can be extended to the case where the cutset is traversed as an AND/OR search tree or graph, as we will discuss.

7.1 The cycle-cutset and w-cutset schemes. The algorithms presented in this section exploit the fact that variable instantiation changes the effective connectivity of the primal graph. Consider a constraint problem whose graph is given in Figure 7.1a. For this problem, instantiating X2 to some value, say a, renders the choices of values for X1 and X5 independent, as if the pathway X1 − X2 − X5 were blocked at X2. Similarly, this instantiation blocks dependency along the pathway X1 − X2 − X4, leaving only one path between any two variables. In other words, given that X2 was ...
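The blocking effect described above can be checked mechanically: a set of variables is a cycle-cutset exactly when deleting it leaves the primal graph cycle-free, so the rest can be solved by tree-search for each cutset assignment. A small sketch (the adjacency structure below is illustrative; the abstract's Figure 7.1a is not reproduced here):

```python
def is_forest(adj, removed):
    """True iff the undirected graph `adj`, with the vertices in
    `removed` deleted, contains no cycle."""
    seen = set()
    for start in adj:
        if start in removed or start in seen:
            continue
        parent = {start: None}
        seen.add(start)
        stack = [start]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if v in removed:
                    continue
                if v not in seen:
                    seen.add(v)
                    parent[v] = u
                    stack.append(v)
                elif v != parent[u]:
                    # A non-tree edge in an undirected graph means a cycle.
                    return False
    return True

# Illustrative primal graph with a cycle X2 - X4 - X5 - X2.
graph = {
    "X1": ["X2"],
    "X2": ["X1", "X4", "X5"],
    "X4": ["X2", "X5"],
    "X5": ["X2", "X4"],
}
```

Here `is_forest(graph, set())` is false, while `is_forest(graph, {"X2"})` is true: {X2} is a cycle-cutset, matching the intuition that instantiating X2 blocks every cyclic pathway.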

### Chapter 10 Bounding Inference: mini-bucket

"... and mini-clustering schemes Up to now we focused almost exclusively on exact algorithms for processing graphical models and we emphasized the two styles of inference, exemplified by variable elimination schemes, and search, or conditioning, exemplified by AND/OR search or backtracking search. We als ..."

Abstract

and mini-clustering schemes. Up to now we have focused almost exclusively on exact algorithms for processing graphical models, and we emphasized the two styles: inference, exemplified by variable-elimination schemes, and search, or conditioning, exemplified by AND/OR search or backtracking search. We also showed that hybrids of search and inference are effective and can be used to trade space for time. Clearly, due to the hardness of the tasks we address, some networks cannot be processed exactly: their structure is not sparse enough, their treewidth is too high, and their functions do not possess any internal structure that can be exploited. In such cases approximation algorithms are the only choice. Approximation algorithms can be designed to approximate either an inference scheme or a search scheme. Bounded-inference algorithms, on which this chapter focuses, approximate inference, while sampling schemes can be viewed as approximating search. This chapter presents a class of approximation algorithms that bound the dimensionality of the dependencies created by inference. This yields a collection of parameterized schemes, such as mini-buckets, mini-clustering, and iterative join-graph propagation, that offer an adjustable trade-off between accuracy and efficiency. It has been shown that approximation within given relative error bounds is NP-hard [87, 98]; nevertheless, there are approximation strategies that work well in practice. The key step partitions bucket(X) = {h_1, ..., h_n} into mini-buckets such as {h_1, ..., h_r} and {h_{r+1}, ..., h_n}, each processed separately.
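The mini-bucket step partitions the functions of bucket(X) so that each group's combined scope stays within a bound on the number of variables (the i-bound). A hedged sketch of one greedy partitioning (the scope representation and greedy order are assumptions, not the book's exact procedure):

```python
def partition_bucket(scopes, i_bound):
    """Greedily pack function scopes (sets of variable names) into
    mini-buckets whose combined scope has at most i_bound variables.
    Returns the list of mini-buckets, each a list of scopes."""
    mini_buckets = []  # each entry: [combined_scope, [member scopes]]
    for s in scopes:
        for mb in mini_buckets:
            if len(mb[0] | s) <= i_bound:  # fits without exceeding i-bound
                mb[0] |= s
                mb[1].append(s)
                break
        else:
            mini_buckets.append([set(s), [s]])
    return [mb[1] for mb in mini_buckets]
```

With scopes {X,A}, {X,B}, {X,C} and i-bound 3, the first two scopes share a mini-bucket and the third opens a new one; eliminating X separately in each mini-bucket is what yields the bound rather than the exact result.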

### Constraint Satisfaction and Fair Multi-Objective Optimization Problems: Foundations, Complexity, and Islands of Tractability

"... An extension of the CSP optimization framework tailored to identify fair solutions to instances involving multiple optimization functions is studied. Two settings are considered, based on the maximization of the minimum value over all the given functions (MAX-MIN approach) and on its lexicographical ..."

Abstract

An extension of the CSP optimization framework tailored to identify fair solutions to instances involving multiple optimization functions is studied. Two settings are considered, based on the maximization of the minimum value over all the given functions (MAX-MIN approach) and on its lexicographic refinement where, over all solutions maximizing the minimum value, those maximizing the second minimum value are preferred, and so on, until all functions are considered (LEXMAX-MIN approach). For both settings, the complexity of computing an optimal solution is analyzed and the tractability frontier is charted for acyclic instances, w.r.t. the number and the domains of the functions to be optimized. Larger islands of tractability are then identified via a novel structural approach, based on a notion of guard that is designed to deal with the interactions among constraint scopes and optimization functions.
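On a finite, explicitly enumerated solution set, the two fairness criteria can be sketched directly (function names and the brute-force setting are illustrative; the paper's contribution is precisely to avoid such enumeration on tractable classes):

```python
def max_min(solutions, objectives):
    """Keep the solutions maximizing the smallest objective value."""
    best = max(min(f(s) for f in objectives) for s in solutions)
    return [s for s in solutions if min(f(s) for f in objectives) == best]

def lexmax_min(solutions, objectives):
    """LEXMAX-MIN: compare the ascending sorted value vectors
    lexicographically, refining MAX-MIN tie-breaks."""
    key = lambda s: sorted(f(s) for f in objectives)
    best = max(key(s) for s in solutions)
    return [s for s in solutions if key(s) == best]
```

For solutions (1,5), (2,2), (2,4) under the two coordinate objectives, MAX-MIN keeps both (2,2) and (2,4) (minimum value 2), while LEXMAX-MIN keeps only (2,4), whose second-smallest value is larger.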

### A Greedy Algorithm for Constructing a Low-Width Generalized Hypertree Decomposition

"... We propose a greedy algorithm which, given a hypergraph H and a positive integer k, produces a hypertree decomposition of width less than or equal to 3k − 1, or determines that H does not have a generalized hypertree-width less than k. The running time of this algorithm is O(m k+2 n), wherem is the ..."

Abstract

We propose a greedy algorithm which, given a hypergraph H and a positive integer k, produces a hypertree decomposition of width at most 3k − 1, or determines that H does not have generalized hypertree-width less than k. The running time of this algorithm is O(m^(k+2) n), where m is the number of hyperedges and n is the number of vertices; for constant k, this is polynomial. The concepts of (generalized) hypertree decomposition and (generalized) hypertree-width were introduced by Gottlob et al. Many important NP-complete problems in database theory and artificial intelligence are polynomially solvable for classes of instances associated with hypergraphs of bounded hypertree-width. Gottlob et al. also developed a polynomial-time algorithm, det-k-decomp, which, given a hypergraph H and a constant k, computes a hypertree decomposition of width at most k if the hypertree-width of H is at most k. The running time of det-k-decomp is O(m^(2k) n^2) in the worst case, where m and n are the number of hyperedges and the number of vertices, respectively. The proposed algorithm is faster than this. The key step of our algorithm is checking whether a set of hyperedges is an obstacle to a hypergraph having low generalized hypertree-width. We call such a local hypergraph structure a k-hyperconnected set. If a hypergraph contains a k-hyperconnected set of size at least 2k, it has hypertree-width at least k. Adler et al. propose another obstacle called a k-hyperlinked set. We discuss the difference between the two concepts with examples.
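While computing a low-width decomposition is the hard part, verifying a candidate generalized hypertree decomposition and reading off its width is straightforward. A sketch under the standard definition of a GHD (the data layout is an assumption for illustration):

```python
def ghd_width(edges, tree, chi, lam):
    """Check that (tree, chi, lam) is a generalized hypertree
    decomposition of the hypergraph `edges` and return its width.
    `tree`: adjacency dict over decomposition nodes; `chi`: node -> bag
    (set of vertices); `lam`: node -> list of covering hyperedges."""
    # (1) Every hyperedge must be contained in some bag.
    assert all(any(set(e) <= chi[t] for t in tree) for e in edges)
    # (2) Every bag must be covered by the union of its lambda-edges.
    assert all(chi[t] <= set().union(*map(set, lam[t])) for t in tree)
    # (3) For each vertex, the nodes whose bags contain it must induce
    # a connected subtree of the decomposition tree.
    for v in set().union(*map(set, edges)):
        nodes = {t for t in tree if v in chi[t]}
        seen, stack = set(), [next(iter(nodes))]
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.add(t)
                stack.extend(u for u in tree[t] if u in nodes)
        assert seen == nodes
    # Width = largest lambda-label over all nodes.
    return max(len(lam[t]) for t in tree)
```

For the triangle hypergraph {a,b}, {b,c}, {c,a}, a one-node decomposition with bag {a,b,c} covered by two hyperedges verifies with width 2.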