###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)*

"... The search for a precise measure of what hardness of SAT instances means for state-of-the-art solvers is a relevant research question. Among others, the space complexity of tree-like resolution (also called hardness), the minimal size of strong backdoors and of cycle-cutsets, and the treewidth can be used for this purpose. We propose the use of the tre ..."

Abstract

The search for a precise measure of what hardness of SAT instances means for state-of-the-art solvers is a relevant research question. Among others, the space complexity of tree-like resolution (also called hardness), the minimal size of strong backdoors and of cycle-cutsets, and the treewidth can be used for this purpose. We propose the use of the tree-like space complexity as a solid candidate to be the best measure for solvers based on DPLL. To support this thesis we provide a comparison with the other mentioned measures. We also conduct an experimental investigation to show how the proposed measure characterizes the hardness of random and industrial instances.
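
For intuition, the tree-like space measure is known to correspond to the Horton-Strahler number of the refutation tree, which a few lines compute; the tree encoding below is our own illustration, not the paper's:

```python
def strahler(tree):
    """Horton-Strahler number of a binary tree.

    A tree is either None (a leaf) or a pair (left, right).
    For a tree-like resolution refutation of this shape, the
    number measures the space needed to traverse it.
    """
    if tree is None:
        return 1
    left, right = strahler(tree[0]), strahler(tree[1])
    # Two equally hard subtrees force one extra unit of space.
    return max(left, right) if left != right else left + 1

# A balanced shape needs more space than a path-like one.
balanced = ((None, None), (None, None))
caterpillar = (None, (None, (None, None)))
```

Here `strahler(balanced)` is 3 while `strahler(caterpillar)` is 2, reflecting that balanced refutations are the space-expensive ones.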

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* Horn Complements: Towards Horn-to-Horn Belief Revision

"... Horn-to-Horn belief revision asks for the revision of a Horn knowledge base such that the revised knowledge base is also Horn. Horn knowledge bases are important whenever one is concerned with efficiency—of computing inferences, of knowledge acquisition, etc. Horn-to-Horn belief revision could be of ..."

Abstract

Horn-to-Horn belief revision asks for the revision of a Horn knowledge base such that the revised knowledge base is also Horn. Horn knowledge bases are important whenever one is concerned with efficiency—of computing inferences, of knowledge acquisition, etc. Horn-to-Horn belief revision could be of interest, in particular, as a component of any efficient system requiring large commonsense knowledge bases that may need revisions because, for example, new contradictory information is acquired. Recent results on belief revision for general logics show that the existence of a belief contraction operator satisfying the generalized AGM postulates is equivalent to the existence of a complement. Here we provide a first step towards efficient Horn-to-Horn belief revision, by characterizing the existence of a complement of a Horn consequence of a Horn knowledge base. A complement exists if and only if the Horn consequence is not the consequence of a modified knowledge base obtained from the original by an operation called body building. This characterization leads to the efficient construction of a complement whenever it exists.
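
The efficiency concern driving Horn-to-Horn revision is that Horn consequence can be decided by simple forward chaining; a minimal sketch (the rule encoding is our own, and a serious implementation would use the linear-time algorithm of Dowling and Gallier):

```python
def horn_entails(kb, goal):
    """Decide whether a propositional Horn knowledge base entails
    an atom, by forward chaining to a fixpoint.

    kb is a list of (body, head) rules, where body is a set of
    atoms and head is a single atom; a fact has an empty body.
    """
    derived, changed = set(), True
    while changed:
        changed = False
        for body, head in kb:
            if head not in derived and body <= derived:
                derived.add(head)
                changed = True
    return goal in derived

# a.   b :- a.   c :- a, b.
kb = [(set(), "a"), ({"a"}, "b"), ({"a", "b"}, "c")]
```

With this `kb`, the atom `c` is entailed while an atom that heads no applicable rule is not.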

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* Hyperequivalence of Logic Programs with Respect to Supported Models

"... Recent research in nonmonotonic logic programming has focused on program equivalence relevant for program optimization and modular programming. So far, most results concern the stable-model semantics. However, other semantics for logic programs are also of interest, especially the semantics of suppo ..."

Abstract

Recent research in nonmonotonic logic programming has focused on notions of program equivalence relevant for program optimization and modular programming. So far, most results concern the stable-model semantics. However, other semantics for logic programs are also of interest, especially the semantics of supported models which, when properly generalized, is closely related to the autoepistemic logic of Moore. In this paper, we consider a framework of equivalence notions for logic programs under the supported (minimal) model semantics and provide characterizations for this framework in model-theoretic terms. We use these characterizations to derive complexity results concerning testing hyperequivalence of logic programs with respect to supported (minimal) models.
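
A supported model can be checked directly from its definition: the set of atoms must satisfy every rule, and every true atom must be the head of a rule whose body holds. A small illustrative checker, with a (head, positive body, negative body) encoding of our own devising:

```python
def is_supported_model(program, m):
    """Check whether the atom set m is a supported model.

    program is a list of (head, pos, neg) rules with pos/neg the
    sets of atoms in the positive/negative body.  m is supported
    iff m satisfies every rule and every atom of m is the head of
    some rule whose body holds in m.
    """
    def body_holds(pos, neg):
        return pos <= m and not (neg & m)

    # m must be a classical model of the program ...
    if any(body_holds(pos, neg) and head not in m
           for head, pos, neg in program):
        return False
    # ... and every true atom needs a supporting rule.
    return all(any(head == atom and body_holds(pos, neg)
                   for head, pos, neg in program)
               for atom in m)

# p :- not q.   q :- not p.
prog = [("p", set(), {"q"}), ("q", set(), {"p"})]
```

For `prog`, both `{p}` and `{q}` are supported models, while `{p, q}` and the empty set are not.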

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* Manifold Integration with Markov Random Walks

"... Most manifold learning methods consider only one similarity matrix to induce a low-dimensional manifold embedded in data space. In practice, however, we often use multiple sensors at a time, so that each sensor yields a different similarity matrix derived from the same objects. In such a c ..."

Abstract

Most manifold learning methods consider only one similarity matrix to induce a low-dimensional manifold embedded in data space. In practice, however, we often use multiple sensors at a time, so that each sensor yields a different similarity matrix derived from the same objects. In such a case, manifold integration is a desirable task: combining these similarity matrices into a compromise matrix that faithfully reflects the multiple sources of sensory information. A small number of methods exist for manifold integration, including a method based on reproducing kernel Krein space (RKKS) and DISTATIS, where the former is restricted to the case of only two manifolds and the latter considers a linear combination of normalized similarity matrices as a compromise matrix. In this paper we present a new manifold integration method, Markov random walk on multiple manifolds (RAMS), which integrates transition probabilities defined on each manifold to compute a compromise matrix. Numerical experiments confirm that RAMS finds more informative manifolds with a desirable projection property.
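
The integration step can be pictured in a few lines: row-normalize each similarity matrix into a Markov transition matrix and combine them. The uniform averaging below is our simplification, not necessarily the paper's combination rule:

```python
import numpy as np

def integrate_manifolds(similarities):
    """Turn each similarity matrix into a Markov transition matrix
    by row normalization, then average the transition matrices
    into a single compromise matrix (uniform weights)."""
    transitions = [np.asarray(s, dtype=float) for s in similarities]
    transitions = [t / t.sum(axis=1, keepdims=True) for t in transitions]
    return np.mean(transitions, axis=0)

s1 = [[1.0, 1.0], [1.0, 3.0]]
s2 = [[2.0, 2.0], [0.0, 4.0]]
p = integrate_manifolds([s1, s2])  # rows of p sum to 1
```

Because each input is normalized before combination, the compromise matrix is itself a valid transition matrix, which is what a random-walk embedding requires.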

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* Computing Observation Vectors for Max-Fault Min-Cardinality Diagnoses

"... Model-Based Diagnosis (MBD) typically focuses on diagnoses, minimal under some minimality criterion, e.g., the minimal-cardinality set of faulty components that explain an observation α. However, for different α there may be minimal-cardinality diagnoses of differing cardinalities, and several appli ..."

Abstract

Model-Based Diagnosis (MBD) typically focuses on diagnoses, minimal under some minimality criterion, e.g., the minimal-cardinality set of faulty components that explain an observation α. However, for different α there may be minimal-cardinality diagnoses of differing cardinalities, and several applications (such as test pattern generation and benchmark model analysis) need to identify the α leading to the max-cardinality diagnosis amongst them. We denote this problem as a Max-Fault Min-Cardinality (MFMC) problem. This paper considers the generation of observations that lead to MFMC diagnoses. We present a near-optimal, stochastic algorithm, called MIRANDA (Max-fault mIn-caRdinAlity observatioN Deduction Algorithm), that computes MFMC observations. Compared to optimal, deterministic approaches such as ATPG, the algorithm has very low cost, allowing us to generate observations corresponding to high-cardinality faults. Experiments show that MIRANDA delivers optimal results on the 74XXX circuits, as well as good MFMC cardinality estimates on the larger ISCAS85 circuits.

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* A New Clause Learning Scheme for Efficient Unsatisfiability Proofs

"... We formalize in this paper a key property of asserting clauses (the most common type of clauses learned by SAT solvers). We show that the formalized property, which is called empowerment, is not exclusive to asserting clauses, and introduce a new class of learned clauses which can also be empowering ..."

Abstract

We formalize in this paper a key property of asserting clauses (the most common type of clauses learned by SAT solvers). We show that the formalized property, which is called empowerment, is not exclusive to asserting clauses, and introduce a new class of learned clauses which can also be empowering. We show empirically that (1) the new class of clauses tends to be much shorter and induce further backtracks than asserting clauses and (2) an empowering subset of this new class of clauses significantly improves the performance of the Rsat solver on unsatisfiable problems.

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* From Comparing Clusterings to Combining Clusterings

"... This paper presents a fast simulated annealing framework for combining multiple clusterings (i.e. clustering ensemble) based on some measures of agreement between partitions, which are originally used to compare two clusterings (the obtained clustering vs. a ground truth clustering) for the evaluati ..."

Abstract

This paper presents a fast simulated annealing framework for combining multiple clusterings (i.e., a clustering ensemble) based on measures of agreement between partitions that were originally used to compare an obtained clustering against a ground-truth clustering when evaluating a clustering algorithm. Although a greedy strategy can optimize these measures as objective functions for the clustering ensemble, it may get stuck in local optima and its computational cost is large. To avoid the local optima, we therefore consider a simulated annealing optimization scheme that operates through single label changes. Moreover, for those measures between partitions that are based on the relationship (joined or separated) of pairs of objects, such as the Rand index, we can update the objective incrementally for each label change, which keeps the simulated annealing optimization scheme computationally feasible. Simulated and real-life experiments demonstrate that the proposed framework achieves superior results.
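
The optimization loop can be sketched as follows; this version recomputes the Rand-index objective from scratch for clarity, whereas the paper's incremental update makes each single-label move much cheaper. The function names and the linear cooling schedule are our own choices:

```python
import math
import random

def rand_index(a, b):
    """Rand index between two labelings of the same objects."""
    n, agree = len(a), 0
    for i in range(n):
        for j in range(i + 1, n):
            agree += (a[i] == a[j]) == (b[i] == b[j])
    return agree / (n * (n - 1) / 2)

def combine(clusterings, k, steps=2000, t0=1.0, seed=0):
    """Simulated-annealing consensus via single-label moves that
    maximize the summed Rand index to all input clusterings."""
    rng = random.Random(seed)
    cur = list(clusterings[0])
    score = lambda c: sum(rand_index(c, b) for b in clusterings)
    cur_score = score(cur)
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9  # linear cooling
        i, label = rng.randrange(len(cur)), rng.randrange(k)
        old = cur[i]
        cur[i] = label
        delta = score(cur) - cur_score
        if delta >= 0 or rng.random() < math.exp(delta / t):
            cur_score += delta  # accept the move
        else:
            cur[i] = old        # reject the move
    return cur

ensemble = [[0, 0, 1, 1], [0, 0, 1, 1], [1, 1, 0, 0]]
consensus = combine(ensemble, k=2)
```

Note that the Rand index only compares co-membership of pairs, so it is invariant to label renaming; the third ensemble member agrees perfectly with the first two despite its swapped labels.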

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* Factored Models for Probabilistic Modal Logic

"... Modal logic represents knowledge that agents have about other agents' knowledge. Probabilistic modal logic further captures probabilistic beliefs about probabilistic beliefs. Models in those logics are useful for understanding and decision making in conversations, bargaining situations, and competi ..."

Abstract

Modal logic represents knowledge that agents have about other agents' knowledge. Probabilistic modal logic further captures probabilistic beliefs about probabilistic beliefs. Models in these logics are useful for understanding and decision making in conversations, bargaining situations, and competitions. Unfortunately, probabilistic modal structures are impractical for large real-world applications because they represent their state space explicitly. In this paper we scale up probabilistic modal structures by giving them a factored representation. This representation applies conditional independence for factoring the probabilistic aspect of the structure, as in Bayesian networks (BNs). We also present two exact algorithms and one approximate algorithm for reasoning about the truth value of probabilistic modal logic queries over a model encoded in factored form. The first exact algorithm applies inference in BNs to answer a limited class of queries. Our second exact method applies a variable elimination scheme and is applicable without restrictions. Our approximate algorithm uses sampling and can be used for applications with very large models. Given a query, it computes an answer and its confidence level efficiently.

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* Credulous Resolution for Answer Set Programming

"... The paper presents a calculus based on resolution for credulous reasoning in Answer Set Programming. The new approach allows a top-down and goal directed resolution, in the same spirit as traditional SLD-resolution. The proposed credulous resolution can be used in query-answering with nonground quer ..."

Abstract

The paper presents a calculus based on resolution for credulous reasoning in Answer Set Programming. The new approach allows a top-down, goal-directed resolution, in the same spirit as traditional SLD-resolution. The proposed credulous resolution can be used for query answering with non-ground queries and with non-ground, and possibly infinite, programs. Soundness and completeness results for the resolution procedure are proved for large classes of logic programs. The resolution procedure is also extended to handle some traditional syntactic extensions used in Answer Set Programming, such as choice rules and constraints. The paper also describes an initial implementation of a system for credulous reasoning in Answer Set Programming.
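
For contrast with the credulous calculus, the traditional SLD-resolution it takes its spirit from is easy to sketch for propositional definite programs (the encoding is our own illustration, not the paper's system):

```python
def sld(program, goals, depth=50):
    """Top-down SLD-resolution for propositional definite programs:
    resolve the leftmost goal against each rule with that head.

    program is a list of (head, body) pairs, body a tuple of atoms;
    depth bounds the recursion so looping programs still terminate.
    """
    if not goals:
        return True   # empty goal clause: success
    if depth == 0:
        return False  # give up on this branch
    first, rest = goals[0], goals[1:]
    return any(sld(program, tuple(body) + rest, depth - 1)
               for head, body in program if head == first)

# p :- q, r.   q.   r :- q.
prog = [("p", ("q", "r")), ("q", ()), ("r", ("q",))]
```

Querying `p` succeeds by resolving it into the subgoals `q, r` and discharging each against the program, whereas an atom with no matching rule fails.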

###
*Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)* Studies in Solution Sampling

"... We introduce novel algorithms for generating random solutions from a uniform distribution over the solutions of a boolean satisfiability problem. Our algorithms operate in two phases. In the first phase, we use a recently introduced SampleSearch scheme to generate biased samples while in the second ..."

Abstract

We introduce novel algorithms for generating random solutions from a uniform distribution over the solutions of a Boolean satisfiability problem. Our algorithms operate in two phases. In the first phase, we use the recently introduced SampleSearch scheme to generate biased samples; in the second phase, we correct the bias using either Sampling/Importance Resampling or the Metropolis-Hastings method. Unlike state-of-the-art algorithms, our algorithms guarantee convergence in the limit. Our empirical results demonstrate the superior performance of our new algorithms over several competing schemes.
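
The second-phase bias correction via Sampling/Importance Resampling can be illustrated generically: draw biased samples, weight each by 1/q(x), and resample in proportion to the weights. The toy two-solution sampler below is our own stand-in for SampleSearch:

```python
import random

def sir_resample(samples, weights, m, rng):
    """Resample m items, each drawn with probability proportional
    to its importance weight (Sampling/Importance Resampling)."""
    return rng.choices(samples, weights=weights, k=m)

rng = random.Random(0)
# Phase-1 stand-in: a biased sampler that proposes solution "s1"
# with probability 0.8 and solution "s2" with probability 0.2.
q = {"s1": 0.8, "s2": 0.2}
biased = rng.choices(list(q), weights=list(q.values()), k=20000)
# Phase 2: weights 1/q(x) correct toward the uniform target
# distribution over the two solutions.
weights = [1.0 / q[x] for x in biased]
uniform = sir_resample(biased, weights, m=10000, rng=rng)
```

After resampling, `uniform` contains the two solutions in roughly equal proportion, illustrating the convergence-in-the-limit guarantee on a toy scale.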