Results 1-10 of 45
On the Hardness of Approximate Reasoning
1996
"... Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider va ..."
Abstract

Cited by 219 (13 self)
Many AI problems, when formalized, reduce to evaluating the probability that a propositional expression is true. In this paper we show that this problem is computationally intractable even in surprisingly restricted cases and even if we settle for an approximation to this probability. We consider various methods used in approximate reasoning, such as computing degree of belief and Bayesian belief networks, as well as reasoning techniques such as constraint satisfaction and knowledge compilation that use approximation to avoid computational difficulties, and reduce them to model-counting problems over a propositional domain. We prove that counting satisfying assignments of propositional languages is intractable even for Horn and monotone formulae, and even when the size of clauses and the number of occurrences of the variables are extremely limited. This should be contrasted with the case of deductive reasoning, where Horn theories and theories with binary clauses are distinguished by the e...
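As a concrete illustration of the model-counting problem this abstract reduces to, here is a minimal brute-force counter over DIMACS-style clauses (a sketch of mine, not code from the paper); its exponential enumeration is the naive baseline that the hardness results say cannot be escaped even for Horn and monotone formulae.

```python
from itertools import product

def count_models(clauses, num_vars):
    """Count satisfying assignments of a CNF formula by brute force.

    Each clause is a list of non-zero ints: k means variable k appears
    positively, -k means it is negated (DIMACS-style literals).  The loop is
    exponential in num_vars, which is exactly the #SAT problem shown to be
    intractable even for Horn and monotone formulae.
    """
    count = 0
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            count += 1
    return count

# A tiny monotone 2-CNF over 3 variables: (x1 or x2) and (x2 or x3).
print(count_models([[1, 2], [2, 3]], 3))   # -> 5 of the 8 assignments
```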
A Knowledge Compilation Map
Journal of Artificial Intelligence Research, 2002
"... We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime. ..."
Abstract

Cited by 159 (22 self)
We propose a perspective on knowledge compilation which calls for analyzing different compilation approaches according to two key dimensions: the succinctness of the target compilation language, and the class of queries and transformations that the language supports in polytime.
Decomposable negation normal form
Journal of the ACM, 2001
"... Abstract. Knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. According to this approach, the reasoning process is split into two phases: an offline compilation phase and an online quer ..."
Abstract

Cited by 109 (18 self)
Knowledge compilation has been emerging recently as a new direction of research for dealing with the computational intractability of general propositional reasoning. According to this approach, the reasoning process is split into two phases: an offline compilation phase and an online query-answering phase. In the offline phase, the propositional theory is compiled into some target language, which is typically a tractable one. In the online phase, the compiled target is used to efficiently answer a (potentially) exponential number of queries. The main motivation behind knowledge compilation is to push as much of the computational overhead as possible into the offline phase, in order to amortize that overhead over all online queries. Another motivation behind compilation is to produce very simple online reasoning systems, which can be embedded cost-effectively into primitive computational platforms, such as those found in consumer electronics. One of the key aspects of any compilation approach is the target language into which the propositional theory is compiled. Previous target languages included Horn theories, prime implicates/implicants and ordered binary decision diagrams (OBDDs). We propose in this paper a new target compilation language, known as decomposable negation normal form (DNNF), and present a number of its properties that make it of interest to the broad community. Specifically, we ...
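To make the decomposability property concrete: in a DNNF circuit the conjuncts of every AND node mention disjoint sets of variables, which reduces satisfiability to a single bottom-up pass. The sketch below uses a toy node encoding of my own, not the paper's; it only illustrates why the check is linear-time once decomposability is guaranteed.

```python
# Node encoding (assumed for illustration): ("lit", "x") or ("lit", "-x"),
# ("and", [children]) with variable-disjoint children, or ("or", [children]).

def satisfiable(node):
    """Linear-time satisfiability test, valid only for decomposable AND nodes."""
    kind = node[0]
    if kind == "lit":
        return True                              # a lone literal is always satisfiable
    if kind == "and":
        # Decomposability: children share no variables, so their satisfying
        # assignments can be combined without conflict.
        return all(satisfiable(child) for child in node[1])
    if kind == "or":
        return any(satisfiable(child) for child in node[1])
    raise ValueError(f"unknown node kind: {kind!r}")

# (x and y) or (not x and z): each conjunction's conjuncts use distinct variables.
circuit = ("or", [("and", [("lit", "x"), ("lit", "y")]),
                  ("and", [("lit", "-x"), ("lit", "z")])])
print(satisfiable(circuit))   # -> True
```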
The Use of Classifiers in Sequential Inference
2001
"... We study the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. In particular, we develop two general approaches for an important subproblem  identifying phrase structure. The first is a Markovian appro ..."
Abstract

Cited by 90 (34 self)
We study the problem of combining the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints. In particular, we develop two general approaches for an important subproblem: identifying phrase structure. The first is a Markovian approach that extends standard HMMs to allow the use of a rich observation structure and of general classifiers to model state-observation dependencies. The second is an extension of constraint satisfaction formalisms. We develop efficient combination algorithms under both models and study them experimentally in the context of shallow parsing. In many situations it is necessary to make decisions that depend on the outcomes of several different classifiers in a way that provides a coherent inference that satisfies some constraints, such as the sequential nature of the data or other domain-specific constraints. Consider, for example, the problem of chunking natural language sentences ...
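The Markovian approach described here treats the classifiers' outputs as per-position scores and searches for the best state sequence that respects the constraints. The following is a generic Viterbi-style sketch over hypothetical BIO chunking tags and scores, not the authors' model; the score and transition tables are assumptions for illustration.

```python
import math

def best_sequence(scores, transitions, states):
    """Return the highest-scoring tag sequence.

    scores[t][s]        : classifier score for tag s at position t (assumed given)
    transitions[(p, s)] : additive score for moving from tag p to tag s;
                          use -inf to encode a hard constraint.
    """
    best = {s: scores[0].get(s, -math.inf) for s in states}
    back = []
    for t in range(1, len(scores)):
        new_best, pointers = {}, {}
        for s in states:
            prev, val = max(((p, best[p] + transitions.get((p, s), -math.inf))
                             for p in states), key=lambda x: x[1])
            new_best[s] = val + scores[t].get(s, -math.inf)
            pointers[s] = prev
        best = new_best
        back.append(pointers)
    # Recover the path from the back-pointers.
    tag = max(best, key=best.get)
    path = [tag]
    for pointers in reversed(back):
        tag = pointers[tag]
        path.append(tag)
    return list(reversed(path))

states = ["O", "B", "I"]
scores = [{"O": 0.1, "B": 2.0, "I": 0.3},
          {"O": 0.2, "B": 0.1, "I": 1.5},
          {"O": 1.0, "B": 0.4, "I": 0.2}]
transitions = {(p, s): 0.0 for p in states for s in states}
transitions[("O", "I")] = -math.inf     # constraint: 'I' cannot follow 'O'
print(best_sequence(scores, transitions, states))   # -> ['B', 'I', 'O']
```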
Default-reasoning with models
"... Reasoning with modelbased representations is an intuitive paradigm, which has been shown to be theoretically sound and to possess some computational advantages over reasoning with formulabased representations of knowledge. In this paper we present more evidence to the utility of such representatio ..."
Abstract

Cited by 79 (18 self)
Reasoning with model-based representations is an intuitive paradigm, which has been shown to be theoretically sound and to possess some computational advantages over reasoning with formula-based representations of knowledge. In this paper we present more evidence for the utility of such representations. In real-life situations, one normally completes a lot of missing "context" information when answering queries. We model this situation by augmenting the available knowledge about the world with context-specific information; we show that reasoning with model-based representations can be done efficiently in the presence of varying context information. We then consider the task of default reasoning. We show that default reasoning is a generalization of reasoning within context, in which the reasoner has many "context" rules, which may be conflicting. We characterize the cases in which model-based reasoning supports efficient default reasoning and develop algorithms that efficiently handle fragments of Reiter's default logic. In particular, this includes cases in which performing the default reasoning task with the traditional, formula-based representation is intractable. Further, we argue that these results support an incremental view of reasoning in a natural way.
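A rough illustration of what reasoning with model-based representations means operationally (my sketch, with made-up propositions, not the paper's algorithm): the knowledge base is stored as a set of models, and a query is accepted if every stored model satisfies it. The paper's results concern when such a set of models makes this test sound, complete, and efficient.

```python
def entails(models, query):
    """models: iterable of dicts mapping variable -> bool; query: predicate over a model."""
    return all(query(m) for m in models)

# A toy knowledge base represented by three of its models (hypothetical example).
kb_models = [{"rain": True,  "wet": True,  "cold": False},
             {"rain": False, "wet": False, "cold": True},
             {"rain": False, "wet": True,  "cold": True}]

# Query: does the KB entail (rain -> wet)?
print(entails(kb_models, lambda m: (not m["rain"]) or m["wet"]))   # -> True
# Query: does the KB entail cold?
print(entails(kb_models, lambda m: m["cold"]))                     # -> False
```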
A Bayesian Approach to Tackling Hard Computational Problems
In UAI, 2001
"... We describe research and results centering on the construction and use of Bayesian models that can predict the run time of problem solvers. Our efforts are motivated by observations of high variance in the time required to solve instances for several challenging problems. The methods ..."
Abstract

Cited by 63 (9 self)
We describe research and results centering on the construction and use of Bayesian models that can predict the run time of problem solvers. Our efforts are motivated by observations of high variance in the time required to solve instances of several challenging problems. The methods ...
A Learning Approach to Shallow Parsing
In Proceedings of EMNLP-WVLC'99, Association for Computational Linguistics, 1999
"... A SNoW based learning approach to shallow parsing tasks is presented and studied experimentally. The approach learns to identify syntactic patterns by combining simple predictors to produce a coherent inference. Two instantiations of this approach are studied and experimental results for NounPhrase ..."
Abstract

Cited by 60 (23 self)
A SNoW-based learning approach to shallow parsing tasks is presented and studied experimentally. The approach learns to identify syntactic patterns by combining simple predictors to produce a coherent inference. Two instantiations of this approach are studied, and experimental results for Noun Phrase (NP) and Subject-Verb (SV) phrases that compare favorably with the best published results are presented. In doing that, we compare two ways of modeling the problem of learning to recognize patterns and suggest that shallow parsing patterns are better learned using open/close predictors than using inside/outside predictors.
Learning to Take Actions
1998
"... We formalize a model for supervised learning of action strategies in dynamic stochastic domains and show that PAClearning results on Occam algorithms hold in this model as well. We then identify a class of rulebased action strategies for which polynomial time learning is possible. The representati ..."
Abstract

Cited by 50 (8 self)
We formalize a model for supervised learning of action strategies in dynamic stochastic domains and show that PAC-learning results on Occam algorithms hold in this model as well. We then identify a class of rule-based action strategies for which polynomial-time learning is possible. The representation of strategies is a generalization of decision lists; strategies include rules with existentially quantified conditions, simple recursive predicates, and small internal state, but are syntactically restricted. We also study the learnability of hierarchically composed strategies, where a subroutine already acquired can be used as a basic action in a higher-level strategy. We prove some positive results in this setting, but also show that in some cases the hierarchical learning problem is computationally hard.
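Since the strategy representation generalizes decision lists, a plain decision-list strategy is a useful mental model: rules are tried in order and the first condition that holds determines the action. The sketch below, with invented state features and actions, is only that base case; the paper's class additionally allows existentially quantified conditions, simple recursion, and limited internal state.

```python
def decision_list_strategy(rules, default_action):
    """Build a strategy from an ordered list of (condition, action) rules."""
    def act(state):
        for condition, action in rules:
            if condition(state):          # first matching rule fires
                return action
        return default_action
    return act

# Hypothetical grid-world features and actions, purely for illustration.
strategy = decision_list_strategy(
    rules=[(lambda s: s["enemy_adjacent"], "retreat"),
           (lambda s: s["has_key"] and s["at_door"], "open_door"),
           (lambda s: not s["has_key"], "search_for_key")],
    default_action="move_forward")

print(strategy({"enemy_adjacent": False, "has_key": True, "at_door": True}))  # -> open_door
```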
Learning Bayesian Nets that Perform Well
In UAI97, 1997
"... A Bayesian net (BN) is more than a succinct way to encode a probabilistic distribution; it also corresponds to a function used to answer queries. A BN can therefore be evaluated by the accuracy of the answers it returns. Many algorithms for learning BNs, however, attempt to optimize another criterio ..."
Abstract

Cited by 47 (17 self)
A Bayesian net (BN) is more than a succinct way to encode a probability distribution; it also corresponds to a function used to answer queries. A BN can therefore be evaluated by the accuracy of the answers it returns. Many algorithms for learning BNs, however, attempt to optimize another criterion (usually likelihood, possibly augmented with a regularizing term), which is independent of the distribution of queries that are posed. This paper takes the "performance criteria" seriously, and considers the challenge of computing the BN whose performance, read "accuracy over the distribution of queries", is optimal. We show that many aspects of this learning task are more difficult than the corresponding subtasks in the standard model. Many tasks require answering questions; this model applies, for example, to both expert systems th...
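A toy rendering of the "accuracy over the distribution of queries" criterion (numbers and structure are invented for illustration; this is not the paper's algorithm): a candidate net's answers to a single query family P(Y=1 | X=x) are scored by expected error under the distribution of queries actually posed, rather than by the likelihood of training data.

```python
true_p_y_given_x = {0: 0.9, 1: 0.2}   # ground-truth conditional P(Y=1 | X=x)
candidate        = {0: 0.8, 1: 0.3}   # the learned net's answer to the same queries
query_dist       = {0: 0.7, 1: 0.3}   # how often each query P(Y=1 | X=x) is posed

# Performance criterion: expected squared error over the query distribution.
expected_sq_error = sum(query_dist[x] * (candidate[x] - true_p_y_given_x[x]) ** 2
                        for x in (0, 1))
print(expected_sq_error)   # -> approximately 0.01 (0.7*0.01 + 0.3*0.01)
```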
Is Intractability of Non-Monotonic Reasoning a Real Drawback?
Artificial Intelligence, 1996
"... Several studies about computational complexity of nonmonotonic reasoning (NMR) showed that nonmonotonic inference is significantly harder than classical, monotonic inference. This contrasts with the general idea that NMR can be used to make knowledge representation and reasoning simpler, not harde ..."
Abstract

Cited by 43 (8 self)
Several studies of the computational complexity of nonmonotonic reasoning (NMR) have shown that nonmonotonic inference is significantly harder than classical, monotonic inference. This contrasts with the general idea that NMR can be used to make knowledge representation and reasoning simpler, not harder. In this paper we show that, to some extent, NMR fulfills the representation goal. In particular, we prove that nonmonotonic formalisms such as circumscription and default logic allow for a much more compact and natural representation of propositional knowledge than propositional calculus. Proofs are based on a suitable definition of a compilable inference problem, and on non-uniform complexity classes. Some results about the intractability of circumscription and default logic can therefore be interpreted as the price one has to pay for having such an extra-compact representation. On the other hand, intractability of inference and compactness of representation are not equivalent notions: we ex...