Results 1–10 of 88
Probabilistic Horn abduction and Bayesian networks
Artificial Intelligence, 1993
Cited by 305 (38 self)
This paper presents a simple framework for Horn-clause abduction, with probabilities associated with hypotheses. The framework incorporates assumptions about the rule base and independence assumptions amongst hypotheses. It is shown how any probabilistic knowledge representable in a discrete Bayesian belief network can be represented in this framework. The main contribution is in finding a relationship between logical and probabilistic notions of evidential reasoning. This provides a useful representation language in its own right, providing a compromise between heuristic and epistemic adequacy. It also shows how Bayesian networks can be extended beyond a propositional language. This paper also shows how a language with only (unconditionally) independent hypotheses can represent any probabilistic knowledge, and argues that it is better to invent new hypotheses to explain dependence rather than having to worry about dependence in the language.
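The core idea in the abstract can be sketched in a few lines of Python (an illustrative toy, not Poole's system; the hypotheses, rules, and numbers are invented): independent hypotheses carry prior probabilities, Horn rules derive observations from them, and an explanation's probability is the product of its hypotheses' probabilities.

```python
# Hypotheses with prior probabilities (assumed mutually independent).
hypotheses = {"flu": 0.1, "cold": 0.2}

# Horn rules: head <- body. Bodies here are unit-length for simplicity;
# a full abductive reasoner would take cross-products over body atoms.
rules = [("fever", ["flu"]), ("sneeze", ["cold"]), ("sneeze", ["flu"])]

def explanations(goal):
    """Enumerate sets of hypotheses that derive `goal` via the Horn rules."""
    if goal in hypotheses:
        return [frozenset([goal])]
    out = []
    for head, body in rules:
        if head == goal:
            for atom in body:
                out.extend(explanations(atom))
    return out

def prob(expl):
    """Probability of an explanation: product over its independent hypotheses."""
    p = 1.0
    for h in expl:
        p *= hypotheses[h]
    return p

print(sorted(prob(e) for e in explanations("sneeze")))  # [0.1, 0.2]
```

Summing these per-explanation probabilities (when explanations are disjoint) is what links the logical notion of explanation to the probabilistic notion of evidence in the paper's framework.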
Object-Oriented Bayesian Networks
1997
Cited by 192 (11 self)
Bayesian networks provide a modeling language and associated inference algorithm for stochastic domains. They have been successfully applied in a variety of medium-scale applications. However, when faced with a large complex domain, the task of modeling using Bayesian networks begins to resemble the task of programming using logical circuits. In this paper, we describe an object-oriented Bayesian network (OOBN) language, which allows complex domains to be described in terms of interrelated objects. We use a Bayesian network fragment to describe the probabilistic relations between the attributes of an object. These attributes can themselves be objects, providing a natural framework for encoding part-of hierarchies. Classes are used to provide a reusable probabilistic model which can be applied to multiple similar objects. Classes also support inheritance of model fragments from a class to a subclass, allowing the common aspects of related classes to be defined only once. Our language h...
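The reuse idea can be illustrated with a hedged Python sketch (the class name, CPT, and sensor example are invented for illustration, not the OOBN paper's syntax): a class packages a small network fragment — an input attribute, an output attribute, and a conditional probability table — and is instantiated once but applied to many similar objects.

```python
class NetworkFragment:
    """Reusable fragment: P(output | input) given as a table."""

    def __init__(self, cpt):
        self.cpt = cpt  # maps input value -> P(output = True)

    def output_prob(self, input_dist):
        # Marginalize the input: P(out) = sum_v P(in = v) * P(out | in = v).
        return sum(p * self.cpt[v] for v, p in input_dist.items())

# One class definition, reused for two similar objects (e.g. two sensors
# with different input distributions but the same noise model).
sensor_class = NetworkFragment(cpt={True: 0.9, False: 0.05})
p1 = sensor_class.output_prob({True: 0.5, False: 0.5})
p2 = sensor_class.output_prob({True: 0.1, False: 0.9})
print(round(p1, 3), round(p2, 3))  # 0.475 0.135
```

A subclass overriding only `cpt` would mirror the paper's inheritance of model fragments, with shared structure defined once in the parent class.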
The Independent Choice Logic for modelling multiple agents under uncertainty
Artificial Intelligence, 1997
Cited by 157 (9 self)
Inspired by game theory representations, Bayesian networks, influence diagrams, structured Markov decision process models, logic programming, and work in dynamical systems, the independent choice logic (ICL) is a semantic framework that allows for independent choices (made by various agents, including nature) and a logic program that gives the consequence of choices. This representation can be used as a specification for agents that act in a world, make observations of that world and have memory, as well as a modelling tool for dynamic environments with uncertainty. The rules specify the consequences of an action, what can be sensed and the utility of outcomes. This paper presents a possible-worlds semantics for ICL, and shows how to embed influence diagrams, structured Markov decision processes, and both the strategic (normal) form and extensive (game-tree) form of games within the ...
P-CLASSIC: A tractable probabilistic description logic
In Proceedings of AAAI-97, 1997
Cited by 109 (4 self)
Knowledge representation languages invariably reflect a trade-off between expressivity and tractability. Evidence suggests that the compromise chosen by description logics is a particularly successful one. However, description logic (like all variants of first-order logic) is severely limited in its ability to express uncertainty. In this paper, we present P-CLASSIC, a probabilistic version of the description logic CLASSIC. In addition to terminological knowledge, the language utilizes Bayesian networks to express uncertainty about the basic properties of an individual, the number of fillers for its roles, and the properties of these fillers. We provide a semantics for P-CLASSIC and an effective inference procedure for probabilistic subsumption: computing the probability that a random individual in class C is also in class D. The effectiveness of the algorithm relies on independence assumptions and on our ability to execute lifted inference: reasoning about similar individuals as a gr...
Parameter learning of logic programs for symbolic-statistical modeling
Journal of Artificial Intelligence Research, 2001
Cited by 100 (20 self)
We propose a logical/mathematical framework for statistical parameter learning of parameterized logic programs, i.e. definite clause programs containing probabilistic facts with a parameterized distribution. It extends the traditional least Herbrand model semantics in logic programming to distribution semantics, a possible-world semantics with a probability distribution which is unconditionally applicable to arbitrary logic programs, including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM algorithm, the graphical EM algorithm, that runs for a class of parameterized logic programs representing sequential decision processes where each decision is exclusive and independent. It runs on a new data structure called support graphs describing the logical relationship between observations and their explanations, and learns parameters by computing inside and outside probability generalized for logic programs. The complexity analysis shows that when combined with OLDT search for all explanations for observations, the graphical EM algorithm, despite its generality, has the same time complexity as existing EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside algorithm for PCFGs, and the one for singly connected Bayesian networks that have been developed independently in each research field. Learning experiments with PCFGs using two corpora of moderate size indicate that the graphical EM algorithm can significantly outperform the Inside-Outside algorithm.
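The distribution semantics mentioned above can be rendered as a toy Python enumeration (illustrative only; the facts, rules, and parameters are invented, and a real system avoids exhaustive enumeration): each probabilistic fact holds independently with its parameter, a definite-clause program derives consequences in each possible world, and a query's probability is the total mass of worlds in which it is derivable.

```python
from itertools import product

prob_facts = {"a": 0.3, "b": 0.6}       # parameterized probabilistic facts
rules = [("q", ("a",)), ("q", ("b",))]  # q :- a.  q :- b.

def derivable(goal, world):
    """Is `goal` derivable from the facts true in `world` plus the rules?"""
    if world.get(goal):
        return True
    return any(head == goal and all(derivable(x, world) for x in body)
               for head, body in rules)

def query_prob(goal):
    """Sum the probability mass of every possible world that derives `goal`."""
    total = 0.0
    facts = list(prob_facts)
    for values in product([True, False], repeat=len(facts)):
        world = dict(zip(facts, values))
        mass = 1.0
        for f, v in world.items():
            mass *= prob_facts[f] if v else 1.0 - prob_facts[f]
        if derivable(goal, world):
            total += mass
    return total

# P(q) = 1 - (1 - 0.3)(1 - 0.6) = 0.72, a noisy-or of independent facts.
print(round(query_prob("q"), 2))
```

The paper's support graphs and graphical EM exist precisely to avoid this exponential enumeration while learning the fact parameters from observations.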
Answering Queries from Context-Sensitive Probabilistic Knowledge Bases
Theoretical Computer Science, 1996
Cited by 96 (0 self)
We define a language for representing context-sensitive probabilistic knowledge. A knowledge base consists of a set of universally quantified probability sentences that include context constraints, which allow inference to be focused on only the relevant portions of the probabilistic knowledge. We provide a declarative semantics for our language. We present a query answering procedure which takes a query Q and a set of evidence E and constructs a Bayesian network to compute P(Q|E). The posterior probability is then computed using any of a number of Bayesian network inference algorithms. We use the declarative semantics to prove the query procedure sound and complete. We use concepts from logic programming to justify our approach. Keywords: reasoning under uncertainty, Bayesian networks, probability model construction, logic programming
Lifted first-order probabilistic inference
In Proceedings of IJCAI-05, 19th International Joint Conference on Artificial Intelligence, 2005
Cited by 93 (7 self)
Most probabilistic inference algorithms are specified and processed on a propositional level. In the last decade, many proposals for algorithms accepting first-order specifications have been presented, but in the inference stage they still operate on a mostly propositional representation level. [Poole, 2003] presented a method to perform inference directly on the first-order level, but this method is limited to special cases. In this paper we present the first exact inference algorithm that operates directly on a first-order level, and that can be applied to any first-order model (specified in a language that generalizes undirected graphical models). Our experiments show superior performance in comparison with propositional exact inference.
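The gain from lifting can be shown in miniature (a toy example, not the paper's algorithm; the "sick individuals" scenario is invented): for n exchangeable individuals each independently "sick" with probability p, a propositional approach enumerates 2**n worlds, while a lifted computation exploits the symmetry and never grounds out the individuals.

```python
from itertools import product

def p_any_sick_propositional(n, p):
    """Propositional route: enumerate all 2**n truth assignments."""
    total = 0.0
    for world in product([True, False], repeat=n):
        mass = 1.0
        for v in world:
            mass *= p if v else 1.0 - p
        if any(world):
            total += mass
    return total

def p_any_sick_lifted(n, p):
    # All individuals are interchangeable, so only their count matters:
    # P(at least one sick) = 1 - (1 - p)**n, with no enumeration at all.
    return 1.0 - (1.0 - p) ** n

print(abs(p_any_sick_propositional(10, 0.2) - p_any_sick_lifted(10, 0.2)) < 1e-9)  # True
```

Both functions agree, but the first does exponential work in n while the second is constant-time — the kind of asymmetry the paper's lifted operators generalize to richer first-order models.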
An Algorithm for Probabilistic Least-Commitment Planning
1994
Cited by 88 (2 self)
We define the probabilistic planning problem in terms of a probability distribution over initial world states, a boolean combination of goal propositions, a probability threshold, and actions whose effects depend on the execution-time state of the world and on random chance. Adopting a probabilistic model complicates the definition of plan success: instead of demanding a plan that provably achieves the goal, we seek plans whose probability of success exceeds the threshold. This paper describes a probabilistic semantics for planning under uncertainty, and presents a fully implemented algorithm that generates plans that succeed with probability no less than a user-supplied probability threshold. The algorithm is sound (if it terminates then the generated plan is sufficiently likely to achieve the goal) and complete (the algorithm will generate a solution if one exists).
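The success criterion from the abstract can be sketched as a short Python evaluation (illustrative only, not the paper's plan-generation algorithm; the domain, `flip` action, and numbers are invented): given a distribution over initial states and actions with chance outcomes, a plan is acceptable when its success probability meets the threshold.

```python
def success_probability(initial_dist, plan, goal):
    """Probability that executing `plan` from a random initial state
    satisfies `goal`; each action maps a state to (next_state, prob) pairs."""
    def run(state, steps):
        if not steps:
            return 1.0 if goal(state) else 0.0
        action, rest = steps[0], steps[1:]
        return sum(p * run(next_state, rest) for next_state, p in action(state))
    return sum(p * run(s, plan) for s, p in initial_dist.items())

# Toy domain: the state is True when the goal holds; `flip` achieves it
# with probability 0.8 and otherwise leaves the state alone.
def flip(state):
    return [(True, 0.8), (state, 0.2)]

initial = {False: 0.9, True: 0.1}
p = success_probability(initial, [flip, flip], lambda s: s)
print(p >= 0.9)  # threshold check against a user-supplied 0.9
```

Repeating the action raises the success probability (here from 0.82 after one step to 0.964 after two), which is exactly how a threshold-driven planner decides when a plan is "sufficiently likely" and can stop refining.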
Student Assessment Using Bayesian Nets
International Journal of Human-Computer Studies, 1995
Cited by 71 (9 self)
This paper will focus exclusively on the problem-solving activity. The other activities are described in Martin and VanLehn (1993, in press). This section describes OLAE's input (student behavior) and output (assessment presentation), and the way that OLAE uses the behavioral data to calculate the assessments.
The automated mapping of plans for plan recognition
In Proceedings of the Tenth Annual Conference on Uncertainty in Artificial Intelligence, 1994
Cited by 70 (8 self)
To coordinate with other agents in its environment, an agent needs models of what the other agents are trying to do. When communication is impossible or expensive, this information must be acquired indirectly via plan recognition. Typical approaches to plan recognition start with a specification of the possible plans the other agents may be following and develop special techniques for discriminating among the possibilities. These structures are neither the direct nor the derived output of a planning system. Prior work has not yet addressed the problem of how the plan recognition structures are (or could be) derived from executable plans as generated by planning systems. Furthermore, concerns about building models of agents' actions in all possible worlds lead to a desire for dynamically constructing belief network models for situation-specific plan recognition activities. As a step in this direction, we have developed and implemented methods that take plans, as generated by a planning system, and create a belief network model in support of the plan recognition task. We start from a language designed for plan specification, PRS (Ingrand, Georgeff, & Rao 1992). From a PRS plan, we generate a belief network model that directly serves plan recognition by relating potential observations to the candidate plans. Our methods handle a large variety of plan structures such as conditional branching, subgoaling, and alternative goals. Furthermore, our application domain is coordinated autonomous robotic teams, where sensor-based observations are inherently uncertain. The methodology we have developed handles this uncertainty through explicit modeling, something not necessary in other plan recognition domains (Charniak & Goldman 1993; Goodman & Litman 1990) where observations are certain. An example of a belief network generated by the mapping methods from a set of simple plans for performing a "bounding-overwatch" surveillance task ...