Results 1 – 10 of 27
Model Checking vs. Theorem Proving: A Manifesto
, 1991
Abstract

Cited by 117 (5 self)
We argue that rather than representing an agent's knowledge as a collection of formulas, and then doing theorem proving to see if a given formula follows from an agent's knowledge base, it may be more useful to represent this knowledge by a semantic model, and then do model checking to see if the given formula is true in that model. We discuss how to construct a model that represents an agent's knowledge in a number of different contexts, and then consider how to approach the model-checking problem.
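The contrast the abstract draws can be made concrete with a toy sketch (the worlds, propositions, and accessibility relation below are illustrative assumptions, not the paper's examples): instead of deriving "the agent knows p" from axioms, we evaluate it directly in a small Kripke-style semantic model.

```python
# Hypothetical sketch: checking "the agent knows p" by model checking
# in a small Kripke-style model. Worlds carry propositional valuations;
# `access` maps each world to the worlds the agent considers possible there.

worlds = {
    "w1": {"p": True,  "q": True},
    "w2": {"p": True,  "q": False},
    "w3": {"p": False, "q": True},
}
access = {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}, "w3": {"w3"}}

def knows(world, prop):
    """K(prop) holds at `world` iff prop is true in every accessible world."""
    return all(worlds[w][prop] for w in access[world])

print(knows("w1", "p"))  # True: p holds in both w1 and w2
print(knows("w1", "q"))  # False: q fails in w2
```

Evaluating the formula is a direct traversal of the model, which is the tractability point the abstract is making.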
Causal Networks: Semantics and Expressiveness
, 1990
Abstract

Cited by 96 (8 self)
Dependency knowledge of the form "x is independent of y once z is known" invariably obeys the four graphoid axioms; examples include probabilistic and database dependencies. Often, such knowledge can be represented efficiently with graphical structures such as undirected graphs and directed acyclic graphs (DAGs). In this paper we show that the graphical criterion called d-separation is a sound rule for reading independencies from any DAG based on a causal input list drawn from a graphoid. The rule may be extended to cover DAGs that represent functional dependencies as well as conditional dependencies.
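The d-separation criterion the abstract refers to can be tested mechanically. The sketch below uses the standard "moralized ancestral graph" construction (an equivalent formulation assumed here, not the paper's own proof technique); the chain DAG at the end is an illustrative example.

```python
# Minimal d-separation check: restrict to ancestors of x, y, z;
# moralize (marry co-parents, drop directions); delete z; test reachability.
from itertools import combinations

def d_separated(dag, x, y, z):
    """True if x and y are d-separated given set z in a DAG {node: parents}."""
    # 1. Restrict attention to x, y, z and their ancestors.
    relevant, stack = set(), [x, y, *z]
    while stack:
        n = stack.pop()
        if n not in relevant:
            relevant.add(n)
            stack.extend(dag[n])
    # 2. Moralize: undirected parent-child and parent-parent edges.
    adj = {n: set() for n in relevant}
    for n in relevant:
        for p in dag[n]:
            adj[n].add(p); adj[p].add(n)
        for p, q in combinations(dag[n], 2):
            adj[p].add(q); adj[q].add(p)
    # 3. Delete conditioning nodes, then check whether x still reaches y.
    for n in z:
        for m in adj.pop(n, set()):
            adj[m].discard(n)
    seen, stack = set(), [x]
    while stack:
        n = stack.pop()
        if n == y:
            return False
        if n not in seen:
            seen.add(n)
            stack.extend(adj[n])
    return True

# Chain a -> b -> c: a and c are independent given b, but not marginally.
dag = {"a": [], "b": ["a"], "c": ["b"]}
print(d_separated(dag, "a", "c", {"b"}))  # True
print(d_separated(dag, "a", "c", set()))  # False
```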
Decision Theory in Expert Systems and Artificial Intelligence
 International Journal of Approximate Reasoning
, 1988
Abstract

Cited by 93 (18 self)
Despite their different perspectives, artificial intelligence (AI) and the disciplines of decision science have common roots and strive for similar goals. This paper surveys the potential for addressing problems in representation, inference, knowledge engineering, and explanation within the decision-theoretic framework. Recent analyses of the restrictions of several traditional AI reasoning techniques, coupled with the development of more tractable and expressive decision-theoretic representation and inference strategies, have stimulated renewed interest in decision theory and decision analysis. We describe early experience with simple probabilistic schemes for automated reasoning, review the dominant expert-system paradigm, and survey some recent research at the crossroads of AI and decision science. In particular, we present the belief network and influence diagram representations. Finally, we discuss issues that have not been studied in detail within the expert-systems sett...
Improved learning of Bayesian networks
 Proc. of the Conf. on Uncertainty in Artificial Intelligence
, 2001
Abstract

Cited by 38 (6 self)
Two or more Bayesian network structures are Markov equivalent when the corresponding acyclic digraphs encode the same set of conditional independencies. Therefore, the search space of Bayesian network structures may be organized in equivalence classes, where each of them represents a different set of conditional independencies. The collection of sets of conditional independencies obeys a partial order, the so-called “inclusion order.” This paper discusses in depth the role that the inclusion order plays in learning the structure of Bayesian networks. In particular, this role involves the way a learning algorithm traverses the search space. We introduce a condition for traversal operators, the inclusion boundary condition, which, when it is satisfied, guarantees that the search strategy can avoid local maxima. This is proved under the assumptions that the data is sampled from a probability distribution which is faithful to an acyclic digraph, and the length of the sample is unbounded. The previous discussion leads to the design of a new traversal operator and two new learning algorithms in the context of heuristic search and the Markov Chain Monte Carlo method. We carry out a set of experiments with synthetic and real-world data that show empirically the benefit of striving for the inclusion order when learning Bayesian networks from data.
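The Markov-equivalence classes the abstract organizes its search space around have a well-known graphical characterization (Verma and Pearl's, assumed here as background rather than taken from this paper): two DAGs are equivalent iff they share the same skeleton and the same v-structures. A small sketch with illustrative three-node graphs:

```python
# Markov equivalence via skeleton + v-structures.
# DAGs are given as {node: list_of_parents}.

def skeleton(dag):
    """Undirected edge set of the DAG."""
    return {frozenset((p, n)) for n in dag for p in dag[n]}

def v_structures(dag):
    """Colliders a -> c <- b with a and b non-adjacent."""
    skel = skeleton(dag)
    return {
        (frozenset((a, b)), c)
        for c in dag
        for a in dag[c]
        for b in dag[c]
        if a < b and frozenset((a, b)) not in skel
    }

def markov_equivalent(g1, g2):
    return skeleton(g1) == skeleton(g2) and v_structures(g1) == v_structures(g2)

# a -> b -> c and a <- b <- c encode the same independencies ...
g1 = {"a": [], "b": ["a"], "c": ["b"]}
g2 = {"a": ["b"], "b": ["c"], "c": []}
# ... while the collider a -> b <- c does not.
g3 = {"a": [], "b": ["a", "c"], "c": []}
print(markov_equivalent(g1, g2))  # True
print(markov_equivalent(g1, g3))  # False
```

A structure-learning search that scores one representative per class, as the paper advocates, avoids re-scoring graphs this predicate identifies.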
Lp, A Logic for Representing and Reasoning with Statistical Knowledge
, 1990
Abstract

Cited by 11 (0 self)
This paper presents a logical formalism for representing and reasoning with statistical knowledge. One of the key features of the formalism is its ability to deal with qualitative statistical information. It is argued that statistical knowledge, especially that of a qualitative nature, is an important component of our world knowledge and that such knowledge is used in many different reasoning tasks. The work is further motivated by the observation that previous formalisms for representing probabilistic information are inadequate for representing statistical knowledge. The representation mechanism takes the form of a logic that is capable of representing a wide variety of statistical knowledge, and that possesses an intuitive formal semantics based on the simple notions of sets of objects and probabilities defined over those sets. Furthermore, a proof theory is developed and is shown to be sound and complete. The formalism offers a perspicuous and powerful representational tool for stat...
Trygve Haavelmo and the Emergence of Causal Calculus
, 2012
Abstract

Cited by 10 (3 self)
Haavelmo was the first to recognize the capacity of economic models to guide policies. This paper describes some of the barriers that Haavelmo’s ideas have had (and still have) to overcome, and lays out a logical framework for capturing the relationships between theory, data and policy questions. The mathematical tools that emerge from this framework now enable investigators to answer complex policy and counterfactual questions using embarrassingly simple routines, some by mere inspection of the model’s structure. Several such problems are illustrated by examples, including misspecification tests, identification, mediation and introspection. Finally, we observe that modern economists are largely unaware of the benefits that Haavelmo’s ideas bestow upon them and, as a result, econometric research has not fully utilized modern advances in causal analysis.
Distributed inference in Bayesian networks
 Cybernetics and Systems
, 1994
Abstract

Cited by 7 (3 self)
Bayesian networks originated as a framework for distributed reasoning. In singly-connected networks, there exists an elegant inference algorithm that can be implemented in parallel having a processor for every node. It can be extended to take advantage of the OR-gate, a model of interaction among causes which simplifies knowledge acquisition and evidence propagation. We also discuss two exact and one approximate methods for dealing with general networks. All these algorithms admit distributed implementations.
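The OR-gate interaction model the abstract mentions can be sketched briefly (the cause names and parameter values below are illustrative assumptions): each active cause independently fails to produce the effect with its own inhibition probability, so a node with n causes needs only n parameters instead of a full conditional table.

```python
# Sketch of a noisy OR-gate: P(effect | active causes)
# = 1 - (1 - leak) * product of inhibition probabilities q_i
#   over the causes that are present.

def noisy_or(leak, q, active):
    """leak: P(effect) with no cause active; q: {cause: inhibition prob}."""
    p_off = 1.0 - leak
    for cause in active:
        p_off *= q[cause]
    return 1.0 - p_off

q = {"flu": 0.2, "cold": 0.6}          # hypothetical inhibition probabilities
print(round(noisy_or(0.0, q, ["flu"]), 3))          # 0.8
print(round(noisy_or(0.0, q, ["flu", "cold"]), 3))  # 0.88
```

Because the causes act independently in this model, evidence propagation can be factored cause by cause, which is what makes it attractive for the distributed setting the paper describes.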
Automated Database Schema Design Using Mined Data Dependencies
 J. Amer. Soc. Inform. Sci
, 1998
Abstract

Cited by 6 (0 self)
Data dependencies are used in database schema design to enforce the correctness of a database as well as to reduce redundant data. These dependencies are usually determined from the semantics of the attributes and are then enforced upon the relations. This paper describes a bottom-up procedure for discovering multivalued dependencies (MVDs) in observed data without knowing a priori the relationships amongst the attributes. The proposed algorithm is an application of the technique we designed for learning conditional independencies in probabilistic reasoning. A prototype system for automated database schema design has been implemented. Experiments were carried out to demonstrate both the effectiveness and efficiency of our method.
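What it means for an MVD to hold in observed data can be checked directly (a naive sketch, not the paper's mining algorithm; the relation and attribute names are invented for illustration): X ->> Y holds iff, for every X-value, the observed (Y, rest) combinations form the full cross product of the Y-values and remaining-attribute values seen with that X-value.

```python
# Naive test of a multivalued dependency X ->> Y on a relation instance.
from collections import defaultdict

def mvd_holds(rows, x, y):
    """rows: list of dicts (tuples); x, y: lists of attribute names."""
    z = [a for a in rows[0] if a not in x + y]      # remaining attributes
    groups = defaultdict(lambda: (set(), set(), set()))
    for r in rows:
        ys, zs, pairs = groups[tuple(r[a] for a in x)]
        yv = tuple(r[a] for a in y)
        zv = tuple(r[a] for a in z)
        ys.add(yv); zs.add(zv); pairs.add((yv, zv))
    # MVD holds iff each X-group is a full Y x Z cross product.
    return all(len(pairs) == len(ys) * len(zs)
               for ys, zs, pairs in groups.values())

rows = [
    {"course": "db", "teacher": "ann", "book": "codd"},
    {"course": "db", "teacher": "ann", "book": "date"},
    {"course": "db", "teacher": "bob", "book": "codd"},
    {"course": "db", "teacher": "bob", "book": "date"},
]
print(mvd_holds(rows, ["course"], ["teacher"]))  # True
```

Dropping any one tuple breaks the cross product, and the check fails; the paper's contribution is mining such dependencies efficiently rather than verifying one candidate at a time.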
Decision analysis techniques for knowledge acquisition: Combining information and preference models using Aquinas
 Proceedings of the Second AAAI Knowledge Acquisition for Knowledge-Based Systems Workshop
, 1987
Abstract

Cited by 5 (4 self)
The field of decision analysis is concerned with the application of formal theories of probability and utility to the guidance of action. Decision analysis has been used for many years as a way to gain insight regarding decisions that involve significant amounts of uncertain information and complex preference issues, but it has been largely overlooked by knowledge-based system researchers. This paper illustrates the value of incorporating decision analysis insights and techniques into the knowledge acquisition and decision making process. This approach is being implemented within Aquinas, an automated knowledge acquisition and decision support tool based on personal construct theory that is under development at Boeing Computer Services. The need for explicit preference models in knowledge-based systems will be shown. The modeling of problems will be viewed from the perspectives of decision analysis and personal construct theory. We will outline the approach of Aquinas and then present an example that illustrates how preferences can be used to guide the knowledge acquisition process and the selection of alternatives in decision making. Techniques for combining supervised and unsupervised inductive learning from data with expert judgment, and integration of knowledge and inference methods at varying levels of precision will be presented. Personal construct theory and decision theory are shown to be complementary: the former provides a plausible account of the dynamics of model formulation and revision, while the latter provides a consistent framework for model evaluation. Applied personal construct theory (in the form of tools such as Aquinas) and applied decision theory (in the form of decision analysis) are moving along convergent paths. We see the approach in this paper as the first step toward a full integration of insights from the two disciplines and their respective repertory grid and influence diagram representations.
Entropy and MDL Discretization of Continuous Variables for Bayesian Belief Networks
 Int’l J. Intelligent Systems
, 2000
Abstract

Cited by 4 (0 self)
An efficient algorithm for partitioning the range of a continuous variable into a discrete number of intervals, for use in the construction of Bayesian belief networks (BBNs), is presented here. The partitioning minimizes the information loss, relative to the number of intervals used to represent the variable. Partitioning can be done prior to BBN construction or extended for repartitioning during construction. Prior partitioning allows either Bayesian or minimum descriptive length (MDL) metrics to be used to guide BBN construction. Dynamic repartitioning, during BBN construction, is done with an MDL metric to guide construction. The methods are demonstrated with data from two epidemiological studies and these results are compared for all of the methods. The use of the partitioning algorithm resulted in more sparsely connected BBNs, than with binary partitioning, with little information loss from mapping continuous variables into discrete ones. © 2000 John Wiley & Sons, Inc.
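The kind of entropy-guided cut-point selection behind such discretization schemes can be sketched in a few lines (an illustrative single-split sketch in the style of Fayyad and Irani's criterion, assumed here; it is not the paper's full multi-interval algorithm, and the data are invented): choose the boundary that minimizes the weighted class entropy of the two resulting intervals.

```python
# Pick the cut point on a continuous variable that minimizes the
# weighted entropy of class labels in the two resulting intervals.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def best_cut(values, labels):
    """Return the midpoint cut minimizing weighted entropy of the halves."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    vs = [values[i] for i in order]
    ls = [labels[i] for i in order]
    best = None
    for i in range(1, len(vs)):
        if vs[i] == vs[i - 1]:          # no boundary between equal values
            continue
        cut = (vs[i] + vs[i - 1]) / 2
        e = (i * entropy(ls[:i]) + (len(ls) - i) * entropy(ls[i:])) / len(ls)
        if best is None or e < best[0]:
            best = (e, cut)
    return best[1]

values = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]
labels = ["low", "low", "low", "high", "high", "high"]
print(best_cut(values, labels))  # 6.5
```

Applied recursively with an MDL stopping rule, this yields the kind of sparse, low-loss partitions the abstract reports.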