Results 11–20 of 37
Learning to Reason: The Non-Monotonic Case
, 1995
Abstract
Cited by 15 (8 self)
We suggest a new approach for the study of the non-monotonicity of human commonsense reasoning. The two main premises that underlie this work are that commonsense reasoning is an inductive phenomenon, and that missing information in the interaction of the agent with the environment may be as informative for future interactions as observed information. This intuition is formalized, and the problem of reasoning from incomplete information is presented as a problem of learning attribute functions over a generalized domain. We consider examples that illustrate various aspects of the non-monotonic reasoning phenomena, which have been used over the years as "benchmarks" for various formalisms, and translate them into Learning to Reason problems. We demonstrate that these have concise representations over the generalized domain and prove that these representations can be learned efficiently. The framework developed suggests an "operational" approach to studying reasoning that is nevertheless ...
Generating New Beliefs From Old
, 1994
Abstract
Cited by 13 (1 self)
In previous work [BGHK92, BGHK93], we have studied the random-worlds approach, a particular (and quite powerful) method for generating degrees of belief (i.e., subjective probabilities) from a knowledge base consisting of objective (first-order, statistical, and default) information. But allowing a knowledge base to contain only objective information is sometimes limiting. We occasionally wish to include information about degrees of belief in the knowledge base as well, because there are contexts in which old beliefs represent important information that should influence new beliefs. In this paper, we describe three quite general techniques for extending a method that generates degrees of belief from objective information to one that can make use of degrees of belief as well. All of our techniques are based on well-known approaches, such as cross-entropy. We discuss general connections between the techniques and in particular show that, although conceptually and techn...
Speeding Up Inferences Using Relevance Reasoning: A Formalism and Algorithms
 ARTIFICIAL INTELLIGENCE
, 1997
Abstract
Cited by 13 (2 self)
Irrelevance reasoning refers to the process in which a system reasons about which parts of its knowledge are relevant (or irrelevant) to a specific query. Aside from its importance in speeding up inferences from large knowledge bases, relevance reasoning is crucial in advanced applications such as modeling complex physical devices and information gathering in distributed heterogeneous systems. This article presents a novel framework for studying the various kinds of irrelevance that arise in inference and efficient algorithms for relevance reasoning. We present a ...
A Logic for Default Reasoning About Probabilities
, 1998
Abstract
Cited by 12 (4 self)
A logic is defined that makes it possible to express information about statistical probabilities and about degrees of belief in specific propositions. By interpreting the two types of probabilities in one common probability space, the semantics given are well suited to model the influence of statistical information on the formation of subjective beliefs. Cross-entropy minimization is a key element in these semantics, and its use is justified by showing that the resulting logic exhibits some very reasonable properties.
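For readers unfamiliar with the term, the cross entropy invoked in the two abstracts above is the standard relative-entropy measure between a candidate distribution P and a prior Q over a common finite space; in the usual notation (not the papers' own),

  H(P, Q) = \sum_{\omega} P(\omega) \log \frac{P(\omega)}{Q(\omega)}

and cross-entropy minimization selects, among all distributions P consistent with the newly given constraints, the one minimizing H(P, Q), i.e., the admissible distribution closest to the prior beliefs Q in this sense.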
Asymptotic Conditional Probabilities: The Non-Unary Case
 J. SYMBOLIC LOGIC
, 1993
Abstract
Cited by 9 (2 self)
Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences φ and ψ, we consider the structures with domain {1, ..., N} that satisfy ψ, and compute the fraction of them in which φ is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown by Liogon'kii [Lio69], if there is a non-unary predicate symbol in the vocabulary, asymptotic conditional probabilities do not always exist. We extend this result to show that asymptotic conditional probabilities do not always exist for any reasonable notion of limit. Liogon'kii also showed that the problem of deciding whether the limit exists is undecidable. We analyze the complexity of three problems with respect to this limit: deciding whether it is well-defined, whether it exists, and whether it lies in some nontrivial interval. Matching upper and lower bounds are given for all three problems, showing them to be highly undecidable.
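The fraction described in this abstract can be computed directly for a toy vocabulary with a single unary predicate P (where, unlike the non-unary case the paper treats, the limit does exist). The sketch below, with illustrative sentence choices φ = ∀x P(x) and ψ = ∃x P(x) and a made-up function name, enumerates all structures over the domain {1, ..., n}:

```python
from itertools import product

def asymptotic_conditional(n):
    """Among the structures over domain {1..n} (one unary predicate P)
    that satisfy psi = 'exists x P(x)', return the fraction that also
    satisfy phi = 'forall x P(x)'."""
    sat_psi = sat_both = 0
    # Each structure is just an interpretation of P: a tuple of n booleans.
    for bits in product([False, True], repeat=n):
        if any(bits):           # psi holds in this structure
            sat_psi += 1
            if all(bits):       # phi also holds
                sat_both += 1
    return sat_both / sat_psi

for n in (2, 5, 10):
    print(n, asymptotic_conditional(n))  # 1/(2^n - 1), tending to 0
```

Here the conditional probability is 1/(2^n − 1), so the asymptotic value is 0; the paper's point is that once a non-unary predicate is allowed, such limits may fail to exist at all.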
Using Methods of Declarative Logic Programming for Intelligent Information Agents
 TPLP
, 2002
Abstract
Cited by 8 (3 self)
At present, the search for specific information on the World Wide Web is faced with several problems, which arise on the one hand from the vast number of information sources available, and on the other hand from their intrinsic heterogeneity, since standards are missing. A promising approach for solving the complex problems emerging in this context is the use of multi-agent systems of information agents, which cooperatively solve advanced information-retrieval problems. This requires advanced capabilities to address complex tasks, such as search and assessment of information sources, query planning, information merging and fusion, dealing with incomplete information, and handling of inconsistency. In this paper, our interest lies in the role which some methods from the field of declarative logic programming can play in the realization of reasoning capabilities for information agents. In particular, we are interested in how they can be used, extended, and further developed for the specific needs of this application domain. We review some existing systems and current projects, which typically address information-integration problems. We then focus on declarative knowledge-representation methods, and review and evaluate approaches and methods from logic programming and non-monotonic reasoning for information agents. We discuss advantages and drawbacks, and point out possible extensions and open issues.
Discovering Robust Knowledge from Databases that Change
 DATA MINING AND KNOWLEDGE DISCOVERY
, 1998
Abstract
Cited by 7 (1 self)
Many applications of knowledge discovery and data mining, such as rule discovery for semantic query optimization, database integration, and decision support, require the knowledge to be consistent with data. However, databases usually change over time and make machine-discovered knowledge inconsistent. Useful knowledge should be robust against database changes, so that it is unlikely to become inconsistent after such changes. This paper defines this notion of robustness in the context of relational databases that contain multiple relations and describes how the robustness of first-order Horn-clause rules can be estimated and applied in knowledge discovery. Our experiments show that the estimation approach can accurately predict the robustness of a rule.
Logical Considerations on Default Semantics
 PROC. 3RD INT'L SYMP. ON ARTIFICIAL INTELLIGENCE AND MATHEMATICS
, 1994
Abstract
Cited by 5 (5 self)
We consider a reinterpretation of the rules of default logic. We make Reiter's default rules into a constructive method of building models, not theories. To allow reasoning in first-order systems, we equip standard first-order logic with a (new) Kleene 3-valued partial model semantics. Then, using our methodology, we add defaults to this semantic system. The result is that our logic is an ordinary monotonic one, but its semantics is now non-monotonic. Reiter's extensions now appear in the semantics, not in the syntax. As an application, we show that this semantics gives a partial solution to the conceptual problems with open defaults pointed out by Lifschitz [16], and Baader and Hollunder [2]. The solution is not complete, chiefly because in making the defaults model-theoretic, we can only add conjunctive information to our models. This is in contrast to default theories, where extensions can contain disjunctive formulas, and therefore disjunctive information. Our proposal to treat ...
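The Kleene 3-valued semantics this abstract builds on can be sketched as truth functions over the values {F, U, T}, where U marks an atom a partial model leaves undefined. The encoding below (0/1/2 with min/max under the order F < U < T) is one standard presentation of the strong Kleene connectives, not the paper's own notation:

```python
# Strong Kleene three-valued connectives, under the order F < U < T.
F, U, T = 0, 1, 2

def k_not(a):        return 2 - a          # swaps T and F, fixes U
def k_and(a, b):     return min(a, b)
def k_or(a, b):      return max(a, b)
def k_implies(a, b): return k_or(k_not(a), b)

names = {F: "F", U: "U", T: "T"}
for a in (F, U, T):
    for b in (F, U, T):
        print(f"{names[a]} and {names[b]} = {names[k_and(a, b)]}")
```

Note that a sentence like P or not-P evaluates to U when P is U: partial models genuinely withhold truth values, which is what leaves room for defaults to fill in conjunctive information later.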
An Implementation of Statistical Default Logic
 Logics in Artificial Intelligence: JELIA 2004, LNAI Series No. 3229
, 2004
Abstract
Cited by 5 (4 self)
Statistical Default Logic (SDL) is an expansion of classical (i.e., Reiter) default logic that allows us to model common inference patterns found in standard inferential statistics, e.g., hypothesis testing and the estimation of a population's mean, variance, and proportions. This paper presents an embedding of an important subset of SDL theories, called literal statistical default theories, into stable model semantics. The embedding is designed to compute the signature set of literals that uniquely distinguishes each extension of a statistical default theory at a preassigned error-bound probability.