Results 1 - 10 of 10
LEARNER: A System for Acquiring Commonsense Knowledge by Analogy
- in Proceedings of the Second International Conference on Knowledge Capture (K-CAP), 2003
Cited by 37 (4 self)
One of the long-term goals of Artificial Intelligence is the construction of a machine capable of reasoning about the everyday world the way humans do. In this paper, I first argue that construction of a large collection of statements about the everyday world (a repository of commonsense knowledge) is a valuable step towards this long-term goal. Then, I point out that volunteer contributors over the Internet — a frequently overlooked source of knowledge — can be tapped to construct such a knowledge repository. To operationalize construction of a large commonsense knowledge repository by volunteer contributors, I then introduce cumulative analogy, a class of analogy-based reasoning algorithms that leverage existing knowledge to pose knowledge acquisition questions to the volunteer contributors. The algorithms have been implemented and deployed as the Learner system. To date, about 3,400 volunteer contributors have interacted with the system over the course of 11 months, increasing a starting collection of 47,147 statements by 362% to a total of 217,971. The deployed system and the growing collection of knowledge it acquired are publicly available from
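The cumulative-analogy loop the abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the deployed Learner implementation: the knowledge base, the Jaccard similarity measure, and the question phrasing are all assumptions made for the example.

```python
# Sketch of cumulative analogy: project properties of similar known
# objects onto a target object as knowledge-acquisition questions.
# All objects and properties below are hypothetical illustrations.

kb = {
    "cat":  {"is an animal", "has fur", "can purr"},
    "dog":  {"is an animal", "has fur", "can bark"},
    "fish": {"is an animal", "can swim"},
}

def similarity(props_a, props_b):
    """Jaccard overlap between two property sets."""
    union = props_a | props_b
    return len(props_a & props_b) / len(union) if union else 0.0

def questions_by_analogy(target, target_props, top_k=2):
    """Rank known objects by similarity to the target, then pool the
    properties the target lacks as candidate yes/no questions."""
    ranked = sorted(kb, key=lambda o: similarity(target_props, kb[o]),
                    reverse=True)
    candidates = set()
    for obj in ranked[:top_k]:
        candidates |= kb[obj] - target_props
    return [f"Is it true that a {target} {p}?" for p in sorted(candidates)]

print(questions_by_analogy("hamster", {"is an animal", "has fur"}))
```

Answers from contributors would then be added back to the knowledge base, so later analogies cumulate over the growing collection.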
Useful Counterfactuals
- Electronic Transactions on Artificial Intelligence (ETAI), 1999
Cited by 19 (2 self)
Counterfactual conditional sentences can be useful in artificial intelligence as they are in human affairs. In particular, they allow reasoners to learn from experiences that they did not quite have. Our tools for making inferences from counterfactuals permit inferring sentences that are not themselves counterfactual. This is what makes them useful. A simple class of useful counterfactuals involves a change of one component of a point in a space provided with a cartesian product structure. We call these cartesian counterfactuals. Cartesian counterfactuals can be modeled by assignment and contents functions as in program semantics. We also consider the more general tree-structured counterfactuals.
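A minimal sketch of the cartesian case under the assignment/contents reading: a world state is a point in a cartesian product of components, and the counterfactual "if component c had value v, then Q" is evaluated at the point with c reassigned and everything else held fixed. The component names and the consequent predicate below are invented for illustration.

```python
# Cartesian counterfactuals via assignment and contents functions,
# as in program semantics. State components here are hypothetical.

def contents(state, component):
    """Read one coordinate of the point (the 'contents' function)."""
    return state[component]

def assign(state, component, value):
    """Return the point with one coordinate changed (the 'assignment'
    function); the original point is not mutated."""
    new_state = dict(state)
    new_state[component] = value
    return new_state

def counterfactual(state, component, value, consequent):
    """Evaluate 'if component were value, consequent would hold'."""
    return consequent(assign(state, component, value))

# "Had the skier not fallen, the run would have been under 100 seconds."
state = {"fell": True, "base_time": 95}
under_100s = lambda s: s["base_time"] + (20 if s["fell"] else 0) < 100

print(under_100s(state))                                 # False in fact
print(counterfactual(state, "fell", False, under_100s))  # True counterfactually
```

The useful conclusion (the skier was fast enough, barring the fall) is itself non-counterfactual, which is the point the abstract makes.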
Computing strongest necessary and weakest sufficient conditions of first-order formulas
- International Joint Conference on AI (IJCAI’2001), 2000
Cited by 18 (9 self)
A technique is proposed for computing the weakest sufficient (wsc) and strongest necessary (snc) conditions for formulas in an expressive fragment of first-order logic using quantifier elimination techniques. The efficacy of the approach is demonstrated by using the techniques to compute snc’s and wsc’s for use in agent communication applications, theory approximation and generation of abductive hypotheses. Additionally, we generalize recent results involving the generation of successor state axioms in the propositional situation calculus via snc’s to the first-order case. Subsumption results for existing approaches to this problem and a re-interpretation of the concept of forgetting as a process of quantifier elimination are also provided.
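For intuition, in the propositional special case (which the paper generalizes to first-order logic) forgetting a variable is quantifier elimination by case-splitting: the snc on the reduced vocabulary is the disjunction of the two instantiations of the forgotten variable, and the wsc is their conjunction. The brute-force sketch below uses our own helper names and a tiny example formula, not the paper's machinery.

```python
# Propositional snc/wsc by forgetting a variable q:
#   snc = phi[q := True] or  phi[q := False]   (existential elimination)
#   wsc = phi[q := True] and phi[q := False]   (universal elimination)
from itertools import product

def snc(phi, q):
    """Strongest necessary condition of phi on the vocabulary without q."""
    return lambda v: phi({**v, q: True}) or phi({**v, q: False})

def wsc(phi, q):
    """Weakest sufficient condition of phi on the vocabulary without q."""
    return lambda v: phi({**v, q: True}) and phi({**v, q: False})

def models(phi, variables):
    """All satisfying assignments over the listed variables (brute force)."""
    assignments = [dict(zip(variables, bits))
                   for bits in product([False, True], repeat=len(variables))]
    return [v for v in assignments if phi(v)]

# phi = (p -> q) and (q -> r); forget q.
phi = lambda v: (not v["p"] or v["q"]) and (not v["q"] or v["r"])
print(models(snc(phi, "q"), ["p", "r"]))  # snc is equivalent to p -> r
print(models(wsc(phi, "q"), ["p", "r"]))  # wsc is equivalent to (not p) and r
```

The snc is the tightest formula over {p, r} entailed by phi, and the wsc the weakest one entailing it, matching the forgetting-as-quantifier-elimination reading the abstract mentions.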
Similarity, Approximations and Vagueness
- Proc. RSFDGrC’05 (Slezak D., ed.), LNAI 3641, 2005
Cited by 1 (1 self)
The relation of similarity is essential in understanding and developing frameworks for reasoning with vague and approximate concepts. There is a wide spectrum of choice as to what properties we associate with similarity, and such choices determine the nature of vague and approximate concepts defined in terms of these relations. Additionally, robotic systems naturally have to deal with vague and approximate concepts due to the limitations in reasoning and sensor capabilities. Halpern [1] introduces the use of subjective and objective states in a modal logic formalizing vagueness and distinctions in transitivity when an agent reasons in the context of sensory and other limitations. He also relates these ideas to a solution to the Sorites and other paradoxes. In this paper, we generalize and apply the idea of similarity and tolerance spaces [2,3,4,5], a means of constructing approximate and vague concepts from such spaces, and an explicit way to distinguish between an agent’s objective and subjective states. We also show how some of the intuitions from Halpern can be used with similarity spaces to formalize the above-mentioned Sorites and other paradoxes.
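The tolerance-space construction can be sketched over numeric sensor readings: a reflexive and symmetric (but not transitive) tolerance relation induces neighborhoods, from which lower and upper approximations of a vague concept are built. The threshold, universe, and example concept below are illustrative assumptions.

```python
# A tolerance space over sensor readings: readings within eps are
# indistinguishable. Tolerance is reflexive and symmetric but not
# transitive, which is what drives Sorites-style reasoning.

def tolerant(x, y, eps=1.0):
    """Tolerance relation between two readings."""
    return abs(x - y) <= eps

def neighborhood(x, universe, eps=1.0):
    """All elements indistinguishable from x."""
    return {y for y in universe if tolerant(x, y, eps)}

def lower_approx(concept, universe, eps=1.0):
    """Elements whose whole neighborhood lies in the concept: definitely in."""
    return {x for x in universe if neighborhood(x, universe, eps) <= concept}

def upper_approx(concept, universe, eps=1.0):
    """Elements whose neighborhood meets the concept: possibly in."""
    return {x for x in universe if neighborhood(x, universe, eps) & concept}

universe = {0.0, 0.5, 1.2, 2.0, 3.5}
tall = {2.0, 3.5}  # a vague concept given by examples
print(lower_approx(tall, universe))  # {3.5}
print(upper_approx(tall, universe))  # {1.2, 2.0, 3.5}
```

The gap between the two approximations is the boundary region where the concept is genuinely vague, which is one way to read the agent's subjective versus objective states.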
From here to human-level AI
, 2007
Human-level AI will be achieved, but new ideas are almost certainly needed, so a date cannot be reliably predicted—maybe five years, maybe five hundred years. I’d be inclined to bet on this 21st century. It is not surprising that human-level AI has proved difficult and progress has been slow—though there has been important progress. The slowness and the demand to exploit what has been discovered has led many to mistakenly redefine AI, sometimes in ways that preclude human-level AI—by relegating to humans parts of the task that human-level computer programs would have to do. In the terminology of this paper, it amounts to settling for a bounded informatic situation instead of the more general common sense informatic situation. Overcoming the “brittleness” of present AI systems and reaching human-level AI requires programs that deal with the common sense informatic situation—in which the phenomena to be taken into account in achieving a goal are not fixed in advance. We discuss reaching human-level AI, emphasizing logical AI and especially emphasizing representation problems of information and of reasoning. Ideas for reasoning in the common sense informatic situation include nonmonotonic reasoning, approximate concepts, formalized contexts and introspection.
Towards a Model of Pattern Recovery in Relational Data
This paper describes some fundamental issues associated with using rule-based reasoning to discover evidence for the instantiation of complex threat types in relational data. Based on the considerations raised, I delineate a general model for the pattern recovery process.
Formalizing Approximate Objects and Theories: Some Initial Results
, 2002
This paper introduces some preliminary formalizations of the approximate entities of [McCarthy, 2000]. Approximate objects, predicates, and theories are considered necessary for human-level AI, and we believe they enable very powerful modes of reasoning (which admittedly are not always sound). Approximation is known as vagueness in philosophical circles and is often deplored as a defective aspect of human language which infects the precision of logic. Quite to the contrary, we believe we can tame this monster by formalizing it within logic, and then can "build solid intellectual structures on such swampy conceptual foundations." [McCarthy, 2000].
Designing Ontology-Based Interactive
- Proceedings of OTM Workshops, 2003
The so-called Semantic Web advocates the future availability of machine-understandable metadata, describing Web resources by means of ontologies expressed in description logics. This would eventually entail changes in Information Retrieval (IR) indexing and matching algorithms, but also in the user interface design of IR tools. This second aspect can be informed by existing Interactive Information Retrieval (IIR) research, but it also requires further investigation into how users interact with terminological structures and with iterative, browsing-oriented query construction paradigms. In this paper, preliminary experiences and reflections regarding ontology-based query formulation interface design are described.
A Layered Rule-Based Architecture for Approximate Knowledge Fusion (DOI: 10.2298/CSIS100209015D)
In this paper we present a framework for fusing approximate knowledge obtained from various distributed, heterogeneous knowledge sources. This issue is substantial in modeling multi-agent systems, where a group of loosely coupled heterogeneous agents cooperate in achieving a common goal. In [5] we focused on defining a general mechanism for knowledge fusion. Next, techniques ensuring tractability of fusing knowledge expressed as a Horn subset of propositional dynamic logic were developed in [13,16]. Propositional logics may seem too weak to be useful in real-world applications. On the other hand, propositional languages may be viewed as sublanguages of first-order logics, which serve as a natural tool to define concepts in the spirit of description logics [2]. These notions may be further used to define various ontologies, such as those applicable in the Semantic Web. Taking this step, we propose a framework in which our Horn subset of dynamic logic is combined with deductive database technology. This synthesis is formally implemented in the framework of the HSPDL architecture. The resulting knowledge fusion rules are naturally applicable to real-world data.
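The deductive-database side of such a fusion framework can be sketched as naive forward chaining over propositional Horn rules applied to facts pooled from several sources. The facts, rules, and fusion-by-union below are illustrative assumptions and do not model the paper's dynamic-logic or approximation layers.

```python
# Naive forward chaining: compute the least fixpoint of propositional
# Horn rules (body, head) over facts pooled from multiple sources.

def forward_chain(facts, rules):
    """Close the fact set under the Horn rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in derived and set(body) <= derived:
                derived.add(head)
                changed = True
    return derived

# Facts reported by two heterogeneous (hypothetical) sources,
# fused here simply by set union before chaining.
source_a = {"obstacle_ahead"}
source_b = {"moving"}
rules = [
    (("obstacle_ahead", "moving"), "collision_risk"),
    (("collision_risk",), "brake"),
]
print(forward_chain(source_a | source_b, rules))
```

Neither source alone derives "brake"; only the fused fact set does, which is the point of pooling knowledge before applying the rule layer.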