CiteSeerX

The core method: Connectionist model generation for first-order logic programs (2007)

by S. Bader, P. Hitzler, S. Hölldobler, A. Witzel

Results 1 - 9 of 9

Connectionist Model generation: A First-Order Approach

by Sebastian Bader, Pascal Hitzler, Steffen Hölldobler, 2007

Abstract - Cited by 20 (5 self)
Knowledge based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes as expressed e.g., by means of first-order predicate logic, it is not obvious at all what neural symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with feed-forward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to do reasoning over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
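The core method's base case can be sketched in a few lines: the immediate consequence operator TP of a propositional logic program is computed by a feed-forward layer of threshold units, and a recurrent loop iterates that layer to a fixed point, i.e. a model. The program, atom names, and encoding below are illustrative assumptions, not the paper's actual first-order construction:

```python
# A minimal sketch of the propositional core method. Assumed example
# program (not from the paper): b <- (fact), c <- b, a <- b & c.

ATOMS = ["a", "b", "c"]
PROGRAM = [("b", []), ("c", ["b"]), ("a", ["b", "c"])]

def tp_layer(state):
    """One feed-forward pass: each clause acts as an AND unit over its
    body atoms; head units take the disjunction over their clauses."""
    out = {atom: 0 for atom in ATOMS}
    for head, body in PROGRAM:
        if all(state[b] == 1 for b in body):  # AND over the clause body
            out[head] = 1                     # OR into the head unit
    return out

def iterate_to_fixpoint(state):
    """Recurrent loop around the core: feed the output back as input."""
    while True:
        nxt = tp_layer(state)
        if nxt == state:
            return state
        state = nxt

model = iterate_to_fixpoint({atom: 0 for atom in ATOMS})
print(model)  # least model of the program: {'a': 1, 'b': 1, 'c': 1}
```

The paper's contribution is lifting this picture to first-order programs, where interpretations are no longer finite bit vectors; the sketch only shows the propositional core being iterated.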

A computational logic approach to the suppression task

by Emmanuelle-anna Dietz, Marco Ragni - In Proceedings of the 34th Annual Conference of the Cognitive Science Society, CogSci 2013
Abstract - Cited by 10 (7 self)
A novel approach to human conditional reasoning based on the three-valued Łukasiewicz logic is presented. We will demonstrate that the Łukasiewicz logic overcomes problems the so-far proposed Fitting logic has in reasoning with the suppression task. While adequately solving the suppression task, the approach gives rise to a number of open questions concerning the use of Łukasiewicz logic, unique fixed points, completion versus weak completion, explanations, negation, and sceptical versus credulous approaches in human reasoning.
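The difference between the two three-valued logics mentioned here shows up most clearly in the implication. A small sketch, with truth values encoded as 0 (false), 0.5 (unknown), and 1 (true); the function names are ours, not the paper's:

```python
# Łukasiewicz vs. strong Kleene (Fitting-style) implication over the
# three truth values false (0), unknown (0.5), true (1).

F, U, T = 0.0, 0.5, 1.0

def luk_impl(a, b):
    """Łukasiewicz implication: min(1, 1 - a + b)."""
    return min(T, T - a + b)

def kleene_impl(a, b):
    """Strong Kleene implication: max(1 - a, b)."""
    return max(T - a, b)

print(luk_impl(U, U))     # 1.0: unknown -> unknown is TRUE
print(kleene_impl(U, U))  # 0.5: stays UNKNOWN
```

That `U -> U = T` property is what lets a conditional with an unknown antecedent and unknown consequent still hold, which is one of the reasons Łukasiewicz semantics behaves differently on the suppression task than the Kleene-based alternative.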

Extracting reduced logic programs from artificial neural networks

by Jens Lehmann, Sebastian Bader, Pascal Hitzler - Applied Intelligence , 2010
Abstract - Cited by 7 (2 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we will study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simple is understood in some clearly defined and meaningful way.
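The naive black-box baseline that such extraction work starts from can be sketched as follows: enumerate all Boolean inputs of a trained network and emit one rule per input that activates the output. The `network` function below is a hypothetical stand-in, and the rule syntax is illustrative; the paper's contribution is then reducing such rule sets to simpler, equivalent programs:

```python
from itertools import product

# Naive ("pedantic") rule extraction from a Boolean black box.
ATOMS = ["p", "q", "r"]

def network(p, q, r):
    """Hypothetical stand-in for a trained network; here it happens
    to compute p AND (q OR r)."""
    return p and (q or r)

def extract_rules(f, atoms):
    """One rule per satisfying input vector: a full conjunction of
    positive and negated atoms implying the output."""
    rules = []
    for bits in product([0, 1], repeat=len(atoms)):
        if f(*bits):
            body = [a if v else "not " + a for a, v in zip(atoms, bits)]
            rules.append("out <- " + ", ".join(body))
    return rules

for rule in extract_rules(network, ATOMS):
    print(rule)
```

This produces three maximal-body rules for the example network; a reduced program would merge them into the two shorter rules `out <- p, q` and `out <- p, r`. The exponential enumeration also shows why reduction matters.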

Citation Context

...r the naive approach in order to obtain a simpler set of rules, i.e. one which appears to be more meaningful and intelligible. Within the context of our own broader research efforts described e.g. in [6, 3, 4, 5, 7, 8, 25, 24], we seek to understand rule extraction within a learning cycle of (1) initializing an untrained network with background knowledge, (2) training of the network taking background knowledge into account...

Logics and networks for human reasoning

by Steffen Hölldobler, Carroline Dewi Puspa Kencana Ramli - In ICANN’09
Abstract - Cited by 6 (3 self)
We propose to model human reasoning tasks using completed logic programs interpreted under the three-valued Łukasiewicz semantics. Given an appropriate immediate consequence operator, completed logic programs admit a least model, which can be computed by iterating the consequence operator. Reasoning is then performed with respect to the least model. The approach is realized in a connectionist setting.
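The iteration described here can be sketched for the three-valued case: under a completion, an atom becomes true if some clause body is true, false if every clause body is false, and stays unknown otherwise. The program and encoding below are assumptions for illustration, not taken from the paper:

```python
# Three-valued immediate consequence operator for a completed program.
# Example completion (assumed): b <-> TOP, a <-> b, c <-> a & b.

U, T, F = "unknown", "true", "false"
PROGRAM = {"b": [[]], "a": [["b"]], "c": [["a", "b"]]}

def body_value(body, interp):
    vals = [interp[atom] for atom in body]
    if all(v == T for v in vals):   # empty body counts as true
        return T
    if any(v == F for v in vals):
        return F
    return U

def phi(interp):
    """One application of the consequence operator."""
    new = dict(interp)
    for head, bodies in PROGRAM.items():
        vals = [body_value(b, interp) for b in bodies]
        if any(v == T for v in vals):
            new[head] = T
        elif all(v == F for v in vals):
            new[head] = F
        else:
            new[head] = U
    return new

# Iterate from the everywhere-unknown interpretation to the least model.
interp = {a: U for a in PROGRAM}
while (nxt := phi(interp)) != interp:
    interp = nxt
print(interp)
```

Starting from everywhere-unknown, the iteration settles after three steps; reasoning then reads truth values off this least model.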

Citation Context

... 3 The Core Method: In [10] a connectionist model generator for propositional logic programs using recurrent networks with feed-forward core was presented. It was later called the core method [2]. The core method has been extended and applied to a variety of programs including modal (see e.g. [5]) and first-order logic programs [1]. It is based on the idea that feed-forward connectionist netw...

Unification Neural Networks: Unification by Error-Correction Learning

by Ekaterina Komendantskaya
Abstract - Cited by 4 (4 self)
We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network corresponds to a single iteration of the unification algorithm. We present this result together with the library of learning functions and examples fully formalised in MATLAB Neural Network Toolbox.
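For reference, the conventional first-order unification algorithm that the paper simulates neurally is the standard textbook procedure. A compact sketch (our encoding, occurs check omitted for brevity; this is not the paper's neural formulation):

```python
# Standard first-order unification. Encoding (assumed): variables are
# plain Python strings; compound terms and constants are
# (functor, args) tuples, a constant being a functor with no args.

def is_var(t):
    return isinstance(t, str)

def walk(t, subst):
    """Follow variable bindings to their current value."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(t1, t2, subst=None):
    """Return a most general unifier as a dict, or None on failure.
    (No occurs check, so cyclic bindings are not detected.)"""
    if subst is None:
        subst = {}
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_var(t1):
        subst[t1] = t2
        return subst
    if is_var(t2):
        subst[t2] = t1
        return subst
    f1, args1 = t1
    f2, args2 = t2
    if f1 != f2 or len(args1) != len(args2):
        return None  # functor clash or arity mismatch
    for a, b in zip(args1, args2):
        subst = unify(a, b, subst)
        if subst is None:
            return None
    return subst

# unify f(X, g(a)) with f(b, g(Y))  ->  {X: b, Y: a}
mgu = unify(("f", ["X", ("g", [("a", [])])]),
            ("f", [("b", []), ("g", ["Y"])]))
print(mgu)
```

In the paper's setting, each step of this recursive descent corresponds to one error-correction adaptation step of the unification network.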

Citation Context

...ven by vectors. Related literature that concerns processing of structured data in neural networks falls within three areas of research: the core method to deal with symbolic formulae and Prolog terms [55, 20] we have just considered under a simpler guise of TP networks; recursive networks which can deal with string trees [56]; and kernel methods for structures [57]. Our approach is somewhat different from...

The grand challenges and myths of neural-symbolic computation

by Luis C Lamb - Recurrent Neural Networks- Models, Capacities, and Applications, number 08041 in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2008. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl
Abstract - Cited by 2 (0 self)
The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semantically sound computational cognitive models in artificial intelligence, computer and cognitive sciences. Over the last decades, results regarding the computation and learning of classical reasoning within neural networks have been promising. Nonetheless, there still remains much to be done. Artificial intelligence, cognitive and computer science are strongly based on several non-classical reasoning formalisms, methodologies and logics. In knowledge representation, distributed systems, hardware design, theorem proving, and systems specification and verification, classical and non-classical logics have had a great impact on theory and real-world applications. Several challenges for neural-symbolic computation are pointed out, in particular for classical and non-classical computation in connectionist systems. We also analyse myths about neural-symbolic computation and shed new light on them considering recent research advances.

Citation Context

... the seven myths of formal methods. ... networks [9,10,11,48,49,50]. Bader, Hitzler, Hölldobler, and Witzel have proved that neural networks can generate models of first-order logic programs [51,52,53]. Several researchers have also shown that neural-symbolic systems can learn or compute relations and fragments of first-order logics, see e.g. [54,55,56,57]. These results provide evidence that the f...

Variable Binding in . . .

by Douwe Kiela , 2011
Abstract not found

Advanced Petri Nets and the Fluent Calculus

by Ferdian Jovan
Abstract
In this paper we discuss conjunctive planning problems in the context of the fluent calculus and Petri nets. We show that both formalisms are equivalent in solving these problems. Thereafter, we extend actions to contain preconditions as well as obstacles. This requires extending the fluent calculus as well as Petri nets. Again, we show that both extended formalisms are equivalent.
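The Petri-net side of this equivalence can be sketched briefly: a planning state is a marking (a multiset of fluents), and an action is a transition that consumes its precondition tokens and produces its effect tokens. The example action and fluent names below are illustrative assumptions, not taken from the paper:

```python
from collections import Counter

# A transition is a (preconditions, effects) pair of token multisets.

def enabled(marking, transition):
    """A transition is enabled iff the marking covers its preconditions."""
    pre, _ = transition
    return all(marking[p] >= n for p, n in pre.items())

def fire(marking, transition):
    """Consume precondition tokens, produce effect tokens."""
    pre, post = transition
    new = Counter(marking)
    new.subtract(pre)
    new.update(post)
    return +new  # unary + drops zero-count places

# Assumed action "pickup": consumes {handempty, ontable}, produces {holding}.
pickup = (Counter({"handempty": 1, "ontable": 1}), Counter({"holding": 1}))

m0 = Counter({"handempty": 1, "ontable": 1})
assert enabled(m0, pickup)
m1 = fire(m0, pickup)
print(m1)  # Counter({'holding': 1})
```

A conjunctive plan is then a firing sequence leading from the initial marking to one that covers the goal, mirroring state update in the fluent calculus.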

Citation Context

... model of an appropriate logic program and reasoning with respect to this least model [6,7]. Moreover, it was shown that there is a connectionist realization of this approach based on the core method [11,1]. However, human reasoning is much more complex than the above mentioned scenarios and involves – among others – reasoning about actions and causality including compositionality, concurrency, quick re...

Connectionist Model Generation: A First-Order Approach

by Sebastian Bader, Pascal Hitzler, Steffen Hölldobler
Abstract
Knowledge based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes as expressed e.g., by means of first-order predicate logic, it is not obvious at all what neural symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with feed-forward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to do reasoning over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.

Citation Context

...ng technical preliminaries in Section 2, discussions of further and related work in Sections 7 and 8, and a concluding Section 9. This paper is a substantially revised and extended version of [5] and [6] by including all necessary proofs, providing more intuition and more details of the training. 2. Technical Preliminaries: We assume the reader to be familiar with basic notions from artificial neura...


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University