Results 1 - 9 of 9
Connectionist Model Generation: A First-Order Approach
, 2007
"... Knowledge based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes as expressed e.g., by means of first-order predicate log ..."
Abstract
-
Cited by 20 (5 self)
- Add to MetaCart
Knowledge based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes as expressed e.g., by means of first-order predicate logic, it is not obvious at all what neural symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with feed-forward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to do reasoning over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
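To make the propositional starting point of the core method concrete, the following minimal Python sketch (program, atom names, and encoding are illustrative, not taken from the paper) turns each clause into a threshold unit and iterates the resulting network, which amounts to iterating the immediate consequence operator T_P:

```python
# Minimal propositional "core method" sketch (program and atom names
# are illustrative, not from the paper): every clause becomes one
# threshold unit, and feeding the output layer back to the input
# layer iterates the immediate consequence operator T_P.
program = {
    # head: list of bodies; a body is a list of (atom, positive?) literals
    "light":  [[("switch", True), ("broken", False)]],
    "switch": [[]],                     # a fact has an empty body
}
atoms = ["light", "switch", "broken"]

def clause_unit(body, state):
    # Threshold unit: fires iff every literal of the body is satisfied.
    acts = [state[a] if pos else 1 - state[a] for a, pos in body]
    return int(sum(acts) >= len(body))

def network_step(state):
    # One feed-forward pass of the core network = one application of T_P.
    derived = {h: int(any(clause_unit(b, state) for b in bodies))
               for h, bodies in program.items()}
    return {a: derived.get(a, 0) for a in atoms}

state = {a: 0 for a in atoms}
for _ in range(len(atoms) + 1):   # enough iterations to reach the fixpoint
    state = network_step(state)
print(state)    # {'light': 1, 'switch': 1, 'broken': 0}
```

The paper's contribution is the extension of this propositional construction to first-order programs, which the sketch does not attempt.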
A computational logic approach to the suppression task
- In Proceedings of the 34th Annual Conference of the Cognitive Science Society, CogSci 2013
"... A novel approach to human conditional reasoning based on the three-valued Łukasiewicz logic is presented. We will demon-strate that the Łukasiewicz logic overcomes problems the so-far proposed Fitting logic has in reasoning with the suppres-sion task. While adequately solving the suppression task, t ..."
Abstract
-
Cited by 10 (7 self)
- Add to MetaCart
A novel approach to human conditional reasoning based on the three-valued Łukasiewicz logic is presented. We will demonstrate that the Łukasiewicz logic overcomes problems the previously proposed Fitting logic has in reasoning with the suppression task. While adequately solving the suppression task, the approach gives rise to a number of open questions concerning the use of Łukasiewicz logic, unique fixed points, completion versus weak completion, explanations, negation, and sceptical versus credulous approaches in human reasoning.
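For readers unfamiliar with the logic, the three-valued Łukasiewicz connectives can be sketched numerically, with 0 for false, 0.5 for unknown, and 1 for true; the encoding is standard, not specific to the paper:

```python
# The three Łukasiewicz truth values encoded as 0 (false),
# 0.5 (unknown), 1 (true), with the standard connectives.
def neg(x):      return 1 - x
def conj(x, y):  return min(x, y)
def disj(x, y):  return max(x, y)
def impl(x, y):  return min(1, 1 - x + y)   # Lukasiewicz implication

# The implication is where Lukasiewicz logic differs from the
# Kleene-based Fitting semantics: "unknown implies unknown" is true
# here, but unknown under Kleene's strong three-valued tables.
print(impl(0.5, 0.5))   # 1.0
```

This difference in the implication is precisely the point at issue in the suppression-task analysis.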
Extracting reduced logic programs from artificial neural networks
- Applied Intelligence
, 2010
"... Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis o ..."
Abstract
-
Cited by 7 (2 self)
- Add to MetaCart
(Show Context)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simplicity is understood in some clearly defined and meaningful way.
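As a toy illustration of the extraction problem (a sketch, not the paper's algorithm), one can treat a trained network as a black-box Boolean function, read off one rule per positive input vector, and then greedily drop literals that do not affect the output; the function and atom names below are hypothetical:

```python
from itertools import product

# A stand-in for a trained network: a black-box boolean function of
# three named inputs (names are hypothetical).
inputs = ["a", "b", "c"]
def net(v):                       # v: dict mapping atom -> 0/1
    return int(v["a"] and (v["b"] or not v["c"]))

def covered(clause):
    # All input assignments consistent with a clause (set of (atom, val)).
    fixed = dict(clause)
    free = [x for x in inputs if x not in fixed]
    for bits in product([0, 1], repeat=len(free)):
        yield {**fixed, **dict(zip(free, bits))}

def reduce_clause(clause):
    # Greedily drop literals while the clause still forces output 1.
    for lit in list(clause):
        trial = clause - {lit}
        if all(net(v) for v in covered(trial)):
            clause = trial
    return clause

rules = set()
for bits in product([0, 1], repeat=len(inputs)):
    v = dict(zip(inputs, bits))
    if net(v):
        rules.add(frozenset(reduce_clause(set(v.items()))))
for r in sorted(map(sorted, rules)):
    print("out <-", ", ".join(f"{'not ' if val == 0 else ''}{atom}"
                              for atom, val in r))
```

On this example the greedy reduction recovers the two prime implicants of the target function (`out <- a, b` and `out <- a, not c`); the paper studies how such simplicity can be defined and guaranteed for programs extracted from real networks.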
Logics and networks for human reasoning
- In ICANN’09
, 2009
"... Abstract We propose to model human reasoning tasks using completed logic programs interpreted under the three-valued Lukasiewicz semantics. Given an appropriate immediate consequence operator, completed logic programs admit a least model, which can be computed by iterating the consequence operator. ..."
Abstract
-
Cited by 6 (3 self)
- Add to MetaCart
(Show Context)
We propose to model human reasoning tasks using completed logic programs interpreted under the three-valued Łukasiewicz semantics. Given an appropriate immediate consequence operator, completed logic programs admit a least model, which can be computed by iterating the consequence operator. Reasoning is then performed with respect to the least model. The approach is realized in a connectionist setting.
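The least-model computation the abstract describes can be made concrete for a tiny propositional program; names and syntax are illustrative, and the three truth values are encoded as True, False, and None (unknown):

```python
# Computing the least model of a small completed program by iterating
# a three-valued immediate consequence operator. Program and atom
# names are illustrative, not from the paper.
program = {
    # head: list of bodies; a body is a list of (atom, positive?) literals
    "she_studies": [[("essay_due", True)]],
    "essay_due":   [[]],                    # a fact has an empty body
}

def lit_val(interp, atom, pos):
    v = interp.get(atom)                    # None = unknown
    return v if pos or v is None else not v

def body_val(interp, body):
    vals = [lit_val(interp, a, p) for a, p in body]
    if all(v is True for v in vals):   return True
    if any(v is False for v in vals):  return False
    return None

def step(interp):
    new = {}
    for head, bodies in program.items():
        vals = [body_val(interp, b) for b in bodies]
        if any(v is True for v in vals):     new[head] = True
        elif all(v is False for v in vals):  new[head] = False
        # otherwise the head stays unknown (absent from new)
    return new

interp = {}
while (nxt := step(interp)) != interp:      # iterate to the least fixed point
    interp = nxt
print(interp)    # {'she_studies': True, 'essay_due': True}
```

Iteration stops at the least fixed point; atoms the program says nothing about simply remain unknown.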
Unification Neural Networks: Unification by Error-Correction Learning
"... We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network c ..."
Abstract
-
Cited by 4 (4 self)
- Add to MetaCart
(Show Context)
We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network corresponds to a single iteration of the unification algorithm. We present this result together with a library of learning functions and examples, fully formalised in the MATLAB Neural Network Toolbox.
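For reference, the conventional unification algorithm that these networks simulate can be sketched as follows (in Python rather than MATLAB; the term encoding is illustrative):

```python
# Conventional first-order unification. Terms are tuples
# ("f", arg, ...) for function applications, strings starting with an
# uppercase letter for variables, and other strings for constants.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to the term's current representative.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst={}):
    # Returns a most general unifier (a substitution), or None on failure.
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return None if occurs(t, s, subst) else {**subst, t: s}
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None                       # constant or functor clash

# f(X, g(a)) unified with f(b, g(Y)) yields {X: b, Y: a}
print(unify(("f", "X", ("g", "a")), ("f", "b", ("g", "Y"))))
```

The occurs check prevents binding a variable to a term containing it; the paper's point is that each iteration of this loop can be realised as one error-correction step of a one-layer network.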
The grand challenges and myths of neural-symbolic computation
- Recurrent Neural Networks- Models, Capacities, and Applications, number 08041 in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2008. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl
"... Abstract. The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semanti ..."
Abstract
-
Cited by 2 (0 self)
- Add to MetaCart
(Show Context)
The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semantically sound computational cognitive models in artificial intelligence, computer science, and cognitive science. Over the last decades, results regarding the computation and learning of classical reasoning within neural networks have been promising. Nonetheless, there still remains much to be done. Artificial intelligence, cognitive science, and computer science are strongly based on several non-classical reasoning formalisms, methodologies, and logics. In knowledge representation, distributed systems, hardware design, theorem proving, and systems specification and verification, classical and non-classical logics have had a great impact on theory and real-world applications. Several challenges for neural-symbolic computation are pointed out, in particular for classical and non-classical computation in connectionist systems. We also analyse myths about neural-symbolic computation and shed new light on them in view of recent research advances.
Advanced Petri Nets and the Fluent Calculus
"... Abstract. In this paper we discuss conjunctive planning problems in the context of the fluent calculus and Petri nets. We show that both formalisms are equivalent in solving these problems. Thereafter, we ex-tend actions to contain preconditions as well as obstacles. This requires to extend the flue ..."
Abstract
- Add to MetaCart
(Show Context)
In this paper we discuss conjunctive planning problems in the context of the fluent calculus and Petri nets. We show that both formalisms are equivalent in solving these problems. Thereafter, we extend actions to contain preconditions as well as obstacles. This requires extending the fluent calculus as well as Petri nets. Again, we show that both extended formalisms are equivalent.
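The Petri-net half of the correspondence can be sketched with markings as multisets of fluents; the blocks-world fluent names below are illustrative, and the obstacles the paper adds are omitted:

```python
from collections import Counter

# A marking is a multiset of fluents; an action with precondition pre
# and effect post is a transition consuming pre and producing post.
# Fluent and action names are hypothetical.
def enabled(marking, pre):
    return all(marking[f] >= n for f, n in pre.items())

def fire(marking, pre, post):
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    return marking - Counter(pre) + Counter(post)

# Blocks-world style step: pick up a block from the table.
marking = Counter({"on_table(a)": 1, "empty_hand": 1})
pickup = (Counter({"on_table(a)": 1, "empty_hand": 1}),   # precondition
          Counter({"holding(a)": 1}))                      # effect
print(fire(marking, *pickup))    # Counter({'holding(a)': 1})
```

A conjunctive planning problem then asks for a firing sequence transforming an initial marking into one containing the goal multiset.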