Results 1–10 of 11
Connectionist Model Generation: A First-Order Approach
, 2007
Cited by 18 (4 self)
Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, as expressed e.g. by means of first-order predicate logic, it is not at all obvious what neural-symbolic systems should look like such that they are truly connectionist, are able to learn, and at the same time allow for a declarative reading and logical reasoning. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feed-forward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to reason over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
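At the semantic level, the model that the core method's network computes is the least fixed point of the immediate-consequence operator TP. A minimal, purely symbolic sketch of that fixed-point iteration for a propositional program (the program encoding and all names are illustrative assumptions, not the authors' code):

```python
# Hypothetical sketch of "model generation" at the symbolic level:
# iterate the immediate-consequence operator T_P to its least fixed point.

def tp(program, interpretation):
    """One application of T_P: derive every head whose body holds."""
    return {head for head, body in program if body <= interpretation}

def least_model(program):
    """Iterate T_P from the empty interpretation until a fixed point."""
    model = set()
    while True:
        nxt = tp(program, model)
        if nxt == model:
            return model
        model = nxt

# A definite program as (head, body) pairs:  p <- q, r.   q <- .   r <- q.
prog = [("p", {"q", "r"}), ("q", set()), ("r", {"q"})]
print(sorted(least_model(prog)))  # expect: ['p', 'q', 'r']
```

The core method's contribution is precisely that a recurrent network with a feed-forward core performs this iteration connectionistically, rather than symbolically as above.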
The core method: Connectionist model generation
 In Proceedings of the 16th International Conference on Artificial Neural Networks (ICANN), 2006
Cited by 9 (4 self)
Abstract. Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes, it is not at all obvious what neural-symbolic systems should look like such that they are truly connectionist and at the same time allow for a declarative reading. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feed-forward core. After an introduction to the core method, this paper focuses on possible connectionist representations of structured objects and their use in structure-sensitive reasoning tasks.
Extracting reduced logic programs from artificial neural networks
 Applied Intelligence
, 2010
Cited by 7 (2 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simplicity is understood in some clearly defined and meaningful way.
Logics and networks for human reasoning
 In ICANN’09
Cited by 5 (2 self)
Abstract. We propose to model human reasoning tasks using completed logic programs interpreted under the three-valued Łukasiewicz semantics. Given an appropriate immediate consequence operator, completed logic programs admit a least model, which can be computed by iterating the consequence operator. Reasoning is then performed with respect to the least model. The approach is realized in a connectionist setting.
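A minimal sketch of the kind of three-valued immediate-consequence iteration described here, with truth values T/F/U. The operator is a simplified illustration under assumed conventions (an atom becomes true if some clause body is true, false if all bodies are false), not the paper's exact definition:

```python
# Hedged sketch: immediate-consequence steps over three truth values,
# in the spirit of completed programs under a three-valued semantics.

T, F, U = "T", "F", "U"

def body_value(body, val):
    """Conjunction of literals; ("not", a) is the negation of atom a."""
    vals = []
    for lit in body:
        if isinstance(lit, tuple) and lit[0] == "not":
            vals.append({T: F, F: T, U: U}[val.get(lit[1], U)])
        else:
            vals.append(val.get(lit, U))
    if F in vals:
        return F
    return U if U in vals else T

def step(program, val):
    """One application: true if some body is true, false if all are false."""
    new = {}
    for atom, bodies in program.items():
        bvals = [body_value(b, val) for b in bodies]
        if T in bvals:
            new[atom] = T
        elif all(v == F for v in bvals):
            new[atom] = F
        else:
            new[atom] = U
    return new

# Completed program:  p <- not q.   q has no clauses (hence false).
prog = {"p": [[("not", "q")]], "q": []}
val = {}
for _ in range(3):
    val = step(prog, val)
print(val)  # {'p': 'T', 'q': 'F'}
```

Note how the atom without defining clauses becomes false under completion, which in turn makes p true after one more iteration; the least model is reached in finitely many steps.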
Unification Neural Networks: Unification by Error-Correction Learning
Cited by 4 (4 self)
We show that the conventional first-order unification algorithm can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of network adaptation corresponds to a single iteration of the unification algorithm. We present this result together with a library of learning functions and examples fully formalised in the MATLAB Neural Network Toolbox.
Connectionist Representation of Multi-Valued Logic Programs
Cited by 3 (3 self)
Hölldobler and Kalinke showed how, given a propositional logic program P, a 3-layer feed-forward artificial neural network may be constructed, using only binary threshold units, which can compute the familiar immediate-consequence operator TP associated with P. In this chapter, essentially these results are established for a class of logic programs which can handle many-valued logics, constraints and uncertainty; these programs therefore represent a considerable extension of conventional propositional programs. The work of the chapter falls into two parts. In the first of these, the programs considered extend the syntax of conventional logic programs by allowing elements of quite general algebraic structures to be present in clause bodies. Such programs include many-valued logic programs and semiring-based constraint logic programs. In the second part, the programs considered are bilattice-based annotated logic programs in which body literals are annotated by elements drawn from bilattices. These programs are well-suited to handling uncertainty. Appropriate semantic operators are defined for the programs considered in both parts of the chapter, and it is shown that one may construct …
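The Hölldobler–Kalinke construction referred to above can be sketched symbolically: one binary threshold unit per clause in the hidden layer and one per atom in the input and output layers, so that a single forward pass computes one application of TP. The encoding below (program representation, threshold simulation) is an illustrative assumption, not the chapter's code:

```python
# Hedged sketch of a Hölldobler–Kalinke-style network for a propositional
# program with negation, simulated with binary threshold units.

def build_network(program, atoms):
    """program: list of (head, pos_body, neg_body) clauses."""
    def hidden(unit_in):
        # A clause unit fires iff every positive body atom is on (weight +1)
        # and no negative body atom is on (weight -1), threshold = |pos_body|.
        out = []
        for head, pos, neg in program:
            s = sum(unit_in[a] for a in pos) - sum(unit_in[a] for a in neg)
            out.append((head, 1 if s >= len(pos) else 0))
        return out
    def forward(unit_in):
        # The output unit for atom a fires iff some clause unit with head a fires.
        fired = hidden(unit_in)
        return {a: 1 if any(h == a and f for h, f in fired) else 0
                for a in atoms}
    return forward

atoms = ["p", "q", "r"]
# Program:  p <- q, not r.   q <- .
net = build_network([("p", ["q"], ["r"]), ("q", [], [])], atoms)
state = {a: 0 for a in atoms}
state = net(state)   # one forward pass = one application of T_P
state = net(state)
print(state)  # {'p': 1, 'q': 1, 'r': 0}
```

Feeding the output layer back to the input layer, as in the recurrent version of the construction, iterates TP toward a fixed point.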
Unification by Error-Correction
Cited by 2 (2 self)
The paper formalises Robinson's famous algorithm of first-order unification by means of error-correction learning in neural networks. The significant achievement of this formalisation is that, for the first time, the first-order unification of two arbitrary first-order atoms is performed by a finite (two-neuron) network.
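For reference, the symbolic algorithm being encoded is Robinson-style unification with occurs check. A compact sketch under assumed term conventions (variables as capitalised strings, compound terms as tuples; all representation choices are illustrative):

```python
# Hedged sketch of Robinson-style first-order unification, the symbolic
# algorithm that the paper simulates by error-correction learning.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Follow variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    t = walk(t, s)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, s) for a in t[1:])
    return False

def unify(a, b, s=None):
    """Return a most general unifier as a dict, or None on failure."""
    if s is None:
        s = {}
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return None if occurs(a, b, s) else {**s, a: b}
    if is_var(b):
        return unify(b, a, s)
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

# Unify f(X, g(a)) with f(b, g(Y)).
mgu = unify(("f", "X", ("g", "a")), ("f", "b", ("g", "Y")))
print(mgu)  # {'X': 'b', 'Y': 'a'}
```

The paper's claim is that each iteration of a loop like the one in `unify` corresponds to one adaptation step of the two-neuron network.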
Learning from Inconsistencies in an Integrated Cognitive Architecture
Cited by 2 (0 self)
Abstract. Whereas symbol-based systems, like deductive reasoning devices, knowledge bases, planning systems, or tools for solving constraint satisfaction problems, presuppose (more or less) the consistency of data and the consistency of the results of internal computations, this is far from being plausible in real-world applications, …
The Grand Challenges and Myths of Neural-Symbolic Computation
Cited by 2 (0 self)
Abstract. The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semantically sound computational cognitive models in artificial intelligence, computer science and cognitive science. Over the last decades, results regarding the computation and learning of classical reasoning within neural networks have been promising. Nonetheless, there still remains much to be done. Artificial intelligence, cognitive science and computer science rely strongly on several non-classical reasoning formalisms, methodologies and logics. In knowledge representation, distributed systems, hardware design, theorem proving, and systems specification and verification, classical and non-classical logics have had a great impact on theory and real-world applications. Several challenges for neural-symbolic computation are pointed out, in particular for classical and non-classical computation in connectionist systems. We also analyse myths about neural-symbolic computation and shed new light on them in view of recent research advances.