Results 1–10 of 10
The Core Method: Connectionist model generation for ...
In Proceedings of ICANN’06, 2006
Abstract

Cited by 6 (3 self)
Research into the processing of symbolic knowledge by means of connectionist networks aims at systems which combine the declarative nature of logic-based artificial intelligence with the robustness and trainability of artificial neural networks. This endeavour has been addressed quite successfully in the past for propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended beyond propositional logic, it is not at all obvious what neural-symbolic systems should look like such that they are truly connectionist and allow for a declarative reading at the same time. The Core Method – which we present here – aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feedforward core. These networks can be trained by standard algorithms to learn symbolic knowledge, and they can be used for reasoning about this knowledge.
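The recurrent loop this abstract describes can be sketched in a few lines of Python. The representation and names below are my own, not the paper's: the feedforward core computes the immediate-consequence operator T_P, and the recurrent connections feed its output back as the next input until a fixed point, i.e. a supported model, is reached.

```python
# A minimal sketch (my own representation, not the paper's) of the
# recurrent part of the Core Method: a feedforward core computes the
# immediate-consequence operator T_P, and the recurrent connections
# feed the output back as the next input until a fixed point -- a
# supported model of the program -- is reached.

def tp(interpretation, program):
    """One pass through the feedforward core: T_P(I)."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

def generate_model(program, max_steps=100):
    """Iterate the core recurrently from the empty interpretation."""
    interp = set()
    for _ in range(max_steps):
        nxt = tp(interp, program)
        if nxt == interp:          # stable network state: a model
            return interp
        interp = nxt
    raise RuntimeError("no fixed point within max_steps")

# Example program: p <- . ; q <- p ; r <- q.
program = [("p", []), ("q", ["p"]), ("r", ["q"])]
print(sorted(generate_model(program)))  # ['p', 'q', 'r']
```

For definite programs as above, this iteration is guaranteed to converge to the least model; the papers in this listing extend the idea to trained cores and richer logics.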
Connectionist Model Generation: A First-Order Approach
2007
Abstract

Cited by 6 (2 self)
Knowledge-based artificial neural networks have been applied quite successfully to propositional knowledge representation and reasoning tasks. However, as soon as these tasks are extended to structured objects and structure-sensitive processes as expressed, e.g., by means of first-order predicate logic, it is not at all obvious what neural-symbolic systems would look like such that they are truly connectionist, are able to learn, and allow for a declarative reading and logical reasoning at the same time. The core method aims at such an integration. It is a method for connectionist model generation using recurrent networks with a feedforward core. We show in this paper how the core method can be used to learn first-order logic programs in a connectionist fashion, such that the trained network is able to do reasoning over the acquired knowledge. We also report on experimental evaluations which show the feasibility of our approach.
Connectionist Representation of Multi-Valued Logic Programs
Abstract

Cited by 3 (3 self)
Hölldobler and Kalinke showed how, given a propositional logic program P, a 3-layer feedforward artificial neural network may be constructed, using only binary threshold units, which can compute the familiar immediate-consequence operator T_P associated with P. In this chapter, essentially these results are established for a class of logic programs which can handle many-valued logics, constraints and uncertainty; these programs therefore represent a considerable extension of conventional propositional programs. The work of the chapter falls into two parts. In the first of these, the programs considered extend the syntax of conventional logic programs by allowing elements of quite general algebraic structures to be present in clause bodies. Such programs include many-valued logic programs and semiring-based constraint logic programs. In the second part, the programs considered are bilattice-based annotated logic programs in which body literals are annotated by elements drawn from bilattices. These programs are well-suited to handling uncertainty. Appropriate semantic operators are defined for the programs considered in both parts of the chapter, and it is shown that one may construct ...
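As a toy rendering of the Hölldobler–Kalinke construction referenced above (names and representation are mine): each hidden unit stands for one clause and is a binary threshold unit whose weights are +1 for positive body atoms and -1 for negated ones, with a threshold equal to the number of positive body atoms, so it fires exactly when the clause body is satisfied; each output unit fires iff some clause with its head atom fired.

```python
# A toy rendering (names and representation are mine) of the
# Hölldobler-Kalinke construction: a 3-layer network of binary
# threshold units computing the immediate-consequence operator T_P.
# A clause is (head, positive_body, negative_body).

def step(x):                      # binary threshold activation
    return 1 if x >= 0 else 0

def make_tp_network(program, atoms):
    def tp(interpretation):
        inputs = {a: (1 if a in interpretation else 0) for a in atoms}
        # Hidden layer: one threshold unit per clause; weights +1 for
        # positive body atoms, -1 for negated ones, threshold len(pos).
        hidden = [
            step(sum(inputs[a] for a in pos)
                 - sum(inputs[a] for a in neg)
                 - len(pos))      # fires iff all pos on, all neg off
            for _, pos, neg in program
        ]
        # Output layer: an atom unit fires iff some clause with that
        # head fired (weights +1, threshold 1).
        return {a for a in atoms
                if any(h and head == a
                       for h, (head, _, _) in zip(hidden, program))}
    return tp

# Example: p <- . ; q <- p ; r <- q, not p.
program = [("p", [], []), ("q", ["p"], []), ("r", ["q"], ["p"])]
tp = make_tp_network(program, {"p", "q", "r"})
print(sorted(tp({"p"})))          # ['p', 'q']
```

The chapter's contribution is to generalise exactly this two-valued construction to many-valued and annotated programs; the sketch covers only the classical base case.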
Extracting reduced logic programs from artificial neural networks
Applied Intelligence, 2010
Abstract

Cited by 3 (1 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simplicity is understood in some clearly defined and meaningful way.
Learning from Inconsistencies in an Integrated Cognitive Architecture
Abstract

Cited by 2 (0 self)
Whereas symbol-based systems, like deductive reasoning devices, knowledge bases, planning systems, or tools for solving constraint satisfaction problems, presuppose (more or less) the consistency of data and the consistency of results of internal computations, this is far from being plausible in real-world applications, ...
Unification by Error-Correction
Abstract

Cited by 1 (1 self)
The paper formalises Robinson's famous algorithm of first-order unification by means of error-correction learning in neural networks. The significant achievement of this formalisation is that, for the first time, the first-order unification of two arbitrary first-order atoms is performed by a finite (two-neuron) network.
Unification Neural Networks: Unification by Error-Correction Learning
Abstract

Cited by 1 (1 self)
We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network corresponds to a single iteration of the unification algorithm. We present this result together with a library of learning functions and examples fully formalised in the MATLAB Neural Network Toolbox.
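For reference, here is a compact sketch of the symbolic algorithm these networks simulate (Robinson's first-order unification), not of the neural simulation itself. The term representation is my own: variables are capitalised strings, constants are lowercase strings, and compound terms are (functor, [args]) pairs.

```python
# Robinson-style first-order unification over a home-grown term
# representation: variables are capitalised strings, constants are
# lowercase strings, compound terms are (functor, [args]) pairs.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings until a non-bound term is reached."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(s, t):
    """Return a most general unifier of s and t, or None on clash."""
    subst, stack = {}, [(s, t)]
    while stack:                      # one loop pass ~ one iteration
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            subst[a] = b              # (occurs check omitted)
        elif is_var(b):
            subst[b] = a
        elif (isinstance(a, tuple) and isinstance(b, tuple)
              and a[0] == b[0] and len(a[1]) == len(b[1])):
            stack.extend(zip(a[1], b[1]))
        else:
            return None               # functor or arity clash
    return subst

# Unify f(X, g(Y)) with f(a, g(b)):
mgu = unify(("f", ["X", ("g", ["Y"])]), ("f", ["a", ("g", ["b"])]))
print(sorted(mgu.items()))            # [('X', 'a'), ('Y', 'b')]
```

Each pass through the loop disassembles one pair of subterms, which is the step the abstract says corresponds to a single time-step of network adaptation.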
Invited Keynote Talk: Modeling Reasoning Mechanisms by Neural-Symbolic Learning
Abstract
Currently, neural-symbolic integration covers – at least in theory – a broad range of types of reasoning: neural representations (and partially also neural-inspired learning approaches) exist for modeling propositional logic (programs), whole classes of many-valued logics, modal logic, temporal logic, and epistemic logic, to mention just some important examples [2,4]. Besides these propositional variants of logical theories, first proposals also exist for approximating "infinity" with neural means, in particular theories of first-order logic. An example is the core method, intended to learn the semantics of the single-step operator T_P for first-order logic (programs) with a neural network [1]. Another example is the neural approximation of variable-free first-order logic by learning representations of arrow constructions (which represent logical expressions) in R^n using Topos constructions [3]. Although these examples show a certain success of neural-symbolic learning and reasoning research, there are several non-trivial challenges. First, there exist ...
Perspectives of Neuro-Symbolic Integration (Extended Abstract)
Abstract
There is an obvious tension between symbolic and subsymbolic theories, because both show complementary strengths and weaknesses in corresponding applications and underlying methodologies. The resulting gap in the foundations and the applicability of these approaches is theoretically unsatisfactory and practically undesirable. We sketch a theory that bridges this gap between symbolic and subsymbolic approaches by the introduction of a Topos-based semi-symbolic level used for coding logical first-order expressions in a homogeneous framework. This semi-symbolic level can be used for neural learning of logical first-order theories. Besides a presentation of the general idea of the framework, we sketch some challenges and important open problems for future research with respect to the presented approach and the field of neuro-symbolic integration in general.
Keywords: Neuro-Symbolic Integration, Topos Theory, First-Order Logic
The Grand Challenges and Myths of Neural-Symbolic Computation
Abstract
The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semantically sound computational cognitive models in artificial intelligence, computer science and cognitive science. Over the last decades, results regarding the computation and learning of classical reasoning within neural networks have been promising. Nonetheless, there still remains much to be done. Artificial intelligence, cognitive science and computer science are strongly based on several non-classical reasoning formalisms, methodologies and logics. In knowledge representation, distributed systems, hardware design, theorem proving, and systems specification and verification, classical and non-classical logics have had a great impact on theory and real-world applications. Several challenges for neural-symbolic computation are pointed out, in particular for classical and non-classical computation in connectionist systems. We also analyse myths about neural-symbolic computation and shed new light on them considering recent research advances.