Results 1–10 of 16
Approximating the Semantics of Logic Programs by Recurrent Neural Networks
"... In [18] we have shown how to construct a 3layered recurrent neural network that computes the fixed point of the meaning function TP of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first order case. We define a no ..."
Abstract

Cited by 54 (9 self)
In [18] we have shown how to construct a 3-layered recurrent neural network that computes the fixed point of the meaning function T_P of a given propositional logic program P, which corresponds to the computation of the semantics of P. In this article we consider the first-order case. We define a notion of approximation for interpretations and prove that there exists a 3-layered feed-forward neural network that approximates the calculation of T_P for a given first-order acyclic logic program P with an injective level mapping arbitrarily well. Extending the feed-forward network by recurrent connections, we obtain a recurrent neural network whose iteration approximates the fixed point of T_P. This result is proven by taking advantage of the fact that for acyclic logic programs the function T_P is a contraction mapping on a complete metric space defined by the interpretations of the program. Mapping this space to the metric space ℝ with Euclidean distance, a real-valued function f_P can be defined which corresponds to T_P and is continuous as well as a contraction. Consequently, it can be approximated by an appropriately chosen class of feed-forward neural networks.
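The abstract above centers on iterating the immediate-consequence operator T_P to a fixed point. A minimal sketch of that iteration, for the propositional case only: the clause encoding below (pairs of a head atom and a list of body atoms) is an assumed representation for illustration, not the paper's own construction.

```python
# Sketch (assumed encoding): a definite propositional program is a list of
# (head, body) clauses; an interpretation is a set of atoms assumed true.

def tp(program, interpretation):
    """One application of T_P: heads of all clauses whose body is satisfied."""
    return {head for head, body in program if all(b in interpretation for b in body)}

def fixed_point(program):
    """Iterate T_P from the empty interpretation until it stabilises."""
    current = set()
    while True:
        nxt = tp(program, current)
        if nxt == current:
            return current
        current = nxt

# p.   q :- p.   r :- q, p.
program = [("p", []), ("q", ["p"]), ("r", ["q", "p"])]
print(sorted(fixed_point(program)))  # -> ['p', 'q', 'r']
```

The recurrent networks in the paper realise exactly this kind of loop: one pass through the network computes T_P, and feeding the output back as input iterates it toward the fixed point.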
Logic Programs and Connectionist Networks
 Journal of Applied Logic
, 2004
"... One facet of the question of integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic p ..."
Abstract

Cited by 43 (16 self)
One facet of the question of the integration of Logic and Connectionist Systems, and how these can complement each other, concerns the points of contact, in terms of semantics, between neural networks and logic programs. In this paper, we show that certain semantic operators for propositional logic programs can be computed by feedforward connectionist networks, and that the same semantic operators for first-order normal logic programs can be approximated by feedforward connectionist networks. Turning the networks into recurrent ones allows one also to approximate the models associated with the semantic operators. Our methods depend on a well-known theorem of Funahashi, and necessitate the study of when Funahashi's theorem can be applied, and also the study of what means of approximation are appropriate and significant.
The Connectionist Inductive Learning and Logic Programming System
, 1999
"... This paper presents the Connectionist Inductive Learning and Logic Programming System (CIL²P). CIL²P is a new massively parallel computational model based on a feedforward Artificial Neural Network that integrates inductive learning from examples and background knowledge, with deductive learning ..."
Abstract

Cited by 22 (6 self)
This paper presents the Connectionist Inductive Learning and Logic Programming System (CIL²P). CIL²P is a new massively parallel computational model based on a feedforward Artificial Neural Network that integrates inductive learning from examples and background knowledge with deductive learning from Logic Programming. Starting with the background knowledge represented by a propositional logic program, a translation algorithm is applied, generating a neural network that can be trained with examples. The results obtained with this refined network can be explained by extracting a revised logic program from it. Moreover, the neural network computes the stable model of the logic program inserted into it as background knowledge, or learned with the examples, thus functioning as a parallel system for Logic Programming. We have successfully applied CIL²P to two real-world problems of computational biology, specifically DNA sequence analyses. Comparisons with the results obtained by some of the main neural, symbolic, and hybrid inductive learning systems, using the same domain knowledge, show the effectiveness of CIL²P.
Dimensions of Neural-Symbolic Integration – A Structured Survey
 We Will Show Them: Essays in Honour of Dov Gabbay
, 2005
"... Introduction Research on integrated neuralsymbolic systems has made significant progress in the recent past. In particular the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the ..."
Abstract

Cited by 21 (6 self)
Introduction Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. Already in the pioneering days of computational models of neural cognition, the question was raised of how symbolic knowledge can be represented and dealt with within neural networks. The landmark paper [McCulloch and Pitts, 1943] provides fundamental insights into how propositional logic can be processed using simple artificial neural networks. Within the following decades, however, the topic did not receive much attention, as research in arti...
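The McCulloch-Pitts insight mentioned in this abstract is that a single binary threshold unit suffices to compute each propositional connective, so small networks of such units can process propositional formulas. A hedged sketch, with illustrative (not canonical) weights and thresholds:

```python
# Sketch of McCulloch-Pitts style threshold units; the particular weights and
# thresholds below are one illustrative choice among many.

def unit(weights, threshold, inputs):
    """Binary threshold neuron: fires (1) iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b): return unit([1, 1], 2, [a, b])
def OR(a, b):  return unit([1, 1], 1, [a, b])
def NOT(a):    return unit([-1], 0, [a])

# A small two-layer net computing (a AND b) OR (NOT c):
def formula(a, b, c):
    return OR(AND(a, b), NOT(c))

print(formula(1, 1, 1))  # -> 1
print(formula(0, 0, 1))  # -> 0
```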
Designing a Counter: Another Case Study of Dynamics and Activation Landscapes in Recurrent Networks
 In Proceedings of KI-97: Advances in Artificial Intelligence
, 1997
"... . Inspired by the work of Elman and Wiles [15] and based on the notions developed within the theory of dynamical systems it is shown how a simple recurrent connectionist network with a single hidden unit implements a counter. This result is exemplified by showing that such a network can be used to r ..."
Abstract

Cited by 21 (4 self)
Inspired by the work of Elman and Wiles [15], and based on notions developed within the theory of dynamical systems, it is shown how a simple recurrent connectionist network with a single hidden unit implements a counter. This result is exemplified by showing that such a network can be used to recognize the context-free language a^n b^n, consisting of strings of some number n of a's followed by the same number n of b's, for n up to 250, where the maximum value of n is restricted only by the computing accuracy of the hardware used. 1 Introduction It is a common hypothesis today that the functionality of the brain and nervous system can be understood within the theory of computation (e.g. [11]); it is widely believed that high-level cognitive tasks require the ability to represent structured objects and to execute structure-sensitive processes on top of these objects (e.g. [3,7]); and it seems that, based on our current data and understanding, connectionist networks in t...
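The counting idea behind this abstract can be sketched without any learned weights: a single real-valued "hidden unit" whose state contracts on each a and expands on each b returns to its start value exactly when the counts match. This is a minimal illustration of the dynamics, not the paper's exact network; as the abstract notes, finite precision bounds the largest recognizable n.

```python
# Sketch: one real-valued state h acts as a counter for a^n b^n.
# Reading 'a' halves h (contraction); reading 'b' doubles it (expansion).

def accepts(string, start=1.0):
    h = start
    seen_b = False
    for symbol in string:
        if symbol == "a":
            if seen_b:            # an 'a' after a 'b': not of the form a^n b^n
                return False
            h *= 0.5              # contracting branch of the dynamics
        elif symbol == "b":
            seen_b = True
            h *= 2.0              # expanding branch of the dynamics
            if h > start:         # more b's than a's seen so far
                return False
        else:
            return False          # alphabet is {a, b}
    return seen_b and h == start  # counts matched, n >= 1

print(accepts("aaabbb"))  # -> True
print(accepts("aabbb"))   # -> False
```

Powers of two are exact in floating point, so for moderate n the comparison `h == start` is reliable; for very large n the state underflows, which is the precision limit the abstract alludes to.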
Logic Programs, Iterated Function Systems, and Recurrent Radial Basis Function Networks
 Journal of Applied Logic
, 2004
"... Graphs of the singlestep operator for firstorder logic programs  displayed in the real plane  exhibit selfsimilar structures known from topological dynamics, i.e. they appear to be fractals, or more precisely, attractors of iterated function systems. We show that this observation can be ..."
Abstract

Cited by 14 (11 self)
Graphs of the single-step operator for first-order logic programs, displayed in the real plane, exhibit self-similar structures known from topological dynamics, i.e. they appear to be fractals, or more precisely, attractors of iterated function systems. We show that this observation can be made mathematically precise. In particular, we give conditions which ensure that those graphs coincide with attractors of suitably chosen iterated function systems, and conditions which allow the approximation of such graphs by iterated function systems or by fractal interpolation. Since iterated function systems can easily be encoded using recurrent radial basis function networks, we eventually obtain connectionist systems which approximate logic programs in the presence of function symbols.
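For readers unfamiliar with iterated function systems: repeatedly applying randomly chosen contractions drives any starting point toward the system's attractor (the "chaos game"). The two affine maps below, whose attractor is the middle-thirds Cantor set, are a generic textbook example; the IFSs in the paper are instead derived from logic programs.

```python
# Sketch: approximating an IFS attractor by random iteration ("chaos game").
import random

def iterate_ifs(maps, point, steps, seed=0):
    """Apply `steps` randomly chosen maps from `maps` to `point`."""
    rng = random.Random(seed)   # seeded for reproducibility
    for _ in range(steps):
        point = rng.choice(maps)(point)
    return point

# Two contractions on [0, 1] whose attractor is the middle-thirds Cantor set:
maps = [lambda x: x / 3.0, lambda x: x / 3.0 + 2.0 / 3.0]

x = iterate_ifs(maps, 0.5, 100)
print(0.0 <= x <= 1.0)  # -> True: the orbit stays in the unit interval
```

After enough steps the orbit lies, up to numerical precision, on the attractor; collecting the visited points plots it. Encoding such maps in recurrent radial basis function networks is the step the paper itself contributes.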
The integration of connectionism and first-order knowledge representation and reasoning as a challenge for artificial intelligence
 In Proceedings of the Third International Conference on Information
, 2006
"... Intelligent systems based on firstorder logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural networking machinery with symbolic knowledge representation and reasoning ..."
Abstract

Cited by 11 (7 self)
Intelligent systems based on first-order logic on the one hand, and on artificial neural networks (also called connectionist systems) on the other, differ substantially. It would be very desirable to combine the robust neural networking machinery with symbolic knowledge representation and reasoning paradigms like logic programming in such a way that the strengths of either paradigm will be retained. Current state-of-the-art research, however, fails by far to achieve this ultimate goal. As one of the main obstacles to be overcome, we perceive the question of how symbolic knowledge can be encoded by means of connectionist systems: satisfactory answers to this will naturally lead the way to knowledge extraction algorithms and to integrated neural-symbolic systems.
Reasoning about Time and Knowledge in Neural-Symbolic Learning Systems
, 2003
"... We show that temporal logic and combinations of temporal logics and modal logics of knowledge can be effectively represented in artificial neural networks. We present a Translation Algorithm from temporal rules to neural networks, and show that the networks compute a fixedpoint semantics of the ..."
Abstract

Cited by 9 (4 self)
We show that temporal logic and combinations of temporal logics and modal logics of knowledge can be effectively represented in artificial neural networks. We present a Translation Algorithm from temporal rules to neural networks, and show that the networks compute a fixed-point semantics of the rules. We also apply the translation to the muddy children puzzle, which has been used as a testbed for distributed multi-agent systems. We provide a complete solution to the puzzle with the use of simple neural networks, capable of reasoning about time and of knowledge acquisition through inductive learning.
Applying the Connectionist Inductive Learning and Logic Programming System to Power System Diagnosis
 In: Proceedings of the IEEE International Conference on Neural Networks (ICNN-97)
, 1997
"... The Connectionist Inductive Learning and Logic Programming System, CIL 2 P, integrates the symbolic and connectionist paradigms of Artificial Intelligence through neural networks that perform massively parallel Logic Programming and inductive learning from examples and background knowledge. This ..."
Abstract

Cited by 4 (3 self)
The Connectionist Inductive Learning and Logic Programming System, CIL²P, integrates the symbolic and connectionist paradigms of Artificial Intelligence through neural networks that perform massively parallel Logic Programming and inductive learning from examples and background knowledge. This work presents an extension of CIL²P that allows the implementation of Extended Logic Programs in Neural Networks. This extension makes CIL²P applicable to problems where the background knowledge is represented in Default Logic. As a case example, we have applied the system to fault diagnosis of a simplified power system generation plant, obtaining good preliminary results.
Corollaries on the fixpoint completion: studying the stable semantics by means of the Clark completion
, 2004
"... The xpoint completion x(P ) of a normal logic program P is a program transformation such that the stable models of P are exactly the models of the Clark completion of x(P ). This is wellknown and was studied by Dung and Kanchanasut [15]. The correspondence, however, goes much further: The Ge ..."
Abstract

Cited by 3 (3 self)
The fixpoint completion fix(P) of a normal logic program P is a program transformation such that the stable models of P are exactly the models of the Clark completion of fix(P). This is well-known and was studied by Dung and Kanchanasut [15]. The correspondence, however, goes much further: the Gelfond-Lifschitz operator of P coincides with the immediate consequence operator of fix(P), as shown by Wendt [51], and even carries over to standard operators used for characterizing the well-founded and the Kripke-Kleene semantics. We will apply this knowledge to the study of the stable semantics, and this will allow us to almost effortlessly derive new results concerning fixed-point and metric-based semantics, and neural-symbolic integration.
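The Gelfond-Lifschitz operator central to this abstract can be sketched directly: given a candidate interpretation M, delete every clause whose negative body intersects M, drop the remaining negative literals, and take the least model of the resulting definite program; M is a stable model iff the operator maps it to itself. The clause encoding (head, positive body, negative body) is an assumption made for this illustration.

```python
# Sketch (assumed encoding): a normal program is a list of
# (head, positive_body, negative_body) clauses; interpretations are sets of atoms.

def least_model(definite_program):
    """Least Herbrand model of a definite program by naive bottom-up iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, body in definite_program:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

def gl(program, m):
    """Gelfond-Lifschitz operator: least model of the reduct of `program` w.r.t. `m`."""
    reduct = [(head, pos) for head, pos, neg in program if not (set(neg) & m)]
    return least_model(reduct)

# p :- not q.   q :- not p.
program = [("p", [], ["q"]), ("q", [], ["p"])]
print(gl(program, {"p"}) == {"p"})  # -> True: {p} is a stable model
```

The paper's point is that on fix(P) this operator becomes an ordinary immediate-consequence operator, which is what makes the stable semantics amenable to the fixed-point and metric techniques of the other entries above.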