Results 1–10 of 17
Dimensions of neural-symbolic integration – a structural survey
 We Will Show Them: Essays in Honour of Dov Gabbay
Abstract

Cited by 25 (8 self)
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to …
A fully connectionist model generator for covered first-order logic programs
 Proceedings of the Twentieth International Joint Conference on Artificial Intelligence (IJCAI-07), Hyderabad, India, Menlo Park CA, AAAI Press (2007) 666–671
, 2007
Abstract

Cited by 12 (5 self)
We present a fully connectionist system for the learning of first-order logic programs and the generation of corresponding models: given a program and a set of training examples, we embed the associated semantic operator into a feedforward network and train the network using the examples. This results in the learning of first-order knowledge, while damaged or noisy data is handled gracefully.
Integrating First-Order Logic Programs and Connectionist Systems – A Constructive Approach
 Proceedings of the IJCAI-05 Workshop on Neural-Symbolic Learning and Reasoning, NeSy’05
, 2005
Abstract

Cited by 11 (7 self)
Significant advances have recently been made concerning the integration of symbolic knowledge representation with artificial neural networks (also called connectionist systems). However, while the integration with propositional paradigms has resulted in applicable systems, the case of first-order knowledge representation has so far hardly proceeded beyond theoretical studies which prove the existence of connectionist systems for approximating first-order logic programs up to any chosen precision.
On approximation of the semantic operators determined by bilattice-based logic programs
 In Proceedings of the Seventh International Workshop on First-Order Theorem Proving (FTP’05)
, 2005
Abstract

Cited by 9 (8 self)
Since their introduction by Ginsberg [8], bilattices have become a well-known algebraic structure for reasoning about the sort of inconsistencies which arise when one formalizes the process of accumulating knowledge. In particular, Fitting [4, 5, 7] introduced quite general consequence operators for logic programs whose semantics are based on four-valued bilattices, and derived their basic properties.
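The smallest non-trivial bilattice underlying Fitting's four-valued semantics is Belnap's FOUR, which can be sketched as follows (this is my own encoding for illustration, not taken from the paper): each value is a pair of "evidence for" and "evidence against" components, which yields a truth ordering and a knowledge ordering at once.

```python
# Belnap's four-valued bilattice FOUR, encoded as pairs
# (evidence_for, evidence_against) of booleans.  This is an
# illustrative encoding, not the paper's formalisation.
NONE  = (False, False)  # no information
TRUE  = (True,  False)
FALSE = (False, True)
BOTH  = (True,  True)   # contradictory information

def k_join(x, y):
    """Knowledge join: accumulate evidence from both sources."""
    return (x[0] or y[0], x[1] or y[1])

def k_meet(x, y):
    """Knowledge meet: keep only evidence both sources share."""
    return (x[0] and y[0], x[1] and y[1])

def t_join(x, y):
    """Truth join (disjunction): more 'for', less 'against'."""
    return (x[0] or y[0], x[1] and y[1])

def t_meet(x, y):
    """Truth meet (conjunction): less 'for', more 'against'."""
    return (x[0] and y[0], x[1] or y[1])
```

For example, `k_join(TRUE, FALSE)` yields `BOTH`: accumulating conflicting evidence produces the inconsistent value, which is exactly the kind of situation the bilattice orderings are designed to track.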
Extracting reduced logic programs from artificial neural networks
 Applied Intelligence
, 2010
Abstract

Cited by 7 (2 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks, in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where "simple" is understood in some clearly defined and meaningful way.
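The simplest (naive) form of such extraction can be sketched as follows; this is an illustration of exhaustive "pedestrian" extraction for a tiny network, not the paper's reduction algorithm:

```python
from itertools import product

# Naive rule extraction for a tiny network over propositional atoms:
# enumerate all input interpretations, record which ones activate the
# output unit, and read the result off as clause bodies.  This is an
# illustrative sketch, not the paper's algorithm for *reduced* programs.
def extract_clauses(net, atoms):
    """Return one clause body per activating input interpretation."""
    clauses = []
    for bits in product([0, 1], repeat=len(atoms)):
        if net(bits):
            body = [a if b else f"not {a}" for a, b in zip(atoms, bits)]
            clauses.append(" & ".join(body))
    return clauses

# Toy "trained network": fires iff p is true and q is false.
net = lambda x: x[0] == 1 and x[1] == 0
print(extract_clauses(net, ["p", "q"]))  # ['p & not q']
```

Exhaustive enumeration is exponential in the number of atoms and generally yields redundant clauses, which is precisely why the paper is concerned with extracting programs that are as simple as possible.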
Unification Neural Networks: Unification by Error-Correction Learning
Abstract

Cited by 4 (4 self)
We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network corresponds to a single iteration of the unification algorithm. We present this result together with a library of learning functions and examples, fully formalised in the MATLAB Neural Network Toolbox.
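For reference, the conventional first-order unification algorithm that these networks simulate can be sketched as follows (my own Python rendering with a simple term encoding, not the paper's MATLAB formalisation):

```python
# Classical first-order unification.  Variables are strings starting
# with an uppercase letter; compound terms are tuples (functor, arg1, ...);
# other strings are constants.  Illustrative encoding, not the paper's.
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to the representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    """Occurs check: does variable v occur in term t under subst?"""
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def unify(s, t, subst=None):
    """Return a most general unifier extending subst, or None on failure."""
    subst = dict(subst or {})
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if isinstance(s, tuple) and isinstance(t, tuple) \
            and s[0] == t[0] and len(s) == len(t):
        for a, b in zip(s[1:], t[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, g(a)) unifies with f(b, g(Y)) under {X: b, Y: a}
print(unify(("f", "X", ("g", "a")), ("f", "b", ("g", "Y"))))
```

The paper's claim is then that each iteration of this loop corresponds to one error-correction step of the network's adaptation.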
Logic programs with uncertainty: neural computations and automated reasoning
 In Proc. CiE’06
, 2006
Abstract

Cited by 3 (3 self)
Bilattice-based annotated logic programs (BAPs) form a very general class of programs which can handle uncertainty and conflicting information. We use BAPs to integrate two alternative paradigms of computation: specifically, we build learning artificial neural networks which can model iterations of the semantic operator associated with each BAP, and we introduce sound and complete SLD-resolution for this class of programs. Keywords: logic programs, artificial neural networks, SLD-resolution.
Connectionist Representation of Multi-Valued Logic Programs
Abstract

Cited by 3 (3 self)
Hölldobler and Kalinke showed how, given a propositional logic program P, a 3-layer feedforward artificial neural network may be constructed, using only binary threshold units, which can compute the familiar immediate-consequence operator TP associated with P. In this chapter, essentially these results are established for a class of logic programs which can handle many-valued logics, constraints and uncertainty; these programs therefore represent a considerable extension of conventional propositional programs. The work of the chapter falls into two parts. In the first of these, the programs considered extend the syntax of conventional logic programs by allowing elements of quite general algebraic structures to be present in clause bodies. Such programs include many-valued logic programs and semiring-based constraint logic programs. In the second part, the programs considered are bilattice-based annotated logic programs in which body literals are annotated by elements drawn from bilattices. These programs are well-suited to handling uncertainty. Appropriate semantic operators are defined for the programs considered in both parts of the chapter, and it is shown that one may construct …
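The Hölldobler–Kalinke construction mentioned at the start of this abstract can be illustrated for the propositional case (a minimal sketch in my own encoding, assuming clauses of the form head ← positive atoms, negated atoms):

```python
# Sketch of the Hölldobler-Kalinke construction: a 3-layer network of
# binary threshold units computing the immediate-consequence operator
# TP of a propositional program.  Clauses are triples
# (head, positive_body_atoms, negative_body_atoms).  Illustrative only.
def threshold(x, t):
    return 1 if x >= t else 0

def tp_network(program, atoms):
    """Build a function mapping an interpretation (dict atom -> 0/1) to TP(I)."""
    def tp(interp):
        # Hidden layer: one threshold unit per clause, with weight +1
        # for each positive body literal and -1 for each negative one.
        # The unit fires iff all positive atoms are on and all negative
        # atoms are off; the threshold len(pos) - 0.5 enforces exactly that.
        hidden = []
        for head, pos, neg in program:
            net_in = sum(interp[a] for a in pos) - sum(interp[a] for a in neg)
            hidden.append((head, threshold(net_in, len(pos) - 0.5)))
        # Output layer: an atom unit fires iff some clause unit with
        # that atom as head fired (weight 1, threshold 0.5).
        return {a: threshold(sum(h for hd, h in hidden if hd == a), 0.5)
                for a in atoms}
    return tp

# P = { p <- q, not r ;  q <- }  over atoms p, q, r
program = [("p", ["q"], ["r"]), ("q", [], [])]
tp = tp_network(program, ["p", "q", "r"])
print(tp({"p": 0, "q": 0, "r": 0}))  # {'p': 0, 'q': 1, 'r': 0}
print(tp({"p": 0, "q": 1, "r": 0}))  # {'p': 1, 'q': 1, 'r': 0}
```

Iterating `tp` from the empty interpretation reaches a fixed point here after two steps, mirroring the fixed-point computation of TP; the chapter's contribution is extending this kind of construction to many-valued and annotated programs.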
Computation of normal logic programs by fibring neural networks
 In Proceedings of the Seventh International Workshop on First-Order Theorem Proving (FTP’05)
, 2005
Abstract

Cited by 2 (0 self)
In this paper, we develop a theory of the integration of fibring neural networks (a generalization of conventional neural networks) into model-theoretic semantics for logic programming. We present some ideas and results about the approximate computation by fibring neural networks of the semantic immediate-consequence operator TP and of a generalization of TP relative to a many-valued logic analogous to Kleene’s strong logic. We establish a minimal-fixed-point semantics for normal logic programs somewhat analogous to the least-fixed-point semantics for definite logic programs. We argue that the class of logic programs for which the approximation by fibring neural networks may be employed to compute minimal fixed points of these operators is the class of normal programs. Our theorems on their approximation for normal programs extend recent results on the approximation of these operators for definite programs by conventional neural networks. Keywords: logic programs, fibring neural networks, immediate consequence operators, least-fixed-point semantics, Kleene’s strong logic.