Results 1–6 of 6
Dimensions of neural-symbolic integration – a structural survey
 We Will Show Them: Essays in Honour of Dov Gabbay
Abstract

Cited by 25 (8 self)
Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to …
Integrating First-Order Logic Programs and Connectionist Systems – A Constructive Approach
 Proceedings of the IJCAI-05 Workshop on Neural-Symbolic Learning and Reasoning, NeSy’05
, 2005
Abstract

Cited by 11 (7 self)
Significant advances have recently been made concerning the integration of symbolic knowledge representation with artificial neural networks (also called connectionist systems). However, while the integration with propositional paradigms has resulted in applicable systems, the case of first-order knowledge representation has so far hardly proceeded beyond theoretical studies which prove the existence of connectionist systems for approximating first-order logic programs up to any chosen precision.
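The applicable propositional systems mentioned here typically follow the so-called core method, in which each program clause becomes one hidden unit of a threshold network. A minimal sketch of that idea, with an invented example program (the names, thresholds, and program are illustrative, not taken from the paper):

```python
# Sketch of a core-method translation: a propositional logic program is
# compiled into a three-layer threshold network whose input-output map
# coincides with the immediate consequence operator TP. Illustrative only.

# Program P:  a <- b, c.    a <- d.    b <- .
program = [("a", ["b", "c"]), ("a", ["d"]), ("b", [])]
atoms = ["a", "b", "c", "d"]

def tp_network(interpretation):
    """One pass through the network: input layer = current interpretation,
    hidden layer = one AND-unit per clause, output layer = OR over clauses."""
    inputs = {atom: 1 if atom in interpretation else 0 for atom in atoms}
    # Hidden layer: a clause unit fires iff every body atom is active
    # (threshold = number of body atoms; facts fire unconditionally).
    hidden = [all(inputs[b] for b in body) for (_, body) in program]
    # Output layer: a head atom fires iff some clause with that head fired.
    return {head for (head, _), fired in zip(program, hidden) if fired}

print(sorted(tp_network(set())))        # ['b'] -- only the fact fires
print(sorted(tp_network({"b", "c"})))   # ['a', 'b']
```

Running the network once per step thus simulates one application of TP; the first-order case discussed in the abstract is precisely where this direct clause-per-unit construction breaks down.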
Extracting reduced logic programs from artificial neural networks
 Applied Intelligence
, 2010
Abstract

Cited by 7 (2 self)
Artificial neural networks can be trained to perform excellently in many application areas. Whilst they can learn from raw data to solve sophisticated recognition and analysis problems, the acquired knowledge remains hidden within the network architecture and is not readily accessible for analysis or further use: trained networks are black boxes. Recent research efforts therefore investigate the possibility of extracting symbolic knowledge from trained networks in order to analyze, validate, and reuse the structural insights gained implicitly during the training process. In this paper, we study how knowledge in the form of propositional logic programs can be obtained in such a way that the programs are as simple as possible, where simplicity is understood in some clearly defined and meaningful way.
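A toy version of such an extraction, reduction included, can be sketched for a single trained threshold unit: enumerate its activating input patterns, read each off as a clause body, and then discard subsumed (non-minimal) bodies. The weights, threshold, and atom names below are invented for illustration and are not the paper's method:

```python
from itertools import product

# Toy decompositional rule extraction from one trained threshold unit.
# All numbers are invented; real networks need search, not enumeration.
inputs = ["b", "c", "d"]
weights = {"b": 0.9, "c": 0.8, "d": -0.2}
threshold = 1.5

def fires(assignment):
    """True iff the unit's weighted input sum reaches the threshold."""
    return sum(weights[x] for x in inputs if assignment[x]) >= threshold

# Collect the positive literals of every activating assignment.
clauses = []
for bits in product([False, True], repeat=len(inputs)):
    assignment = dict(zip(inputs, bits))
    if fires(assignment):
        clauses.append({x for x in inputs if assignment[x]})

# Reduction step: keep only minimal bodies, since a clause whose body
# contains another clause's body is redundant.
minimal = [c for c in clauses if not any(d < c for d in clauses)]
print([sorted(c) for c in minimal])  # [['b', 'c']]: unit acts like "head <- b, c"
```

Here the raw enumeration yields two activating patterns, but the reduction collapses them to the single simplest rule, which is the kind of simplicity criterion the abstract alludes to.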
Computation of normal logic programs by fibring neural networks
 In Proceedings of the Seventh International Workshop on First-Order Theorem Proving (FTP’05)
, 2005
Abstract

Cited by 2 (0 self)
In this paper, we develop a theory of the integration of fibring neural networks (a generalization of conventional neural networks) into model-theoretic semantics for logic programming. We present some ideas and results about the approximate computation by fibring neural networks of the semantic immediate consequence operator TP and a generalization T̃P of TP relative to a many-valued logic analogous to Kleene’s strong logic. We establish a minimal fixed-point semantics for normal logic programs somewhat analogous to the least fixed-point semantics for definite logic programs. We argue that the class of logic programs for which the approximation by fibring neural networks may be employed to compute minimal fixed points of TP and of T̃P is the class of normal programs. Our theorems on the approximation of TP and T̃P for normal programs extend recent results on approximation of these operators for definite programs by conventional neural networks. Key words: logic programs, fibring neural networks, immediate consequence operators, least fixed-point semantics, Kleene’s strong logic.
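For the definite case that these results extend, the least fixed point is reached simply by iterating TP from the empty interpretation until nothing changes. A minimal sketch with an invented example program (the program and names are illustrative, not from the paper):

```python
# Sketch: least fixed point of the immediate consequence operator TP for a
# definite propositional program, computed by iteration from the empty
# interpretation. The program is an invented example.
program = [("p", []), ("q", ["p"]), ("r", ["q", "p"]), ("s", ["t"])]

def tp(interpretation):
    """TP(I): the heads of all clauses whose bodies are true in I."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

def least_fixed_point():
    current = set()
    while True:
        nxt = tp(current)
        if nxt == current:      # TP(I) = I: least fixed point reached
            return current
        current = nxt

print(sorted(least_fixed_point()))  # ['p', 'q', 'r'] -- 's' is never derivable
```

For normal programs (bodies with negation) this naive iteration is no longer monotone, which is why the abstract moves to a many-valued logic and minimal, rather than least, fixed points.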
Matriculation Number: 3065439 Supervisors:
, 2006
Abstract
The integration of the paradigms of logic programs and connectionist systems is desirable because of their contrasting advantages and disadvantages. Algorithms for transforming logic programs into standard-architecture connectionist systems exist for the case of propositional logic, and for first-order logic programs a theoretical construction was developed in the project thesis [Wit05]. The aim of this thesis is to facilitate a real implementation by extending the results from the project or by devising other architectures and training methods, and to implement and evaluate the corresponding systems along with transformation and training algorithms.
• Extending the embedding and the approximation of the TP operator to multiple dimensions
• Lifting the sigmoidal constructions to multiple dimensions
• Should this fail, devising other constructions along with transformation and training methods
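The "embedding" in the first bullet refers to the standard device in this literature of mapping interpretations (sets of ground atoms) to real numbers so that a feed-forward network can process them. A one-dimensional sketch, under the assumption of a base-4 encoding and a toy level mapping (both are my illustrative choices, not the thesis's construction):

```python
# Sketch: embedding Herbrand interpretations into the reals via a level
# mapping. Base B >= 3 keeps distinct interpretations at distinct reals,
# since the tail sum B**-n / (B - 1) stays below B**-n.
B = 4

def embed(interpretation, level):
    """Map a set of ground atoms to a real in [0, 1).

    `level` assigns each atom a unique positive integer (a level mapping);
    each atom contributes one "digit" B**-level[atom] to the encoding.
    """
    return sum(B ** -level[atom] for atom in interpretation)

level = {"p": 1, "q": 2, "r": 3}   # toy level mapping
x = embed({"p", "r"}, level)
print(x)  # 0.265625, i.e. 4**-1 + 4**-3
```

A network approximating TP then works on these encodings: it maps the real representing I to the real representing TP(I), and the thesis bullet asks whether spreading the encoding over several dimensions eases that approximation.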
On the Integration of Connectionist and Logic-Based Systems
 MFCSIT 2004 Preliminary Version
Abstract
It is a long-standing and important problem to integrate logic-based systems and connectionist systems. In brief, this problem is concerned with how each of these two paradigms interacts with the other and how each complements the other: how one may give a logical interpretation of neural networks, how one may interpret connectionism within a logical framework, and how one may combine the advantages of each within a single integrated system. In this paper, the computation and approximate computation by neural networks of semantic operators TP determined by logic programs P are studied; the converse of this problem, namely, the extraction of logic programs from given neural networks, is also briefly considered. The foundations of the relevant notions employed in this problem are revisited and clarified, and new definitions are presented which avoid embedding spaces of interpretations in the real line. In particular, such definitions are formulated relating to (1) pointwise and uniform approximation of TP, and (2) approximation and computation of (least) fixed points of TP. There are related notions of approximation and convergence of neural networks, and related notions of approximation and convergence of programs, and these are discussed briefly, although the focus here is on (1) and (2). Necessary and sufficient conditions for uniform approximation of TP by neural networks are given in terms of continuity. Finally, the class of programs for which these methods can be employed to compute fixed points is greatly extended from the rather small class of acyclic programs to the (computationally adequate) class of all definite programs.