Results 1–10 of 20
Unification Neural Networks: Unification by Error-Correction Learning
Abstract
Cited by 4 (4 self)
We show that the conventional first-order algorithm of unification can be simulated by finite artificial neural networks with one layer of neurons. In these unification neural networks, the unification algorithm is performed by error-correction learning. Each time-step of adaptation of the network corresponds to a single iteration of the unification algorithm. We present this result together with a library of learning functions and examples fully formalised in the MATLAB Neural Network Toolbox.
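The conventional first-order unification algorithm the abstract refers to (Robinson-style unification with an occurs check) can be sketched in plain Python. The term representation below — uppercase strings as variables, `(functor, args)` pairs as compound terms — is an assumption for illustration, not the paper's neural encoding:

```python
# Robinson-style first-order unification.  Term representation (an
# illustrative assumption): a variable is a string starting with an
# uppercase letter; constants/compound terms are (functor, [args]) pairs
# or lowercase strings.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to the current representative term.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Occurs check: does variable v appear inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1])
    return False

def unify(s, t, subst=None):
    # Returns a most general unifier as a dict, or None on clash/occurs failure.
    subst = {} if subst is None else subst
    s, t = walk(s, subst), walk(t, subst)
    if s == t:
        return subst
    if is_var(s):
        return None if occurs(s, t, subst) else {**subst, s: t}
    if is_var(t):
        return unify(t, s, subst)
    if isinstance(s, tuple) and isinstance(t, tuple):
        (f, sargs), (g, targs) = s, t
        if f != g or len(sargs) != len(targs):
            return None
        for a, b in zip(sargs, targs):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

# unify f(X, g(a)) with f(b, g(Y))  ->  {X: b, Y: a}
mgu = unify(("f", ["X", ("g", ["a"])]), ("f", ["b", ("g", ["Y"])]))
```

In the paper's setting, each iteration of this loop corresponds to one adaptation step of the network; here the correspondence is only stated, not simulated.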
Neurons or symbols: why does "or" remain exclusive
In: Proceedings of ICNC’09, 2009
Abstract
Cited by 3 (3 self)
Neuro-Symbolic Integration is an interdisciplinary area that endeavours to unify neural networks and symbolic logic. The goal is to create a system that combines the advantages of neural networks (adaptive behaviour, robustness, tolerance of noise and probability) and symbolic logic (validity of computations, generality, higher-order reasoning). Several different approaches have been proposed in the past. However, the existing neuro-symbolic networks provide only limited coverage of the techniques used in computational logic. In this paper, we outline the areas of neuro-symbolism where computational logic has been implemented so far, and analyse the problematic areas. We show why certain concepts cannot be implemented using the existing neuro-symbolic networks, and propose four main improvements needed to build the neuro-symbolic networks of the future.
Parallel rewriting in neural networks
In: Proceedings of ICNC’09, 2009
Abstract
Cited by 2 (2 self)
Rewriting systems are used in various areas of computer science, especially in lambda-calculus, higher-order logics and functional programming. We show that unsupervised learning networks can implement parallel rewriting, and how this general correspondence can be refined to perform parallel term rewriting in neural networks for any given first-order term. We simulate these neural networks in the MATLAB Neural Network Toolbox and present the complete library of functions.
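Parallel term rewriting of the kind the abstract describes fires a rule at every redex in a single pass over the term. A minimal sketch, in which the `(functor, args)` term encoding and the sample rule `double_rule` are hypothetical illustrations rather than the paper's construction:

```python
# One parallel rewrite step over a first-order term: a single bottom-up
# pass that fires the rule at every redex it finds.  The term encoding
# and the sample rule below are illustrative assumptions.

def rewrite_parallel(term, rule):
    # rule(t) returns the reduct of t, or None if t is not a redex.
    if isinstance(term, tuple):
        f, args = term
        term = (f, [rewrite_parallel(a, rule) for a in args])
    reduct = rule(term)
    return term if reduct is None else reduct

def double_rule(t):
    # Hypothetical rewrite rule: double(x) -> plus(x, x).
    if isinstance(t, tuple) and t[0] == "double" and len(t[1]) == 1:
        x = t[1][0]
        return ("plus", [x, x])
    return None

# Both redexes are rewritten in the same step:
# f(double(a), double(b)) -> f(plus(a, a), plus(b, b))
out = rewrite_parallel(("f", [("double", ["a"]), ("double", ["b"])]), double_rule)
```

In the paper this step is realised by an unsupervised learning network rather than by an explicit tree traversal.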
Guiding Backprop by Inserting Rules
Abstract
Cited by 2 (1 self)
We report on an experiment in which we inserted symbolic rules into a neural network during the training process, in order to guide the learning and to help escape local minima. The rules are constructed by analysing the errors made by the network after training. This process can be repeated, allowing the network's performance to be improved again and again. We propose a general framework and provide a proof of concept of the usefulness of our approach.
The grand challenges and myths of neural-symbolic computation
In: Recurrent Neural Networks – Models, Capacities, and Applications, number 08041 in Dagstuhl Seminar Proceedings, Dagstuhl, Germany, 2008. Internationales Begegnungs- und Forschungszentrum für Informatik (IBFI), Schloss Dagstuhl
Abstract
Cited by 2 (0 self)
The construction of computational cognitive models integrating the connectionist and symbolic paradigms of artificial intelligence is a standing research issue in the field. The combination of logic-based inference and connectionist learning systems may lead to the construction of semantically sound computational cognitive models in artificial intelligence, computer science and cognitive science. Over the last decades, results regarding the computation and learning of classical reasoning within neural networks have been promising. Nonetheless, much remains to be done. Artificial intelligence, cognitive science and computer science are strongly based on several non-classical reasoning formalisms, methodologies and logics. In knowledge representation, distributed systems, hardware design, theorem proving, and systems specification and verification, classical and non-classical logics have had a great impact on theory and real-world applications. Several challenges for neural-symbolic computation are pointed out, in particular for classical and non-classical computation in connectionist systems. We also analyse myths about neural-symbolic computation and shed new light on them in view of recent research advances.
First-order logic learning in artificial neural networks
2010
Abstract
Cited by 2 (2 self)
Artificial Neural Networks have previously been applied in neuro-symbolic learning to learn ground logic program rules. However, there are few results on learning relations using neuro-symbolic learning. This paper presents the system PAN, which can learn relations. The inputs to PAN are one or more atoms, representing the conditions of a logic rule, and the output is the conclusion of the rule. The symbolic inputs may include functional terms of arbitrary depth and arity, and the output may include terms constructed from the input functors. Each symbolic input is encoded as an integer using an invertible encoding function, which is used in reverse to extract the output terms. The main advance of this system is a convention that allows the construction of Artificial Neural Networks able to learn rules with the same power of expression as first-order definite clauses. The system is tested on three examples and the results are discussed.
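The abstract's "invertible encoding function" maps a symbolic term to a single integer and back. The paper's actual encoding is not given here; a hypothetical round-trippable encoding built from the Cantor pairing function, over an assumed fixed signature, might look like:

```python
import math

# A decodable encoding of first-order terms as natural numbers, built
# from the Cantor pairing function.  Hypothetical illustration only: the
# fixed signature FUNCTORS and the list-shifting scheme are assumptions,
# not the PAN system's encoding.

FUNCTORS = ["a", "b", "f", "g"]  # assumed finite signature

def pair(a, b):
    # Cantor pairing: a bijection N x N -> N.
    return (a + b) * (a + b + 1) // 2 + b

def unpair(z):
    # Inverse of pair().
    w = (math.isqrt(8 * z + 1) - 1) // 2
    b = z - w * (w + 1) // 2
    return w - b, b

def encode(term):
    f, args = term  # term = (functor, [subterms]); constants have [] args
    return pair(FUNCTORS.index(f), encode_args(args))

def encode_args(args):
    # 0 encodes the empty list; shift by 1 to keep the encoding decodable.
    return 0 if not args else 1 + pair(encode(args[0]), encode_args(args[1:]))

def decode(n):
    i, rest = unpair(n)
    return (FUNCTORS[i], decode_args(rest))

def decode_args(n):
    if n == 0:
        return []
    h, t = unpair(n - 1)
    return [decode(h)] + decode_args(t)

# Round trip for t = f(a, g(b)): decode(encode(t)) == t
t = ("f", [("a", []), ("g", [("b", [])])])
```

Invertibility is what lets such a system run the map in reverse to extract output terms, as the abstract describes.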
The Role of Logic in AGI Systems: Towards a Lingua Franca for General Intelligence
Abstract
Cited by 1 (1 self)
Systems for general intelligence require a significant potential to model a variety of different cognitive abilities. It is often claimed that logic-based systems – although rather successful for modeling specialized tasks – lack the ability to be useful as a universal modeling framework, because particular logics can often be used only for special purposes (and do not cover the whole breadth of reasoning abilities) and show significant weaknesses on tasks like learning, pattern matching, or controlling behavior. This paper argues against this thesis by exemplifying that logic-based frameworks can be used to integrate different reasoning types and can function as a coding scheme for the integration of symbolic and subsymbolic approaches. In particular, AGI systems can be based on logic frameworks.
Using Inductive Types for Ensuring Correctness of Neuro-Symbolic Computations
Abstract
Cited by 1 (1 self)
We propose a new method for ensuring the correctness of neuro-symbolic computations. We consider important examples in which checking the data type of the network's inputs and outputs is crucial for ensuring that it performs correctly. We construct neuro-symbolic networks that can recognise the type of the input/output data; they are capable of recognising inductive and even dependent types.
An Extension of the Core Method for Continuous Values: Learning with Probabilities
Abstract
Cited by 1 (1 self)
This paper proposes an extension to the neuro-symbolic core method that is useful when observations are expressed by continuous values. Some theoretical results are presented regarding the learning process over these observations. An illustrative example is reported, demonstrating the problems of the original approach and showing how this extension can overcome them. Results of the extended approach on irregular continuous values (simulating probabilistic data) are similar to those of the original core method on clean symbolic data, and point to the validity of the approach.