## Labeling RAAM (1994)

Venue: Connection Science

Citations: 44 (10 self-citations)

### BibTeX

```bibtex
@TECHREPORT{Sperduti94labelingraam,
  author      = {Alessandro Sperduti},
  title       = {Labeling RAAM},
  institution = {Connection Science},
  year        = {1994}
}
```

### Abstract

In this paper we propose an extension of Pollack's Recursive Auto-Associative Memory (RAAM). This extension, the Labeling RAAM (LRAAM), is able to encode labeled graphs with cycles by representing pointers explicitly. A theoretical analysis of the constraints imposed on the weights by the learning task, under the hypothesis of perfect learning and linear output units, is presented. Cycles and confluent pointers turn out to be particularly effective in imposing constraints on the weights. Some technical problems encountered in the RAAM, such as the termination problem in the learning and decoding processes, are solved more naturally in the LRAAM framework. The representations developed for the pointers appear robust under recurrent decoding along a cycle. Data encoded in an LRAAM can be accessed by pointer as well as by content. Direct access by content can be achieved by transforming the encoder network of the LRAAM into a Bidirectional Associative Memory (BAM). Different access pro...
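As a rough illustration of the encode/decode cycle the abstract describes, here is a minimal numerical sketch of an LRAAM-style record (label plus two pointer fields) compressed to a pointer-sized code that can itself serve as a pointer inside a parent record. All dimensions, the sigmoid nonlinearity, the zero-vector "nil" convention, and the random (untrained) weights are assumptions for illustration, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

LABEL, PTR = 4, 8          # label and pointer (hidden) sizes, chosen arbitrarily
IN = LABEL + 2 * PTR       # input record: label plus left/right pointer fields

W_enc = rng.normal(scale=0.5, size=(PTR, IN))   # encoder: record -> pointer-sized code
W_dec = rng.normal(scale=0.5, size=(IN, PTR))   # decoder: code -> reconstructed record

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(label, left, right):
    """Compress (label, left pointer, right pointer) into one code."""
    return sigmoid(W_enc @ np.concatenate([label, left, right]))

def decode(code):
    """Expand a code back into (label, left pointer, right pointer)."""
    out = sigmoid(W_dec @ code)
    return out[:LABEL], out[LABEL:LABEL + PTR], out[LABEL + PTR:]

nil = np.zeros(PTR)                          # assumed empty-pointer convention
leaf = encode(rng.random(LABEL), nil, nil)   # a leaf's code becomes a pointer...
root = encode(rng.random(LABEL), leaf, nil)  # ...stored inside its parent's record

label, left, right = decode(root)            # recursive decoding starts at the root
assert left.shape == leaf.shape              # a decoded pointer is again a code
```

In a trained LRAAM the decoder would be the inverse of the encoder on the training set, so `left` would approximately reproduce `leaf` and decoding could recurse through the structure.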

### Citations

337 | Recursive distributed representations | Pollack | 1990

295 | Neural Networks and Fuzzy Systems | Kosko | 1992

> Citation context: ...be stored and used on demand. In an LRAAM it is possible to access a component of the encoded structure in other ways if the Encoder Network is transformed into a Bidirectional Associative Memory (BAM) [Kos92]. A BAM consists of two layers of processing elements, call them layers B_H and B_O, that are fully interconnected between layers with weight matrices M_h, from B_H to B_O, and M_o, from B_O to B_H. Th...
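The two-layer, bidirectionally connected memory the quoted context describes can be sketched with Kosko's standard outer-product construction. The bipolar patterns and the use of `M` and its transpose for the two directions are the textbook BAM recipe, not code from either paper:

```python
import numpy as np

# Two bipolar pattern pairs stored in a BAM via the Hebbian outer-product rule.
X = np.array([[1, -1, 1, -1, 1, -1],
              [1, 1, -1, -1, 1, 1]])
Y = np.array([[1, 1, -1, -1],
              [-1, 1, 1, -1]])
M = sum(np.outer(x, y) for x, y in zip(X, Y))   # weights layer-X -> layer-Y

def sgn(v):
    return np.where(v >= 0, 1, -1)

def recall(x, steps=5):
    """Bidirectional settling: x -> y via M.T, y -> x via M, until stable."""
    for _ in range(steps):
        y = sgn(M.T @ x)
        x = sgn(M @ y)
    return x, y

x, y = recall(X[0])
assert np.array_equal(y, Y[0])   # the first stored pair is a fixed point
```

Because the two X patterns here are orthogonal, recall converges in one pass; with correlated patterns the settling loop matters.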

251 | Tensor product variable binding and the representation of symbolic structures in connectionist networks | Smolensky | 1990

> Citation context: ...sentation, very fast and cheap inference engines would be built (see [Cha90]). A more formal characterization of representations of structures in connectionist systems has been developed by Smolensky [Smo90]. He reduces the problem of representing structured objects to three subproblems: decomposing the structures via roles, representing conjunctions, and representing variable/value bindings. The represe...
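The role/filler binding scheme mentioned in the context can be sketched numerically: each filler is bound to a role by an outer product, bindings are superimposed by addition, and a filler is retrieved by projecting onto its role vector. The orthonormal-roles assumption is what makes the retrieval exact here:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16

# Two orthonormal role vectors (e.g. "left child", "right child"); orthonormality
# is an assumption that makes unbinding exact rather than approximate.
roles = np.linalg.qr(rng.normal(size=(d, 2)))[0].T   # shape (2, d), rows orthonormal
fillers = rng.normal(size=(2, d))                    # arbitrary filler vectors

# Bind each filler to its role with an outer product and superimpose the bindings.
T = sum(np.outer(r, f) for r, f in zip(roles, fillers))

# Unbinding: project the tensor onto a role vector to retrieve its filler.
recovered = roles[0] @ T
assert np.allclose(recovered, fillers[0])
```

The price of the scheme is visible in the shapes: the bound structure `T` is d x d, so representation size grows with each level of tensor binding, which is one motivation for the compressed codes of RAAM/LRAAM.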

117 | Syntactic Transformations on Distributed Representations | Chalmers | 1990

> Citation context: ...arsely tuned. inference over trees might be performed by numerical transformation (i.e. neural networks) over their numerical representation, very fast and cheap inference engines would be built (see [Cha90]). A more formal characterization of representations of structures in connectionist systems has been developed by Smolensky [Smo90]. He reduces the problem of representing structured objects to three ...

117 | Mapping part-whole hierarchies into connectionist networks | Hinton | 1990

> Citation context: ...o one variable as a variable. Reduced representations of structured objects in connectionist systems are related by Hinton to the problem of mapping part-whole hierarchies into connectionist networks [Hin90]. The scheme he proposes considers two quite different methods for performing inference. Simple "intuitive" inferences can be performed by a single settling of a network without changing the way in wh...

108 | Holographic reduced representations | Plate | 2003

> Citation context: ... it devises a reduced representation of a set of functions relating the components of the graph instead of a reduced representation for the graph. Potentially, Holographic Reduced Representations [Pla91] are also able to encode cyclic graphs. The LRAAM model can also be viewed as an extension of the Hopfield networks philosophy. The basic idea is that, while Hopfield networks are able to exploit only mini...
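Plate's HRRs, mentioned in the context as another scheme that could encode cyclic graphs, bind vectors with circular convolution and unbind them (approximately) with circular correlation. A small FFT-based sketch, using Plate's usual i.i.d.-normal vector convention:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 1024
# HRR-style vectors: i.i.d. normal with variance 1/d (Plate's convention).
a, b = rng.normal(scale=1 / np.sqrt(d), size=(2, d))

def cconv(x, y):
    """Circular convolution (binding), computed via FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def ccorr(x, y):
    """Circular correlation (approximate unbinding)."""
    return np.real(np.fft.ifft(np.conj(np.fft.fft(x)) * np.fft.fft(y)))

trace = cconv(a, b)       # bind role a with filler b; result is again d-dimensional
b_hat = ccorr(a, trace)   # noisy reconstruction of b

# The reconstruction correlates strongly with the original filler.
cos = b_hat @ b / (np.linalg.norm(b_hat) * np.linalg.norm(b))
assert cos > 0.5
```

Unlike tensor-product binding, the bound trace stays d-dimensional, at the cost of noisy retrieval that must be cleaned up against a memory of known items.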

57 | Learning recursive distributed representations for holistic computation | Chrisman | 1991

48 | BoltzCONS: Dynamic Symbol Structures in a Connectionist Network. Technical Report CMU-CS-89-182 | Touretzky

> Citation context: ...proach to handle domains of structured tasks. The common background of their ideas is the search for a realization of the distal access ability and, consequently, of compositionality. BoltzCONS [Tou90] is an example of how a connectionist system (i.e. a Boltzmann machine) can handle symbolic structures. It is based on parallel associative retrieval and it differs from other connectionist systems becau...

20 | Neural Trees: A New Tool for Classification | Sirat, Nadal | 1990

> Citation context: ...oped as a way to synthesize a neural code, i.e., a set of weights which can be interpreted as a program for a particular recurrent neural network. An example of a neural code implementing a Neural Tree [SN90] has been given, and different aspects of the neural code are discussed in [Spe93, SS93a, SS93b]. Neural Trees (NTs) are decision trees where the decision at each node is taken by a perceptron. Usually,...
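The Neural Tree described in the context, a decision tree whose internal nodes are perceptrons, can be sketched directly. The two-dimensional task, the hand-picked weights, and the class names are purely illustrative:

```python
import numpy as np

# A tiny neural tree: each internal node routes an input left or right with a
# perceptron decision; leaves carry class labels. Weights are hand-picked here,
# whereas a real Neural Tree would learn them node by node.
class Node:
    def __init__(self, w=None, b=0.0, left=None, right=None, label=None):
        self.w, self.b, self.left, self.right, self.label = w, b, left, right, label

    def classify(self, x):
        if self.label is not None:                      # leaf: emit its class
            return self.label
        go_right = np.dot(self.w, x) + self.b > 0       # perceptron decision
        return (self.right if go_right else self.left).classify(x)

# Root splits on x0; its right child splits on x1 (hypothetical 2-D task).
tree = Node(w=np.array([1.0, 0.0]), b=-0.5,
            left=Node(label="A"),
            right=Node(w=np.array([0.0, 1.0]), b=-0.5,
                       left=Node(label="B"), right=Node(label="C")))

assert tree.classify(np.array([0.2, 0.9])) == "A"
assert tree.classify(np.array([0.9, 0.2])) == "B"
assert tree.classify(np.array([0.9, 0.9])) == "C"
```

The "neural code" idea referenced above is that the weights of such a tree can themselves be produced as the output of an LRAAM-style network.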

20 | Tree matching with recursive distributed representations | Stolcke, Wu | 1992

> Citation context: ... a classifier to discriminate between terminals and nonterminals; however, it would result in a significant computational overhead. A more elegant solution to this problem was developed by Stolcke and Wu [SW92]. They used one unit of the hidden layer to represent explicitly the distinction between terminals and nonterminals. In order to obtain this distinction, they injected an extra error in one of the uni...
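The "extra error on one hidden unit" idea in the context can be sketched as an additional gradient term that drives a designated flag unit toward 1 for terminals and 0 for nonterminals during training. The function name, the squared-error form of the penalty, and the unit index are illustrative assumptions, not details from Stolcke and Wu:

```python
import numpy as np

def flag_unit_grad(h, is_terminal, flag_unit=0, weight=1.0):
    """Extra gradient on the hidden vector h: pushes h[flag_unit] toward 1 for
    terminals and 0 for nonterminals, leaving all other hidden units untouched.
    This term would be added to the usual reconstruction-error gradient."""
    target = 1.0 if is_terminal else 0.0
    g = np.zeros_like(h)
    g[flag_unit] = weight * (h[flag_unit] - target)   # d/dh of 0.5*w*(h - t)^2
    return g

h = np.array([0.3, 0.7, 0.5])
g = flag_unit_grad(h, is_terminal=True)
# Only the flag unit receives the extra error signal (negative: pushes up to 1).
assert g[1] == 0 and g[2] == 0 and g[0] < 0
```

At decoding time the flag unit then gives a direct test for termination: recursion stops when the unit reads as a terminal.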

19 | Symbolic neuro-engineering for natural language processing: A multi-level research approach | Dyer | 1991

> Citation context: ...s for structures with cycles is what makes the difference between the LRAAM and the RAAM. The only system that we know of which is able to represent labeled graphs is the DUAL system proposed by Dyer [Dye91]. It is able to encode small labeled graphs representing relationships among entities. The idea is to have two networks, one responsible for the encoding of the relationships between one particular en...

17 | Inversion of neural networks by gradient descent | Kindermann, Linden | 1990

> Citation context: ...nother solution to this problem is to perform a gradient descent on the pointers while keeping the label fixed. This can be performed using the inversion technique proposed by Kindermann and Linden [KL90]. Using this method, representations for the pointers which satisfy the generalization test, without the constraint of being an asymptotically stable memory of the BAM, can be found. Obviously, this tech...
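Network inversion as the context describes it, gradient descent on part of the input while the rest is clamped, can be sketched for a single sigmoid layer. The network here is untrained random weights standing in for a trained net, and the split of the input into a clamped "label" and free "pointer" part follows the context's description:

```python
import numpy as np

rng = np.random.default_rng(3)

# A fixed, already-trained net is assumed; random weights stand in for it here.
W = rng.normal(size=(4, 6))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

target = np.array([0.9, 0.1, 0.9, 0.1])   # desired network output (illustrative)
x = rng.random(6)
label = x[:2].copy()                      # "label" components: clamped throughout

def error(x):
    return np.mean((sigmoid(W @ x) - target) ** 2)

err0 = error(x)
for _ in range(2000):
    y = sigmoid(W @ x)
    grad = W.T @ ((y - target) * y * (1 - y))   # dE/dx via the chain rule
    x -= 0.1 * grad                             # descend on the whole input...
    x[:2] = label                               # ...then re-clamp the label part
err = error(x)
assert err < err0   # the free "pointer" components absorbed the error
```

The weights never change; only the input is optimized, which is exactly why the resulting pointer representations need not be stable memories of the associated BAM.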

11 | Exploring the symbolic/subsymbolic continuum: A case study of RAAM | Blank, Meeden, et al. | 1992

> Citation context: ...h the recursive application of the decoding function can extract all the components of the structure? The approach we use in performing this exploration is very close to the one used by Blank et al. [BMM92], but adjusted to the LRAAM model and extended in scope. In the next section we will give a geometric interpretation of the binary LRAAM model with respect to the decoding process. 5.1 Geometric inter...

9 | A connectionist technique for on-line parsing | Reilly | 1992

6 | An Analog Feedback Associative Memory | Atiya, Abu-Mostafa | 1993

> Citation context: ...(Figure 31: Examples of encoding networks) ... can be viewed as an approximate method to build analog BAMs, which are actually analog Hopfield networks with a hidden layer [AAM93]. Most probably an LRAAM is something in between. In fact, while extending the representational capabilities of the RAAM model, it doesn't possess the same synthetic capabilities of the RAAM, since ...

6 | Optimization and Functional Reduced Descriptors in Neural Networks | Sperduti | 1993

5 | Four capacity models for coarse-coded symbol memories | Rosenfeld, Touretzky | 1988

> Citation context: ...ifies composite symbol structures dynamically, by representing them as activity patterns rather than as weights. It uses distributed representations of linked lists, loaded in coarse-coded memories [RT87], as basic representational elements, and LISP's car, cdr, and cons functions as basic operations. Links are implemented by associations. The associative retrieval capabilities of BoltzCONS support comp...

5 | An example of neural code: Neural trees implemented by LRAAMs | Sperduti, Starita | 1993

1 | Modular neural codes implementing neural trees | Sperduti, Starita | 1993