Results 1 – 3 of 3
Neural Associative Memories
Biological Cybernetics, 1993
Abstract

Cited by 90 (12 self)
Despite processing elements which are thousands of times faster than the neurons in the brain, modern computers still cannot match quite a few processing capabilities of the brain, many of which we even consider trivial (such as recognizing faces or voices, or following a conversation). A common principle behind those capabilities lies in the use of correlations between patterns in order to identify patterns which are similar. Looking at the brain as an information processing mechanism with (maybe among others) associative processing capabilities, together with the converse view of associative memories as certain types of artificial neural networks, has initiated a number of interesting results, ranging from theoretical considerations to insights into the functioning of neurons, as well as parallel hardware implementations of neural associative memories. This paper discusses three main aspects of neural associative memories: theoretical investigations, e.g. on the information storage...
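The correlation-based pattern retrieval the abstract alludes to can be made concrete with a minimal sketch of a binary (Willshaw-style) associative memory. This is not code from the paper; it assumes sparse binary patterns and clipped Hebbian learning, and all names and sizes are illustrative.

```python
import numpy as np

def store(patterns_x, patterns_y, n, m):
    """Clipped Hebbian learning: W[i, j] = 1 iff some stored pair
    has x[j] = 1 and y[i] = 1 simultaneously."""
    W = np.zeros((m, n), dtype=np.uint8)
    for x, y in zip(patterns_x, patterns_y):
        W |= np.outer(y, x).astype(np.uint8)
    return W

def retrieve(W, x, k):
    """Recall: correlate the cue with the weight matrix and
    threshold the dendritic sums at the cue's activity level k."""
    s = W @ x
    return (s >= k).astype(np.uint8)

# two sparse binary pattern pairs (n = m = 8, k = 2 active units each)
x1 = np.array([1, 1, 0, 0, 0, 0, 0, 0]); y1 = np.array([0, 0, 1, 1, 0, 0, 0, 0])
x2 = np.array([0, 0, 0, 0, 1, 1, 0, 0]); y2 = np.array([0, 0, 0, 0, 0, 0, 1, 1])
W = store([x1, x2], [y1, y2], n=8, m=8)
print(retrieve(W, x1, k=2))  # recovers y1 although only W was kept
```

The single binary matrix W holds both pairs superimposed; retrieval works because the cue correlates most strongly with the synapses laid down for its own associated output pattern.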
Massive Parallelism in Logic
Abstract
The capability of drawing new conclusions from available information is still a major challenge for computer systems. Whereas in more formal areas like automated theorem proving the ever-increasing computing power of conventional systems also leads to an extension of the range of problems which can be treated, many real-world reasoning problems involve uncertain, incomplete, and inconsistent information, possibly from various sources like rule and data bases, interaction with the user, or raw data from the real world. For this kind of application, inference mechanisms based on or combined with massive parallelism, in particular connectionist techniques, show some promise. This contribution concentrates on three aspects of massive parallelism and inference: first, the potential of parallelism in logic is investigated; then a massively parallel inference system based on connectionist techniques is presented; and finally the combination of neural-network-based components with symbolic ones, as pursued in the project WINA, is described.

1 Parallelism in Logic

The exploitation of parallelism in inference systems mostly concentrates on AND/OR-parallelism with Prolog as input language, resolution as the underlying calculus, and either implicit parallelism or a small set of built-in constructs for explicit parallelism. Whereas the potential degree of parallelism in these approaches seems quite high, its practical exploitation is rather limited, mainly due to the complicated management of the execution environment. The potential of parallelism, and in particular massive parallelism, in logic is discussed here with respect to different parts of a logical formula, namely the whole formula, its clauses, the literals of the clauses, the terms, and the symbols (Kurfeß, 1991b).
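The OR-parallelism mentioned above, trying the alternative clauses for a goal concurrently, can be sketched with a toy two-clause predicate. This is a generic illustration, not the system described in the paper; the predicate, facts, and thread-based evaluation are all assumptions for the example.

```python
from concurrent.futures import ThreadPoolExecutor

# toy knowledge base of ground facts
facts = {("parent", "anna", "ben"), ("parent", "ben", "cara")}

def clause_direct(x, z):
    # ancestor(X, Z) :- parent(X, Z).
    return ("parent", x, z) in facts

def clause_transitive(x, z):
    # ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z).
    return any(p == "parent" and a == x and ancestor(y, z)
               for (p, a, y) in facts)

def ancestor(x, z):
    # OR-parallelism: the two alternative clauses for the goal
    # ancestor(x, z) are evaluated concurrently; the goal succeeds
    # if any alternative does.
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda clause: clause(x, z),
                           (clause_direct, clause_transitive))
        return any(results)

print(ancestor("anna", "cara"))  # True, via the transitive clause
```

In a real system the limiting factor is exactly what the abstract points out: managing the execution environment (bindings, backtracking state) across the concurrently explored alternatives.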
Formula Level

On the formula level, parallelism occurs mainly in two varieties: one is to work on different, relatively large and independent pieces of the formula simultaneously, the pieces being either separate formulae,
Memory capacities for synaptic . . .
2010
Abstract
Neural associative networks with plastic synapses have been proposed as computational models of brain functions and also for applications such as pattern recognition and information retrieval. To guide biological models and optimize technical applications, several definitions of memory capacity have been used to measure the efficiency of associative memory. Here we explain why the currently used performance measures bias the comparison between models and cannot serve as a theoretical benchmark. We introduce fair measures for information-theoretic capacity in associative memory that also provide a theoretical benchmark. In neural networks, two types of manipulating synapses can be discerned: synaptic plasticity, the change in strength of existing synapses, and structural plasticity, the creation and pruning of synapses. One of the new types of memory capacity we introduce permits quantifying how structural plasticity can increase network efficiency by compressing the network structure, for example, by pruning unused synapses. Specifically,
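The contrast between capacity measures the abstract draws can be illustrated with a toy calculation: bits stored per synapse, counted once over all potential synapses and once over only the synapses that survive pruning. This is a generic sketch, not the paper's definitions; the network sizes, the clipped Hebbian storage rule, and the assumption of perfect recall are all illustrative simplifications.

```python
import math
import random

random.seed(0)
n = m = 64      # pre- and postsynaptic layer sizes (toy values)
k = 4           # active units per binary pattern
M = 30          # number of stored pattern pairs

# clipped Hebbian storage in a binary weight matrix
W = [[0] * n for _ in range(m)]
for _ in range(M):
    x = random.sample(range(n), k)
    y = random.sample(range(m), k)
    for i in y:
        for j in x:
            W[i][j] = 1

# information stored, assuming each k-of-m output set is recalled perfectly:
# log2 of the number of possible k-subsets, per pattern
bits_per_pattern = math.log2(math.comb(m, k))
stored_bits = M * bits_per_pattern

synapses_total = n * m              # classical denominator: all synapses
synapses_used = sum(map(sum, W))    # after pruning the silent synapses

print(f"capacity over all synapses:   {stored_bits / synapses_total:.3f} bits/synapse")
print(f"capacity over pruned network: {stored_bits / synapses_used:.3f} bits/synapse")
```

Because pruning removes only synapses that carry no stored association, the second ratio is never smaller than the first, which is the intuition behind measuring how structural plasticity compresses the network.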