Results 1–10 of 12
Optimality Theory: Constraint interaction in Generative Grammar
, 1993
Abstract

Cited by 1456 (36 self)
ROA Version, 8/2002. Essentially identical to the Tech Report, with new pagination (but the same footnote and example numbering); correction of typos, oversights and outright errors; improved typography; and occasional small-scale clarificatory rewordings. Citation should include reference to this version.
A learning algorithm for Boltzmann machines
 Cognitive Science
, 1985
Abstract

Cited by 432 (14 self)
The computational power of massively parallel networks of simple processing elements resides in the communication bandwidth provided by the hardware connections between elements. These connections can allow a significant fraction of the knowledge of the system to be applied to an instance of a problem in a very short time. One kind of computation for which massively parallel networks appear to be well suited is large constraint satisfaction searches, but to use the connections efficiently two conditions must be met: First, a search technique that is suitable for parallel networks must be found. Second, there must be some way of choosing internal representations which allow the preexisting hardware connections to be used efficiently for encoding the constraints in the domain being searched. We describe a general parallel search method, based on statistical mechanics, and we show how it leads to a general learning rule for modifying the connection strengths so as to incorporate knowledge about a task domain in an efficient way. We describe some simple examples in which the learning algorithm creates internal representations that are demonstrably the most efficient way of using the preexisting connectivity structure.
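The learning rule described in this abstract adjusts each connection strength in proportion to the difference between a unit-pair co-occurrence statistic measured with the data clamped and the same statistic in the free-running network. The sketch below illustrates that rule on a toy all-visible network, computing the model statistics by exact enumeration rather than by the simulated-annealing sampling of the original paper; all function and variable names here are our own, not the paper's.

```python
import itertools
import numpy as np

def boltzmann_probs(W, b):
    """Exact Boltzmann distribution over all binary states of a tiny network.
    Enumeration is feasible only for a handful of units; the original paper
    instead estimates these statistics by stochastic simulation."""
    n = len(b)
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)
    # Energy of each state: E = -1/2 s'Ws - b's (W symmetric, zero diagonal).
    energies = -0.5 * np.einsum("si,ij,sj->s", states, W, states) - states @ b
    p = np.exp(-energies)
    return states, p / p.sum()

def learning_step(W, b, data, lr=0.1):
    """One step of the Boltzmann learning rule (visible units only):
    dW_ij is proportional to <s_i s_j>_data - <s_i s_j>_model."""
    states, p = boltzmann_probs(W, b)
    model_corr = np.einsum("s,si,sj->ij", p, states, states)
    data_corr = data.T @ data / len(data)
    W = W + lr * (data_corr - model_corr)
    np.fill_diagonal(W, 0.0)                       # no self-connections
    b = b + lr * (data.mean(axis=0) - p @ states)  # match unit activation rates
    return W, b

# Units 0 and 1 co-occur in the data; unit 2 is anti-correlated with them.
data = np.array([[1., 1., 0.], [1., 1., 0.], [0., 0., 1.]])
W, b = np.zeros((3, 3)), np.zeros(3)
for _ in range(50):
    W, b = learning_step(W, b, data)
```

After training, the weight between the co-occurring units 0 and 1 is positive and the weight between units 0 and 2 is negative, so the connection strengths have absorbed the pairwise constraints in the data.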
Learning symmetry groups with hidden units: Beyond the perceptron
 Physica
, 1986
Abstract

Cited by 17 (5 self)
Learning to recognize mirror, rotational and translational symmetries is a difficult problem for massively parallel network models. These symmetries cannot be learned by first-order perceptrons or Hopfield networks, which have no means for incorporating additional adaptive units that are hidden from the input and output layers. We demonstrate that the Boltzmann learning algorithm is capable of finding sets of weights which turn hidden units into useful higher-order feature detectors capable of solving symmetry problems.
Grammar-based Connectionist Approaches to Language
, 1994
Abstract

Cited by 16 (0 self)
This article describes an approach to connectionist language research which relies on the development of grammar formalisms rather than computer models. From formulations of the fundamental theoretical commitments of connectionism and of generative grammar, it is argued that these two paradigms are mutually compatible. Integrating the basic assumptions of the paradigms results in formal theories of grammar that centrally incorporate a certain degree of connectionist computation. Two such grammar formalisms, Harmonic Grammar (Legendre, Miyata and Smolensky, 1990ab) and Optimality Theory (Prince and Smolensky, 1991, 1993), are briefly introduced to illustrate grammar-based approaches to connectionist language research. The strengths and weaknesses of grammar-based research and more traditional model-based research are argued to be complementary, suggesting a significant role for both strategies in the spectrum of connectionist language research. This article is addressed to basic ...
Integrating Connectionist and Symbolic Computation for the Theory of Language
, 1992
Abstract

Cited by 5 (0 self)
... or so, neural or connectionist networks have produced an explosion of results and a great deal of interest. Yet this approach to the computational modeling of intelligent cognitive systems faces fundamental problems. The research proposed here has a major connectionist component, but it distinguishes itself from the bulk of connectionist research in the following respects: (a) It is strongly guided by symbolic computation, but it is not a "hybrid" in the usual sense: the connectionist and symbolic computation involved are not two components of a composite system, but two descriptions of a single system. We call this "integrated connectionist/symbolic computation." (b) The emphasis is on higher-level cognitive processes, with a main focus on language, which provides a particularly challenging testbed, since symbolic computation is so central to existing theory. (c) The main emphasis in the language research is on formal grammars for natural language, with supporting research on the gram ...
Learning and Representation in Connectionist Models
 Perspectives in Memory Research, edited by Michael S. Gazzaniga
, 1988
Optimization and Quantization in Gradient Symbol Systems: A Framework for Integrating the Continuous and the Discrete in Cognition
Abstract

Cited by 1 (1 self)
Mental representations have continuous as well as discrete, combinatorial properties. For example, while predominantly discrete, phonological representations also vary continuously; this is reflected by gradient effects in instrumental studies of speech production. Can an integrated theoretical framework address both aspects of structure? The framework we introduce here, Gradient Symbol Processing, characterizes the emergence of grammatical macrostructure from the Parallel Distributed Processing microstructure (McClelland & Rumelhart, 1986) of language processing. The mental representations that emerge, Distributed Symbol Systems, have both combinatorial and gradient structure. They are processed through Subsymbolic Optimization-Quantization, in which an optimization process favoring representations that satisfy well-formedness constraints operates in parallel with a distributed quantization process favoring discrete symbolic structures. We apply a particular instantiation of this framework, λ-Diffusion Theory, to ...
Mapping Part-Whole Hierarchies into Connectionist Networks
 Artificial Intelligence, 47
Abstract
Three different ways of mapping part-whole hierarchies into connectionist networks are described. The simplest scheme uses a fixed mapping and is inadequate for most tasks because it fails to share units and connections between different pieces of the part-whole hierarchy. Two alternative schemes are described, each of which involves a different method of time-sharing connections and units. The scheme we finally arrive at suggests that neural networks have two quite different methods for performing inference. Simple "intuitive" inferences can be performed by a single settling of a network without changing the way in which the world is mapped into the network. More complex "rational" inferences involve a sequence of such settlings with mapping changes after each settling.
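The "single settling" contrasted with sequential inference in this abstract can be illustrated with a minimal Hopfield-style relaxation: the network's units are updated under a fixed mapping until the state stops changing. This is only a generic sketch of that notion, not the mapping schemes proposed in the paper, and the names below are our own.

```python
import numpy as np

def settle(W, state, max_sweeps=20):
    """A single 'settling': asynchronously update binary (+1/-1) units until
    the state is stable. The mapping of the world onto units stays fixed
    throughout, as in the 'intuitive' inferences described above."""
    state = state.copy()
    for _ in range(max_sweeps):
        prev = state.copy()
        for i in range(len(state)):            # asynchronous unit updates
            state[i] = 1 if W[i] @ state > 0 else -1
        if np.array_equal(state, prev):        # stable point reached
            break
    return state

# Store one pattern with a Hebbian outer product, corrupt it, and settle.
pattern = np.array([1, -1, 1, -1])
W = np.outer(pattern, pattern)
np.fill_diagonal(W, 0)                          # no self-connections
corrupted = pattern.copy()
corrupted[0] = -1                               # flip one unit
recovered = settle(W, corrupted)
```

A single settling restores the stored pattern from the corrupted input; a "rational" inference in the paper's sense would interleave such settlings with changes to how the world is mapped onto the units.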