Results 11–20 of 14,876
Physical symbol systems
 Cogn. Sci
"... to review the basis of common understanding between the various disciplines. In my estimate, the most fundamental contribution so far of artificial intelligence and computer science to the joint enterprise of cognitive science has been the notion of a physical symbol system, i.e., the concept of D b ..."
Abstract

Cited by 267 (3 self)
 Add to MetaCart
to review the basis of common understanding between the various disciplines. In my estimate, the most fundamental contribution so far of artificial intelligence and computer science to the joint enterprise of cognitive science has been the notion of a physical symbol system, i.e., the concept of D broad class of systems capable of having and manipulating symbois, yet realizable in the physical universe. The notion of symbol so defined is internal to this concept, so it becomes a hypothesis that this notion of symbols includes the symbols that we humans use every day of our lives. In this paper we attempt systematically, but plainly, to lay out the nature of physical symbol systems. Such IJ review is in ways familiar, but not thereby useless. Restatement of fundamentals is an important exercise. 1.
Speaker recognition: A tutorial
"... A tutorial on the design and development of automatic speakerrecognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or to verify a person’s claimed id ..."
Abstract

Cited by 260 (2 self)
 Add to MetaCart
A tutorial on the design and development of automatic speakerrecognition systems is presented. Automatic speaker recognition is the use of a machine to recognize a person from a spoken phrase. These systems can operate in two modes: to identify a particular person or to verify a person’s claimed identity. Speech processing and the basic components of automatic speakerrecognition systems are shown and design tradeoffs are discussed. Then, a new automatic speakerrecognition system is given. This recognizer performs with 98.9 % correct identification. Last, the performances of various systems are compared.
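As a rough, hypothetical illustration of the verification mode described above (not the tutorial's actual recognizer): a claimed identity can be accepted or rejected by scoring the similarity between an enrolled feature vector and features from the test utterance against a threshold. The vectors and threshold here are invented.

```python
# Toy speaker-verification decision rule (illustrative only): accept the
# claimed identity if the cosine similarity between the enrolled model
# vector and the test-utterance feature vector exceeds a threshold.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def verify(enrolled, test, threshold=0.9):
    """Return True when the test features match the enrolled speaker."""
    return cosine(enrolled, test) >= threshold

enrolled = [1.0, 0.8, 0.3]   # invented enrolled feature vector
same = [0.9, 0.85, 0.25]     # close to the enrolled model -> accept
other = [0.1, 0.9, -0.5]     # far from the enrolled model -> reject

assert verify(enrolled, same)
assert not verify(enrolled, other)
```

Real systems compare sequences of short-time spectral features rather than single vectors, but the accept/reject thresholding step is the same shape.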
Kleenex: Compiling Nondeterministic Transducers to Deterministic Streaming Transducers
The Complexity of Compositions of Deterministic Tree Transducers
 Proceedings of the 22nd Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2002), volume 2556 of LNCS
, 2002
"... Macro tree transducers can simulate most models of tree transducers (e.g., topdown and bottomup tree transducers, attribute grammars, and pebble tree transducers which, in turn, can simulate all known models of XML transformers). The string languages generated by compositions of macro tree transdu ..."
Abstract

Cited by 13 (6 self)
 Add to MetaCart
transducers (obtained by reading the leaves of the output trees) form a large class which contains, e.g., the IO hierarchy and the EDT0L control hierarchy. Consider an arbitrary composition of (deterministic) macro tree transducers. How dicffiult is it, for a given input tree s, to compute the translation
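For readers unfamiliar with tree transducers, here is a minimal sketch of a top-down tree transducer, one of the simpler models that macro tree transducers subsume; the rewrite rules below are invented for illustration and do not come from the paper.

```python
# Toy top-down tree transducer: trees are either leaf strings or
# (label, left, right) tuples. The single invented rule rewrites every
# "and" node to "or" while swapping its children; everything else is
# copied unchanged.
def transduce(tree):
    if isinstance(tree, str):            # leaf symbol: copy
        return tree
    label, left, right = tree
    if label == "and":                   # rule: and(x, y) -> or(T(y), T(x))
        return ("or", transduce(right), transduce(left))
    return (label, transduce(left), transduce(right))

t = ("and", "a", ("and", "b", "c"))
assert transduce(t) == ("or", ("or", "c", "b"), "a")
```

Macro tree transducers extend this picture with accumulating parameters in the states, which is what gives their compositions the expressive power the abstract describes.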
A Theory of Networks for Approximation and Learning
 Laboratory, Massachusetts Institute of Technology
, 1989
"... Learning an inputoutput mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function, that is solving the problem of hypersurface reconstruction. From this point of view, t ..."
Abstract

Cited by 237 (25 self)
 Add to MetaCart
Learning an inputoutput mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function, that is solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. Wedevelop a theoretical framework for approximation based on regularization techniques that leads to a class of threelayer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the wellknown Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods suchasParzen windows and potential functions and to several neural network algorithms, suchas Kanerva's associative memory,backpropagation and Kohonen's topology preserving map. They also haveaninteresting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
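The strict-interpolation use of radial basis functions that the abstract contrasts with GRBF can be sketched in a few lines; this is a generic illustration, not the paper's GRBF formulation, and the Gaussian width is an arbitrary choice.

```python
# Generic radial-basis-function strict interpolation: one Gaussian basis
# function is centred at every training point and the weights solve a
# linear system, so the interpolant reproduces the training data exactly.
import numpy as np

def rbf_fit(x, y, width=0.2):
    """Solve G w = y with G[i, j] = exp(-((x_i - x_j) / width) ** 2)."""
    g = np.exp(-((x[:, None] - x[None, :]) / width) ** 2)
    return np.linalg.solve(g, y)

def rbf_predict(centers, w, x_new, width=0.2):
    g = np.exp(-((x_new[:, None] - centers[None, :]) / width) ** 2)
    return g @ w

x = np.linspace(0.0, 1.0, 9)
y = np.sin(2.0 * np.pi * x)
w = rbf_fit(x, y)
# Strict interpolation: predictions at the centres match the targets.
assert np.allclose(rbf_predict(x, w, x), y, atol=1e-6)
```

GRBF networks relax exactly this setup: fewer centres than data points, with centres and weights chosen by regularized fitting rather than exact interpolation.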
Connectionist and Diffusion Models of Reaction Time
, 1997
"... Two connectionist frameworks, GRAIN (McClelland, 1993) and BSB (Anderson, 1991), and the diffusion model (Ratcliff, 1978) were evaluated using data from a signal detection task. Subjects were asked to choose one of two possible responses to a stimulus and were provided feedback about whether the cho ..."
Abstract

Cited by 226 (53 self)
 Add to MetaCart
Two connectionist frameworks, GRAIN (McClelland, 1993) and BSB (Anderson, 1991), and the diffusion model (Ratcliff, 1978) were evaluated using data from a signal detection task. Subjects were asked to choose one of two possible responses to a stimulus and were provided feedback about whether the choice was correct. The dependent variables included response probabilities, reaction times for correct and error responses, and reaction time distributions, and the independent variables were stimulus value, stimulus probability, and lag from an abrupt switch in stimulus probability. The diffusion model accounted for all aspects of the asymptotic data, including error reaction times, which had previously been a problem. The connectionist models accounted for many aspects of the data adequately, but each failed to a greater or lesser degree in important ways except for one model very similar to the diffusion model. The connectionist learning mechanisms were unable to account for initial learning or abrupt changes in stimulus probability. The results provide an advance in the development of the diffusion model and show that the long tradition of reaction time research and theory is a fertile domain for development and testing of connectionist assumptions about how decisions are generated over time.
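A minimal sketch of the kind of two-boundary diffusion process the abstract refers to, assuming a discrete-time random-walk approximation with arbitrary parameter values (not the fitted parameters from the paper):

```python
# Discrete-time random-walk approximation of a two-boundary diffusion
# process: evidence accumulates with drift v and Gaussian noise until it
# crosses the upper (+a) or lower (-a) response boundary. Parameter
# values are arbitrary, chosen only for the demonstration.
import random

def diffuse(rng, v=0.2, a=1.0, s=1.0, dt=0.005):
    """Return (choice, decision_time); choice 1 means the upper boundary."""
    x, t = 0.0, 0.0
    while -a < x < a:
        x += v * dt + s * rng.gauss(0.0, dt ** 0.5)
        t += dt
    return (1 if x >= a else 0), t

rng = random.Random(0)
trials = [diffuse(rng) for _ in range(1000)]
p_upper = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
# A positive drift makes the upper boundary the more likely response.
assert p_upper > 0.5
```

Simulating many trials like this yields choice probabilities and full reaction-time distributions for both responses, which is what lets the model be fit against the dependent variables listed above.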
Machine Translation with Inferred Stochastic FiniteState Transducers
 COMPUTATIONAL LINGUISTICS
, 2004
"... Finitestate transducers are models that are being used in different areas of pattern recognition and computational linguistics. One of these areas is machine translation, in which the approaches that are based on building models automatically from training examples are becoming more and more attrac ..."
Abstract

Cited by 79 (17 self)
 Add to MetaCart
Finitestate transducers are models that are being used in different areas of pattern recognition and computational linguistics. One of these areas is machine translation, in which the approaches that are based on building models automatically from training examples are becoming more and more
Learning Subsequential Transducers for Pattern Recognition Interpretation Tasks
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1993
"... AbstractThe “interpretation ” framework in pattern recognition (PR) arises in the many cases in which the more classical paradigm of “classification ” is not properly applicable generally because the number of classes is rather large or simply because the concept of “class ” does not hold. A very g ..."
Abstract

Cited by 115 (17 self)
 Add to MetaCart
and compact transducers for the corresponding tasks. Index TermsFormal languages, inductive inference, learning, rational transducers, subsequential functions, syntactic pattern recognition. I.
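A subsequential transducer, as referenced in the index terms, is a deterministic finite-state machine that emits an output string on each transition plus a final output when the input ends. The toy machine below is invented for illustration; it is not a transducer learned by the paper's method.

```python
# Toy subsequential transducer over {a, b}*: delta gives the next state,
# sigma the output string emitted on each transition, and final the
# string appended when the input is exhausted.
def run_subsequential(delta, sigma, q0, final, word):
    state, out = q0, []
    for ch in word:
        out.append(sigma[(state, ch)])
        state = delta[(state, ch)]
    out.append(final[state])
    return "".join(out)

# Invented example: copy 'a' as 'x', 'b' as 'yy', and flag at the end
# whether the input had even or odd length (tracked by the state).
delta = {(0, "a"): 1, (0, "b"): 1, (1, "a"): 0, (1, "b"): 0}
sigma = {(0, "a"): "x", (0, "b"): "yy", (1, "a"): "x", (1, "b"): "yy"}
final = {0: "|even", 1: "|odd"}

assert run_subsequential(delta, sigma, 0, final, "ab") == "xyy|even"
assert run_subsequential(delta, sigma, 0, final, "aba") == "xyyx|odd"
```

Determinism plus the final-output function is what makes subsequential functions learnable from input-output pairs, which is the setting the paper studies.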
Transducing Markov Sequences (Extended Abstract)
Abstract
A Markov sequence is a basic statistical model representing uncertain sequential data, and it is used within a plethora of applications, including speech recognition, image processing, computational biology, radio-frequency identification (RFID), and information extraction. The problem of querying a Markov sequence is studied under the conventional semantics of querying a probabilistic database, where queries are formulated as finite-state transducers. Specifically, the complexity of two main problems is analyzed. The first problem is that of computing the confidence (probability ...
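As background, the probability that a first-order Markov sequence takes a given value is computed by chaining transition probabilities; the states and probabilities below are invented for illustration and are unrelated to the paper's examples.

```python
# Probability of a string under a first-order Markov model: multiply the
# initial probability of the first symbol by the transition probability
# of each adjacent pair. States and probabilities are invented.
def sequence_probability(initial, transition, word):
    """initial: symbol -> P(first symbol); transition: (prev, cur) -> P."""
    if not word:
        return 1.0
    p = initial[word[0]]
    for prev, cur in zip(word, word[1:]):
        p *= transition[(prev, cur)]
    return p

initial = {"r": 0.5, "s": 0.5}                    # rainy / sunny
transition = {("r", "r"): 0.7, ("r", "s"): 0.3,
              ("s", "s"): 0.6, ("s", "r"): 0.4}

p = sequence_probability(initial, transition, "rrs")
assert abs(p - 0.5 * 0.7 * 0.3) < 1e-12
```

Confidence computation in the paper's setting generalizes this: instead of one fixed string, it sums such products over all sequences accepted (or produced) by a finite-state transducer.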
Scaling up MIMO: Opportunities and challenges with very large arrays
 IEEE Signal Process. Mag
, 2013
"... N.B.: When citing this work, cite the original article. ©2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to ..."
Abstract

Cited by 214 (26 self)
 Add to MetaCart
N.B.: When citing this work, cite the original article. ©2011 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.