Results 1 - 10 of 30
A New Approach to Manipulator Control: The Cerebellar Model Articulation Controller (CMAC)
 Trans. ASME, Series G, Journal of Dynamic Systems, Measurement and Control
, 1975
Abstract

Cited by 261 (3 self)
(CMAC) [1, 2] is a neural network that models the structure and function of the part of the brain known as the cerebellum. The cerebellum provides precise coordination of motor control for such body parts as the eyes, arms, fingers, legs, and wings. It stores and retrieves information required to control thousands of muscles in producing coordinated behavior as a function of time. CMAC was designed to provide this kind of motor control for robotic manipulators. CMAC is a kind of memory, or table lookup mechanism, that is capable of learning motor behavior. It exhibits properties such as generalization, learning interference, discrimination, and forgetting that are characteristic of motor learning in biological creatures. In a biological motor system, the drive signal for each
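The table-lookup-with-generalization behavior the abstract describes can be sketched in a few lines. The sketch below is a minimal 1-D CMAC in plain Python with illustrative parameter choices of my own (number of tilings, cells, learning rate); it is not Albus's original formulation, but it shows the core mechanism: several offset tilings, each input activating one weight per tiling, trained by an LMS update.

```python
import math
import random

class CMAC1D:
    """Minimal 1-D CMAC: several offset tilings over an input range.
    Each input activates one cell per tiling; the output is the sum of
    the activated weights, trained by a simple LMS update."""
    def __init__(self, n_tilings=8, n_cells=32, lo=0.0, hi=1.0, lr=0.2):
        self.n_tilings, self.n_cells = n_tilings, n_cells
        self.lo, self.hi, self.lr = lo, hi, lr
        self.w = [[0.0] * (n_cells + 1) for _ in range(n_tilings)]

    def _active(self, x):
        span = (self.hi - self.lo) / self.n_cells
        for t in range(self.n_tilings):
            offset = span * t / self.n_tilings  # each tiling is shifted
            idx = int((x - self.lo + offset) / span)
            yield t, min(idx, self.n_cells)

    def predict(self, x):
        return sum(self.w[t][i] for t, i in self._active(x))

    def train(self, x, target):
        err = target - self.predict(x)
        step = self.lr * err / self.n_tilings
        for t, i in self._active(x):
            self.w[t][i] += step

# Learn one period of a sine as a stand-in "motor" target function.
random.seed(0)
net = CMAC1D()
for _ in range(2000):
    x = random.random()
    net.train(x, math.sin(2 * math.pi * x))
```

Because neighboring inputs share most of their active cells, training at one point also adjusts the response nearby, which is the generalization (and learning-interference) property the abstract mentions.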
A Simple Algorithm for Nearest Neighbor Search in High Dimensions
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 1997
Abstract

Cited by 126 (1 self)
The problem of finding the closest point in high-dimensional spaces is common in pattern recognition. Unfortunately, the complexity of most existing search algorithms, such as the k-d tree and R-tree, grows exponentially with dimension, making them impractical for dimensionality above 15. In nearly all applications, the closest point is of interest only if it lies within a user-specified distance ε. We present a simple and practical algorithm to efficiently search for the nearest neighbor within Euclidean distance ε. The use of projection search combined with a novel data structure dramatically improves performance in high dimensions. A complexity analysis is presented which helps to automatically determine ε in structured problems. A comprehensive set of benchmarks clearly shows the superiority of the proposed algorithm for a variety of structured and unstructured search problems. Object recognition is demonstrated as an example application. The simplicity of the algorithm makes it possible to construct an inexpensive hardware search engine which can be 100 times faster than its software equivalent. A C++ implementation of our algorithm is available upon request to search@cs.columbia.edu/CAVE/.
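The core idea of projection search is that only points whose projection onto some axis lies within ε of the query's projection can be within Euclidean distance ε. A minimal single-projection sketch (my own simplification; the paper's data structure is more elaborate and uses multiple projections):

```python
import bisect
import math

def build_index(points):
    """Sort points on their first coordinate (one projection axis)."""
    return sorted(points, key=lambda p: p[0])

def nn_within(index, q, eps):
    """Return the nearest neighbor of q within Euclidean distance eps,
    or None. Only points whose first coordinate lies in
    [q[0]-eps, q[0]+eps] can qualify, so we scan just that slab."""
    keys = [p[0] for p in index]  # could be precomputed alongside index
    lo = bisect.bisect_left(keys, q[0] - eps)
    hi = bisect.bisect_right(keys, q[0] + eps)
    best, best_d = None, eps
    for p in index[lo:hi]:
        d = math.dist(p, q)
        if d <= best_d:
            best, best_d = p, d
    return best
```

The slab bounds come from binary search, so for a small ε only a thin slice of the point set is examined regardless of dimension.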
The Ring of k-Regular Sequences
, 1992
Abstract

Cited by 39 (7 self)
The automatic sequence is the central concept at the intersection of formal language theory and number theory. It was introduced by Cobham, and has been extensively studied by Christol, Kamae, Mendès France and Rauzy, and other writers. Since the range of automatic sequences is finite, however, their descriptive power is severely limited.
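A standard example of the k-regular notion (this example is mine, not drawn from the abstract) is the binary sum-of-digits sequence s(n), which takes unboundedly many values yet is determined by simple relations on its 2-kernel:

```python
def s(n):
    """Binary sum-of-digits: a classic 2-regular sequence whose range
    is infinite, unlike an automatic sequence's."""
    return bin(n).count("1")

# The recurrences that witness 2-regularity: the subsequences
# s(2n) and s(2n+1) are integer-linear combinations of s(n) and 1.
for n in range(1, 200):
    assert s(2 * n) == s(n)
    assert s(2 * n + 1) == s(n) + 1
```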
LRU stack processing
 IBM Journal of Research and Development
, 1975
Abstract

Cited by 32 (0 self)
Abstract: Stack processing, and in particular stack processing for the least recently used replacement algorithms, may present computational problems when it is applied to a sequence of page references with many different pages. This paper describes a new technique for LRU stack processing that permits efficient processing of these sequences. An analysis of the algorithm and a comparison of its running times with those of the conventional stack processing algorithms are presented. Finally we discuss a multipass implementation, which was found necessary to process trace data from a large data base system.
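The baseline the paper improves on can be sketched directly (this is the naive quadratic method, not the paper's efficient technique): each reference's stack distance is its depth in an LRU stack, and one pass yields hit counts for every cache size at once.

```python
def lru_stack_distances(trace):
    """Naive LRU stack processing: for each reference, the stack
    distance is the depth of the page in an LRU stack (1 = top),
    infinite on a first reference. A cache of size C hits exactly the
    references with distance <= C, so one pass covers all sizes."""
    stack, dists = [], []
    for page in trace:
        if page in stack:
            d = stack.index(page) + 1
            stack.remove(page)
        else:
            d = float("inf")
        dists.append(d)
        stack.insert(0, page)  # move (or place) page on top
    return dists

trace = ["a", "b", "a", "c", "b", "a"]
dists = lru_stack_distances(trace)
```

The list scan makes each reference cost proportional to the number of distinct pages, which is exactly the problem the paper addresses for traces with many pages.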
A skip list cookbook
, 1990
Abstract

Cited by 28 (1 self)
Skip lists are a probabilistic data structure that seem likely to supplant balanced trees as the implementation method of choice for many applications. Skip list algorithms have the same asymptotic expected time bounds as balanced trees and are simpler, faster and use less space. The original paper on skip lists only presented algorithms for search, insertion and deletion. In this paper, we show that skip lists are as versatile as balanced trees. We describe and analyze algorithms to use search fingers, merge, split and concatenate skip lists, and implement linear list operations using skip lists. The skip list algorithms for these actions are faster and simpler than their balanced tree cousins. The merge algorithm for skip lists we describe has better asymptotic time complexity than any previously described merge algorithm for balanced trees.
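The base structure the cookbook builds on is short enough to sketch. This follows Pugh's original search and insert algorithms (a minimal version; the cookbook's finger, merge, split and concatenate operations are not shown):

```python
import random

class Node:
    def __init__(self, key, level):
        self.key = key
        self.forward = [None] * level

class SkipList:
    """Minimal skip list (insert and search only): each key sits in a
    tower of geometric height, so a search walks O(log n) expected
    pointers from the top level down."""
    MAX_LEVEL = 16

    def __init__(self):
        self.head = Node(None, self.MAX_LEVEL)

    def _random_level(self):
        lvl = 1
        while lvl < self.MAX_LEVEL and random.random() < 0.5:
            lvl += 1
        return lvl

    def insert(self, key):
        update = [self.head] * self.MAX_LEVEL
        x = self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while x.forward[i] and x.forward[i].key < key:
                x = x.forward[i]
            update[i] = x  # last node before key on level i
        node = Node(key, self._random_level())
        for i in range(len(node.forward)):
            node.forward[i] = update[i].forward[i]
            update[i].forward[i] = node

    def search(self, key):
        x = self.head
        for i in reversed(range(self.MAX_LEVEL)):
            while x.forward[i] and x.forward[i].key < key:
                x = x.forward[i]
        x = x.forward[0]
        return x is not None and x.key == key

random.seed(1)
sl = SkipList()
for k in [30, 10, 50, 20, 40]:
    sl.insert(k)
```

The `update` array of predecessors recorded during the downward search is the same hook the cookbook's finger and split algorithms exploit.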
An asymptotic theory for Cauchy-Euler differential equations with applications to the analysis of algorithms
, 2002
Abstract

Cited by 22 (10 self)
Cauchy-Euler differential equations surfaced naturally in a number of sorting and searching problems, notably in quicksort and binary search trees and their variations. Asymptotics of coefficients of functions satisfying such equations have been studied for several special cases in the literature. We study in this paper the most general framework for Cauchy-Euler equations and propose an asymptotic theory that covers almost all applications where Cauchy-Euler equations appear. Our approach is very general and requires almost no background on differential equations. Indeed the whole theory can be stated in terms of recurrences instead of functions. Old and new applications of the theory are given. New phase changes of limit laws of new variations of quicksort are systematically derived. We apply our theory to about a dozen diverse examples in quicksort, binary search trees, urn models, increasing trees, etc.
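The quicksort connection is the textbook instance (standard material, not the paper's general theory): the expected comparison count satisfies the sum-recurrence C(n) = n - 1 + (2/n) Σ_{k<n} C(k), and the generating function of C(n) satisfies a Cauchy-Euler-type differential equation. The recurrence is easy to iterate and to check against the known closed form 2(n+1)H_n - 4n:

```python
def quicksort_comparisons(N):
    """Expected comparison counts C(0..N) for randomized quicksort via
    C(n) = n - 1 + (2/n) * sum_{k<n} C(k), maintaining the running sum
    so the whole table costs O(N)."""
    C = [0.0] * (N + 1)
    total = 0.0
    for n in range(1, N + 1):
        C[n] = n - 1 + 2.0 * total / n
        total += C[n]
    return C

C = quicksort_comparisons(100)
```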
Integrating diverse knowledge sources in text recognition
 ACM Transactions on Office Information Systems
, 1983
Abstract

Cited by 19 (2 self)
A new algorithm for text recognition that corrects character substitution errors in words of text is presented. The search for a correct word effectively integrates three knowledge sources: channel characteristics, bottom-up context, and top-down context. Channel characteristics are used in the form of probabilities that observed letters are corruptions of other letters; bottom-up context is in the form of the probability of a letter when the previous letters of the word are known; and top-down context is in the form of a lexicon. A one-pass algorithm is obtained by merging a previously known dynamic programming algorithm to compute the maximum a posteriori probability string (known as the Viterbi algorithm) with searching a lexical trie. Analysis of the computational complexity of the algorithm and results of experimentation with a PASCAL implementation are presented.
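The scoring idea can be sketched with toy models. Everything below is invented for illustration: the lexicon, the channel confusion probability, and the flat bigram model are mine, and the lexicon is scanned word by word where the paper walks a trie to share prefix work and uses trained statistics.

```python
import math

# Toy lexicon; a real system would use a trie over a large dictionary.
LEXICON = ["cat", "car", "bat", "bar", "cap"]

def channel_logp(observed, true):
    """P(observed letter | true letter): mostly correct, with a small
    confusion mass spread over the other 25 letters (made-up values)."""
    return math.log(0.9 if observed == true else 0.1 / 25)

def bigram_logp(prev, cur):
    """Flat letter-bigram model; a trained model would differentiate."""
    return math.log(1 / 26)

def decode(observed):
    """Pick the lexicon word maximizing accumulated channel + bigram
    log-probabilities, Viterbi-style, over same-length candidates."""
    best_word, best_score = None, -math.inf
    for word in LEXICON:
        if len(word) != len(observed):
            continue
        score, prev = 0.0, None
        for o, t in zip(observed, word):
            score += channel_logp(o, t)
            if prev is not None:
                score += bigram_logp(prev, t)
            prev = t
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```

Because scores are per-letter sums, prefixes shared between candidate words score identically, which is exactly what makes the trie merge in the paper's one-pass algorithm effective.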
Nearly Optimal Expected-Case Planar Point Location
Abstract

Cited by 17 (5 self)
We consider the planar point location problem from the perspective of expected search time. We are given a planar polygonal subdivision S and for each polygon of the subdivision the probability that a query point lies within this polygon. The goal is to compute a search structure to determine which cell of the subdivision contains a given query point, so as to minimize the expected search time. This is a generalization of the classical problem of computing an optimal binary search tree for one-dimensional keys. In the one-dimensional case it has long been known that the entropy H of the distribution is the dominant term in the lower bound on the expected-case search time, and further there exist search trees achieving expected search times of at most H + 2. Prior to this work, there has been no known structure for planar point location with an expected search time better than 2H, and this result required strong assumptions on the nature of the query point distribution. Here we present a data structure whose expected search time is nearly equal to the entropy lower bound, namely H + o(H). The result holds for any polygonal subdivision in which the number of sides of each of the polygonal cells is bounded, and there are no assumptions on the query distribution within each cell. We extend these results to subdivisions with convex cells, assuming a uniform query distribution within each cell.
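The one-dimensional baseline the abstract refers to is concrete: compute the entropy H of the access distribution and the expected comparison count of an optimal binary search tree, here via the classic cubic dynamic program (textbook material with probabilities I made up, not the paper's planar structure).

```python
import math

def entropy(p):
    """Shannon entropy in bits of a probability distribution."""
    return sum(pi * math.log2(1 / pi) for pi in p if pi > 0)

def optimal_bst_cost(p):
    """Expected comparisons of an optimal binary search tree over
    sorted keys with access probabilities p (successful searches
    only), via the classic O(n^3) interval dynamic program."""
    n = len(p)
    cost = [[0.0] * (n + 1) for _ in range(n + 1)]
    weight = [[0.0] * (n + 1) for _ in range(n + 1)]
    for i in range(n):
        for j in range(i + 1, n + 1):
            weight[i][j] = weight[i][j - 1] + p[j - 1]
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            # try every key r as the root of the interval i..j-1
            cost[i][j] = weight[i][j] + min(
                cost[i][r] + cost[r + 1][j] for r in range(i, j)
            )
    return cost[0][n]

p = [0.05, 0.4, 0.08, 0.04, 0.1, 0.1, 0.23]
H = entropy(p)
```

For this distribution the optimal tree's expected cost sits well under the H + 2 guarantee the abstract cites for the one-dimensional case.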
Generalized Parking Functions, Tree Inversions and Multicolored Graphs
Abstract

Cited by 15 (1 self)
A generalized x-parking function associated to x = (a, b, b, ..., b) ∈ N^n is a sequence (a_1, a_2, ..., a_n) of positive integers whose non-decreasing rearrangement b_1 ≤ b_2 ≤ ... ≤ b_n satisfies b_i ≤ a + (i-1)b. The set of x-parking functions is equinumerous with the set of sequences of rooted b-forests on [n]. We construct a bijection between these two sets. We show that the sum enumerator of complements of x-parking functions is identical to the inversion enumerator of sequences of rooted b-forests by generating function analysis. Combinatorial correspondences between the sequences of rooted forests and x-parking functions are also given in terms of depth-first search and breadth-first search on multicolored graphs.

1 Introduction

The notion of parking function was introduced by Konheim and Weiss as a colorful way to study a hashing problem. In the paper [9], they proved the number of parking functions of length n is (n+1)^(n-1). Later the subject has attra...
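The definition is easy to check by brute force, and for a = b = 1 the count reproduces Konheim and Weiss's (n+1)^(n-1) formula (a small sanity sketch; the helper names are mine):

```python
from itertools import product

def is_parking(seq, a=1, b=1):
    """x-parking check for x = (a, b, ..., b): the non-decreasing
    rearrangement must satisfy sorted[i] <= a + i*b (0-indexed).
    a = b = 1 gives the classical parking functions."""
    return all(v <= a + i * b for i, v in enumerate(sorted(seq)))

def count_parking(n, a=1, b=1):
    """Brute-force count over all candidate sequences; entries above
    a + (n-1)b can never appear in an x-parking function."""
    bound = a + (n - 1) * b
    return sum(is_parking(s, a, b)
               for s in product(range(1, bound + 1), repeat=n))
```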
An Optical Model of Computation
 Theoretical Computer Science
, 2004
Abstract

Cited by 14 (10 self)
We prove computability and complexity results for an original model of computation called the continuous space machine. Our model is inspired by the theory of Fourier optics. We prove our model can simulate analog recurrent neural networks, thus establishing a lower bound on its computational power. We also define a Θ(log_2 n) unordered search algorithm with our model.