Results 11–20 of 29
Adapting to Babel: Adaptivity and Context-Sensitivity in Parsing, from a^n b^n c^n to RNA
, 2006
Abstract

Cited by 1 (0 self)
Since the time of Noam Chomsky’s introduction of phrase structure grammars in 1956–1957, it has been known that formal methods can be applied to the parsing of an infinite variety of input. In practice, however, grammars of sufficient formal power to generate context-sensitive languages in their various forms have lacked efficient generalizable algorithms, or have been of such cumbersome notation and implementation that practitioners in various fields who could otherwise greatly benefit from applied language theory have been shackled to much less powerful underlying parsing engines. The §-Calculus (pronounced: meta-ess calculus), an adaptive grammar formalism, and the corresponding adaptive(k) parsing algorithm are shown to have theoretical and practical utility in the fields of classical formal language, combinatorics, computational linguistics, bioinformatics, data mining, and programming language semantics, making tractable many previously difficult-to-parse languages. This is demonstrated by first building a formal foundation around the septuple of the §-Calculus, and then introducing the predicated pushdown automaton augmented with name-indexed tries (PDATs), a new computational machine built upon the classical single-stack PDA. After the formal model is established, implementation issues are examined in terms of the optimizations used in practice to reduce time complexity. After this framework is established, each of the areas benefited by the §-Calculus and its corresponding automata and algorithms is examined in turn using empirical
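The language a^n b^n c^n named in the title is the textbook example of a context-sensitive language that no context-free grammar can generate. A minimal membership check (purely an illustration of the language class, not the §-Calculus or the adaptive(k) algorithm) can be sketched as:

```python
import re

def is_anbncn(s: str) -> bool:
    """Membership test for {a^n b^n c^n : n >= 1}, the classic
    context-sensitive language beyond context-free power."""
    m = re.fullmatch(r"(a+)(b+)(c+)", s)
    return m is not None and len(m.group(1)) == len(m.group(2)) == len(m.group(3))
```

The regular expression checks only the a*b*c* shape; the equal-length condition, which is exactly what pushes the language out of the context-free class, is enforced by the explicit length comparison.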
to Best-Match File Searching
, 1971
Abstract
In the experimental results given above, the precision of 6 hexadecimal digits is approximately statistically equivalent to that of 22 binary digits. It must be emphasized, however, that the comparison of average accuracies is but one of several methods of comparison. In order to evaluate the relative merits of both systems, it is necessary to consider other aspects, such as the worst case accuracy and the ability to preserve algebraic identities. Our tests do show the statistical superiority of R-mode arithmetic over C-mode in all cases except that of mixed sign sums, where the C-mode without guard characters has an advantage traceable to a slight bias in the R-mode. The unbiased R*-mode with guard characters, however, is statistically more accurate than the C-mode for all of our tests. Test results in the simpler cases have been verified analytically. In the remaining cases where we failed to substantiate the test results analytically, we nevertheless feel that the tests provide useful and well-defined information. We finally note that machine implementations of hexadecimal or binary C-mode arithmetic systems using 0 or 1 guard digits are common. Implementations of a binary R-mode, or variations thereof, also exist. However, to our knowledge, there are no commercial implementations of the R*-mode. Acknowledgment. The authors wish to express their appreciation to W. Kahan, whose detailed comments on an early version of this work led to the pursuit of the analytic estimates presented here, as well as to other refinements in the presentation.
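The rounding-versus-chopping contrast the abstract measures can be reproduced in miniature with Python's decimal module. The 6-digit precision, the harmonic-series terms, and the 2000-term sum below are illustrative choices, not the paper's test set:

```python
from decimal import Decimal, Context, ROUND_DOWN, ROUND_HALF_EVEN

chop = Context(prec=6, rounding=ROUND_DOWN)       # C-mode-like: truncate each result
rnd = Context(prec=6, rounding=ROUND_HALF_EVEN)   # R-mode-like: round each result

def accumulate(ctx: Context, n: int) -> Decimal:
    """Sum 1/1 + 1/2 + ... + 1/n entirely in 6-digit arithmetic."""
    s = Decimal(0)
    for k in range(1, n + 1):
        s = ctx.add(s, ctx.divide(Decimal(1), Decimal(k)))
    return s

# Chopping drops the residue at every step and so is biased low;
# rounding is (nearly) unbiased and tracks the true sum more closely.
chopped = accumulate(chop, 2000)
rounded = accumulate(rnd, 2000)
```

Because truncation toward zero never exceeds round-to-nearest on positive sums, the chopped total is systematically below the rounded one, which is the qualitative bias the abstract attributes to C-mode.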
A Scatter Storage Scheme for Dictionary Lookups
Abstract
A document retrieval system must have some means of recording the subject matter of each document in its data base. Some systems store the
HASH SORT: A LINEAR TIME COMPLEXITY MULTIPLE-DIMENSIONAL SORT ALGORITHM, ORIGINALLY ENTITLED "MAKING A HASH OF SORTS"
, 2004
Abstract
Sorting and hashing are two completely different concepts in computer science, and appear mutually exclusive to one another. Hashing is a search method using the data as a key to map to the location within memory, and is used for rapid storage and retrieval. Sorting is a process of organizing data from a random permutation into an ordered arrangement, and is a common activity performed frequently in a variety of applications. Almost all conventional sorting algorithms work by comparison, and in doing so have a linearithmic greatest lower bound on the algorithmic time complexity. Any improvement in the theoretical time complexity of a sorting algorithm can result in much larger gains in speed for the application that uses the sort algorithm. Such a sort algorithm needs to use an alternative method for ordering the data than comparison, to exceed the linearithmic time complexity boundary on algorithmic performance. The hash sort is a general-purpose non-comparison based sorting algorithm by hashing, which has some interesting features not found in conventional sorting algorithms. The hash sort asymptotically outperforms the fastest traditional sorting algorithm, the quick sort. The hash sort algorithm has a
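The core idea, ordering by address computation instead of comparison, can be sketched as follows. The direct-addressing hash and the bounded-integer-key restriction are simplifying assumptions for illustration, not the paper's full multiple-dimensional formulation:

```python
def hash_sort(keys: list[int], lo: int, hi: int) -> list[int]:
    """Non-comparison sort sketch: each key is hashed (here by direct
    addressing) to its slot, and per-slot counts handle duplicates.
    Runs in O(n + (hi - lo)) time with no key-to-key comparisons,
    sidestepping the O(n log n) comparison-sort lower bound."""
    counts = [0] * (hi - lo)
    for k in keys:
        counts[k - lo] += 1          # hash(k) = k - lo maps the key to its slot
    out: list[int] = []
    for slot, c in enumerate(counts):
        out.extend([slot + lo] * c)  # read slots back in order
    return out
```

The linear bound holds only when the key range is comparable to n; that trade of generality for speed is exactly what distinguishes address-based sorts from comparison sorts.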
unknown title
Abstract
A novel extension to external double hashing providing significant reduction to both successful and unsuccessful search lengths is presented. The experimental and analytical results demonstrate the reductions possible. This method does not restrict the hashing table configuration parameters and utilizes very little additional storage space per bucket. The runtime performance for insertion is slightly greater than for ordinary external double hashing.
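Ordinary external double hashing, the baseline this abstract extends, probes bucket h1(k) first and then steps by a second hash h2(k). A minimal sketch, in which the two hash functions are illustrative choices and the paper's extension itself is not reproduced:

```python
def probe_sequence(key: int, m: int) -> list[int]:
    """Double-hashing probe sequence for a table of prime size m.
    h2 is never zero, and because m is prime every stride is coprime
    to m, so the sequence visits every bucket exactly once."""
    h1 = key % m                  # home bucket
    h2 = 1 + key % (m - 1)        # stride in [1, m-1]
    return [(h1 + i * h2) % m for i in range(m)]
```

Distinct keys sharing a home bucket usually get different strides, which is why double hashing avoids the secondary clustering of linear and quadratic probing.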
Sorting
Abstract
The bibliography appearing at the end of this article lists 37 sorting algorithms and 100 books and papers on sorting published in the last 20 years. The basic ideas presented here have been abstracted from this body of work, and the best algorithms known are given as examples. As the algorithms are explained,
AN ANALYSIS OF OPTIMAL RETRIEVAL SYSTEMS WITH UPDATES
, 1974
Abstract
The performance of computer-implemented systems for data storage, retrieval, and update is investigated. A data structure is modeled by a set D = {d_1, ..., d_|D|} of data bases. A set of questions A = {X_1, X_2, ...} about any d ∈ D may be answered. A memory that is bit-addressable by an algorithm or an automaton models a computer. A retrieval system is composed of a particular mapping of data bases onto memory representations and a particular algorithm or automaton. By accessing bits of memory the algorithm can answer any X ∈ A about the d represented in memory and can update memory to represent a new d* ∈ D. Lower bounds are derived for the performance measures of storage efficiency, retrieval efficiency, and update efficiency. The minima are simultaneously
Full Hash Table Search using Primitive Roots of the Prime Residue Group Z/p
Abstract
Abstract: After a brief introduction to hash-coding (scatter storage) and discussion of methods described in the literature, it is shown that for hash tables of prime length p > 2, the primitive roots r of the cyclic group Z/p of prime residues mod p can be used for a simple collision strategy q(p, i) = r^i mod p for f_i(k) = f_0(k) + q(p, i) mod p. It is similar to the strategy which uses quadratic residues q(p, i) = i^2 mod p in avoiding secondary clustering, but reaches all table positions for probing. A table of n primes for typical table lengths and their primitive roots is added. In cases where r = 2^j is such a primitive root, the collision strategy can be implemented simply by repeated shifts to the left (by j places in all). To make the paper self-contained and easy to read, the relevant definitions and theorems used from the Theory of Numbers are included in the paper.
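The abstract's claim can be checked directly: find a primitive root r of Z/p and verify that the offsets q(p, i) = r^i mod p reach every nonzero table position. A small sketch, using a brute-force root search for illustration where the paper instead tabulates roots:

```python
def is_primitive_root(r: int, p: int) -> bool:
    """True if the powers of r generate all of 1..p-1 mod p,
    i.e. r generates the full cyclic group Z/p."""
    x, seen = 1, set()
    for _ in range(p - 1):
        x = (x * r) % p
        seen.add(x)
    return len(seen) == p - 1

def collision_offsets(p: int) -> tuple[int, list[int]]:
    """Smallest primitive root r of Z/p and the probe offsets
    q(p, i) = r^i mod p for i = 1 .. p-1."""
    r = next(r for r in range(2, p) if is_primitive_root(r, p))
    return r, [pow(r, i, p) for i in range(1, p)]
```

Because r generates the whole group, the p-1 offsets are a permutation of 1..p-1, so probing f_i(k) = f_0(k) + q(p, i) mod p covers the full table, unlike quadratic-residue probing, which reaches only about half the positions.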