Results 1–10 of 155
The NP-completeness column: an ongoing guide
 Journal of Algorithms
, 1985
Abstract

Cited by 188 (0 self)
This is the nineteenth edition of a (usually) quarterly column that covers new developments in the theory of NP-completeness. The presentation is modeled on that used by M. R. Garey and myself in our book "Computers and Intractability: A Guide to the Theory of NP-Completeness," W. H. Freeman & Co., New York, 1979 (hereinafter referred to as "[G&J]"; previous columns will be referred to by their dates). A background equivalent to that provided by [G&J] is assumed, and, when appropriate, cross-references will be given to that book and the list of problems (NP-complete and harder) presented there. Readers who have results they would like mentioned (NP-hardness, PSPACE-hardness, polynomial-time solvability, etc.) or open problems they would like publicized, should ...
A fast algorithm for finding dominators in a flowgraph
 ACM Transactions on Programming Languages and Systems
, 1979
Abstract

Cited by 144 (3 self)
A fast algorithm for finding dominators in a flowgraph is presented. The algorithm uses depth-first search and an efficient method of computing functions defined on paths in trees. A simple implementation of the algorithm runs in O(m log n) time, where m is the number of edges and n is the number of vertices in the problem graph. A more sophisticated implementation runs in O(m α(m, n)) time, where α(m, n) is a functional inverse of Ackermann's function. Both versions of the algorithm were implemented in Algol W, a Stanford University version of Algol, and tested on an IBM 370/168. The programs were compared with an implementation by Purdom and Moore of a straightforward O(mn)-time algorithm, and with a bit-vector algorithm described by Aho and Ullman. The fast algorithm beat the straightforward algorithm and the bit-vector algorithm on all but the smallest graphs tested.
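The straightforward O(mn)-time baseline attributed to Purdom and Moore can be sketched as an iterative set-intersection fixed point. This is a minimal illustration, not the paper's fast algorithm; the graph encoding and example are our own:

```python
def dominators(succ, root):
    """Compute dom[v] = set of nodes dominating v, by iterating
    dom[v] = {v} ∪ ⋂ dom[p] over predecessors p until a fixed point."""
    nodes = set(succ)
    pred = {v: set() for v in nodes}
    for u, vs in succ.items():
        for v in vs:
            pred[v].add(u)
    dom = {v: set(nodes) for v in nodes}   # start from "everything dominates v"
    dom[root] = {root}
    changed = True
    while changed:
        changed = False
        for v in nodes - {root}:
            new = {v} | (set.intersection(*(dom[p] for p in pred[v]))
                         if pred[v] else set())
            if new != dom[v]:
                dom[v] = new
                changed = True
    return dom

# Hypothetical flowgraph: root -> a -> b, root -> b
g = {"root": ["a", "b"], "a": ["b"], "b": []}
assert dominators(g, "root")["b"] == {"root", "b"}
```

Each sweep over the n nodes touches every edge once, hence the O(mn) bound the fast algorithm improves on.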
Experience in the Automatic Parallelization of Four Perfect Benchmark Programs
 Lecture Notes in Computer Science 589. Proceedings of the Fourth Workshop on Languages and Compilers for Parallel Computing
, 1991
Abstract

Cited by 92 (25 self)
This paper discusses the techniques used to hand-parallelize, for the Alliant FX/80, four Fortran programs from the Perfect Benchmark suite. The paper also includes the execution times of the programs before and after the transformations. The four programs considered here were not effectively parallelized by the automatic translators available to the authors. However, most of the techniques used for hand parallelization, and perhaps all of them, have wide applicability and can be incorporated into existing translators.

1. Introduction

It is by now widely accepted that in many real-life applications, supercomputers have been unable to deliver a reasonable fraction of their peak performance. An illustration of this is provided by the Perfect Benchmark programs [3], many of which effectively use less than 1% of the computational resources available in the most powerful supercomputers. While it is apparent that the reason for this dismal behavior is sometimes the result of the machine ...
Efficient Parsing for Bilexical Context-Free Grammars and Head Automaton Grammars
 IN ACL 37
, 1999
Abstract

Cited by 92 (19 self)
Several recent stochastic parsers use bilexical grammars, where each word type idiosyncratically prefers particular complements with particular head words. We present O(n^4) parsing algorithms for two bilexical formalisms, improving the prior upper bounds of O(n^5). For a common special case that was known to allow O(n^3) parsing (Eisner, 1997), we present an O(n^3) algorithm with an improved grammar constant.
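For orientation, the classical O(n^3) chart parsing these bilexical bounds are measured against can be sketched as a plain CKY recognizer over a grammar in Chomsky normal form. The grammar encoding below is our own illustrative choice, not the paper's formalism:

```python
def cky_recognize(grammar, start, w):
    """O(n^3 * |G|) CKY recognizer for a CNF grammar.
    grammar = (binary, unary) with binary rules {(B, C): {A, ...}}
    and terminal rules {a: {A, ...}}."""
    binary, unary = grammar
    n = len(w)
    # chart[i][k] = set of nonterminals deriving the span w[i:k]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, a in enumerate(w):
        chart[i][i + 1] = set(unary.get(a, ()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):          # split point
                for B in chart[i][j]:
                    for C in chart[j][k]:
                        chart[i][k] |= binary.get((B, C), set())
    return start in chart[0][n]

# Hypothetical CNF grammar for a^n b^n:
#   S -> A T | A B, T -> S B, A -> a, B -> b
binary = {("A", "T"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"T"}}
unary = {"a": {"A"}, "b": {"B"}}
assert cky_recognize((binary, unary), "S", "aabb")
```

Bilexicalization multiplies each nonterminal by a head word, which naively inflates this cubic loop nest to O(n^5); the paper's contribution is restructuring the computation to O(n^4).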
Decompilation of Binary Programs
, 1995
Abstract

Cited by 86 (12 self)
This paper is structured in the following way: a thorough description of the structure of a decompiler, followed by the description of our implementation of an automatic decompiling system, and conclusions. The paper is followed by the definitions of graph-theoretical concepts used throughout the paper (Appendix I), and sample output from different phases of the decompilation of a program (Appendix II).

Footnote: An idiom is a sequence of instructions that forms a logical entity and has a meaning that cannot be derived by considering the primary meanings of the individual instructions.

[Figure 1. Decompiler modules: binary program → Front-end (machine dependent) → UDM (analysis) → Back-end (language dependent) → HLL program]
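The footnote's notion of an idiom can be illustrated by a tiny peephole pass that rewrites recognized instruction sequences into pseudo-instructions carrying their combined meaning. The mnemonics, tuple encoding, and pseudo-ops here are our own hypothetical sketch, not the paper's implementation:

```python
def fold_idioms(code):
    """Scan an instruction list [(mnemonic, operands), ...] and replace
    recognized idioms with pseudo-instructions."""
    out, i = [], 0
    while i < len(code):
        # Two-instruction idiom: CWD ; IDIV r  =>  signed divide of AX by r.
        # Neither instruction alone carries the "signed division" meaning.
        if (i + 1 < len(code) and code[i] == ("cwd", [])
                and code[i + 1][0] == "idiv"):
            out.append(("sdiv", ["ax", code[i + 1][1][0]]))
            i += 2
            continue
        # One-instruction idiom: XOR r, r  =>  r := 0 (not a "real" xor).
        mnem, ops = code[i]
        if mnem == "xor" and len(ops) == 2 and ops[0] == ops[1]:
            out.append(("setzero", [ops[0]]))
        else:
            out.append(code[i])
        i += 1
    return out

code = [("cwd", []), ("idiv", ["bx"]), ("xor", ["ax", "ax"])]
assert fold_idioms(code) == [("sdiv", ["ax", "bx"]), ("setzero", ["ax"])]
```

A decompiler front-end applies such rewrites before analysis, so later phases see the logical operation rather than its machine-level encoding.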
Context-Free Languages and Pushdown Automata
 Handbook of Formal Languages
, 1997
Abstract

Cited by 62 (0 self)
Contents: 1. Introduction; 1.1 Grammars; 1.2 Examples; 2. Systems of equations; 2.1 Systems; 2.2 Resolution; 2.3 Linear systems; 2.4 Parikh's theorem ...
What is the Search Space of the Regular Inference?
 In Proceedings of the Second International Colloquium on Grammatical Inference (ICGI'94)
, 1994
Abstract

Cited by 45 (5 self)
This paper revisits the theory of regular inference, in particular by extending the definition of structural completeness of a positive sample and by demonstrating two basic theorems. This framework enables us to state the regular inference problem as a search through a boolean lattice built from the positive sample. Several properties of the search space are studied and generalization criteria are discussed. In this framework, the concept of the border set is introduced, that is, the set of the most general solutions excluding a negative sample. Finally, the complexity of regular language identification from both a theoretical and a practical point of view is discussed.

1 Introduction

Regular inference is the process of learning a regular language from a set of examples, consisting of a positive sample, i.e. a finite subset of a regular language. A negative sample, i.e. a finite set of strings not belonging to this language, may also be available. This problem has been studied as early as th...
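The lattice searched in regular inference is rooted in the prefix tree acceptor (PTA) of the positive sample: an automaton whose states are the sample's prefixes, which candidate solutions are obtained from by merging states. A minimal construction sketch, with our own encoding (states named by prefix strings):

```python
def build_pta(positive):
    """Build the prefix tree acceptor of a positive sample.
    Returns (states, delta, accept); "" is the start state."""
    states = {""}
    delta = {}            # (state, symbol) -> state
    accept = set()
    for w in positive:
        for i, ch in enumerate(w):
            src, dst = w[:i], w[:i + 1]
            states.add(dst)
            delta[(src, ch)] = dst
        accept.add(w)     # each full sample string ends in an accepting state
    return states, delta, accept

def accepts(delta, accept, w):
    q = ""
    for ch in w:
        if (q, ch) not in delta:
            return False
        q = delta[(q, ch)]
    return q in accept

states, delta, accept = build_pta(["ab", "abb"])
assert accepts(delta, accept, "ab") and not accepts(delta, accept, "a")
```

The PTA accepts exactly the positive sample; merging its states yields progressively more general automata, which is the boolean lattice the paper's search ranges over.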
GLR*: A Robust Grammar-Focused Parser for Spontaneously Spoken Language
, 1996
Abstract

Cited by 43 (11 self)
The analysis of spoken language is widely considered to be a more challenging task than the analysis of written text. All of the difficulties of written language can generally be found in spoken language as well. Parsing spontaneous speech must, however, also deal with problems such as speech disfluencies, the looser notion of grammaticality, and the lack of clearly marked sentence boundaries. The contamination of the input with errors of a speech recognizer can further exacerbate these problems. Most natural language parsing algorithms are designed to analyze "clean" grammatical input. Because they reject any input which is found to be ungrammatical in even the slightest way, such parsers are unsuitable for parsing spontaneous speech, where completely grammatical input is the exception rather than the rule. This thesis describes GLR*, a parsing system based on Tomita's Generalized LR parsing algorithm, that was designed to be robust to two particular types of extragrammaticality: noise...
A Control-Flow Normalization Algorithm and Its Complexity
 IEEE Transactions on Software Engineering
, 1992
Abstract

Cited by 42 (0 self)
We present a simple method for normalizing the control flow of programs to facilitate program transformations, program analysis, and automatic parallelization. While previous methods result in programs whose control-flow graphs are reducible, programs normalized by this technique satisfy a stronger condition than reducibility and are therefore simpler in their syntax and structure than those produced by previous methods. In particular, all control-flow cycles are normalized into single-entry, single-exit while loops, and all gotos are eliminated. Furthermore, the method avoids the problems of code replication that are characteristic of node-splitting techniques. This restructuring obviates the control dependence graph, since afterwards control dependence relations are manifest in the syntax tree of the program. In this paper we present transformations that effect this normalization, and study the complexity of the method.

Index Terms: Continuations, control flow, elimination algorithms, normalization, ...
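The target shape of the normalization can be illustrated with a hypothetical fragment (ours, not the paper's): a backward goto-style branch and the single-entry, single-exit while loop it becomes.

```python
# Before (goto-style pseudocode, sketched as comments):
#     i = 0
#   L: body(i)
#     i += 1
#     if i < n: goto L        # backward branch forming an irregular cycle
#
# After normalization, the same cycle as one structured loop:
def normalized(n, body):
    i = 0
    while i < n:              # single entry, single exit; the goto is gone
        body(i)
        i += 1
    return i

visited = []
assert normalized(3, visited.append) == 3 and visited == [0, 1, 2]
```

Once every cycle has this form, control dependences can be read directly off the nesting of the syntax tree, which is why the paper notes the control dependence graph becomes unnecessary.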
Dynamic Programming for Linear-Time Incremental Parsing
Abstract

Cited by 42 (3 self)
Incremental parsing techniques such as shift-reduce have gained popularity thanks to their efficiency, but there remains a major problem: the search is greedy and only explores a tiny fraction of the whole space (even with beam search) as opposed to dynamic programming. We show that, surprisingly, dynamic programming is in fact possible for many shift-reduce parsers, by merging "equivalent" stacks based on feature values. Empirically, our algorithm yields up to a fivefold speedup over a state-of-the-art shift-reduce dependency parser with no loss in accuracy. Better search also leads to better learning, and our final parser outperforms all previously reported dependency parsers for English and Chinese, yet is much faster.
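The stack/buffer configurations whose equivalence classes the dynamic program merges can be sketched with a minimal greedy arc-standard shift-reduce parser. The scoring function here is a hypothetical stand-in, not the paper's learned model, and this greedy loop is exactly the baseline whose search the paper improves:

```python
def parse(words, score):
    """Greedy arc-standard parsing.
    score(stack, buffer, action) -> float; actions: SHIFT, LEFT, RIGHT.
    Returns a list of (head, dependent) arcs over word indices."""
    stack, buf, arcs = [], list(range(len(words))), []
    while buf or len(stack) > 1:
        actions = []
        if buf:
            actions.append("SHIFT")
        if len(stack) >= 2:
            actions += ["LEFT", "RIGHT"]
        act = max(actions, key=lambda a: score(stack, buf, a))
        if act == "SHIFT":
            stack.append(buf.pop(0))
        elif act == "LEFT":            # second-from-top becomes dependent of top
            dep = stack.pop(-2)
            arcs.append((stack[-1], dep))
        else:                          # RIGHT: top becomes dependent of second-from-top
            dep = stack.pop()
            arcs.append((stack[-1], dep))
    return arcs

# Toy score that always shifts first, then attaches rightward:
prefer = {"SHIFT": 2, "RIGHT": 1, "LEFT": 0}
assert parse(["a", "b", "c"], lambda s, b, a: prefer[a]) == [(1, 2), (0, 1)]
```

The key observation behind the paper is that two stacks agreeing on the features `score` inspects behave identically from this point on, so they can be merged into one dynamic-programming item instead of being explored separately.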