Results 11 - 20 of 33
Practical Earley Parsing, 2002
Abstract

Cited by 8 (0 self)
Earley's parsing algorithm is a general algorithm, able to handle any context-free grammar. As with most parsing algorithms, however, the presence of grammar rules having empty right-hand sides complicates matters. By analyzing why Earley's algorithm struggles with these grammar rules, we have devised a simple solution to the problem. Our empty-rule solution leads to a new type of finite automaton expressly suited for use in Earley parsers, and to a new statement of Earley's algorithm. We show that this new form of Earley parser is much more time-efficient in practice than the original.
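The empty-rule difficulty described above, and the fix of letting the Predictor step directly over nullable nonterminals, can be sketched as follows. This is a minimal recognizer; the grammar, token format, and item representation are illustrative, not taken from the paper:

```python
# Toy grammar with an empty (epsilon) rule: A -> epsilon is exactly the
# case that complicates a textbook Earley parser.
GRAMMAR = {
    "S": [["A", "A", "x"]],
    "A": [[]],
}
START = "S"

def nullable_set(grammar):
    """Fixed-point computation of nonterminals deriving the empty string."""
    nullable = set()
    changed = True
    while changed:
        changed = False
        for lhs, rules in grammar.items():
            if lhs not in nullable and \
               any(all(s in nullable for s in rhs) for rhs in rules):
                nullable.add(lhs)
                changed = True
    return nullable

def earley_recognize(tokens, grammar=GRAMMAR, start=START):
    nullable = nullable_set(grammar)
    # An item is (lhs, rhs, dot, origin).
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, tuple(rhs), 0, 0))
    for i in range(len(tokens) + 1):
        worklist = list(chart[i])
        while worklist:
            lhs, rhs, dot, origin = worklist.pop()
            if dot < len(rhs):
                sym = rhs[dot]
                if sym in grammar:                       # Predictor
                    for prod in grammar[sym]:
                        new = (sym, tuple(prod), 0, i)
                        if new not in chart[i]:
                            chart[i].add(new); worklist.append(new)
                    # Empty-rule fix: if sym is nullable, also advance the
                    # dot over it now, instead of relying on an epsilon
                    # completion that may arrive too late.
                    if sym in nullable:
                        new = (lhs, rhs, dot + 1, origin)
                        if new not in chart[i]:
                            chart[i].add(new); worklist.append(new)
                elif i < len(tokens) and tokens[i] == sym:  # Scanner
                    chart[i + 1].add((lhs, rhs, dot + 1, origin))
            else:                                        # Completer
                for plhs, prhs, pdot, porigin in list(chart[origin]):
                    if pdot < len(prhs) and prhs[pdot] == lhs:
                        new = (plhs, prhs, pdot + 1, porigin)
                        if new not in chart[i]:
                            chart[i].add(new); worklist.append(new)
    return any(item == (start, tuple(rhs), len(rhs), 0)
               for rhs in grammar[start]
               for item in chart[len(tokens)])
```

With `S -> A A x` and `A -> epsilon`, the input `x` is accepted only because the Predictor advances over the nullable `A` in the same chart set.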
Converting intermediate code to assembly code using declarative machine descriptions. In CC, 2006
Abstract

Cited by 8 (5 self)
Writing an optimizing back end is expensive, in part because it requires mastery of both a target machine and a compiler’s internals. We separate these concerns by isolating target-machine knowledge in declarative machine descriptions. We then analyze these descriptions to automatically generate machine-specific components of the back end. In this work, we generate a recognizer; this component, which identifies register transfers that correspond to target-machine instructions, plays a key role in instruction selection in such compilers as vpo, gcc, and Quick C--. We present analyses and transformations that address the major challenge in generating a recognizer: accounting for compile-time abstractions not present in a machine description, including variables, pseudo-registers, stack slots, and labels.
Concise Specifications of Locally Optimal Code Generators, 1987
Abstract

Cited by 8 (0 self)
Dynamic programming allows locally optimal instruction selection for expression trees. More importantly, the algorithm allows concise and elegant specification of code generators. Aho, Ganapathi, and Tjiang have built the Twig code-generator generator, which produces dynamic-programming code generators from grammar-like specifications. Encoding a complex architecture as a grammar for a dynamic-programming code-generator generator shows the expressive power of the technique. Each instruction, addressing mode, and register class can be expressed individually in the grammar. The grammar can be factored much more readily than with the Graham-Glanville LR(1) algorithm, so it can be much more concise. Twig specifications for the VAX and MC68020 are described, and the corresponding code generators select very good (and under the right assumptions, optimal) instruction sequences. Limitations and possible improvements to the specification language are discussed.
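The dynamic-programming idea behind such grammar-like specifications can be sketched on a toy example. The instruction patterns, costs, and node encoding below are invented for illustration and are not taken from Twig:

```python
# Expression trees are tuples: ("reg",), ("const", n), or ("+", left, right).
# Each hypothetical instruction pattern has a cost of 1 "cycle".

def select(node):
    """Return (cost, instructions) for a locally optimal tiling of `node`:
    at each node, every matching pattern is costed bottom-up and the
    cheapest is kept -- the essence of dynamic-programming selection."""
    op = node[0]
    if op == "reg":
        return 0, []                         # operand already in a register
    if op == "const":
        return 1, [f"li r, {node[1]}"]       # load-immediate
    if op == "+":
        left, right = node[1], node[2]
        lc, li = select(left)
        rc, ri = select(right)
        candidates = [
            # Pattern: register-register add.
            (lc + rc + 1, li + ri + ["add r, r"]),
        ]
        if right[0] == "const":
            # Pattern: immediate add absorbs a constant leaf, saving the load.
            candidates.append((lc + 1, li + [f"addi r, {right[1]}"]))
        return min(candidates, key=lambda c: c[0])
    raise ValueError(f"unknown operator {op!r}")
```

For `(reg + 4) + 8` the immediate-add pattern wins twice, giving cost 2 instead of the cost-5 all-register tiling.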
Code selection by tree series transducers. In Proc. 9th Int. Conf. on Implementation and Application of Automata, Vol. 3317 of LNCS, 2004
Abstract

Cited by 7 (0 self)
In this paper we model code selection by tree series transducers. We are given an intermediate representation of some compiler as well as a machine grammar with weights, which reflect the number of machine cycles of the instructions. The derivations of the machine grammar are machine codes. In general, a machine grammar is ambiguous and hence there might exist more than one derivation of an intermediate code. We show how to filter out a cheapest such derivation and thereby perform tree parsing and tree pattern matching using tree series transducers.
Bottom-up Tree Acceptors. Science of Computer Programming, 1986
Abstract

Cited by 6 (3 self)
This paper deals with the formal derivation of an efficient tabulation algorithm for table-driven bottom-up tree acceptors. Bottom-up tree acceptors are based on a notion of match sets. First we derive a naive acceptance algorithm using dynamic computation of match sets. Tabulation of match sets leads to an efficient acceptance algorithm, but tables may be so large that they cannot be generated due to lack of space. Introduction of a convenient equivalence relation on match sets reduces this effect and improves the tabulation algorithm.
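The match-set notion can be sketched as follows: each node's match set is the set of patterns rooted at it, and (for pattern sets closed under subpatterns, as assumed here) it is determined by the node's operator together with its children's match sets, which is what makes tabulation possible. The patterns and encoding are invented for illustration:

```python
# Hypothetical pattern set: patterns are trees whose subtrees may be the
# wildcard "*". Pattern 3 is a subpattern of pattern 2, so child match
# sets determine the parent's match set.
PATTERNS = {
    0: "*",
    1: ("a", "*", "*"),
    2: ("a", ("b",), "*"),
    3: ("b",),
}

def matches(pattern, node):
    """Does `pattern` match at the root of `node`?"""
    if pattern == "*":
        return True
    if pattern[0] != node[0] or len(pattern) != len(node):
        return False
    return all(matches(p, c) for p, c in zip(pattern[1:], node[1:]))

def match_set(node, table):
    """Match set of `node`, computed bottom-up (children first).
    `table` maps (operator, children's match sets) to the node's match
    set -- the key behind table-driven acceptors; here entries are
    filled dynamically, as in the naive algorithm, rather than
    precomputed."""
    child_sets = tuple(match_set(c, table) for c in node[1:])
    key = (node[0], child_sets)
    if key not in table:
        table[key] = frozenset(p for p, pat in PATTERNS.items()
                               if matches(pat, node))
    return table[key]
```

On the tree `a(b, b)`, the leaves get match set {0, 3} and the root {0, 1, 2}; the second `b` leaf is answered from the table without re-matching.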
A New Algorithm for Linear Regular Tree Pattern Matching. Theoretical Computer Science, 1998
An Algorithm for the Allocation of Functional Units from Realistic RT Component Libraries. In Proceedings of the Seventh International Symposium on High-Level Synthesis, 1994
Abstract

Cited by 5 (1 self)
Existing algorithms in High-Level Synthesis (HLS) typically assume a direct mapping of hardware description language (HDL) operators to RT units. This assumption simplifies synthesis to generic RT components, but prevents effective use of complex databook components, custom-designed cells, previously synthesized RT modules, and RT module generators. In this paper, we present an algorithm for allocation in HLS for reuse of existing RT-level components. This approach can be used to customize HLS tools to user-specific RT libraries. Our experiments show improvements of 10-37% in area over conventional approaches.
Finite-State Code Generation, 1999
Abstract

Cited by 5 (0 self)
This paper describes GBURG, which generates tiny, fast code generators based on finite-state machine pattern matching. The code generators translate postfix intermediate code into machine instructions in one pass (except, of course, for backpatching addresses). A stack-based virtual machine known as the Lean Virtual Machine (LVM), tuned for fast code generation, is also described. GBURG translates the two-page LVM-to-x86 specification into a code generator that fits entirely in an 8 KB I-cache and that emits x86 code at 3.6 MB/sec on a 266-MHz P6. Our just-in-time code generator translates and executes small benchmarks at speeds within a factor of two of executables derived from the conventional compile-time code generator on which it is based.
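The flavor of one-pass postfix translation can be sketched as below. The register discipline and instruction names are simplified stand-ins, and a real GBURG-generated back end would drive the matching from compiled finite-state tables rather than hand-written branches:

```python
# One-pass translation of postfix intermediate code into x86-like
# instructions. The stack of register names is the translator's state;
# expressions are assumed shallow enough to fit in the register set.

REGS = ["eax", "ecx", "edx", "ebx"]

def emit(postfix):
    code, stack = [], []
    free = list(reversed(REGS))          # pop order: eax, ecx, edx, ebx
    for op in postfix:
        if isinstance(op, int):          # push a constant
            r = free.pop()
            code.append(f"mov {r}, {op}")
            stack.append(r)
        elif op == "+":
            b, a = stack.pop(), stack.pop()
            code.append(f"add {a}, {b}") # result stays in a's register
            free.append(b)
            stack.append(a)
        elif op == "*":
            b, a = stack.pop(), stack.pop()
            code.append(f"imul {a}, {b}")
            free.append(b)
            stack.append(a)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return code
```

For the postfix program `2 3 + 4 *`, each token is consumed exactly once and an instruction is emitted immediately, with no intermediate tree built.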
Near-Optimal Instruction Selection on DAGs, 2008
Abstract

Cited by 5 (2 self)
Instruction selection is a key component of code generation. High-quality instruction selection is of particular importance in the embedded space, where complex instruction sets are common and code size is a prime concern. Although instruction selection on tree expressions is a well-understood and easily solved problem, instruction selection on directed acyclic graphs is NP-complete. In this paper we present NOLTIS, a near-optimal, linear-time instruction selection algorithm for DAG expressions. NOLTIS is easy to implement, fast, and effective, with a demonstrated average code size improvement of 5.1% compared to the traditional tree decomposition and tiling approach.
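The traditional tree-decomposition baseline mentioned here can be sketched as follows: the DAG is cut at every node referenced more than once, and each such node roots its own tree to be tiled separately, its value reused through a temporary. The node class and cutting rule are illustrative only:

```python
class Node:
    """Minimal DAG node: an operator and a mutable list of children."""
    def __init__(self, op, *kids):
        self.op, self.kids = op, list(kids)

def tree_decompose(root):
    """Split a DAG into trees by cutting every edge into a node with more
    than one incoming reference; each cut node becomes a tree root and
    its former parents see a 'tmp' leaf (the reused temporary)."""
    refs = {}
    def visit(n):
        for k in n.kids:
            seen = k in refs
            refs[k] = refs.get(k, 0) + 1
            if not seen:
                visit(k)
    visit(root)
    shared = [n for n, c in refs.items() if c > 1]
    for tree_root in [root] + shared:
        stack = [tree_root]
        while stack:
            m = stack.pop()
            for i, k in enumerate(m.kids):
                if k in shared:
                    m.kids[i] = Node("tmp")   # cut edge: reuse via temporary
                else:
                    stack.append(k)
    return [root] + shared
```

On a diamond such as `(c + c)` with a single shared `c`, this yields two trees; the duplication and missed cross-tree patterns at these cut points are what make the decomposition suboptimal relative to tiling the whole DAG.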
Even faster generalized LR parsing. Acta Informatica, 2000
Abstract

Cited by 3 (0 self)
We prove a property of generalized LR (GLR) parsing: if the grammar is without right and hidden left recursions, then the number of consecutive reductions between the shifts of two adjacent symbols cannot be greater than a constant. Further, we show that this property can be used for constructing an optimized version of our GLR parser. Compared with a standard GLR parser, our optimized parser reads one symbol on every transition and performs significantly fewer stack operations. Our timings show that, especially for highly ambiguous grammars, our parser is significantly faster than a standard GLR parser.