Results 1–10 of 50
The Structure of Shared Forests in Ambiguous Parsing
, 1989
Abstract

Cited by 120 (5 self)
The context-free backbone of some natural language analyzers produces all possible CF parses as some kind of shared forest, from which a single tree is to be chosen by a disambiguation process that may be based on the finer features of the language. We study the structure of these forests with respect to optimality of sharing, and in relation with the parsing schema used to produce them. In addition to a theoretical and experimental framework for studying these issues, the main results presented are: sophistication in chart parsing schemata (e.g. use of lookahead) may reduce time and space efficiency instead of improving it; there is a shared forest structure with at most cubic size for any CF grammar; when O(n³) complexity is required, the shape of a shared forest is dependent on the parsing schema used.
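The sharing idea can be illustrated with a minimal CYK-style recognizer that packs all derivations of each (nonterminal, span) pair into one forest node: with a grammar in Chomsky normal form there are O(n²) such nodes, each holding O(n) packed split points, which matches the cubic bound mentioned in the abstract. This is only a sketch, not the paper's construction; the grammar encoding and function names are illustrative.

```python
from collections import defaultdict

def cyk_shared_forest(words, grammar, start="S"):
    """CYK recognizer that records a packed ('shared') forest.

    grammar: dict mapping a nonterminal to a list of right-hand sides,
    each a 1-tuple (terminal,) or a 2-tuple (B, C) of nonterminals
    (Chomsky normal form). Returns (recognized, forest) where forest maps
    (A, i, j) to its packed daughters: at most O(n) split points per
    node, giving the cubic overall size.
    """
    n = len(words)
    chart = defaultdict(set)    # (i, j) -> set of nonterminals
    forest = defaultdict(list)  # (A, i, j) -> [(word,) or (B, C, k)]
    for i, w in enumerate(words):
        for a, rhss in grammar.items():
            for rhs in rhss:
                if rhs == (w,):
                    chart[(i, i + 1)].add(a)
                    forest[(a, i, i + 1)].append((w,))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, rhss in grammar.items():
                    for rhs in rhss:
                        if (len(rhs) == 2 and rhs[0] in chart[(i, k)]
                                and rhs[1] in chart[(k, j)]):
                            chart[(i, j)].add(a)
                            # ambiguity is packed, not duplicated:
                            forest[(a, i, j)].append((rhs[0], rhs[1], k))
    return start in chart[(0, n)], forest
```

With the ambiguous grammar S → S S | a, both splits of a three-word input end up packed under the single node (S, 0, 3) rather than duplicating the shared subtrees.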
Analysis of Recursive State Machines
 In Proceedings of CAV 2001
, 2001
Abstract

Cited by 111 (21 self)
Recursive state machines (RSMs) enhance the power of ordinary state machines by allowing vertices to correspond either to ordinary states or to potentially recursive invocations of other state machines. RSMs can model the control flow in sequential imperative programs containing recursive procedure calls. They can be viewed as a visual notation extending Statecharts-like hierarchical state machines, where concurrency is disallowed but recursion is allowed. They are also related to various models of pushdown systems studied in the verification and program analysis communities. After introducing RSMs, we focus on whether state-space analysis can be performed efficiently for RSMs. We consider the two central problems for algorithmic analysis and model checking, namely, reachability (is a target state reachable from initial states) and cycle detection (is there a reachable cycle containing an accepting state). We show that both these problems can be solved in time O(nθ²) and space O(nθ), where n is the size of the recursive machine and θ is the maximum, over all component state machines, of the minimum of the number of entries and the number of exits of each component. We also study the precise relationship between RSMs and closely related models.
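The core of summary-based RSM analysis can be sketched as a fixpoint: a call edge may be crossed only once we know the callee's entry can reach one of its exits. The sketch below decides that question for one-entry modules; the module format and names are illustrative assumptions, and the paper's algorithm achieves the O(nθ²) bound with a more careful worklist than this naive iteration.

```python
def rsm_terminates(modules, main):
    """Decide whether the entry of module `main` can reach one of its
    exits, via call summaries (a sketch; encoding is illustrative).

    modules: name -> {"entry": node, "exits": set of nodes,
                      "edges": set of (u, v) internal transitions,
                      "calls": set of (u, callee_name, v)}
    A call (u, callee, v) moves control from u to v provided callee's
    entry is known to reach one of callee's exits.
    """
    summary = {name: False for name in modules}  # entry reaches an exit?
    changed = True
    while changed:
        changed = False
        for name, mod in modules.items():
            if summary[name]:
                continue
            # plain DFS inside the module, using current summaries
            seen, stack = {mod["entry"]}, [mod["entry"]]
            while stack:
                u = stack.pop()
                succs = [v for (a, v) in mod["edges"] if a == u]
                succs += [v for (a, callee, v) in mod["calls"]
                          if a == u and summary[callee]]
                for v in succs:
                    if v not in seen:
                        seen.add(v)
                        stack.append(v)
            if seen & mod["exits"]:
                summary[name] = True
                changed = True   # a new summary may unlock other modules
    return summary[main]
```

A module whose only path to its exit is an unconditional self-call never acquires a summary, correctly modelling non-terminating recursion.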
Multiplying matrices faster than Coppersmith-Winograd
 In Proc. 44th ACM Symposium on Theory of Computing
, 2012
Abstract

Cited by 39 (5 self)
We develop new tools for analyzing matrix multiplication constructions similar to the Coppersmith-Winograd construction, and obtain a new improved bound of ω < 2.3727.
Parsing Incomplete Sentences
, 1988
Abstract

Cited by 29 (2 self)
An efficient context-free parsing algorithm is presented that can parse sentences with unknown parts of unknown length. It produces in finite form all possible parses (often infinite in number) that could account for the missing parts. The algorithm is a variation on the construction due to Earley. However, its presentation is such that it can readily be adapted to any chart parsing schema (top-down, bottom-up, etc.).
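The idea of letting unknown material match anything can be sketched with an ordinary CYK recognizer in which a placeholder token matches every terminal of the grammar. This is a CYK-flavoured illustration only, not the paper's Earley-based construction (which also handles unknown subsequences of unknown length); the grammar encoding and the "?" placeholder are assumptions.

```python
def cyk_with_unknowns(words, grammar, start="S", unknown="?"):
    """Recognize a sentence where some tokens are the placeholder '?'.

    A '?' token may stand for any terminal. grammar is in Chomsky normal
    form: nonterminal -> list of (terminal,) or (B, C) right-hand sides.
    """
    n = len(words)
    cell = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        for a, rhss in grammar.items():
            for rhs in rhss:
                # an unknown token matches every lexical rule
                if len(rhs) == 1 and (w == unknown or rhs[0] == w):
                    cell[i][i + 1].add(a)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, rhss in grammar.items():
                    for rhs in rhss:
                        if (len(rhs) == 2 and rhs[0] in cell[i][k]
                                and rhs[1] in cell[k][j]):
                            cell[i][j].add(a)
    return start in cell[0][n]
```

With S → A B, A → a, B → b, the input "a ?" is accepted (the "?" can only sensibly be a b here), while "b ?" is rejected: the unknown token widens the lexical possibilities but the grammar still constrains the whole.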
Cache-efficient Dynamic Programming Algorithms for Multicores
, 2008
Abstract

Cited by 23 (5 self)
We present cache-efficient chip multiprocessor (CMP) algorithms with good speedup for some widely used dynamic programming algorithms. We consider three types of caching systems for CMPs: DCMP with a private cache for each core, SCMP with a single cache shared by all cores, and Multicore, which has private L1 caches and a shared L2 cache. We derive results for three classes of problems: local dependency dynamic programming (LDDP), the Gaussian Elimination Paradigm (GEP), and the parenthesis problem. For each class of problems, we develop a generic CMP algorithm with an associated tiling sequence. We then tailor this tiling sequence to each caching model and provide a parallel schedule that results in a cache-efficient parallel execution up to the critical path length of the underlying dynamic programming algorithm. We present experimental results on an 8-core Opteron for two sequence alignment problems that are important examples of LDDP. Our experimental results show good speedups for simple versions of our algorithms.
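Tiling can be illustrated on the simplest LDDP example, longest common subsequence: the DP table is filled block by block, and each t × t block exchanges only O(t) boundary cells with its neighbours, which is what makes a block a natural unit of cache reuse and parallel scheduling. This sequential sketch shows only the blocking, not the paper's parallel schedule or cache analysis; the function name and tile size are illustrative.

```python
def lcs_tiled(x, y, tile=64):
    """Length of the longest common subsequence of x and y, computed
    tile by tile. Blocks are visited in row-major order, which respects
    the DP dependencies: each cell needs only its left, top, and
    top-left neighbours, all of which lie in already-finished blocks or
    earlier cells of the current block.
    """
    m, n = len(x), len(y)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for bi in range(0, m, tile):
        for bj in range(0, n, tile):
            # fill one tile of the table
            for i in range(bi + 1, min(bi + tile, m) + 1):
                for j in range(bj + 1, min(bj + tile, n) + 1):
                    if x[i - 1] == y[j - 1]:
                        d[i][j] = d[i - 1][j - 1] + 1
                    else:
                        d[i][j] = max(d[i - 1][j], d[i][j - 1])
    return d[m][n]
```

In the CMP setting, anti-diagonals of blocks are independent of one another, so they can be assigned to different cores; the tiling sequence of the paper chooses tile sizes to fit the private or shared caches of each model.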
Fast Context-Free Grammar Parsing Requires Fast Boolean Matrix Multiplication
, 2002
Abstract

Cited by 23 (0 self)
In 1975, Valiant showed that Boolean matrix multiplication can be used for parsing context-free grammars (CFGs), yielding the asymptotically fastest (although not practical) CFG parsing algorithm known. We prove a dual result: any CFG parser with time complexity $O(g n^{3 - \epsilon})$, where $g$ is the size of the grammar and $n$ is the length of the input string, can be efficiently converted into an algorithm to multiply $m \times m$ Boolean matrices in time $O(m^{3 - \epsilon/3})$. Given that practical, substantially subcubic Boolean matrix multiplication algorithms have been quite difficult to find, we thus explain why there has been little progress in developing practical, substantially subcubic general CFG parsers. In proving this result, we also develop a formalization of the notion of parsing.
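The easy (Valiant 1975) direction of this correspondence is short enough to state in code: for a fixed rule A → B C, combining spans in CYK is exactly a Boolean matrix product over span matrices. The paper proves the harder converse (fast parsing implies fast Boolean matrix multiplication); the sketch below only illustrates the forward direction, with an illustrative matrix encoding.

```python
def cyk_step_as_bmm(mB, mC):
    """One CYK combination step for a rule A -> B C, phrased as a
    Boolean matrix product: mB[i][k] = 1 iff B derives positions i..k,
    mC[k][j] = 1 iff C derives positions k..j. Then the (i, j) entry of
    the product is 1 iff A derives i..j via some split point k.
    """
    n = len(mB)
    return [[int(any(mB[i][k] and mC[k][j] for k in range(n)))
             for j in range(n)]
            for i in range(n)]
```

So a subcubic Boolean matrix product gives a subcubic CYK closure; the paper's reduction runs in the other direction, turning a hypothetical fast parser into a fast multiplier.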
Cache-oblivious dynamic programming
 In Proc. of the Seventeenth Annual ACMSIAM Symposium on Discrete Algorithms, SODA ’06
, 2006
Abstract

Cited by 16 (5 self)
We present efficient cache-oblivious algorithms for several fundamental dynamic programs. These include new algorithms with improved cache performance for longest common subsequence (LCS), edit distance, gap (i.e., edit distance with gaps), and least weight subsequence. We present a new cache-oblivious framework called the Gaussian Elimination Paradigm (GEP) for Gaussian elimination without pivoting that also gives cache-oblivious algorithms for Floyd-Warshall all-pairs shortest paths in graphs and ‘simple DP’, among other problems.
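The GEP kernel is the familiar triply nested k-i-j update, shown here in its Floyd-Warshall instance; the cache-oblivious framework reorganizes exactly this loop nest into a recursive decomposition with the same data dependencies. This is a sketch of the iterative kernel only, not of the recursion.

```python
def floyd_warshall_gep(d):
    """In-place Floyd-Warshall all-pairs shortest paths on an n x n
    distance matrix d (use float('inf') for missing edges). The k-i-j
    update d[i][j] = min(d[i][j], d[i][k] + d[k][j]) is the canonical
    GEP computation; Gaussian elimination without pivoting has the
    same loop structure with a different update function.
    """
    n = len(d)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d
```

Because every update touches only rows/columns i, j, and k, splitting the matrix into quadrants and recursing gives the cache-oblivious variant without changing the values computed.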
Expressivity and complexity of the Grammatical Framework
 Göteborg University and Chalmers University of Technology, Gothenburg, Sweden
, 2004
Abstract

Cited by 12 (2 self)
Every creature, every creation, every dream that man has ever dreamt is here. You shaped them in your dreams and fables and in your books, you gave them form and substance and you believed in them and gave them the power to do this and that, until they took on lives of their own. And then you abandoned them. (Translated from the Swedish; Lundwall, 1974, p. 114)

This thesis investigates the expressive power and parsing complexity of the Grammatical Framework (GF), a formalism originally designed for displaying formal propositions and proofs in natural language. This is done by relating GF to two better-known grammar formalisms: generalized context-free grammar (GCFG), best seen as a framework for describing various grammar formalisms, and parallel multiple context-free grammar (PMCFG), an instance of GCFG. Since GF is a fairly new theory, some questions about its expressivity and parsing complexity have until now not been answered, and these questions are the main focus of this thesis. The main result is that the important subclass context-free GF is equivalent to PMCFG, which has polynomial parsing complexity and whose expressive power is fairly well known. Furthermore, we give a number of tabular parsing algorithms for PMCFG with polynomial complexity, obtained by extending existing algorithms for context-free grammars. We suggest three possible extensions of GF/PMCFG and discuss how the expressive power and parsing complexity are influenced. Finally, we discuss the parsing problem for unrestricted GF grammars, which is undecidable in general. We nevertheless describe a procedure for parsing grammars containing higher-order functions and dependent types.
Regularity lemmas and combinatorial algorithms
 In Proc. FOCS
Abstract

Cited by 11 (1 self)
We present new combinatorial algorithms for Boolean matrix multiplication (BMM) and for preprocessing a graph to answer independent set queries. We give the first asymptotic improvements on combinatorial algorithms for dense BMM in many years, improving on the “Four Russians” O(n³/(w log n)) bound for machine models with word size w. (For a pointer machine, we can set w = log n.) The algorithms utilize notions from Regularity Lemmas for graphs in a novel way.

We give two randomized combinatorial algorithms for BMM. The first algorithm is essentially a reduction from BMM to the Triangle Removal Lemma. The best known bounds for the Triangle Removal Lemma only imply an O((n³ log β)/(βw log n)) time algorithm for BMM, where β = (log* n)^δ for some δ > 0, but improvements on the Triangle Removal Lemma would yield corresponding runtime improvements. The second algorithm applies the Weak Regularity Lemma of Frieze and Kannan along with several information compression ideas, running in O(n³(log log n)²/(log n)^{9/4}) time with probability exponentially close to 1. When w ≥ log n, it can be implemented in O(n³(log log n)²/(w(log n)^{7/6})) time. Our results immediately imply improved combinatorial methods for CFG parsing, detecting triangle-freeness, and transitive closure.

Using Weak Regularity, we also give an algorithm for answering queries of the form “is S ⊆ V an independent set?” in a graph. Improving on prior work, we show how to randomly preprocess a graph in O(n^{2+ε}) time (for all ε > 0) so that, with high probability, all subsequent batches of log n independent set queries can be answered deterministically in O(n²(log log n)²/(log n)^{5/4}) time. When w ≥ log n, w queries can be answered in O(n²(log log n)²/(w(log n)^{7/6})) time. In addition to its nice applications, this problem is interesting in that it is not known how to do better than O(n²) using “algebraic” methods.
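The word-parallelism factor w in these bounds can be seen in a few lines: packing each row of B into a machine word lets one OR handle w output columns at once. The sketch below uses Python's arbitrary-precision integers as bitsets; it captures only the n³/w saving, not the table-lookup log n factor of the Four Russians method, and none of the regularity-based improvements of the paper.

```python
def bmm_bitset(a, b):
    """Boolean matrix product of n x n 0/1 matrices a and b. Each row
    of b is packed into one integer, so ORing a whole row into the
    accumulator processes all n output columns in O(n/w) word
    operations instead of n single-bit ones.
    """
    n = len(a)
    rows_b = [sum(bit << j for j, bit in enumerate(row)) for row in b]
    out = []
    for i in range(n):
        acc = 0
        for k in range(n):
            if a[i][k]:
                acc |= rows_b[k]   # one word-parallel OR per set bit
        out.append([(acc >> j) & 1 for j in range(n)])
    return out
```

Any subcubic combinatorial BMM of this kind transfers directly to the applications named above, e.g. the CFG-parsing connection of the Lee paper in this list.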
Breaking the Coppersmith-Winograd barrier. Unpublished manuscript
, 2011
Abstract

Cited by 11 (0 self)
We develop new tools for analyzing matrix multiplication constructions similar to the Coppersmith-Winograd construction, and obtain a new improved bound of ω < 2.3727.