Results 1–6 of 6
Complexity Metrics in an Incremental Right-corner Parser
Abstract

Cited by 4 (2 self)
Hierarchical HMM (HHMM) parsers make promising cognitive models: while they use a bounded model of working memory and pursue incremental hypotheses in parallel, they still achieve parsing accuracies competitive with chart-based techniques. This paper aims to validate that a right-corner HHMM parser is also able to produce complexity metrics, which quantify a reader's incremental difficulty in understanding a sentence. Besides defining standard metrics in the HHMM framework, a new metric, embedding difference, is also proposed, which tests the hypothesis that HHMM store elements represent syntactic working memory. Results show that HHMM surprisal outperforms all other evaluated metrics in predicting reading times, and that embedding difference makes a significant, independent contribution.
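Surprisal, the metric found most predictive above, is defined the same way regardless of the underlying model: the surprisal of word t is −log2 P(w_t | w_1..w_{t−1}). A minimal sketch using a toy bigram model in place of the HHMM parser's prefix probabilities (the function name and corpus are illustrative, not from the paper):

```python
import math
from collections import Counter

def bigram_surprisal(corpus_sentences, sentence):
    """Per-word surprisal, -log2 P(w_t | w_{t-1}), from bigram MLE counts.

    A toy stand-in for a parser's prefix probabilities: any model that
    yields P(w_t | w_1..w_{t-1}) defines surprisal in the same way.
    Assumes every bigram in `sentence` was seen in the corpus (no smoothing).
    """
    bigrams, unigrams = Counter(), Counter()
    for sent in corpus_sentences:
        tokens = ["<s>"] + sent                   # sentence-initial context
        for prev, word in zip(tokens, tokens[1:]):
            bigrams[(prev, word)] += 1
            unigrams[prev] += 1
    surprisals = []
    tokens = ["<s>"] + sentence
    for prev, word in zip(tokens, tokens[1:]):
        p = bigrams[(prev, word)] / unigrams[prev]  # MLE conditional probability
        surprisals.append(-math.log2(p))            # bits of surprisal
    return surprisals
```

A fully predictable word receives 0 bits; each halving of the conditional probability adds one bit, which is what makes per-word surprisal a natural regressor against reading times.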
Incremental Parsing in Bounded Memory
Abstract
This tutorial will describe the use of a factored probabilistic sequence model for parsing speech and text using a bounded store of three to four incomplete constituents over time, in line with recent estimates of human short-term working memory capacity. This formulation uses a grammar transform to minimize memory usage during parsing. Incremental operations on incomplete constituents in this transformed representation then define an extended domain of locality similar to those defined in mildly context-sensitive grammar formalisms, which can similarly be used to process long-distance and crossed-and-nested dependencies.
Effects of Filler-gap Dependencies on Working Memory Requirements for Parsing
Abstract
Corpus studies by Schuler, AbdelRahman, Miller, and Schwartz (2010) appear to support a model of comprehension taking place in a general-purpose working memory store, by providing an existence proof that a simple probabilistic sequence model over stores of up to four syntactically contiguous memory elements has the capacity to reconstruct phrase structure trees for over 99.9% of the sentences in the Penn Treebank Wall Street Journal corpus (Marcus, Santorini, & Marcinkiewicz, 1993), in line with capacity estimates for general-purpose working memory, e.g. by Cowan (2001). But capacity predictions of this simple structure-based model ignore non-structural dependencies, such as long-distance filler-gap dependencies, that may place additional demands on working memory. Distinguishing unattached gap fillers from open attachment sites in syntactically contiguous memory elements requires this contiguity constraint to be strengthened to a constraint that working memory elements be semantically contiguous. This paper presents corpus results showing that this stricter semantic contiguity constraint still predicts working memory requirements in line with capacity estimates such as that of Cowan (2001).
Ling 5801, Lecture Notes 11: From CFG Recognition to Probabilistic Parsing. 1. Generalization of algorithms using semiring substitution
Abstract
Operations in an algorithm can be replaced, keeping the same structure. For 'dynamic programming' algorithms, this can be done using semiring substitution. A semiring is a tuple ⟨V, ⊕, ⊗, v⊥, v⊤⟩ such that:
• V is a domain of values
• ⊕ is a function V × V → V such that:
– ⊕ is associative (parentheses in sequences of operands don't matter): v ⊕ (v′ ⊕ v′′) = (v ⊕ v′) ⊕ v′′
– ⊕ is commutative (order of operands doesn't matter): v ⊕ v′ = v′ ⊕ v
• ⊗ is a function V × V → V such that:
– ⊗ is associative (parentheses in sequences of operands don't matter): v ⊗ (v′ ⊗ v′′) = (v ⊗ v′) ⊗ v′′
– ⊗ distributes over ⊕ (that is, ⊗ with common operands can jump outside ⊕): (v ⊗ v′) ⊕ (v ⊗ v′′) = v ⊗ (v′ ⊕ v′′), and (v′ ⊗ v) ⊕ (v′′ ⊗ v) = (v′ ⊕ v′′) ⊗ v; or, in the case of limit operators (which we often use in dynamic programming): ⊕_{v′} (v ⊗ v′) = v ⊗ ⊕_{v′} v′. E.g. products involving variables not bound by sums may move outside the sum 'loop': ∑_{p′} p · p′ = p · ∑_{p′} p′ (5·1 + 5·2 = 5·(1+2), a.k.a. ∑_{p′∈{1,2}} 5·p′ = 5 · ∑_{p′∈{1,2}} p′); or conjuncts may move outside a disjunct 'loop': ∨_{b′} (b ∧ b′) = b ∧ ∨_{b′} b′
• v⊥ is an identity element for ⊕ and an annihilator for ⊗ (like 0 in the reals):
– v⊥ ∈ V
– v ⊕ v⊥ = v and v⊥ ⊕ v = v
– v ⊗ v⊥ = v⊥ and v⊥ ⊗ v = v⊥
• v⊤ is an identity element for ⊗ (like 1 in the reals):
– v⊤ ∈ V
– v ⊗ v⊤ = v and v⊤ ⊗ v = v
A parser can be generalized by using different semirings for the operators ⊕, ⊗ and the initial values in V:
• boolean semiring ⟨{TRUE, FALSE}, ∨, ∧, FALSE, TRUE⟩: get the original recognizer
• state sequences ⟨Q*, ∪, ∘, q⊥, ε⟩: get the set of possible trees/sequences
• forward/inside ⟨ℝ[0,∞), +, ·, 0, 1⟩: get the probability
• tropical semiring ⟨ℝ(−∞,0] ∪ {−∞}, max, +, −∞, 0⟩: get the best tree/sequence probability
• state sequence × tropical: get the best tree/sequence and its probability
2. Generalized parsing: any time you want to calculate something of the form
f(c, x_i..x_j) = ⊕_{τ rooted in ⟨c,i,j⟩} ⊗_{⟨c′,i′,j′⟩ ∈ τ} { v⊤ if i′ = j′ and c′ = x_{i′}; v⊥ if i′ = j′ and c′ ≠ x_{i′}; ⊕_{k′,d′,e′ s.t. ⟨d′,i′,k′⟩, ⟨e′,k′+1,j′⟩ ∈ τ} R(c′ → d′ e′) if i′ < j′ }
you can apply the generalized distributive axiom (pull the meta-conjunct out of the meta-disjunction) to obtain the recursion:
f(c, x_i..x_j) = { v⊤ if i = j and c = x_i; v⊥ if i = j and c ≠ x_i; ⊕_{k,d,e} R(c → d e) ⊗ f(d, x_i..x_k) ⊗ f(e, x_{k+1}..x_j) if i < j }
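The substitution idea carries over directly to code: one chart loop, with the semiring passed in as a parameter. A minimal sketch of a generalized CKY over binary rules (the names `Semiring` and `general_cky` and the toy grammar are illustrative, not from the lecture notes):

```python
from collections import namedtuple

# plus = (+), times = (x), zero = bottom, one = top identity
Semiring = namedtuple("Semiring", "plus times zero one")

BOOLEAN = Semiring(lambda a, b: a or b, lambda a, b: a and b, False, True)  # recognizer
INSIDE = Semiring(lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)         # inside probability

def general_cky(words, lexical, rules, sr, start="S"):
    """Compute f(start, x_0..x_{n-1}) with the distributed recursion.

    lexical: {(nonterminal, word): value}; rules: {(parent, left, right): value}.
    All values live in the semiring sr.
    """
    n = len(words)
    chart = [[{} for _ in range(n)] for _ in range(n)]  # chart[i][j][c] = f(c, x_i..x_j)
    for i, w in enumerate(words):                       # base case i = j
        for (c, word), v in lexical.items():
            if word == w:
                chart[i][i][c] = sr.plus(chart[i][i].get(c, sr.zero), v)
    for span in range(2, n + 1):                        # recursive case i < j
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):                       # split point
                for (c, d, e), v in rules.items():      # rule c -> d e
                    score = sr.times(v, sr.times(chart[i][k].get(d, sr.zero),
                                                 chart[k + 1][j].get(e, sr.zero)))
                    chart[i][j][c] = sr.plus(chart[i][j].get(c, sr.zero), score)
    return chart[0][n - 1].get(start, sr.zero)
```

Passing BOOLEAN yields the original recognizer; passing INSIDE yields the sentence probability, with no change to the chart loop itself, which is exactly the point of the semiring abstraction.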
Connectionist-Inspired Incremental PCFG Parsing, Marten van Schijndel, The Ohio State University
Abstract
Probabilistic context-free grammars (PCFGs) are a popular cognitive model of syntax (Jurafsky, 1996). These can be formulated to be sensitive to human working memory constraints by application of a right-corner transform (Schuler, 2009). One side-effect of the transform is that it guarantees at most a single expansion (push) and at most a single reduction (pop) during a syntactic parse. The primary finding of this paper is that this property of right-corner parsing can be exploited to obtain a dramatic reduction in the number of random variables in a probabilistic sequence model parser. This yields a simpler structure that more closely resembles existing simple recurrent network models of sentence comprehension.
A Cross-language Study on Automatic Speech Disfluency Detection
Abstract
We investigate two systems for automatic disfluency detection on English and Mandarin conversational speech data. The first system combines various lexical and prosodic features in a Conditional Random Field model for detecting edit disfluencies. The second system combines acoustic and language model scores for detecting filled pauses through constrained speech recognition. We compare the contributions of different knowledge sources to detection performance between these two languages.