Results 1 - 7 of 7
A heartbeat mechanism and its application in Gigascope
 In VLDB
, 2005
Abstract

Cited by 17 (3 self)
Data stream management systems often rely on ordering properties of tuple attributes in order to implement non-blocking operators. However, query operators that work with multiple streams, such as stream merge or join, can often still block if one of the input streams is very slow or bursty. In principle, punctuation and heartbeat mechanisms have been proposed to unblock streaming operators. In practice, it is a challenge to incorporate such mechanisms into a high-performance stream management system that is operational in an industrial application. In this paper, we introduce a system for punctuation-carrying heartbeat generation that we developed for Gigascope, a high-performance streaming database for network monitoring that is operationally used within AT&T's IP backbone. We show how heartbeats can be regularly generated by low-level nodes in query execution plans and propagated upward, unblocking all streaming operators on their way. Additionally, our heartbeat mechanism can be used for other applications in distributed settings, such as detecting node failures, performance monitoring, and query optimization. A performance evaluation using live data feeds shows that our system is capable of working at multi-Gigabit line speeds in a live, industrial deployment and can significantly decrease query memory utilization.
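The unblocking idea in this abstract can be illustrated with a toy merge operator. This is a hypothetical sketch, not Gigascope's actual implementation: each stream's latest tuple or heartbeat timestamp serves as a per-stream watermark, and buffered tuples with timestamps at or below the minimum watermark across all streams are safe to emit in order.

```python
import heapq

def merge_streams(events):
    """Order-preserving merge of two timestamped streams (toy sketch).

    `events` is a list of (stream_id, kind, ts) records in arrival order,
    where kind is "tuple" or "heartbeat". A heartbeat is a punctuation
    promising its stream will send no tuple with a smaller timestamp, so
    tuples buffered from the other stream up to the minimum watermark can
    be emitted even while one input is idle.
    """
    watermark = {0: -1, 1: -1}   # largest timestamp promised per stream
    buffer = []                  # min-heap of (ts, stream_id) pending tuples
    out = []
    for sid, kind, ts in events:
        if kind == "tuple":
            heapq.heappush(buffer, (ts, sid))
        watermark[sid] = max(watermark[sid], ts)
        low = min(watermark.values())
        # Emit every buffered tuple no later than the global low watermark.
        while buffer and buffer[0][0] <= low:
            out.append(heapq.heappop(buffer))
    return out
```

With no traffic on stream 1 the merge holds everything back; a single heartbeat from the idle stream releases the buffered tuples, which is exactly the memory-reduction effect the abstract reports.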
The learnability of abstract syntactic principles
Abstract

Cited by 4 (0 self)
Children acquiring language infer the correct form of syntactic constructions for which they appear to have little or no direct evidence, avoiding simple but incorrect generalizations that would be consistent with the data they receive. These generalizations must be guided by some inductive bias – some abstract knowledge – that leads them to prefer the correct hypotheses even in the absence of directly supporting evidence. What form do these inductive constraints take? It is often argued or assumed that they reflect innately specified knowledge of language. A classic example of such an argument moves from the phenomenon of auxiliary fronting in English interrogatives to the conclusion that children must innately know that syntactic rules are defined over hierarchical phrase structures rather than linear sequences of words (e.g., Chomsky 1965, 1971, 1980; Crain & Nakayama, 1987). Here we use a Bayesian framework for grammar induction to argue for a different possibility. We show that, given typical child-directed speech and certain innate domain-general capacities, an unbiased ideal learner could recognize the hierarchical phrase structure of language without having this knowledge innately specified as part of the language faculty. We discuss the implications of this analysis for accounts of human language acquisition.
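The model-comparison logic described in this abstract can be summarized in standard Bayesian notation (the symbols here are illustrative, not taken from the paper): for data $D$ of child-directed speech and a candidate grammar $G$,

```latex
P(G \mid D) \;\propto\; P(D \mid G)\, P(G),
```

so an ideal learner prefers a hierarchical grammar $G_h$ over a linear one $G_l$ whenever $P(D \mid G_h)\,P(G_h) > P(D \mid G_l)\,P(G_l)$; the prior penalizes grammar complexity while the likelihood rewards fit to the observed speech, with no language-specific bias built into either term.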
A new probability inequality using typical moments and concentration results
 Proceedings of the 50th Annual Symposium on Foundations of Computer Science
, 2009
Abstract

Cited by 2 (1 self)
It is of wide interest to prove upper bounds on the probability that the sum of random variables X1, X2, ..., Xn deviates much from its mean. If the Xi are independent real-valued mean-zero random variables, then we know from the Central Limit Theorem (CLT) that their sum (in the limit) has Gaussian ...
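The Gaussian limit this abstract alludes to can be stated concretely (a standard formulation, with notation assumed here rather than taken from the paper): writing $S_n = X_1 + \cdots + X_n$ with $\mathrm{Var}(X_i) = \sigma_i^2$ and $\sigma^2 = \sum_i \sigma_i^2$,

```latex
\frac{S_n}{\sigma} \;\xrightarrow{d}\; \mathcal{N}(0, 1),
\qquad \text{so in the limit } \;
\Pr[S_n \ge t\sigma] \;\to\; 1 - \Phi(t) \;\le\; e^{-t^2/2}
\quad (t \ge 0),
```

i.e., the tail probability of the normalized sum decays at a Gaussian rate, which is the benchmark that finite-$n$ concentration inequalities of this kind aim to match.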
Claire Kenyon's research statement
Abstract
this paper, by providing a detailed analysis of a simple model, will help to clarify a diverse area
A Probability inequality using typical moments and Concentration Results
, 809
Abstract
It is of wide interest to prove upper bounds on the probability that the sum of random variables X1, X2, ..., Xn deviates much from its mean. If the Xi are independent real-valued mean-zero random variables, then we know from the Central Limit Theorem (CLT) that their sum (in the limit) has Gaussian ...
The learnability of abstract syntactic principles
Abstract
Children acquiring language infer the correct form of syntactic constructions for which they appear to have little or no direct evidence, avoiding simple but incorrect generalizations that would be consistent with the data they receive. These generalizations must be guided by some inductive bias – some abstract knowledge – that leads them to prefer the correct hypotheses even in the absence of directly supporting evidence. What form do these inductive constraints take? It is often argued or assumed that they reflect innately specified knowledge of language. A classic example of such an argument moves from the phenomenon of auxiliary fronting in English interrogatives to the conclusion that children must innately know that syntactic rules are defined over hierarchical phrase structures rather than linear sequences of words (e.g., Chomsky 1965, 1971, 1980; Crain & Nakayama, 1987). Here we use a Bayesian framework for grammar induction to argue for a different possibility. We show that, given typical child-directed speech and certain innate domain-general capacities, an unbiased ideal learner could recognize the hierarchical phrase structure of language without having this knowledge innately specified as part of the language faculty. We discuss the implications of this analysis for accounts of human language acquisition.