CiteSeerX

Results 1 - 10 of 119,746

Linguistic Complexity: Locality of Syntactic Dependencies

by Edward Gibson - COGNITION, 1998
"... This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory -- the Syntactic Prediction Locality Theory (SPLT) -- has two components: an integration cost component and a component for the memory cost associa ..."
Cited by 486 (31 self)

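As a rough illustration of the two components (and nothing more), the toy below scores a sentence with an invented integration cost that grows with dependency distance and an invented memory cost for predicted-but-unseen heads; the parse and all unit costs are hypothetical, not Gibson's actual metric.

words = "the reporter who the senator attacked admitted the error".split()

# (head index, dependent index) pairs from a hand-built dependency parse
# of this object-relative sentence.
arcs = [(1, 0), (6, 1), (5, 2), (4, 3), (5, 4), (1, 5), (8, 7), (6, 8)]

# Integration cost: each attachment is charged its linear distance, a crude
# stand-in for the material crossed when the dependency is completed.
integration = sum(abs(head - dep) for head, dep in arcs)

# Memory cost: one unit per syntactic head predicted but not yet seen,
# counted by hand at each word position.
open_heads = [1, 1, 2, 2, 2, 1, 1, 1, 0]
memory = sum(open_heads)

print(f"integration={integration} memory={memory} total={integration + memory}")
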
Accurate Unlexicalized Parsing

by Dan Klein, Christopher D. Manning - IN PROCEEDINGS OF THE 41ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, 2003
"... We demonstrate that an unlexicalized PCFG can parse much more accurately than previously shown, by making use of simple, linguistically motivated state splits, which break down false independence assumptions latent in a vanilla treebank grammar. Indeed, its ..."
Cited by 1026 (70 self)

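One of the simplest state splits in this family is parent annotation: each nonterminal is refined with the category above it, so that an NP under S (typically a subject) gets a different distribution from an NP under VP (typically an object). A minimal sketch of that transform on a bracketed tree; the tiny tree and the ^ labelling convention are illustrative, not the paper's exact annotation scheme.

def parent_annotate(tree, parent=None):
    # A tree is (label, child, ...) for internal nodes and a plain string
    # for words; each nonterminal is split by the label of its parent.
    if isinstance(tree, str):
        return tree
    label, *children = tree
    split = f"{label}^{parent}" if parent else label
    return (split, *(parent_annotate(child, label) for child in children))

tree = ("S", ("NP", "they"), ("VP", ("V", "saw"), ("NP", "stars")))
print(parent_annotate(tree))
# ('S', ('NP^S', 'they'), ('VP^S', ('V^VP', 'saw'), ('NP^VP', 'stars')))
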
Head-Driven Statistical Models for Natural Language Parsing

by Michael Collins, 1999
"... ..."
Abstract - Cited by 1145 (16 self) - Add to MetaCart
Abstract not found

Automatic Word Sense Discrimination

by Hinrich Schütze - Computational Linguistics, 1998
"... This paper presents context-group discrimination, a disambiguation algorithm based on clustering. Senses are interpreted as groups (or clusters) of similar contexts of the ambiguous word. Words, contexts, and senses are represented in Word Space, a high-dimensional, real-valued space in which closen ..."
Cited by 530 (1 self)

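The core idea can be sketched in a few lines: represent every occurrence of an ambiguous word by a vector built from its surrounding words, cluster those vectors, and read each cluster as one induced sense. The toy corpus, first-order bag-of-words vectors, and plain k-means below are simplifications; the paper itself clusters second-order co-occurrence vectors reduced with SVD.

import numpy as np

# Four contexts of the ambiguous word "bank": two financial, two riverine.
contexts = [
    "deposit money in the bank account",
    "withdraw money from the bank account",
    "the grassy bank of the river stream",
    "fish near the river bank stream",
]

stop = {"the", "of", "in", "from", "near", "bank"}
vocab = sorted({w for c in contexts for w in c.split()} - stop)
X = np.array([[c.split().count(w) for w in vocab] for c in contexts], float)

def kmeans(X, k=2, iters=10):
    centers = X[:k].copy()                       # deterministic init for the demo
    for _ in range(iters):
        dists = ((X[:, None] - centers) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

print(kmeans(X))   # [1 1 0 0]: one cluster per induced sense
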
Greedy Function Approximation: A Gradient Boosting Machine

by Jerome H. Friedman - Annals of Statistics, 2000
"... Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed for additi ..."
Abstract - Cited by 951 (12 self) - Add to MetaCart
Function approximation is viewed from the perspective of numerical optimization in function space, rather than parameter space. A connection is made between stagewise additive expansions and steepest{descent minimization. A general gradient{descent \boosting" paradigm is developed for additive expansions based on any tting criterion. Specic algorithms are presented for least{squares, least{absolute{deviation, and Huber{M loss functions for regression, and multi{class logistic likelihood for classication. Special enhancements are derived for the particular case where the individual additive components are regression trees, and tools for interpreting such \TreeBoost" models are presented. Gradient boosting of regression trees produces competitive, highly robust, interpretable procedures for both regression and classication, especially appropriate for mining less than clean data. Connections between this approach and the boosting methods of Freund and Shapire 1996, and Frie...

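For the least-squares case the recipe is especially transparent: the negative gradient of squared error at the current fit is just the residual, so each stage fits a small regression tree to the residuals and adds a shrunken copy to the ensemble. A minimal sketch along those lines, with scikit-learn's DecisionTreeRegressor as the base learner and arbitrary toy data and hyperparameters:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 6, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=200)

# Least-squares gradient boosting: each stage fits a shallow tree to the
# residuals (the negative gradient of squared error) and the ensemble takes
# a small, shrunken step in that direction.
n_stages, shrinkage = 100, 0.1
f0 = y.mean()                                  # F_0: constant initial model
F = np.full_like(y, f0)
trees = []
for _ in range(n_stages):
    residuals = y - F                          # negative gradient at the current fit
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residuals)
    F += shrinkage * tree.predict(X)
    trees.append(tree)

def predict(X_new):
    return f0 + shrinkage * sum(t.predict(X_new) for t in trees)

print("train MSE:", round(float(np.mean((y - F) ** 2)), 4))
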
Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora

by Dekai Wu, 1997
"... ..."
Abstract - Cited by 562 (33 self) - Add to MetaCart
Abstract not found

Understanding Normal and Impaired Word Reading: Computational Principles in Quasi-Regular Domains

by David C. Plaut, James L. McClelland, Mark S. Seidenberg, Karalyn Patterson - PSYCHOLOGICAL REVIEW, 1996
"... We develop a connectionist approach to processing in quasi-regular domains, as exemplified by English word reading. A consideration of the shortcomings of a previous implementation (Seidenberg & McClelland, 1989, Psych. Rev.) in reading nonwords leads to the development of orthographic and phono ..."
Cited by 583 (94 self)

Sparse Bayesian Learning and the Relevance Vector Machine

by Michael E. Tipping, 2001
"... This paper introduces a general Bayesian framework for obtaining sparse solutions to regression and classication tasks utilising models linear in the parameters. Although this framework is fully general, we illustrate our approach with a particular specialisation that we denote the `relevance vec ..."
Abstract - Cited by 958 (5 self) - Add to MetaCart
vector machine' (RVM), a model of identical functional form to the popular and state-of-the-art `support vector machine' (SVM). We demonstrate that by exploiting a probabilistic Bayesian learning framework, we can derive accurate prediction models which typically utilise dramatically fewer

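In its simplest form the framework puts an independent zero-mean Gaussian prior with its own precision alpha_i on every weight and re-estimates those precisions from the data; weights whose precision diverges are pruned, which is where the sparsity comes from. A rough numpy sketch of that re-estimation loop for kernel regression; the toy data, kernel width, fixed noise level, and iteration count are arbitrary choices here, and the full algorithm also re-estimates the noise:

import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, size=60)
y = np.sinc(x) + rng.normal(scale=0.1, size=60)

# Gaussian basis function centred on every training point (a kernel design
# matrix), with one ARD prior precision alpha_i per weight.
Phi = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)
alpha = np.ones(60)   # per-weight prior precisions
beta = 100.0          # noise precision, held fixed for simplicity

for _ in range(100):
    # Posterior over weights given the current hyperparameters.
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ y
    # Re-estimate each alpha_i; a diverging alpha_i prunes that weight.
    gamma = 1.0 - alpha * np.diag(Sigma)       # how well-determined each weight is
    alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-6, 1e12)

kept = int(np.sum(alpha < 1e6))
print(f"{kept} of {len(alpha)} basis functions remain relevant")
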
Evolving Neural Networks through Augmenting Topologies

by Kenneth O. Stanley, Risto Miikkulainen - Evolutionary Computation, 2002
"... An important question in neuroevolution is how to gain an advantage from evolving neural network topologies along with weights. We present a method, NeuroEvolution of Augmenting Topologies (NEAT), which outperforms the best fixed-topology method on a challenging benchmark reinforcement learning task ..."
Cited by 524 (113 self)

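Central to NEAT is its genome encoding: every connection gene carries a global innovation number recording when that structural change first arose, which lets crossover line up genes from topologically different parents. A minimal sketch of the encoding and of the add-connection mutation; the field names and mutation details are illustrative rather than the reference implementation:

import random
from dataclasses import dataclass, field

_innovations = {}  # (in_node, out_node) -> innovation number, shared by the population

def innovation_number(in_node, out_node):
    # The same structural change always gets the same historical marker,
    # so matching genes in two parents can be aligned during crossover.
    return _innovations.setdefault((in_node, out_node), len(_innovations) + 1)

@dataclass
class ConnectionGene:
    in_node: int
    out_node: int
    weight: float
    enabled: bool = True
    innovation: int = 0

@dataclass
class Genome:
    nodes: list
    connections: list = field(default_factory=list)

    def mutate_add_connection(self):
        # Pick two distinct nodes and wire them if not already connected.
        a, b = random.sample(self.nodes, 2)
        if any(c.in_node == a and c.out_node == b for c in self.connections):
            return
        self.connections.append(ConnectionGene(
            a, b, random.uniform(-1.0, 1.0),
            innovation=innovation_number(a, b)))

genome = Genome(nodes=[0, 1, 2])
genome.mutate_add_connection()
print(genome.connections)
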
The program dependence graph and its use in optimization

by Jeanne Ferrante, Karl J. Ottenstein, Joe D. Warren - ACM Transactions on Programming Languages and Systems, 1987
"... In this paper we present an intermediate program representation, called the program dependence graph (PDG), that makes explicit both the data and control dependence5 for each operation in a program. Data dependences have been used to represent only the relevant data flow relationships of a program. ..."
Abstract - Cited by 989 (3 self) - Add to MetaCart
computationally related parts of the program, a single walk of these dependences is sufficient to perform many optimizations. The PDG allows transformations such as vectorization, that previ-ously required special treatment of control dependence, to be performed in a manner that is uniform for both control
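
A minimal sketch of the structure itself: a graph whose nodes are statements and whose edges are labelled either as data dependences (a value defined at one node is used at another) or control dependences (one node decides whether another executes). The five-line program and hand-entered edges below are illustrative; a real construction derives data dependences from def-use chains and control dependences from the control flow graph.

from collections import defaultdict

# 1: a = read()
# 2: b = read()
# 3: if a > 0:
# 4:     c = a + b
# 5: print(c)

pdg = defaultdict(list)  # node -> [(successor, dependence kind)]

def add_dep(src, dst, kind):
    pdg[src].append((dst, kind))

add_dep(1, 3, "data")     # a, defined at 1, is used by the test at 3
add_dep(1, 4, "data")     # a is used at 4
add_dep(2, 4, "data")     # b is used at 4
add_dep(4, 5, "data")     # c is used at 5
add_dep(3, 4, "control")  # 4 executes only when the test at 3 is true

def backward_slice(node):
    # One walk over dependence edges gathers everything the node depends on,
    # the kind of traversal the abstract says suffices for many optimizations.
    preds = defaultdict(list)
    for src, outs in pdg.items():
        for dst, _ in outs:
            preds[dst].append(src)
    seen, stack = set(), [node]
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(preds[n])
    return sorted(seen)

print(backward_slice(5))  # [1, 2, 3, 4, 5]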