Results 1 - 10 of 64,571

An Empirical Study of Smoothing Techniques for Language Modeling

by Stanley F. Chen, 1998
"... We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Br ..."
Abstract - Cited by 1224 (21 self)
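
A rough sketch of one of the compared techniques, Jelinek-Mercer interpolation, is below; it is not the paper's implementation, the corpus is a toy stand-in, and the interpolation weight lam is illustrative rather than tuned.

from collections import Counter

# Toy corpus; the paper's experiments use large corpora such as Brown and WSJ.
tokens = "the cat sat on the mat the cat ate".split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
total = sum(unigrams.values())

def jelinek_mercer(prev, word, lam=0.7):
    """Interpolate the bigram MLE with the unigram MLE (Jelinek-Mercer smoothing)."""
    p_bigram = bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
    p_unigram = unigrams[word] / total
    return lam * p_bigram + (1 - lam) * p_unigram

print(jelinek_mercer("the", "cat"))  # seen bigram
print(jelinek_mercer("the", "ate"))  # unseen bigram still receives probability mass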

Determinants of Economic Growth: A Cross-Country Empirical Study

by Robert J. Barro, 1996
"... Empirical findings for a panel of around 100 countries from 1960 to 1990 strongly support the general notion of conditional convergence. For a given starting level of real per capita GDP, the growth rate is enhanced by higher initial schooling and life expectancy, lower fertility, lower government c ..."
Abstract - Cited by 892 (12 self)

Loopy belief propagation for approximate inference: An empirical study

by Kevin P. Murphy, Yair Weiss, Michael I. Jordan - Proceedings of Uncertainty in AI, 1999
"... Abstract Recently, researchers have demonstrated that "loopy belief propagation" -the use of Pearl's polytree algorithm in a Bayesian network with loops -can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performanc ..."
Abstract - Cited by 676 (15 self)
to work well. In this paper we investigate loopy propagation empirically under a wider range of conditions. Is there something special about the error-correcting code setting, or does loopy propagation work as an approximation scheme for a wider range of networks? ...
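
A rough sketch of the message-passing scheme being studied, sum-product belief propagation run on a graph with a loop, is below; it is not the authors' code, and the graph, potentials, and sweep count are made up for illustration.

import numpy as np

# Three binary variables connected in a cycle (a "loopy" graph): 0 - 1 - 2 - 0.
edges = [(0, 1), (1, 2), (2, 0)]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}

phi = {i: np.array([1.0, 2.0]) for i in range(3)}   # unary potentials (illustrative)
psi = np.array([[2.0, 1.0], [1.0, 2.0]])            # pairwise potential favouring agreement

# One message per directed edge, initialised uniformly.
msgs = {(i, j): np.ones(2) / 2 for a, b in edges for (i, j) in [(a, b), (b, a)]}

for sweep in range(50):                              # fixed sweeps; on loopy graphs this may or may not converge
    new = {}
    for (i, j) in msgs:
        incoming = np.prod([msgs[(k, i)] for k in nbrs[i] if k != j], axis=0)
        m = psi.T @ (phi[i] * incoming)              # sum over states of x_i
        new[(i, j)] = m / m.sum()
    msgs = new

for i in range(3):
    b = phi[i] * np.prod([msgs[(k, i)] for k in nbrs[i]], axis=0)
    print(i, b / b.sum())                            # approximate marginals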

An extensive empirical study of feature selection metrics for text classification

by George Forman - J. of Machine Learning Research, 2003
"... Machine learning for text classification is the cornerstone of document categorization, news filtering, document routing, and personalization. In text domains, effective feature selection is essential to make the learning task efficient and more accurate. This paper presents an empirical comparison ..."
Abstract - Cited by 496 (15 self)
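
The paper compares a dozen feature-scoring metrics; as a sketch of the general idea rather than the paper's setup, the snippet below ranks terms with one common metric, chi-squared, using scikit-learn on a tiny made-up labelled corpus.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import chi2

docs = ["cheap meds online", "meeting agenda attached",
        "win cash now", "project status report"]
labels = np.array([1, 0, 1, 0])                 # 1 = spam, 0 = ham (toy labels)

vec = CountVectorizer(binary=True).fit(docs)
scores, _ = chi2(vec.transform(docs), labels)

# Highest-scoring terms are the ones a classifier would keep after selection.
for term, score in sorted(zip(vec.get_feature_names_out(), scores), key=lambda t: -t[1])[:5]:
    print(f"{term}\t{score:.3f}")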

An Empirical Study of Operating System Errors

by Andy Chou, Junfeng Yang, Benjamin Chelf, Seth Hallem, Dawson Engler, 2001
"... We present a study of operating system errors found by automatic, static, compiler analysis applied to the Linux and OpenBSD kernels. Our approach differs from previ-ous studies that consider errors found by manual inspec-tion of logs, testing, and surveys because static analysis is applied uniforml ..."
Abstract - Cited by 363 (9 self)
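
The checkers in the paper are compiler extensions applied to kernel source; the toy script below is only a stand-in for the idea, flagging one of the rule violations such studies count (returning while a lock is held) with a crude line-level scan.

import re
import sys

def check_lock_balance(path):
    """Toy checker: warn when a function returns while a spin lock appears to be held.
    A line-level approximation, nothing like the interprocedural compiler analysis in the paper."""
    held_at = None
    for lineno, line in enumerate(open(path, errors="replace"), 1):
        if re.search(r"\bspin_lock\s*\(", line):
            held_at = lineno
        elif re.search(r"\bspin_unlock\s*\(", line):
            held_at = None
        elif re.search(r"\breturn\b", line) and held_at is not None:
            print(f"{path}:{lineno}: return with lock held (taken at line {held_at})")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        check_lock_balance(p)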

The Strength of Weak Ties: A Network Theory Revisited

by Mark Granovetter - Sociological Theory, 1982
"... In this chapter I review empirical studies directly testing the ..."
Abstract - Cited by 903 (2 self)

Popular ensemble methods: an empirical study

by David Opitz, Richard Maclin - Journal of Artificial Intelligence Research, 1999
"... An ensemble consists of a set of individually trained classifiers (such as neural networks or decision trees) whose predictions are combined when classifying novel instances. Previous research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. Baggi ..."
Abstract - Cited by 296 (4 self)
... Analysis indicates that the performance of the Boosting methods is dependent on the characteristics of the data set being examined. In fact, further results show that Boosting ensembles may overfit noisy data sets, thus decreasing their performance. Finally, consistent with previous studies, our work
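
As a sketch of the Bagging side of the comparison (not the paper's experimental setup), the snippet below bootstrap-samples the training set, fits one decision tree per replicate, and combines their votes; the dataset and ensemble size are arbitrary choices.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
trees = []
for _ in range(25):                                  # 25 bootstrap replicates (illustrative)
    idx = rng.integers(0, len(X_tr), len(X_tr))      # sample training rows with replacement
    trees.append(DecisionTreeClassifier().fit(X_tr[idx], y_tr[idx]))

votes = np.mean([t.predict(X_te) for t in trees], axis=0)
print("single tree:", DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr).score(X_te, y_te))
print("bagged vote:", np.mean((votes > 0.5) == y_te))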

An Empirical Study of the Reliability of UNIX Utilities

by Barton P. Miller, Lars Fredriksen, Bryan So - In Proceedings of the Workshop on Parallel and Distributed Debugging, 1990
"... This report describes these tests and an analysis of the program bugs that caused the crashes. Content Indicators D.2.5 (Testing and Debugging), D.4.9 (Programs and Utilities), General term: reliability, UNIX. #################################### ..."
Abstract - Cited by 292 (5 self)
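
In the spirit of the random-input ("fuzz") testing the report describes, though not the authors' original harness, the sketch below pipes random bytes into a few utilities and reports any process killed by a signal; the target utilities are examples only.

import random
import signal
import subprocess

def fuzz(cmd, trials=100, max_len=10_000):
    """Feed random bytes to cmd's stdin and report crashes (deaths by signal)."""
    rng = random.Random(0)
    for i in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, max_len)))
        proc = subprocess.run(cmd, input=data,
                              stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
        if proc.returncode < 0:                        # negative return code = killed by a signal
            sig = signal.Signals(-proc.returncode).name
            print(f"{' '.join(cmd)}: trial {i} crashed with {sig}")

for utility in (["sort"], ["uniq"], ["head", "-20"]):  # example targets only
    fuzz(utility)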

An empirical study of learning speed in back-propagation networks

by Scott E. Fahlman, 1988
"... Most connectionist or "neural network" learning systems use some form of the back-propagation algorithm. However, back-propagation learning is too slow for many applications, and it scales up poorly as tasks become larger and more complex. The factors governing learning speed are poorly un ..."
Abstract - Cited by 278 (0 self)
understood. I have begun a systematic, empirical study of learning speed in backprop-like algorithms, measured against a variety of benchmark problems. The goal is twofold: to develop faster learning algorithms and to contribute to the development of a methodology that will be of value in future studies
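
As a minimal illustration of the kind of network whose learning speed is being measured (not Fahlman's benchmarks or his quickprop variant), the sketch below trains a tiny two-layer sigmoid network on XOR with plain back-propagation; the learning rate, hidden size, and epoch count are arbitrary.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))
lr = 0.5                                                      # learning rate (illustrative)

for epoch in range(10000):
    h = sigmoid(X @ W1 + b1)                                  # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)                       # output-layer delta
    d_h = (d_out @ W2.T) * h * (1 - h)                        # hidden-layer delta (backpropagated error)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))   # should approach [0, 1, 1, 0]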

An empirical comparison of voting classification algorithms: Bagging, boosting, and variants.

by Eric Bauer, Ron Kohavi - Machine Learning, 1999
"... Abstract. Methods for voting classification algorithms, such as Bagging and AdaBoost, have been shown to be very successful in improving the accuracy of certain classifiers for artificial and real-world datasets. We review these algorithms and describe a large empirical study comparing several vari ..."
Abstract - Cited by 707 (2 self)
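
As a sketch of the AdaBoost side of the comparison (not the paper's experiments), the loop below reweights training examples after each round, as in discrete AdaBoost, using decision stumps from scikit-learn on a stand-in dataset.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
y = np.where(y == 0, -1, 1)                                 # discrete AdaBoost uses {-1, +1} labels
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

w = np.full(len(X_tr), 1 / len(X_tr))                       # start with uniform example weights
stumps, alphas = [], []
for _ in range(50):                                         # 50 boosting rounds (illustrative)
    stump = DecisionTreeClassifier(max_depth=1).fit(X_tr, y_tr, sample_weight=w)
    pred = stump.predict(X_tr)
    err = w[pred != y_tr].sum()
    alpha = 0.5 * np.log((1 - err + 1e-10) / (err + 1e-10)) # weight of this weak learner
    w *= np.exp(-alpha * y_tr * pred)                       # up-weight misclassified examples
    w /= w.sum()
    stumps.append(stump); alphas.append(alpha)

score = sum(a * s.predict(X_te) for a, s in zip(alphas, stumps))
print("boosted accuracy:", np.mean(np.sign(score) == y_te))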