CiteSeerX

Results 1 - 10 of 1,245

Bayesian Network Classifiers

by Nir Friedman, Dan Geiger, Moises Goldszmidt , 1997
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly ..."
Cited by 796 (20 self)
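The independence assumption this abstract describes can be made concrete with a minimal sketch (an illustrative toy, not the paper's code; the discrete features, the Laplace smoothing, and the function names are my own assumptions):

```python
from collections import Counter, defaultdict
import math

# Naive Bayes for discrete features: P(c | x) ∝ P(c) · Π_i P(x_i | c).
# The per-feature factorization is exactly the "strong independence
# assumption" that the paper's augmented networks relax.
def train(examples):  # examples: list of (feature_tuple, label)
    class_counts = Counter(label for _, label in examples)
    feat_counts = defaultdict(Counter)  # (feature_index, label) -> value counts
    for feats, label in examples:
        for i, v in enumerate(feats):
            feat_counts[(i, label)][v] += 1
    return class_counts, feat_counts

def predict(model, feats):
    class_counts, feat_counts = model
    total = sum(class_counts.values())
    def log_score(label):
        s = math.log(class_counts[label] / total)  # log prior
        for i, v in enumerate(feats):
            counts = feat_counts[(i, label)]
            # Laplace smoothing so unseen values do not zero out the product
            s += math.log((counts[v] + 1) / (sum(counts.values()) + len(counts) + 1))
        return s
    return max(class_counts, key=log_score)
```

Working in log space avoids underflow when the product runs over many features.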

Boosting the margin: A new explanation for the effectiveness of voting methods

by Robert E. Schapire, Yoav Freund, Peter Bartlett, Wee Sun Lee - In Proceedings of the International Conference on Machine Learning , 1997
"... One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that ... techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins ..."
Cited by 897 (52 self)
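The margin of a weighted vote, the central quantity in this abstract, has a simple closed form for binary labels in {-1, +1}; a sketch (the function name and interface are my own):

```python
def voting_margin(votes, weights, true_label):
    """Margin of a weighted majority vote for labels in {-1, +1}:
    (weight voting correctly - weight voting incorrectly) / total weight.
    Lies in [-1, 1] and is positive exactly when the vote is correct."""
    total = sum(weights)
    signed = sum(w * h * true_label for h, w in zip(votes, weights))
    return signed / total
```

Large positive margins on the training set are what the paper connects to low test error, even after training error hits zero.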

Beyond Independence: Conditions for the Optimality of the Simple Bayesian Classifier

by Pedro Domingos, Michael Pazzani
"... The simple Bayesian classifier (SBC) is commonly thought to assume that attributes are independent given the class, but this is apparently contradicted by the surprisingly good performance it exhibits in many domains that contain clear attribute dependences. No explanation for this has been proposed ..."
Cited by 361 (8 self)

Discriminative probabilistic models for relational data

by Ben Taskar , 2002
"... In many supervised learning tasks, the entities to be labeled are related to each other in complex ways and their labels are not independent. For example, in hypertext classification, the labels of linked pages are highly correlated. A standard approach is to classify each entity independently, ignoring the correlations between them. Recently, Probabilistic Relational Models, a relational version of Bayesian networks, were used to define a joint probabilistic model for a collection of related entities. In this paper, we present an alternative framework that builds on (conditional) Markov networks ..."
Cited by 415 (12 self)

Mining Concept-Drifting Data Streams Using Ensemble Classifiers

by Haixun Wang, Wei Fan, Philip S. Yu, Jiawei Han , 2003
"... Recently, mining data streams with concept drifts for actionable insights has become an important and challenging task for a wide range of applications including credit card fraud protection, target marketing, network intrusion detection, etc. Conventional knowledge discovery tools are facing two challenges: the overwhelming volume of the streaming data, and the concept drifts. In this paper, we propose a general framework for mining concept-drifting data streams using weighted ensemble classifiers. We train an ensemble of classification models, such as C4.5, RIPPER, naive Bayesian, etc., from ..."
Cited by 280 (37 self)
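The weighted-ensemble idea can be sketched roughly: weight each member by its performance on the most recent data chunk, then take a weighted vote (a simplification of the paper's scheme, which derives weights from estimated classification error; the names here are hypothetical):

```python
def chunk_weights(models, chunk):
    """Weight each model by its accuracy on the most recent data chunk.
    Under concept drift, models trained on stale chunks score poorly
    here and their votes fade automatically."""
    weights = []
    for m in models:
        correct = sum(1 for x, y in chunk if m(x) == y)
        weights.append(correct / len(chunk))
    return weights

def ensemble_predict(models, weights, x):
    """Weighted vote: sum each model's weight onto its predicted label."""
    scores = {}
    for m, w in zip(models, weights):
        label = m(x)
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

Any callable classifier works as a member, which is why the framework is agnostic to the base learner (C4.5, RIPPER, naive Bayes, etc.).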

Internet traffic classification using Bayesian analysis techniques

by Andrew W. Moore, Denis Zuev - In ACM SIGMETRICS , 2005
"... Accurate traffic classification is of fundamental importance to numerous other network activities, from security monitoring to accounting, and from Quality of Service to providing operators with useful forecasts for long-term provisioning. We apply a Naïve Bayes estimator to categorize traffic by application. Uniquely, our work capitalizes on hand-classified network data, using it as input to a supervised Naïve Bayes estimator. In this paper we illustrate the high level of accuracy achievable with the Naïve Bayes estimator. We further illustrate the improved accuracy of refined variants ..."
Cited by 271 (8 self)

Comparing Bayesian Network Classifiers

by Jie Cheng, Russell Greiner , 1999
"... In this paper, we empirically evaluate algorithms for learning four types of Bayesian network (BN) classifiers -- Naïve-Bayes, tree augmented Naïve-Bayes, BN augmented Naïve-Bayes and general BNs, where the latter two are learned using two variants of a conditional-independence (CI) based BN learning ..."
Cited by 105 (5 self)

Learning Limited Dependence Bayesian Classifiers

by Mehran Sahami - In KDD-96: Proceedings of the Second International Conference on Knowledge Discovery and Data Mining , 1996
"... We present a framework for characterizing Bayesian classification methods. This framework can be thought of as a spectrum of allowable dependence in a given probabilistic model with the Naive Bayes algorithm at the most restrictive end and the learning of full Bayesian networks at the most general e ..."
Cited by 131 (4 self)

Building Classifiers using Bayesian Networks

by Nir Friedman, Moises Goldszmidt - In Proceedings of the thirteenth national conference on artificial intelligence , 1996
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we examine and evaluate approaches for inducing classifiers from data, based on recent results in the theory of learning Bayesian networks. Bayesian networks are factored representations of probability distributions that generalize the naive Bayes ..."
Cited by 92 (2 self)

Addressing the Curse of Imbalanced Training Sets: One-Sided Selection

by Miroslav Kubat, Stan Matwin - In Proceedings of the Fourteenth International Conference on Machine Learning , 1997
"... Adding examples of the majority class to the training set can have a detrimental effect on the learner's behavior: noisy or otherwise unreliable examples from the majority class can overwhelm the minority class. The paper discusses criteria to evaluate the utility of classifiers induced f ..."
Cited by 234 (1 self)
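The class-imbalance problem this abstract describes is often attacked by undersampling the majority class; a crude random-undersampling baseline is sketched below (the paper's one-sided selection is more careful, removing only noisy or borderline majority examples; this helper and its parameters are my own):

```python
import random

def undersample_majority(examples, majority_label, ratio=1.0, seed=0):
    """Randomly drop majority-class examples until the majority class
    has at most `ratio` times as many examples as the rest.
    A blunt baseline: unlike one-sided selection, it cannot tell noisy
    majority examples from informative ones."""
    rng = random.Random(seed)
    minority = [e for e in examples if e[1] != majority_label]
    majority = [e for e in examples if e[1] == majority_label]
    keep = min(len(majority), int(ratio * len(minority)))
    return minority + rng.sample(majority, keep)
```

A fixed seed makes the subsample reproducible across runs.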

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University