CiteSeerX
Results 1 - 10 of 1,604,683

Bayesian Network Classifiers

by Nir Friedman, Dan Geiger, Moises Goldszmidt, 1997
"... Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restr ..."
Cited by 788 (23 self)
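
The independence assumption the snippet describes translates directly into code; below is a minimal Gaussian naive Bayes sketch (an illustration of the general technique, not the authors' implementation; the smoothing constant is an arbitrary choice):

    import numpy as np

    class GaussianNaiveBayes:
        """Minimal naive Bayes: features assumed independent given the class."""

        def fit(self, X, y):
            self.classes = np.unique(y)
            self.priors, self.means, self.vars = {}, {}, {}
            for c in self.classes:
                Xc = X[y == c]
                self.priors[c] = len(Xc) / len(X)
                self.means[c] = Xc.mean(axis=0)
                self.vars[c] = Xc.var(axis=0) + 1e-9  # smoothing avoids zero variance
            return self

        def predict(self, X):
            scores = []
            for c in self.classes:
                # log P(c) + sum_i log P(x_i | c): the independence assumption
                log_lik = -0.5 * (np.log(2 * np.pi * self.vars[c])
                                  + (X - self.means[c]) ** 2 / self.vars[c]).sum(axis=1)
                scores.append(np.log(self.priors[c]) + log_lik)
            return self.classes[np.argmax(scores, axis=0)]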

Invariance Theory, the Heat Equation, and the Atiyah-Singer Index Theorem

by Peter B. Gilkey, 1986
Cited by 719 (40 self)
Abstract not found

Exploiting Generative Models in Discriminative Classifiers

by Tommi Jaakkola, David Haussler - In Advances in Neural Information Processing Systems 11, 1998
"... Generative probability models such as hidden Markov models provide a principled way of treating missing information and dealing with variable length sequences. On the other hand, discriminative methods such as support vector machines enable us to construct flexible decision boundaries and often result in classification performance superior to that of the model based approaches. An ideal classifier should combine these two complementary approaches. In this paper, we develop a natural way of achieving this combination by deriving kernel functions for use in discriminative methods such as support ..."
Cited by 538 (11 self)
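
The combination the snippet describes became known as the Fisher kernel: the gradient of a generative model's log-likelihood serves as a feature map for a kernel classifier. A minimal sketch for a product-of-Bernoullis model, using the identity in place of the inverse Fisher information (a common simplification; the model and toy data here are made up):

    import numpy as np

    def fisher_score(x, theta):
        """Gradient of log p(x | theta) for a product-of-Bernoullis model:
        d/d theta log p = x/theta - (1 - x)/(1 - theta), per feature."""
        return x / theta - (1 - x) / (1 - theta)

    def fisher_kernel(x1, x2, theta):
        """Fisher kernel K(x1, x2) = U_x1 . U_x2, with the identity standing
        in for the inverse Fisher information."""
        return fisher_score(x1, theta) @ fisher_score(x2, theta)

    # Toy usage: theta fit by maximum likelihood on binary data; the kernel
    # can then be fed to any kernel classifier (e.g., an SVM).
    X = np.array([[1, 0, 1], [1, 1, 1], [0, 0, 1]], dtype=float)
    theta = X.mean(axis=0).clip(0.05, 0.95)  # clip keeps the scores finite
    print(fisher_kernel(X[0], X[1], theta))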

Base-calling of automated sequencer traces using phred. I. Accuracy Assessment

by Brent Ewing, Ladeana Hillier, Michael C. Wendl, Phil Green - Genome Res., 1998
"... The availability of massive amounts of DNA sequence information has begun to revolutionize the practice of biology. As a result, current large-scale sequencing output, while impressive, is not adequate to keep pace with growing demand and, in particular, is far short of what will be required to obta ... improved accuracy of the data processing software and reliable accuracy measures to reduce the need for human involvement in error correction and make human review more efficient. Here, we describe one step toward that goal: a base-calling program for automated sequencer traces, phred, with improved ..."
Cited by 1602 (4 self)
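
The "reliable accuracy measures" mentioned here are quantified in the companion paper (part II) as per-base error probabilities on what is now the standard phred quality scale; a small sketch of that conversion (the scale itself is well established, the code is only illustrative):

    import math

    def phred_quality(p_error):
        """Quality score q = -10 * log10(p), with p the estimated probability
        that a base call is wrong (the scale defined in part II of this study)."""
        return -10 * math.log10(p_error)

    def error_prob(quality):
        """Inverse mapping: p = 10 ** (-q / 10)."""
        return 10 ** (-quality / 10)

    # q = 20 means a 1-in-100 chance the call is wrong; q = 30 means 1 in 1000.
    print(phred_quality(0.01), error_prob(30))  # -> 20.0 0.001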

Estimating Continuous Distributions in Bayesian Classifiers

by George John, Pat Langley - In Proceedings of the Eleventh Conference on Uncertainty in Artificial Intelligence, 1995
"... When modeling a probability distribution with a Bayesian network, we are faced with the problem of how to handle continuous variables. Most previous work has either solved the problem by discretizing, or assumed that the data are generated by a single Gaussian. In this paper we abandon the normality assumption and instead use statistical methods for nonparametric density estimation. For a naive Bayesian classifier, we present experimental results on a variety of natural and artificial domains, comparing two methods of density estimation: assuming normality and modeling each conditional ..."
Cited by 489 (2 self)
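
One way to drop the normality assumption, as the paper proposes, is to estimate each class-conditional density nonparametrically with a kernel (Parzen) estimate; a minimal sketch (the 1/sqrt(n) bandwidth is one simple choice, not necessarily the authors' exact setting):

    import numpy as np

    def kde_log_density(x, samples, bandwidth=None):
        """Kernel estimate of a class-conditional density: an average of
        Gaussians centered on the training values. Substituting this for the
        single Gaussian per feature in naive Bayes is the idea of the paper."""
        samples = np.asarray(samples, dtype=float)
        if bandwidth is None:
            bandwidth = 1.0 / np.sqrt(len(samples))  # a simple default
        z = (np.atleast_1d(x)[None, :] - samples[:, None]) / bandwidth
        dens = np.exp(-0.5 * z ** 2).mean(axis=0) / (bandwidth * np.sqrt(2 * np.pi))
        return np.log(dens + 1e-300)  # guard against log(0)

    print(kde_log_density(np.array([0.0, 1.0]), [0.1, -0.2, 0.9, 1.1]))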

A training algorithm for optimal margin classifiers

by Bernhard E. Boser, et al. - Proceedings of the 5th Annual ACM Workshop on Computational Learning Theory, 1992
"A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms."
Cited by 1848 (44 self)
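
The maximal-margin training algorithm described here is what is now called a support vector machine; a minimal sketch using scikit-learn's modern implementation rather than the authors' original code (the data and the hard-margin approximation via a large C are illustrative):

    import numpy as np
    from sklearn.svm import SVC

    X = np.array([[0, 0], [1, 1], [2, 2], [3, 3.5]])
    y = np.array([0, 0, 1, 1])

    # A very large C approximates the hard-margin formulation of the paper;
    # kernel='poly' or 'rbf' would give the polynomial / radial basis variants.
    clf = SVC(kernel="linear", C=1e6).fit(X, y)
    print(clf.support_vectors_)  # the "supporting patterns" nearest the boundary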

Factoring wavelet transforms into lifting steps

by Ingrid Daubechies, Wim Sweldens - J. Fourier Anal. Appl., 1998
"... This paper is essentially tutorial in nature. We show how any discrete wavelet transform or two-band subband filtering with finite filters can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but that are also known as ladder structures. This dec ..."
Cited by 573 (8 self)
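
The simplest instance of such a factorization is the Haar transform, which splits into a predict step and an update step, each trivially invertible; a small sketch (the Haar case is the standard textbook example, shown here purely for illustration):

    def haar_lift(signal):
        """One level of the Haar wavelet transform written as lifting steps."""
        even, odd = signal[0::2], signal[1::2]
        detail = [o - e for e, o in zip(even, odd)]          # predict step
        approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
        return approx, detail

    def haar_unlift(approx, detail):
        """Inverse: run the same steps backwards with the signs flipped."""
        even = [a - d / 2 for a, d in zip(approx, detail)]
        odd = [e + d for e, d in zip(even, detail)]
        out = []
        for e, o in zip(even, odd):
            out += [e, o]
        return out

    print(haar_unlift(*haar_lift([2.0, 4.0, 6.0, 8.0])))  # recovers the input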

On Discriminative vs. Generative classifiers: A comparison of logistic regression and naive Bayes

by Andrew Y. Ng, Michael I. Jordan, 2001
"... We compare discriminative and generative learning as typified by logistic regression and naive Bayes. We show, contrary to a widely held belief that discriminative classifiers are almost always to be preferred, that there can often be two distinct regimes of performance as the training set size is i ..."
Cited by 513 (8 self)
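
The two regimes can be probed empirically by training both models on growing subsets of data; a small sketch using scikit-learn (the synthetic dataset and sample sizes are arbitrary, and this toy data may or may not show the crossover the paper reports):

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

    # The paper's claim: the generative model often wins with few examples,
    # the discriminative one as the training set grows.
    for n in (10, 50, 200, 1000):
        nb = GaussianNB().fit(X_tr[:n], y_tr[:n])
        lr = LogisticRegression(max_iter=1000).fit(X_tr[:n], y_tr[:n])
        print(n, nb.score(X_te, y_te), lr.score(X_te, y_te))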

Dictionary of protein secondary structure: pattern recognition of hydrogen-bonded and geometrical features

by Wolfgang Kabsch, Christian Sander - Biopolymers, 1983
"... structure ..."
Cited by 2066 (4 self)

The Theory of Hybrid Automata

by Thomas A. Henzinger, 1996
"... A hybrid automaton is a formal model for a mixed discrete-continuous system. We classify hybrid automata according to what questions about their behavior can be answered algorithmically. The classification reveals structure on mixed discrete-continuous state spaces that was previously studied on pur ..."
Cited by 680 (13 self)
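
A hybrid automaton pairs discrete modes with continuous dynamics and guard conditions that trigger mode switches; below is a toy thermostat simulation sketch (the thermostat is a standard example in this literature, but all constants here are made up):

    def simulate_thermostat(t_end=10.0, dt=0.01):
        """Toy hybrid automaton: two discrete modes ('on', 'off') with
        continuous temperature dynamics, switching on guard conditions."""
        mode, temp, t = "off", 70.0, 0.0
        trace = []
        while t < t_end:
            # Continuous flow within the current mode: relax toward a
            # mode-dependent equilibrium temperature.
            temp += 0.5 * ((80.0 if mode == "on" else 60.0) - temp) * dt
            # Discrete jumps when a guard is crossed.
            if mode == "off" and temp <= 68.0:
                mode = "on"
            elif mode == "on" and temp >= 72.0:
                mode = "off"
            trace.append((round(t, 2), mode, round(temp, 2)))
            t += dt
        return trace

    print(simulate_thermostat()[-1])  # final (time, mode, temperature)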