Results 1–10 of 89,135
Chromatic PAC-Bayes Bounds for Non-IID Data
In Twelfth International Conference on Artificial Intelligence and Statistics. Omnipress, 2009
Cited by 4 (2 self)
Abstract: "... PAC-Bayes bounds are among the most accurate generalization bounds for classifiers learned with IID data, and this is particularly so for margin classifiers. However, there are many practical cases where the training data show some dependencies and where the traditional IID assumption does not apply. ..."
Chromatic PAC-Bayes Bounds for Non-IID Data: Applications to Ranking and Stationary β-Mixing Processes
Journal of Machine Learning Research, 2009
Abstract: "... PAC-Bayes bounds are among the most accurate generalization bounds for classifiers learned from independently and identically distributed (IID) data, and this is particularly so for margin classifiers: there have been recent contributions showing how practical these bounds can be either to perform mod ..."
PAC-Bayes & Margins
Advances in Neural Information Processing Systems 15, 2002
Cited by 80 (12 self)
Abstract: "... We show two related things: (1) Given a classifier which consists of a weighted sum of features with a large margin, we can construct a stochastic classifier with negligibly larger training error rate. The stochastic classifier has a future error rate bound that depends on the margin distributio ..."
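The bounds this entry refers to can be evaluated numerically. Below is a minimal sketch of the standard kl-form PAC-Bayes bound (the Seeger/Maurer form), under the usual assumptions: a Gibbs classifier with empirical risk `emp_risk`, KL divergence `kl_qp` between posterior and prior, sample size `m`, and confidence `delta`. The function name and the bisection inversion are illustrative choices, not from the cited paper.

```python
import math

def pac_bayes_kl_bound(emp_risk, kl_qp, m, delta):
    """Upper bound on the true risk r of a Gibbs classifier, obtained by
    inverting kl(emp_risk || r) <= (KL(Q||P) + ln(2*sqrt(m)/delta)) / m."""
    rhs = (kl_qp + math.log(2 * math.sqrt(m) / delta)) / m

    def kl(q, p):
        # Binary KL divergence, clamped away from 0 and 1 for stability.
        eps = 1e-12
        q = min(max(q, eps), 1 - eps)
        p = min(max(p, eps), 1 - eps)
        return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

    # Bisection: find the largest r >= emp_risk with kl(emp_risk, r) <= rhs.
    lo, hi = emp_risk, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if kl(emp_risk, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo
```

As expected, the bound tightens toward the empirical risk as the sample size grows and loosens as the posterior moves away from the prior.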
Boosting a Weak Learning Algorithm By Majority
1995
Cited by 516 (15 self)
Abstract: "... We present an algorithm for improving the accuracy of algorithms for learning binary concepts. The improvement is achieved by combining a large number of hypotheses, each of which is generated by training the given learning algorithm on a different set of examples. Our algorithm is based on ideas presented by Schapire in his paper "The strength of weak learnability", and represents an improvement over his results. The analysis of our algorithm provides general upper bounds on the resources required for learning in Valiant's polynomial PAC learning framework, which are the best general ..."
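The core idea in this abstract, combining many hypotheses trained on differently weighted examples and predicting by majority vote, can be sketched in toy form. This is not Freund's boost-by-majority algorithm itself: the decision stumps, the multiplicative reweighting constants, and all function names below are illustrative assumptions.

```python
def train_stump(points, weights):
    # points: list of (x, y) with x a float and y in {-1, +1}.
    # Pick the threshold and sign minimizing weighted training error.
    best = None
    for t in sorted({x for x, _ in points}):
        for sign in (+1, -1):
            err = sum(w for (x, y), w in zip(points, weights)
                      if (sign if x >= t else -sign) != y)
            if best is None or err < best[0]:
                best = (err, t, sign)
    _, t, sign = best
    return lambda x, t=t, s=sign: s if x >= t else -s

def boost_by_majority(points, rounds=5):
    n = len(points)
    weights = [1.0 / n] * n
    hyps = []
    for _ in range(rounds):
        h = train_stump(points, weights)
        # Up-weight misclassified examples, down-weight correct ones.
        weights = [w * (2.0 if h(x) != y else 0.5)
                   for (x, y), w in zip(points, weights)]
        z = sum(weights)
        weights = [w / z for w in weights]
        hyps.append(h)
    # Final classifier: unweighted majority vote over all hypotheses.
    def majority(x):
        return 1 if sum(h(x) for h in hyps) >= 0 else -1
    return majority
```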
Survey on Independent Component Analysis
NEURAL COMPUTING SURVEYS, 1999
Cited by 2241 (104 self)
Abstract: "... A common problem encountered in such disciplines as statistics, data analysis, signal processing, and neural network research is finding a suitable representation of multivariate data. For computational and conceptual simplicity, such a representation is often sought as a linear transformation of the ..."
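The "linear transformation" sought in this abstract can be illustrated with the standard first step of ICA, whitening, followed by a one-unit FastICA fixed-point iteration with the kurtosis contrast g(u) = u³. This is a minimal NumPy sketch of one well-known ICA variant, not the survey's full treatment; the function names are assumptions.

```python
import numpy as np

def whiten(X):
    # Center and decorrelate: rows are observations, columns are variables.
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    d, E = np.linalg.eigh(cov)
    W = E @ np.diag(d ** -0.5) @ E.T   # symmetric whitening matrix
    return Xc @ W

def fastica_one_unit(X, iters=200, seed=0):
    # One-unit FastICA fixed point: w <- E[z g(w.z)] - E[g'(w.z)] w,
    # with g(u) = u**3 and E[g'(u)] = 3 on whitened, unit-variance data.
    Z = whiten(X)
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(Z.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(iters):
        u = Z @ w
        w_new = (Z * (u ** 3)[:, None]).mean(axis=0) - 3 * w
        w_new /= np.linalg.norm(w_new)
        w = w_new
    return w   # unit vector; Z @ w is one estimated independent component
```

After whitening, the sample covariance is the identity, which is what makes the subsequent fixed-point rotation search well posed.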
Estimating the Support of a High-Dimensional Distribution
1999
Cited by 766 (29 self)
Abstract: "... Suppose you are given some dataset drawn from an underlying probability distribution P and you want to estimate a "simple" subset S of input space such that the probability that a test point drawn from P lies outside of S is bounded by some a priori specified ν between 0 and 1. We propo ..."
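The task the abstract sets up can be illustrated with a deliberately naive baseline (not the paper's single-class SVM): take S to be the smallest ball centered at the sample mean that contains a (1 − ν) fraction of the data, so ν plays the role of the a priori outlier bound. All names here are hypothetical.

```python
import numpy as np

def support_ball(X, nu=0.1):
    # Estimate a "simple" region S containing roughly a (1 - nu)
    # fraction of the sample: a ball around the empirical mean.
    center = X.mean(axis=0)
    dists = np.linalg.norm(X - center, axis=1)
    radius = np.quantile(dists, 1.0 - nu)
    return center, radius

def is_outlier(x, center, radius):
    # A test point outside the ball is flagged as outside the support.
    return np.linalg.norm(x - center) > radius
```

On the training sample itself, about a ν fraction of points fall outside the ball by construction; the cited paper's contribution is a kernelized estimator with guarantees on test points.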
Where the REALLY Hard Problems Are
In J. Mylopoulos and R. Reiter (Eds.), Proceedings of the 12th International Joint Conference on AI (IJCAI-91), Volume 1, 1991
Cited by 681 (1 self)
Abstract: "... It is well known that for many NP-complete problems, such as K-Sat, etc., typical cases are easy to solve, so that computationally hard cases must be rare (assuming P != NP). This paper shows that NP-complete problems can be summarized by at least one "order parameter", and that the hard problems ... We show that for some P problems either there is no phase transition or it occurs for bounded N (and so bound ..."
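The "order parameter" this abstract describes can be reproduced at toy scale: for random 3-SAT the satisfiable fraction drops sharply as the clauses-to-variables ratio crosses a critical value. The brute-force sketch below is exponential in the number of variables and only meant for tiny instances; all names are illustrative.

```python
import itertools
import random

def random_3sat(n_vars, n_clauses, rng):
    # Each clause: 3 distinct variables, each negated with probability 1/2.
    # A literal is +v or -v for variable v in 1..n_vars.
    return [tuple(rng.choice([v, -v]) for v in rng.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def satisfiable(clauses, n_vars):
    # Brute force over all 2**n_vars assignments (tiny n only).
    for bits in itertools.product([False, True], repeat=n_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def sat_fraction(n_vars, ratio, trials, seed=0):
    # Empirical probability that a random formula at the given
    # clauses-to-variables ratio is satisfiable.
    rng = random.Random(seed)
    n_clauses = int(ratio * n_vars)
    hits = sum(satisfiable(random_3sat(n_vars, n_clauses, rng), n_vars)
               for _ in range(trials))
    return hits / trials
```

Sweeping the ratio from well below to well above the critical region (around 4.27 for large random 3-SAT) shows the transition the paper ties to the location of the hardest instances.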
Semi-Supervised Learning Literature Survey
2006
Cited by 757 (8 self)
Abstract: "... We review the literature on semi-supervised learning, which is an area in machine learning and, more generally, artificial intelligence. There has been a whole spectrum of interesting ideas on how to learn from both labeled and unlabeled data, i.e. semi-supervised learning. This document is a chapter ..."