Results 1 – 10 of 22
Noise-adaptive Margin-based Active Learning and Lower Bounds under Tsybakov Noise Condition
"... We present a simple noiserobust marginbased active learning algorithm to find homogeneous (passing the origin) linear separators and analyze its error convergence when labels are corrupted by noise. We show that when the imposed noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov, ..."
Abstract
 Add to MetaCart
We present a simple noiserobust marginbased active learning algorithm to find homogeneous (passing the origin) linear separators and analyze its error convergence when labels are corrupted by noise. We show that when the imposed noise satisfies the Tsybakov low noise condition (Mammen, Tsybakov
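For reference, the Tsybakov (Mammen–Tsybakov) low noise condition invoked throughout these results is commonly stated as follows (one standard formulation; the exponent convention varies from paper to paper):

```latex
% Mammen–Tsybakov low noise (margin) condition:
% with regression function \eta(x) = P(Y = 1 \mid X = x),
% there exist constants C > 0 and \alpha \ge 0 such that
P_X\bigl( \lvert \eta(X) - \tfrac{1}{2} \rvert \le t \bigr) \le C\, t^{\alpha}
\quad \text{for all } t > 0 .
```

Larger α means the posterior rarely hovers near 1/2, which is what permits the faster convergence rates these papers establish.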
Optimal rates for first-order stochastic convex optimization under Tsybakov noise condition
"... We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first order oracle. We argue that the complexity of convex minimization is only determined by the rate of growth of the function around its minimizer x ∗ f,S, as quantified by a Tsybakovlik ..."
Abstract
 Add to MetaCart
We focus on the problem of minimizing a convex function f over a convex set S given T queries to a stochastic first order oracle. We argue that the complexity of convex minimization is only determined by the rate of growth of the function around its minimizer x ∗ f,S, as quantified by a Tsybakov
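The growth condition alluded to in this abstract can be written as follows (an illustrative form; the paper's exact constants, norm, and quantification may differ):

```latex
% A Tsybakov-like growth condition on f around its minimizer x^*_{f,S}:
f(x) - f(x^*_{f,S}) \ge \lambda \, \lVert x - x^*_{f,S} \rVert^{\kappa}
\quad \text{for all } x \in S, \ \text{for some } \lambda > 0, \ \kappa \ge 1 .
```

Sharper growth (smaller κ) makes the minimizer easier to localize from noisy gradient queries, mirroring how the low noise exponent governs rates in classification.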
Multi-view active learning in the non-realizable case
 In Advances in Neural Information Processing Systems
"... The sample complexity of active learning under the realizability assumption has been wellstudied. The realizability assumption, however, rarely holds in practice. In this paper, we theoretically characterize the sample complexity of active learning in the nonrealizable case under multiview setti ..."
Abstract

Cited by 7 (4 self)
 Add to MetaCart
view setting. We prove that, with unbounded Tsybakov noise, the sample complexity of multiview active learning can be Õ(log 1ǫ), contrasting to singleview setting where the polynomial improvement is the best possible achievement. We also prove that in general multiview setting the sample complexity
Active Learning with a Drifting Distribution
"... Abstract. We study the problem of active learning in a streambased setting, allowing the distribution of the examples to change over time. We prove upper bounds on the number of prediction mistakes and number of label requests for established disagreementbased active learning algorithms, both in t ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
in the realizable case and under Tsybakov noise. We further prove minimax lower bounds for this problem. 1
SVM soft margin classifiers: linear programming versus quadratic programming
 Neural Comp
"... Support vector machine soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for a large data setting. Linear programming SVM classifier is specially efficient for very large size samples. But little ..."
Abstract

Cited by 12 (1 self)
 Add to MetaCart
for deterministic and weakly separable distributions, and for distributions satisfying some Tsybakov noise condition. 1 1
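For readers comparing the two formulations named in this title, the programs are roughly as follows (a common textbook form; the paper's exact setup, e.g. offset term and regularization constants, may differ):

```latex
% Soft margin SVM, quadratic programming (QP) form:
\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w \rVert_2^2 + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.}\quad y_i \bigl( \langle w, x_i \rangle + b \bigr) \ge 1 - \xi_i,
\ \xi_i \ge 0 .

% 1-norm (linear programming, LP) variant, written in the kernel
% expansion f(x) = \sum_{j} \alpha_j K(x, x_j) + b:
\min_{\alpha,\,b,\,\xi}\ \sum_{j=1}^{n} \lvert \alpha_j \rvert + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.}\quad y_i f(x_i) \ge 1 - \xi_i,\ \xi_i \ge 0 .
```

Replacing the quadratic regularizer with an ℓ1 penalty turns the problem into a linear program, which scales better to very large samples at the cost of a different (sparser) solution.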
Optimal rates of aggregation in classification under low noise assumption
, 2007
"... In the same spirit as Tsybakov, we define the optimality of an aggregation procedure in the problem of classification. Using an aggregate with exponential weights, we obtain an optimal rate of convex aggregation for the hinge risk under the margin assumption. Moreover, we obtain an optimal rate of m ..."
Abstract

Cited by 10 (0 self)
 Add to MetaCart
In the same spirit as Tsybakov, we define the optimality of an aggregation procedure in the problem of classification. Using an aggregate with exponential weights, we obtain an optimal rate of convex aggregation for the hinge risk under the margin assumption. Moreover, we obtain an optimal rate
Fast rates for support vector machines using Gaussian kernels
 Ann. Statist
, 2004
"... We establish learning rates up to the order of n −1 for support vector machines with hinge loss (L1SVMs) and nontrivial distributions. For the stochastic analysis of these algorithms we use recently developed concepts such as Tsybakov’s noise assumption and local Rademacher averages. Furthermore we ..."
Abstract

Cited by 68 (9 self)
 Add to MetaCart
We establish learning rates up to the order of n −1 for support vector machines with hinge loss (L1SVMs) and nontrivial distributions. For the stochastic analysis of these algorithms we use recently developed concepts such as Tsybakov’s noise assumption and local Rademacher averages. Furthermore
Adaptive Sampling Under Low Noise Conditions
 41ÈMES JOURNÉES DE STATISTIQUE, SFDS, BORDEAUX
, 2009
"... We survey some recent results on efficient marginbased algorithms for adaptive sampling in binary classification tasks. Using the socalled MammenTsybakov low noise condition to parametrize the distribution of covariates, and assuming linear label noise, we state bounds on the convergence rate of ..."
Abstract
 Add to MetaCart
We survey some recent results on efficient marginbased algorithms for adaptive sampling in binary classification tasks. Using the socalled MammenTsybakov low noise condition to parametrize the distribution of covariates, and assuming linear label noise, we state bounds on the convergence rate
Margin based active learning
 Proc. of the 20 th Conference on Learning Theory
, 2007
"... Abstract. We present a framework for margin based active learning of linear separators. We instantiate it for a few important cases, some of which have been previously considered in the literature. We analyze the effectiveness of our framework both in the realizable case and in a specific noisy sett ..."
Abstract

Cited by 56 (9 self)
 Add to MetaCart
setting related to the Tsybakov small noise condition. 1
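The kind of framework this abstract describes can be illustrated with a toy uncertainty-sampling loop: train a linear separator on the labelled pool, then repeatedly query the unlabelled point closest to the current decision boundary. This is an illustrative sketch only, not the paper's algorithm; the perceptron trainer, pool, and oracle below are hypothetical stand-ins.

```python
def dot(u, v):
    """Inner product of two equal-length sequences."""
    return sum(a * b for a, b in zip(u, v))

def perceptron(data, epochs=50):
    """Train a homogeneous (through-the-origin) linear separator
    with the classic perceptron update rule."""
    w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            if y * dot(w, x) <= 0:  # misclassified (or zero margin): update
                w = [wi + y * xi for wi, xi in zip(w, x)]
    return w

def margin_based_active_learning(pool, oracle, budget, seed_size=2):
    """Margin-based active learning sketch: label a small seed set,
    then spend the remaining budget on the unlabelled point with the
    smallest margin |w . x| under the current hypothesis."""
    pool = list(pool)
    labelled = [(x, oracle(x)) for x in pool[:seed_size]]
    unlabelled = pool[seed_size:]
    queries = seed_size
    w = perceptron(labelled)
    while queries < budget and unlabelled:
        # Query the point closest to the current decision boundary.
        x = min(unlabelled, key=lambda p: abs(dot(w, p)))
        unlabelled.remove(x)
        labelled.append((x, oracle(x)))
        queries += 1
        w = perceptron(labelled)
    return w
```

Under low noise conditions, points far from the boundary carry little information, which is why concentrating label queries inside the margin band can beat passive sampling.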
Linear Classification and Selective Sampling Under Low Noise Conditions
"... We provide a new analysis of an efficient marginbased algorithm for selective sampling in classification problems. Using the socalled Tsybakov low noise condition to parametrize the instance distribution, we show bounds on the convergence rate to the Bayes risk of both the fully supervised and the ..."
Abstract

Cited by 12 (4 self)
 Add to MetaCart
We provide a new analysis of an efficient marginbased algorithm for selective sampling in classification problems. Using the socalled Tsybakov low noise condition to parametrize the instance distribution, we show bounds on the convergence rate to the Bayes risk of both the fully supervised