## Analysis of a greedy active learning strategy (2004)

### Download Links

- [www.cs.ucsd.edu]
- [www.csail.mit.edu]
- [cseweb.ucsd.edu]
- [www-connex.lip6.fr]
- [books.nips.cc]
- [www-poleia.lip6.fr]
- DBLP

### Other Repositories/Bibliography

Venue: Advances in Neural Information Processing Systems 17

Citations: 75 (3 self)

### BibTeX

@INPROCEEDINGS{Dasgupta04analysisof,
  author    = {Sanjoy Dasgupta},
  title     = {Analysis of a greedy active learning strategy},
  booktitle = {Advances in Neural Information Processing Systems 17},
  year      = {2004}
}

### Abstract

We abstract out the core search problem of active learning schemes, to better understand the extent to which adaptive labeling can improve sample complexity. We give various upper and lower bounds on the number of labels that need to be queried, and we prove that a popular greedy active learning rule is approximately as good as any other strategy for minimizing this number of labels.
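The greedy rule analyzed in the paper queries, at each step, the pool point whose label most evenly splits the π-mass of the remaining version space. A minimal sketch under that reading — the names (`greedy_query`, `active_learn`) and the threshold-hypothesis test setup are illustrative, not the paper's notation:

```python
def greedy_query(version_space, pool, prior):
    """Pick the unlabeled point whose label most evenly splits the
    prior mass of the current version space (illustrative helper)."""
    total = sum(prior[h] for h in version_space)
    best_x, best_balance = None, -1.0
    for x in pool:
        # Mass of surviving hypotheses that would label x positive.
        pos = sum(prior[h] for h in version_space if h(x) == 1)
        balance = min(pos, total - pos)  # larger = more even split
        if balance > best_balance:
            best_x, best_balance = x, balance
    return best_x

def active_learn(hypotheses, pool, prior, oracle):
    """Repeatedly query the greedily chosen point, pay one label,
    and discard hypotheses inconsistent with the answer."""
    version_space = set(hypotheses)
    pool = set(pool)
    queries = 0
    while len(version_space) > 1 and pool:
        x = greedy_query(version_space, pool, prior)
        y = oracle(x)  # one label queried
        queries += 1
        pool.discard(x)
        version_space = {h for h in version_space if h(x) == y}
    return version_space, queries
```

On a class of threshold functions on a line with a uniform prior, this rule reduces to binary search, using roughly log₂|H| labels instead of labeling the whole pool — the kind of adaptive saving the abstract refers to.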

### Citations

682 | Approximation algorithms for combinatorial problems
- Johnson
- 1974

Citation Context: ...tion, then some query must (again in expectation) “cut off” a chunk of Ĥ of π-mass Ω(1/Q*). Therefore, the root query of the greedy tree TG is at least this good (cf. Johnson’s set cover analysis [8]). Things get trickier when we try to show that the rest of TG is also good, because although T* uses just Q* queries on average, it may need many more queries for certain hypotheses. Subtrees of TG...
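The set cover analysis invoked here is the classic greedy approximation: repeatedly take the set covering the most uncovered elements, which Johnson (1974) showed is within a logarithmic factor of the optimal cover. A self-contained sketch of that standard algorithm (not code from the paper):

```python
def greedy_set_cover(universe, sets):
    """Greedy set cover: repeatedly pick the set covering the most
    still-uncovered elements; achieves an O(ln |universe|)-factor
    approximation of the optimal cover size."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            break  # remaining elements cannot be covered
        chosen.append(best)
        uncovered -= best
    return chosen
```

The analogy in the paper: each query "covers" π-mass of the version space it removes, and the greedy query is compared against the best achievable per-query mass, just as the greedy set above is compared against the optimal cover.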

647 | Queries and concept learning
- Angluin
- 1988

Citation Context: ...cally chosen from an unlabeled data set, a practice called pool-based learning [10]. There has also been some work on creating query points synthetically, including a rich body of theoretical results [1, 2], but this approach suffers from two problems: first, from a practical viewpoint, the queries thus produced can be quite unnatural and therefore bewildering for a human to classify [3]; second, since ...

596 | An Introduction to Computational Learning Theory
- Kearns, Vazirani
- 1994

Citation Context: ...m(ɛ, d) such that if we want a hypothesis of error ≤ ɛ (on P, modulo some fixed confidence level), and if m ≥ m(ɛ, d), then we need only pick a hypothesis h ∈ H consistent with these labeled points [9]. Now suppose just the pool of unlabeled data x1, . . . , xm is available. The possible labelings of these points form a subset of {0, 1}^m, the effective hypothesis class Ĥ ≅ {(h(x1), . . . , h(x...
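The projection of H onto the pool described in this context can be computed directly: each hypothesis maps to its labeling vector on x1, . . . , xm, and hypotheses that agree on every pool point collapse to a single element of {0, 1}^m. A small illustrative sketch (the helper name is ours, not the paper's):

```python
def effective_hypothesis_class(hypotheses, pool):
    """Project H onto the unlabeled pool: each hypothesis becomes its
    labeling vector, and hypotheses agreeing on all pool points
    collapse, yielding a subset of {0, 1}^m."""
    return {tuple(h(x) for x in pool) for h in hypotheses}
```

For threshold functions on a line, infinitely many thresholds collapse to at most m + 1 distinct labelings of an m-point pool, which is why the search can be run over the finite effective class rather than over H itself.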

528 | Active learning with statistical models
- Cohn, Ghahramani, et al.
- 1996

Citation Context: ...e TS of T, Q(TS, πS) ≤ 4 Q(T*, πS) ln(π(S)/min_{h∈S} π(h)). 6 Related work and promising directions. Rather than attempting to summarize the wide range of proposed active learning methods, for instance [5, 7, 10, 13, 14], we will discuss three basic techniques upon which they rely. Greedy search. This is the technique we have abstracted and rigorously validated in this paper. It is the foundation of most of the schem...

508 | Support vector machine active learning with applications to text classification
- Tong, Koller
- 2002

Citation Context: ...e TS of T, Q(TS, πS) ≤ 4 Q(T*, πS) ln(π(S)/min_{h∈S} π(h)). 6 Related work and promising directions. Rather than attempting to summarize the wide range of proposed active learning methods, for instance [5, 7, 10, 13, 14], we will discuss three basic techniques upon which they rely. Greedy search. This is the technique we have abstracted and rigorously validated in this paper. It is the foundation of most of the schem...

334 | Selective sampling using the query by committee algorithm
- Freund, Seung, et al.
- 1997

Citation Context: ...e TS of T, Q(TS, πS) ≤ 4 Q(T*, πS) ln(π(S)/min_{h∈S} π(h)). 6 Related work and promising directions. Rather than attempting to summarize the wide range of proposed active learning methods, for instance [5, 7, 10, 13, 14], we will discuss three basic techniques upon which they rely. Greedy search. This is the technique we have abstracted and rigorously validated in this paper. It is the foundation of most of the schem...

257 | Employing EM in pool-based active learning for text classification
- McCallum
- 1998

Citation Context: ...the learner is able to ask for the labels of specific points, but is charged for each label. These query points are typically chosen from an unlabeled data set, a practice called pool-based learning [10]. There has also been some work on creating query points synthetically, including a rich body of theoretical results [1, 2], but this approach suffers from two problems: first, from a practical viewpo...

254 | Structural risk minimization over data-dependent hierarchies
- Shawe-Taylor, Bartlett, et al.
- 1998

Citation Context: ...margin), then its final error bound is excellent if it guessed right, and worse than usual if it guessed wrong. This technique is not specific to active learning, and has been analyzed elsewhere (e.g. [12]). One interesting line of work investigates a flexible family of priors specified by pairwise similarities between data points, e.g. [14]. Bayesian assumptions. In our analysis, although π can be seen...

252 | Towards optimal active learning through sampling estimation of error reduction
- Roy, McCallum

Citation Context: ...istic classifiers, the Bayesian assumption has also been used in another way: to approximate the greedy selection rule using the MAP estimate instead of an expensive summation over the posterior (e.g. [11]). In terms of theoretical results, another work which considers the tradeoff between labels and generalization error is [7], in which a greedy scheme, realized using sampling, is analyzed in a Bayesi...

130 | Oriented matroids
- Björner, Las Vergnas, et al.
- 1999

76 | Combining active learning and semi-supervised learning using Gaussian fields and harmonic functions
- Zhu, Lafferty, et al.
- 2003

53 | Queries revisited
- Angluin
- 2001

Citation Context: ...cally chosen from an unlabeled data set, a practice called pool-based learning [10]. There has also been some work on creating query points synthetically, including a rich body of theoretical results [1, 2], but this approach suffers from two problems: first, from a practical viewpoint, the queries thus produced can be quite unnatural and therefore bewildering for a human to classify [3]; second, since ...

40 | Query learning can work poorly when a human oracle is used
- Baum, Lang
- 1992

Citation Context: ...ical results [1, 2], but this approach suffers from two problems: first, from a practical viewpoint, the queries thus produced can be quite unnatural and therefore bewildering for a human to classify [3]; second, since these queries are not picked from the underlying data distribution, they might have limited value in terms of generalization. In this paper, we focus on pool-based learning. We are int...

11 | A theoretical analysis of query selection for collaborative filtering
- Dasgupta, Lee, et al.
(Show Context)
Citation Context ... more than Q ∗ queries are needed, and the roots of these subtrees might not cut down the version space much... For a worst-case model, a proof of approximate optimality is known in a related context =-=[6]-=-; as we saw in Claim 1, that model is trivial in our situation. The average-case model, and especially the use of arbitrary weights π, require more care. Details. For want of space, we only discuss so... |