by Leslie G. Valiant

Venue: Machine Learning
Citations: 18 (3 self)

@INPROCEEDINGS{Valiant97projectionlearning,
  author    = {Leslie G. Valiant},
  title     = {Projection Learning},
  booktitle = {MACHINE LEARNING},
  year      = {1997},
  pages     = {287--293}
}

A method of combining learning algorithms is described that preserves attribute efficiency. It yields learning algorithms that require a number of examples polynomial in the number of relevant variables and logarithmic in the number of irrelevant ones. The algorithms are simple to implement and realizable on networks with a number of nodes linear in the total number of variables. They can be viewed as strict generalizations of Littlestone's Winnow algorithm, and are therefore appropriate to domains that have very large numbers of attributes but where nonlinear hypotheses are sought.

962 | Human problem solving
- Newell, Simon
- 1972
Citation Context: ...tentially prohibitive n^(k+1). [Section 5: Sequential Structures] Production systems, or sequences of condition-action rules, have been frequently suggested as appropriate for representing cognitive computations (Newell and Simon, 1972). These can be formalized skeletally as decision lists (Rivest 1987, Khardon 1996). No polynomial time learning algorithm is known that can learn decision lists attribute efficiently in the strong se...

672 | Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm
- Littlestone
- 1988
Citation Context: ...had been reduced to logarithmic, rather than linear, represented a major breakthrough. Soon afterwards Littlestone proved a similar result for a surprisingly elegant algorithm which he called Winnow (Littlestone, 1988). It resembles the perceptron algorithm in simplicity and form, but achieves its attribute-efficient behavior by having multiplicative rather than additive updates. The algorithm has been shown to be...
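The multiplicative-update behavior described in this context can be sketched concretely. The following is an illustrative toy implementation of the basic Winnow rule, not code from the paper: the standard threshold θ = n and promotion/demotion factor α = 2 are assumed, the function names are ours, and the target is a small monotone disjunction with only 2 of 8 attributes relevant.

```python
# Toy Winnow sketch: weights are updated multiplicatively -- promoted
# (multiplied by alpha) on false negatives, demoted (divided by alpha)
# on false positives -- rather than additively as in the perceptron.

def winnow_predict(weights, x, theta):
    """Predict 1 iff the summed weights of active attributes reach theta."""
    return 1 if sum(w for w, xi in zip(weights, x) if xi) >= theta else 0

def winnow_update(weights, x, y, y_hat, alpha=2.0):
    """Multiplicative update on mistakes; no change on correct predictions."""
    if y == 1 and y_hat == 0:   # false negative: promote active weights
        return [w * alpha if xi else w for w, xi in zip(weights, x)]
    if y == 0 and y_hat == 1:   # false positive: demote active weights
        return [w / alpha if xi else w for w, xi in zip(weights, x)]
    return weights

# Target concept: x0 OR x2 over n = 8 boolean attributes.
n = 8
theta = n                       # standard threshold choice
weights = [1.0] * n
examples = [([1,0,0,0,0,0,0,0], 1), ([0,1,0,0,0,0,0,0], 0),
            ([0,0,1,0,0,0,0,0], 1), ([0,0,0,0,1,1,1,1], 0)]

for _ in range(20):             # a few passes drive the mistakes to zero
    for x, y in examples:
        y_hat = winnow_predict(weights, x, theta)
        weights = winnow_update(weights, x, y, y_hat)

# Only the relevant weights grow; the six irrelevant ones stay small.
assert all(winnow_predict(weights, x, theta) == y for x, y in examples)
```

The attribute-efficient behavior is visible even at this scale: the weights of the two relevant attributes are promoted geometrically to the threshold, while the irrelevant weights are never promoted.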

596 | An Introduction to Computational Learning Theory - Kearns, Vazirani - 1994

375 | Learning Decision Lists
- Rivest
- 1987
Citation Context: ...quences of condition-action rules have been frequently suggested as appropriate for representing cognitive computations (Newell and Simon, 1972). These can be formalized skeletally as decision lists (Rivest 1987, Khardon 1996). No polynomial time learning algorithm is known that can learn decision lists attribute efficiently in the strong sense so far considered here, that the computation time is polynomial ...
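A decision list in Rivest's sense, as invoked in this context, is an ordered sequence of (test, output) rules with a default label: the first rule whose test is satisfied determines the prediction. A minimal illustrative sketch follows; the representation and function names are ours, not from the paper, and tests are restricted to single literals for simplicity.

```python
# Decision list sketch: an ordered sequence of rules, each a pair
# ((index, polarity), output). The literal x[index] == polarity is the
# test; the first satisfied test determines the label, else the default.

def evaluate(decision_list, default, x):
    """Return the output of the first rule whose literal is satisfied."""
    for (i, polarity), output in decision_list:
        if x[i] == polarity:
            return output
    return default

# "if x1 then 1; else if not x0 then 0; else 1"
dl = [((1, 1), 1), ((0, 0), 0)]

assert evaluate(dl, 1, [0, 1]) == 1   # first rule fires (x1 is true)
assert evaluate(dl, 1, [0, 0]) == 0   # second rule fires (x0 is false)
assert evaluate(dl, 1, [1, 0]) == 1   # no rule fires; default applies
```

The ordering is what gives decision lists their expressive power over plain conjunctions or disjunctions: each rule is only consulted when all earlier tests have failed, which is also what makes attribute-efficient learning of them difficult, as the context notes.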

223 | Quantifying inductive bias: AI learning algorithms and Valiant's learning framework - Haussler - 1988

78 | Applying winnow to context-sensitive spelling correction
- Golding, Roth
- 1996
Citation Context: ...chieves its attribute-efficient behavior by having multiplicative rather than additive updates. The algorithm has been shown to be effective in cognitive settings with tens of thousands of variables (Golding and Roth, 1996). The primary purpose of this paper is to extend the known classes of algorithms that learn in this attribute-efficient sense. We do this by showing that attribute-efficient learnability can be preser...

73 | Learning boolean functions in an infinite attribute space - Blum - 1992

54 | The perceptron algorithm vs. winnow: linear vs. logarithmic mistake bounds when few input variables are relevant - Kivinen, Warmuth, et al. - 1997

50 | Learning to Take Actions
- Khardon
- 1999
Citation Context: ...ndition-action rules have been frequently suggested as appropriate for representing cognitive computations (Newell and Simon, 1972). These can be formalized skeletally as decision lists (Rivest 1987, Khardon 1996). No polynomial time learning algorithm is known that can learn decision lists attribute efficiently in the strong sense so far considered here, that the computation time is polynomial in all the par...

10 | On learning width two branching programs - Bshouty, Tamon, et al. - 1998
