Results 1-10 of 1,060,759
A new perspective on an old perceptron algorithm
In Proceedings of the Sixteenth Annual Conference on Computational Learning Theory, 2005
Cited by 17 (1 self)
"... Abstract. We present a generalization of the Perceptron algorithm. The new algorithm performs a Perceptron-style update whenever the margin of an example is smaller than a predefined value. We derive worst-case mistake bounds for our algorithm. As a byproduct we obtain a new mistake bound for the Pe ..."
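The margin-driven update this abstract describes can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the threshold `gamma`, the unit learning rate, and the epoch count are all assumptions.

```python
import numpy as np

def margin_perceptron(X, y, gamma=0.5, epochs=10):
    """Margin-driven Perceptron sketch: update not only on mistakes but
    whenever the signed margin y_i * <w, x_i> falls at or below the
    threshold gamma. Labels y are assumed to be in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            if y_i * np.dot(w, x_i) <= gamma:  # low-margin (or misclassified) example
                w += y_i * x_i                 # standard additive Perceptron update
    return w
```

With `gamma = 0` this reduces to the classical mistake-driven Perceptron update rule.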
Large Margin Classification Using the Perceptron Algorithm
Machine Learning, 1998
Cited by 518 (2 self)
"... We introduce and analyze a new algorithm for linear classification which combines Rosenblatt's perceptron algorithm with Helmbold and Warmuth's leave-one-out method. Like Vapnik's maximal-margin classifier, our algorithm takes advantage of data that are linearly separable with large ..."
Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms
2002
Cited by 641 (16 self)
"... We describe new algorithms for training tagging models, as an alternative to maximum-entropy models or conditional random fields (CRFs). The algorithms rely on Viterbi decoding of training examples, combined with simple additive updates. We describe theory justifying the algorithms through a modific ..."
The Perceptron: A Probabilistic Model for Information Storage and Organization in The Brain
Psychological Review, 1958
Cited by 1143 (0 self)
"... If we are eventually to understand the capability of higher organisms for perceptual recognition, generalization, recall, and thinking, we must first have answers to three fundamental questions: 1. How is information about the physical world sensed, or detected, by the biological system? 2. In what form is information stored, or remembered? 3. How does information contained in storage, or in memory, influence recognition and behavior? The first of these questions is in the ..."
Instance-based learning algorithms
Machine Learning, 1991
Cited by 1359 (18 self)
"... Abstract. Storing and using specific instances improves the performance of several supervised learning algorithms. These include algorithms that learn decision trees, classification rules, and distributed networks. However, no investigation has analyzed algorithms that use only specific instances to solve incremental learning tasks. In this paper, we describe a framework and methodology, called instance-based learning, that generates classification predictions using only specific instances. Instance-based learning algorithms do not maintain a set of abstractions derived from specific instances ..."
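Prediction from stored instances alone, as this abstract describes, can be illustrated with a minimal nearest-neighbour sketch. The 1-NN rule and Euclidean distance here are illustrative choices, not the paper's specific IB algorithms.

```python
import numpy as np

def nn_predict(train_X, train_y, x):
    """Instance-based prediction sketch: store the training instances
    verbatim (no learned abstraction) and classify a query by the label
    of its nearest stored instance under Euclidean distance."""
    dists = np.linalg.norm(train_X - x, axis=1)  # distance to every stored instance
    return train_y[np.argmin(dists)]             # label of the closest one
```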
Planning Algorithms
2004
Cited by 1108 (51 self)
"... This book presents a unified treatment of many different kinds of planning algorithms. The subject lies at the crossroads between robotics, control theory, artificial intelligence, algorithms, and computer graphics. The particular subjects covered include motion planning, discrete planning, planning ..."
Experiments with a New Boosting Algorithm
1996
Cited by 2176 (21 self)
"... In an earlier paper, we introduced a new “boosting” algorithm called AdaBoost which, theoretically, can be used to significantly reduce the error of any learning algorithm that consistently generates classifiers whose performance is a little better than random guessing. We also introduced the relate ..."
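The boosting idea in this abstract, combining weak classifiers that are only slightly better than random into a strong ensemble, can be sketched as follows. The axis-aligned threshold stump used as the weak learner and the round count are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def stump_predict(X, j, t, s):
    """Axis-aligned threshold stump: predict s where X[:, j] > t, else -s."""
    return np.where(X[:, j] > t, s, -s)

def adaboost(X, y, rounds=5):
    """Minimal AdaBoost sketch over threshold stumps; labels y in {-1, +1}."""
    n = len(y)
    D = np.full(n, 1.0 / n)                    # distribution over training examples
    ensemble = []                              # list of (alpha, feature, threshold, sign)
    for _ in range(rounds):
        # exhaustively pick the stump with the lowest weighted error under D
        err, j, t, s = min(((D[stump_predict(X, j, t, s) != y].sum(), j, t, s)
                            for j in range(X.shape[1])
                            for t in np.unique(X[:, j])
                            for s in (1, -1)), key=lambda c: c[0])
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)  # vote weight for this round
        D *= np.exp(-alpha * y * stump_predict(X, j, t, s))  # upweight the mistakes
        D /= D.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def adaboost_predict(ensemble, X):
    score = sum(alpha * stump_predict(X, j, t, s) for alpha, j, t, s in ensemble)
    return np.sign(score)
```

Each round reweights the examples so the next weak classifier concentrates on the points the ensemble currently gets wrong, which is what drives the error reduction the abstract refers to.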
A new learning algorithm for blind signal separation
1996
Cited by 614 (80 self)
"... A new online learning algorithm which minimizes a statistical dependency among outputs is derived for blind separation of mixed signals. The dependency is measured by the average mutual information (MI) of the outputs. The source signals and the mixing matrix are unknown except for the number of ..."
The Science of Monetary Policy: A New Keynesian Perspective
Journal of Economic Literature, 1999
Cited by 1809 (45 self)
"... “Having looked at monetary policy from both sides now, I can testify that central banking in practice is as much art as science. Nonetheless, while practicing this dark art, I have always found the science quite useful.” (Alan S. Blinder) ..."