Results 1–10 of 178
Making Large-Scale SVM Learning Practical, 1998
"... Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large lea ..."
Cited by 1861 (17 self)
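The dual quadratic program this snippet refers to is: maximize Σ α_i − ½ Σ_{i,j} α_i α_j y_i y_j K(x_i, x_j) subject to 0 ≤ α_i ≤ C and Σ α_i y_i = 0. A minimal pure-Python sketch of a two-variable (SMO-style) update illustrates why that structure matters: updating a pair of α's and clipping preserves both constraint types exactly. This is a generic textbook sketch, not the paper's decomposition algorithm (which selects larger working sets); the toy dataset, C, and iteration count are illustrative assumptions.

```python
import random

# Toy 1-D data: negatives at -2, -1; positives at 1, 2 (illustrative).
X = [-2.0, -1.0, 1.0, 2.0]
y = [-1, -1, 1, 1]
C = 1.0
n = len(X)

def K(a, b):
    return a * b  # linear kernel

alpha = [0.0] * n
b = 0.0

def f(x):
    return sum(alpha[k] * y[k] * K(X[k], x) for k in range(n)) + b

random.seed(0)
for _ in range(200):
    i, j = random.sample(range(n), 2)
    # Feasible interval for alpha[j] so that BOTH alphas stay in [0, C].
    if y[i] != y[j]:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    eta = K(X[i], X[i]) + K(X[j], X[j]) - 2 * K(X[i], X[j])
    if eta <= 0 or L == H:
        continue
    Ei, Ej = f(X[i]) - y[i], f(X[j]) - y[j]
    aj_new = min(H, max(L, alpha[j] + y[j] * (Ei - Ej) / eta))
    # Move alpha[i] by the opposite amount so sum(alpha*y) is unchanged.
    ai_new = alpha[i] + y[i] * y[j] * (alpha[j] - aj_new)
    alpha[i], alpha[j] = ai_new, aj_new
    # Recentre the bias on an unbounded support vector if one exists.
    for k in range(n):
        if 0 < alpha[k] < C:
            b += y[k] - f(X[k])
            break

# The pairwise update preserves the equality constraint, and clipping
# keeps every alpha inside the box [0, C].
assert abs(sum(alpha[k] * y[k] for k in range(n))) < 1e-9
assert all(-1e-12 <= a <= C + 1e-12 for a in alpha)
```

The point of the two-variable working set is that the equality constraint Σ α_i y_i = 0 couples all variables, so the smallest subproblem that can move while staying feasible has size two; large-scale solvers build on exactly this observation.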
Making Large-Scale Support Vector Machine Learning Practical, 1998
"... Training a support vector machine (SVM) leads to a quadratic optimization problem with bound constraints and one linear equality constraint. Despite the fact that this type of problem is well understood, there are many issues to be considered in designing an SVM learner. In particular, for large lea ..."
Cited by 628 (1 self)
Learning from demonstration
Advances in Neural Information Processing Systems 9, 1997
"... By now it is widely accepted that learning a task from scratch, i.e., without any prior knowledge, is a daunting undertaking. Humans, however, rarely attempt to learn from scratch. They extract initial biases as well as strategies how to approach a learning problem from instructions and/or ..."
"... speed up learning. In general nonlinear learning problems, only model-based reinforcement learning shows significant speedup after a demonstration, while in the special case of linear quadratic regulator (LQR) problems, all methods profit from the demonstration. In an implementation of pole balancing ..."
Cited by 399 (32 self)
Generalization Bounds for Learning with Linear, Polygonal, Quadratic and Conic Side Knowledge
"... In this paper, we consider a supervised learning setting where side knowledge is provided about the labels of unlabeled examples. The side knowledge has the effect of reducing the hypothesis space, leading to tighter generalization bounds, and thus possibly better generalization. We consid ..."
Cited by 0
Support vector machines: Training and applications
A.I. Memo 1602, MIT A.I. Lab, 1997
"... The Support Vector Machine (SVM) is a new and very promising classification technique developed by Vapnik and his group at AT&T Bell Laboratories [3, 6, 8, 24]. This new learning algorithm can be seen as an alternative training technique for Polynomial, Radial Basis Function and Multi-Layer Perc ..."
"... of Support Vector Machines, its relationship with SRM, and its geometrical insight, are discussed in this paper. Since Structural Risk Minimization is an inductive principle that aims at minimizing a bound on the generalization error of a model, rather than minimizing the Mean Square Error over the data set ..."
Cited by 223 (3 self)
Copositive Relaxation for General Quadratic Programming
Optim. Methods Softw., 1998
"... We consider general, typically nonconvex, Quadratic Programming Problems. The Semidefinite relaxation proposed by Shor provides bounds on the optimal solution, but it does not always provide sufficiently strong bounds if linear constraints are also involved. To get rid of the linear side-constraint ..."
Cited by 22 (2 self)
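For context, the Shor relaxation mentioned in the snippet replaces a quadratic program by a semidefinite one. In a standard formulation (notation assumed here, not quoted from the paper), for minimizing $x^{\top}Qx + 2c^{\top}x$ subject to $Ax \le b$, one lifts $X = xx^{\top}$ and drops that nonconvex equation, keeping only its convex consequence:

```latex
\min_{x,\,X}\;\; \langle Q, X\rangle + 2c^{\top}x
\qquad \text{s.t.}\qquad
Ax \le b,
\qquad
\begin{pmatrix} 1 & x^{\top}\\ x & X \end{pmatrix} \succeq 0 .
```

Because only $X \succeq xx^{\top}$ is enforced, the bound can be weak when linear constraints interact with the quadratic objective, which is the gap the copositive strengthening in this paper targets.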
Generalization Bounds for Decision Trees, 2000
"... We derive a new bound on the error rate for decision trees. The bound depends both on the structure of the tree and the specific sample (not just the size of the sample). This bound is tighter than traditional bounds for unbalanced trees and justifies "compositional" algorithms for constru ..."
"... e.g., fitting a linear curve to clearly quadratic data. The fundamental question is how many parameters, or what concept size, should one allow for a given amount of training data. A standard theoretical approach is to prove a bound on generalization error as a function of the training error and the concept ..."
Cited by 26 (2 self)
Tree Elaboration Strategies in Branch-and-Bound Algorithms for Solving the Quadratic Assignment Problem, 1999
"... This paper presents a new strategy for selecting nodes in a branch-and-bound algorithm for solving exactly the Quadratic Assignment Problem (QAP). It was developed when it was learned that older strategies failed on the larger size problems. The strategy is a variation of polytomic depth-first searc ..."
Cited by 12 (3 self)
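As a toy illustration of the branch-and-bound scheme this snippet describes: assign facilities to locations one at a time, and prune any partial assignment whose cost already matches or exceeds the best complete solution found. The instance, the depth-first order, and the crude lower bound below are illustrative assumptions, not the paper's node-selection strategy.

```python
import itertools
import random

# Tiny QAP instance with nonnegative flows and distances (illustrative).
random.seed(1)
n = 4
F = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]
D = [[0 if i == j else random.randint(1, 9) for j in range(n)] for i in range(n)]

def cost(p):
    # Total cost of assigning facility i to location p[i].
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def partial_cost(assign):
    # Cost among already-placed facilities only; a valid lower bound
    # because every remaining term is nonnegative.
    k = len(assign)
    return sum(F[i][j] * D[assign[i]][assign[j]] for i in range(k) for j in range(k))

best = [float("inf"), None]  # [best cost, best permutation]

def branch(assign, free):
    if not free:
        c = cost(assign)
        if c < best[0]:
            best[0], best[1] = c, tuple(assign)
        return
    for loc in free:  # depth-first elaboration of the search tree
        assign.append(loc)
        if partial_cost(assign) < best[0]:  # prune dominated subtrees
            branch(assign, [l for l in free if l != loc])
        assign.pop()

branch([], list(range(n)))

# Sanity check against brute-force enumeration.
brute = min(cost(p) for p in itertools.permutations(range(n)))
assert best[0] == brute
```

The point made in the abstract is exactly about the `for loc in free` line: the order in which nodes are elaborated determines how early a good incumbent is found, and therefore how much of the tree the bound can prune.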
SVM soft margin classifiers: linear programming versus quadratic programming
Neural Comp.
"... Support vector machine soft margin classifiers are important learning algorithms for classification problems. They can be stated as convex optimization problems and are suitable for a large data setting. Linear programming SVM classifier is specially efficient for very large size samples. But little ..."
"... as that of the quadratic programming SVM. This is implemented by setting a stepping stone between the linear programming SVM and the classical 1-norm soft margin classifier. An upper bound for the misclassification error is presented for general probability distributions. Explicit learning rates are derived ..."
Cited by 12 (1 self)
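In a standard formulation (notation assumed, not quoted from the paper), the two programs being compared are the classical 1-norm soft margin QP,

```latex
\min_{w,\,b,\,\xi}\;\; \tfrac12\|w\|^{2} + C\sum_{i=1}^{m}\xi_i
\qquad \text{s.t.}\qquad
y_i\,(\langle w, x_i\rangle + b) \ge 1-\xi_i,\quad \xi_i \ge 0,
```

and a linear programming variant that penalizes the coefficients of the kernel expansion $f(x)=\sum_{j}\alpha_j y_j K(x, x_j)+b$ linearly instead of the squared norm:

```latex
\min_{\alpha,\,b,\,\xi}\;\; \sum_{j=1}^{m}\alpha_j + C\sum_{i=1}^{m}\xi_i
\qquad \text{s.t.}\qquad
y_i f(x_i) \ge 1-\xi_i,\quad \alpha_j \ge 0,\quad \xi_i \ge 0 .
```

Both objectives are convex, but the second has only linear terms, which is why LP solvers can handle it efficiently at very large sample sizes.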
Box-Inequalities for Quadratic Assignment Polytopes
Mathematical Programming, 1997
"... Linear Programming based lower bounds have been considered both for the general as well as for the symmetric quadratic assignment problem several times in the recent years. They have turned out to be quite good in practice. Investigations of the polytopes underlying the corresponding integer linear ..."
Cited by 10 (2 self)