Results 11–20 of 251
On the Influence of the Kernel on the Consistency of Support Vector Machines
 Journal of Machine Learning Research
, 2001
"... In this article we study the generalization abilities of several classifiers of support vector machine (SVM) type using a certain class of kernels that we call universal. It is shown that the soft margin algorithms with universal kernels are consistent for a large class of classification problems ..."
Cited by 212 (21 self)
... precise study of the underlying optimization problems of the classifiers. Furthermore, we show consistency for the maximal margin classifier as well as for the soft margin SVMs in the presence of large margins. In this case it turns out that constant regularization parameters also ensure ...
Perceptron-like Large Margin Classifiers
"... We consider perceptron-like algorithms with margin in which the standard classification condition is modified to require a specific value of the margin in the augmented space. The new algorithms are shown to converge in a finite number of steps and used to approximately locate the optimal weight vec ..."
Cited by 1 (1 self)
... algorithmic procedure could be regarded as an approximate maximal margin classifier. An important property of our method is that the computational cost for its implementation scales only linearly with the number of training patterns.
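The classification condition described in this abstract can be illustrated with a short sketch. The function below is a generic margin perceptron operating in the augmented space (bias absorbed into the weight vector), not the paper's exact formulation; the function and parameter names are invented for illustration.

```python
import numpy as np

def margin_perceptron(X, y, margin=1.0, lr=1.0, max_epochs=100):
    """Perceptron-like training loop that updates whenever a sample
    fails to clear the required margin. A sketch of the general idea;
    the paper's augmented-space algorithm differs in detail."""
    X = np.hstack([X, np.ones((X.shape[0], 1))])  # augmented space: absorb bias
    w = np.zeros(X.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= margin:   # modified classification condition
                w += lr * yi * xi         # standard perceptron update
                mistakes += 1
        if mistakes == 0:                 # every sample clears the margin
            break
    return w
```

Each epoch passes over all training patterns once, which is where the cost linear in the number of training patterns comes from.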
Noise Reduction for Instance-Based Learning with a Local Maximal Margin Approach
 Journal of Intelligent Information Systems
"... To some extent the problem of noise reduction in machine learning has been finessed by the development of learning techniques that are noise-tolerant. However, it is difficult to make instance-based learning noise-tolerant and noise reduction still plays an important role in k-nearest neighbour cla ..."
Cited by 7 (4 self)
... of maximal margin classifiers to bear on noise reduction. This provides a more robust alternative to the majority rule on which almost all the existing noise reduction techniques are based. Roughly speaking, for each training sample an SVM is trained on its neighbourhood and if the SVM classification ...
Large Margin DAGs for Multiclass Classification
 Advances in Neural Information Processing Systems 12
, 2000
"... We present a new learning architecture: the Decision Directed Acyclic Graph (DDAG), which is used to combine many two-class classifiers into a multiclass classifier. For an N-class problem, the DDAG contains N(N-1)/2 classifiers, one for each pair of classes. We present a VC analysis of the case when the nod ..."
Cited by 374 (1 self)
... the node classifiers are hyperplanes; the resulting bound on the test error depends on N and on the margin achieved at the nodes, but not on the dimension of the space. This motivates an algorithm, DAGSVM, which operates in a kernel-induced feature space and uses two-class maximal margin hyperplanes at each ...
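The evaluation path through a DDAG can be sketched in a few lines: starting from the full class list, each pairwise node eliminates one class, so a prediction needs only N-1 of the N(N-1)/2 classifiers. The code below is an illustrative sketch, not the paper's implementation; it assumes the classes are sorted and that `pairwise` holds one two-class decision function per pair `(a, b)` with `a < b`.

```python
def ddag_predict(x, classes, pairwise):
    """Evaluate a Decision DAG on a single input.
    `pairwise[(a, b)]` is a two-class decision function that returns
    either a or b. Each node compares the first and last surviving
    classes and eliminates the loser (a sketch, names hypothetical)."""
    candidates = list(classes)            # assumed sorted ascending
    while len(candidates) > 1:
        a, b = candidates[0], candidates[-1]
        winner = pairwise[(a, b)](x)
        if winner == a:
            candidates.pop()              # class b eliminated
        else:
            candidates.pop(0)             # class a eliminated
    return candidates[0]
```

Because only the ends of the candidate list are ever removed, the list stays sorted and every lookup key `(a, b)` keeps `a < b`.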
Empirical assessment of classification accuracy of Local SVM
 Dipartimento di Ingegneria e Scienza dell’Informazione
, 2008
"... The combination of maximal margin classifiers and the k-nearest neighbors rule, constructing an SVM on the neighborhood of the test sample in the feature space (called kNNSVM), was presented as a promising way of improving classification accuracy. Since no extensive validation of the method was performed ..."
Cited by 4 (4 self)
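The kNNSVM idea described in this abstract can be sketched with scikit-learn: fit an SVM only on the k nearest training points of the query and classify the query with it. This is a minimal illustration under assumed defaults (linear kernel, single-label fallback), not the authors' implementation, and the function name is invented.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import NearestNeighbors

def knn_svm_predict(X_train, y_train, x_test, k=5):
    """kNNSVM sketch: train a local SVM on the k nearest neighbours
    of the query point and use it to classify the query. Falls back
    to the neighbourhood's single label when only one class appears."""
    nn = NearestNeighbors(n_neighbors=k).fit(X_train)
    idx = nn.kneighbors([x_test], return_distance=False)[0]
    Xk, yk = X_train[idx], y_train[idx]
    if len(np.unique(yk)) == 1:            # degenerate neighbourhood
        return yk[0]
    svm = SVC(kernel="linear").fit(Xk, yk) # local maximal margin classifier
    return svm.predict([x_test])[0]
```

Note that a fresh SVM is fit per query, trading prediction-time cost for locality; a kernelized `SVC` would realize the feature-space neighbourhood version the abstract describes.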
Boosting in the limit: Maximizing the margin of learned ensembles
 In Proceedings of the Fifteenth National Conference on Artificial Intelligence
, 1998
"... The "minimum margin" of an ensemble classifier on a given training set is, roughly speaking, the smallest vote it gives to any correct training label. Recent work has shown that the Adaboost algorithm is particularly effective at producing ensembles with large minimum margins, and theory s ..."
Cited by 124 (0 self)
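The "minimum margin" quantity this abstract discusses is straightforward to compute for a weighted ±1 voting ensemble: each example's margin is its normalized signed vote, and the ensemble's minimum margin is the smallest such value over the training set. The helper below is illustrative, with invented names, not code from the paper.

```python
import numpy as np

def minimum_margin(alphas, hyp_preds, y):
    """Minimum margin of a weighted voting ensemble.
    hyp_preds[i, t] is hypothesis t's ±1 prediction on example i,
    alphas are the (nonnegative) hypothesis weights, y the ±1 labels.
    Margins lie in [-1, 1]; a positive minimum means every training
    example is voted correctly."""
    alphas = np.asarray(alphas, dtype=float)
    votes = hyp_preds @ alphas / alphas.sum()   # normalized vote per example
    return float((y * votes).min())             # smallest correct-label vote
```

AdaBoost-style analyses bound generalization error in terms of the distribution of these margins rather than the training error alone.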
Efficient Margin Maximizing with Boosting
, 2003
"... AdaBoost produces a linear combination of base hypotheses and predicts with the sign of this linear combination. It has been observed that the generalization error of the algorithm continues to improve even after all examples are classified correctly by the current signed linear combination, whic ..."
Cited by 50 (7 self)
Structural Risk Minimization over Data-Dependent Hierarchies
, 1996
"... The paper introduces some generalizations of Vapnik's method of structural risk minimisation (SRM). As well as making explicit some of the details on SRM, it provides a result that allows one to trade off errors on the training sample against improved generalization performance. It then conside ..."
Cited by 275 (69 self)
... It then considers the more general case when the hierarchy of classes is chosen in response to the data. A result is presented on the generalization performance of classifiers with a "large margin". This theoretically explains the impressive generalization performance of the maximal margin hyperplane ...
Maximizing the Margin with Boosting
"... AdaBoost produces a linear combination of weak hypotheses. It has been observed that the generalization error of the algorithm continues to improve even after all examples are classified correctly by the current linear combination, i.e. by a hyperplane in feature space spanned by the weak hypothese ..."
Cited by 23 (5 self)
...Boost that explicitly maximizes the minimum margin of the examples. We bound the number of iterations and the number of hypotheses used in the final linear combination, which approximates the maximum margin hyperplane with a certain precision. Our modified algorithm essentially retains the exponential convergence ...
A Metaheuristics Approach to Protein Active Site Detection
"... One of the aims of modern Bioinformatics is to discover the molecular mechanisms that rule the protein operation. This would allow us to understand the complex processes involved in living systems and possibly correct dysfunctions. Given the evident interest of the pharmaceutical industry, an active ..."
... detection of active sites are needed. There may be many ways to deal with the problem of automatic protein active site identification. We have defined it as a binary classification task and we have applied efficient linear maximal margin classifiers such as SVMs, extended with the use of kernel methods. We ...