Results 1-10 of 30
Convex formulations of radius-margin based Support Vector Machines
Abstract: "We consider Support Vector Machines (SVMs) learned together with linear transformations of the feature spaces on which they are applied. Under this scenario the radius of the smallest data-enclosing sphere is no longer fixed. Therefore optimizing the SVM error bound by considering both the radius and ..."
Cited by 2 (0 self)
Radius Margin Bounds for Support Vector . . .
NEURAL COMPUTATION, 2003
Abstract: "An important approach for efficient support vector machine (SVM) model selection is to use differentiable bounds of the leave-one-out (loo) error. Past efforts focused on finding tight bounds of loo, for example, radius-margin bounds, span bounds, etc. However, their practical viability is still not ve ..."
Margin and radius based multiple kernel learning
In Proc. of the European Conf. on Machine Learning and Knowledge Discovery in Databases: Part I, 2009
Abstract: "A serious drawback of kernel methods, and Support Vector Machines (SVM) in particular, is the difficulty in choosing a suitable kernel function for a given dataset. One of the approaches proposed to address this problem is Multiple Kernel Learning (MKL), in which several kernels are combined ..."
Cited by 7 (1 self)
Feature weighting using margin and radius based error bound optimization in SVMs
In European Conference on Machine Learning (ECML)
Abstract: "The Support Vector Machine error bound is a function of the margin and radius. Standard SVM algorithms maximize the margin within a given feature space, therefore the radius is fixed and thus ignored in the optimization. We propose an extension of the standard SVM optimization in which we ..."
Cited by 4 (3 self)
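Several of these abstracts optimize the quantity the radius-margin bound is built from, essentially R²/γ², where R is the radius of the smallest data-enclosing sphere and γ is the geometric margin. A minimal numerical sketch, assuming a hand-picked separating hyperplane (w, b) and toy 2-D data (both hypothetical, not taken from any of the listed papers), and overestimating R by the largest distance from the centroid:

```python
import math

def margin(points, labels, w, b):
    """Geometric margin min_i y_i (w . x_i + b) / ||w|| for a fixed hyperplane."""
    norm_w = math.sqrt(sum(wj * wj for wj in w))
    return min(y * (sum(wj * xj for wj, xj in zip(w, x)) + b)
               for x, y in zip(points, labels)) / norm_w

def radius_upper_bound(points):
    """Largest distance from the centroid: overestimates the true minimum
    enclosing-sphere radius by at most a factor of 2."""
    d = len(points[0])
    centroid = [sum(x[j] for x in points) / len(points) for j in range(d)]
    return max(math.dist(x, centroid) for x in points)

# Toy linearly separable data (hypothetical example).
X = [(-2.0, 0.0), (-1.0, 1.0), (1.0, -1.0), (2.0, 0.0)]
y = [-1, -1, 1, 1]
w, b = (1.0, 0.0), 0.0          # separating hyperplane x1 = 0

gamma = margin(X, y, w, b)      # -> 1.0
R = radius_upper_bound(X)       # -> 2.0
bound = (R / gamma) ** 2        # radius-margin quantity R^2 / gamma^2 -> 4.0
```

Standard SVM training maximizes γ alone; the papers above jointly adjust the feature space, which is exactly when R stops being a constant and the full ratio matters.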
Gradient-Based Adaptation of General Gaussian Kernels
2005
Abstract: "Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines ..."
Cited by 21 (9 self)
NOTE: Gradient-based Adaptation of General Gaussian Kernels
Abstract: "Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parametrization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines ..."
Support Vector Machines, Kernel Fisher Discriminant analysis
Abstract: "This review provides an introduction to Support Vector Machines, Kernel Fisher Discriminant analysis ..."
Machine Models and Other Preliminary Definitions
Abstract: "The (algebraic) Random Access Machine (RAM) is a sequential model of computation where each arithmetic or logical operation, such as addition, subtraction, multiplication, division, and comparison, can be done in one step by a given processor. For our model of computation, we assume the algebraic Parallel Random Access Machine (PRAM) ... time. Throughout this paper all logarithms are base 2. Let P(n) be the processor bound to multiply two degree-n polynomials in O(log n) parallel time using a PRAM. The best known bound for P(n) over arbitrary fields is n log log n [CK], but if the field supports an FFT of size n, then the best bound ..."
Feasible Adaptation Criteria for Hybrid Wavelet Large Margin Classifiers
2002
Abstract: "In the context of signal classification, this paper assembles and compares criteria to easily judge the discrimination quality of a set of feature vectors. The quality measures are based on the assumption that a Support Vector Machine is used for the final classification. Thus, the ultimate criterion ... the criteria are easily computable while still reliably predicting the classification performance. We also present a novel approach for computing the radius of a set of points in feature space. The radius, in relation to the margin, forms the most commonly used error bound for Support Vector Machines ..."
Cited by 1 (1 self)
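The radius this entry refers to lives in feature space, so it can only be accessed through kernel evaluations. The exact smallest-enclosing-sphere radius requires solving a small quadratic program, but the standard kernel identity for distances to the feature-space centroid gives a cheap bracket: the largest centroid distance lies between R and 2R. A sketch under those assumptions, with a hypothetical RBF kernel and toy data:

```python
import math

def rbf(x, z, gamma=0.5):
    """Gaussian RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def centroid_radius(points, k):
    """Largest feature-space distance from the kernel centroid, using
    ||phi(x) - mu||^2 = k(x,x) - (2/n) sum_j k(x,x_j) + (1/n^2) sum_ij k(x_i,x_j).
    Overestimates the true enclosing-sphere radius by at most a factor of 2."""
    n = len(points)
    mean_kk = sum(k(a, b) for a in points for b in points) / n ** 2

    def dist2(x):
        return k(x, x) - 2.0 * sum(k(x, z) for z in points) / n + mean_kk

    return math.sqrt(max(dist2(x) for x in points))

# Corners of the unit square (hypothetical example).
X = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
R_approx = centroid_radius(X, rbf)
```

With a linear kernel (plain dot product) the same routine reduces to the ordinary input-space distance to the centroid, which is an easy sanity check: for the unit square it returns sqrt(0.5).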
Permutation invariant SVMs
In International Conference on Machine Learning (ICML), 2006
Abstract: "We extend Support Vector Machines to input spaces that are sets by ensuring that the classifier is invariant to permutations of sub-elements within each input. Such permutations include reordering of scalars in an input vector, reorderings of tuples in an input matrix, or reorderings of general objects (in Hilbert spaces) within a set as well. This approach induces permutational invariance in the classifier, which can then be directly applied to unusual set-based representations of data. The permutation invariant Support Vector Machine alternates the Hungarian method for maximum weight matching ..."
Cited by 9 (1 self)
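The matching step this abstract alternates with SVM training can be illustrated with a brute-force stand-in for the Hungarian method (feasible only for tiny sets; the actual algorithm runs in O(n³)). Both the similarity function and the data here are hypothetical illustrations, not taken from the paper:

```python
from itertools import permutations

def best_matching(set_a, set_b, sim):
    """Exhaustively find the permutation of set_b maximizing total pairwise
    similarity with set_a -- a brute-force stand-in for the Hungarian method."""
    best = max(permutations(range(len(set_b))),
               key=lambda p: sum(sim(a, set_b[j]) for a, j in zip(set_a, p)))
    return list(best)

def sim(a, b):
    # Similarity as negative squared Euclidean distance (hypothetical choice).
    return -sum((x - y) ** 2 for x, y in zip(a, b))

A = [(0.0, 0.0), (5.0, 5.0)]
B = [(5.1, 4.9), (0.2, -0.1)]
perm = best_matching(A, B, sim)   # aligns B's elements to A's before comparing
```

Aligning one set to the other this way is what makes the resulting comparison insensitive to the order in which each set's elements are listed.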