Results 1–10 of 14
Approximating Parameterized Convex Optimization Problems
, 2010
Abstract

Cited by 11 (3 self)
We consider parameterized convex optimization problems over the unit simplex that depend on one parameter. We provide a simple and efficient scheme for maintaining an ε-approximate solution (and a corresponding ε-coreset) along the entire parameter path. We prove correctness and optimality of the method. Practically relevant instances of this parameterized optimization problem are, for example, regularization paths of support vector machines, multiple kernel learning, and minimum enclosing balls of moving points.
Streaming algorithms for extent problems in high dimensions
 in SODA ’10: Proc. Twenty-First ACM-SIAM Symposium on Discrete Algorithms
, 2010
Abstract

Cited by 10 (0 self)
We develop (single-pass) streaming algorithms for maintaining extent measures of a stream S of n points in R^d. We focus on designing streaming algorithms whose working space is polynomial in d (poly(d)) and sublinear in n. For the problems of computing the diameter, width, and minimum enclosing ball of S, we obtain lower bounds on the worst-case approximation ratio of any streaming algorithm that uses poly(d) space. On the positive side, we introduce the notion of a blurred ball cover and use it for answering approximate farthest-point queries and maintaining an approximate minimum enclosing ball and diameter of S. We describe a streaming algorithm for maintaining a blurred ball cover whose working space is linear in d and independent of n.
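As a point of contrast for these lower bounds, the folklore single-pass baseline for one of the extent measures is easy to state: keeping only the first point of the stream already yields a 2-approximation of the diameter in O(d) working space. Below is a hedged sketch in Python; this is the textbook baseline, not the paper's blurred-ball-cover method.

```python
import math

def stream_diameter_2approx(stream):
    """Single-pass 2-approximation of the diameter of a point stream.

    Keeps only the first point p0 and the farthest distance seen from
    it, so the working space is O(d), independent of n.  By the
    triangle inequality, max_p dist(p0, p) is within a factor 2 of the
    true diameter:  diam/2 <= result <= diam.
    """
    it = iter(stream)
    p0 = next(it)                 # anchor: the first stream point
    best = 0.0
    for p in it:
        best = max(best, math.dist(p0, p))   # O(d) work per point
    return best
```

For three collinear points (0,0), (3,4), (6,8) the true diameter is 10 and the anchor happens to be an endpoint, so the sketch returns it exactly.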
An equivalence between the lasso and support vector machines, arXiv:1303.1152
, 2013
Abstract

Cited by 5 (1 self)
We investigate the relation between two fundamental tools in machine learning: the support vector machine (SVM) for classification, and the Lasso technique used in regression. We show that the resulting optimization problems are equivalent, in the following sense: given any instance of an ℓ2-loss soft-margin (or hard-margin) SVM, we construct a Lasso instance having the same optimal solutions, and vice versa. In consequence, many existing optimization algorithms for both SVMs and the Lasso can also be applied to the respective other problem instances. Also, the equivalence allows many known theoretical insights for the SVM and the Lasso to be translated between the two settings. One such implication gives a simple kernelized version of the Lasso, analogous to the kernels used in the SVM setting. Another consequence is that the sparsity of a Lasso solution is equal to the number of support vectors for the corresponding SVM instance, and that one can use screening rules to prune the set of support vectors. Furthermore, we can relate sublinear-time algorithms for the two problems, and give a new such algorithm variant for the Lasso.
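One direction of such an equivalence can be sketched in a few lines (a sketch only, not the paper's full construction: the tightness of the ℓ1 constraint and the reverse SVM-to-Lasso direction need extra care). Splitting the Lasso variable into positive and negative parts turns the ℓ1 ball into the unit simplex:

```latex
\min_{\|x\|_1 \le 1} \|Ax-b\|_2^2
\quad\xrightarrow{\;x \,=\, x^{+} - x^{-}\;}\quad
\min_{w \in \Delta_{2n}} \|Zw\|_2^2,
\qquad Z := [\,A,\,-A\,] - b\,\mathbf{1}^{\top},
```

where $w = (x^{+}; x^{-})$ and the term $b\,\mathbf{1}^{\top}$ can be folded into the columns because $\mathbf{1}^{\top} w = 1$ on the simplex. Minimizing $\|Zw\|_2^2$ over the simplex is the polytope-distance-to-origin problem that also arises as the hard-margin SVM dual, which is what lets algorithms and sparsity statements transfer between the two settings.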
New Approximation Algorithms for Minimum Enclosing Convex Shapes
Abstract

Cited by 3 (0 self)
Given n points in a d-dimensional Euclidean space, the Minimum Enclosing Ball (MEB) problem is to find the ball with the smallest radius which contains all n points. We give two approximation algorithms for producing an enclosing ball whose radius is at most ε away from the optimum. The first requires O(ndL/√ε) effort, where L is a constant that depends on the scaling of the data. The second is an O*(ndQ/√ε) approximation algorithm, where Q is an upper bound on the norm of the points. This is in contrast with coreset-based algorithms, which yield an O(nd/ε) greedy algorithm. Finding the Minimum Enclosing Convex Polytope (MECP) is a related problem wherein a convex polytope of a fixed shape is given and the aim is to find the smallest magnification of the polytope which encloses the given points. For this problem we present O(mndL/ε) and O*(mndQ/ε) approximation algorithms, where m is the number of faces of the polytope. Our algorithms borrow heavily from convex duality and recently developed techniques in non-smooth optimization, and are in contrast with existing methods which rely on geometric arguments. In particular, we specialize the excessive gap framework of Nesterov [19] to obtain our results.
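The coreset-style greedy that this abstract contrasts against can be sketched in a few lines (a hedged sketch of the classic Badoiu-Clarkson iteration, not the authors' excessive-gap algorithm; the 1/ε² iteration count and the step rule follow the standard presentation of that method):

```python
import math

def meb_badoiu_clarkson(points, eps):
    """(1+eps)-approximate minimum enclosing ball via the classic
    coreset-style greedy: repeatedly pull the current center toward
    the farthest remaining point with a shrinking step.  About
    ceil(1/eps^2) iterations suffice, each costing O(n d)."""
    c = list(points[0])                       # start at an input point
    iters = int(math.ceil(1.0 / eps ** 2))
    for i in range(1, iters + 1):
        q = max(points, key=lambda p: math.dist(c, p))  # farthest point
        step = 1.0 / (i + 1)                  # shrinking step size
        c = [ci + step * (qi - ci) for ci, qi in zip(c, q)]
    radius = max(math.dist(c, p) for p in points)
    return c, radius
```

On the corners of the unit square the optimal ball has center (0.5, 0.5) and radius √0.5 ≈ 0.707; with eps = 0.1 the sketch lands within the (1+eps) guarantee.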
A novel Frank-Wolfe algorithm: analysis and applications to large-scale SVM training. Information Sciences (in press)
, 2014
Abstract

Cited by 3 (1 self)
Recently, there has been a renewed interest in the machine learning community in variants of a sparse greedy approximation procedure for concave optimization known as the Frank-Wolfe (FW) method. In particular, this procedure has been successfully applied to train large-scale instances of non-linear Support Vector Machines (SVMs). Specializing FW to SVM training has yielded efficient algorithms as well as important theoretical results, including convergence analyses of training algorithms and new characterizations of model sparsity. In this paper, we present and analyze a novel variant of the FW method based on a new way to perform away steps, a classic strategy used to accelerate the convergence of the basic FW procedure. Our formulation and analysis are focused on a general concave maximization problem on the simplex. However, the specialization of our algorithm to quadratic forms is strongly related to some classic methods in computational geometry, namely the Gilbert and MDM algorithms. On the theoretical side, we demonstrate that the method matches the guarantees, in terms of convergence rate and number of iterations, obtained by using classic away steps. In particular, the method enjoys a linear rate of convergence, a result that has recently been proved for MDM on quadratic forms. On the practical side, we provide experiments on several classification datasets, and evaluate the results using statistical tests. Experiments show that our method is faster than the FW method with classic away steps, and works well even in cases in which classic away steps slow down the algorithm. Furthermore, these improvements are obtained without sacrificing the predictive accuracy of the obtained SVM model.
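The basic FW procedure that such away-step variants build on is short enough to sketch (a minimal sketch on the unit simplex, assuming a concave objective supplied through its gradient; this is the vanilla method with the standard 2/(t+2) step size, without the away steps the paper studies):

```python
def frank_wolfe_simplex(grad, n, iters=200):
    """Vanilla Frank-Wolfe on the unit simplex for a concave objective.

    grad(w) returns the gradient at w.  The linear maximizer over the
    simplex is simply the vertex e_s with s = argmax_i grad(w)_i, so
    each update mixes the current iterate with one vertex and every
    iterate stays a probability vector (hence sparse early on)."""
    w = [1.0 / n] * n                           # start at the barycenter
    for t in range(iters):
        g = grad(w)
        s = max(range(n), key=lambda i: g[i])   # best simplex vertex
        gamma = 2.0 / (t + 2)                   # standard FW step size
        w = [(1 - gamma) * wi for wi in w]      # shrink toward 0 ...
        w[s] += gamma                           # ... and move to e_s
    return w

# Toy concave problem: maximize f(w) = -sum_i (w_i - t_i)^2 for a
# target t inside the simplex; the gradient is 2*(t - w) and the
# optimum is w = t.
t = [0.5, 0.3, 0.2]
w = frank_wolfe_simplex(lambda w: [2 * (ti - wi) for ti, wi in zip(t, w)], 3)
```

With 200 iterations the O(1/t) rate of plain FW brings every coordinate close to the target; away steps exist precisely to remove the slow zig-zagging this vanilla scheme exhibits near such interior optima.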
A Linearly Convergent LinearTime FirstOrder Algorithm for Support Vector Classification with a Core Set Result
Abstract

Cited by 2 (0 self)
We present a simple, first-order approximation algorithm for the support vector classification problem. Given a pair of linearly separable data sets and ε ∈ (0, 1), the proposed algorithm computes a separating hyperplane whose margin is within a factor of (1 − ε) of that of the maximum-margin separating hyperplane. We discuss how our algorithm can be extended to non-linearly separable and inseparable data sets. The running time of our algorithm is linear in the number of data points and in 1/ε. In particular, the number of support vectors computed by the algorithm is bounded above by O(ζ/ε) for all sufficiently small ε > 0, where ζ is the square of the ratio of the distances between the farthest and closest points in the two data sets. Furthermore, we establish that our algorithm exhibits linear convergence. We adopt the real number model of computation in our analysis.
A characterization theorem and an algorithm for a convex hull problem
, 2013
Fast SVM training using approximate extreme points
 JMLR
Abstract

Cited by 1 (0 self)
The application of non-linear kernel Support Vector Machines (SVMs) to large datasets is seriously hampered by excessive training time. We propose a modification, called the approximate extreme points support vector machine (AESVM), that is aimed at overcoming this burden. Our approach relies on conducting the SVM optimization over a carefully selected subset, called the representative set, of the training dataset. We present analytical results that indicate the similarity of AESVM and SVM solutions. A linear-time algorithm based on convex hulls and extreme points is used to compute the representative set in kernel space. Extensive computational experiments on nine datasets compared AESVM
Approximate Regularization Paths for ℓ2loss Support Vector Machines
Abstract
We consider approximate regularization paths for kernel methods, and in particular ℓ2-loss Support Vector Machines (SVMs). We provide a simple and efficient framework for maintaining an ε-approximate solution (and a corresponding ε-coreset) along the entire regularization path. We prove correctness and also the practical efficiency of our method. Unlike previous algorithms, our algorithm does not need any matrix inversion and relies solely on first-order information. For the first time, this makes it practical to investigate the solution path for larger problems. In particular, we show that a solution has to be updated at most O(1/ε) times along the solution path in order to keep the approximation guarantee. This also implies that we can do both training and cross-validation over the entire solution path using only O(n/ε²) kernel evaluations. We also apply our method to multiple kernel learning, to find the best convex combination of two kernels for an ℓ2-loss SVM with respect to cross-validation.
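The "update the solution only O(1/ε) times along the path" idea can be illustrated on a toy problem (a hedged sketch on a scalar ridge objective with a known closed-form minimizer, not the paper's kernel-SVM path algorithm; the objective f and the grid of λ values are illustrative choices):

```python
def approx_path(lambdas, eps):
    """Path-following sketch on the toy scalar ridge problem
    f_lam(x) = (x - 1)^2 + lam * x^2, whose exact minimizer is
    x*(lam) = 1/(1 + lam) with value f_lam(x*) = lam/(1 + lam).

    We keep the current solution while it stays (1+eps)-approximate
    and re-solve only when the guarantee breaks, counting how many
    updates the whole path needs."""
    def f(lam, x):
        return (x - 1.0) ** 2 + lam * x * x

    x, updates, path = None, 0, []
    for lam in lambdas:
        opt = lam / (1.0 + lam)              # optimal value at this lam
        if x is None or f(lam, x) > (1.0 + eps) * opt:
            x = 1.0 / (1.0 + lam)            # re-solve exactly
            updates += 1
        path.append(x)                       # always eps-approximate
    return path, updates
```

On a geometric grid of 100 values of λ the same solution survives many consecutive grid points, so the number of re-solves stays far below the grid size while every returned point is (1+ε)-approximate by construction.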