Results 1–10 of 191
Pegasos: Primal Estimated sub-gradient solver for SVM
Cited by 297 (15 self)

Abstract:
We describe and analyze a simple and effective stochastic sub-gradient descent algorithm for solving the optimization problem cast by Support Vector Machines (SVM). We prove that the number of iterations required to obtain a solution of accuracy ε is Õ(1/ε), where each iteration operates on a single training example. In contrast, previous analyses of stochastic gradient descent methods for SVMs require Ω(1/ε²) iterations. As in previously devised SVM solvers, the number of iterations also scales linearly with 1/λ, where λ is the regularization parameter of SVM. For a linear kernel, the total runtime of our method is Õ(d/(λε)), where d is a bound on the number of non-zero features in each example. Since the runtime does not depend directly on the size of the training set, the resulting algorithm is especially suited for learning from large datasets. Our approach also extends to non-linear kernels while working solely on the primal objective function, though in this case the runtime does depend linearly on the training set size. Our algorithm is particularly well suited for large text classification problems, where we demonstrate an order-of-magnitude speedup over previous SVM learning methods.
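The Õ(1/ε) rate comes from a hinge-loss sub-gradient step with learning rate η_t = 1/(λt) on one random example per iteration. A minimal sketch of that update (function name, defaults, and the optional projection step are illustrative, not the paper's code):

```python
import numpy as np

def pegasos(X, y, lam=0.01, T=1000, seed=0):
    """Sketch of the Pegasos stochastic sub-gradient SVM solver.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Each iteration uses a single random example with eta_t = 1/(lam*t).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(1, T + 1):
        i = rng.integers(n)
        eta = 1.0 / (lam * t)
        if y[i] * X[i].dot(w) < 1:            # margin violated: hinge sub-gradient
            w = (1 - eta * lam) * w + eta * y[i] * X[i]
        else:                                 # only the regularizer contributes
            w = (1 - eta * lam) * w
        norm = np.linalg.norm(w)              # optional projection onto the
        if norm > 1.0 / np.sqrt(lam):         # ball of radius 1/sqrt(lam)
            w *= (1.0 / np.sqrt(lam)) / norm
    return w
```

Note the runtime per iteration depends only on the number of non-zero features of the sampled example, which is what makes the total cost independent of the training-set size.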
Logarithmic regret algorithms for online convex optimization
In 19th COLT, 2006
Cited by 122 (25 self)

Abstract:
In an online convex optimization problem a decision-maker makes a sequence of decisions, i.e., chooses a sequence of points in Euclidean space, from a fixed feasible set. After each point is chosen, it encounters a sequence of (possibly unrelated) convex cost functions. Zinkevich [Zin03] introduced this framework, which models many natural repeated decision-making problems and generalizes many existing problems such as Prediction from Expert Advice and Cover’s Universal Portfolios. Zinkevich showed that a simple online gradient descent algorithm achieves additive regret O(√T), for an arbitrary sequence of T convex cost functions (of bounded gradients), with respect to the best single decision in hindsight. In this paper, we give algorithms that achieve regret O(log(T)) for an arbitrary sequence of strictly convex functions (with bounded first and second derivatives). This mirrors what has been done for the special cases of prediction from expert advice by Kivinen and Warmuth [KW99], and Universal Portfolios by Cover [Cov91]. We propose several algorithms achieving logarithmic regret, which besides being more general are also much more efficient to implement. The main new ideas give rise to an efficient algorithm based on the Newton method for optimization, a new tool in the field. Our analysis shows a surprising connection to the follow-the-leader method, and builds on the recent work of Agarwal and Hazan [AH05]. We also analyze other algorithms, which tie together several different previous approaches including follow-the-leader, exponential weighting, Cover’s algorithm and gradient descent.
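For α-strongly-convex costs, the O(log T) regret of online gradient descent follows from the step size η_t = 1/(αt). A minimal sketch of that loop (the `gradients`/`project` interface is an assumption, not the paper's notation):

```python
import numpy as np

def online_gradient_descent(gradients, project, alpha, x0):
    """Online gradient descent with step size eta_t = 1/(alpha * t).

    For alpha-strongly-convex cost functions this schedule yields
    O(log T) regret. `gradients` is an iterable giving, per round, a
    function that returns the gradient of that round's cost at the
    current point; `project` maps a point back onto the feasible set.
    """
    x = np.asarray(x0, dtype=float)
    trajectory = [x.copy()]
    for t, grad in enumerate(gradients, start=1):
        eta = 1.0 / (alpha * t)          # the decaying step that gives log regret
        x = project(x - eta * grad(x))
        trajectory.append(x.copy())
    return trajectory
```

Compare with Zinkevich's O(√T) setting, where the step size decays like 1/√t and no strong convexity is assumed.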
Boosting Verification by Automatic Tuning of Decision Procedures
In Seventh International Conference on Formal Methods in Computer-Aided Design, 2007
Cited by 46 (30 self)

Abstract:
Parameterized heuristics abound in computer-aided design and verification, and manual tuning of the respective parameters is difficult and time-consuming. Very recent results from the artificial intelligence (AI) community suggest that this tuning process can be automated, and that doing so can lead to significant performance improvements; furthermore, automated parameter optimization can provide valuable guidance during the development of heuristic algorithms. In this paper, we study how such an AI approach can improve a state-of-the-art SAT solver for large, real-world bounded model checking and software verification instances. The resulting, automatically derived parameter settings yielded runtimes on average 4.5 times faster on bounded model checking instances and 500 times faster on software verification problems than extensive hand-tuning of the decision procedure. Furthermore, the availability of automatic tuning influenced the design of the solver, and the automatically derived parameter settings provided a deeper insight into the properties of problem instances.
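The paper uses a dedicated configurator to search the solver's parameter space. Purely to illustrate the automate-the-tuning idea, a plain random search over a discrete parameter space might look like this (all names are hypothetical, and this is not the configurator the paper uses):

```python
import random

def tune_parameters(param_space, evaluate, budget=100, seed=0):
    """Hypothetical sketch of automated parameter tuning by random search.

    param_space: dict mapping parameter name -> list of candidate values.
    evaluate: config dict -> cost (e.g. mean runtime on training instances).
    Returns the best configuration found within the evaluation budget.
    """
    rng = random.Random(seed)
    best_cfg, best_cost = None, float("inf")
    for _ in range(budget):
        cfg = {name: rng.choice(vals) for name, vals in param_space.items()}
        cost = evaluate(cfg)
        if cost < best_cost:
            best_cfg, best_cost = cfg, cost
    return best_cfg, best_cost
```

Real configurators improve on this with local search and capped runs, but the interface (a parameter space plus a cost function over training instances) is the same.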
A Method for Handling Uncertainty in Evolutionary Optimization with an Application to Feedback Control of Combustion
Cited by 29 (9 self)

Abstract:
We present a novel method for handling uncertainty in evolutionary optimization. The method entails quantification and treatment of uncertainty and relies on the rank-based selection operator of evolutionary algorithms. The proposed uncertainty handling is implemented in the context of the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) and verified on test functions. The present method is independent of the uncertainty distribution, prevents premature convergence of the evolution strategy, and is well suited for online optimization as it requires only a small number of additional function evaluations. The algorithm is applied in an experimental setup to the online optimization of feedback controllers of thermoacoustic instabilities of gas turbine combustors. In order to mitigate these instabilities, gain-delay or model-based H∞ controllers sense the pressure and command secondary fuel injectors. The parameters of these controllers are usually specified via a trial-and-error procedure. We demonstrate that their online optimization with the proposed methodology enhances, in an automated fashion, the online performance of the controllers, even under highly unsteady operating conditions, and it also compensates for uncertainties in the model-building and design process.
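The rank-based idea can be illustrated by re-evaluating each candidate under noise and measuring how much the ranking changes; the paper's actual uncertainty statistic is more refined than this sketch, which only captures the rank-comparison principle:

```python
import numpy as np

def rank_change_uncertainty(f_first, f_second):
    """Illustrative rank-change uncertainty measure.

    f_first, f_second: two noisy fitness evaluations of the same
    population. Returns the mean absolute change in rank between the
    two evaluations; zero means ranking (and hence rank-based
    selection) is unaffected by the noise.
    """
    ranks_a = np.argsort(np.argsort(f_first))    # rank of each candidate, run 1
    ranks_b = np.argsort(np.argsort(f_second))   # rank of each candidate, run 2
    return float(np.mean(np.abs(ranks_a - ranks_b)))
```

Because only ranks enter the measure, it is independent of the scale and distribution of the noise, which is the property the abstract emphasizes.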
Global random optimization by simultaneous perturbation stochastic approximation
In Proc. Amer. Control Conf., 2001
Cited by 28 (0 self)

Abstract:
We examine the theoretical and numerical global convergence properties of a certain “gradient-free” stochastic approximation algorithm called the “simultaneous perturbation stochastic approximation (SPSA)” that has performed well in complex optimization problems. We establish two theorems on the global convergence of SPSA, the first involving the well-known method of injected noise. The second theorem establishes conditions under which “basic” SPSA without injected noise can achieve convergence in probability to a global optimum, a result with important practical benefits. Index Terms: global convergence, simulated annealing, simultaneous perturbation stochastic approximation (SPSA), stochastic approximation (SA), stochastic optimization.
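SPSA's distinguishing feature is that it perturbs all coordinates simultaneously with a random ±1 vector, so each gradient estimate costs two loss evaluations regardless of dimension. A minimal sketch of "basic" SPSA without injected noise (the gain schedules are common defaults, not taken from this paper):

```python
import numpy as np

def spsa_step(theta, loss, a, c, rng):
    """One SPSA iteration with a simultaneous-perturbation gradient estimate."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)     # Rademacher perturbation
    # Two-sided difference along the random direction, divided element-wise
    g_hat = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c * delta)
    return theta - a * g_hat

def spsa(loss, theta0, iters=200, seed=0):
    """Run SPSA with standard decaying gain sequences a_k, c_k."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        a_k = 0.1 / k ** 0.602          # step-size gain (typical exponent)
        c_k = 0.1 / k ** 0.101          # perturbation size (typical exponent)
        theta = spsa_step(theta, loss, a_k, c_k, rng)
    return theta
```

The global-convergence variants in the paper additionally inject noise into the update (simulated-annealing style), which this local sketch omits.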
Simulated Annealing for Convex Optimization
In Mathematics of Operations Research, 2004
Sampling-Based Path Planning on Configuration-Space Costmaps
In IEEE Transactions on Robotics, 2010
Cited by 20 (11 self)

Abstract:
This paper addresses path planning considering a cost function defined over the configuration space. The proposed Transition-based RRT planner computes low-cost paths that follow valleys and saddle points of the configuration-space costmap. It combines the exploratory strength of RRTs with transition tests used in stochastic optimization methods to accept or to reject new potential states. The planner is analyzed and shown to compute low-cost solutions with respect to a path quality criterion based on the notion of mechanical work. A large set of experimental results is provided to demonstrate the effectiveness of the method. Current limitations and possible extensions are also discussed.
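The transition test that accepts or rejects a new candidate state can be sketched as a Metropolis-style criterion borrowed from stochastic optimization; the constant K and the temperature policy below are illustrative assumptions, not the paper's exact parameters:

```python
import math
import random

def transition_test(cost_current, cost_new, temperature, d_step, K=1.0, rng=random):
    """Metropolis-style transition test in the spirit of T-RRT.

    Downhill moves (cost decreases) are always accepted; uphill moves
    are accepted with probability exp(-dcost / (K * temperature * d_step)),
    where d_step is the extension step length. Low temperatures make
    the planner hug cost valleys; raising the temperature lets it
    cross saddle points.
    """
    if cost_new <= cost_current:
        return True
    p = math.exp(-(cost_new - cost_current) / (K * temperature * d_step))
    return rng.random() < p
```

In the planner this test filters the states proposed by the usual RRT extension, which is what biases the tree toward low-cost regions of the costmap.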
Stochastic Gradient Descent Training for L1-regularized Log-linear Models with Cumulative Penalty
Cited by 14 (0 self)

Abstract:
Stochastic gradient descent (SGD) uses approximate gradients estimated from subsets of the training data and updates the parameters in an online fashion. This learning framework is attractive because it often requires much less training time in practice than batch training algorithms. However, L1 regularization, which is becoming popular in natural language processing because of its ability to produce compact models, cannot be efficiently applied in SGD training, due to the large dimensions of feature vectors and the fluctuations of approximate gradients. We present a simple method to solve these problems by penalizing the weights according to cumulative values for the L1 penalty. We evaluate the effectiveness of our method in three applications: text chunking, named entity recognition, and part-of-speech tagging. Experimental results demonstrate that our method can produce compact and accurate models much more quickly than a state-of-the-art quasi-Newton method for L1-regularized log-linear models.
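The cumulative-penalty idea: track the total L1 penalty u that each weight could have received so far and the penalty q actually applied to it, then push each weight toward zero, but never past it, by the outstanding amount u − |applied|. A sketch of that step, applied after the plain gradient update (variable names follow this description, not necessarily the paper's notation):

```python
import numpy as np

def cumulative_l1_step(w, q, u):
    """Cumulative L1 penalty step for SGD, applied after the gradient update.

    w: weights after the plain gradient step (modified in place).
    q: cumulative penalty actually applied to each weight (starts at 0).
    u: total L1 penalty each weight could have received so far,
       i.e. the running sum of eta_t * C over iterations.
    Clipping at zero prevents noisy gradients from flipping weight signs,
    which is what lets the method produce genuinely sparse models.
    """
    z = w.copy()
    pos = w > 0
    neg = w < 0
    w[pos] = np.maximum(0.0, w[pos] - (u + q[pos]))   # outstanding penalty, + side
    w[neg] = np.minimum(0.0, w[neg] + (u - q[neg]))   # outstanding penalty, - side
    q += w - z                                        # record what was actually taken
    return w, q
```

Because q remembers penalty already taken, a weight that was clipped at zero is not over-penalized on later iterations, smoothing out the fluctuations of the approximate gradients.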
Continuation Methods for Adapting Simulated Skills
In ACM SIGGRAPH Conference Proceedings
Cited by 12 (4 self)

Abstract:
Modeling the large space of possible human motions requires scalable techniques. Generalizing from example motions or example controllers is one way to provide the required scalability. We present techniques for generalizing a controller for physics-based walking to significantly different tasks, such as climbing a large step up, or pushing a heavy object. Continuation methods solve such problems using a progressive sequence of problems that trace a path from an existing solved problem to the final desired-but-unsolved problem. Each step in the continuation sequence makes progress towards the target problem while further adapting the solution. We describe and evaluate a number of choices in applying continuation methods to adapting walking gaits for tasks involving interaction with the environment. The methods have been successfully applied to automatically adapt a regular cyclic walk to climbing a 65 cm step, stepping over a 55 cm sill, pushing heavy furniture, walking up steep inclines, and walking on ice. The continuation path further provides parameterized solutions to these problems.
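A continuation method in its generic form: interpolate the task parameter from an already-solved setting toward the target, warm-starting each intermediate problem from the previous solution. A minimal sketch (the `solve(p, x_init)` interface is an assumption for illustration):

```python
def continuation_solve(solve, x0, p_start, p_target, steps=10):
    """Generic continuation (homotopy) sketch.

    solve: a local solver taking a task parameter p and an initial
           guess, returning a solution (which may fail to improve if
           the guess is too far from p's basin of attraction).
    The task parameter is moved from p_start to p_target in small
    increments, each solve warm-started from the last solution.
    """
    x = x0
    for k in range(1, steps + 1):
        t = k / steps
        p = (1 - t) * p_start + t * p_target   # interpolated intermediate task
        x = solve(p, x)                        # warm start from previous solution
    return x
```

The point of the progression is exactly what the abstract describes: each intermediate task is close enough to the previous one that the local solver keeps succeeding, even when solving the target task directly from the original gait would fail.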
Face Reconstruction From Monocular Video Using Uncertainty Analysis and a Generic Model
In Computer Vision and Image Understanding, 2003
Cited by 11 (4 self)

Abstract:
Reconstructing a 3D model of a human face from a monocular video sequence is an important problem in computer vision, with applications to recognition, surveillance, multimedia, etc. However, the quality of 3D reconstructions using structure from motion (SfM) algorithms is often not satisfactory. One of the reasons is the poor quality of the input video data. Hence, it is important that 3D face reconstruction algorithms take into account the statistics representing the quality of the video. Also, because of the structural similarities of most faces, it is natural that the performance of these algorithms can be improved by using a generic model of a face. Most of the existing work using this approach initializes the reconstruction algorithm with this generic model. The problem with this approach is that the algorithm can converge to a solution very close to this initial value, resulting in a reconstruction which resembles the generic model rather than the particular face in the video which needs to be modeled. In this paper, we propose a method of 3D reconstruction of a human face from video in which the 3D reconstruction algorithm and the generic model are handled separately. We show that it is possible to obtain a reasonably good 3D SfM estimate purely from the video sequence, provided the quality of the input video is statistically assessed and incorporated into the algorithm. The final 3D model is obtained after combining the SfM estimate and the generic model using an energy function that corrects for the errors in the estimate by comparing the local regions in the two models. The main advantage of our algorithm over others is that it is able to retain the specific features of the face in the video sequence even when these features are different from those of the gen...
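The paper's energy function combines the SfM estimate with the generic model by comparing local regions. As a loose stand-in for that idea only, a per-vertex inverse-variance weighting might look like this (the weighting rule and all names here are assumptions, not the paper's method):

```python
import numpy as np

def fuse_models(sfm_vertices, generic_vertices, sfm_variance):
    """Illustrative fusion of an SfM estimate with a generic face model.

    sfm_vertices, generic_vertices: (n, 3) vertex arrays in correspondence.
    sfm_variance: per-vertex uncertainty of the SfM estimate, derived
    from the statistically assessed quality of the input video.
    Vertices the video supports well keep the SfM estimate; uncertain
    ones fall back toward the generic model.
    """
    w = 1.0 / (1.0 + sfm_variance)          # high variance -> low trust in SfM
    return w[:, None] * sfm_vertices + (1 - w)[:, None] * generic_vertices
```

The key property this mimics is the one the abstract stresses: where the video evidence is strong, the specific features of the filmed face survive instead of being pulled toward the generic model.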