Results 1–10 of 37
Newton's Method For Large Bound-Constrained Optimization Problems
SIAM Journal on Optimization, 1998
Abstract

Cited by 74 (4 self)
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems, and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
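The feasible set of a bound-constrained problem is a box, so projecting onto it decouples across coordinates. A minimal NumPy sketch of that projection (an illustration of the feasible-set geometry the abstract refers to, not the authors' trust-region implementation; names are illustrative):

```python
import numpy as np

def project_box(x, lower, upper):
    """Euclidean projection onto the box {z : lower <= z <= upper}.

    For a box, the projection is simply componentwise clipping.
    """
    return np.minimum(np.maximum(x, lower), upper)

x = np.array([-2.0, 0.5, 3.0])
p = project_box(x, lower=np.zeros(3), upper=np.ones(3))  # -> [0.0, 0.5, 1.0]
```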
Efficient Projections onto the ℓ1-Ball for Learning in High Dimensions
Abstract

Cited by 67 (9 self)
We describe efficient algorithms for projecting a vector onto the ℓ1-ball. We present two methods for projection. The first performs exact projection in O(n) expected time, where n is the dimension of the space. The second works on vectors, k of whose elements are perturbed outside the ℓ1-ball, projecting in O(k log(n)) time. This setting is especially useful for online learning in sparse feature spaces such as text categorization applications. We demonstrate the merits and effectiveness of our algorithms in numerous batch and online learning tasks. We show that variants of stochastic gradient projection methods augmented with our efficient projection procedures outperform interior point methods, which are considered state-of-the-art optimization techniques. We also show that in online settings gradient updates with ℓ1 projections outperform the exponentiated gradient algorithm while obtaining models with high degrees of sparsity.
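The exact projection can be sketched with the well-known sort-based variant; the sort makes it O(n log n), whereas the paper's randomized-selection version achieves the O(n) expected time quoted above. Function names here are illustrative:

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {w : ||w||_1 <= radius}."""
    if np.abs(v).sum() <= radius:
        return v.copy()          # already feasible
    u = np.sort(np.abs(v))[::-1]  # sorted magnitudes, descending
    css = np.cumsum(u)
    # Largest index rho with u[rho] * (rho+1) > css[rho] - radius.
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)  # shrinkage threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

p = project_l1_ball(np.array([3.0, 1.0]), radius=2.0)  # -> [2.0, 0.0]
```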
Fast Optimization Methods for L1 Regularization: A Comparative Study and Two New Approaches
Abstract

Cited by 47 (1 self)
L1 regularization is effective for feature selection, but the resulting optimization is challenging due to the non-differentiability of the 1-norm. In this paper we compare state-of-the-art optimization techniques to solve this problem across several loss functions. Furthermore, we propose two new techniques. The first is based on a smooth (differentiable) convex approximation for the L1 regularizer that does not depend on any assumptions about the loss function used. The other technique is a new strategy that addresses the non-differentiability of the L1 regularizer by casting the problem as a constrained optimization problem that is then solved using a specialized gradient projection method. Extensive comparisons show that our newly proposed approaches consistently rank among the best in terms of convergence speed and efficiency, as measured by the number of function evaluations required.
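One standard smooth surrogate of the kind alluded to above replaces each |x_i| with sqrt(x_i^2 + eps); the paper's exact approximation may differ, so treat this as a generic sketch:

```python
import numpy as np

def smooth_l1(x, eps=1e-4):
    # Differentiable everywhere; approaches ||x||_1 as eps -> 0.
    return np.sum(np.sqrt(x**2 + eps))

def smooth_l1_grad(x, eps=1e-4):
    # Gradient x_i / sqrt(x_i^2 + eps) is a smoothed sign function,
    # so any gradient-based solver can be applied regardless of the loss.
    return x / np.sqrt(x**2 + eps)
```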
Interior Point Methods For Optimal Control Of Discrete-Time Systems
Journal of Optimization Theory and Applications, 1993
Abstract

Cited by 31 (5 self)
We show that recently developed interior point methods for quadratic programming and linear complementarity problems can be put to use in solving discrete-time optimal control problems, with general pointwise constraints on states and controls. We describe interior point algorithms for a discrete-time linear-quadratic regulator problem with mixed state/control constraints, and show how it can be efficiently incorporated into an inexact sequential quadratic programming algorithm for nonlinear problems. The key to the efficiency of the interior-point method is the narrow-banded structure of the coefficient matrix which is factorized at each iteration. Key words. interior point algorithms, optimal control, banded linear systems. 1. Introduction. The problem of optimal control of an initial value ordinary differential equation, with Bolza objectives and mixed constraints, is

\min_{x,u} \int_0^T L(x(t), u(t), t)\,dt + \phi_f(x(T)), \qquad \dot{x}(t) = f(x(t), u(t), t), \quad x(0) = x_{\mathrm{init}}, \quad (1.1)

g(x(t), u(...
Solving Nonlinear Multicommodity Flow Problems By The Analytic Center Cutting Plane Method
1995
Abstract

Cited by 29 (14 self)
The paper deals with nonlinear multicommodity flow problems with convex costs. A decomposition method is proposed to solve them. The approach applies a potential reduction algorithm to solve the master problem approximately and a column generation technique to define a sequence of primal linear programming problems. Each subproblem consists of finding a minimum cost flow between an origin and a destination node in an uncapacitated network. It is thus formulated as a shortest path problem and solved with Dijkstra's d-heap algorithm. An implementation is described that takes full advantage of the supersparsity of the network in the linear algebra operations. Computational results show the efficiency of this approach on well-known non-differentiable problems and also large-scale randomly generated problems (up to 1000 arcs and 5000 commodities). This research has been supported by the Fonds National de la Recherche Scientifique Suisse, grant #12−34002.92, NSERC Canada and ...
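Each subproblem above is a shortest-path computation on a network with nonnegative costs. A standard Dijkstra sketch (using Python's binary heap rather than the d-heap variant the paper uses; the adjacency-dict representation is illustrative):

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source.

    adj: {node: [(neighbor, nonnegative_cost), ...]}.
    """
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry, already improved
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {'a': [('b', 1.0), ('c', 4.0)], 'b': [('c', 2.0)]}
d = dijkstra(adj, 'a')  # -> {'a': 0.0, 'b': 1.0, 'c': 3.0}
```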
Modified Projection-Type Methods For Monotone Variational Inequalities
SIAM Journal on Control and Optimization, 1996
Abstract

Cited by 25 (9 self)
We propose new methods for solving the variational inequality problem where the underlying function F is monotone. These methods may be viewed as projection-type methods in which the projection direction is modified by a strongly monotone mapping of the form I − αF or, if F is affine with underlying matrix M, of the form I + αM^T, with α ∈ (0, 1). We show that these methods are globally convergent and, if in addition a certain error bound based on the natural residual holds locally, the convergence is linear. Computational experience with the new methods is also reported. Key words. Monotone variational inequalities, projection-type methods, error bound, linear convergence. AMS subject classifications. 49M45, 90C25, 90C33. 1. Introduction. We consider the monotone variational inequality problem of finding an x^* ∈ X satisfying

F(x^*)^T (x − x^*) ≥ 0 \quad \forall x ∈ X, \qquad (1)

where X is a closed convex set in \mathbb{R}^n and F is a monotone and continuous function from \mathbb{R}^n to ...
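A classical relative of such modified projection schemes is Korpelevich's extragradient method, which likewise corrects the plain projection step with a second evaluation of F. A sketch on a small affine monotone example (the map, stepsize, and feasible set below are illustrative, not the paper's):

```python
import numpy as np

def extragradient(F, project, x0, tau=0.5, iters=500):
    # y_k = P_X(x_k - tau*F(x_k));  x_{k+1} = P_X(x_k - tau*F(y_k)).
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = project(x - tau * F(x))   # predictor projection
        x = project(x - tau * F(y))   # corrector projection
    return x

# Monotone (skew) affine map F(x) = Mx + q; the VI over the orthant
# is solved by x* = (1, 1), where F(x*) = 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-1.0, 1.0])
F = lambda x: M @ x + q
project = lambda x: np.maximum(x, 0.0)  # projection onto the orthant
x_star = extragradient(F, project, np.zeros(2))
```

Note the plain iteration x ← P_X(x − τF(x)) can fail to converge on this rotation-like F; the extra evaluation is what restores convergence, in the same spirit as the direction modification the abstract describes.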
An Incremental Gradient(-Projection) Method With Momentum Term And Adaptive Stepsize Rule
SIAM J. on Optimization, 1998
Abstract

Cited by 25 (1 self)
We consider an incremental gradient method with momentum term for minimizing the sum of continuously differentiable functions. This method uses a new adaptive stepsize rule that decreases the stepsize whenever sufficient progress is not made. We show that if the gradients of the functions are bounded and Lipschitz continuous over a certain level set, then every cluster point of the iterates generated by the method is a stationary point. In addition, if the gradients of the functions have a certain growth property, then the method is either linearly convergent in some sense or the stepsizes are bounded away from zero. The new stepsize rule is much in the spirit of heuristic learning rules used in practice for training neural networks via backpropagation. As such, the new stepsize rule may suggest improvements on existing learning rules. Finally, extension of the method and the convergence results to constrained minimization is discussed, as are some implementation issues and numerical exp...
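The flavor of the method can be sketched as follows. The progress test here (halve the stepsize when a full epoch fails to decrease the objective) is a simplified stand-in for the paper's adaptive rule, and all names and constants are illustrative:

```python
import numpy as np

def incremental_gradient(fs, gs, x0, step=0.05, momentum=0.5, shrink=0.5, epochs=200):
    """Minimize sum_i f_i(x) by cycling through component gradients gs
    with a momentum term and a crude adaptive stepsize rule."""
    x = np.asarray(x0, dtype=float)
    v = np.zeros_like(x)
    prev = sum(f(x) for f in fs)
    for _ in range(epochs):
        for g in gs:
            v = momentum * v - step * g(x)  # momentum-smoothed incremental step
            x = x + v
        cur = sum(f(x) for f in fs)
        if cur >= prev:          # insufficient progress: decrease the stepsize
            step *= shrink
        prev = cur
    return x

# Example: minimize sum_i 0.5*(x - a_i)^2; the minimizer is the mean of a_i.
fs = [lambda x, a=a: 0.5 * (x - a) ** 2 for a in (0.0, 2.0, 4.0)]
gs = [lambda x, a=a: x - a for a in (0.0, 2.0, 4.0)]
x = incremental_gradient(fs, gs, 0.0)  # converges to a neighborhood of 2.0
```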
An implementable proximal point algorithmic framework for nuclear norm minimization
2010
Abstract

Cited by 21 (3 self)
The nuclear norm minimization problem is to find a matrix with the minimum nuclear norm subject to linear and second-order cone constraints. Such a problem often arises from the convex relaxation of a rank minimization problem with noisy data, and arises in many fields of engineering and science. In this paper, we study inexact proximal point algorithms in the primal, dual and primal-dual forms for solving the nuclear norm minimization with linear equality and second-order cone constraints. We design efficient implementations of these algorithms and present comprehensive convergence results. In particular, we investigate the performance of our proposed algorithms in which the inner subproblems are approximately solved by the gradient projection method or the accelerated proximal gradient method. Our numerical results for solving randomly generated matrix completion problems and real matrix completion problems show that our algorithms perform favorably in comparison to several recently proposed state-of-the-art algorithms. Interestingly, our proposed algorithms are connected with other algorithms that have been studied in the literature. Key words. Nuclear norm minimization, proximal point method, rank minimization, gradient projection method, accelerated proximal gradient method.
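The proximal map of the nuclear norm itself has a closed form: soft-threshold the singular values. A minimal sketch of that building block (inner solvers like those in the paper wrap steps of this kind; this is not the paper's full algorithm):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: prox of tau * ||.||_* at X."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Shrink each singular value toward zero by tau, clipping at zero.
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
```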
A comparison of optimization methods and software for large-scale l1-regularized linear classification
 The Journal of Machine Learning Research
Abstract

Cited by 21 (5 self)
Large-scale linear classification is widely used in many areas. The L1-regularized form can be applied for feature selection; however, its non-differentiability causes more difficulties in training. Although various optimization methods have been proposed in recent years, these have not yet been compared suitably. In this paper, we first broadly review existing methods. Then, we discuss state-of-the-art software packages in detail and propose two efficient implementations. Extensive comparisons indicate that carefully implemented coordinate descent methods are very suitable for training large document data.
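For the squared loss, the coordinate descent approach the comparison favors reduces each coordinate update to a one-dimensional soft-thresholding. A sketch (the cyclic order and names are illustrative; production solvers add shrinking and clever caching):

```python
import numpy as np

def lasso_cd(A, b, lam, iters=100):
    """Cyclic coordinate descent for 0.5*||Ax - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    r = b - A @ x                      # maintained residual
    col_sq = (A**2).sum(axis=0)        # per-column squared norms
    for _ in range(iters):
        for j in range(n):
            if col_sq[j] == 0.0:
                continue
            rho = A[:, j] @ r + col_sq[j] * x[j]   # 1-D subproblem coefficient
            new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
            r += A[:, j] * (x[j] - new)            # update residual in O(m)
            x[j] = new
    return x
```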
A new projection method for variational inequality problems
SIAM J. Control Optim., 1999
Abstract

Cited by 20 (11 self)
We propose a new projection algorithm for solving the variational inequality problem, where the underlying function is continuous and satisfies a certain generalized monotonicity assumption (e.g., it can be pseudomonotone). The method is simple and admits a nice geometric interpretation. It consists of two steps. First, we construct an appropriate hyperplane which strictly separates the current iterate from the solutions of the problem. This procedure requires a single projection onto the feasible set and employs an Armijo-type line search along a feasible direction. Then the next iterate is obtained as the projection of the current iterate onto the intersection of the feasible set with the halfspace containing the solution set. Thus, in contrast with most other projection-type methods, only two projection operations per iteration are needed. The method is shown to be globally convergent to a solution of the variational inequality problem under minimal assumptions. Preliminary computational experience is also reported. Key words. variational inequalities, projection methods, pseudomonotone maps