Results 1 - 10 of 1,585
On the Convergence of Conditional Epsilon-Subgradient Methods for Convex Programs and Convex-Concave Saddle-Point Problems, 2000
"... The paper provides two contributions. First, we present new convergence results for conditional ε-subgradient algorithms for general convex programs. The results obtained here extend the classical ones by Polyak [Pol67, Pol69, Pol87] as well as the recent ones in [CoL93, LPS96, AIS98] to a broader framework. Secondly, we establish the application of this technique to solve nonstrictly convex-concave saddle point problems, such as primal-dual formulations of linear programs. Contrary to several previous solution algorithms for such problems, a saddle-point is generated by a very simple ..."
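The projected (sub)gradient descent-ascent template that conditional ε-subgradient schemes build on can be illustrated on a toy problem. This is a minimal sketch under simplifying assumptions (exact gradients of a smooth test function stand in for ε-subgradients; the function, step size, and box constraint are illustrative, not from the paper):

```python
import numpy as np

def projected_descent_ascent(gx, gy, proj, x, y, eta=0.1, steps=5000):
    """Projected gradient descent in x, ascent in y, for L(x, y)."""
    for _ in range(steps):
        x, y = proj(x - eta * gx(x, y)), proj(y + eta * gy(x, y))
    return x, y

# Toy problem: L(x, y) = x**2/2 - y**2/2 + x*y, saddle point at (0, 0).
gx = lambda x, y: x + y                   # dL/dx
gy = lambda x, y: x - y                   # dL/dy
proj = lambda v: np.clip(v, -2.0, 2.0)    # projection onto the box [-2, 2]

x_star, y_star = projected_descent_ascent(gx, gy, proj, 1.0, 1.0)
```

Because this toy objective is strongly convex-strongly concave, the plain descent-ascent iterates converge to the saddle point without the averaging or conditional steps the paper analyzes for the general convex case.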
Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle-Point Problems
- SIAM JOURNAL ON OPTIMIZATION, 2004
Cited by 119 (16 self)
"... We propose a prox-type method with efficiency estimate O(ε−1) for approximating saddle points of convex-concave C1,1 functions and solutions of variational inequalities with monotone Lipschitz continuous operators. Application examples include matrix games, eigenvalue minimization, and computing th ..."
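A close relative of such prox-type methods is the extragradient iteration: take a look-ahead half-step, then a corrector step using the operator evaluated at the predicted point. A minimal sketch on a bilinear saddle problem (the payoff matrix, step size, box, and starting points are illustrative, not the paper's examples):

```python
import numpy as np

def extragradient(A, x, y, eta=0.1, steps=3000):
    """Extragradient method for min_x max_y x^T A y over a box.
    The predictor half-step stabilizes the rotation that plain
    descent-ascent exhibits on bilinear problems."""
    proj = lambda v: np.clip(v, -1.0, 1.0)
    for _ in range(steps):
        xh = proj(x - eta * A @ y)       # predictor (half) step
        yh = proj(y + eta * A.T @ x)
        x = proj(x - eta * A @ yh)       # corrector step at predicted point
        y = proj(y + eta * A.T @ xh)
    return x, y

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # illustrative payoff matrix
x, y = extragradient(A, np.array([0.3, -0.3]), np.array([0.3, 0.3]))
```

On this full-rank bilinear example the unique saddle point is the origin; plain simultaneous gradient descent-ascent would spiral, while the extragradient corrector step contracts toward it.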
On the convergence of conditional ε-subgradient methods for convex programs and convex-concave saddle-point problems, 2001
- European Journal of Operational Research 151 (2003) 461–473
An Efficient Stochastic Approximation Algorithm for Stochastic Saddle Point Problems, 2001
Cited by 5 (2 self)
"... We show that Polyak's (1990) stochastic approximation algorithm with averaging, originally developed for unconstrained minimization of a smooth strongly convex objective function observed with noise, can be naturally modified to solve convex-concave stochastic saddle point problems. We also show ..."
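The core idea of stochastic approximation with averaging can be sketched on a toy noisy saddle problem. This is a simplified illustration, not the paper's exact algorithm (the updates are sequential rather than Polyak's original scheme, and the objective, step schedule, and noise level are all assumed for the example):

```python
import numpy as np

def averaged_sa_saddle(gx, gy, x, y, steps=20000, seed=0, noise=0.1):
    """Stochastic descent-ascent with iterate averaging: run noisy
    gradient steps, report the running average of the iterates."""
    rng = np.random.default_rng(seed)
    x_sum = y_sum = 0.0
    for t in range(1, steps + 1):
        eta = 0.5 / t ** 0.6                # slowly decaying step size
        x = x - eta * (gx(x, y) + noise * rng.standard_normal())
        y = y + eta * (gy(x, y) + noise * rng.standard_normal())
        x_sum += x
        y_sum += y
    return x_sum / steps, y_sum / steps

# Toy stochastic saddle problem: L(x, y) = x**2/2 - y**2/2 + x*y,
# gradients observed with Gaussian noise; saddle point at (0, 0).
gx = lambda x, y: x + y
gy = lambda x, y: x - y
x_bar, y_bar = averaged_sa_saddle(gx, gy, 1.0, 1.0)
```

Averaging lets one use step sizes that decay more slowly than 1/t while still damping the gradient noise, which is the mechanism behind Polyak-style acceleration of stochastic approximation.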
Structured Prediction via the Extragradient
"... We present a simple and scalable algorithm for large-margin estimation of structured models, including an important class of Markov networks and combinatorial models. We formulate the estimation problem as a convex-concave saddle-point problem and apply the extragradient method, yielding an algo ..."
An Approach Towards Fast Gradient-based Image Segmentation
- IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015
"... In this paper we present and investigate an approach to fast multi-label color image segmentation using convex optimization techniques. The presented model is in some ways related to the well-known Mumford-Shah model, but deviates in certain important aspects. The optimization problem has ... that is computationally inexpensive to solve. This paper introduces such a model, the nontrivial transformation of this model into a convex-concave saddle point problem, and the numerical treatment of the problem. We evaluate our approach by applying our algorithm to various images and show that our results ..."
Structured prediction, dual extragradient and Bregman projections
- Journal of Machine Learning Research, 2006
Cited by 59 (2 self)
"... We present a simple and scalable algorithm for maximum-margin estimation of structured output models, including an important class of Markov networks and combinatorial models. We formulate the estimation problem as a convex-concave saddle-point problem that allows us to use simple projection methods ..."
Solving Variational Inequalities with Stochastic Mirror-Prox Algorithm, 2008
Cited by 37 (6 self)
"... In this paper we consider iterative methods for stochastic variational inequalities (s.v.i.) with monotone operators. Our basic assumption is that the operator possesses both smooth and nonsmooth components. Further, only noisy observations of the problem data are available. We develop a n ... to Stochastic Semidefinite Feasibility problem and Eigenvalue minimization. Key words: Nash variational inequalities, stochastic convex-concave saddle-point problem, large scale stochastic approximation, reduced complexity algorithms for convex optimization. AMS subject classifications: 90C15, 65K10, 90C47 ..."
Structured prediction via the extragradient method
Cited by 31 (2 self)
"... We present a simple and scalable algorithm for large-margin estimation of structured models, including an important class of Markov networks and combinatorial models. The estimation problem can be formulated as a quadratic program (QP) that exploits the problem structure to achieve a polynomial number of variables and constraints. However, off-the-shelf QP solvers scale poorly with problem and training sample size. We recast the formulation as a convex-concave saddle point problem that allows us to use simple projection methods. We show the projection step can be solved using combinatorial ..."
Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms
"... We consider empirical risk minimization of linear predictors with convex loss functions. Such problems can be reformulated as convex-concave saddle point problems, and thus are well suited for primal-dual first-order algorithms. However, primal-dual algorithms often require explicit stro ..."