Results 1–10 of 12
Proximal Minimization Methods with Generalized Bregman Functions
SIAM Journal on Control and Optimization, 1995
Abstract

Cited by 37 (0 self)
We consider methods for minimizing a convex function f that generate a sequence {x^k} by taking x^{k+1} to be an approximate minimizer of f(x) + D_h(x, x^k)/c_k, where c_k > 0 and D_h is the D-function of a Bregman function h. Extensions are made to B-functions that generalize Bregman functions and cover more applications. Convergence is established under criteria amenable to implementation. Applications are made to nonquadratic multiplier methods for nonlinear programs.
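The update above is concrete enough to sketch. The following is a minimal illustration, assuming the simplest Bregman function h(x) = x^2/2 (whose D-function is the squared Euclidean distance, recovering the classical proximal point method) and a toy quadratic f(x) = (x - 3)^2; it is not the paper's B-function extension, and all names are illustrative.

```python
# Proximal minimization sketch for the update
#   x_{k+1} = argmin_x  f(x) + D_h(x, x^k) / c_k
# with h(x) = x^2 / 2, so D_h(x, y) = (x - y)^2 / 2 (classical proximal point),
# and the illustrative objective f(x) = (x - 3)^2.

def prox_point(x0, c, iters=50):
    x = x0
    for _ in range(iters):
        # Subproblem: argmin_x (x - 3)^2 + (x - x_k)^2 / (2 c_k).
        # Setting the derivative to zero, 2(x - 3) + (x - x_k)/c_k = 0,
        # gives the closed-form minimizer:
        x = (6.0 * c + x) / (2.0 * c + 1.0)
    return x

print(prox_point(x0=0.0, c=1.0))  # approaches the minimizer x* = 3
```

Each step contracts the error by the factor 1/(2c_k + 1), so larger c_k (a weaker proximal term) gives faster progress, at the price of a harder subproblem in general.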
On the linear convergence of descent methods for convex essentially smooth minimization
SIAM Journal on Control and Optimization, 1992
Abstract

Cited by 20 (7 self)
Dedicated to those courageous people who, on June 4, 1989, sacrificed their lives in
A Survey of Algorithms for Convex Multicommodity Flow Problems
1997
Abstract

Cited by 18 (2 self)
There are many problems related to the design of networks. Among them, the message routing problem plays a determinant role in the optimization of network performance. Much of the motivation for this work comes from this problem, which is shown to belong to the class of nonlinear convex multicommodity flow problems. This paper emphasizes the message routing problem in data networks, but it includes a broader literature overview of convex multicommodity flow problems. We present and discuss the main solution techniques proposed for solving this class of large-scale convex optimization problems, and report numerical experiments on the message routing problem with several of these techniques. The literature dealing with multicommodity flow problems has been rich since the publication of the works of Ford and Fulkerson [19] and T.C. Hu [30] at the beginning of the 1960s. These problems usually have a very large number of variables and constraints and arise in a great variety o…
A Unified Description of Iterative Algorithms for Traffic Equilibria
European Journal of Operational Research, 1992
Abstract

Cited by 11 (9 self)
The purpose of this paper is to provide a unified description of iterative algorithms for the solution of traffic equilibrium problems. We demonstrate that a large number of well-known solution techniques can be described in a unified manner through the concept of partial linearization, and establish close relationships with other algorithmic classes for nonlinear programming and variational inequalities. In the case of nonseparable travel costs, the class of partial linearization algorithms is shown to yield new results in the theory of finite-dimensional variational inequalities. The possibility of applying truncated algorithms within the framework is also discussed.
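One well-known member of the partial linearization family is the Frank-Wolfe method for traffic assignment. A minimal sketch on a hypothetical two-link network; the demand, link travel-time functions, and function names are illustrative assumptions, not taken from the paper:

```python
# Frank-Wolfe on a two-link traffic assignment toy: demand d is split as
# x on link 1 and d - x on link 2, with hypothetical affine travel times
#   t1(x) = 1 + x,   t2(x) = 2 + x.
# Each iteration linearizes the Beckmann objective (all-or-nothing
# assignment to the currently cheaper link) and takes an exact line search
# step; at equilibrium the used links have equal travel times.

def frank_wolfe(d=4.0, iters=100):
    x = 0.0                                   # flow on link 1
    for _ in range(iters):
        t1, t2 = 1.0 + x, 2.0 + (d - x)
        y = d if t1 < t2 else 0.0             # all-or-nothing subproblem
        step_dir = y - x
        if step_dir == 0.0:
            break
        # Exact line search: the Beckmann objective's derivative in x is
        # t1(x) - t2(d - x) = 2x - d - 1, which vanishes at x = (d + 1)/2.
        a = ((d + 1.0) / 2.0 - x) / step_dir
        a = min(1.0, max(0.0, a))
        if a == 0.0:
            break
        x += a * step_dir
    return x

print(frank_wolfe())  # tends to the equal-time split x* = 2.5 for d = 4
```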
Free-Steering Relaxation Methods for Problems with Strictly Convex Costs and Linear Constraints
1994
Abstract

Cited by 9 (1 self)
We consider dual coordinate ascent methods for minimizing a strictly convex (possibly nondifferentiable) function subject to linear constraints. Such methods are useful in large-scale applications (e.g., entropy maximization, quadratic programming, network flows) because they are simple, can exploit sparsity, and in certain cases are highly parallelizable. We establish their global convergence under weak conditions and a free-steering order of relaxation. Previous comparable results were restricted to special problems with separable costs and equality constraints. Our convergence framework unifies, to a certain extent, the approaches of Bregman, Censor and Lent, De Pierro and Iusem, and Luo and Tseng, and complements that of Bertsekas and Tseng.
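For the special case of the quadratic cost (1/2)||x||^2 with equality constraints Ax = b, a dual coordinate ascent step reduces to the classical Bregman/Kaczmarz row projection. A minimal sketch under that assumption; the data and names are illustrative:

```python
# Dual coordinate ascent sketch for  min (1/2)||x||^2  s.t.  Ax = b.
# The primal iterate x = A^T p is kept implicitly; maximizing the dual over
# one coordinate p_i makes row i's constraint hold exactly, i.e. the
# Bregman/Kaczmarz row projection:
#   x <- x + (b_i - a_i . x) / ||a_i||^2 * a_i

def row_action(A, b, sweeps=200):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):           # cyclic relaxation order
            r = b_i - sum(a * xj for a, xj in zip(a_i, x))
            step = r / sum(a * a for a in a_i)
            x = [xj + step * a for xj, a in zip(x, a_i)]
    return x

A = [[1.0, 1.0], [1.0, -1.0]]
b = [3.0, 1.0]
print(row_action(A, b))  # prints [2.0, 1.0], the solution of Ax = b
```

Because the two rows here are orthogonal, one sweep already solves the system; in general the sweeps converge linearly, and a "free-steering" variant would relax the cyclic order.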
Relaxation Methods for Monotropic Programs
1990
Abstract

Cited by 8 (6 self)
We propose a dual descent method for the problem of minimizing a convex, possibly nondifferentiable, separable cost subject to linear constraints. The method has properties reminiscent of the Gauss-Seidel method in numerical analysis and uses the ε-complementary slackness mechanism introduced in Bertsekas, Hosein and Tseng (1987) to ensure finite convergence to near optimality. As special cases we obtain the methods in Bertsekas, Hosein and Tseng (1987) for network flow programs and the methods in Tseng and Bertsekas (1987) for linear programs.
A survey on the continuous nonlinear resource allocation problem
European Journal of Operational Research, 2008
Abstract

Cited by 7 (1 self)
Our problem of interest consists of minimizing a separable, convex and differentiable function over a convex set defined by bounds on the variables and an explicit constraint described by a separable convex function. Applications are abundant, and vary from equilibrium problems in the engineering and economic sciences, through resource allocation and balancing problems in manufacturing, statistics, military operations research, and production and financial economics, to subproblems in algorithms for a variety of more complex optimization models. This paper surveys the history and applications of the problem, as well as algorithmic approaches to its solution. The most common techniques are based on finding the optimal value of the Lagrange multiplier for the explicit constraint, most often through a line search procedure. We analyze the most relevant references, especially regarding their originality and numerical findings, and summarize with remarks on possible extensions and future research.
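For the common quadratic special case, the multiplier search mentioned above can be a plain bisection: each variable is available in closed form for a fixed multiplier, and one scalar equation recovers feasibility. A sketch under that assumption; the problem data and names are illustrative:

```python
# Multiplier (dual) line-search sketch for the separable problem
#   min sum_i (x_i - a_i)^2 / 2   s.t.  sum_i x_i = b,  l_i <= x_i <= u_i,
# assuming sum(l) <= b <= sum(u). For a fixed multiplier lam,
# x_i(lam) = clip(a_i - lam, l_i, u_i); the nonincreasing scalar equation
# sum_i x_i(lam) = b is solved by bisection.

def allocate(a, l, u, b, iters=100):
    def x_of(lam):
        return [min(ui, max(li, ai - lam)) for ai, li, ui in zip(a, l, u)]
    lo = min(ai - ui for ai, ui in zip(a, u)) - 1.0  # sum x(lo) = sum(u) >= b
    hi = max(ai - li for ai, li in zip(a, l)) + 1.0  # sum x(hi) = sum(l) <= b
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if sum(x_of(mid)) > b:
            lo = mid            # too much allocated: raise the multiplier
        else:
            hi = mid
    return x_of(0.5 * (lo + hi))

a = [4.0, 2.0, 1.0]
l = [0.0, 0.0, 0.0]
u = [3.0, 3.0, 3.0]
print(allocate(a, l, u, b=5.0))  # approximately [3.0, 1.5, 0.5]
```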
Decomposition methods for differentiable optimization problems over Cartesian product sets
1997
Abstract

Cited by 3 (3 self)
This paper presents a unified analysis of decomposition algorithms for continuously differentiable optimization problems defined on Cartesian products of convex feasible sets. The decomposition algorithms are analyzed using the framework of cost approximation algorithms. A convergence analysis is made for three decomposition algorithms: a sequential algorithm which extends the classical Gauss-Seidel scheme, a synchronized parallel algorithm which extends the Jacobi method, and a partially asynchronous parallel algorithm. The analysis validates inexact computations in both the subproblem and line search phases, and includes convergence rate results. The range of feasible step lengths within each algorithm is shown to have a direct correspondence to the increasing degree of parallelism and asynchronism, and the resulting usage of more outdated information in the algorithms. Keywords: Cartesian product sets, decomposition, cost approximation, sequential algorithm, parallel processing, …
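The sequential and synchronized parallel schemes can be contrasted on a toy problem. A sketch, assuming a hypothetical two-block strictly convex quadratic (not from the paper):

```python
# Sequential (Gauss-Seidel) vs. synchronized parallel (Jacobi) block
# minimization on the coupled quadratic over the Cartesian product R x R:
#   f(x, y) = (x - 1)^2 + (y - 2)^2 + x*y   (strictly convex; minimizer (0, 2)).
# Each subproblem minimizes f over one block with the other held fixed,
# in closed form from the first-order conditions.

def gauss_seidel(iters=100):
    x = y = 0.0
    for _ in range(iters):
        x = (2.0 - y) / 2.0    # argmin over x: 2(x - 1) + y = 0
        y = (4.0 - x) / 2.0    # uses the *fresh* x (sequential update)
    return x, y

def jacobi(iters=100):
    x = y = 0.0
    for _ in range(iters):
        # both blocks use the old iterates, so they can run in parallel
        x, y = (2.0 - y) / 2.0, (4.0 - x) / 2.0
    return x, y

print(gauss_seidel(), jacobi())  # both tend to the minimizer (0, 2)
```

Here both schemes converge (the coupling is weak enough), with the sequential sweep contracting faster per pass; the asynchronous variant analyzed in the paper would additionally tolerate outdated block values.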
Approximated Structured Prediction for Learning Large Scale Graphical Models. arXiv preprint arXiv:1006.2899
, 2010
Abstract

Cited by 1 (1 self)
In this paper we propose an approximated structured prediction framework for large scale graphical models and derive message-passing algorithms for learning their parameters efficiently. We first relate CRFs and structured SVMs, and show that in CRFs a variant of the log-partition function, known as soft-max, smoothly approximates the hinge loss function of structured SVMs. We then propose an intuitive approximation for the structured prediction problem, using duality, based on local entropy approximations, and derive an efficient message-passing algorithm that is guaranteed to converge to the optimum for concave entropy approximations. Unlike existing approaches, this allows us to efficiently learn graphical models with cycles and a very large number of parameters. We demonstrate the effectiveness of our approach on an image denoising task. This task was previously solved by sharing parameters across cliques. In contrast, our algorithm is able to efficiently learn a large number of parameters, resulting in orders of magnitude better prediction.
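The soft-max claim can be checked numerically: a temperature-scaled log-partition (log-sum-exp) converges to the max as the temperature goes to zero. A small sketch with illustrative scores:

```python
# Numeric sketch of soft-max smoothing of a max: for scores s_1..s_n,
#   eps * log( sum_i exp(s_i / eps) )  ->  max_i s_i   as eps -> 0.
# This is the temperature-scaled log-partition function; the hinge loss's
# inner maximization is the eps -> 0 limit of this smooth upper bound.

import math

def soft_max(scores, eps):
    m = max(scores)                        # subtract the max for stability
    return m + eps * math.log(sum(math.exp((s - m) / eps) for s in scores))

scores = [1.0, 2.5, 2.0]
for eps in (1.0, 0.1, 0.01):
    print(eps, soft_max(scores, eps))      # approaches max(scores) = 2.5
```

Note that soft_max is always an upper bound on the true max (the log-sum is at least the largest term), which is why it can replace the hinge's maximization without losing the loss's upper-bound property.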
Parallel Cost Approximation Algorithms For Differentiable Optimization
Abstract
This paper presents a unified analysis of decomposition algorithms for continuously differentiable optimization problems defined on Cartesian products of convex feasible sets. The decomposition algorithms are analyzed using the framework of cost approximation algorithms. A convergence analysis is made for three decomposition algorithms: a sequential algorithm which encompasses the classical Gauss-Seidel scheme, a synchronized parallel algorithm which encompasses the Jacobi method, and a partially asynchronous parallel algorithm. The analysis validates inexact computations in both the subproblem and line search phases. The range of feasible step lengths within each algorithm is shown to have a direct correspondence to the increasing degree of parallelism and asynchronism, and the resulting usage of more outdated information in the algorithms. Key words: Cartesian product sets, convex optimization, decomposition, cost approximation, sequential algorithm, parallel processing, partially a…