Results 1–10 of 15
On Projection Algorithms for Solving Convex Feasibility Problems
, 1996
Abstract

Cited by 330 (44 self)
Due to their extraordinary utility and broad applicability in many areas of classical mathematics and modern physical sciences (most notably, computerized tomography), algorithms for solving convex feasibility problems continue to receive great attention. To unify, generalize, and review some of these algorithms, a very broad and flexible framework is investigated. Several crucial new concepts which allow a systematic discussion of questions on behaviour in general Hilbert spaces and on the quality of convergence are brought out. Numerous examples are given. 1991 Mathematics Subject Classification. Primary 47H09, 49M45, 65-02, 65J05, 90C25; Secondary 26B25, 41A65, 46C99, 46N10, 47N10, 52A05, 52A41, 65F10, 65K05, 90C90, 92C55. Key words and phrases. Angle between two subspaces, averaged mapping, Cimmino's method, computerized tomography, convex feasibility problem, convex function, convex inequalities, convex programming, convex set, Fejér monotone sequence, firmly nonexpansive mapping, H...
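The projection algorithms surveyed in this entry can be illustrated with a minimal sketch: given Euclidean projections onto two convex sets, cyclically applying the projections drives the iterate into the intersection. The two sets below (a halfspace and a ball, with invented parameters) are purely illustrative, not taken from the paper.

```python
import numpy as np

def project_ball(x, center, radius):
    """Euclidean projection onto the ball ||x - center|| <= radius."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def project_halfspace(x, a, b):
    """Euclidean projection onto the halfspace {x : a.x <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def pocs(x, steps=200):
    """Cyclic projections (POCS / von Neumann alternating projections)."""
    for _ in range(steps):
        x = project_halfspace(x, a, b)
        x = project_ball(x, c, r)
    return x

a, b = np.array([1.0, 1.0]), 0.5   # halfspace x1 + x2 <= 0.5
c, r = np.array([0.0, 0.0]), 1.0   # unit ball at the origin
x = pocs(np.array([3.0, 3.0]))     # x ends up in the intersection
```

The returned point satisfies both constraints; for inconsistent (empty-intersection) problems the iterates instead approach a cycle of mutually nearest points, one of the behaviours the paper's framework analyzes.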
Bundle-Based Relaxation Methods for Multicommodity Capacitated Fixed-Charge Network Design
, 1999
Abstract

Cited by 50 (26 self)
To efficiently derive bounds for large-scale instances of the capacitated fixed-charge network design problem, Lagrangian relaxations appear promising. This paper presents the results of comprehensive experiments aimed at calibrating and comparing bundle and subgradient methods applied to the optimization of Lagrangian duals arising from two Lagrangian relaxations. This study substantiates the fact that bundle methods appear superior to subgradient approaches because they converge faster and are more robust relative to different relaxations, problem characteristics, and selection of the initial parameter values. It also demonstrates that effective lower bounds may be computed efficiently for large-scale instances of the capacitated fixed-charge network design problem. Indeed, in a fraction of the time required by a standard simplex approach to solve the linear programming relaxation, the methods we present attain very high quality solutions.
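The subgradient side of the comparison above can be sketched in a few lines: projected subgradient ascent on a Lagrangian dual, where each dual evaluation solves the relaxed (here, separable box-constrained) subproblem and the constraint residual is a subgradient. The tiny LP below is an invented illustration, not one of the paper's network design instances; a bundle method would additionally maintain a cutting-plane model of the dual with a stabilizing term.

```python
import numpy as np

# Toy LP: min c.x  s.t.  A x <= b,  0 <= x <= 1  (invented instance).
c = np.array([-1.0, -2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

def dual_and_subgradient(lam):
    """Lagrangian dual of the toy LP: relax A x <= b with multiplier lam >= 0.
    The inner minimization over the box [0,1]^n is separable in x."""
    red = c + A.T @ lam           # reduced costs
    x = (red < 0).astype(float)   # minimizer of red.x over the box
    q = red @ x - lam @ b         # dual function value
    return q, A @ x - b           # constraint residual = subgradient of q

lam = np.zeros(1)
best = -np.inf
for k in range(1, 500):
    q, g = dual_and_subgradient(lam)
    best = max(best, q)           # best dual bound found so far
    lam = np.maximum(lam + (1.0 / k) * g, 0.0)  # projected subgradient ascent
```

For this instance the primal optimum is -2 (take x = (0, 1)), and by LP strong duality the best dual value converges to the same bound.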
An Adaptive Level Set Method for Nondifferentiable Constrained Image Recovery
 IEEE TRANS. IMAGE PROCESSING
, 2002
Abstract

Cited by 36 (4 self)
The formulation of a wide variety of image recovery problems leads to the minimization of a convex objective over a convex set representing the constraints derived from a priori knowledge and consistency with the observed signals. In recent years, nondifferentiable objectives have become popular due in part to their ability to capture certain features such as sharp edges. They also arise naturally in minimax inconsistent set theoretic recovery problems. At the same time, the issue of developing reliable numerical algorithms to solve such convex programs in the context of image recovery applications has received little attention. In this paper, we address this issue and propose an adaptive level set method for nondifferentiable constrained image recovery. The asymptotic properties of the method are analyzed and its implementation is discussed. Numerical experiments illustrate applications to total variation and minimax set theoretic image restoration and denoising problems.
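The problem class described above can be made concrete with a minimal sketch: a 1-D total-variation (nondifferentiable) objective minimized over a box constraint by a plain projected subgradient iteration. This is only an illustration of the problem class with invented data; the paper's adaptive level set method is a more sophisticated scheme.

```python
import numpy as np

def tv_subgradient(x):
    """A subgradient of TV(x) = sum_i |x[i+1] - x[i]| (sign taken as 0 at ties)."""
    d = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= d   # d/dx[i]   of |x[i+1]-x[i]| is -sign(x[i+1]-x[i])
    g[1:] += d    # d/dx[i+1] of |x[i+1]-x[i]| is +sign(x[i+1]-x[i])
    return g

def denoise(y, mu=0.5, steps=300):
    """Projected subgradient on 0.5||x-y||^2 + mu*TV(x) over the box [0,1]^n."""
    x = y.copy()
    for k in range(1, steps + 1):
        g = (x - y) + mu * tv_subgradient(x)
        n = np.linalg.norm(g)
        if n == 0:
            break
        # normalized subgradient step with diminishing size, then box projection
        x = np.clip(x - 0.5 / np.sqrt(k) * g / n, 0.0, 1.0)
    return x

rng = np.random.default_rng(0)
clean = np.repeat([0.2, 0.8], 20)                       # piecewise-constant signal
noisy = np.clip(clean + 0.1 * rng.standard_normal(40), 0, 1)
x = denoise(noisy)                                      # TV of x drops vs. noisy
```

The nondifferentiability of TV is what lets the recovered signal keep its sharp edge while flattening the noise, matching the motivation in the abstract.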
Convergence of a Simple Subgradient Level Method
 Math. Programming
, 1998
Abstract

Cited by 6 (0 self)
We study the subgradient projection method for convex optimization with Brännlund's level control for estimating the optimal value. We establish global convergence in objective values without additional assumptions employed in the literature. Key words. Nondifferentiable optimization, subgradient optimization. 1 Introduction. We consider a method for the minimization problem $f^* = \inf_S f$ under the following assumptions: $S$ is a nonempty closed convex set in $\mathbb{R}^n$; $f : \mathbb{R}^n \to \mathbb{R}$ is a convex function; for each $x \in S$ we can compute $f(x)$ and a subgradient $g_f(x) \in \partial f(x)$ of $f$ at $x$; and for each $x \in \mathbb{R}^n$ we can find $P_S x = \arg\min_{y \in S} |x - y|$, its orthogonal projection onto $S$, where $|\cdot|$ is the Euclidean norm. The optimal set $\operatorname{Arg\,min}_S f$ may be empty. Given the $k$th iterate $x^k \in S$ and a target level $f^k_{\mathrm{lev}}$ that estimates $f^*$, we may use $H_k = \{x : f(x^k) + \langle g^k, x - x^k \rangle \le f^k_{\mathrm{lev}}\}$ with $g^k = g_f(x^k) \in \partial f(x^k)$ (1.1) to approximate t...
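The halfspace in (1.1) has a closed-form projection, which makes the basic level iteration easy to sketch: project the current point onto the level halfspace, then back onto the feasible set. The toy problem below (ℓ1 norm over a halfspace, with the target level fixed at the known optimal value, i.e. a Polyak-style target rather than Brännlund's adaptive level control) is an invented illustration, not the paper's method.

```python
import numpy as np

def f(x):
    """Objective: f(x) = |x1| + |x2| (toy nonsmooth convex function)."""
    return np.abs(x).sum()

def subgrad(x):
    """A subgradient of the l1 norm (taking 0 at kinks)."""
    return np.sign(x)

def proj_S(x):
    """Projection onto the feasible set S = {x : x1 + x2 >= 1}."""
    a, b = np.array([1.0, 1.0]), 1.0
    viol = b - a @ x
    return x + viol * a / (a @ a) if viol > 0 else x

def level_step(x, f_lev):
    """Project x onto H = {z : f(x) + <g, z - x> <= f_lev}, then back onto S."""
    g = subgrad(x)
    excess = f(x) - f_lev
    if excess > 0 and g @ g > 0:
        x = x - excess * g / (g @ g)   # closed-form halfspace projection
    return proj_S(x)

x, f_lev = np.array([3.0, -1.0]), 1.0  # target level = known optimum f* = 1
for _ in range(100):
    x = level_step(x, f_lev)           # converges to the minimizer (1, 0)
```

The point of Brännlund's level control, analyzed in the paper, is to replace the known f* in this sketch with an adaptively updated estimate.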
An Infeasible-Point Subgradient Method Using Adaptive Approximate Projections
Abstract

Cited by 2 (1 self)
We propose a new subgradient method for the minimization of nonsmooth convex functions over a convex set. To speed up computations we use adaptive approximate projections, requiring only that iterates move within a certain distance of the exact projections (a distance that decreases in the course of the algorithm). In particular, the iterates in our method can be infeasible throughout the whole procedure. Nevertheless, we provide conditions which ensure convergence to an optimal feasible point under suitable assumptions. One convergence result deals with step size sequences that are fixed a priori. Two other results handle dynamic Polyak-type step sizes depending on a lower or upper estimate of the optimal objective function value, respectively. Additionally, we briefly sketch two applications: optimization with convex chance constraints, and finding the minimum ℓ1-norm solution to an underdetermined linear system, an important problem in compressed sensing.
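The core idea above — a subgradient step followed by a projection that only needs to land within a shrinking distance of the exact projection — can be sketched as follows. The accuracy rule (stopping a distance eps short of the exact projection) and the toy problem (ℓ1 minimization over a ball) are invented for illustration; the paper's adaptive conditions on the accuracies are what its convergence results actually require.

```python
import numpy as np

center, radius = np.array([2.0, 0.0]), 1.0   # feasible set: ball (toy choice)

def exact_proj(x):
    """Exact Euclidean projection onto the ball."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def approx_proj(x, eps):
    """Approximate projection: any point within eps of the exact projection
    is acceptable; here we simply stop eps short along the segment."""
    p = exact_proj(x)
    gap = np.linalg.norm(x - p)
    return x if gap <= eps else p + eps * (x - p) / gap

x = np.array([-3.0, 2.0])                    # infeasible starting point
for k in range(1, 400):
    g = np.sign(x)                           # subgradient of f(x) = |x1| + |x2|
    # diminishing step, then approximate projection with shrinking accuracy
    x = approx_proj(x - g / k, eps=1.0 / k)
```

The iterates stay slightly infeasible throughout, yet the final point is within the last accuracy 1/k of the ball and close in objective value to the minimizer (1, 0).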
Fast and Low-Complexity Blind Equalization via Subgradient Projections
, 2005
Abstract

Cited by 1 (0 self)
We propose a novel blind equalization method based on subgradient search over a convex cost surface. This is an alternative to existing iterative blind equalization approaches such as the Constant Modulus Algorithm (CMA), which often suffer from convergence problems caused by their nonconvex cost functions. The proposed method is an iterative algorithm, called the SubGradient-based Blind Algorithm (SGBA), for both real and complex constellations, with a very simple update rule. It is based on the minimization of the ℓ∞ norm of the equalizer output under a linear constraint on the equalizer coefficients using subgradient iterations. The algorithm has good convergence behavior attributed to the convex ℓ∞ cost surface as well as the step size selection rules associated with the subgradient search. We illustrate the performance of the algorithm using examples with both complex and real constellations, where we show that the proposed algorithm’s convergence is less sensitive to initial point selection, and that fast convergence can be achieved with a judicious selection of step sizes. Furthermore, the amount of data required for training the equalizer is significantly lower than in most existing schemes.
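The key computational ingredients — a subgradient of the ℓ∞ norm (the signed row attaining the maximum) and a projection onto a linear constraint — can be sketched as below. The data matrix, constraint vector, and step sizes are invented stand-ins, not the paper's equalization setup.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.standard_normal((50, 4))     # stand-in for equalizer-output samples
c = np.array([1.0, 0.0, 0.0, 0.0])   # linear constraint c.w = 1 (hypothetical)

def linf_subgradient(w):
    """A subgradient of w -> ||W w||_inf: the row attaining the max, signed."""
    y = W @ w
    i = np.argmax(np.abs(y))
    return np.sign(y[i]) * W[i]

def proj(w):
    """Euclidean projection onto the affine set {w : c.w = 1}."""
    return w + (1.0 - c @ w) * c / (c @ c)

w = proj(np.ones(4))
best = np.inf
for k in range(1, 2000):
    best = min(best, np.abs(W @ w).max())        # best l-inf value so far
    w = proj(w - linf_subgradient(w) / k)        # projected subgradient step
```

Because the ℓ∞ objective is convex and the constraint affine, the iteration cannot be trapped in spurious local minima, which is the contrast with CMA drawn in the abstract.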
Continuous Min-Max Programs with Semi-Infinite Constraints
, 2011
Abstract
In this paper, we propose an algorithm for solving a class of nonlinear programs in which the objective is the maximal function of a family of continuous functions and the feasible domain is explicitly defined by infinitely many constraints. Our algorithm combines entropic regularization and a Remez-type cutting plane method to deal with the nondifferentiability of the maximal function and with the infinitely many constraints, respectively. A finite relaxed version, which terminates within a finite number of iterations to give an approximate solution, is proposed to handle the computational issues, including the blow-up problem in the entropic regularization and the global optimization subproblems in the cutting plane method. To justify the efficiency of the inexact algorithm, we also analyze the theoretical error bound and conduct numerical experiments.
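Entropic regularization replaces the nondifferentiable max with a smooth log-sum-exp surrogate satisfying max ≤ surrogate ≤ max + log(m)/p, so the approximation tightens as p grows. The blow-up problem mentioned above comes from evaluating exp(p·v) for large p, which the standard max-shift below avoids; the sample values are invented for illustration.

```python
import numpy as np

def entropic_max(vals, p):
    """Smooth surrogate (1/p) * log(sum_i exp(p * v_i)) for max_i v_i,
    computed stably by shifting out the maximum before exponentiating.
    Satisfies: max(v) <= surrogate <= max(v) + log(m)/p for m values."""
    v = np.asarray(vals, dtype=float)
    m = v.max()
    return m + np.log(np.exp(p * (v - m)).sum()) / p

vals = [0.3, 1.0, 0.999]
for p in (10.0, 100.0, 1000.0):
    s = entropic_max(vals, p)
    # the sandwich bound tightens like log(m)/p as p increases
    assert max(vals) <= s <= max(vals) + np.log(len(vals)) / p
```

Without the shift, exp(p * v) already overflows double precision for moderate p, which is exactly why the paper's finite relaxed version must manage the growth of the regularization parameter.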
Limited Memory Space Dilation and Reduction Algorithms (thesis, Blacksburg, Virginia)
, 1998
Abstract
In this thesis, we present variants of Shor and Zhurbenko’s r-algorithm, motivated by the memoryless and limited memory updates for differentiable quasi-Newton methods. This well-known r-algorithm, which employs a space dilation strategy in the direction of the difference between two successive subgradients, is recognized as being one of the most effective procedures for solving nondifferentiable optimization problems. However, the method needs to store the space dilation matrix and update it at every iteration, resulting in a substantial computational burden for large-sized problems. To circumvent this difficulty, we first develop a memoryless update scheme. In the space transformation sense, the new update scheme can be viewed as a combination of space dilation and reduction operations. We prove convergence of this new algorithm, and demonstrate how it can be used in conjunction with a variable target value method that allows a practical, convergent implementation of the method. For performance comparisons we examine other memoryless and limited memory variants, and also prove a modification of a related algorithm due to Polyak that employs a projection on a pair of Kelley’s cutting planes. These variants are tested along with Shor’s r-algorithm on a set of standard test problems from the literature as well as on randomly generated dual transportation and assignment problems. Our
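One common presentation of the full-matrix r-algorithm that this thesis seeks to lighten: dilate the space along the normalized difference of successive subgradients, then step along the transformed subgradient. The dilation coefficient, step sizes, and toy objective below are assumptions made for this sketch, not the thesis's calibrated values; note the explicit storage and update of the matrix B, which is the memory burden the memoryless variants remove.

```python
import numpy as np

def f(x):
    """Toy nonsmooth objective f(x) = |x1| + 5|x2| (invented test function)."""
    return abs(x[0]) + 5.0 * abs(x[1])

def subgrad(x):
    """A subgradient of f (taking 0 at kinks)."""
    return np.array([np.sign(x[0]), 5.0 * np.sign(x[1])])

alpha = 3.0                       # dilation coefficient (a typical choice)
B = np.eye(2)                     # space transformation matrix, stored explicitly
x = np.array([5.0, 3.0])
g_prev = None
best = f(x)
for k in range(1, 300):
    g = subgrad(x)
    if g_prev is not None:
        r = B.T @ (g - g_prev)    # difference of subgradients, transformed space
        nr = np.linalg.norm(r)
        if nr > 1e-12:
            xi = r / nr
            # dilate along xi: B <- B (I + (1/alpha - 1) xi xi^T)
            B = B @ (np.eye(2) + (1.0 / alpha - 1.0) * np.outer(xi, xi))
    gh = B.T @ g                  # subgradient in the transformed space
    x = x - (1.0 / k) * (B @ gh) / np.linalg.norm(gh)
    g_prev = g
    best = min(best, f(x))        # track the best objective value found
```

Every iteration touches the dense n-by-n matrix B, so the cost per step is O(n^2) in both time and storage, which motivates the thesis's memoryless dilation-and-reduction updates.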