A first-order primal-dual algorithm for convex problems with applications to imaging (2011)

by A. Chambolle, T. Pock
Venue: J. Math. Imaging Vision
Results 1 - 10 of 436
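
All of the entries below build on the cited paper's first-order primal-dual (PDHG-type) iteration for saddle-point problems of the form min_x max_y ⟨Kx, y⟩ + G(x) − F*(y). As a point of reference, here is a minimal Python sketch of that iteration; the callable names and interface are placeholders of my own, while the step-size condition τσ‖K‖² ≤ 1 is the one stated in the paper.

```python
def chambolle_pock(K, K_adj, prox_tau_G, prox_sigma_Fstar,
                   x0, y0, tau, sigma, theta=1.0, n_iters=100):
    # Minimal sketch of the first-order primal-dual iteration for
    #   min_x G(x) + F(Kx)  <=>  min_x max_y <Kx, y> + G(x) - F*(y),
    # with step sizes chosen so that tau * sigma * ||K||^2 <= 1.
    # K, K_adj and the two prox maps are user-supplied callables
    # (placeholder interface, not code from the paper).
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(n_iters):
        y = prox_sigma_Fstar(y + sigma * K(x_bar))   # dual proximal ascent step
        x_new = prox_tau_G(x - tau * K_adj(y))       # primal proximal descent step
        x_bar = x_new + theta * (x_new - x)          # extrapolation (theta = 1 in the basic scheme)
        x = x_new
    return x, y
```

For TV-regularized imaging problems such as ROF denoising, K is a discrete gradient operator, F a vector l1-norm and G a quadratic data term, which is the setting many of the citing works below refer to.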

DTAM: Dense Tracking and Mapping in Real-Time

by Richard A. Newcombe, Steven J. Lovegrove, Andrew J. Davison
"... DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with mill ..."
Abstract - Cited by 132 (5 self) - Add to MetaCart
DTAM is a system for real-time camera tracking and reconstruction which relies not on feature extraction but on dense, every-pixel methods. As a single hand-held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non-convex optimisation framework. Interleaved, we track the camera’s 6DOF motion precisely by frame-rate whole-image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real-time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state-of-the-art method using features; and also show the additional usefulness of the dense model for real-time scene interaction in a physics-enhanced augmented reality application.

Citation Context

...s a function of ξ, the convex sum g(u)‖∇ξ(u)‖ɛ + Q(u) is a small modification of the TV-L2 ROF image denoising model term [11], and can be efficiently optimised using a primal-dual approach [1][16][3]. Also, although still non-convex in the auxiliary variable α, each (7) ... [Figure 3 caption bleed: Incremental cost volume construction; we show the current inverse depth map extracted as the current minimum cost for ...]

A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms

by Laurent Condat , 2013
"... We propose a new first-order splitting algorithm for solving jointly the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitti ..."
Abstract - Cited by 56 (9 self) - Add to MetaCart
We propose a new first-order splitting algorithm for solving jointly the primal and dual formulations of large-scale convex minimization problems involving the sum of a smooth function with Lipschitzian gradient, a nonsmooth proximable function, and linear composite functions. This is a full splitting approach, in the sense that the gradient and the linear operators involved are applied explicitly without any inversion, while the nonsmooth functions are processed individually via their proximity operators. This work brings together and notably extends several classical splitting schemes, like the forward–backward and Douglas–Rachford methods, as well as the recent primal–dual method of Chambolle and Pock designed for problems with linear composite terms.
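
To make the structure of such a full-splitting scheme concrete, below is a rough Python sketch of one iteration of this kind: an explicit gradient step on the smooth term, proximal steps on the two non-smooth terms, and the linear operator applied only forward and through its adjoint. The callable names are placeholders, and the step-size remark paraphrases Condat's condition; the exact update order, relaxation parameters and step-size rule should be taken from the paper itself.

```python
def primal_dual_full_splitting(grad_F, prox_tau_G, prox_sigma_Hstar,
                               L, L_adj, x0, y0, tau, sigma, n_iters=100):
    # Hedged sketch of a full-splitting primal-dual iteration for
    #   min_x F(x) + G(x) + H(Lx),
    # with F smooth (handled by its gradient), G and H proximable,
    # and L a linear operator that is never inverted.
    # Roughly, tau * (beta/2 + sigma * ||L||^2) <= 1 is required,
    # where beta is the Lipschitz constant of grad F (see the paper).
    x, y = x0.copy(), y0.copy()
    for _ in range(n_iters):
        x_new = prox_tau_G(x - tau * (grad_F(x) + L_adj(y)))  # explicit gradient + prox of G
        y = prox_sigma_Hstar(y + sigma * L(2 * x_new - x))    # dual prox at an extrapolated point
        x = x_new
    return x, y
```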

Citation Context

... together and encompasses as particular cases the forward–backward and Douglas–Rachford methods, as well as a recent method for minimizing the sum of a proximable function and a linear composite term [14]. It is fully split in that the gradient, proximity, and linear operators are applied individually; in particular, there is no implicit operation like an inner loop or applying the inverse of a line...

Generalized forward-backward splitting

by Hugo Raguet, Jalal Fadili, Gabriel Peyré , 2011
"... This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form F + ∑ n i=1 Gi, where F has a Lipschitz-continuous gradient and the Gi’s are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backwar ..."
Abstract - Cited by 48 (9 self) - Add to MetaCart
This paper introduces the generalized forward-backward splitting algorithm for minimizing convex functions of the form F + ∑_{i=1}^n G_i, where F has a Lipschitz-continuous gradient and the G_i's are simple in the sense that their Moreau proximity operators are easy to compute. While the forward-backward algorithm cannot deal with more than n = 1 non-smooth function, our method generalizes it to the case of arbitrary n. Our method makes explicit use of the regularity of F in the forward step, and the proximity operators of the G_i's are applied in parallel in the backward step. This allows the generalized forward-backward to efficiently address an important class of convex problems. We prove its convergence in infinite dimension, and its robustness to errors in the computation of the proximity operators and of the gradient of F. Examples on inverse problems in imaging demonstrate the advantage of the proposed methods in comparison to other splitting algorithms.
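
A hedged Python sketch of what such an iteration can look like is given below: one auxiliary variable z_i per non-smooth term, updated in parallel with the proximity operator of G_i, while F only enters through its gradient. The interface (prox_list[i](v, step) returning prox_{step·G_i}(v)), the weights and the step scaling are assumptions of this sketch and should be checked against the paper.

```python
def generalized_forward_backward(grad_F, prox_list, weights, x0,
                                 gamma, lam=1.0, n_iters=100):
    # Rough sketch of a generalized forward-backward iteration for
    #   min_x F(x) + sum_i G_i(x):
    # a forward (gradient) step on F and parallel backward (proximal)
    # steps on the G_i, acting on auxiliary variables z_i.
    # prox_list[i](v, step) is assumed to return prox_{step * G_i}(v).
    n = len(prox_list)
    z = [x0.copy() for _ in range(n)]
    x = x0.copy()
    for _ in range(n_iters):
        g = grad_F(x)
        for i in range(n):  # these n updates are independent, hence parallelizable
            p_i = prox_list[i](2 * x - z[i] - gamma * g, gamma / weights[i])
            z[i] = z[i] + lam * (p_i - x)
        x = sum(w * zi for w, zi in zip(weights, z))  # weights should sum to 1
    return x
```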

Real-time minimization of the piecewise smooth Mumford-Shah functional

by Evgeny Strekalovskiy, Daniel Cremers - in Proceedings of the European Conference on Computer Vision (ECCV), 2014
"... Abstract. We propose an algorithm for eciently minimizing the piece-wise smooth Mumford-Shah functional. The algorithm is based on an extension of a recent primal-dual algorithm from convex to non-convex optimization problems. The key idea is to rewrite the proximal operator in the primal-dual algor ..."
Abstract - Cited by 36 (26 self) - Add to MetaCart
Abstract. We propose an algorithm for efficiently minimizing the piecewise smooth Mumford-Shah functional. The algorithm is based on an extension of a recent primal-dual algorithm from convex to non-convex optimization problems. The key idea is to rewrite the proximal operator in the primal-dual algorithm using Moreau’s identity. The resulting algorithm computes piecewise smooth approximations of color images at 15-20 frames per second at VGA resolution using GPU acceleration. Compared to convex relaxation approaches [18], it is orders of magnitude faster and does not require a discretization of color values. In contrast to the popular Ambrosio-Tortorelli approach [2], it naturally combines piecewise smooth and piecewise constant approximations, it does not require an epsilon-approximation and it is not based on an alternation scheme. The achieved energies are in practice at most 5% off the optimal value for one-dimensional problems. Numerous experiments demonstrate that the proposed algorithm is well-suited to perform discontinuity-preserving smoothing and real-time video cartooning.
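
The Moreau identity referred to here, prox_{τf}(x) = x − τ·prox_{f*/τ}(x/τ), is a standard fact; the small Python check below (my own illustration, not code from the paper) verifies it for f = |·|, where the conjugate's proximity operator is a simple clipping and the identity reproduces soft-thresholding.

```python
import numpy as np

def soft_threshold(x, tau):
    # prox of tau * |.| computed directly
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def prox_via_moreau(x, tau):
    # prox_{tau*f}(x) = x - tau * prox_{f*/tau}(x / tau); for f = |.|,
    # f* is the indicator of [-1, 1], whose prox is clipping to [-1, 1].
    return x - tau * np.clip(x / tau, -1.0, 1.0)

x = np.linspace(-3.0, 3.0, 13)
assert np.allclose(soft_threshold(x, 0.5), prox_via_moreau(x, 0.5))
```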

Iteration-complexity of block-decomposition algorithms and the alternating minimization augmented Lagrangian method

by Renato D. C. Monteiro, et al. , 2010
"... In this paper, we consider the monotone inclusion problem consisting of the sum of a continuous monotone map and a point-to-set maximal monotone operator with a separable two-block structure and introduce a framework of block-decomposition prox-type algorithms for solving it which allows for each on ..."
Abstract - Cited by 33 (4 self) - Add to MetaCart
In this paper, we consider the monotone inclusion problem consisting of the sum of a continuous monotone map and a point-to-set maximal monotone operator with a separable two-block structure and introduce a framework of block-decomposition prox-type algorithms for solving it which allows each one of the single-block proximal subproblems to be solved in an approximate sense. Moreover, by showing that any method in this framework is also a special instance of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter, we derive corresponding convergence rate results. We also describe some instances of the framework based on specific and inexpensive schemes for solving the single-block proximal subproblems. Finally, we consider some applications of our methodology to: i) propose new algorithms for the monotone inclusion problem consisting of the sum of two maximal monotone operators; and ii) study the complexity of the classical alternating minimization augmented Lagrangian method for a class of linearly constrained convex programming problems with proper closed convex objective functions.

Convergence analysis of primal-dual algorithms for total variation image restoration

by Bingsheng He, Xiaoming Yuan , 2010
"... Abstract. Recently, some attractive primal-dual algorithms have been proposed for solving a saddle-point problem, with particular applications in the area of total variation (TV) image restoration. This paper focuses on the convergence analysis of existing primal-dual algorithms and shows that the i ..."
Abstract - Cited by 31 (2 self) - Add to MetaCart
Abstract. Recently, some attractive primal-dual algorithms have been proposed for solving a saddle-point problem, with particular applications in the area of total variation (TV) image restoration. This paper focuses on the convergence analysis of existing primal-dual algorithms and shows that the involved parameters of those primal-dual algorithms (including the step sizes) can be significantly enlarged if some simple correction steps are supplemented. As a result, we present some primal-dual-based contraction methods for solving the saddle-point problem. These contraction methods are in the prediction-correction fashion in the sense that the predictor is generated by a primal-dual method and it is corrected by some simple correction step at each iteration. In addition, based on the context of contraction type methods, we provide a novel theoretical framework for analyzing the convergence of primal-dual algorithms which simplifies existing convergence analysis substantially.

Citation Context

... noisy image, see e.g. [6, 9, 22]. Note that we can consider the saddle-point problem in a more general setting, for example, exactly as [6]: min_{x∈X} max_{y∈Y} g(x) − ⟨Ax, y⟩ − f*(y)  (1.2), where X ⊂ 𝒳 and Y ⊂ 𝒴 are closed convex sets; ...

Continuous Multiclass Labeling Approaches and Algorithms

by J. Lellmann - SIAM J. Imag. Sci , 2011
"... We study convex relaxations of the image labeling problem on a con-tinuous domain with regularizers based on metric interaction potentials. The generic framework ensures existence of minimizers and covers a wide range of relaxations of the originally combinatorial problem. We focus on two specific r ..."
Abstract - Cited by 28 (5 self) - Add to MetaCart
We study convex relaxations of the image labeling problem on a continuous domain with regularizers based on metric interaction potentials. The generic framework ensures existence of minimizers and covers a wide range of relaxations of the originally combinatorial problem. We focus on two specific relaxations that differ in flexibility and simplicity – one can be used to tightly relax any metric interaction potential, while the other one only covers Euclidean metrics but requires less computational effort. For solving the nonsmooth discretized problem, we propose a globally convergent Douglas-Rachford scheme, and show that a sequence of dual iterates can be recovered in order to provide a posteriori optimality bounds. In a quantitative comparison to two other first-order methods, the approach shows competitive performance on synthetic and real-world images. By combining the method with an improved binarization technique for non-standard potentials, we were able to routinely recover discrete solutions within 1%–5% of the global optimum for the combinatorial image labeling problem.

1 Problem Formulation
The multi-class image labeling problem consists in finding, for each pixel x in the image domain Ω ⊆ R^d, a label ℓ(x) ∈ {1,..., l} which assigns one of l class labels to x so that the labeling function ℓ adheres to some local data fidelity as well as nonlocal spatial coherency constraints. This problem class occurs in many applications, such as segmentation, multiview reconstruction, stitching, and inpainting [PCF06]. We consider the variational formulation

inf_{ℓ:Ω→{1,...,l}} f(ℓ),   f(ℓ) := ∫_Ω s(x, ℓ(x)) dx  [data term]  +  J(ℓ)  [regularizer]
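
For reference, a generic Douglas-Rachford iteration for a problem of the form min_x f(x) + g(x) is sketched below in Python; this is only the textbook splitting the paper builds on, not the specific labeling scheme or the dual-iterate recovery described in the abstract, and the prox interface is an assumption of the sketch.

```python
def douglas_rachford(prox_f, prox_g, z0, n_iters=200):
    # Generic Douglas-Rachford splitting for min_x f(x) + g(x), given the two
    # proximity operators (prox_f(v) = prox_{gamma*f}(v), likewise prox_g,
    # for some fixed gamma > 0 absorbed into the callables).
    z = z0.copy()
    for _ in range(n_iters):
        x = prox_f(z)                   # proximal step on f
        z = z + prox_g(2 * x - z) - x   # reflection and averaging update
    return prox_f(z)                    # the x-iterate converges to a minimizer
```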

Citation Context

...s the Douglas-Rachford approach also allows one to use the primal-dual gap f(u^(k)) − f_D(w′′^(k)) (130) as a stopping criterion. Very recently, a generalization of the FPD method [PCBC09a] has been proposed [CP10]. The authors show that under certain circumstances, their method is equivalent to Douglas-Rachford splitting. As a result, it is possible to show that Alg. 4 can alternatively be interpreted as an ap...

Distributed basis pursuit

by João F. C. Mota, João M. F. Xavier, Pedro M. Q. Aguiar, Markus Püschel - IEEE Trans. Sig. Proc , 2012
"... Abstract—We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least-norm solution of the underdetermined linear system and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a s ..."
Abstract - Cited by 28 (6 self) - Add to MetaCart
Abstract—We propose a distributed algorithm for solving the optimization problem Basis Pursuit (BP). BP finds the least-norm solution of an underdetermined linear system and is used, for example, in compressed sensing for reconstruction. Our algorithm solves BP on a distributed platform such as a sensor network, and is designed to minimize the communication between nodes. The algorithm only requires the network to be connected, has no notion of a central processing node, and no node has access to the entire matrix at any time. We consider two scenarios in which either the columns or the rows of the matrix are distributed among the compute nodes. Our algorithm, named D-ADMM, is a decentralized implementation of the alternating direction method of multipliers. We show through numerical simulation that our algorithm requires considerably less communication between the nodes than state-of-the-art algorithms. Index Terms—Augmented Lagrangian, basis pursuit (BP), distributed optimization, sensor networks.
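
As background for the abstract above, here is a centralized ADMM sketch for basis pursuit (min ‖x‖₁ subject to Ax = b) in Python. This is not D-ADMM: the paper's point is precisely to decentralize such a computation over a network so that no node ever holds the whole matrix; the symbols A, b and the penalty rho are illustrative notation of this sketch only.

```python
import numpy as np

def basis_pursuit_admm(A, b, rho=1.0, n_iters=200):
    # Centralized ADMM for min ||x||_1 subject to A x = b, via the splitting
    # min ||z||_1 + indicator{A x = b}(x) subject to x = z.
    m, n = A.shape
    AAt_inv = np.linalg.inv(A @ A.T)   # factor for projecting onto {x : A x = b}
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    for _ in range(n_iters):
        v = z - u
        x = v - A.T @ (AAt_inv @ (A @ v - b))                            # projection step
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - 1.0 / rho, 0.0)  # soft-thresholding
        u = u + x - z                                                    # scaled dual update
    return z
```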

Citation Context

...ven if the quadratic term of … in (25) is linearized, which can often simplify the solution of that optimization problem. For more properties of ADMM and its relation to other algorithms see [59], [60]. We now present a generalization of ADMM, which we call “generalized ADMM.” The generalized ADMM solves minimize … subject to … (28) where … is the variable, the functions … are convex, and … are full column-ran...

Parallel proximal algorithm for image restoration using hybrid regularization

by Nelly Pustelnik, Caroline Chaux, Jean-christophe Pesquet - IEEE Transactions on Image Processing , 2011
"... Regularization approaches have demonstrated their effectiveness for solving ill-posed problems. However, in the context of variational restoration methods, a challenging question remains, namely how to find a good regularizer. While total variation introduces staircase effects, wavelet domain regula ..."
Abstract - Cited by 25 (8 self) - Add to MetaCart
Regularization approaches have demonstrated their effectiveness for solving ill-posed problems. However, in the context of variational restoration methods, a challenging question remains, namely how to find a good regularizer. While total variation introduces staircase effects, wavelet domain regularization brings other artefacts, e.g. ringing. However, a trade-off can be made by introducing a hybrid regularization including several terms not necessarily acting in the same domain (e.g. spatial and wavelet transform domains). While this approach was shown to provide good results for solving deconvolution problems in the presence of additive Gaussian noise, an important issue is to efficiently deal with this hybrid regularization for more general noise models. To solve this problem, we adopt a convex optimization framework where the criterion to be minimized is split into the sum of more than two terms. For spatial domain regularization, isotropic or anisotropic total variation definitions using various gradient filters are considered. An accelerated version of the Parallel Proximal Algorithm is proposed to perform the minimization. Some difficulties in the computation of the proximity operators involved in this algorithm are also addressed in this paper. Numerical experiments performed in the context of Poisson data recovery show the good behaviour of the algorithm as well as promising results concerning the use of hybrid regularization techniques.

Citation Context

...proach. One can note that, even if the paper is devoted to the case of convolutive operators, this approach could be generalized to more general linear operators. Note that the primal-dual approaches [71, 72, 73, 74, 75] can offer alternative solutions to the ones developed in this paper. However, one of the advantages of PPXA is that it easily leads to efficient parallel implementations ...

A convex approach to minimal partitions

by Antonin Chambolle, Daniel Cremers, Thomas Pock - J. IMAGING SCI , 2012
"... We describe a convex relaxation for a family of problems of minimal perimeter partitions. The minimization of the relaxed problem can be tackled numerically, we describe an algorithm and show some results. In most cases, our relaxed problem finds a correct numerical approximation of the optimal solu ..."
Abstract - Cited by 24 (10 self) - Add to MetaCart
We describe a convex relaxation for a family of problems of minimal perimeter partitions. The minimization of the relaxed problem can be tackled numerically; we describe an algorithm and show some results. In most cases, our relaxed problem finds a correct numerical approximation of the optimal solution: we give some arguments to explain why it should be so, and also discuss some situations where it fails.

Citation Context

...n [57]. The version we propose (first in [45], with a first proof of convergence), which is slightly different, is improved in the sense that we can provide an estimate for the error in the objective [18, 20]. It is in fact a variant of the Douglas-Rachford splitting method [39], and is inspired by similar algorithms in [44, 47]. (We refer to [45, 18, 20] for details.) We fix a scale h = 1/N > 0, and fi...
