Results 1–10 of 50
Two-Phase Kernel Estimation for Robust Motion Deblurring
Cited by 92 (4 self)
Abstract. We discuss a few new motion deblurring problems that are significant to kernel estimation and non-blind deconvolution. We found that strong edges do not always benefit kernel estimation, but can instead, under certain circumstances, degrade it. This finding leads to a new metric to measure the usefulness of image edges in motion deblurring and a gradient selection process to mitigate their possible adverse effect. We also propose an efficient and high-quality kernel estimation method based on a spatial prior and iterative support detection (ISD) kernel refinement, which avoids hard thresholding of the kernel elements to enforce sparsity. We employ a TV-ℓ1 deconvolution model, solved with a new variable-substitution scheme to robustly suppress noise.
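The ISD step repeatedly re-detects the kernel's support rather than hard-thresholding it once with a fixed cutoff. A minimal, hypothetical sketch of a threshold-free support rule in this spirit (a "largest jump" heuristic, not the paper's exact criterion; all names are illustrative):

```python
def isd_support(x):
    # Sort magnitudes in descending order and cut at the largest gap
    # between consecutive magnitudes: a simplified, threshold-free rule.
    mags = sorted((abs(v) for v in x), reverse=True)
    gaps = [mags[i] - mags[i + 1] for i in range(len(mags) - 1)]
    k = gaps.index(max(gaps)) + 1          # keep the k largest entries
    cutoff = mags[k - 1]
    return [i for i, v in enumerate(x) if abs(v) >= cutoff]

# Two significant kernel elements, two near-zero ones:
print(isd_support([5.0, -4.8, 0.1, 0.05]))   # -> [0, 1]
```

The cut adapts to the data instead of relying on a fixed threshold, which is the point of avoiding hard thresholding.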
Alternating direction augmented Lagrangian methods for semidefinite programming
, 2009
Cited by 68 (2 self)
We present an alternating direction method based on an augmented Lagrangian framework for solving semidefinite programming (SDP) problems in standard form. At each iteration, the algorithm, also known as a two-splitting scheme, minimizes the dual augmented Lagrangian function sequentially with respect to the Lagrange multipliers corresponding to the linear constraints, then the dual slack variables, and finally the primal variables, keeping the other variables fixed in each minimization. Convergence is proved using a fixed-point argument. A multiple-splitting algorithm is then proposed to handle SDPs with inequality constraints and positivity constraints directly, without transforming them into equality constraints in standard form. Finally, numerical results for frequency assignment, maximum stable set, and binary integer quadratic programming problems are presented to demonstrate the robustness and efficiency of our algorithm.
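The alternating-direction pattern (minimize the augmented Lagrangian over one block of variables at a time, then update the multipliers) can be illustrated on a toy scalar splitting, min over x of 0.5*(x - a)**2 + lam*|z| subject to x = z. This is not an SDP, just a minimal sketch of the update pattern with illustrative names:

```python
def shrink(v, t):
    # soft-thresholding: argmin_z t*|z| + 0.5*(z - v)**2
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def admm_scalar(a, lam, rho=1.0, iters=100):
    x = z = u = 0.0                              # u: scaled multiplier
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)    # smooth block, closed form
        z = shrink(x + u, lam / rho)             # nonsmooth block, closed form
        u += x - z                               # multiplier (dual) update
    return x

print(admm_scalar(3.0, 1.0))   # converges to shrink(3, 1) = 2
```

Each block update is cheap because the other blocks are frozen, which is what makes the sequential minimization in the abstract attractive.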
Approximation Accuracy, Gradient Methods, and Error Bound for Structured Convex Optimization
, 2009
Cited by 38 (1 self)
Convex optimization problems arising in applications, possibly as approximations of intractable problems, are often structured and large scale. When the data are noisy, it is of interest to bound the solution error relative to the (unknown) solution of the original noiseless problem. Related to this is an error bound for the linear convergence analysis of first-order gradient methods for solving these problems. Example applications include compressed sensing, variable selection in regression, TV-regularized image denoising, and sensor network localization.
Analysis and generalizations of the linearized Bregman method
 SIAM J. Imaging Sci.
, 2010
Cited by 36 (9 self)
This paper analyzes and improves the linearized Bregman method for solving the basis pursuit and related sparse optimization problems. The analysis shows that the linearized Bregman method has the exact regularization property; namely, it converges to an exact solution of the basis pursuit problem whenever its smoothing parameter α is greater than a certain value. The analysis is based on showing that the linearized Bregman algorithm is equivalent to gradient descent applied to a certain dual formulation. This result motivates generalizations of the algorithm that enable the use of gradient-based optimization techniques such as line search, Barzilai–Borwein steps, limited-memory BFGS (L-BFGS), nonlinear conjugate gradient, and Nesterov's methods. In the numerical simulations, the two proposed implementations, one using Barzilai–Borwein steps with nonmonotone line search and the other using L-BFGS, gave more accurate solutions in much shorter times than the basic implementation of the linearized Bregman method with the so-called kicking technique.
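The basic iteration alternates a gradient step on the dual variable with a shrinkage step that recovers the primal iterate. A minimal sketch on a toy basis pursuit instance with A = I, where the basis pursuit solution is simply x = b (an illustration, not the paper's implementation; parameter names are illustrative):

```python
def shrink(v, mu):
    # componentwise soft-thresholding
    return [max(abs(t) - mu, 0.0) * (1.0 if t >= 0 else -1.0) for t in v]

def linearized_bregman(b, mu=1.0, delta=1.0, iters=20):
    # Toy instance with A = I, so A^T (b - A x) is just b - x; with a
    # general A, two matrix-vector products replace the subtraction.
    n = len(b)
    v = [0.0] * n                                 # dual variable
    x = [0.0] * n
    for _ in range(iters):
        v = [vi + (bi - xi) for vi, bi, xi in zip(v, b, x)]   # gradient step
        x = [delta * s for s in shrink(v, mu)]                # primal recovery
    return x

print(linearized_bregman([2.0, -0.5]))   # -> [2.0, -0.5]: with A = I, x* = b
```

Writing the iteration this way makes the paper's key observation visible: the v-update is exactly a gradient step, so any gradient-based acceleration can be dropped in.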
Parametric Maximum Flow Algorithms for Fast Total Variation Minimization
, 2007
Cited by 34 (4 self)
This report studies the global minimization of discretized total variation (TV) energies with an L¹ or L² fidelity term using parametric maximum flow algorithms. The TV-L² model [36], also known as the Rudin-Osher-Fatemi (ROF) model, is suitable for restoring images contaminated by Gaussian noise, while the TV-L¹ model [2, 29, 7, 42] is able to remove impulsive noise from greyscale images and to perform multiscale decompositions of them. For large-scale applications such as those in medical image (pre)processing, we propose fast and memory-efficient algorithms based on a parametric maximum flow algorithm [19] and the minimum s-t cut representation of TV-based energy functions [26, 17]. Preliminary numerical results on large-scale two-dimensional CT and three-dimensional brain MRI images are presented to illustrate the effectiveness of our approaches.
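For binary signals the minimum s-t cut representation is easy to make concrete: the discrete 1-D TV-L¹ energy lam*sum(|u_i - f_i|) + sum(|u_{i+1} - u_i|) over u in {0,1}^n equals the value of a cut in a small graph, which any max-flow routine minimizes exactly. A sketch using a textbook Edmonds-Karp solver rather than the parametric algorithm of the report (names illustrative):

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along shortest residual paths.
    flow = 0.0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            v = q.popleft()
            for w, c in cap[v].items():
                if c > 0 and w not in parent:
                    parent[w] = v
                    q.append(w)
        if t not in parent:
            return flow, parent          # parent's keys = source side of the cut
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(cap[a][b] for a, b in path)
        for a, b in path:
            cap[a][b] -= aug                        # forward residual down
            cap[b][a] = cap[b].get(a, 0.0) + aug    # reverse residual up
        flow += aug

def tv_l1_binary(f, lam=1.0):
    # Pixel i ending on the sink side of the cut means u_i = 1.
    n = len(f)
    cap = {'s': {}, 't': {}}
    for i in range(n):
        cap[i] = {}
    for i in range(n):
        cap['s'][i] = lam * abs(1 - f[i])    # data cost paid if u_i = 1
        cap[i]['t'] = lam * abs(0 - f[i])    # data cost paid if u_i = 0
    for i in range(n - 1):                   # TV cost paid if neighbors disagree
        cap[i][i + 1] = 1.0
        cap[i + 1][i] = 1.0
    flow, reach = max_flow(cap, 's', 't')
    labels = [0 if i in reach else 1 for i in range(n)]
    return flow, labels

# Isolated outlier in a constant signal is flipped (TV cost 2 > data cost 1):
print(tv_l1_binary([1, 1, 0, 1, 1]))   # -> (1.0, [1, 1, 1, 1, 1])
```

Grey-scale TV-L¹ problems reduce to a family of such binary cuts, one per intensity level, which is what the parametric maximum flow machinery exploits.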
Augmented Lagrangian alternating direction method for matrix separation based on low-rank factorization
, 2011
Cited by 29 (2 self)
The matrix separation problem aims to separate a low-rank matrix and a sparse matrix from their sum. This problem has recently attracted considerable research attention due to its wide range of potential applications. Nuclear-norm minimization models have been proposed for matrix separation and proven to yield exact separations under suitable conditions. These models, however, typically require the calculation of a full or partial singular value decomposition (SVD) at every iteration, which can become increasingly costly as matrix dimensions and rank grow. To improve scalability, in this paper we propose and investigate an alternative approach based on solving a nonconvex, low-rank factorization model by an augmented Lagrangian alternating direction method. Numerical studies indicate that the effectiveness of the proposed model is limited to problems where the sparse matrix does not dominate the low-rank one in magnitude, though this limitation can be alleviated by certain data-preprocessing techniques. On the other hand, extensive numerical results show that, within its applicability range, the proposed method generally solves problems much faster than nuclear-norm minimization algorithms, and often provides better recoverability.
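The factorization idea is to represent the low-rank part as a product and avoid SVDs entirely. A plain alternating least-squares/shrinkage scheme for rank one (a simplified stand-in for the paper's augmented Lagrangian ADM; tau and all names are illustrative):

```python
def soft(v, t):
    # soft-thresholding
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def separate_rank1(M, tau, sweeps=50):
    # Model M ~ L + S, with L = x y^T rank-one and S sparse.
    m, n = len(M), len(M[0])
    x = [1.0] * m
    S = [[0.0] * n for _ in range(m)]
    L = [[0.0] * n for _ in range(m)]
    for _ in range(sweeps):
        R = [[M[i][j] - S[i][j] for j in range(n)] for i in range(m)]
        sx = sum(v * v for v in x)
        y = [sum(x[i] * R[i][j] for i in range(m)) / sx for j in range(n)]  # LS in y
        sy = sum(v * v for v in y)
        x = [sum(R[i][j] * y[j] for j in range(n)) / sy for i in range(m)]  # LS in x
        L = [[x[i] * y[j] for j in range(n)] for i in range(m)]
        S = [[soft(M[i][j] - L[i][j], tau) for j in range(n)] for i in range(m)]
    return L, S

# A clean rank-one matrix is fit exactly and the sparse part stays zero:
L, S = separate_rank1([[3.0, 4.0], [6.0, 8.0]], tau=0.5)
```

Each sweep costs only matrix-vector work, which is the scalability argument the abstract makes against per-iteration SVDs.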
An augmented Lagrangian method for total variation video restoration
 IEEE Trans. Image Process.
, 2011
Cited by 25 (6 self)
Abstract. This paper presents a fast algorithm for restoring video sequences. The proposed algorithm, as opposed to existing methods, does not treat video restoration as a sequence of image restoration problems. Rather, it treats a video sequence as a space-time volume and applies space-time total variation regularization to enhance the smoothness of the solution. The optimization problem is solved by transforming the original unconstrained minimization problem into an equivalent constrained minimization problem. An augmented Lagrangian method is used to handle the constraints, and an alternating direction method (ADM) is used to iteratively solve the subproblems. The proposed algorithm has a wide range of applications, including video deblurring and denoising, video disparity refinement, and hot-air turbulence effect reduction.
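The constrained reformulation introduces an auxiliary variable w for the gradient, w = Du, and alternates a quadratic solve for u, a shrinkage for w, and a multiplier update. A 1-D scalar analogue of that splitting (a hedged sketch, not the paper's space-time implementation; rho and all names are illustrative):

```python
def soft(v, t):
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def solve_tridiag(diag, off, rhs):
    # Thomas algorithm for a symmetric tridiagonal system.
    n = len(diag)
    c, d = [0.0] * n, [0.0] * n
    c[0] = off[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - off[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = off[i] / m
        d[i] = (rhs[i] - off[i - 1] * d[i - 1]) / m
    for i in range(n - 2, -1, -1):
        d[i] -= c[i] * d[i + 1]
    return d

def tv_denoise_1d(f, lam, rho=1.0, iters=200):
    # min_u 0.5*||u - f||^2 + lam*||D u||_1 via the splitting w = D u.
    n = len(f)
    w = [0.0] * (n - 1)
    y = [0.0] * (n - 1)                      # scaled multipliers
    diag = [1.0 + rho * (2.0 if 0 < i < n - 1 else 1.0) for i in range(n)]
    off = [-rho] * (n - 1)
    u = list(f)
    for _ in range(iters):
        v = [w[i] - y[i] for i in range(n - 1)]
        rhs = list(f)                        # rhs = f + rho * D^T v
        for i in range(n - 1):
            rhs[i] -= rho * v[i]
            rhs[i + 1] += rho * v[i]
        u = solve_tridiag(diag, off, rhs)    # quadratic subproblem
        du = [u[i + 1] - u[i] for i in range(n - 1)]
        w = [soft(du[i] + y[i], lam / rho) for i in range(n - 1)]  # shrinkage
        y = [y[i] + du[i] - w[i] for i in range(n - 1)]            # dual update
    return u

# A constant signal is a fixed point of the iteration:
print(tv_denoise_1d([2.0, 2.0, 2.0, 2.0], 0.5))
```

In the video setting the tridiagonal solve becomes a space-time Laplacian system, but the three-step structure is the same.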
An Efficient Algorithm for Total Variation Regularization with Applications to the Single Pixel Camera and Compressive Sensing
, 2009
AUGMENTED LAGRANGIAN METHOD FOR TOTAL VARIATION RESTORATION WITH NON-QUADRATIC FIDELITY
Cited by 21 (3 self)
Abstract. Recently, the augmented Lagrangian method has been successfully applied to image restoration with an L2 fidelity. In this paper we extend the method to total variation (TV) restoration models with non-quadratic fidelities. We first introduce the method and present the iterative algorithm for TV restoration with a quite general fidelity. In each iteration, three subproblems need to be solved, two of which can be solved very efficiently via an FFT implementation or a closed-form solution; in general, the third subproblem needs iterative solvers. We then apply our method to TV restoration with L1 and Kullback-Leibler (KL) fidelities, two common and important data terms for deblurring images corrupted by impulsive noise and Poisson noise, respectively. For these typical fidelities, we show that the third subproblem also has a closed-form solution and thus can be solved efficiently. In addition, a convergence analysis of these algorithms is given, which could not be obtained by previous analysis techniques.
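The closed-form third subproblems for the L1 and KL fidelities reduce to one-dimensional minimizations with explicit solutions. A sketch, assuming this standard form of the subproblem (parameter names are illustrative):

```python
import math

def soft(v, t):
    return max(abs(v) - t, 0.0) * (1.0 if v >= 0 else -1.0)

def l1_prox(f, q, lam, rho):
    # argmin_v lam*|v - f| + (rho/2)*(v - q)**2
    return f + soft(q - f, lam / rho)

def kl_prox(f, q, rho):
    # argmin_v (v - f*log(v)) + (rho/2)*(v - q)**2 for v > 0, f > 0.
    # Stationarity 1 - f/v + rho*(v - q) = 0 is a quadratic in v;
    # take the positive root.
    b = rho * q - 1.0
    return (b + math.sqrt(b * b + 4.0 * rho * f)) / (2.0 * rho)

print(l1_prox(1.0, 4.0, 1.0, 1.0))   # -> 3.0
print(kl_prox(1.0, 1.0, 1.0))        # -> 1.0
```

Because each call is a shrinkage or a quadratic-formula evaluation, the "third subproblem" adds essentially no per-iteration cost, which is the efficiency claim in the abstract.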
An Alternating Direction Algorithm for Matrix Completion with Nonnegative Factors
Cited by 17 (3 self)
Abstract. This paper introduces a novel algorithm for the nonnegative matrix factorization and completion problem, which aims to find nonnegative matrices X and Y from a subset of entries of a nonnegative matrix M so that XY approximates M. This problem is closely related to two existing problems, nonnegative matrix factorization and low-rank matrix completion, in the sense that it kills two birds with one stone. As it takes advantage of both nonnegativity and low rank, its results can be superior to those of the two problems alone. Our algorithm is applied to minimizing a nonconvex constrained least-squares formulation and is based on the classic alternating direction augmented Lagrangian method. Preliminary convergence properties and numerical simulation results are presented. Compared to a recent algorithm for nonnegative random matrix factorization, the proposed algorithm yields comparable factorizations while accessing only half of the matrix entries. On tasks of recovering incomplete grayscale and hyperspectral images, the results of the proposed algorithm are of overall better quality than those of two recent algorithms for matrix completion.
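The "factorize over the observed entries only" idea can be sketched with a masked rank-one alternating nonnegative least-squares scheme; this is a simplified stand-in for the paper's alternating direction method, with illustrative names:

```python
def nmf_complete_rank1(M, mask, sweeps=100):
    # mask[i][j] == 1 marks an observed entry of M.  Alternate
    # nonnegative least-squares updates of x and y over observed
    # entries only, then complete with the product x y^T.
    m, n = len(M), len(M[0])
    x = [1.0] * m
    y = [0.0] * n
    for _ in range(sweeps):
        for j in range(n):
            num = sum(x[i] * M[i][j] for i in range(m) if mask[i][j])
            den = sum(x[i] * x[i] for i in range(m) if mask[i][j])
            y[j] = max(num / den, 0.0)   # clamp keeps the factor nonnegative
        for i in range(m):
            num = sum(M[i][j] * y[j] for j in range(n) if mask[i][j])
            den = sum(y[j] * y[j] for j in range(n) if mask[i][j])
            x[i] = max(num / den, 0.0)
    return [[x[i] * y[j] for j in range(n)] for i in range(m)]

# Rank-one nonnegative matrix with one hidden entry; the product fills it in
# (the completed entry [0][1] approaches 4.0):
print(nmf_complete_rank1([[3.0, 4.0], [6.0, 8.0]], [[1, 0], [1, 1]]))
```

Fitting only observed entries while keeping the factors nonnegative is what lets such methods recover missing entries from partial data, as in the grayscale and hyperspectral experiments the abstract describes.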