Results 1–10 of 299
A Nonlinear Primal-Dual Method for Total Variation-Based Image Restoration
, 1995
"... . We present a new method for solving total variation (TV) minimization problems in image restoration. The main idea is to remove some of the singularity caused by the nondifferentiability of the quantity jruj in the definition of the TVnorm before we apply a linearization technique such as Newton ..."
Abstract

Cited by 162 (22 self)
 Add to MetaCart
We present a new method for solving total variation (TV) minimization problems in image restoration. The main idea is to remove some of the singularity caused by the non-differentiability of the quantity |∇u| in the definition of the TV norm before we apply a linearization technique such as Newton's method. This is accomplished by introducing an additional variable for the flux quantity appearing in the gradient of the objective function. Our method can be viewed as a primal-dual method, as proposed by Conn and Overton [8] and Andersen [3] for the minimization of a sum of Euclidean norms. Experimental results show that the new method has much better global convergence behaviour than the primal Newton's method. 1. Introduction. During some phases of the manipulation of an image, random noise and blurring are usually introduced. The presence of this noise and blurring makes the later phases of image processing difficult and inaccurate. The algorithms for noise removal and debl...
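As a concrete illustration of the singularity being removed, here is a minimal NumPy sketch (our own construction, not the authors' code; the function names and the smoothing parameter β are our choices) of a smoothed TV quantity and of the flux variable ∇u/√(|∇u|² + β) that the extra primal-dual unknown represents:

```python
import numpy as np

def smoothed_tv(u, beta=1e-3):
    """Smoothed total variation: sum over pixels of sqrt(|grad u|^2 + beta).

    beta > 0 removes the non-differentiability at |grad u| = 0, which is
    the singularity the primal-dual method is designed to handle."""
    ux = np.diff(u, axis=1, append=u[:, -1:])  # forward difference in x
    uy = np.diff(u, axis=0, append=u[-1:, :])  # forward difference in y
    return float(np.sum(np.sqrt(ux**2 + uy**2 + beta)))

def flux(u, beta=1e-3):
    """The flux (dual) variable w = grad u / sqrt(|grad u|^2 + beta)."""
    ux = np.diff(u, axis=1, append=u[:, -1:])
    uy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(ux**2 + uy**2 + beta)
    return ux / mag, uy / mag

rng = np.random.default_rng(0)
flat = np.ones((8, 8))
noisy = flat + 0.5 * rng.standard_normal((8, 8))
```

Note that the flux magnitude is strictly below 1 everywhere, which is the bound the dual variable must satisfy.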
Constraint Preconditioning for Indefinite Linear Systems
 SIAM J. Matrix Anal. Appl
, 2000
"... . The problem of nding good preconditioners for the numerical solution of indenite linear systems is considered. Special emphasis is put on preconditioners that have a 2 2 block structure and which incorporate the (1; 2) and (2; 1) blocks of the original matrix. Results concerning the spectrum and ..."
Abstract

Cited by 73 (10 self)
 Add to MetaCart
The problem of finding good preconditioners for the numerical solution of indefinite linear systems is considered. Special emphasis is put on preconditioners that have a 2×2 block structure and which incorporate the (1,2) and (2,1) blocks of the original matrix. Results concerning the spectrum and form of the eigenvectors of the preconditioned matrix and its minimum polynomial are given. The consequences of these results are considered for a variety of Krylov subspace methods. Numerical experiments validate these conclusions. Key words. preconditioning, indefinite matrices, Krylov subspace methods AMS subject classifications. 65F10, 65F15, 65F50 1. Introduction. In this paper, we are concerned with investigating a new class of preconditioners for indefinite systems of linear equations of a sort which arise in constrained optimization as well as in least-squares, saddle-point and Stokes problems. We attempt to solve the indefinite linear system $\underbrace{\begin{pmatrix} A & B^{T} \\ B & 0 \end{pmatrix}}_{\mathcal{A}} \begin{pmatrix} x_{1} \\ x$...
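The spectral claim can be checked numerically on a small random saddle-point system. The sketch below is our own construction (with the (1,1) block of the constraint preconditioner taken as G = I): for an m-row full-rank constraint block, the known result is that the preconditioned matrix has at least 2m eigenvalues equal to 1, while the rest are Rayleigh quotients of A on the nullspace of B.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 6, 2
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)       # (1,1) block, symmetric positive definite
B = rng.standard_normal((m, n))   # full-rank constraint block

def saddle(block11):
    """Assemble the 2x2 block saddle-point matrix [[block11, B^T], [B, 0]]."""
    return np.block([[block11, B.T], [B, np.zeros((m, m))]])

K = saddle(A)            # original indefinite system
P = saddle(np.eye(n))    # constraint preconditioner with G = I

eigs = np.linalg.eigvals(np.linalg.solve(P, K))
unit = int(np.sum(np.abs(eigs - 1.0) < 1e-4))  # eigenvalues clustered at 1
```

The unit eigenvalue is defective (it comes with 2×2 Jordan blocks), so a loose tolerance is used when counting it in floating point.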
Convergence analysis of pseudo-transient continuation
 SIAM J. Num. Anal
, 1998
"... Abstract. Pseudotransient continuation (Ψtc) is a wellknown and physically motivated technique for computation of steady state solutions of timedependent partial differential equations. Standard globalization strategies such as line search or trust region methods often stagnate at local minima. Ψ ..."
Abstract

Cited by 61 (25 self)
 Add to MetaCart
Abstract. Pseudo-transient continuation (Ψtc) is a well-known and physically motivated technique for computation of steady-state solutions of time-dependent partial differential equations. Standard globalization strategies such as line search or trust region methods often stagnate at local minima. Ψtc succeeds in many of these cases by taking advantage of the underlying PDE structure of the problem. Though widely employed, the convergence of Ψtc is rarely discussed. In this paper we prove convergence for a generic form of Ψtc and illustrate it with two practical strategies.
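A scalar toy version conveys the mechanics: each step solves a shifted Newton system (1/δ + F′(u)) s = −F(u), and the pseudo-time step δ is grown by switched evolution relaxation (SER), δ ← δ·‖F(u_old)‖/‖F(u_new)‖, so the iteration turns into plain Newton as the residual drops. This is our own minimal sketch, not the paper's generic form:

```python
import math

def psi_tc(F, dF, u, delta=1.0, iters=50):
    """Pseudo-transient continuation with the SER time-step update."""
    fu = F(u)
    for _ in range(iters):
        s = -fu / (1.0 / delta + dF(u))   # shifted Newton step
        u = u + s
        fu_new = F(u)
        if fu_new == 0.0:
            return u
        delta = delta * abs(fu) / abs(fu_new)  # SER: delta grows as ||F|| shrinks
        fu = fu_new
    return u

# arctan(u) = 0: plain Newton diverges from u0 = 3, Psi-tc converges
root = psi_tc(math.atan, lambda u: 1.0 / (1.0 + u * u), 3.0)
```

The example is the classic one where undamped Newton overshoots and diverges; the 1/δ shift damps the early steps exactly as a small pseudo-time step would.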
Object-oriented software for quadratic programming
 ACM Transactions on Mathematical Software
, 2001
"... The objectoriented software package OOQP for solving convex quadratic programming problems (QP) is described. The primaldual interior point algorithms supplied by OOQP are implemented in a way that is largely independent of the problem structure. Users may exploit problem structure by supplying li ..."
Abstract

Cited by 60 (2 self)
 Add to MetaCart
The object-oriented software package OOQP for solving convex quadratic programming problems (QP) is described. The primal-dual interior-point algorithms supplied by OOQP are implemented in a way that is largely independent of the problem structure. Users may exploit problem structure by supplying linear algebra, problem data, and variable classes that are customized to their particular applications. The OOQP distribution contains default implementations that solve several important QP problem types, including general sparse and dense QPs, bound-constrained QPs, and QPs arising from support vector machines and Huber regression. The implementations supplied with the OOQP distribution are based on such well-known linear algebra packages as MA27/57, LAPACK, and PETSc. OOQP demonstrates the usefulness of object-oriented design in optimization software development, and establishes standards that can be followed in the design of software packages for other classes of optimization problems. A number of the classes in OOQP may also be reusable directly in other codes.
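The layered design the abstract describes can be suggested in a few lines: the algorithm layer is written only against an abstract linear-algebra interface, and structure-specific solvers are plugged in underneath. This is a schematic in the spirit of OOQP, not its actual C++ class hierarchy; all names here are hypothetical:

```python
from abc import ABC, abstractmethod

class KKTSolver(ABC):
    """Linear-algebra layer: each subclass exploits one problem structure."""
    @abstractmethod
    def solve(self, rhs): ...

class DiagonalKKTSolver(KKTSolver):
    """Toy structure: a diagonal KKT matrix, standing in for sparse/dense variants."""
    def __init__(self, diag):
        self.diag = diag
    def solve(self, rhs):
        return [r / d for r, d in zip(rhs, self.diag)]

class InteriorPointLoop:
    """Algorithm layer: knows nothing about how the KKT system is factored."""
    def __init__(self, solver: KKTSolver):
        self.solver = solver
    def step(self, residual):
        return self.solver.solve([-r for r in residual])

loop = InteriorPointLoop(DiagonalKKTSolver([2.0, 4.0]))
direction = loop.step([2.0, -8.0])
```

Swapping in a sparse or dense factorization only requires a new `KKTSolver` subclass; the interior-point loop is untouched, which is the reuse the package advertises.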
An interior-point method for large-scale ℓ1-regularized logistic regression
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2007
"... Recently, a lot of attention has been paid to ℓ1regularization based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as ..."
Abstract

Cited by 57 (4 self)
 Add to MetaCart
Recently, a lot of attention has been paid to ℓ1-regularization-based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as ℓ1-regularized least-squares programs (LSPs), which can be reformulated as convex quadratic programs and then solved by several standard methods such as interior-point methods, at least for small and medium-size problems. In this paper, we describe a specialized interior-point method for solving large-scale ℓ1-regularized LSPs that uses the preconditioned conjugate gradients algorithm to compute the search direction. The interior-point method can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC. It can efficiently solve large dense problems that arise in sparse signal recovery with orthogonal transforms, by exploiting fast algorithms for these transforms. The method is illustrated on a magnetic resonance imaging data set.
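The reformulation as a convex QP mentioned in the abstract is the standard bound-variable splitting, which replaces the non-smooth ℓ1 term with linear inequality constraints:

```latex
\min_{x}\; \|Ax-b\|_2^2 + \lambda\|x\|_1
\quad\Longleftrightarrow\quad
\min_{x,u}\; \|Ax-b\|_2^2 + \lambda\sum_{i=1}^{n} u_i
\quad \text{s.t.}\quad -u_i \le x_i \le u_i,\ i=1,\dots,n .
```

At any optimum $u_i = |x_i|$, so the two problems have the same minimizers in $x$; an interior-point method then handles the $2n$ inequalities with logarithmic barriers.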
Recent computational developments in Krylov subspace methods for linear systems
 NUMER. LINEAR ALGEBRA APPL
, 2007
"... Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are metho ..."
Abstract

Cited by 48 (12 self)
 Add to MetaCart
Many advances in the development of Krylov subspace methods for the iterative solution of linear systems during the last decade and a half are reviewed. These new developments include different versions of restarted, augmented, deflated, flexible, nested, and inexact methods. Also reviewed are methods specifically tailored to systems with special properties such as special forms of symmetry and those depending on one or more parameters.
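Restarting is the simplest of the devices surveyed. The compact GMRES(m) sketch below (our own, using a modified Gram-Schmidt Arnoldi process and a small least-squares solve per cycle) shows where the restart happens: the Krylov basis is discarded every m steps and rebuilt from the current residual.

```python
import numpy as np

def restarted_gmres(A, b, x0=None, m=10, restarts=20, tol=1e-10):
    """GMRES(m): build an m-step Arnoldi basis, minimize the residual
    over it via a small least-squares problem, then restart."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    for _ in range(restarts):
        r = b - A @ x
        beta = np.linalg.norm(r)
        if beta < tol:
            break
        Q = np.zeros((b.size, m + 1))
        H = np.zeros((m + 1, m))
        Q[:, 0] = r / beta
        k = m
        for j in range(m):
            w = A @ Q[:, j]
            for i in range(j + 1):            # modified Gram-Schmidt
                H[i, j] = Q[:, i] @ w
                w = w - H[i, j] * Q[:, i]
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-14:           # happy breakdown
                k = j + 1
                break
            Q[:, j + 1] = w / H[j + 1, j]
        e1 = np.zeros(k + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[: k + 1, :k], e1, rcond=None)
        x = x + Q[:, :k] @ y                  # restart from the updated iterate
    return x

rng = np.random.default_rng(2)
n = 30
A = 2 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = A @ rng.standard_normal(n)
x = restarted_gmres(A, b)
```

Augmented and deflated variants differ precisely in what they carry across this restart boundary instead of throwing the whole basis away.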
Preconditioning indefinite systems in interior point methods for optimization
 Computational Optimization and Applications
, 2004
"... Abstract. Every Newton step in an interiorpoint method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today’s codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior point methods causes unavoidable il ..."
Abstract

Cited by 44 (13 self)
 Add to MetaCart
Abstract. Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today’s codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior-point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches, they still capture most of the numerical properties of the preconditioned system. The spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public-domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used. Keywords: interior-point methods, iterative solvers, preconditioners
"Coarse" Integration/Bifurcation Analysis via Microscopic Simulators: microGalerkin methods
"... We present a timestepper based approach to the #coarse" integration and stability #bifurcation analysis of distributed reacting system models. The methods we discuss are applicable to systems for which the traditional modeling approach through macroscopic evolution equations #usually partial di# ..."
Abstract

Cited by 41 (23 self)
 Add to MetaCart
We present a timestepper-based approach to the "coarse" integration and stability/bifurcation analysis of distributed reacting system models. The methods we discuss are applicable to systems for which the traditional modeling approach through macroscopic evolution equations (usually partial differential equations, PDEs) is not possible because the PDEs are not available in closed form. If an alternative, microscopic (e.g., Monte Carlo or Lattice Boltzmann) description of the physics is available, we illustrate how this microscopic simulator can be enabled (through a computational superstructure) to perform certain integration and numerical bifurcation analysis tasks directly at the coarse, systems level. This approach, when successful, can circumvent the derivation of accurate, closed-form, macroscopic PDE descriptions of the system. The direct "systems level" analysis of microscopic process models, facilitated through such numerical "enabling technologies", may, if practical, advance our understanding and use of nonequilibrium systems.
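One simple instance of the timestepper idea is coarse projective integration: run the black-box microscopic simulator for a few inner steps, estimate the coarse time derivative from the last two states, and leap forward. In the sketch below the "microscopic simulator" is just an explicit Euler step for du/dt = −u, standing in for a Monte Carlo or lattice Boltzmann code; the function names and step sizes are our own choices:

```python
import math

def coarse_projective_step(u, micro_step, dt, k, DT):
    """Take k fine steps with the (black-box) micro simulator, estimate
    the coarse time derivative, then project DT forward in one leap."""
    for _ in range(k):
        u_prev, u = u, micro_step(u, dt)
    slope = (u - u_prev) / dt
    return u + DT * slope

micro = lambda u, dt: u + dt * (-u)   # stand-in micro simulator: Euler for du/dt = -u

u = 1.0
for _ in range(10):                    # each outer step covers k*dt + DT = 0.1 time units
    u = coarse_projective_step(u, micro, dt=0.01, k=5, DT=0.05)
```

Half of each outer interval is covered for free by the projective leap, which is the cost saving the approach trades against some extrapolation error.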
A Multigrid Method For Distributed Parameter Estimation Problems
 Trans. Numer. Anal
, 2001
"... . This paper considers problems of distributed parameter estimation from data measurements on solutions of partial differential equations (PDEs). A nonlinear least squares functional is minimized to approximately recover the sought parameter function (i.e., the model). This functional consists of a ..."
Abstract

Cited by 39 (13 self)
 Add to MetaCart
This paper considers problems of distributed parameter estimation from data measurements on solutions of partial differential equations (PDEs). A nonlinear least-squares functional is minimized to approximately recover the sought parameter function (i.e., the model). This functional consists of a data-fitting term, involving the solution of a finite volume or finite element discretization of the forward differential equation, and a Tikhonov-type regularization term, involving the discretization of a mix of model derivatives. We develop a multigrid method for the resulting constrained optimization problem. The method directly addresses the discretized PDE system which defines a critical point of the Lagrangian. The discretization is cell-based. This system is strongly coupled when the regularization parameter is small. Moreover, the compactness of the discretization scheme does not necessarily follow from compact discretizations of the forward model and of the regularization term. We therefore employ a Marquardt-type modification on coarser grids. Alternatively, fewer grids are used and a preconditioned Krylov-space method is utilized on the coarsest grid. A collective point relaxation method (weighted Jacobi or a Gauss-Seidel variant) is used for smoothing. We demonstrate the efficiency of our method on a classical model problem from hydrology.
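The smoother mentioned at the end is easy to demonstrate: weighted Jacobi reduces smooth error slowly but damps oscillatory error in a few sweeps, which is all a multigrid smoother is asked to do. A small sketch on the 1D model problem (our own construction, not the paper's collective relaxation on the coupled KKT system):

```python
import numpy as np

def weighted_jacobi(A, b, x, omega=2.0 / 3.0, sweeps=3):
    """Weighted (damped) Jacobi sweeps: x <- x + omega * D^{-1} (b - A x)."""
    d = np.diag(A)
    for _ in range(sweeps):
        x = x + omega * (b - A @ x) / d
    return x

# 1D Poisson matrix; solving A x = 0, so the iterate IS the error
n = 31
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x0 = np.sin(np.arange(1, n + 1) * np.pi * 24 / (n + 1))  # high-frequency error mode
x3 = weighted_jacobi(A, np.zeros(n), x0)
```

For this oscillatory mode each sweep multiplies the error by roughly −0.14, so three sweeps shrink it by almost three orders of magnitude; a smooth mode would barely move, and coarser grids take over there.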
Globalized Newton-Krylov-Schwarz algorithms and software for parallel implicit CFD
 Int. J. High Performance Computing Applications
, 1998
"... Key words. NewtonKrylovSchwarz algorithms, parallel CFD, implicit methods Abstract. Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, parallelization is e ..."
Abstract

Cited by 36 (14 self)
 Add to MetaCart
Key words. Newton-Krylov-Schwarz algorithms, parallel CFD, implicit methods Abstract. Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (ΨNKS) algorithmic framework is presented as a widely applicable answer. This article shows that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, ΨNKS can simultaneously deliver • globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton’s method; • reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and • high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of ΨNKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of ΨNKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here. 1. Introduction. Disparate