Results 1–10 of 210
Robust Anisotropic Diffusion
, 1998
Abstract

Cited by 278 (16 self)
Relations between anisotropic diffusion and robust statistics are described in this paper. Specifically, we show that anisotropic diffusion can be seen as a robust estimation procedure that estimates a piecewise smooth image from a noisy input image. The "edge-stopping" function in the anisotropic diffusion equation is closely related to the error norm and influence function in the robust estimation framework. This connection leads to a new "edge-stopping" function based on Tukey's biweight robust estimator that preserves sharper boundaries than previous formulations and improves the automatic stopping of the diffusion. The robust statistical interpretation also provides a means for detecting the boundaries (edges) between the piecewise smooth regions in an image that has been smoothed with anisotropic diffusion. Additionally, we derive a relationship between anisotropic diffusion and regularization with line processes. Adding constraints on the spatial organization of the ...
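The edge-stopping idea summarized above can be sketched concretely. Below is a minimal numpy implementation of discrete four-neighbour Perona–Malik-style diffusion with a Tukey-biweight edge-stopping function; the function names, parameter values (`sigma`, `lam`), and the periodic boundary handling are illustrative choices of ours, not details taken from the paper.

```python
import numpy as np

def tukey_g(x, sigma):
    """Tukey biweight edge-stopping function g(x) = psi(x)/x:
    0.5*(1 - (x/sigma)^2)^2 for |x| <= sigma, exactly zero outside."""
    g = 0.5 * (1.0 - (x / sigma) ** 2) ** 2
    return np.where(np.abs(x) <= sigma, g, 0.0)

def anisotropic_diffusion(img, n_iter=20, sigma=0.1, lam=0.2):
    """Discrete 4-neighbour anisotropic diffusion with a robust (Tukey)
    edge-stopping function; np.roll gives periodic boundaries for brevity."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # neighbour differences in the four directions
        n = np.roll(u, -1, axis=0) - u
        s = np.roll(u, 1, axis=0) - u
        e = np.roll(u, -1, axis=1) - u
        w = np.roll(u, 1, axis=1) - u
        u = u + lam * (tukey_g(n, sigma) * n + tukey_g(s, sigma) * s
                       + tukey_g(e, sigma) * e + tukey_g(w, sigma) * w)
    return u
```

Because Tukey's g(x) vanishes for |x| > sigma, diffusion halts completely across strong edges, which is the sharper boundary preservation and automatic stopping the abstract refers to.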
A generalized Gaussian image model for edge-preserving MAP estimation
 IEEE Trans. on Image Processing
, 1993
Abstract

Cited by 238 (34 self)
Abstract—We present a Markov random field model which allows realistic edge modeling while providing stable maximum a posteriori (MAP) solutions. The proposed model, which we refer to as a generalized Gaussian Markov random field (GGMRF), is named for its similarity to the generalized Gaussian distribution used in robust detection and estimation. The model satisfies several desirable analytical and computational properties for MAP estimation, including continuous dependence of the estimate on the data, invariance of the character of solutions to scaling of data, and a solution which lies at the unique global minimum of the a posteriori log-likelihood function. The GGMRF is demonstrated to be useful for image reconstruction in low-dosage transmission tomography.
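The GGMRF prior penalizes neighbour differences as |x_i − x_j|^p with 1 ≤ p ≤ 2. The sketch below does MAP denoising under a 1-D version of that prior by plain gradient descent; it is an illustration of the energy, not the paper's tomographic solver, and the parameter values are assumptions of ours.

```python
import numpy as np

def ggmrf_denoise_1d(y, p=1.5, lam=1.0, step=0.1, n_iter=400):
    """Minimise 0.5*||x - y||^2 + lam * sum_i |x_{i+1} - x_i|^p
    (1 < p <= 2, a 1-D GGMRF-style prior) by gradient descent."""
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)                               # pairwise differences
        g = p * np.sign(d) * np.abs(d) ** (p - 1)    # d/dd of |d|^p
        grad = x - y
        grad[:-1] -= lam * g                         # term |x_{i+1}-x_i|^p
        grad[1:] += lam * g                          # acts on both endpoints
        x = x - step * grad
    return x
```

Choosing p near 1 makes the penalty grow slowly for large differences, which is what lets edges survive; p = 2 recovers the ordinary Gaussian MRF.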
Deterministic edge-preserving regularization in computed imaging
 IEEE Trans. Image Processing
, 1997
Abstract

Cited by 231 (23 self)
Abstract—Many image processing problems are ill posed and must be regularized. Usually, a roughness penalty is imposed on the solution. The difficulty is to avoid the smoothing of edges, which are very important attributes of the image. In this paper, we first give conditions for the design of such an edge-preserving regularization. Under these conditions, we show that it is possible to introduce an auxiliary variable whose role is twofold. First, it marks the discontinuities and ensures their preservation from smoothing. Second, it makes the criterion half-quadratic. The optimization is then easier. We propose a deterministic strategy, based on alternate minimizations on the image and the auxiliary variable. This leads to the definition of an original reconstruction algorithm, called ARTUR. Some theoretical properties of ARTUR are discussed. Experimental results illustrate the behavior of the algorithm. These results are shown in the field of tomography, but this method can be applied in a large number of applications in image processing.
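The alternation the abstract describes can be sketched in 1-D: with the auxiliary variable fixed, the criterion is quadratic and is solved exactly; the auxiliary weights then mark discontinuities. The edge-preserving potential phi(t) = sqrt(1 + t^2) − 1 and all parameter values below are our assumed examples, not necessarily the paper's.

```python
import numpy as np

def artur_denoise_1d(y, lam=1.0, n_iter=30):
    """ARTUR-style half-quadratic alternation for
    min_x 0.5*||x - y||^2 + lam * sum_i phi(x_{i+1} - x_i),
    phi(t) = sqrt(1 + t^2) - 1.  The auxiliary b_i = phi'(d)/(2d)
    is small across a discontinuity, so the edge is not smoothed."""
    n = len(y)
    x = y.astype(float).copy()
    for _ in range(n_iter):
        d = np.diff(x)
        b = 0.5 / np.sqrt(1.0 + d ** 2)   # auxiliary (discontinuity) weights
        # with b fixed the criterion is quadratic: assemble and solve
        A = np.eye(n)
        for i in range(n - 1):
            w = 2.0 * lam * b[i]
            A[i, i] += w
            A[i + 1, i + 1] += w
            A[i, i + 1] -= w
            A[i + 1, i] -= w
        x = np.linalg.solve(A, y)
    return x
```

Each pass is a weighted least-squares solve, so the whole scheme is deterministic, as opposed to the stochastic optimization that non-convex line-process models traditionally required.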
Fields of experts: A framework for learning image priors
 In CVPR
, 2005
Abstract

Cited by 229 (3 self)
We develop a framework for learning generic, expressive image priors that capture the statistics of natural scenes and can be used for a variety of machine vision tasks. The approach extends traditional Markov Random Field (MRF) models by learning potential functions over extended pixel neighborhoods. Field potentials are modeled using a Products-of-Experts framework that exploits nonlinear functions of many linear filter responses. In contrast to previous MRF approaches, all parameters, including the linear filters themselves, are learned from training data. We demonstrate the capabilities of this Field of Experts model with two example applications, image denoising and image inpainting, which are implemented using a simple, approximate inference scheme. While the model is trained on a generic image database and is not tuned toward a specific application, we obtain results that compete with and even outperform specialized techniques.
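A Field-of-Experts prior assigns each image an energy built from nonlinear functions of linear filter responses; lower energy means the image is more plausible under the prior. The sketch below evaluates such an energy with Student-t-style experts. In the paper both the filters J_k and the weights alpha_k are learned from data; the fixed derivative filters used in the example are stand-ins of ours.

```python
import numpy as np

def filter_responses(img, J):
    """'Valid' 2-D correlation with a small filter (numpy only)."""
    h, w = J.shape
    H, W = img.shape[0] - h + 1, img.shape[1] - w + 1
    out = np.zeros((H, W))
    for a in range(h):
        for b in range(w):
            out += J[a, b] * img[a:a + H, b:b + W]
    return out

def foe_energy(img, filters, alphas):
    """Field-of-Experts-style energy with Student-t experts:
    E(x) = sum_k alpha_k * sum_pixels log(1 + 0.5 * (J_k * x)^2)."""
    return sum(a * np.log1p(0.5 * filter_responses(img, J) ** 2).sum()
               for J, a in zip(filters, alphas))
```

A constant image produces zero filter responses and hence minimal energy, while rough images score high, which is what lets the prior drive denoising and inpainting.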
On the Unification of Line Processes, Outlier Rejection, and Robust Statistics with Applications in Early Vision
, 1996
Abstract

Cited by 190 (8 self)
The modeling of spatial discontinuities for problems such as surface recovery, segmentation, image reconstruction, and optical flow has been intensely studied in computer vision. While "line-process" models of discontinuities have received a great deal of attention, there has been recent interest in the use of robust statistical techniques to account for discontinuities. This paper unifies the two approaches. To achieve this we generalize the notion of a "line process" to that of an analog "outlier process" and show how a problem formulated in terms of outlier processes can be viewed in terms of robust statistics. We also characterize a class of robust statistical problems for which an equivalent outlier-process formulation exists and give a straightforward method for converting a robust estimation problem into an outlier-process formulation. We show how prior assumptions about the spatial structure of outliers can be expressed as constraints on the recovered analog outlier processes and how traditional continuation methods can be extended to the explicit outlier-process formulation. These results indicate that the outlier-process approach provides a general framework which subsumes the traditional line-process approaches as well as a wide class of robust estimation problems. Examples in surface reconstruction, image segmentation, and optical flow are presented to illustrate the use of outlier processes and to show how the relationship between outlier processes and robust statistics can be exploited. An appendix provides a catalog of common robust error norms and their equivalent outlier-process formulations.
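The kind of equivalence the abstract's catalog tabulates can be checked numerically: a robust norm rho(x) equals a minimization over an analog outlier process z of z*x^2 plus a penalty psi(z) for declaring an outlier. The sketch below uses the Lorentzian norm with sigma = 1 and a penalty psi we derived for that scaling; the paper's own parametrization of the pair may differ.

```python
import numpy as np

def lorentzian_rho(x):
    """Lorentzian robust error norm, sigma = 1: rho(x) = log(1 + x^2/2)."""
    return np.log1p(0.5 * np.asarray(x, dtype=float) ** 2)

def outlier_process_form(x):
    """Equivalent outlier-process formulation:
    rho(x) = min_{z in (0, 1/2]} [ z*x^2 + psi(z) ],
    psi(z) = 2z - 1 - log(2z).
    The minimising z* = 1/(x^2 + 2) acts as an analog outlier process:
    z* -> 0 as |x| grows, i.e. large residuals are rejected."""
    z = np.linspace(1e-4, 0.5, 20001)                  # brute-force the min
    psi = 2 * z - 1 - np.log(2 * z)
    vals = np.outer(np.atleast_1d(x) ** 2, z) + psi    # z*x^2 + psi(z)
    return vals.min(axis=1)
```

Reading off the minimizing z gives exactly the "line process" value at each residual, which is how the paper converts robust estimation problems into explicit outlier-process form.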
Removing camera shake from a single photograph
 ACM Trans. Graph
, 2006
Abstract

Cited by 188 (13 self)
Camera shake during exposure leads to objectionable image blur and ruins many photographs. Conventional blind deconvolution methods typically assume frequency-domain constraints on images, or overly simplified parametric forms for the motion path during camera shake. Real camera motions can follow convoluted paths, and a spatial-domain prior can better maintain visually salient image characteristics. We introduce a method to remove the effects of camera shake from seriously blurred images. The method assumes a uniform camera blur over the image and negligible in-plane camera rotation. In order to estimate the blur from the camera shake, the user must specify an image region without saturation effects. We show results for a variety of digital photographs taken from personal photo collections.
A Framework for the Robust Estimation of Optical Flow
, 1993
Abstract

Cited by 159 (10 self)
We consider the problem of robustly estimating optical flow from a pair of images using a new framework based on robust estimation which addresses violations of the brightness constancy and spatial smoothness assumptions. We also show the relationship between the robust estimation framework and line-process approaches for coping with spatial discontinuities. In doing so, we generalize the notion of a line process to that of an outlier process that can account for violations in both the brightness and smoothness assumptions. We develop a Graduated Non-Convexity algorithm for recovering optical flow and motion discontinuities and demonstrate the performance of the robust formulation on both synthetic data and natural images.
1 Introduction
Algorithms for recovering optical flow embody a set of assumptions about the world which, by necessity, are simplifications and hence may be violated in practice. For example, the assumption of brightness constancy is violated when moti...
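The effect of a robust data term on brightness-constancy violations can be shown with a toy 1-D problem: estimate a single translational flow u from residuals r_i = Ix_i*u + It_i by iteratively reweighted least squares (IRLS) under a Lorentzian error norm. This is a stand-in of ours for the paper's full spatially varying formulation and GNC optimization; `sigma` and the IRLS scheme are assumptions.

```python
import numpy as np

def robust_flow_1d(Ix, It, sigma=0.5, n_iter=20):
    """Minimise sum_i rho(Ix_i*u + It_i) with the Lorentzian norm
    rho(r) = log(1 + (r/sigma)^2 / 2) via IRLS: the weight
    w = rho'(r)/r = 1/(sigma^2 + r^2/2) down-weights outliers."""
    u = 0.0
    for _ in range(n_iter):
        r = Ix * u + It                          # brightness-constancy residuals
        w = 1.0 / (sigma ** 2 + 0.5 * r ** 2)    # Lorentzian IRLS weights
        u = -(w * Ix * It).sum() / (w * Ix ** 2).sum()
    return u
```

Points that violate brightness constancy receive near-zero weight, so a few gross outliers barely move the estimate, whereas they can dominate a plain least-squares fit.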
Nonlinear Image Recovery with Half-Quadratic Regularization
, 1993
Abstract

Cited by 132 (0 self)
One popular method for the recovery of an ideal intensity image from corrupted or indirect measurements is regularization: minimize an objective function which enforces a roughness penalty in addition to coherence with the data. Linear estimates are relatively easy to compute but generally introduce systematic errors; for example, they are incapable of recovering discontinuities and other important image attributes. In contrast, nonlinear estimates are more accurate but often far less accessible. This is particularly true when the objective function is non-convex and the distribution of each data component depends on many image components through a linear operator with broad support. Our approach is based on an auxiliary array and an extended objective function in which the original variables appear quadratically and the auxiliary variables are decoupled. Minimizing over the auxiliary array alone yields the original function, so the original image estimate can be obtained by joint min...
A Variational Method In Image Recovery
 SIAM J. Numer. Anal
, 1997
Abstract

Cited by 101 (22 self)
This paper is concerned with a classical denoising and deblurring problem in image recovery. Our approach is based on a variational method. By using the Legendre–Fenchel transform, we show how the non-quadratic criterion to be minimized can be split into a sequence of half-quadratic problems that are easier to solve numerically. First we prove an existence and uniqueness result; then we describe the algorithm for computing the solution and give a proof of convergence. Finally, we present some experimental results for synthetic and real images.
A new alternating minimization algorithm for total variation image reconstruction
 SIAM J. IMAGING SCI
, 2008
Abstract

Cited by 97 (16 self)
We propose, analyze, and test an alternating minimization algorithm for recovering images from blurry and noisy observations with total variation (TV) regularization. This algorithm arises from a new half-quadratic model applicable to not only the anisotropic but also the isotropic forms of total variation discretizations. The per-iteration computational complexity of the algorithm is three fast Fourier transforms (FFTs). We establish strong convergence properties for the algorithm, including finite convergence for some variables and relatively fast exponential (or q-linear in optimization terminology) convergence for the others. Furthermore, we propose a continuation scheme to accelerate the practical convergence of the algorithm. Extensive numerical results show that our algorithm performs favorably in comparison to several state-of-the-art algorithms. In particular, it runs orders of magnitude faster than the lagged diffusivity algorithm for total-variation-based deblurring. Some extensions of our algorithm are also discussed.
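The alternating structure the abstract describes can be sketched for the simpler denoising case (no blur kernel), in the spirit of the paper's half-quadratic splitting: a closed-form shrinkage step for the auxiliary variable and an exact FFT solve for the image, assuming periodic boundaries. Parameter values `mu` and `beta` are assumptions of ours.

```python
import numpy as np

def tv_denoise_fft(f, mu=8.0, beta=10.0, n_iter=50):
    """Anisotropic-TV denoising by alternating minimisation:
        min_x sum |D x|_1 + (mu/2) * ||x - f||^2
    w-step: 1-D soft shrinkage;  x-step: exact solve in the Fourier
    domain, since periodic difference operators are diagonalised by the FFT."""
    n1, n2 = f.shape
    # periodic forward-difference kernels (index -1 wraps around)
    dx = np.zeros((n1, n2)); dx[0, 0] = -1.0; dx[0, -1] = 1.0
    dy = np.zeros((n1, n2)); dy[0, 0] = -1.0; dy[-1, 0] = 1.0
    Fdx, Fdy = np.fft.fft2(dx), np.fft.fft2(dy)
    denom = mu + beta * (np.abs(Fdx) ** 2 + np.abs(Fdy) ** 2)
    Ff = np.fft.fft2(f)
    x = f.copy()
    for _ in range(n_iter):
        gx = np.roll(x, -1, axis=1) - x          # D_x x (periodic)
        gy = np.roll(x, -1, axis=0) - x          # D_y x
        # w-step: soft shrinkage, the closed-form solution of the TV subproblem
        wx = np.sign(gx) * np.maximum(np.abs(gx) - 1.0 / beta, 0.0)
        wy = np.sign(gy) * np.maximum(np.abs(gy) - 1.0 / beta, 0.0)
        # x-step: (mu + beta*D'D) x = mu*f + beta*D'w, solved via FFT
        rhs = mu * Ff + beta * (np.conj(Fdx) * np.fft.fft2(wx)
                                + np.conj(Fdy) * np.fft.fft2(wy))
        x = np.real(np.fft.ifft2(rhs / denom))
    return x
```

With a blur kernel the same x-step still costs only a few FFTs, which is the per-iteration complexity the abstract quotes; the continuation scheme it mentions would gradually increase `beta`.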