Results 1 - 7 of 7
Stable signal recovery from incomplete and inaccurate measurements
- Comm. Pure Appl. Math.
, 2006
"... Abstract Suppose we wish to recover a vector x 0 ∈ R m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax 0 + e; A is an n × m matrix with far fewer rows than columns (n m) and e is an error term. Is it possible to recover x 0 accurately based on the data y? To r ..."
Abstract
-
Cited by 1397 (38 self)
- Add to MetaCart
(Show Context)
Abstract: Suppose we wish to recover a vector x0 ∈ R^m (e.g., a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n × m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we consider the solution x♯ to the ℓ1-regularization problem min ‖x‖ℓ1 subject to ‖Ax − y‖ℓ2 ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level, ‖x♯ − x0‖ℓ2 ≤ C · ε. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights into the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
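The program in this abstract (minimize ‖x‖ℓ1 subject to ‖Ax − y‖ℓ2 ≤ ε) can be tried directly with an off-the-shelf convex solver. Below is a minimal sketch assuming the cvxpy package and a synthetic Gaussian sensing matrix; the sizes, sparsity level, and noise are illustrative choices, not values from the paper.

```python
import numpy as np
import cvxpy as cp

# Illustrative sizes: n observations, m unknowns, k nonzeros (assumed values).
n, m, k = 64, 256, 8
rng = np.random.default_rng(0)

A = rng.standard_normal((n, m))
A /= np.linalg.norm(A, axis=0)          # unit-normed columns, as in the paper's setup
x0 = np.zeros(m)
x0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
e = 0.01 * rng.standard_normal(n)       # measurement error
y = A @ x0 + e
eps = np.linalg.norm(e)                 # size of the error term

# l1-regularization problem: minimize ||x||_1 subject to ||A x - y||_2 <= eps
x = cp.Variable(m)
problem = cp.Problem(cp.Minimize(cp.norm1(x)),
                     [cp.norm2(A @ x - y) <= eps])
problem.solve()

print("recovery error:", np.linalg.norm(x.value - x0))
```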
Combinatorial Continuous Maximum Flow
, 2011
"... Maximum flow (and minimumcut) algorithms have had a strong impact on computer vision. In particular, graph cuts algorithms provide a mechanism for the discrete optimization of an energy functional which has been used in a variety of applications such as image segmentation, stereo, image stitching an ..."
Abstract
-
Cited by 6 (2 self)
- Add to MetaCart
(Show Context)
Abstract: Maximum-flow (and minimum-cut) algorithms have had a strong impact on computer vision. In particular, graph-cut algorithms provide a mechanism for the discrete optimization of an energy functional, which has been used in a variety of applications such as image segmentation, stereo, image stitching and texture synthesis. Algorithms based on the classical formulation of max-flow defined on a graph are known to exhibit metrication artefacts in the solution. Therefore, a recent trend has been to instead employ a spatially continuous maximum flow (or the dual min-cut problem) in these same applications to produce solutions with no metrication errors. However, known fast continuous max-flow algorithms have no stopping criteria or have not been proved to converge. In this work, we revisit the continuous max-flow problem and show that the analogous discrete formulation is different from the classical max-flow problem. We then apply an appropriate combinatorial optimization technique to this combinatorial continuous max-flow (CCMF) problem to find a null-divergence solution that exhibits no metrication artefacts and may be solved exactly by a fast, efficient algorithm with provable convergence. Finally, by exhibiting the dual problem of our CCMF formulation, we clarify the fact, already proved by Nozawa in the continuous setting, that the max-flow and the total variation problems are not always equivalent.
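For context, the classical discrete graph-cut formulation that this abstract contrasts with can be exercised with a generic max-flow/min-cut routine. The sketch below uses networkx's minimum_cut on a tiny 1D "image" with made-up unary and pairwise capacities; it illustrates the classical formulation (the one that suffers metrication artefacts on 2D grids), not the paper's CCMF algorithm.

```python
import numpy as np
import networkx as nx

# Tiny 1D "image" with two regions plus noise (made-up data).
img = np.array([0.10, 0.20, 0.15, 0.90, 0.85, 0.95])
lam = 0.5                                    # smoothness weight (assumed)

G = nx.DiGraph()
s, t = "source", "sink"
for i, v in enumerate(img):
    # Terminal (unary) capacities: cost of assigning pixel i to either label.
    G.add_edge(s, i, capacity=float(v))
    G.add_edge(i, t, capacity=float(1.0 - v))
for i in range(len(img) - 1):
    # Pairwise (smoothness) capacities along the grid, in both directions.
    G.add_edge(i, i + 1, capacity=lam)
    G.add_edge(i + 1, i, capacity=lam)

cut_value, (source_side, sink_side) = nx.minimum_cut(G, s, t)
labels = [1 if i in source_side else 0 for i in range(len(img))]
print("min-cut value:", cut_value, "labels:", labels)
```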
Optimal Estimation of Deterioration from Diagnostic Image Sequence
- IEEE Transactions on Signal Processing (submitted May 2007)
, 2007
"... This paper considers estimation of pixel-wise monotonic increasing (or decreasing) data from a time series of noisy blurred images. The motivation comes from estimation of mechanical structure damage that accumulates irreversibly over time. We formulate a Maximum A posteriory Probablity (MAP) estima ..."
Abstract
-
Cited by 5 (4 self)
- Add to MetaCart
(Show Context)
Abstract: This paper considers estimation of pixel-wise monotonically increasing (or decreasing) data from a time series of noisy, blurred images. The motivation comes from estimation of mechanical structure damage that accumulates irreversibly over time. We formulate a Maximum A Posteriori Probability (MAP) estimation problem and find a solution by direct numerical optimization of a log-likelihood index. Spatial continuity of the damage is modeled using a Markov Random Field (MRF). The MRF prior includes the temporal monotonicity constraints. We tune the MRF prior using a spatial-frequency-domain loop-shaping technique to achieve a tradeoff between the noise rejection and signal restoration properties of the estimate. The MAP optimization is a large-scale Quadratic Programming (QP) problem that can have more than a million decision variables and constraints. We describe and implement an efficient interior-point method for solving such optimization problems. The method uses a preconditioned conjugate gradient method to compute the search step. The developed QP solver relies on the special structure of the problem and can solve problems of this size in a few tens of minutes on a PC. The application example in the paper describes structural damage images obtained using a Structural Health Monitoring (SHM) system. The damage signal is distorted by environmental temperature, which varies for each acquired image in the series. The solution for the experimental data is demonstrated to provide an excellent estimate of the damage accumulation trend while rejecting the spatial and temporal noise.
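As a toy illustration of the temporal monotonicity constraints in this MAP/QP formulation, the sketch below fits a monotone sequence to one pixel's noisy measurements with a quadratic data term, using cvxpy as a generic (assumed) solver. The paper's actual problem additionally couples pixels through the MRF spatial prior and is solved with the structured interior-point method described above.

```python
import numpy as np
import cvxpy as cp

# Synthetic noisy damage measurements for a single pixel (made-up data).
T = 30
rng = np.random.default_rng(1)
true = np.cumsum(0.1 * np.abs(rng.standard_normal(T)))   # monotone ground truth
y = true + 0.3 * rng.standard_normal(T)                   # noisy observations

# Quadratic data term with temporal monotonicity constraints x[t+1] >= x[t].
x = cp.Variable(T)
problem = cp.Problem(cp.Minimize(cp.sum_squares(x - y)),
                     [cp.diff(x) >= 0])
problem.solve()

print("estimate is monotone:", bool((np.diff(x.value) >= -1e-9).all()))
```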
Stable Signal Recovery from Incomplete and Inaccurate Measurements
, 2005
"... Suppose we wish to recover a vector x0 ∈ R m (e.g. a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is a n by m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, w ..."
Abstract
- Add to MetaCart
(Show Context)
Abstract: Suppose we wish to recover a vector x0 ∈ R^m (e.g. a digital signal or image) from incomplete and contaminated observations y = Ax0 + e; A is an n-by-m matrix with far fewer rows than columns (n ≪ m) and e is an error term. Is it possible to recover x0 accurately based on the data y? To recover x0, we consider the solution x♯ to the ℓ1-regularization problem min ‖x‖ℓ1 subject to ‖Ax − y‖ℓ2 ≤ ε, where ε is the size of the error term e. We show that if A obeys a uniform uncertainty principle (with unit-normed columns) and if the vector x0 is sufficiently sparse, then the solution is within the noise level: ‖x♯ − x0‖ℓ2 ≤ C · ε. As a first example, suppose that A is a Gaussian random matrix; then stable recovery occurs for almost all such A's provided that the number of nonzeros of x0 is of about the same order as the number of observations. As a second instance, suppose one observes few Fourier samples of x0; then stable recovery occurs for almost any set of n coefficients provided that the number of nonzeros is of the order of n/(log m)^6. In the case where the error term vanishes, the recovery is of course exact, and this work actually provides novel insights on the exact recovery phenomenon discussed in earlier papers. The methodology also explains why one can also very nearly recover approximately sparse signals.
Graph regularization for color . . .
, 2007
"... Nowadays color image processing is an essential issue in computer vision. Variational formulations provide a framework for color image restoration, smoothing and segmentation problems. The solutions of variational models can be obtained by minimizing appropriate energy functions and this minimizatio ..."
Abstract
- Add to MetaCart
Abstract: Color image processing is nowadays an essential part of computer vision. Variational formulations provide a framework for color image restoration, smoothing, and segmentation problems. The solutions of variational models can be obtained by minimizing appropriate energy functions, and this minimization is usually performed by solving continuous partial differential equations (PDEs). The problem is usually treated as a regularization problem, minimizing a data-fitting term plus a smoothness (regularization) term. In this paper, we propose a general discrete regularization framework defined on weighted graphs of arbitrary topologies, which can be seen as a discrete analogue of classical regularization theory. The smoothness term of the regularization uses a discrete definition of the p-Laplace operator. With this formulation, we propose a family of fast and simple anisotropic linear and nonlinear filters that do not involve PDEs. The proposed approach can be used to process color images for restoration, denoising, and segmentation.
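A minimal sketch of the linear (p = 2) member of such a graph-regularization filter family is given below, assuming numpy and a simple chain graph with unit weights; the graph, weights, and data are illustrative, and the p ≠ 2 (nonlinear, anisotropic) filters of the paper require a different update rule.

```python
import numpy as np

def graph_regularize(f, W, lam=1.0, iters=200):
    """Gauss-Jacobi iteration for p = 2 discrete graph regularization:
    minimize  sum_i (x_i - f_i)^2 + (lam / 2) * sum_{i,j} W_ij (x_i - x_j)^2,
    where W is a symmetric, non-negative weight matrix on the graph."""
    x = f.copy()
    deg = W.sum(axis=1)
    for _ in range(iters):
        x = (f + lam * (W @ x)) / (1.0 + lam * deg)
    return x

# Made-up example: a noisy piecewise-constant 1D signal on a chain graph.
n = 50
rng = np.random.default_rng(2)
f = np.where(np.arange(n) < n // 2, 0.2, 0.8) + 0.1 * rng.standard_normal(n)
W = np.zeros((n, n))
idx = np.arange(n - 1)
W[idx, idx + 1] = W[idx + 1, idx] = 1.0        # unit weights between neighbours
smoothed = graph_regularize(f, W, lam=2.0)
print(np.round(smoothed[:5], 3))
```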