Results 1–10 of 35
Extraction of High-Resolution Frames from Video Sequences
IEEE Transactions on Image Processing, 1996
Cited by 261 (8 self)
The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system which do this are unknown, the effect is not too surprising given that temporally adjacent frames in a video sequence contain slightly different, but unique, information. This paper addresses how to utilize both the spatial and temporal information present in a short image sequence to create a single high-resolution video frame. A novel observation model based on motion compensated subsampling is proposed for a video sequence. Since the reconstruction problem is ill-posed, Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence. Estimates computed from a low-resolution image sequence containing a subp...
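A minimal numerical sketch of a motion-compensated subsampling observation model of this kind (the integer-pixel shift motion, the 4x decimation factor, and all names below are toy assumptions for illustration, not the paper's actual model):

```python
import numpy as np

def observe(frame_hr, shift, factor, noise_sigma, rng):
    """One low-resolution observation: motion-compensate (here a toy
    integer-pixel shift), subsample by `factor`, then add sensor noise."""
    moved = np.roll(frame_hr, shift, axis=(0, 1))   # stand-in motion model
    low = moved[::factor, ::factor]                 # subsampling operator
    return low + rng.normal(0.0, noise_sigma, low.shape)

rng = np.random.default_rng(0)
x = rng.random((32, 32))   # the unknown high-resolution still
# Four differently shifted frames: each samples a *different* subset of
# high-resolution pixels, which is what makes reconstruction possible.
frames = [observe(x, (dy, dx), 4, 0.01, rng)
          for dy in range(2) for dx in range(2)]
```

The slight inter-frame motion is exactly the "slightly different, but unique, information" the abstract refers to: inverting this forward model over all frames jointly is what yields a still sharper than any single frame.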
A unified approach to statistical tomography using coordinate descent optimization
IEEE Trans. on Image Processing, 1996
Cited by 139 (27 self)
Over the past ten years there has been considerable interest in statistically optimal reconstruction of image cross-sections from tomographic data. In particular, a variety of such algorithms have been proposed for maximum a posteriori (MAP) reconstruction from emission tomographic data. While MAP estimation requires the solution of an optimization problem, most existing reconstruction algorithms take an indirect approach based on the expectation maximization (EM) algorithm. In this paper we propose a new approach to statistically optimal image reconstruction based on direct optimization of the MAP criterion. The key to this direct optimization approach is greedy pixel-wise computations known as iterative coordinate descent (ICD). We show that the ICD iterations require approximately the same amount of computation per iteration as EM-based approaches, but the new method converges much more rapidly (in our experiments, typically 5 iterations). Other advantages of the ICD method are that it is easily applied to MAP estimation of transmission tomograms, and typical convex constraints, such as positivity, are simply incorporated.
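A sketch of the ICD idea on a simple quadratic MAP criterion (a denoising-style surrogate chosen for brevity, not the paper's tomographic forward model; all names are illustrative):

```python
import numpy as np

def icd_map_denoise(y, lam=1.0, iters=5):
    """Greedy pixel-wise coordinate descent on the quadratic MAP criterion
        sum_i (y_i - x_i)^2 + lam * sum_{i~j} (x_i - x_j)^2   (4-neighbour grid).
    Each update minimizes the criterion exactly in one pixel, holding the
    rest fixed -- the defining property of ICD."""
    x = y.copy()
    h, w = x.shape
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                nbrs = [x[a, b] for a, b in ((i - 1, j), (i + 1, j),
                                             (i, j - 1), (i, j + 1))
                        if 0 <= a < h and 0 <= b < w]
                # Setting the derivative in x[i, j] to zero gives:
                x[i, j] = (y[i, j] + lam * sum(nbrs)) / (1.0 + lam * len(nbrs))
    return x

rng = np.random.default_rng(1)
noisy = rng.random((16, 16))
smooth = icd_map_denoise(noisy, lam=2.0)
```

Because each coordinate update is an exact one-dimensional minimization, the criterion decreases monotonically, which is what drives the fast convergence the abstract reports.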
A Theoretical Framework for Convex Regularizers in PDE-Based Computation of Image Motion
2000
Cited by 99 (25 self)
Many differential methods for the recovery of the optic flow field from an image sequence can be expressed in terms of a variational problem where the optic flow minimizes some energy. Typically, these energy functionals consist of two terms: a data term, which requires e.g. that a brightness constancy assumption holds, and a regularizer that encourages global or piecewise smoothness of the flow field. In this paper we present a systematic classification of rotation invariant convex regularizers by exploring their connection to diffusion filters for multichannel images. This taxonomy provides a unifying framework for data-driven and flow-driven, isotropic and anisotropic, as well as spatial and spatiotemporal regularizers. While some of these techniques are classic methods from the literature, others are derived here for the first time. We prove that all these methods are well-posed: they possess a unique solution that depends in a continuous way on the initial data. An interesting structural relation between isotropic and anisotropic flow-driven regularizers is identified, and a design criterion is proposed for constructing anisotropic flow-driven regularizers in a simple and direct way from isotropic ones. Its use is illustrated by several examples.
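The prototype of such an energy functional, with the classical Horn-Schunck quadratic as the simplest isotropic convex regularizer (the notation here is illustrative, not necessarily the paper's):

```latex
E(u,v) = \int_\Omega
  \underbrace{\left(f_x u + f_y v + f_t\right)^2}_{\text{data term (brightness constancy)}}
  + \alpha \,\underbrace{\left(|\nabla u|^2 + |\nabla v|^2\right)}_{\text{convex regularizer}}
  \; dx\,dy
```

The taxonomy in the paper is obtained by replacing the quadratic regularizer with more general convex functions of the flow gradients, whose Euler-Lagrange equations then correspond to the various diffusion filters.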
Image Sequence Analysis via Partial Differential Equations
1999
Cited by 53 (3 self)
This article deals with the problem of restoring and motion-segmenting noisy image sequences with a static background. Usually, motion segmentation and image restoration are considered separately in image sequence restoration. Moreover, motion segmentation is often noise sensitive. In this article, the motion segmentation and the image restoration parts are performed in a coupled way, allowing the motion segmentation part to positively influence the restoration part and vice versa. This is the key of our approach, which allows us to deal simultaneously with the problems of restoration and motion segmentation. To this end, we propose a theoretically justified optimization problem that takes both requirements into account. Existence and uniqueness are proved in the space of functions of bounded variation. A suitable numerical scheme based on half-quadratic minimization is then proposed and its convergence and stability demonstrated. Experimental results obtaine...
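A one-dimensional sketch of half-quadratic minimization with an edge-preserving potential (the potential, the test signal, and all names are illustrative assumptions; the paper's functional couples restoration with motion segmentation on sequences, not 1-D signals):

```python
import numpy as np

def half_quadratic_smooth(y, alpha=0.5, eps=1e-3, iters=20):
    """Half-quadratic minimization of
        E(x) = sum_i (x_i - y_i)^2 + alpha * sum_i phi(x_{i+1} - x_i),
    with the edge-preserving potential phi(t) = sqrt(t^2 + eps^2).
    Alternates (1) closed-form auxiliary weights b = phi'(t) / (2t) and
    (2) a linear solve (I + alpha * D^T diag(b) D) x = y for the signal."""
    n = len(y)
    x = y.copy()
    for _ in range(iters):
        t = np.diff(x)
        b = 0.5 / np.sqrt(t**2 + eps**2)   # small across edges, large on flats
        A = np.eye(n)
        for i in range(n - 1):
            w = alpha * b[i]
            A[i, i] += w
            A[i + 1, i + 1] += w
            A[i, i + 1] -= w
            A[i + 1, i] -= w
        x = np.linalg.solve(A, y)
    return x

rng = np.random.default_rng(2)
y = np.concatenate([np.zeros(10), np.ones(10)]) + 0.05 * rng.standard_normal(20)
x = half_quadratic_smooth(y, alpha=0.5)
```

With the auxiliary weights fixed, each sub-problem is quadratic and solved exactly, which is the source of the convergence and stability properties the abstract claims for the scheme.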
Super-resolution reconstruction of hyperspectral images
IEEE Trans. on Image Processing, 2005
Cited by 40 (0 self)
Abstract—Hyperspectral images are used for aerial and space imagery applications, including target detection, tracking, agriculture, and natural resource exploration. Unfortunately, atmospheric scattering, secondary illumination, changing viewing angles, and sensor noise degrade the quality of these images. Improving their resolution has a high payoff, but applying super-resolution techniques separately to every spectral band is problematic for two main reasons. First, the number of spectral bands can be in the hundreds, which increases the computational load excessively. Second, considering the bands separately does not make use of the information that is present across them. Furthermore, separate-band super-resolution does not make use of the inherent low dimensionality of the spectral data, which can effectively be used to improve the robustness against noise. In this paper, we introduce a novel super-resolution method for hyperspectral images. An integral part of our work is to model the hyperspectral image acquisition process. We propose a model that enables us to represent the hyperspectral observations from different wavelengths as weighted linear combinations of a small number of basis image planes. Then, a method for applying super-resolution to hyperspectral images using this model is presented. The method fuses information from multiple observations and spectral bands to improve spatial resolution and reconstruct the spectrum of the observed scene as a combination of a small number of spectral basis functions. Index Terms—Hyperspectral, image reconstruction, information fusion, resolution enhancement, spectral, super-resolution.
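A toy sketch of the low-dimensional spectral model: bands expressed as weighted combinations of a few basis image planes, recovered here with a plain SVD (the cube, its sizes, and all names are fabricated for illustration; the paper's acquisition model is considerably richer):

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy hyperspectral cube: 100 bands of 16x16 pixels, each band a noisy
# mixture of only 3 underlying basis image planes.
basis_true = rng.random((3, 16, 16))
mix = rng.random((100, 3))
cube = (np.tensordot(mix, basis_true, axes=1)
        + 0.01 * rng.standard_normal((100, 16, 16)))

# Factor the band-by-pixel matrix with the SVD: a rank-k truncation yields
# k basis image planes plus per-band mixing weights.
flat = cube.reshape(100, -1)
U, s, Vt = np.linalg.svd(flat, full_matrices=False)
k = 3
weights = U[:, :k] * s[:k]                 # per-band coefficients
basis_planes = Vt[:k].reshape(k, 16, 16)   # recovered basis image planes
recon = (weights @ Vt[:k]).reshape(100, 16, 16)
```

Because only k planes (rather than 100 bands) need super-resolving, the computational load drops and the truncation itself suppresses noise, which is the robustness argument made in the abstract.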
Super-resolution still and video reconstruction from MPEG-coded video
IEEE Trans. Circuits Syst. Video Technol., 2002
On robust estimation and smoothing with spatial and tonal kernels
Proc. Dagstuhl Seminar: Geometric Properties from Incomplete Data, 2004
Cited by 32 (8 self)
This paper deals with establishing relations between a number of widely used nonlinear filters for digital image processing. We cover robust statistical estimation with (local) M-estimators, local mode filtering in image or histogram space, bilateral filtering, nonlinear diffusion, and regularisation approaches. Although these methods originate in different mathematical theories, we show that their implementation reveals a highly similar structure. We demonstrate that all these methods can be cast into a unified framework of functional minimisation combining nonlocal data and nonlocal smoothness terms. This unification contributes to a better understanding of the individual methods, and it opens the way to new techniques combining the advantages of known filters. Keywords: image analysis, M-estimators, mode filtering, nonlinear diffusion, bilateral filter, regularisation
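A direct sketch of the bilateral filter written as the spatial-and-tonal weighted average around which such a unification is built (parameters and names are illustrative assumptions):

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    """Bilateral filtering as a spatial-and-tonal weighted average -- i.e.
    one fixed-point step of a local M-estimator, the shared structure the
    paper identifies across these filters."""
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - radius), min(h, i + radius + 1)
            j0, j1 = max(0, j - radius), min(w, j + radius + 1)
            patch = img[i0:i1, j0:j1]
            yy, xx = np.mgrid[i0:i1, j0:j1]
            w_s = np.exp(-((yy - i) ** 2 + (xx - j) ** 2) / (2 * sigma_s**2))
            w_r = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            wts = w_s * w_r   # spatial kernel times tonal kernel
            out[i, j] = (wts * patch).sum() / wts.sum()
    return out

img = np.concatenate([np.zeros((8, 8)), np.ones((8, 8))], axis=1)
img = img + 0.02 * np.random.default_rng(4).standard_normal(img.shape)
filtered = bilateral(img)
```

Iterating this averaging step, rather than applying it once, is what turns the filter into a genuine minimisation of a functional with nonlocal data and smoothness terms.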
Error concealment in MPEG video streams over ATM networks
IEEE Journal on Selected Areas in Communications, 2000
Cited by 17 (0 self)
Abstract—When transmitting compressed video over a data network, one has to deal with how channel errors affect the decoding process. This is particularly a problem with data loss or erasures. In this paper we describe techniques to address this problem in the context of Asynchronous Transfer Mode (ATM) networks. Our techniques can be extended to other types of data networks such as wireless networks. In ATM networks channel errors or congestion cause data to be dropped, which results in the loss of entire macroblocks when MPEG video is transmitted. In order to reconstruct the missing data, the location of these macroblocks must be known. We describe a technique for packing ATM cells with compressed data, whereby the location of missing macroblocks in the encoded video stream can be found. This technique also permits the proper decoding of correctly received macroblocks, and thus prevents the loss of ATM cells from affecting the decoding process. The packing strategy can also be used for wireless or other types of data networks. We also describe spatial and temporal techniques for the recovery of lost macroblocks. In particular, we develop several optimal estimation techniques for the reconstruction of missing macroblocks that contain both spatial and temporal information using a Markov random field model. We further describe a suboptimal estimation technique that can be implemented in real time. Index Terms—ATM, cell loss, cell packing, error concealment, motion vectors, Markov random field, spatial reconstruction, temporal reconstruction. I.
Error Concealment in Encoded Video Streams
2001
Cited by 14 (1 self)
When transmitting compressed video over a data network, one has to deal with how channel errors affect the decoding process. This is particularly problematic with data loss or erasures. In this paper we describe techniques to address this problem in the context of networks where channel errors or congestion can result in the loss of entire macroblocks when MPEG video is transmitted. We describe spatial and temporal techniques for the recovery of lost macroblocks. In particular, we develop estimation techniques for the reconstruction of missing macroblocks using a Markov random field model. We show that the widely used heuristic motion-compensated error concealment technique based on averaging motion vectors is a special case of our estimation technique. We further describe a technique that can be implemented in real time.
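A minimal sketch of the averaged-motion-vector special case mentioned in the abstract (frame contents, the block size, and all names are fabricated for illustration):

```python
import numpy as np

def conceal_macroblock(prev_frame, neighbor_mvs, block_pos, block=8):
    """Replace a lost macroblock with the block of the previous frame
    displaced by the *average* of the neighbouring macroblocks' motion
    vectors -- the widely used heuristic that the paper shows to be a
    special case of its MRF-based estimator."""
    mv = np.mean(neighbor_mvs, axis=0).round().astype(int)  # averaged (dy, dx)
    y, x = block_pos
    sy, sx = y + mv[0], x + mv[1]
    return prev_frame[sy:sy + block, sx:sx + block]

prev = np.arange(32 * 32, dtype=float).reshape(32, 32)  # previous decoded frame
mvs = [(2, 0), (2, 2), (2, 0), (2, 2)]  # motion vectors of four neighbours
patch = conceal_macroblock(prev, mvs, (8, 8))           # concealed 8x8 block
```

The MRF formulation generalizes this by weighing spatial smoothness against the temporal prediction instead of committing to a single averaged vector.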
A Study of a Convex Variational Diffusion Approach for Image Segmentation and Feature Extraction
J. Math. Imaging Vision, 1998
Cited by 13 (2 self)
We analyze a variational approach to image segmentation that is based on a strictly convex non-quadratic cost functional. The smoothness term combines a standard first-order measure for image regions with a total-variation-based measure for signal transitions. Accordingly, the costs associated with "discontinuities" are given by the length of level lines and local image contrast. For real images, this provides a reasonable approximation of the variational model of Mumford and Shah that has been suggested as a generic approach to image segmentation. The global properties of the convex variational model are favorable to applications: uniqueness of the solution, continuous dependence of the solution on both data and parameters, and consistent and efficient numerical approximation of the solution with the FEM method. Various global and local properties of the convex variational model are analyzed and illustrated with numerical examples. Apart from the favorable global properties, the approach ...
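One convex potential with exactly this region/transition behaviour is the Huber function, given here as an illustrative example (the paper's precise smoothness term may differ):

```latex
\psi_\varepsilon(s) =
\begin{cases}
  \dfrac{s^2}{2\varepsilon}, & |s| \le \varepsilon
    \quad \text{(quadratic: first-order smoothing inside regions)} \\[2mm]
  |s| - \dfrac{\varepsilon}{2}, & |s| > \varepsilon
    \quad \text{(linear: total-variation behaviour at transitions)}
\end{cases}
```

Strict convexity of the resulting functional is what delivers the uniqueness and continuous-dependence properties listed in the abstract.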