Results 1–10 of 370
Image registration methods: a survey.
2003
"... Abstract This paper aims to present a review of recent as well as classic image registration methods. Image registration is the process of overlaying images (two or more) of the same scene taken at different times, from different viewpoints, and/or by different sensors. The registration geometrical ..."
Cited by 760 (10 self)
Abstract: This paper aims to present a review of recent as well as classic image registration methods. Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, and/or by different sensors. Registration geometrically aligns two images (the reference and sensed images). The reviewed approaches are classified according to their nature (area-based and feature-based) and according to the four basic steps of the image registration procedure: feature detection, feature matching, mapping function design, and image transformation and resampling. The main contributions, advantages, and drawbacks of the methods are described. Problematic issues of image registration and an outlook for future research are also discussed. The major goal of the paper is to provide a comprehensive reference source for researchers involved in image registration, regardless of particular application area.
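For readers who want to see the four steps concretely, here is a minimal feature-based sketch using OpenCV (assumed available); ORB features, a RANSAC homography, and bilinear warping stand in for the many alternatives the survey covers, and the file names are placeholders:

```python
# Minimal sketch of the four registration steps (not the survey's own
# algorithm): detection, matching, mapping function, and resampling.
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # placeholder files
sen = cv2.imread("sensed.png", cv2.IMREAD_GRAYSCALE)

# 1) Feature detection: ORB keypoints and descriptors in both images.
orb = cv2.ORB_create(nfeatures=2000)
kp_ref, des_ref = orb.detectAndCompute(ref, None)
kp_sen, des_sen = orb.detectAndCompute(sen, None)

# 2) Feature matching: brute-force Hamming matching with cross-check.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_sen, des_ref), key=lambda m: m.distance)

# 3) Mapping function design: robust homography via RANSAC.
src = np.float32([kp_sen[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4) Image transformation and resampling: warp the sensed image onto
# the reference grid (bilinear resampling by default).
registered = cv2.warpPerspective(sen, H, (ref.shape[1], ref.shape[0]))
```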
Sampling—50 years after Shannon
Proceedings of the IEEE, 2000
"... This paper presents an account of the current state of sampling, 50 years after Shannon’s formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the math ..."
Cited by 339 (27 self)
This paper presents an account of the current state of sampling, 50 years after Shannon's formulation of the sampling theorem. The emphasis is on regular sampling, where the grid is uniform. This topic has benefited from a strong research revival during the past few years, thanks in part to the mathematical connections that were made with wavelet theory. To introduce the reader to the modern, Hilbert-space formulation, we reinterpret Shannon's sampling procedure as an orthogonal projection onto the subspace of bandlimited functions. We then extend the standard sampling paradigm to a representation of functions in the more general class of "shift-invariant" function spaces, including splines and wavelets. Practically, this allows for simpler, and possibly more realistic, interpolation models, which can be used in conjunction with a much wider class of (anti-aliasing) prefilters that are not necessarily ideal low-pass. We summarize and discuss the results available for the determination of the approximation error and of the sampling rate when the input of the system is essentially arbitrary, e.g., non-bandlimited. We also review variations of sampling that can be understood from the same unifying perspective. These include wavelets, multiwavelets, Papoulis generalized sampling, finite elements, and frames. Irregular sampling and radial basis functions are briefly mentioned. Keywords: bandlimited functions, Hilbert spaces, interpolation, least-squares approximation, projection operators, sampling.
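As a concrete anchor for the classical result being revisited, here is a small numerical sketch (not from the paper) of Shannon reconstruction from uniform samples; the test signal and sampling rate are arbitrary choices:

```python
# Classical Shannon reconstruction from uniform samples:
# f(t) = sum_k f(kT) * sinc((t - kT) / T).
import numpy as np

T = 0.1                                      # sampling period (rate 10 Hz)
k = np.arange(-50, 51)                       # sample indices
f = lambda t: np.cos(2 * np.pi * 2.0 * t)    # 2 Hz tone, below Nyquist (5 Hz)
samples = f(k * T)

t = np.linspace(-1.0, 1.0, 1000)             # dense evaluation grid
# np.sinc(x) = sin(pi x) / (pi x), so sinc((t - kT)/T) is the ideal kernel.
recon = np.array([np.sum(samples * np.sinc((ti - k * T) / T)) for ti in t])

print(np.max(np.abs(recon - f(t))))          # small away from truncation edges
```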
Extraction of High-Resolution Frames from Video Sequences
IEEE Transactions on Image Processing, 1996
"... The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system which do t ..."
Cited by 264 (8 self)
The human visual system appears to be capable of temporally integrating information in a video sequence in such a way that the perceived spatial resolution of a sequence appears much higher than the spatial resolution of an individual frame. While the mechanisms in the human visual system that do this are unknown, the effect is not too surprising given that temporally adjacent frames in a video sequence contain slightly different, but unique, information. This paper addresses how to utilize both the spatial and temporal information present in a short image sequence to create a single high-resolution video frame. A novel observation model based on motion-compensated subsampling is proposed for a video sequence. Since the reconstruction problem is ill-posed, Bayesian restoration with a discontinuity-preserving prior image model is used to extract a high-resolution video still given a short low-resolution sequence. Estimates computed from a low-resolution image sequence containing a subp...
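A toy version of the kind of forward model described above might look as follows (a sketch under simplifying assumptions: translational motion, a box blur for sensor integration, and Gaussian noise; the paper's observation model is more general):

```python
# Each low-resolution frame is modeled as
# y_k = downsample(blur(shift_k(x))) + noise.
import numpy as np
from scipy.ndimage import uniform_filter, shift as nd_shift

def observe(x, motion, factor=2, noise_sigma=1.0, rng=np.random.default_rng(0)):
    warped = nd_shift(x, motion, order=1, mode="nearest")  # motion compensation
    blurred = uniform_filter(warped, size=factor)          # sensor integration
    y = blurred[::factor, ::factor]                        # subsampling
    return y + rng.normal(0.0, noise_sigma, y.shape)       # observation noise

x = np.random.default_rng(1).random((64, 64)) * 255        # stand-in "scene"
frames = [observe(x, (dy, dx))
          for dy, dx in [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)]]
```

Inverting this model, i.e., recovering x from the frames, is the ill-posed reconstruction problem that the paper regularizes with a discontinuity-preserving prior.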
Interpolation revisited
IEEE Transactions on Medical Imaging, 2000
"... Abstract—Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to the common belief, those that perform best are not interpolating. By opposition to ..."
Cited by 198 (33 self)
Abstract: Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to common belief, those that perform best are not interpolating. In contrast to traditional interpolation, we call their use generalized interpolation; they involve a prefiltering step when correctly applied. We explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims. Index terms: approximation constant, approximation order, B-splines, Fourier error kernel, maximal order and minimal support (MOMS), piecewise polynomials.
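Generalized interpolation with its prefiltering step is directly available in SciPy, which makes for a compact illustration (a sketch, assuming SciPy; the random image is a placeholder):

```python
# The cubic B-spline is not interpolating, so the samples are first
# prefiltered so that the spline model reproduces them exactly.
import numpy as np
from scipy import ndimage

img = np.random.default_rng(0).random((32, 32))

# Prefiltering step: compute B-spline coefficients from the samples.
coeffs = ndimage.spline_filter(img, order=3)

# Resample on a denser grid; prefilter=False because the coefficients
# above already account for the non-interpolating basis.
yy, xx = np.mgrid[0:31:64j, 0:31:64j]
resampled = ndimage.map_coordinates(coeffs, [yy, xx], order=3, prefilter=False)
```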
Secrets of Optical Flow Estimation and Their Principles
2010
"... The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible throu ..."
Cited by 195 (10 self)
The accuracy of optical flow estimation algorithms has been improving steadily as evidenced by results on the Middlebury optical flow benchmark. The typical formulation, however, has changed little since the work of Horn and Schunck. We attempt to uncover what has made recent advances possible through a thorough analysis of how the objective function, the optimization method, and modern implementation practices influence accuracy. We discover that “classical” flow formulations perform surprisingly well when combined with modern optimization and implementation techniques. Moreover, we find that while median filtering of intermediate flow fields during optimization is a key to recent performance gains, it leads to higher energy solutions. To understand the principles behind this phenomenon, we derive a new objective that formalizes the median filtering heuristic. This objective includes a nonlocal term that robustly integrates flow estimates over large spatial neighborhoods. By modifying this new term to include information about flow and image boundaries we develop a method that ranks at the top of the Middlebury benchmark.
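The median-filtering heuristic the paper analyzes can be illustrated in a few lines (a sketch, assuming SciPy; the flow fields here are random stand-ins, and a real estimator would apply this between warping iterations):

```python
# Between optimization iterations, median filter each flow component
# to remove outliers while keeping the estimate piecewise smooth.
import numpy as np
from scipy.ndimage import median_filter

def smooth_intermediate_flow(u, v, size=5):
    """Median filter the horizontal (u) and vertical (v) flow fields."""
    return median_filter(u, size=size), median_filter(v, size=size)

u = np.random.default_rng(0).normal(0, 1, (48, 64))  # stand-in flow fields
v = np.random.default_rng(1).normal(0, 1, (48, 64))
u, v = smooth_intermediate_flow(u, v)  # applied after each warping step
```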
An evaluation of reconstruction filters for volume rendering
Proceedings of IEEE Visualization, 1994
"... To render images from a threedimensional array of sample values, it is necessary to interpolate between the samples. This paper is concerned with interpolation methods that are equivalent to convolving the samples with a reconstruction filter; this covers all commonly used schemes, including tril ..."
Cited by 168 (1 self)
To render images from a three-dimensional array of sample values, it is necessary to interpolate between the samples. This paper is concerned with interpolation methods that are equivalent to convolving the samples with a reconstruction filter; this covers all commonly used schemes, including trilinear and cubic interpolation. We first outline the formal basis of interpolation in three-dimensional signal processing theory. We then propose numerical metrics that can be used to measure filter characteristics that are relevant to the appearance of images generated using that filter. We apply those metrics to several previously used filters and relate the results to isosurface images of the interpolations. We show that the choice of interpolation scheme can have a dramatic effect on image quality, and we discuss the cost/benefit tradeoff inherent in choosing a filter.
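For reference, here is the simplest of the reconstruction filters discussed, trilinear interpolation, written out directly (a sketch; the volume and query point are placeholders, and boundary handling is omitted):

```python
# Trilinear interpolation: in effect, convolving the sample grid with
# a separable tent filter, evaluated at one query point.
import numpy as np

def trilinear(volume, x, y, z):
    x0, y0, z0 = int(np.floor(x)), int(np.floor(y)), int(np.floor(z))
    dx, dy, dz = x - x0, y - y0, z - z0
    c = volume[x0:x0 + 2, y0:y0 + 2, z0:z0 + 2]   # 2x2x2 neighborhood
    # Interpolate along x, then y, then z.
    c = c[0] * (1 - dx) + c[1] * dx
    c = c[0] * (1 - dy) + c[1] * dy
    return c[0] * (1 - dz) + c[1] * dz

vol = np.random.default_rng(0).random((16, 16, 16))
print(trilinear(vol, 7.25, 3.5, 9.75))
```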
Survey: Interpolation Methods in Medical Image Processing
IEEE Transactions on Medical Imaging, 1999
"... Abstract — Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation ker ..."
Cited by 161 (2 self)
Abstract: Image interpolation techniques are often required in medical imaging for image generation (e.g., discrete back projection for the inverse Radon transform) and for processing such as compression or resampling. Since the ideal interpolation function is spatially unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 × 1 up to 8 × 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks taken from common situations in medical image processing. For the local and Fourier analyses, a standardized notation is introduced.
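Two of the compared kernel families are easy to write down explicitly; the sketch below uses the standard textbook definitions of the linear kernel and the Keys cubic with a = -0.5 (these definitions are standard, not copied from the paper):

```python
import numpy as np

def linear_kernel(x):
    x = np.abs(x)
    return np.where(x < 1, 1 - x, 0.0)

def cubic_kernel(x, a=-0.5):
    # Keys cubic convolution kernel, support [-2, 2].
    x = np.abs(x)
    inner = (a + 2) * x**3 - (a + 3) * x**2 + 1            # |x| <= 1
    outer = a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a    # 1 < |x| < 2
    return np.where(x <= 1, inner, np.where(x < 2, outer, 0.0))

def interpolate_1d(samples, t, kernel):
    # Samples live on the integer grid 0..len(samples)-1.
    k = np.arange(len(samples))
    return np.array([np.sum(samples * kernel(ti - k)) for ti in t])

s = np.sin(0.4 * np.arange(16))
t = np.linspace(2.0, 13.0, 5)
print(interpolate_1d(s, t, linear_kernel))
print(interpolate_1d(s, t, cubic_kernel))
```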
Exposing digital forgeries in color filter array interpolated images
IEEE Transactions on Signal Processing, 2005
"... With the advent of lowcost and highresolution digital cameras, and sophisticated photo editing software, digital images can be easily manipulated and altered. Although good forgeries may leave no visual clues of having been tampered with, they may, nevertheless, alter the underlying statistics of ..."
Cited by 149 (12 self)
With the advent of low-cost and high-resolution digital cameras, and sophisticated photo-editing software, digital images can be easily manipulated and altered. Although good forgeries may leave no visual clues of having been tampered with, they may nevertheless alter the underlying statistics of an image. Most digital cameras, for example, employ a single sensor in conjunction with a color filter array (CFA), and then interpolate the missing color samples to obtain a three-channel color image. This interpolation introduces specific correlations which are likely to be destroyed when tampering with an image. We quantify the specific correlations introduced by CFA interpolation and describe how these correlations, or the lack thereof, can be automatically detected in any portion of an image. We show the efficacy of this approach in revealing traces of digital tampering in lossless and lossy compressed color images interpolated with several different CFA algorithms.
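A heavily simplified sketch of the underlying idea (this is not the paper's expectation-maximization algorithm): fit one global linear predictor of each pixel from its neighbors and inspect the residual; in CFA-interpolated content the residual is small and periodic, and its local absence can flag tampering:

```python
import numpy as np

def prediction_residual(green):
    # Predict each interior pixel from its 4-neighborhood.
    up, down = green[:-2, 1:-1], green[2:, 1:-1]
    left, right = green[1:-1, :-2], green[1:-1, 2:]
    center = green[1:-1, 1:-1]
    A = np.stack([up, down, left, right], axis=-1).reshape(-1, 4)
    b = center.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)      # global linear predictor
    return (b - A @ w).reshape(center.shape)       # residual map

green = np.random.default_rng(0).random((64, 64))  # stand-in green channel
res = prediction_residual(green)
# Periodicity of |res| (e.g., peaks in its Fourier spectrum) signals CFA
# interpolation; its local absence can flag tampered regions.
```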
Bayesian Estimation of Motion Vector Fields
IEEE Transactions on Pattern Analysis and Machine Intelligence, 1992
"... This paper presents a new approach to the estimation of twodimensional motion vector fields from timevarying images. The approach is stochastic, both in its formulation and in the solution method. The formulation involves the specification of a deterministic structural model, along with stochastic ..."
Cited by 137 (19 self)
This paper presents a new approach to the estimation of two-dimensional motion vector fields from time-varying images. The approach is stochastic, both in its formulation and in the solution method. The formulation involves the specification of a deterministic structural model, along with stochastic observation and motion field models. Two motion models are proposed: a globally smooth model based on vector Markov random fields, and a piecewise smooth model derived from coupled vector-binary Markov random fields. Two estimation criteria are studied. In Maximum A Posteriori Probability (MAP) estimation, the a posteriori probability of motion given the data is maximized, while in Minimum Expected Cost (MEC) estimation, the expectation of a certain cost function is minimized. The MAP estimation is performed via simulated annealing, while the MEC algorithm performs iteration-wise averaging. Both algorithms generate sample fields by means of stochastic relaxation implemented via the Gibbs sampler.
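A toy version of MAP estimation by simulated annealing in this spirit might look as follows (hedged: a quadratic data term and a globally smooth MRF-style prior stand in for the paper's observation and motion models):

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(0, 1, (16, 16, 2))   # noisy motion observations (stand-in)
field = obs.copy()                    # initialize the estimate at the data
lam = 2.0                             # smoothness weight

def local_energy(f, i, j, v):
    data = np.sum((v - obs[i, j])**2)                 # data fidelity
    smooth = sum(np.sum((v - f[ni, nj])**2)           # neighbor smoothness
                 for ni, nj in [(i-1, j), (i+1, j), (i, j-1), (i, j+1)]
                 if 0 <= ni < f.shape[0] and 0 <= nj < f.shape[1])
    return data + lam * smooth

for sweep in range(50):
    T = 1.0 * 0.95**sweep             # annealing (cooling) schedule
    for i in range(field.shape[0]):
        for j in range(field.shape[1]):
            cand = field[i, j] + rng.normal(0, 0.2, 2)   # proposed vector
            dE = (local_energy(field, i, j, cand)
                  - local_energy(field, i, j, field[i, j]))
            if dE < 0 or rng.random() < np.exp(-dE / T): # Metropolis rule
                field[i, j] = cand
```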
Height and gradient from shading
International Journal of Computer Vision, 1990
"... Abstract: The method described here for recovering the shape of a surface from a shaded image can deal with complex, wrinkled surfaces. Integrability can be enforced easily because both surface height and gradient are represented (A gradient field is integrable if it is the gradient of some surface ..."
Cited by 135 (1 self)
Abstract: The method described here for recovering the shape of a surface from a shaded image can deal with complex, wrinkled surfaces. Integrability can be enforced easily because both surface height and gradient are represented (a gradient field is integrable if it is the gradient of some surface height function). The robustness of the method stems in part from linearization of the reflectance map about the current estimate of the surface orientation at each picture cell (the reflectance map gives the dependence of scene radiance on surface orientation). The new scheme can find an exact solution of a given shape-from-shading problem even though a regularizing term is included. The reason is that the penalty term is needed only to stabilize the iterative scheme when it is far from the correct solution; it can be turned off as the solution is approached. This reflects the fact that shape-from-shading problems are not ill-posed when boundary conditions are available, or when the image contains singular points. This paper includes a review of previous work on shape from shading and photoclinometry. Novel features of the new scheme are introduced one at a time to make it easier to see what each contributes. Included is a discussion of implementation details that are important if exact algebraic solutions of synthetic shape-from-shading problems are to be obtained. The hope is that better performance on synthetic data will lead to better performance on real data.
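The linearization step mentioned above can be written compactly; in the standard notation where (p, q) is the surface gradient and R the reflectance map, the first-order expansion about the current estimate (p_0, q_0) is:

```latex
% Sketch of the linearization described above: the reflectance map
% R(p, q) is expanded about the current gradient estimate (p_0, q_0)
% at each picture cell, turning the image irradiance equation
% E = R(p, q) into a locally linear constraint.
\[
R(p, q) \;\approx\; R(p_0, q_0)
  \;+\; (p - p_0)\,\frac{\partial R}{\partial p}\Big|_{(p_0, q_0)}
  \;+\; (q - q_0)\,\frac{\partial R}{\partial q}\Big|_{(p_0, q_0)}
\]
```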