Splines: A Perfect Fit for Signal/Image Processing
 IEEE Signal Processing Magazine
, 1999
Unified segmentation
, 2005
Abstract

Cited by 324 (12 self)
A probabilistic framework is presented that enables image registration, tissue classification, and bias correction to be combined within the same generative model. A derivation of a log-likelihood objective function for the unified model is provided. The model is based on a mixture of Gaussians and is extended to incorporate a smooth intensity variation and nonlinear registration with tissue probability maps. A strategy for optimising the model parameters is described, along with the requisite partial derivatives of the objective function.
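As a minimal illustration of the mixture-of-Gaussians core of such a model (the bias field, spatial priors, and registration terms are omitted; all names and data below are ours, not the paper's), a plain EM fit on 1-D intensities can be sketched as:

```python
import numpy as np

def gmm_em(x, k=2, iters=50):
    """EM for a 1-D Gaussian mixture: the classification core of the
    unified model, without the bias-field and registration terms."""
    # Deterministic initialisation: spread the means over the data range.
    mu = np.linspace(x.min(), x.max(), k)
    var = np.full(k, x.var())
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities r[n, j] proportional to w_j * N(x_n | mu_j, var_j)
        d = x[:, None] - mu[None, :]
        r = w * np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances
        n_j = r.sum(axis=0)
        w = n_j / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n_j
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / n_j
    return w, mu, var

# Two well-separated "tissue" intensity clusters
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(10.0, 1.0, 500)])
w, mu, var = gmm_em(x)   # mu converges near the true cluster means
```

The paper's unified model additionally multiplies the class likelihoods by warped tissue probability maps and a smooth multiplicative bias field, and optimises all parameters against the single log-likelihood objective.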
A Pyramid Approach to Subpixel Registration Based on Intensity
, 1998
Abstract

Cited by 237 (18 self)
We present an automatic subpixel registration algorithm that minimizes the mean square intensity difference between a reference and a test data set, which can be either images (2D) or volumes (3D). It uses an explicit spline representation of the images in conjunction with spline processing, and is based on a coarse-to-fine iterative strategy (pyramid approach). The minimization is performed according to a new variation (ML*) of the Marquardt-Levenberg algorithm for nonlinear least-squares optimization. The geometric deformation model is a global 3D affine transformation that can be optionally restricted to rigid-body motion (rotation and translation), combined with isometric scaling. It also includes an optional adjustment of image contrast differences. We obtain excellent results for the registration of intramodality Positron Emission Tomography (PET) and functional Magnetic Resonance Imaging (fMRI) data. We conclude that the multiresolution refinement strategy is more robust than a comparable single-stage method, being less likely to be trapped into a false local optimum. In addition, our improved version of the Marquardt-Levenberg algorithm is faster.
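A toy sketch of the coarse-to-fine strategy: integer translations only, a block-average pyramid instead of the paper's spline pyramid, and exhaustive local search instead of Marquardt-Levenberg steps (all function names and parameters here are illustrative, not from the paper):

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (a crude pyramid level)."""
    h, w = img.shape
    return img[:h // 2 * 2, :w // 2 * 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def register_translation(ref, test, levels=3):
    """Return the integer shift (dy, dx) such that rolling `test` by it
    minimizes the mean square intensity difference against `ref`."""
    pyramid = [(ref, test)]
    for _ in range(levels - 1):
        r, t = pyramid[-1]
        pyramid.append((downsample(r), downsample(t)))
    dy = dx = 0
    for r, t in reversed(pyramid):              # coarsest level first
        dy, dx = 2 * dy, 2 * dx                 # propagate to the finer grid
        best = None
        for ey in range(dy - 2, dy + 3):        # small local refinement window
            for ex in range(dx - 2, dx + 3):
                err = np.mean((np.roll(t, (ey, ex), axis=(0, 1)) - r) ** 2)
                if best is None or err < best[0]:
                    best = (err, ey, ex)
        _, dy, dx = best
    return dy, dx
```

As with the continuous-domain method, the coarsest level must bring the (downscaled) displacement inside the refinement basin; each finer level then only polishes the estimate, which is why the multiresolution scheme avoids false local optima that trap a single-stage search.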
Interpolation revisited
 IEEE Transactions on Medical Imaging
, 2000
Abstract

Cited by 198 (33 self)
Abstract—Based on the theory of approximation, this paper presents a unified analysis of interpolation and resampling techniques. An important issue is the choice of adequate basis functions. We show that, contrary to the common belief, those that perform best are not interpolating. By opposition to traditional interpolation, we call their use generalized interpolation; they involve a prefiltering step when correctly applied. We explain why the approximation order inherent in any basis function is important to limit interpolation artifacts. The decomposition theorem states that any basis function endowed with approximation order can be expressed as the convolution of a B-spline of the same order with another function that has none. This motivates the use of splines and spline-based functions as a tunable way to keep artifacts in check without any significant cost penalty. We discuss implementation and performance issues, and we provide experimental evidence to support our claims. Index Terms—Approximation constant, approximation order, B-splines, Fourier error kernel, maximal order and minimal support (MOMS), piecewise polynomials.
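The prefiltering step of generalized interpolation can be observed directly with SciPy's spline routines, which implement this scheme (a sketch; it assumes `scipy.ndimage` is available and uses its `spline_filter`/`map_coordinates` functions):

```python
import numpy as np
from scipy import ndimage

# A cubic B-spline is not interpolating: weighting the raw samples by the
# B-spline kernel smooths them.  Generalized interpolation first applies a
# recursive prefilter so that the spline model passes through the data.
rng = np.random.default_rng(0)
samples = rng.random((8, 8))

coeffs = ndimage.spline_filter(samples, order=3, mode='mirror')  # prefiltering

grid = np.indices(samples.shape).astype(float)
# Evaluating the spline built on the prefiltered coefficients at the sample
# locations recovers the data (interpolation condition satisfied) ...
exact = ndimage.map_coordinates(coeffs, grid, order=3,
                                prefilter=False, mode='mirror')
# ... while evaluating it directly on the raw samples does not.
smoothed = ndimage.map_coordinates(samples, grid, order=3,
                                   prefilter=False, mode='mirror')

assert np.allclose(exact, samples)
assert not np.allclose(smoothed, samples)
```

Passing `prefilter=True` (the default) to `map_coordinates` performs the same two steps internally; the point of the paper is that this small extra filtering cost is what lets non-interpolating bases outperform interpolating ones.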
Survey: Interpolation Methods in Medical Image Processing
 IEEE Transactions on Medical Imaging
, 1999
Abstract

Cited by 161 (2 self)
Abstract — Image interpolation techniques often are required in medical imaging for image generation (e.g., discrete back projection for inverse Radon transform) and processing such as compression or resampling. Since the ideal interpolation function spatially is unlimited, several interpolation kernels of finite size have been introduced. This paper compares 1) truncated and windowed sinc; 2) nearest neighbor; 3) linear; 4) quadratic; 5) cubic B-spline; 6) cubic; 7) Lagrange; and 8) Gaussian interpolation and approximation techniques with kernel sizes from 1 × 1 up to 8 × 8. The comparison is done by: 1) spatial and Fourier analyses; 2) computational complexity as well as runtime evaluations; and 3) qualitative and quantitative interpolation error determinations for particular interpolation tasks which were taken from common situations in medical image processing. For local and Fourier analyses, a standardized notation is introduced.
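Three of the surveyed finite-size kernels are easy to write down and compare (the kernel definitions are the standard ones; the helper `interp1d` and the sample data are ours, for illustration only):

```python
import math

def nearest(x):
    """Nearest-neighbor kernel: a square pulse of width 1."""
    return 1.0 if abs(x) <= 0.5 else 0.0

def linear(x):
    """Linear kernel: the 'hat' function, support 2."""
    return max(0.0, 1.0 - abs(x))

def keys_cubic(x, a=-0.5):
    """Keys' cubic convolution kernel, support 4."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1
    if x < 2:
        return a * (x**3 - 5 * x**2 + 8 * x - 4)
    return 0.0

def interp1d(samples, t, kernel, half):
    """Convolution-based interpolation: f(t) = sum_k s[k] * kernel(t - k)."""
    k0 = math.floor(t)
    return sum(samples[k] * kernel(t - k)
               for k in range(max(0, k0 - half), min(len(samples), k0 + half + 2)))

s = [0.0, 1.0, 4.0, 9.0, 16.0]      # samples of t**2
interp1d(s, 1.5, linear, 1)         # 2.5: chord between the two neighbors
interp1d(s, 1.5, keys_cubic, 2)     # 2.25: exact, the cubic reproduces quadratics
```

The last two lines illustrate the role of approximation order evaluated by the survey: the linear kernel (order 2) misses the quadratic signal, while Keys' cubic with a = -1/2 (order 3) reproduces it exactly.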
Adaptive fuzzy segmentation of magnetic resonance images
 IEEE Transactions on Medical Imaging
, 1999
Abstract

Cited by 158 (10 self)
An algorithm is presented for the fuzzy segmentation of two-dimensional (2D) and three-dimensional (3D) multispectral magnetic resonance (MR) images that have been corrupted by intensity inhomogeneities, also known as shading artifacts. The algorithm is an extension of the 2D adaptive fuzzy C-means algorithm (2D AFCM) presented in previous work by the authors. This algorithm models the intensity inhomogeneities as a gain field that causes image intensities to smoothly and slowly vary through the image space. It iteratively adapts to the intensity inhomogeneities and is completely automated. In this paper, we fully generalize 2D AFCM to three-dimensional (3D) multispectral images. Because of the potential size of 3D image data, we also describe a new faster multigrid-based algorithm for its implementation. We show, using simulated MR data, that 3D AFCM yields lower error rates than both the standard fuzzy C-means (FCM) algorithm and two other competing methods, when segmenting corrupted images. Its efficacy is further demonstrated using real 3D scalar and multispectral MR brain images.
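The non-adaptive baseline that AFCM extends is standard fuzzy C-means, which alternates membership and centroid updates (a sketch on 1-D intensities; the gain-field estimation that makes the algorithm "adaptive" is not shown, and the names are ours):

```python
import numpy as np

def fcm(x, c=2, m=2.0, iters=100):
    """Standard fuzzy C-means on 1-D data.  Alternates the closed-form
    membership update u[i, j] = 1 / sum_k (d_ij / d_ik)^(2/(m-1)) with the
    weighted-centroid update v[j] = sum_i u^m x / sum_i u^m."""
    v = np.linspace(x.min(), x.max(), c)              # initial centroids
    p = 1.0 / (m - 1.0)
    for _ in range(iters):
        d2 = (x[:, None] - v[None, :]) ** 2 + 1e-12   # squared distances
        u = 1.0 / (d2**p * (d2**-p).sum(axis=1, keepdims=True))
        v = (u**m * x[:, None]).sum(axis=0) / (u**m).sum(axis=0)
    return u, v
```

AFCM replaces the distance `(x_i - v_j)**2` with `(x_i - g_i * v_j)**2`, where the gain field `g` is constrained to vary slowly, which is what lets it absorb the shading artifact instead of misclassifying shaded tissue.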
Optimization of Mutual Information for Multiresolution Image Registration
 IEEE Transactions on Image Processing
, 2000
Abstract

Cited by 146 (6 self)
We propose a new method for the intermodal registration of images using a criterion known as mutual information. Our main contribution is an optimizer that we specifically designed for this criterion. We show that this new optimizer is well adapted to a multiresolution approach because it typically converges in fewer criterion evaluations than other optimizers. We have built a multiresolution image pyramid, along with an interpolation process, an optimizer, and the criterion itself, around the unifying concept of spline processing. This ensures coherence in the way we model data and yields good performance. We have tested our approach in a variety of experimental conditions and report excellent results. We claim an accuracy of about a hundredth of a pixel under ideal conditions. We are also robust since the accuracy is still about a tenth of a pixel under very noisy conditions. In addition, a blind evaluation of our results compares very favorably to the work of several other researchers.
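The criterion itself is cheap to compute from a joint intensity histogram (a sketch; the paper estimates the joint density with spline Parzen windows rather than the hard binning used here, and `mutual_information` is our name):

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two images, estimated from a binned joint
    histogram: MI = sum p(i,j) * log( p(i,j) / (p(i) * p(j)) )."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)       # marginal of a
    py = p.sum(axis=0, keepdims=True)       # marginal of b
    nz = p > 0                              # skip empty cells: avoids log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

Mutual information is large when one image predicts the other (perfect alignment) and near zero for independent images, which is why it works across modalities where intensity values themselves do not match; the optimizer in the paper climbs this criterion through the pyramid levels.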
A chronology of interpolation: From ancient astronomy to modern signal and image processing
 Proceedings of the IEEE
, 2002
Abstract

Cited by 105 (0 self)
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords—Approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. “It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it.”
Quantitative Fourier Analysis of Approximation Techniques: Part II, Wavelets
 IEEE Transactions on Signal Processing
, 1999
Abstract

Cited by 100 (38 self)
In a previous paper, we proposed a general Fourier method that provides an accurate prediction of the approximation error, irrespective of the scaling properties of the approximating functions. Here, we apply our results when these functions satisfy the usual two-scale relation encountered in dyadic multiresolution analysis. As a consequence of this additional constraint, the quantities introduced in our previous paper can be computed explicitly as a function of the refinement filter. This is, in particular, true for the asymptotic expansion of the approximation error for biorthonormal wavelets as the scale tends to zero. One of the contributions of this paper is the computation of sharp, asymptotically optimal upper bounds for the least-squares approximation error. Another contribution is the application of these results to B-splines and Daubechies scaling functions, which yields explicit asymptotic developments and upper bounds. Thanks to these explicit expressions, we can quantify ...
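For reference, the least-squares error kernel the abstract alludes to has a standard closed form (our notation, not quoted from the paper: $\varphi$ is the approximating function, $\hat{\varphi}$ its Fourier transform, $T$ the sampling step):

```latex
% Average least-squares approximation error at sampling step T,
% predicted from the Fourier error kernel E(\omega):
\eta(f, T) = \left( \frac{1}{2\pi} \int_{-\infty}^{\infty}
    |\hat{f}(\omega)|^{2} \, E(T\omega) \, d\omega \right)^{1/2},
\qquad
E(\omega) = 1 - \frac{|\hat{\varphi}(\omega)|^{2}}
    {\sum_{n \in \mathbb{Z}} |\hat{\varphi}(\omega + 2\pi n)|^{2}} .
```

The paper's contribution is to evaluate these quantities, and the asymptotic expansion of $E$ near $\omega = 0$, explicitly in terms of the refinement filter of the scaling function.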
Image Interpolation and Resampling
 Handbook of Medical Imaging, Processing and Analysis
, 2000
Abstract

Cited by 81 (11 self)
This chapter presents a survey of interpolation and resampling techniques in the context of exact, separable interpolation of regularly sampled data. In this context, the traditional view of interpolation is to represent an arbitrary continuous function as a discrete sum of weighted and shifted synthesis functions—in other words, a mixed convolution equation. An important issue is the choice of adequate synthesis functions that satisfy interpolation properties. Examples of finite-support ones are the square pulse (nearest-neighbor interpolation), the hat function (linear interpolation), the cubic Keys' function, and various truncated or windowed versions of the sinc function. On the other hand, splines provide examples of infinite-support interpolation functions that can be realized exactly at a finite, surprisingly small computational cost. We discuss implementation issues and illustrate the performance of each synthesis function. We also highlight several artifacts that may arise when performing interpolation, such as ringing, aliasing, blocking and blurring. We explain why the approximation order inherent in the synthesis function is important to limit these interpolation artifacts, which motivates the use of splines as a tunable way to keep them in check without any significant cost penalty.
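The mixed convolution equation can be made concrete with the two simplest synthesis functions named above, which also exhibit the blocking-versus-blurring behaviour the chapter discusses (helper names and data are ours, for illustration):

```python
def square(x):
    """Square pulse: the nearest-neighbor synthesis function."""
    return 1.0 if -0.5 <= x < 0.5 else 0.0

def hat(x):
    """Hat function: the linear-interpolation synthesis function."""
    return max(0.0, 1.0 - abs(x))

def resample(samples, t, phi):
    """Mixed convolution equation f(t) = sum_k c_k * phi(t - k).  For these
    two synthesis functions the coefficients c_k are the samples themselves;
    splines would require a prefilter to obtain the c_k."""
    return sum(s_k * phi(t - k) for k, s_k in enumerate(samples))

s = [0.0, 2.0, 1.0]
resample(s, 0.75, square)   # 2.0: jumps to the nearest sample (blocking)
resample(s, 0.75, hat)      # 1.5: straight line between samples
```

Both kernels interpolate the samples exactly; they differ only in how they fill in between, which is precisely the artifact trade-off (blocking vs. blurring) that higher-order synthesis functions such as splines improve on.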