Results 1–10 of 266
A nonlocal algorithm for image denoising
 In CVPR
, 2005
Abstract

Cited by 433 (12 self)
We propose a new measure, the method noise, to evaluate and compare the performance of digital image denoising methods. We first compute and analyze this method noise for a wide class of denoising algorithms, namely the local smoothing filters. Second, we propose a new algorithm, the non-local means (NL-means), based on a non-local averaging of all pixels in the image. Finally, we present some experiments comparing the NL-means algorithm and the local smoothing filters.
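The non-local averaging described above can be sketched in a few lines of NumPy. This is a naive quadratic-time illustration, not the authors' implementation; the patch size and the filtering parameter `h` are illustrative choices:

```python
import numpy as np

def nl_means(img, patch=3, h=0.1):
    """Naive O(n^2) non-local means on a 2-D float image in [0, 1].

    Each pixel is replaced by a weighted average of *all* pixels,
    weighted by the Gaussian similarity of their surrounding patches.
    `patch` is the odd patch side length; `h` controls the weight decay.
    """
    r = patch // 2
    padded = np.pad(img, r, mode="reflect")
    # Stack every patch as a flat feature vector, one row per pixel.
    feats = np.stack([
        padded[i:i + patch, j:j + patch].ravel()
        for i in range(img.shape[0])
        for j in range(img.shape[1])
    ])
    # Pairwise mean squared patch distances -> similarity weights.
    d2 = ((feats[:, None, :] - feats[None, :, :]) ** 2).mean(axis=2)
    w = np.exp(-d2 / (h * h))
    w /= w.sum(axis=1, keepdims=True)
    return (w @ img.ravel()).reshape(img.shape)
```

On a constant image all weights are equal and the image is returned unchanged; on a noisy image the averaging over similar patches reduces the noise.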
Evaluation of Interest Point Detectors
, 2000
Abstract

Cited by 409 (8 self)
Many different low-level feature detectors exist and it is widely agreed that the evaluation of detectors is important. In this paper we introduce two evaluation criteria for interest points: repeatability rate and information content. Repeatability rate evaluates the geometric stability under different transformations. Information content measures the distinctiveness of features. Different interest point detectors are compared using these two criteria. We determine which detector gives the best results and show that it satisfies the criteria well.
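The repeatability rate can be illustrated directly from its definition: the fraction of points detected in one image that reappear, within a small tolerance, among the points detected in a second image once those points are mapped back into the first image's frame. A minimal sketch (the `eps` tolerance is an illustrative choice):

```python
import numpy as np

def repeatability_rate(pts_a, pts_b_in_a, eps=1.5):
    """Fraction of interest points detected in image A that reappear
    within `eps` pixels of a point detected in image B, after B's points
    have been mapped into A's frame using the known scene geometry
    (e.g. a homography). Normalized by the smaller detection count."""
    repeated = sum(
        1 for (ya, xa) in pts_a
        if any(np.hypot(ya - yb, xa - xb) <= eps for (yb, xb) in pts_b_in_a)
    )
    return repeated / min(len(pts_a), len(pts_b_in_a))
```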
Machine learning for highspeed corner detection
 In European Conference on Computer Vision
, 2006
Abstract

Cited by 337 (4 self)
Where feature points are used in real-time frame-rate applications, a high-speed feature detector is necessary. Feature detectors such as SIFT (DoG), Harris and SUSAN are good methods which yield high-quality features; however, they are too computationally intensive for use in real-time applications of any complexity. Here we show that machine learning can be used to derive a feature detector which can fully process live PAL video using less than 7% of the available processing time. By comparison, neither the Harris detector (120%) nor the detection stage of SIFT (300%) can operate at full frame rate.
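The criterion the learned detector reproduces is the FAST segment test: a pixel is a corner if enough contiguous pixels on a radius-3 Bresenham circle around it are all brighter, or all darker, than the center by a threshold. A brute-force sketch of that test (the values `t=20` and `n=12` are illustrative, not taken from the paper):

```python
import numpy as np

# Offsets of the 16-pixel Bresenham circle of radius 3 used by FAST.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def segment_test(img, y, x, t=20, n=12):
    """Plain segment test: (y, x) is a corner if at least `n` contiguous
    circle pixels are all brighter than center + t, or all darker than
    center - t. The machine-learned detector approximates this test
    with far fewer pixel comparisons."""
    c = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dy, dx in CIRCLE]
    for sign in (1, -1):                       # brighter, then darker
        flags = [sign * (p - c) > t for p in ring]
        run = 0
        for f in flags + flags:                # walk twice for wrap-around
            run = run + 1 if f else 0
            if run >= n:
                return True
    return False
```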
Probabilistic Independent Component Analysis
, 2003
Abstract

Cited by 205 (14 self)
Independent Component Analysis is becoming a popular exploratory method for analysing complex data such as that from FMRI experiments. The application of such 'model-free' methods, however, has been somewhat restricted both by the view that results can be uninterpretable and by the lack of ability to quantify statistical significance. We present an integrated approach to Probabilistic ICA for FMRI data that allows for non-square mixing in the presence of Gaussian noise. We employ an objective estimation of the amount of Gaussian noise through Bayesian analysis of the true dimensionality of the data, i.e. the number of activation and non-Gaussian noise sources. Reduction of the data to this 'true' subspace before the ICA decomposition automatically results in an estimate of the noise, leading to the ability to assign significance to voxels in ICA spatial maps. Estimation of the number of intrinsic sources not only enables us to carry out probabilistic modelling, but also achieves an asymptotically unique decomposition of the data. This reduces problems of interpretation, as each final independent component is now much more likely to be due to only one physical or physiological process. We also describe other improvements to standard ICA, such as temporal prewhitening and variance normalisation of time series, the latter being particularly useful in the context of dimensionality reduction when weak activation is present. We discuss the use of prior information about the spatiotemporal nature of the source processes, and an alternative-hypothesis testing approach for inference, using Gaussian mixture models. The performance of our approach is illustrated and evaluated on real and complex artificial FMRI data, and compared to the spatiotemporal accuracy of results obtaine...
Temporal autocorrelation in univariate linear modelling of fMRI data
, 2000
Abstract

Cited by 197 (10 self)
In functional magnetic resonance imaging statistical analysis there are problems with accounting for temporal autocorrelations when assessing change within voxels. Techniques to date have utilized temporal filtering strategies to either shape these autocorrelations or remove them. Shaping, or “coloring,” attempts to negate the effects of not accurately knowing the intrinsic autocorrelations by imposing known autocorrelation via temporal filtering. Removing the autocorrelation, or “prewhitening,” gives the best linear unbiased estimator, assuming that the autocorrelation is accurately known. For single-event designs, the efficiency of the estimator is considerably higher for prewhitening compared with coloring. However, it has been suggested that sufficiently accurate estimates of the autocorrelation are currently not available to give prewhitening acceptable bias. To overcome this, we consider different ways to estimate the autocorrelation for use in prewhitening. After high-pass filtering is performed, a Tukey taper (set to smooth the spectral density more than would normally be used in spectral density estimation) performs best. Importantly, estimation is further improved by using nonlinear spatial filtering to smooth the estimated autocorrelation, but only within tissue type. Using this approach when prewhitening reduced bias to close to zero at probability levels as low as 1 × 10^-5. © 2001 Academic Press Key Words: FMRI analysis; GLM; temporal filtering; temporal autocorrelation; spatial filtering; single-event; autoregressive model; spectral density estimation; multitapering.
Fusing Points and Lines for High Performance Tracking
 IN INTERNATIONAL CONFERENCE ON COMPUTER VISION
, 2005
Abstract

Cited by 147 (5 self)
This paper addresses the problem of real-time 3D model-based tracking by combining point-based and edge-based tracking systems. We present a careful analysis of the properties of these two sensor systems and show that this leads to some non-trivial design choices that collectively yield extremely high performance. In particular, we present a method for integrating the two systems and robustly combining the pose estimates they produce. Further we show how online learning can be used to improve the performance of feature tracking. Finally, to aid real-time performance, we introduce the FAST feature detector which can perform full-frame feature detection at 400 Hz. The combination of these techniques results in a system which is capable of tracking with average prediction errors of 200 pixels. This level of robustness allows us to track very rapid motions, such as 50° camera shake at 6 Hz.
Flash Photography Enhancement via Intrinsic Relighting
 ACM Trans. Graphics
, 2004
Abstract

Cited by 145 (6 self)
Figure 1: (a) Top: Photograph taken in a dark environment; the image is noisy and/or blurry. Bottom: Flash photography provides a sharp but flat image with distracting shadows at the silhouettes of objects. (b) Inset showing the noise of the available-light image. (c) Our technique merges the two images to transfer the ambiance of the available lighting. Note the shadow of the candle on the table. Our technique enhances photographs shot in dark environments by combining a picture taken with the available light and one taken with the flash. We preserve the ambiance of the original lighting and insert the sharpness. We use the bilateral filter to decompose the images into detail and large scale. We reconstruct the image using the large scale of the available lighting and the detail of the flash. We detect and correct flash shadows. Our output combines the advantages of available illumination and flash photography.
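The large-scale/detail decomposition can be sketched in one dimension: the bilateral filter gives the large-scale layer, the ratio of an image to its large scale gives the detail layer, and the output pairs the ambient image's large scale with the flash image's detail. This is a 1-D sketch under simplified assumptions (linear intensities, no shadow correction), not the paper's pipeline:

```python
import numpy as np

def bilateral_1d(s, sigma_s=3.0, sigma_r=0.1):
    """Brute-force bilateral filter on a 1-D signal in [0, 1]:
    a Gaussian blur whose weights also fall off with intensity
    difference, so edges are preserved."""
    idx = np.arange(len(s))
    w = (np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * sigma_s ** 2))
         * np.exp(-((s[:, None] - s[None, :]) ** 2) / (2 * sigma_r ** 2)))
    return (w * s[None, :]).sum(axis=1) / w.sum(axis=1)

def merge_flash_ambient(ambient, flash, eps=1e-3):
    """Large scale from the ambient signal, detail from the flash signal:
    detail = flash / bilateral(flash); output = bilateral(ambient) * detail.
    """
    large_ambient = bilateral_1d(ambient)
    detail_flash = (flash + eps) / (bilateral_1d(flash) + eps)
    return large_ambient * detail_flash
```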
Nonlocal image and movie denoising
 International Journal of Computer Vision
, 2008
Abstract

Cited by 99 (2 self)
Neighborhood filters are non-local image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrary to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be
Fast highdimensional filtering using the permutohedral lattice
 Computer Graphics Forum (EG 2010 Proceedings)
Abstract

Cited by 78 (5 self)
Many useful algorithms for processing images and geometry fall under the general framework of high-dimensional Gaussian filtering. This family of algorithms includes bilateral filtering and non-local means. We propose a new way to perform such filters using the permutohedral lattice, which tessellates high-dimensional space with uniform simplices. Our algorithm is the first implementation of a high-dimensional Gaussian filter that is both linear in input size and polynomial in dimensionality. Furthermore it is parameter-free, apart from the filter size, and achieves a consistently high accuracy relative to ground truth (> 45 dB). We use this to demonstrate a number of interactive-rate applications of filters in as high as eight dimensions.
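The operation the permutohedral lattice accelerates is easy to state as a brute-force reference: every output value is a Gaussian-weighted average of all input values, with weights determined by distances between d-dimensional position vectors. This reference implementation is O(n^2) and is what the lattice approximates at linear cost; it is not the paper's algorithm:

```python
import numpy as np

def gaussian_filter_nd(values, positions, sigma=1.0):
    """Brute-force high-dimensional Gaussian filter.

    `values` has shape (n,); `positions` has shape (n, d). Positions
    (x, y, r, g, b) yield a bilateral filter; stacked patch vectors
    yield non-local means. Each output is the Gaussian-weighted
    average of all values, weighted by position distance.
    """
    d2 = ((positions[:, None, :] - positions[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ values) / w.sum(axis=1)
```

Values attached to nearby positions are averaged together, while values attached to distant positions barely interact.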
Nonlocal Regularization of Inverse Problems
, 2008
Abstract

Cited by 56 (3 self)
This article proposes a new framework to regularize linear inverse problems using the total variation on non-local graphs. This non-local graph makes it possible to adapt the penalization to the geometry of the underlying function to recover. A fast algorithm iteratively computes both the solution of the regularization process and the non-local graph adapted to this solution. We show numerical applications of this method to the resolution of image processing inverse problems such as inpainting, super-resolution and compressive sampling.