Results 1–10 of 27
Real-time focus range sensor
In International Conference on Computer Vision (ICCV), 1995
Cited by 106 (11 self)
Abstract—Structures of dynamic scenes can only be recovered using a real-time range sensor. Depth from defocus offers an effective solution to fast and dense range estimation. However, accurate depth estimation requires theoretical and practical solutions to a variety of problems, including recovery of textureless surfaces, precise blur estimation, and magnification variations caused by defocusing. Both textured and textureless surfaces are recovered using an illumination pattern that is projected via the same optical path used to acquire images. The illumination pattern is optimized to maximize accuracy and spatial resolution in computed depth. The relative blurring in two images is computed using a narrow-band linear operator that is designed by considering all the optical, sensing, and computational elements of the depth from defocus system. Defocus-invariant magnification is achieved by the use of an additional aperture in the imaging optics. A prototype focus range sensor has been developed that has a workspace of 1 cubic foot and produces up to 512 × 480 depth estimates at 30 Hz with an average RMS error of 0.2%. Several experimental results are included to demonstrate the performance of the sensor.
Index Terms—Depth from defocus, constant magnification defocusing, active illumination pattern, optical transfer function, image sensing, tuned focus operator, depth estimation, real-time range sensor.
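The sensor's core step, comparing the relative blur of two images of the same scene taken through the same optics, can be sketched in toy form. The squared-Laplacian sharpness proxy and the repeated box-blur "PSF" below are illustrative assumptions, not the paper's tuned narrow-band operator:

```python
import numpy as np

def sharpness(img):
    """Mean squared Laplacian response: a crude stand-in for the
    tuned narrow-band focus operator described in the paper."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return float(np.mean(lap ** 2))

def defocus_blur(img, passes=3):
    """Approximate defocus with repeated 3x3 box blurs (toy PSF)."""
    out = img.astype(float)
    for _ in range(passes):
        acc = np.zeros_like(out)
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                acc += np.roll(np.roll(out, di, 0), dj, 1)
        out = acc / 9.0
    return out

# A textured scene, e.g. as produced by the projected illumination pattern.
rng = np.random.default_rng(0)
pattern = rng.random((64, 64))
near_focus = pattern               # in focus in one image
far_focus = defocus_blur(pattern)  # defocused in the other

# Relative blur: the sharper image was taken closer to the focal plane.
assert sharpness(near_focus) > sharpness(far_focus)
```

In the actual sensor this comparison is done per pixel with a carefully designed operator, and the blur ratio is mapped to metric depth through the optics model.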
A geometric approach to shape from defocus
IEEE Trans. Pattern Anal. Mach. Intell., 2005
Cited by 58 (2 self)
Abstract—We introduce a novel approach to shape from defocus, i.e., the problem of inferring the three-dimensional (3D) geometry of a scene from a collection of defocused images. Typically, in shape from defocus, the task of extracting geometry also requires deblurring the given images. A common approach to bypass this task relies on approximating the scene locally by a plane parallel to the image (the so-called equifocal assumption). We show that this approximation is indeed not necessary, as one can estimate 3D geometry while avoiding deblurring without strong assumptions on the scene. Solving the problem of shape from defocus requires modeling how light interacts with the optics before reaching the imaging surface. This interaction is described by the so-called point spread function (PSF). When the form of the PSF is known, we propose an optimal method to infer 3D geometry from defocused images that involves computing orthogonal operators which are regularized via functional singular value decomposition. When the form of the PSF is unknown, we propose a simple and efficient method that first learns a set of projection operators from blurred images and then uses these operators to estimate the 3D geometry of the scene from novel blurred images. Our experiments on both real and synthetic images show that the performance of the algorithm is relatively insensitive to the form of the PSF. Our general approach is to minimize the Euclidean norm of the difference between the estimated images and the observed images. The method is geometric in that we reduce the minimization to performing projections onto linear subspaces, by using inner product structures on both infinite- and finite-dimensional Hilbert spaces. Both proposed algorithms involve only simple matrix-vector multiplications which can be implemented in real time.
Index Terms—Shape from defocus, depth from defocus, blind deconvolution, image processing, deblurring, shape, 3D reconstruction, shape estimation, image restoration, learning subspaces.
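The projection idea can be sketched in finite dimensions: stack the defocused images into one observation vector, and test each depth hypothesis by projecting the observations onto the range of that hypothesis's blur operator. Everything below (1D box-blur PSFs, two depth labels) is a toy assumption, not the paper's functional SVD machinery:

```python
import numpy as np

def blur_matrix(n, radius):
    """1D box blur as an n x n matrix (a toy, depth-dependent PSF)."""
    H = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        H[i, lo:hi] = 1.0 / (hi - lo)
    return H

def stacked_operator(n, r1, r2):
    """Joint operator mapping scene radiance to the pair of images."""
    return np.vstack([blur_matrix(n, r1), blur_matrix(n, r2)])

def residual(y, H):
    """Norm of the part of y orthogonal to range(H): ||y - H H^+ y||.
    Small when the observations are explainable by blur model H,
    so depth can be scored without ever deblurring."""
    return float(np.linalg.norm(y - H @ np.linalg.pinv(H) @ y))

n = 16
# Hypothesis "near": image 1 in focus, image 2 blurred; "far" is the reverse.
H_near = stacked_operator(n, 0, 3)
H_far = stacked_operator(n, 3, 0)

rng = np.random.default_rng(1)
radiance = rng.random(n)
observed = H_far @ radiance  # the scene is actually at the "far" depth

assert residual(observed, H_far) < 1e-8                       # correct model fits
assert residual(observed, H_far) < residual(observed, H_near) # wrong model does not
```

The radiance never needs to be recovered: only the projection residual is compared across depth hypotheses, which is the geometric point of the paper.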
Confocal Stereo
2009
Cited by 32 (4 self)
We present confocal stereo, a new method for computing 3D shape by controlling the focus and aperture of a lens. The method is specifically designed for reconstructing scenes with high geometric complexity or fine-scale texture. To achieve this, we introduce the confocal constancy property, which states that as the lens aperture varies, the pixel intensity of a visible in-focus scene point will vary in a scene-independent way that can be predicted by prior radiometric lens calibration. The only requirement is that incoming radiance within the cone subtended by the largest aperture is nearly constant. First, we develop a detailed lens model that factors out the distortions in high-resolution SLR cameras (12 MP or more) with large-aperture lenses (e.g., f/1.2). This allows us to assemble an A × F aperture-focus image (AFI) for each pixel, which collects the undistorted measurements over all A apertures and F focus settings. In the AFI representation, confocal constancy reduces to color comparisons within regions of the AFI, and leads to focus metrics that can be evaluated separately for each pixel. We propose two such metrics and present initial reconstruction results for complex scenes, as well as for a scene with known ground-truth shape.
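Confocal constancy can be illustrated on a single pixel's aperture-focus image (AFI): at the true focus setting, the calibrated intensity barely changes with aperture, so a dispersion-across-apertures metric is minimized there. The synthetic drift model below is an illustrative assumption, not the paper's calibrated color metric:

```python
import numpy as np

def confocal_focus_metric(afi):
    """For each focus setting f, the spread of (calibrated) intensity
    across apertures; confocal constancy predicts it is smallest at
    the true focus. afi has shape (A apertures, F focus settings)."""
    return afi.std(axis=0)

# Toy AFI for one pixel: at the true focus (index 3) the intensity is
# aperture-independent; elsewhere defocus mixes in neighboring scene
# points, so intensity drifts with aperture (a made-up drift model).
A, F, true_f = 8, 7, 3
apertures = np.arange(1, A + 1, dtype=float)
afi = np.empty((A, F))
for f in range(F):
    defocus = abs(f - true_f)
    afi[:, f] = 0.5 + 0.05 * defocus * apertures

metric = confocal_focus_metric(afi)
assert int(np.argmin(metric)) == true_f
```

Because the metric uses only one pixel's AFI, depth can be decided per pixel, which is what lets the method handle fine-scale geometry like hair.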
Active refocusing of images and videos
ACM Trans. Graph.
Cited by 28 (0 self)
Figure 1: Active refocusing of images. (a) Image acquired by projecting a sparse set of illumination dots on the scene. (b) The dots are automatically removed from the acquired image, and the defocus of the dots and a color segmentation of the image are used to compute an approximate depth map of the scene with sharp boundaries. (c and d) The depth map and the dot-removed image are used to smoothly refocus the scene. (e) The refocusing can also be done for an image taken immediately before or after, but illuminated as desired.
We present a system for refocusing images and videos of dynamic scenes using a novel, single-view depth estimation method. Our method for obtaining depth is based on the defocus of a sparse set of dots projected onto the scene. In contrast to other active illumination techniques, the projected pattern of dots can be removed from each captured image and its brightness easily controlled in order to avoid under- or overexposure. The depths corresponding to the projected dots and a color segmentation of the image are used to compute an approximate depth map of the scene with clean region boundaries. The depth map is used to refocus the acquired image after the dots are removed, simulating realistic depth-of-field effects. Experiments on a wide variety of scenes, including close-ups and live action, demonstrate the effectiveness of our method. CR Categories: I.3.7 [Computer Graphics]: Three-Dimensional
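The final refocusing step, blurring each pixel by an amount that grows with its distance from the chosen focal depth, can be sketched on a single scanline. The gather-style box blur and the linear depth-to-radius rule below are simplifying assumptions, not the paper's rendering model:

```python
import numpy as np

def refocus_row(row, depth, focus_depth, strength=2.0):
    """Synthetic depth of field on a 1D scanline: each pixel is averaged
    over a window whose radius grows with |depth - focus_depth|.
    A crude gather approximation of thin-lens defocus."""
    out = np.empty(len(row), dtype=float)
    n = len(row)
    for i in range(n):
        r = int(round(strength * abs(depth[i] - focus_depth)))
        lo, hi = max(0, i - r), min(n, i + r + 1)
        out[i] = row[lo:hi].mean()
    return out

row = np.array([0., 1., 0., 1., 0., 1., 0., 1.])
depth = np.array([1., 1., 1., 1., 3., 3., 3., 3.])  # two depth layers

sharp_near = refocus_row(row, depth, focus_depth=1.0)
# Pixels on the focused layer are untouched; the other layer is blurred.
assert np.allclose(sharp_near[:4], row[:4])
assert not np.allclose(sharp_near[4:], row[4:])
```

The paper's contribution is upstream of this step: recovering the per-pixel depth map from the defocus of the projected dots, with clean boundaries from color segmentation.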
A Variational Approach to Shape from Defocus
In Proc. European Conference on Computer Vision, 2002
Cited by 20 (0 self)
We address the problem of estimating the three-dimensional shape and radiance of a surface in space from images obtained with different focal settings.
A Geometric Approach to Blind Deconvolution with Application to Shape from Defocus
In Proc. IEEE Computer Vision and Pattern Recognition, 2000
Cited by 10 (5 self)
We propose a solution to the generic "bilinear calibration-estimation problem" when using a quadratic cost function and restricting to (locally) translation-invariant imaging models. We apply the solution to the problem of reconstructing the three-dimensional shape and radiance of a scene from a number of defocused images. Since the imaging process maps the continuum of three-dimensional space onto the discrete pixel grid, rather than discretizing the continuum we exploit the structure of maps between (finite- and infinite-dimensional) Hilbert spaces and arrive at a principled algorithm that does not involve any choice of basis or discretization. Rather, these are uniquely determined by the data, and exploited in a functional singular value decomposition in order to obtain a regularized solution.
1. Introduction. An imaging system, such as the eye or a video camera, involves a map from the three-dimensional environment onto a two-dimensional surface. In order to retrieve the spatial inform...
On generating seamless mosaics with large depth of field
In ICPR, 2000
Cited by 9 (4 self)
Imaging cameras have only a finite depth of field, and only those objects within that depth range are simultaneously in focus. The depth of field of a camera can be improved by mosaicing a sequence of images taken under different focal settings. In conventional mosaicing schemes, a focus measure is computed for every scene point across the image sequence, and the point is selected from the image in which the focus measure is highest. We prove in this paper, however, that the focus measure is not highest in the best-focused frame for a certain class of scene points. The incorrect selection of image frames for these points causes visual artifacts to appear in the resulting mosaic. We also propose a method to isolate such scene points, and an algorithm to compose large depth-of-field mosaics without the undesirable artifacts.
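The conventional scheme criticized here, per-pixel selection of the frame with the highest focus measure, can be sketched on a two-frame focal stack. The squared-Laplacian measure and checkerboard scene below are illustrative assumptions, deliberately chosen so that naive selection succeeds; the paper's contribution is identifying where it fails:

```python
import numpy as np

def box3(img):
    """3x3 box blur (wrap-around boundaries), standing in for defocus."""
    out = np.zeros_like(img, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += np.roll(np.roll(img, di, 0), dj, 1)
    return out / 9.0

def focus_measure(img):
    """Per-pixel squared Laplacian, a common edge-based focus measure."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
           np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return lap ** 2

def compose_mosaic(stack):
    """Per pixel, keep the value from the frame whose measure is highest."""
    frames = np.stack(stack)
    best = np.argmax(np.stack([focus_measure(f) for f in stack]), axis=0)
    i, j = np.indices(best.shape)
    return frames[best, i, j], best

# Two-frame stack: left half sharp in frame 0, right half sharp in frame 1.
cb = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)  # textured scene
blurred = box3(cb)
frame0 = cb.copy(); frame0[:, 16:] = blurred[:, 16:]
frame1 = cb.copy(); frame1[:, :16] = blurred[:, :16]

fused, best = compose_mosaic([frame0, frame1])
assert (best[:, 4:12] == 0).all() and (best[:, 20:28] == 1).all()
assert np.allclose(fused[:, 4:12], cb[:, 4:12])
```

For the problematic class of scene points the paper identifies (between edges and uniform areas), this argmax selection picks the wrong frame, which motivates its frame-isolation step.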
Shape and radiance estimation from the information divergence of blurred images
In Proc. European Conf. Computer Vision, 2000
Cited by 8 (2 self)
Abstract. We formulate the problem of reconstructing the shape and radiance of a scene as the minimization of the information divergence between blurred images, and propose an algorithm that is provably convergent and guarantees that the solution is admissible, in the sense of corresponding to a positive radiance and imaging kernel. The motivation for the use of information divergence comes from the work of Csiszár [5], while the fundamental elements of the proof of convergence come from work by Snyder et al. [14], extended to handle unknown imaging kernels (i.e., the shape of the scene).
Observing Shape From Defocused Images
International Journal of Computer Vision, 1999
Cited by 7 (2 self)
Accommodation cues are measurable properties of an image that are associated with a change in the geometry of the imaging device. To what extent can three-dimensional shape be reconstructed using accommodation cues alone? This question is fundamental to the problem of reconstructing "shape from focus" (SFF) and "shape from defocus" (SFD) for applications in inspection, microscopy, image restoration (deblurring), and visualization. We address it by studying the observability of accommodation cues in an analytical framework that reveals under what conditions shape can be reconstructed from defocused images. We do so in three steps: (1) we characterize the observability of any surface in the presence of a controlled radiance (weak observability), (2) we establish the existence of a radiance that allows distinguishing any two surfaces (sufficient excitation), and finally (3) we show that in the absence of any prior knowledge on the radiance, two surfaces can be distinguished up to the degree of resolution determined by the complexity of the radiance (strong observability).
Generating omnifocus images using graph cuts and a new focus measure
In Proc. 17th International Conference on Pattern Recognition (ICPR), 2004
Cited by 6 (1 self)
In this paper, we discuss how to generate omnifocus images from a sequence of images taken at different focal settings. We first show that existing focus measures encounter difficulty when detecting which frame is most focused for pixels in the regions between intensity edges and uniform areas. We then propose a new focus measure that handles this problem. In addition, after computing focus measures for every pixel in all images, we construct a three-dimensional (3D) node-capacitated graph and apply a graph-cut based optimization method to estimate a spatio-focus surface that minimizes the summation of the new focus measure values on this surface. An omnifocus image can be directly generated from this minimal spatio-focus surface. Experimental results with simulated and real scenes are provided.
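The effect of the optimization step, preferring a frame labeling that is both well focused and spatially coherent over an independent per-pixel choice, can be sketched in one dimension with dynamic programming. This Viterbi-style toy (made-up costs, linear label penalty) is a stand-in for the paper's 3D node-capacitated graph cut, not an implementation of it:

```python
import numpy as np

def smooth_labels(cost, penalty=1.0):
    """Choose one frame label per pixel along a scanline, minimizing
    data cost plus a smoothness penalty between neighboring labels,
    via dynamic programming. cost[i, k] plays the role of the negative
    focus measure of frame k at pixel i."""
    n, k = cost.shape
    dp = cost.copy()
    back = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        for lab in range(k):
            trans = dp[i - 1] + penalty * np.abs(np.arange(k) - lab)
            back[i, lab] = int(np.argmin(trans))
            dp[i, lab] = cost[i, lab] + trans[back[i, lab]]
    labels = np.zeros(n, dtype=int)
    labels[-1] = int(np.argmin(dp[-1]))
    for i in range(n - 2, -1, -1):
        labels[i] = back[i + 1, labels[i + 1]]
    return labels

# Frame 0 is best on the left, frame 1 on the right, with one spurious
# pixel (index 2) that per-pixel argmin would mislabel as frame 1;
# the smoothness term overrides it.
cost = np.array([[0., 1.], [0., 1.], [1., 0.2], [0., 1.],
                 [1., 0.], [1., 0.], [1., 0.], [1., 0.]])
labels = smooth_labels(cost, penalty=1.0)
assert labels.tolist() == [0, 0, 0, 0, 1, 1, 1, 1]
```

The graph cut in the paper plays the same regularizing role over the full 3D spatio-focus volume, where exact 1D dynamic programming does not apply.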