## Depth from Defocus: A Spatial Domain Approach (1994)

Venue: International Journal of Computer Vision

Citations: 78 (13 self)

### BibTeX

@ARTICLE{Subbarao94depthfrom,
  author  = {Murali Subbarao and Gopal Surya},
  title   = {Depth from Defocus: A Spatial Domain Approach},
  journal = {International Journal of Computer Vision},
  year    = {1994},
  volume  = {13},
  pages   = {271--294}
}

### Abstract

A new method named STM is described for determining the distance of objects and for rapid autofocusing of camera systems. STM uses image defocus information and is based on a new Spatial-Domain Convolution/Deconvolution Transform. The method requires only two images taken with different camera parameters such as lens position, focal length, and aperture diameter. Both images can be arbitrarily blurred and neither of them needs to be a focused image. Therefore STM is very fast in comparison with Depth-from-Focus methods, which search for the lens position or focal length of best focus. The method involves simple local operations and can easily be implemented in parallel to obtain the depth map of a scene. STM has been implemented on an actual camera system named SPARCS. Experiments on the performance of STM and their results on real-world planar objects are presented. The results indicate that the accuracy of STM compares well with Depth-from-Focus methods and is useful in practical applications.
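The geometry behind defocus-based ranging can be sketched with the standard thin-lens model. In the sketch below, the blur-circle radius follows from the camera setting (sensor distance s, focal length f, aperture diameter D) and the object distance u; the function names, numeric values, and the assumed focus-plane sign convention are illustrative, not taken from the paper.

```python
def blur_radius(u, s, f, D):
    """Blur-circle radius on the sensor under a paraxial thin-lens model.

    u: object distance, s: lens-to-sensor distance, f: focal length,
    D: aperture diameter (all in metres). The sign convention assumes
    the plane of best focus lies behind the sensor; the front/back
    focus ambiguity of a real camera is ignored in this sketch.
    """
    return (D * s / 2.0) * (1.0 / f - 1.0 / u - 1.0 / s)


def distance_from_blur(R, s, f, D):
    """Invert blur_radius() to recover the object distance."""
    return 1.0 / (1.0 / f - 1.0 / s - 2.0 * R / (D * s))


# Round trip: a point 2 m away, 35 mm lens, 10 mm aperture.
u = 2.0
R = blur_radius(u, s=0.037, f=0.035, D=0.010)
u_hat = distance_from_blur(R, s=0.037, f=0.035, D=0.010)
assert abs(u_hat - u) < 1e-9
```

Depth-from-Defocus methods such as STM estimate the level of blur from image content rather than measuring R directly, but an inversion of this kind is the final step that turns a blur estimate into a distance.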

### Citations

1403 | Robot Vision - Horn - 1986

Citation Context: ...us, Convolution/Deconvolution, autofocusing. 1 Introduction. Passive techniques of ranging, or determining the distance of objects from a camera, are an important problem in computer vision. Stereo vision [6] is perhaps the most popular technique. The major computational problems associated with stereo are the correspondence problem and detection of occlusion. Recently, Depth-from-Focus (DFF) methods [5, ...

405 | Introduction to Fourier Optics - Goodman - 2005 |

189 | A new sense for depth of field - Pentland - 1987

Citation Context: ...mechanical motion of camera parts, which is much slower than electronic computation. During the entire period of adjusting camera parameters, the scene must remain stationary. Recently some researchers [7, 11, 12, 19, 20, 21, 26, 28] have proposed methods for finding the distance of an object which do not require focusing the object. They take the level of defocus of the object into account in determining distance. Therefore this appro...

107 | A perspective on range finding techniques for computer vision - Jarvis - 1983

Citation Context: ...[6] is perhaps the most popular technique. The major computational problems associated with stereo are the correspondence problem and detection of occlusion. Recently, Depth-from-Focus (DFF) methods [5, 8, 9, 13, 14, 27, 29] have attracted the attention of researchers, as they do not suffer from the problems associated with stereo. DFF methods are based on the fact that in the image formed by an optical system such as a co...

70 | Parallel depth recovery by changing camera parameters - Subbarao - 1988

65 | Accommodation in computer vision - Tenenbaum - 1970

63 | Linear Systems, Fourier Transforms and Optics - Gaskill - 1978

Citation Context: ..., i.e., e = (s, f, D). (2) In order to illustrate the theoretical basis of STM, we take the optical system to be circularly symmetric around the optical axis and use a paraxial geometric optics model [2] for image formation. This is a good approximation in practice to the actual image formation process modeled by physical optics [1, 25]. However, STM itself is applicable to the physical optics model also. In...

53 | Focusing techniques - Subbarao, Choi, Nikzad

45 | Fundamentals of electronic imaging systems - Schreiber - 1986

Citation Context: ...aberrations, etc., it will be a roughly circular blob with the brightness falling off gradually at the border rather than sharply. Therefore, as an alternative to the above cylindrical PSF model, often [6, 11, 15, 20] a two-dimensional Gaussian is suggested, which is defined by h_2(x, y) = (1 / (2πσ²)) e^(−(x² + y²) / (2σ²)) (8), where σ is a spread parameter corresponding to the standard deviation of the distribution of the ...
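The Gaussian PSF of equation (8) integrates to one, which is what makes it a brightness-preserving blur model. A quick numerical check (grid extent, step, and σ chosen arbitrarily for illustration):

```python
import math

def gaussian_psf(x, y, sigma):
    """Two-dimensional Gaussian PSF h_2(x, y) of Eq. (8)."""
    return math.exp(-(x * x + y * y) / (2.0 * sigma ** 2)) / (2.0 * math.pi * sigma ** 2)

# Riemann-sum check that the PSF has unit volume.
sigma, step, half = 1.5, 0.05, 8.0
n = int(2 * half / step)
total = sum(
    gaussian_psf(-half + i * step, -half + j * step, sigma)
    for i in range(n)
    for j in range(n)
) * step * step
assert abs(total - 1.0) < 1e-3
```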

35 | Focusing - Krotkov - 1987

35 | Depth recovery from blurred edges - Subbarao, Gurumoorthy

Citation Context: ...ance of all objects in a scene using the DFD methods, irrespective of whether the objects are focused or not. Several DFD methods have been proposed and demonstrated for objects with brightness edges [4, 11, 18, 21, 22, 28]. In this case the underlying focused image is assumed to be a step edge or an edge of known form. A measure of edge blur is computed and this is used to estimate the distance of the object. In estima...

30 | A matrix based method for determining depth from focus - Ens, Lawrence - 1991

29 | Shape from focus system - Nayar - 1992

27 | A simple, real-time range camera - Pentland, Darrell, et al. - 1989

26 | Depth from focus - Grossman - 1987

24 | A generalized depth estimation algorithm with a single image - Lai, Fu, et al. - 1992 |

23 | Depth from defocus and rapid autofocusing: A practical approach - Subbarao, Wei - 1992

21 | Efficient depth recovery through inverse optics (to appear). In: Machine Vision for Inspection and Measurement, Editor: H - Subbarao - 1988

21 | Computer modeling and simulation of camera defocus - Subbarao, Lu - 1992

Citation Context: ...metric around the optical axis, and use a paraxial geometric optics model [2] for image formation. This is a good approximation in practice to the actual image formation process modeled by physical optics [1, 25]. However, STM itself is applicable to the physical optics model also. In Figure 2, if the object point p is not in focus, then it gives rise to a blurred image p'' on the image detector ID. According to...

20 | Implementation of automatic focusing algorithms for a computer vision system with camera control - Schlag, Sanderson, et al. - 1983

19 | Application of spatial-domain convolution/deconvolution transform for determining distance from image defocus - Subbarao, Surya - 1992

Citation Context: ...at it is based on a smoothness assumption and it is computation intensive. In this paper a new method of depth estimation using a new Spatial-Domain Convolution/Deconvolution Transform (S-Transform) [24] is described. This method, named the S-Transform Method or STM, uses only two images taken with different camera parameters. All the computations are done in the spatial domain and are local in nature. He...
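The two-image idea can be illustrated symbolically. Under the assumptions the paper works with (a locally cubic focused image f and a rotationally symmetric PSF with spread parameter σ), each observed image satisfies g ≈ f + (σ²/2)∇²f, and because ∇²f is the same in both images, the quantity σ₁² − σ₂² falls out of the two images alone. The cubic polynomial below is an arbitrary illustrative choice, and the sketch evaluates the analytic blur model rather than real convolved images:

```python
def f(x, y):
    # An arbitrary locally-cubic "focused image" patch.
    return x ** 3 + y ** 3 + x ** 2 + y ** 2

def laplacian_f(x, y):
    # Laplacian of f: f_xx + f_yy = 6x + 6y + 4.
    return 6.0 * x + 6.0 * y + 4.0

def blurred(x, y, sigma):
    # For a cubic f and a symmetric PSF with spread sigma,
    # g = f + (sigma^2 / 2) * Laplacian(f) holds exactly.
    return f(x, y) + 0.5 * sigma ** 2 * laplacian_f(x, y)

def sigma_sq_diff(x, y, s1, s2):
    """Recover sigma1^2 - sigma2^2 from the two images alone.
    Since f is cubic, Laplacian(g1) = Laplacian(g2) = Laplacian(f),
    so the mean image Laplacian equals laplacian_f here."""
    g1, g2 = blurred(x, y, s1), blurred(x, y, s2)
    mean_lap = laplacian_f(x, y)
    return 4.0 * (g1 - g2) / (2.0 * mean_lap)

est = sigma_sq_diff(0.3, 0.7, 2.0, 1.0)
assert abs(est - (2.0 ** 2 - 1.0 ** 2)) < 1e-9
```

In a real implementation the Laplacians are estimated from the observed images with discrete derivative filters, and σ₁² − σ₂², combined with the known camera settings, determines the depth.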

13 | Determining distance from defocused images of simple objects - Subbarao - 1989

7 | Smoothed Differentiation Filters for Images - Meer, Weiss - 1992

Citation Context: ...e previous section we assumed a local cubic polynomial model for the focused image f(x, y) in deriving STM. This assumption can be removed by using a set of smoothing filters proposed by Meer and Weiss [10], so that STM can be applied to arbitrary focused images. Meer and Weiss [10] have proposed a set of discrete image smoothing filters for estimating images and their derivatives. These filters essentially ...
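The least-squares idea behind such smoothing filters can be sketched with the classic 5-point cubic-fit smoothing weights (a generic Savitzky-Golay-style construction; these are not Meer and Weiss's exact coefficients):

```python
# Weights that replace the centre sample of a 5-point window by the
# value of the least-squares cubic fit evaluated at the centre.
KERNEL = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def smooth_at_centre(samples):
    """Estimate the underlying signal value at the window centre."""
    assert len(samples) == len(KERNEL)
    return sum(w * s for w, s in zip(KERNEL, samples))

# Such filters reproduce any cubic polynomial exactly at the centre,
# which is why they pair naturally with STM's local cubic model ...
poly = lambda x: 2 * x ** 3 - x ** 2 + 3 * x + 5
window = [poly(x) for x in (-2, -1, 0, 1, 2)]
assert abs(smooth_at_centre(window) - poly(0)) < 1e-12

# ... while the weights sum to 1, so constant regions pass through
# unchanged and zero-mean noise is attenuated.
assert abs(sum(KERNEL) - 1.0) < 1e-12
```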

7 | Computational methods and electronic camera apparatus for determining distance of objects, rapid autofocusing, and obtaining improved focus images, U.S. patent application serial number 07/373,996 - Subbarao - 1989

Citation Context: ...htness of the image. Normalization with respect to image magnification is more complicated. It can be done by image interpolation and resampling such that all images correspond to the same field of view [17]. The relation between an original image g(x, y) taken with s = s_1 and the corresponding magnification-normalized image g_n(x, y) taken with s = s_0 is given by g_n(x/s_1, y/s_1) = g(x/s_0, y/s_0...


6 | On the depth information in the point spread function of a defocused optical system - Subbarao - 1990

Citation Context: ...constant of proportionality characteristic of the given camera. Except when σ is very small (in which case diffraction effects dominate), in most practical cases k = 1/√2 (10) is a good approximation [18, 22, 23]. Since the blur circle radius R' is a function of e and u, σ can be written as σ(e, u). (However, the image of an actual point light source for our camera was quite close to a cylindrical function a...

2 | A Model for Image Sensing and Digitization - Subbarao, Nikzad - 1990

Citation Context: ...point (x, y) of a scene is defined as the total light energy incident on the camera aperture (entrance pupil) during one exposure period from the object point along the direction corresponding to (x, y) [16]. Let g_1(x, y) and g_2(x, y) be two images of the object recorded for two different camera parameter settings e_1 and e_2, where e_1 = (s_1, f_1, D_1) and e_2 = (s_2, f_2, D_2). (29) The images...