Results 1 - 10 of 53
Image and depth from a conventional camera with a coded aperture
- ACM TRANS. GRAPH
, 2007
"... A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolut ..."
Abstract - Cited by 278 (24 self)
A conventional camera captures blurred versions of scene information away from the plane of focus. Camera systems have been proposed that allow for recording all-focus images, or for extracting depth, but to record both simultaneously has required more extensive hardware and reduced spatial resolution. We propose a simple modification to a conventional camera that allows for the simultaneous recovery of both (a) high resolution image information and (b) depth information adequate for semi-automatic extraction of a layered depth representation of the image. Our modification is to insert a patterned occluder within the aperture of the camera lens, creating a coded aperture. We introduce a criterion for depth discriminability which we use to design the preferred aperture pattern. Using a statistical model of images, we can recover both depth information and an all-focus image from single photographs taken with the modified camera. A layered depth map is then extracted, requiring user-drawn strokes to clarify layer assignments in some cases. The resulting sharp image and layered depth map can be combined for various photographic applications, including automatic scene segmentation, post-exposure refocusing, or re-rendering of the scene from an alternate viewpoint.
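The core idea can be illustrated with a small sketch: with a coded aperture, deconvolving the photo with a blur kernel of the wrong scale (i.e. the wrong depth) re-explains the data poorly, so depth can be selected by reconstruction error. The numpy sketch below is a whole-image toy under assumed inputs (a 2D photo `y` and a list of candidate kernels, one per depth); the paper does this locally and with a learned image prior rather than plain Wiener deconvolution.

```python
import numpy as np

def blur_fft(x, k):
    """Circular convolution of image x with kernel k (kernel origin at pixel (0, 0))."""
    K = np.fft.fft2(k, s=x.shape)
    return np.real(np.fft.ifft2(np.fft.fft2(x) * K))

def deconv_wiener(y, k, nsr=0.01):
    """Wiener deconvolution of photo y by kernel k with a flat noise-to-signal ratio."""
    K = np.fft.fft2(k, s=y.shape)
    X = np.conj(K) * np.fft.fft2(y) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(X))

def depth_from_residual(y, kernels_by_depth):
    """Pick the depth whose scaled aperture kernel best re-explains the photo:
    deconvolve with each candidate, re-blur, and keep the smallest residual."""
    errs = [np.mean((y - blur_fft(deconv_wiener(y, k), k)) ** 2)
            for k in kernels_by_depth]
    return int(np.argmin(errs))
```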
Random lens imaging
- MIT Computer Science and Artificial Intelligence Laboratory
, 2006
"... We call a random lens one for which the function relating the input light ray to the output sensor location is pseudo-random. Imaging systems with random lenses can expand the space of possible camera designs, allowing new trade-offs in optical design and potentially adding new imaging capabilities. ..."
Abstract - Cited by 37 (0 self)
We call a random lens one for which the function relating the input light ray to the output sensor location is pseudo-random. Imaging systems with random lenses can expand the space of possible camera designs, allowing new trade-offs in optical design and potentially adding new imaging capabilities. Machine learning methods are critical for both camera calibration and image reconstruction from the sensor data. We develop the theory and compare two different methods for calibration and reconstruction: an MAP approach, and basis pursuit from compressive sensing [5]. We show proof-of-concept experimental results from a random lens made from a multi-faceted mirror, showing successful calibration and image reconstruction. We illustrate the potential for super-resolution and 3D imaging.
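As a rough illustration of the reconstruction side, the sketch below recovers a sparse scene from pseudo-random linear measurements using ISTA on the LASSO objective, a simple stand-in for the basis pursuit step the abstract mentions. The matrix `A`, the sparsity level, and the array sizes are invented for the example; the paper's MAP variant and the calibration procedure are not shown.

```python
import numpy as np

def ista_sparse_recovery(A, y, lam=0.01, n_iter=500):
    """Recover a sparse x from y = A @ x by ISTA on the LASSO objective
    0.5*||A x - y||^2 + lam*||x||_1 (a simple stand-in for basis pursuit)."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # soft threshold
    return x

# Toy "random lens": a pseudo-random measurement matrix mapping scene to sensor.
rng = np.random.default_rng(0)
A = rng.standard_normal((128, 256))            # 128 sensor readings, 256-pixel scene
x_true = np.zeros(256)
x_true[rng.choice(256, 10, replace=False)] = 1.0
y = A @ x_true
x_hat = ista_sparse_recovery(A, y)
```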
Understanding camera trade-offs through a Bayesian analysis of light field projections
- MIT CSAIL TR
, 2008
"... Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but in ..."
Abstract - Cited by 33 (6 self)
Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.
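The measurement model in this framework is linear: each sensor element contributes one inner product with the (discretized) light field. Under a single zero-mean Gaussian prior the MAP estimate has a closed form, which the sketch below computes; this is only the simplest case, whereas the paper uses richer light field priors and evaluates designs on 2D light field simulations. The measurement matrix `A` (one row per sensor element), the prior covariance, and the noise level are assumed inputs.

```python
import numpy as np

def map_light_field_estimate(A, y, prior_cov, noise_sigma):
    """MAP estimate of a light field l from measurements y = A @ l + noise,
    assuming a zero-mean Gaussian prior N(0, prior_cov) and i.i.d. Gaussian noise.
    Closed form: l* = (A^T A / s^2 + prior_cov^{-1})^{-1} (A^T y / s^2)."""
    H = A.T @ A / noise_sigma ** 2 + np.linalg.inv(prior_cov)
    return np.linalg.solve(H, A.T @ y / noise_sigma ** 2)
```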
Shield fields: modeling and capturing 3D occluders
- ACM TRANS. GRAPH
"... We describe a unified representation of occluders in light transport and photography using shield fields: the 4D attenuation function which acts on any light field incident on an occluder. Our key theoretical result is that shield fields can be used to decouple the effects of occluders and incident ..."
Abstract - Cited by 29 (9 self)
We describe a unified representation of occluders in light transport and photography using shield fields: the 4D attenuation function which acts on any light field incident on an occluder. Our key theoretical result is that shield fields can be used to decouple the effects of occluders and incident illumination. We first describe the properties of shield fields in the frequency domain and briefly analyze the “forward” problem of efficiently computing cast shadows. Afterwards, we apply the shield field signal-processing framework to make several new observations regarding the “inverse” problem of reconstructing 3D occluders from cast shadows – extending previous work on shape-from-silhouette and visual hull methods. From this analysis we develop the first single-camera, single-shot approach to capture visual hulls without requiring moving or programmable illumination. We analyze several competing camera designs, ultimately leading to the development of a new large-format, mask-based light field camera that exploits optimal tiled-broadband codes for light-efficient shield field capture. We conclude by presenting a detailed experimental analysis of shield field capture and 3D occluder reconstruction.
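A minimal way to state the decoupling result in code, under an idealized ray-space discretization: the light field just behind an occluder is the incident light field attenuated ray by ray, so dividing a capture by an occluder-free reference capture isolates the shield field. The arrays and the division-based estimator below are illustrative assumptions, not the paper's tiled-broadband capture pipeline.

```python
import numpy as np

def apply_shield_field(incident_lf, shield_field):
    """Light field just behind an occluder: the incident light field attenuated
    ray by ray, i.e. an elementwise product in the 4D ray space."""
    return incident_lf * shield_field

def estimate_shield_field(captured_lf, reference_lf, eps=1e-6):
    """Decoupling in reverse: divide a capture by an occluder-free reference
    capture of the same illumination to isolate the occluder's shield field."""
    return np.clip(captured_lf / (reference_lf + eps), 0.0, 1.0)
```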
Coded Exposure Deblurring: Optimized Codes for PSF Estimation and Invertibility
"... We consider the problem of single image object motion deblurring from a static camera. It is well-known that deblurring of moving objects using a traditional camera is illposed, due to the loss of high spatial frequencies in the captured blurred image. A coded exposure camera [17] modulates the inte ..."
Abstract - Cited by 22 (2 self)
We consider the problem of single image object motion deblurring from a static camera. It is well known that deblurring of moving objects using a traditional camera is ill-posed, due to the loss of high spatial frequencies in the captured blurred image. A coded exposure camera [17] modulates the integration pattern of light by opening and closing the shutter within the exposure time using a binary code. The code is chosen to make the resulting point spread function (PSF) invertible, for best deconvolution performance. However, for a successful deconvolution algorithm, PSF estimation is as important as PSF invertibility. We show that PSF estimation is easier if the resulting motion blur is smooth, and that the optimal code for PSF invertibility can worsen PSF estimation, since it leads to non-smooth blur. We show that both criteria, PSF invertibility and PSF estimation, can be met simultaneously, albeit with a slight increase in deconvolution noise. We propose design rules for a code to have good PSF estimation capability and outline two search criteria for finding the optimal code of a given length. We present a theoretical analysis comparing the performance of the proposed code with the code optimized solely for PSF invertibility. We also show how to implement coded exposure on a consumer-grade machine vision camera with no additional hardware. Real experimental results demonstrate the effectiveness of the proposed codes for motion deblurring.
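The invertibility criterion can be made concrete with a small numpy sketch: score a shutter code by the smallest DFT magnitude of the motion PSF it induces, and search over candidate codes. The random candidates and the code length below are purely illustrative (not the published optimized code), and the sketch omits the paper's additional PSF-estimation criterion, which favors smoother blur.

```python
import numpy as np

def invertibility_score(code, n=512):
    """Smallest DFT magnitude of the motion PSF produced by a shutter code;
    larger minima mean a better-conditioned deconvolution."""
    psf = np.asarray(code, dtype=float)
    psf = psf / psf.sum()
    return np.abs(np.fft.fft(psf, n)).min()

# Compare a plain open shutter with randomly fluttered codes of the same length
# (illustrative candidates only, not the published optimized code).
rng = np.random.default_rng(0)
length = 52
box = np.ones(length)
candidates = [c for c in (rng.integers(0, 2, length) for _ in range(2000)) if c.sum() > 0]
best = max(candidates, key=invertibility_score)
print(invertibility_score(box), invertibility_score(best))
```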
Optimal Single Image Capture for Motion Deblurring
"... Deblurring images of moving objects captured from a traditional camera is an ill-posed problem due to the loss of high spatial frequencies in the captured images. Recent techniques have attempted to engineer the motion point spread function (PSF) by either making it invertible [16] using coded expos ..."
Abstract - Cited by 15 (1 self)
Deblurring images of moving objects captured with a traditional camera is an ill-posed problem due to the loss of high spatial frequencies in the captured images. Recent techniques have attempted to engineer the motion point spread function (PSF) by either making it invertible [16] using coded exposure, or invariant to motion [13] by moving the camera in a specific fashion. We address the problem of the optimal single image capture strategy for best deblurring performance. We formulate optimal capture as maximizing the signal-to-noise ratio (SNR) of the deconvolved image for a given scene light level. As the exposure time increases, the sensor integrates more light, thereby increasing the SNR of the captured signal. However, for moving objects, a longer exposure time also results in more blur and hence more deconvolution noise. We compare three single image capture strategies: (a) a traditional camera, (b) a coded exposure camera, and (c) motion-invariant photography, as well as the best exposure time for capture, by analyzing the rate of increase of deconvolution noise with exposure time. We analyze which strategy is optimal for known/unknown motion direction and speed and investigate how the performance degrades in other cases. We present real experimental results by simulating the above capture strategies using a high-speed video camera.
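A toy version of this trade-off can be written down directly: signal grows with exposure time, but so does the blur length and hence the deconvolution noise. The model below (shot plus read noise, a box PSF for a traditional camera, and a regularized proxy for deconvolution noise gain) is an assumption-laden sketch rather than the paper's formulation; it only shows how an optimal exposure time emerges from the two competing effects.

```python
import numpy as np

def deconv_noise_gain(psf, n=512, eps=1e-6):
    """Regularized proxy for how much deconvolution amplifies noise:
    average of 1/(|K(f)|^2 + eps) over frequencies."""
    K = np.fft.fft(np.asarray(psf, dtype=float) / np.sum(psf), n)
    return np.mean(1.0 / (np.abs(K) ** 2 + eps))

def deconvolved_snr(exposure, speed, light_level, read_noise=4.0):
    """Toy model: signal grows with exposure, blur length grows with speed*exposure,
    and deconvolution amplifies the (shot + read) noise."""
    blur_len = max(1, int(round(speed * exposure)))
    psf = np.ones(blur_len)                     # traditional (box) exposure
    signal = light_level * exposure
    noise = np.sqrt(signal + read_noise ** 2)
    return signal / (noise * np.sqrt(deconv_noise_gain(psf)))

exposures = np.linspace(0.005, 0.5, 200)
best_T = exposures[np.argmax([deconvolved_snr(T, speed=30.0, light_level=1000.0)
                              for T in exposures])]
```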
Sequences and arrays with perfect periodic correlation
- IEEE Transactions on Aerospace and Electronic Systems
, 1988
"... Properties and methods for synthesizing sequences with perfect periodic autocorrelation functions and good energy efficiency are discussed. The construction is extended to two-dimensional perfect arrays. The construction methods used are based mainly on a search in the frequency domain and on a mult ..."
Abstract - Cited by 12 (0 self)
Properties and methods for synthesizing sequences with perfect periodic autocorrelation functions and good energy efficiency are discussed. The construction is extended to two-dimensional perfect arrays. The construction methods used are based mainly on a search in the frequency domain and on a multiplication theorem for periodic sequences and arrays.
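The frequency-domain viewpoint used for the construction rests on a simple fact: a sequence has perfect periodic autocorrelation exactly when its DFT has constant magnitude. The sketch below checks this property and builds a (generally complex-valued) perfect sequence by inverting a flat-magnitude spectrum with arbitrary phases; the paper's constructions additionally target energy-efficient, implementable sequences and two-dimensional arrays, which this toy does not attempt.

```python
import numpy as np

def periodic_autocorrelation(s):
    """Periodic (circular) autocorrelation via the Wiener-Khinchin relation."""
    S = np.fft.fft(s)
    return np.fft.ifft(S * np.conj(S))

def is_perfect(s, tol=1e-9):
    """Perfect periodic autocorrelation: every off-peak lag is (numerically) zero,
    equivalently the sequence's DFT has constant magnitude."""
    return np.all(np.abs(periodic_autocorrelation(s)[1:]) < tol)

# Frequency-domain construction: invert any flat-magnitude spectrum.
rng = np.random.default_rng(0)
N = 16
spectrum = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, N))
s = np.fft.ifft(spectrum)
assert is_perfect(s)
```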
Reinterpretable Imager: Towards Variable Post-Capture Space, Angle and Time Resolution in Photography
"... We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields in a single snapshot. Our design is based on using a dynamic mask in the aperture and a static mask close ..."
Abstract - Cited by 10 (6 self)
We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields in a single snapshot. Our design is based on using a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along spatial, angular and temporal dimensions and to provide a programmable or variable resolution tradeoff among these dimensions. This allows a user to reinterpret the single captured photo as either a high spatial resolution image, a refocusable image stack or a video for different parts of the scene in post-processing. A lightfield camera or a video camera forces an a priori choice of space-angle-time resolution. We demonstrate a single prototype which provides flexible post-capture abilities not possible using either a single-shot lightfield camera or a multi-frame video camera. We show several novel results, including digital refocusing on objects moving in depth and capturing multiple facial expressions in a single photo.
Compressive Light Field Sensing
, 2012
"... We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured imag ..."
Abstract - Cited by 8 (0 self)
We propose a novel design for light field image acquisition based on compressive sensing principles. By placing a randomly coded mask at the aperture of a camera, incoherent measurements of the light passing through different parts of the lens are encoded in the captured images. Each captured image is a random linear combination of different angular views of the scene. The encoded images are then used to recover the original light field image via a novel Bayesian reconstruction algorithm. Using the principles of compressive sensing, we show that light field images with a large number of angular views can be recovered from only a few acquisitions. Moreover, the proposed acquisition and recovery method provides light field images with high spatial resolution and signal-to-noise ratio, and therefore is not affected by limitations common to existing light field camera designs. We present a prototype camera design based on the proposed framework by modifying a regular digital camera. Finally, we demonstrate the effectiveness of the proposed system using experimental results with both synthetic and real images.
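The acquisition model is easy to state: each coded-aperture exposure is a weighted sum of the scene's angular views, with weights set by the mask. The sketch below implements that forward model and a per-pixel ridge-regression recovery as a linear stand-in for the paper's Bayesian reconstruction; with fewer shots than views, it is a sparsity/compressive-sensing prior (not shown here) that makes recovery from few acquisitions possible. The array shapes and the regularization weight are assumptions of the example.

```python
import numpy as np

def capture(views, weights):
    """One coded-aperture exposure: a weighted sum of the scene's angular views.
    views: [n_views, H, W]; weights: per-view mask transmission, length n_views."""
    return np.tensordot(weights, views, axes=1)

def recover_views(captures, W, lam=1e-3):
    """Per-pixel ridge regression recovering n_views angular samples from a few
    coded captures (a linear stand-in for the paper's Bayesian reconstruction)."""
    Y = np.stack(captures)                      # [n_shots, H, W]
    n_shots, H, Wd = Y.shape
    A = np.asarray(W)                           # [n_shots, n_views]
    G = A.T @ A + lam * np.eye(A.shape[1])
    X = np.linalg.solve(G, A.T @ Y.reshape(n_shots, -1))
    return X.reshape(-1, H, Wd)
```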
A coded aperture for high-resolution nuclear medicine planar imaging with a conventional Anger camera: experimental results
- PoS(FISBH2006)008
"... Abstract—As compared to pinholes and collimators, coded aperture cameras present the potential advantage of increased signal-to-noise ratio (SNR). This advantage can be used to improve the resolution of conventional imagers while still producing clear images. In this paper, we describe a near-field ..."
Abstract - Cited by 7 (0 self)
As compared to pinholes and collimators, coded aperture cameras present the potential advantage of increased signal-to-noise ratio (SNR). This advantage can be used to improve the resolution of conventional imagers while still producing clear images. In this paper, we describe a near-field coded aperture camera with a field of view of 9 × 9 cm and 1.7-mm system resolution, based on the use of an existing gamma camera. Experimental results include planar in-vivo studies of 99mTc-labeled compounds in a mouse. Index Terms—Coded aperture imaging, high resolution, molecular imaging, nuclear medicine imaging, small animal imaging.
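For context on how coded-aperture data of this kind are typically turned back into an image, the sketch below shows classic correlation decoding: the detector records the object convolved with the aperture pattern, so correlating with a decoding array whose cross-correlation with the aperture is a delta recovers the object. The periodic, idealized geometry and the abstract `decoder` argument are assumptions; the actual aperture design and reconstruction used in the paper are not specified in this abstract.

```python
import numpy as np

def correlation_decode(detector_image, decoder):
    """Classic coded-aperture decoding: the detector records (object convolved
    with the aperture), so circularly correlating with a decoding array whose
    cross-correlation with the aperture is a delta returns an object estimate."""
    D = np.fft.fft2(detector_image)
    G = np.fft.fft2(decoder, s=detector_image.shape)
    return np.real(np.fft.ifft2(D * np.conj(G)))
```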