Results 1 - 10 of 31
Coded Exposure Deblurring: Optimized Codes for PSF Estimation and Invertibility
"... We consider the problem of single image object motion deblurring from a static camera. It is well-known that deblurring of moving objects using a traditional camera is illposed, due to the loss of high spatial frequencies in the captured blurred image. A coded exposure camera [17] modulates the inte ..."
Cited by 22 (2 self)
We consider the problem of single image object motion deblurring from a static camera. It is well known that deblurring of moving objects using a traditional camera is ill-posed, due to the loss of high spatial frequencies in the captured blurred image. A coded exposure camera [17] modulates the integration pattern of light by opening and closing the shutter within the exposure time using a binary code. The code is chosen to make the resulting point spread function (PSF) invertible, for best deconvolution performance. However, for a successful deconvolution algorithm, PSF estimation is as important as PSF invertibility. We show that PSF estimation is easier if the resulting motion blur is smooth, and that the optimal code for PSF invertibility can worsen PSF estimation, since it leads to non-smooth blur. We show that both criteria, PSF invertibility and PSF estimation, can be simultaneously met, albeit with a slight increase in the deconvolution noise. We propose design rules for a code to have good PSF estimation capability and outline two search criteria for finding the optimal code of a given length. We present theoretical analysis comparing the performance of the proposed code with the code optimized solely for PSF invertibility. We also show how to easily implement coded exposure on a consumer-grade machine vision camera with no additional hardware. Real experimental results demonstrate the effectiveness of the proposed codes for motion deblurring.
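The invertibility criterion can be made concrete: a flat (box) shutter yields a PSF whose frequency response has exact zeros, while a fluttered binary code can be chosen to keep the response bounded away from zero at every frequency. A minimal sketch of such a code search, using a plain random search rather than the paper's optimized search criteria:

```python
import numpy as np

def min_mtf(code, n_fft=256):
    """Smallest magnitude of the blur PSF's frequency response.

    The PSF of a fluttered-shutter motion blur is (up to scale) the
    binary open/close code itself; deconvolution is well-posed only
    if this minimum stays well above zero.
    """
    psf = np.asarray(code, dtype=float)
    psf = psf / psf.sum()
    return np.abs(np.fft.rfft(psf, n_fft)).min()

def search_code(length=16, trials=2000, seed=0):
    """Illustrative random search for the code maximizing min MTF."""
    rng = np.random.default_rng(seed)
    best_code, best_score = None, -1.0
    for _ in range(trials):
        c = rng.integers(0, 2, size=length)
        if c.sum() == 0:
            continue
        score = min_mtf(c)
        if score > best_score:
            best_code, best_score = c, score
    return best_code, best_score

box = np.ones(16)            # traditional camera: flat PSF (sinc spectrum)
code, score = search_code()  # searched code keeps all frequencies
```

The box PSF's spectrum has exact nulls, so `min_mtf(box)` is numerically zero, while the searched code's minimum is strictly positive; the paper's point is that this criterion alone can produce codes whose non-smooth blur hurts PSF *estimation*.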
Active Polarization Descattering
"... Imaging in scattering media such as fog and water is important but challenging. Images suffer from poor visibility due to backscattering and signal attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome. On the oth ..."
Cited by 21 (8 self)
Imaging in scattering media such as fog and water is important but challenging. Images suffer from poor visibility due to backscattering and signal attenuation. Most prior methods for scene recovery use active illumination scanners (structured and gated), which can be slow and cumbersome. On the other hand, natural illumination is inapplicable to dark environments. The current paper addresses the need for a non-scanning recovery method that uses active scene irradiance. We study the formation of images under widefield artificial illumination. Based on the formation model, the paper presents an approach for recovering the object signal. It also yields rough information about the 3D scene structure. The approach can work with compact, simple hardware, having active widefield, polychromatic polarized illumination. The camera is fitted with a polarization analyzer. Two frames of the scene are taken in quick succession, with different states of the analyzer or light-source polarizer. A recovery algorithm follows the acquisition. It allows both the backscatter and the object reflection to be partially polarized. It thus unifies and generalizes prior polarization-based methods, which had assumed exclusive polarization of either of these components. The approach is limited to an effective range, due to image noise and falloff of the widefield illumination. Thus, these limits and the noise sensitivity are analyzed. The approach particularly applies underwater. We therefore use the approach to demonstrate recovery of object signals and significant visibility enhancement in underwater field experiments.
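The two-frame recovery described above reduces to a 2x2 linear system per pixel once the degrees of polarization of the object light and the backscatter are known. A synthetic sketch under that model (the polarization degrees `p_obj` and `p_scat` would be calibrated in practice; the values below are made up):

```python
import numpy as np

def descatter(i_max, i_min, p_obj, p_scat):
    """Recover object signal S and backscatter B from two frames.

    Assumed per-pixel model:
        i_max = S*(1 + p_obj)/2 + B*(1 + p_scat)/2
        i_min = S*(1 - p_obj)/2 + B*(1 - p_scat)/2
    where p_obj, p_scat are the degrees of polarization of the object
    light and the backscatter (both allowed to be partial).
    """
    total = i_max + i_min   # = S + B
    diff = i_max - i_min    # = S*p_obj + B*p_scat
    signal = (diff - p_scat * total) / (p_obj - p_scat)
    backscatter = total - signal
    return signal, backscatter

# Synthetic check with assumed polarization degrees (backscatter
# typically more polarized than the object light).
rng = np.random.default_rng(1)
S = rng.uniform(0.0, 1.0, size=(4, 4))   # true object signal
B = rng.uniform(0.0, 0.5, size=(4, 4))   # true backscatter
p_obj, p_scat = 0.2, 0.8
i_max = S * (1 + p_obj) / 2 + B * (1 + p_scat) / 2
i_min = S * (1 - p_obj) / 2 + B * (1 - p_scat) / 2
S_hat, B_hat = descatter(i_max, i_min, p_obj, p_scat)
```

Setting `p_obj = 0` recovers the earlier special case in which only the backscatter is polarized, which is the sense in which the approach unifies the prior methods.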
Optimal Coded Sampling for Temporal Super-Resolution
"... Conventional low frame rate cameras result in blur and/or aliasing in images while capturing fast dynamic events. Multiple low speed cameras have been used previously with staggered sampling to increase the temporal resolution. However, previous approaches are inefficient: they either use small inte ..."
Cited by 18 (7 self)
Conventional low frame rate cameras result in blur and/or aliasing in images while capturing fast dynamic events. Multiple low speed cameras have been used previously with staggered sampling to increase the temporal resolution. However, previous approaches are inefficient: they either use a small integration time for each camera, which provides no light benefit, or use a large integration time in a way that requires solving a large ill-posed linear system. We propose coded sampling that addresses these issues: using N cameras it allows N times temporal super-resolution while gathering ∼N/2 times more light compared to an equivalent high speed camera. In addition, it results in a well-posed linear system which can be solved independently for each frame, avoiding reconstruction artifacts and significantly reducing the computational time and memory. Our proposed sampling uses an optimal multiplexing code, considering additive Gaussian noise, to achieve the maximum possible SNR in the recovered video. We show how to implement coded sampling on off-the-shelf machine vision cameras. We also propose a new class of invertible codes that allow continuous blur in captured frames, leading to an easier hardware implementation.
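The recovery step can be sketched with a small invertible binary code: each of N cameras integrates a different cyclic shift of the code over N sub-frame time slots, and the sub-frames are recovered by one small linear solve per pixel. The specific m-sequence code below is illustrative, not the paper's optimal multiplexing code (and the paper's codes additionally decouple the system frame by frame):

```python
import numpy as np

N = 7
# Circulant binary code from a length-7 m-sequence: row k is camera
# k's open/closed pattern over the N sub-frame time slots.
base = np.array([1, 1, 1, 0, 1, 0, 0])
S = np.array([np.roll(base, k) for k in range(N)], dtype=float)

assert abs(np.linalg.det(S)) > 1e-9   # invertible -> well-posed

# Each camera integrates base.sum() = 4 slots of light instead of 1,
# i.e. roughly N/2 times more light than an equivalent high-speed
# camera exposing one slot per frame.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(N, 8))    # 7 true sub-frames, 8 pixels
y = S @ x                             # captured low-speed frames
x_hat = np.linalg.solve(S, y)         # one small NxN solve per pixel
```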
Optimal Single Image Capture for Motion Deblurring
"... Deblurring images of moving objects captured from a traditional camera is an ill-posed problem due to the loss of high spatial frequencies in the captured images. Recent techniques have attempted to engineer the motion point spread function (PSF) by either making it invertible [16] using coded expos ..."
Cited by 15 (1 self)
Deblurring images of moving objects captured from a traditional camera is an ill-posed problem due to the loss of high spatial frequencies in the captured images. Recent techniques have attempted to engineer the motion point spread function (PSF) by either making it invertible [16] using coded exposure, or invariant to motion [13] by moving the camera in a specific fashion. We address the problem of the optimal single image capture strategy for best deblurring performance. We formulate the problem of optimal capture as maximizing the signal-to-noise ratio (SNR) of the deconvolved image given a scene light level. As the exposure time increases, the sensor integrates more light, thereby increasing the SNR of the captured signal. However, for moving objects, a larger exposure time also results in more blur and hence more deconvolution noise. We compare three single-image capture strategies, (a) traditional camera, (b) coded exposure camera, and (c) motion-invariant photography, and determine the best exposure time for capture by analyzing the rate of increase of deconvolution noise with exposure time. We analyze which strategy is optimal for known/unknown motion direction and speed and investigate how the performance degrades in other cases. We present real experimental results by simulating the above capture strategies using a high speed video camera.
Optimal multiplexed sensing: bounds, conditions and a graph theory link
- Optics Express
"... Abstract: Measuring an array of variables is central to many systems, including imagers (array of pixels), spectrometers (array of spectral bands) and lighting systems. Each of the measurements, however, is prone to noise and potential sensor saturation. It is recognized by a growing number of meth ..."
Cited by 15 (2 self)
Measuring an array of variables is central to many systems, including imagers (array of pixels), spectrometers (array of spectral bands) and lighting systems. Each of the measurements, however, is prone to noise and potential sensor saturation. It is recognized by a growing number of methods that such problems can be reduced by multiplexing the measured variables. In each measurement, multiple variables (radiation channels) are mixed (multiplexed) by a code. Then, after data acquisition, the variables are decoupled computationally in post-processing. Potential benefits of multiplexing include increased signal-to-noise ratio and accommodation of scene dynamic range. However, existing multiplexing schemes, including Hadamard-based codes, are inhibited by fundamental limits set by sensor saturation and Poisson-distributed photon noise, which is scene dependent. There is thus a need to find optimal codes that best increase the signal-to-noise ratio while accounting for these effects. Hence, this paper pursues optimal measurements that avoid saturation and account for the signal dependency of noise. The paper derives lower bounds on the mean square error of demultiplexed variables. This is useful for assessing the optimality of numerically-searched multiplexing codes, thus expediting the numerical search. Furthermore, the paper states the necessary conditions for attaining the lower bounds by a general code. We show that graph theory can be harnessed for finding such ideal codes, by the use of strongly regular graphs.
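The Poisson limitation described above can be seen in closed form for a small code. Under signal-independent noise, multiplexing with a Hadamard-type S-matrix reduces the demultiplexed variance; under Poisson noise the measurement variance grows with the multiplexed (brighter) signal, and the same code loses. A sketch with an order-7 S-matrix (illustrative, not one of the paper's graph-derived codes):

```python
import numpy as np

# Order-7 S-matrix: circulant of a (7,4,2)-difference-set row.
S = np.array([np.roll([1, 1, 1, 0, 1, 0, 0], k) for k in range(7)],
             dtype=float)
C = np.linalg.inv(S)              # demultiplexing matrix
amp = (C ** 2).sum(axis=1)        # per-variable noise amplification
# For this S-matrix, amp = 7/16 per variable: demultiplexing
# *attenuates* signal-independent noise.

x = np.full(7, 100.0)             # true intensities (photon units)

# Additive noise with variance sigma2 per measurement:
sigma2 = 100.0
var_direct_add = np.full(7, sigma2)      # measure each x_i directly
var_mux_add = amp * sigma2               # ~0.44*sigma2 -> a gain

# Poisson noise: each measurement's variance equals its (brighter)
# mean, (S @ x)_i, so the amplification acts on a larger variance.
var_direct_poi = x                       # Var(Poisson(x_i)) = x_i
var_mux_poi = (C ** 2) @ (S @ x)         # ~1.75*x -> a loss
```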
Multiplexed Illumination for Scene Recovery in the Presence of Global Illumination
- In IEEE International Conference on Computer Vision (ICCV), 2011
"... Global illumination effects such as inter-reflections and subsurface scattering result in systematic, and often significant errors in scene recovery using active illumination. Recently, it was shown that the direct and global components could be separated efficiently for a scene illuminated with a s ..."
Cited by 13 (2 self)
Global illumination effects such as inter-reflections and subsurface scattering result in systematic, and often significant, errors in scene recovery using active illumination. Recently, it was shown that the direct and global components could be separated efficiently for a scene illuminated with a single light source. In this paper, we study the problem of direct-global separation for multiple light sources. We derive a theoretical lower bound for the number of required images, and propose a multiplexed illumination scheme which achieves this lower bound. We analyze the signal-to-noise ratio (SNR) characteristics of the proposed illumination multiplexing method in the context of direct-global separation. We apply our method to several scene recovery techniques requiring multiple light sources, including shape from shading, structured light 3D scanning, photometric stereo, and reflectance estimation. Both simulation and experimental results show that the proposed method can accurately recover scene information with fewer images compared to sequentially separating direct-global components for each light source.
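For context, the single-source separation the abstract builds on works as follows: illuminate with a high-frequency pattern in which each scene point is lit in one image and unlit in the complementary image; a point receives its full direct component only when lit, but approximately half the global component in both images. A toy sketch with that half-global approximation as the stated assumption:

```python
import numpy as np

def separate(i_lit, i_unlit):
    """Single-source direct/global separation from two images.

    Assumes a high-frequency pattern with half the points lit, so a
    pixel sees direct + global/2 when lit and global/2 when unlit.
    """
    direct = i_lit - i_unlit
    global_ = 2.0 * i_unlit
    return direct, global_

rng = np.random.default_rng(2)
d = rng.uniform(0, 1, (4, 4))        # true direct component
g = rng.uniform(0, 1, (4, 4))        # true global component
i_lit, i_unlit = d + g / 2, g / 2    # the two captured images
d_hat, g_hat = separate(i_lit, i_unlit)
```

With K sources, repeating this sequentially needs images proportional to K; the paper's contribution is a multiplexed illumination scheme that meets a lower bound on the image count instead.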
Reinterpretable Imager: Towards Variable Post-Capture Space, Angle and Time Resolution in Photography
"... We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields in a single snapshot. Our design is based on using a dynamic mask in the aperture and a static mask close ..."
Cited by 10 (6 self)
We describe a novel multiplexing approach to achieve tradeoffs in space, angle and time resolution in photography. We explore the problem of mapping useful subsets of time-varying 4D lightfields in a single snapshot. Our design is based on using a dynamic mask in the aperture and a static mask close to the sensor. The key idea is to exploit scene-specific redundancy along the spatial, angular and temporal dimensions and to provide a programmable or variable resolution tradeoff among these dimensions. This allows a user to reinterpret the single captured photo as either a high spatial resolution image, a refocusable image stack or a video for different parts of the scene in post-processing. A lightfield camera or a video camera forces an a priori choice of space-angle-time resolution. We demonstrate a single prototype which provides flexible post-capture abilities not possible using either a single-shot lightfield camera or a multi-frame video camera. We show several novel results including digital refocusing on objects moving in depth and capturing multiple facial expressions in a single photo.
When Does Computational Imaging Improve Performance?
"... Abstract—A number of computational imaging techniques have been introduced to improve image quality by increasing light throughput. These techniques use optical coding to measure a stronger signal level. However, the performance of these techniques is limited by the decoding step, which amplifies no ..."
Cited by 9 (4 self)
A number of computational imaging techniques have been introduced to improve image quality by increasing light throughput. These techniques use optical coding to measure a stronger signal level. However, the performance of these techniques is limited by the decoding step, which amplifies noise. While it is well understood that optical coding can increase performance at low light levels, little is known about the quantitative performance advantage of computational imaging in general settings. In this paper, we derive the performance bounds for various computational imaging techniques. We then discuss the implications of these bounds for several real-world scenarios (illumination conditions, scene properties and sensor noise characteristics). Our results show that computational imaging techniques do not provide a significant performance advantage when imaging with illumination brighter than typical daylight. These results can be readily used by practitioners to design the most suitable imaging system given the application at hand.
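The light-level dependence can be reproduced for a concrete code. Taking an order-7 Hadamard S-matrix as an example (demultiplexing amplification 7/16 per variable, each multiplexed measurement summing 4 signals), the demultiplexed variance under mixed read + photon noise is 0.4375*(4x + sigma_r^2) against the direct x + sigma_r^2, so multiplexing wins only below x = 0.75*sigma_r^2 photons. A sketch; the specific code and noise numbers are illustrative, not the paper's bounds:

```python
def snr_gain(photons, read_var, k=4.0, amp=7.0 / 16.0):
    """Variance ratio direct/multiplexed for an order-7 S-matrix.

    photons:  mean signal x per variable (Poisson variance = x)
    read_var: signal-independent read-noise variance sigma_r^2
    k:        row weight of the code (each measurement sums k signals)
    amp:      demultiplexing noise amplification sum_j C_ij^2
    Gain > 1 means multiplexing helps.
    """
    var_direct = photons + read_var
    var_mux = amp * (k * photons + read_var)
    return var_direct / var_mux

read_var = 25.0                      # sigma_r = 5 electrons (assumed)
gain_dark = snr_gain(0.1, read_var)      # photon-starved: helps
gain_bright = snr_gain(10000.0, read_var)  # bright scene: hurts
crossover = 0.75 * read_var          # gain == 1 at x = 0.75*sigma_r^2
```

The crossover sits at a few tens of photons for typical read noise, which is the quantitative sense in which coding stops paying off well below daylight signal levels.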
A Framework for the Analysis of Computational Imaging Systems with Practical Applications
"... Abstract—Over the last decade, a number of Computational Imag-ing (CI) systems have been proposed for tasks such as motion deblurring, defocus deblurring and multispectral imaging. These techniques increase the amount of light reaching the sensor via multiplexing and then undo the deleterious effect ..."
Cited by 4 (1 self)
Over the last decade, a number of Computational Imaging (CI) systems have been proposed for tasks such as motion deblurring, defocus deblurring and multispectral imaging. These techniques increase the amount of light reaching the sensor via multiplexing and then undo the deleterious effects of multiplexing by appropriate reconstruction algorithms. Given the widespread appeal and the considerable enthusiasm generated by these techniques, a detailed performance analysis of the benefits conferred by this approach is important. Unfortunately, a detailed analysis of CI has proven to be a challenging problem because performance depends equally on three components: (1) the optical multiplexing, (2) the noise characteristics of the sensor, and (3) the reconstruction algorithm, which typically uses signal priors. A few recent papers [10], [9], [42] have performed analysis taking multiplexing and noise characteristics into account.
Low-Rate Poisson Intensity Estimation Using Multiplexed Imaging
"... Multiplexed imaging is a powerful mechanism for achieving high signal-to-noise ratio (SNR) in the presence of signal-independent additive noise. However, for imaging in presence of only signal-dependent shot noise, multiplexing has been shown to significantly degrade SNR. Hence, multiplexing to incr ..."
Cited by 3 (2 self)
Multiplexed imaging is a powerful mechanism for achieving high signal-to-noise ratio (SNR) in the presence of signal-independent additive noise. However, for imaging in the presence of only signal-dependent shot noise, multiplexing has been shown to significantly degrade SNR. Hence, multiplexing to increase SNR in the presence of Poisson noise is normally thought to be infeasible. In this paper, we present an exception to this view by demonstrating a multiplexing advantage when the scene parameters are non-negative valued and are observed through a low-rate Poisson channel.
Index Terms—Multiplexed illumination and sensing, image denoising, shot noise, photon-limited imaging, Poisson noise, signal-dependent noise, convex optimization.
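The conventional view this paper argues against can be checked directly by Monte Carlo: at moderate Poisson rates, multiplexing followed by plain linear demultiplexing degrades the mean square error. The paper's exception relies on very low rates and a nonnegativity-exploiting estimator (via convex optimization), which this baseline sketch does not reproduce:

```python
import numpy as np

rng = np.random.default_rng(3)
# Order-7 S-matrix multiplexing code (illustrative choice).
S = np.array([np.roll([1, 1, 1, 0, 1, 0, 0], k) for k in range(7)],
             dtype=float)
x = np.full(7, 50.0)                  # moderate Poisson rates
trials = 5000

# Direct measurement: y_i ~ Poisson(x_i).
y_dir = rng.poisson(x, size=(trials, 7))
mse_direct = ((y_dir - x) ** 2).mean()

# Multiplexed: y ~ Poisson(S x), then plain linear demultiplexing.
y_mux = rng.poisson(S @ x, size=(trials, 7))
x_hat = np.linalg.solve(S, y_mux.T).T
mse_mux = ((x_hat - x) ** 2).mean()
# Here mse_mux > mse_direct: the shot noise of the brighter
# multiplexed measurements outweighs the multiplexing gain.
```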