Results 1 - 10 of 21
How Do People Edit Light Fields?
"... Figure 1: Example results of light fields edited by different users. Top: A synthetic light field (vase), with ground truth depth information. Bottom: Two real light fields (toys and motorbike) captured with the Lytro camera. In this work we evaluate the benefits of different light field interaction ..."
Abstract - Cited by 2 (1 self)
Figure 1 caption: Example results of light fields edited by different users. Top: a synthetic light field (vase) with ground-truth depth information. Bottom: two real light fields (toys and motorbike) captured with the Lytro camera. In this work we evaluate the benefits of different light field interaction paradigms and tools, and draw conclusions to help guide future interface designs for light field editing.

We present a thorough study to evaluate different light field editing interfaces, tools and workflows from a user perspective. This is of special relevance given the multidimensional nature of light fields, which may make common image editing tasks complex in light field space. We additionally investigate the potential benefits of using depth information when editing, and the limitations imposed by imperfect depth reconstruction using current techniques. We perform two different experiments, collecting both objective and subjective data from a varied set of editing tasks of increasing complexity based on local point-and-click tools. In the first experiment, we rely on perfect depth from synthetic light fields and focus on simple edits. This allows us to gain basic insight into light field editing, and to design a more advanced editing interface. This interface is then used in the second experiment, employing real light fields with imperfect reconstructed depth and covering more advanced editing tasks. Our study shows that users can edit light fields with our tested interface and tools, even in the presence of imperfect depth. They follow different workflows depending on the task at hand, mostly relying on a combination of different depth cues. Last, we confirm our findings by asking a set of artists to freely edit both real and synthetic light fields.
FlatCam: Thin, Bare-Sensor Cameras using Coded Aperture and Computation, 2015
"... FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light f ..."
Abstract - Cited by 1 (0 self)
FlatCam is a thin form-factor lensless camera that consists of a coded mask placed on top of a bare, conventional sensor array. Unlike a traditional, lens-based camera, where an image of the scene is directly recorded on the sensor pixels, each pixel in FlatCam records a linear combination of light from multiple scene elements. A computational algorithm is then used to demultiplex the recorded measurements and reconstruct an image of the scene. FlatCam is an instance of a coded aperture imaging system; however, unlike the vast majority of related work, we place the coded mask extremely close to the image sensor, which enables a thin system. We employ a separable mask to ensure that both calibration and image reconstruction are scalable in terms of memory requirements and computational complexity. We demonstrate the potential of the FlatCam design using two prototypes: one at visible wavelengths and one at infrared wavelengths.
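The separable mask is what keeps the computation tractable: the sensor image factors as Y = Phi_L X Phi_R^T, so calibration stores two small matrices rather than one enormous one, and reconstruction reduces to two small SVDs. A minimal sketch under that assumed model follows; the Tikhonov solver, matrix names, sizes, and random binary masks are illustrative, not the authors' calibrated setup.

```python
# Sketch of separable-mask reconstruction in the spirit of FlatCam.
# Assumed model: Y = Phi_L @ X @ Phi_R.T + noise (not the authors' code).
import numpy as np

def reconstruct(Y, Phi_L, Phi_R, lam=1e-2):
    """Recover scene X from sensor image Y, assuming Y = Phi_L X Phi_R^T."""
    UL, sL, VLt = np.linalg.svd(Phi_L, full_matrices=False)
    UR, sR, VRt = np.linalg.svd(Phi_R, full_matrices=False)
    # Rotate the measurements into the SVD bases of both mask matrices.
    Y_tilde = UL.T @ Y @ UR
    # Elementwise Tikhonov (ridge) filter: separability turns a huge
    # n^2 x n^2 linear solve into arithmetic on n x n arrays.
    S = np.outer(sL, sR)
    X_tilde = (S * Y_tilde) / (S**2 + lam)
    return VLt.T @ X_tilde @ VRt

# Toy usage: random binary separable masks, synthetic scene, noisy capture.
rng = np.random.default_rng(0)
Phi_L = rng.choice([0.0, 1.0], (256, 128))
Phi_R = rng.choice([0.0, 1.0], (256, 128))
X = rng.random((128, 128))
Y = Phi_L @ X @ Phi_R.T + 0.01 * rng.standard_normal((256, 256))
X_hat = reconstruct(Y, Phi_L, Phi_R)
```

The elementwise filter in the rotated bases is the payoff of separability: memory and compute scale with the mask matrices, not with the full sensing operator.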
A Light Transport Framework for Lenslet Light Field Cameras
"... Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental tradeoff between spatial and angular reso-lution, but there has been limited understanding of this tradeoff t ..."
Abstract - Cited by 1 (0 self)
Light field cameras capture full spatio-angular information of the light field, and enable many novel photographic and scientific applications. It is often stated that there is a fundamental tradeoff between spatial and angular resolution, but there has been limited theoretical or numerical understanding of this tradeoff. Moreover, it is very difficult to evaluate the design of a light field camera, because a new design is usually reported together with its prototype and rendering algorithm, all of which affect resolution. In this paper, we develop a light transport framework for understanding the fundamental limits of light field camera resolution. We first derive the pre-filtering model of lenslet-based light field cameras. The main novelty of our model is in considering the full space-angle sensitivity profile of the photosensor: in particular, real pixels have non-uniform angular sensitivity, responding more to light along the optical axis than at grazing angles. We show that the full sensor profile plays an important role in defining the performance of a light field camera. The proposed method can model all existing lenslet-based light field cameras and allows us to compare them in a unified way in simulation, independent of the practical differences between particular prototypes. We further extend our framework to analyze the performance of two rendering methods: the simple projection-based method and the inverse light transport process. We validate our framework with both flatland simulation and real data from the Lytro light field camera.
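As a toy illustration of why the angular profile matters, the flatland sketch below integrates the same angular radiance against a uniform pixel and against a cosine-power falloff pixel; the exponent, acceptance cone, and radiance pattern are assumptions for illustration, not the paper's measured sensor profile.

```python
# Flatland toy: a pixel's angular sensitivity acts as an angular pre-filter.
import numpy as np

def angular_sensitivity(thetas, power=4.0):
    # Assumed non-uniform profile: strongest on-axis, weak at grazing angles.
    return np.cos(thetas) ** power

# Discretize the lenslet's acceptance cone (+/- 30 degrees, flatland).
thetas = np.linspace(-np.pi / 6, np.pi / 6, 61)
dtheta = thetas[1] - thetas[0]
L = 1.5 + np.cos(3.0 * thetas)   # toy angular radiance arriving at one pixel

# A pixel reading is radiance integrated against its sensitivity profile,
# so two pixels with different profiles blur angle differently.
ideal = np.sum(L) * dtheta                                    # uniform pixel
real = np.sum(L * angular_sensitivity(thetas)) * dtheta       # falloff pixel
print(f"ideal pixel: {ideal:.4f}, cosine-falloff pixel: {real:.4f}")
```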
Accurate Depth Map Estimation from a Lenslet Light Field Camera
"... This paper introduces an algorithm that accurately esti-mates depth maps using a lenslet light field camera. The proposed algorithm estimates the multi-view stereo cor-respondences with sub-pixel accuracy using the cost vol-ume. The foundation for constructing accurate costs is threefold. First, the ..."
Abstract
This paper introduces an algorithm that accurately estimates depth maps using a lenslet light field camera. The proposed algorithm estimates the multi-view stereo correspondences with sub-pixel accuracy using the cost volume. The foundation for constructing accurate costs is threefold. First, the sub-aperture images are displaced using the phase shift theorem. Second, the gradient costs are adaptively aggregated using the angular coordinates of the light field. Third, the feature correspondences between the sub-aperture images are used as additional constraints. With the cost volume, the multi-label optimization propagates and corrects the depth map in the weak-texture regions. Finally, the local depth map is iteratively refined by fitting a local quadratic function to estimate a non-discrete depth map. Because micro-lens images contain unexpected distortions, a method is also proposed that corrects this error. The effectiveness of the proposed algorithm is demonstrated on challenging real-world examples, including comparisons with advanced depth estimation algorithms.
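The first ingredient translates directly into code: by the phase shift theorem, displacing an image by a sub-pixel amount (dx, dy) is multiplication by a linear phase ramp in the Fourier domain. A minimal sketch, assuming periodic boundaries (the paper's exact windowing is not reproduced here):

```python
# Sub-pixel image shift via the phase shift theorem:
# shifting f(x, y) by (dx, dy) multiplies its spectrum by a phase ramp.
import numpy as np

def subpixel_shift(img, dx, dy):
    """Shift a 2D image by (dx, dy) pixels using a Fourier phase ramp."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]   # cycles per pixel, vertical
    fx = np.fft.fftfreq(w)[None, :]   # cycles per pixel, horizontal
    ramp = np.exp(-2j * np.pi * (fx * dx + fy * dy))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * ramp))

# Sanity check: a whole-pixel shift matches a circular roll.
img = np.random.default_rng(0).random((64, 64))
assert np.allclose(subpixel_shift(img, 1.0, 0.0),
                   np.roll(img, 1, axis=1), atol=1e-8)
```

Because the shift amount is continuous, the same routine displaces sub-aperture images by fractional disparities when building the cost volume.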
A Switchable Light Field Camera Architecture with Angle Sensitive Pixels and Dictionary-based Sparse Coding
"... We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and ap-plied mathematics. Through the co-design of a sensor that comprises tailored, Angle Sensitive Pixels and advanced re-construction algorithms, we show that—contrary to light field ca ..."
Abstract
We propose a flexible light field camera architecture that is at the convergence of optics, sensor electronics, and applied mathematics. Through the co-design of a sensor that comprises tailored Angle Sensitive Pixels and advanced reconstruction algorithms, we show that, contrary to light field cameras today, our system can use the same measurements captured in a single sensor image to recover either a high-resolution 2D image, a low-resolution 4D light field using fast, linear processing, or a high-resolution light field using sparsity-constrained optimization.
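The sparsity-constrained branch is a dictionary-coded inverse problem of the form min_a ||y - A D a||_2^2 + lam ||a||_1. The sketch below solves it with ISTA; the sensing matrix A, dictionary D, and all sizes are random placeholders standing in for the calibrated ASP optics and a learned light field dictionary, not the authors' reconstruction code.

```python
# Dictionary-based sparse recovery with ISTA (illustrative stand-in).
import numpy as np

def ista(y, A, D, lam=0.05, iters=200):
    """Solve min_a ||y - A D a||^2 + lam ||a||_1; return the signal D a."""
    M = A @ D                                  # combined sensing + dictionary
    step = 1.0 / np.linalg.norm(M, 2) ** 2     # 1/L for the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = M.T @ (M @ a - y)
        z = a - step * grad
        a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return D @ a

# Toy usage: one undersampled sensor image, a sparse code in a random dictionary.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))             # few measurements, many unknowns
D = rng.standard_normal((256, 512))
D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
x_true = D @ (rng.standard_normal(512) * (rng.random(512) < 0.03))
x_hat = ista(A @ x_true, A, D)
```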
NARROW FIELD-OF-VIEW VISUAL ODOMETRY BASED ON A FOCUSED PLENOPTIC CAMERA
"... In this article we present a new method for visual odometry based on a focused plenoptic camera. This method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm and the one received from a focused plenoptic cam-era. Our algorithm uses the depth data and ..."
Abstract
In this article we present a new method for visual odometry based on a focused plenoptic camera. This method fuses the depth data gained by a monocular Simultaneous Localization and Mapping (SLAM) algorithm with that received from a focused plenoptic camera. Our algorithm uses the depth data and the totally focused images supplied by the plenoptic camera to run a real-time semi-dense direct SLAM algorithm. Based on this combined approach, the scale ambiguity of a monocular SLAM system can be overcome. Furthermore, the additional light-field information greatly improves the tracking capabilities of the algorithm. Thus, visual odometry is possible even for narrow field-of-view (FOV) cameras. We show that tracking is not the only thing that profits from the additional light-field information: by accumulating the depth information over multiple tracked images, the depth accuracy of the focused plenoptic camera can also be greatly improved. This novel approach reduces the depth error by one order of magnitude compared to that obtained from a single light-field image.
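One standard way to realize the depth accumulation the abstract describes is a per-pixel inverse-variance (Kalman-style) update; the sketch below assumes that scheme with toy numbers and is not the authors' exact filter.

```python
# Recursive per-pixel fusion of depth observations across tracked frames.
import numpy as np

def fuse_depth(mu, var, z, z_var):
    """Fuse a new inverse-depth observation z (variance z_var) into (mu, var)."""
    w = var / (var + z_var)          # Kalman gain for a scalar state
    return mu + w * (z - mu), (1.0 - w) * var

# Toy single-pixel example: each warped plenoptic depth observation shrinks
# the uncertainty roughly as var/N after N consistent observations.
rng = np.random.default_rng(0)
true_inv_depth = 0.5                 # 1/m
mu, var = 0.4, 1.0                   # rough prior
for _ in range(20):
    z = true_inv_depth + 0.05 * rng.standard_normal()
    mu, var = fuse_depth(mu, var, z, 0.05 ** 2)
print(f"fused inverse depth {mu:.3f}, std {np.sqrt(var):.4f}")
```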
CamSwarm: Instantaneous Smartphone Camera Arrays for Collaborative Photography
"... ABSTRACT Camera arrays (CamArrays) are widely used in commercial filming projects for achieving special visual effects such as bullet time effect, but are very expensive to set up. We propose CamSwarm, a low-cost and lightweight alternative to professional CamArrays for consumer applications. It al ..."
Abstract
Camera arrays (CamArrays) are widely used in commercial filming projects for achieving special visual effects such as the bullet-time effect, but are very expensive to set up. We propose CamSwarm, a low-cost and lightweight alternative to professional CamArrays for consumer applications. It allows the construction of a collaborative photography platform from multiple mobile devices anywhere and anytime, enabling new capturing and editing experiences that a single camera cannot provide. Our system allows easy team formation; uses real-time visualization and feedback to guide camera positioning; provides a mechanism for synchronized capturing; and finally allows the user to efficiently browse and edit the captured imagery. Our user study suggests that CamSwarm is easy to use, that the provided real-time guidance is helpful, and that the full system achieves high-quality results promising for non-professional use.
ONLINE VIEW SAMPLING FOR ESTIMATING DEPTH FROM LIGHT FIELDS
"... Geometric information such as depth obtained from light fields finds more applications recently. Where and how to sample images to populate a light field is an important problem to maximize the usability of information gathered for depth re-construction. We propose a simple analysis model for view s ..."
Abstract
Geometric information, such as depth obtained from light fields, is finding more and more applications. Where and how to sample images to populate a light field is an important problem for maximizing the usability of the information gathered for depth reconstruction. We propose a simple analysis model for view sampling and an adaptive, online sampling algorithm tailored to light field depth reconstruction. Our model is based on the trade-off between visibility and depth resolvability for varying sampling locations, and seeks the optimal locations that best balance the two conflicting criteria.
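A minimal sketch of what such an online selection loop could look like, assuming simple stand-in models for the two criteria (visibility decaying with baseline because of occlusion, depth resolvability growing with baseline through disparity) and a mixing weight alpha; none of these terms are the authors' actual model.

```python
# Greedy online view selection balancing two conflicting criteria.
import numpy as np

def visibility(baseline, occlusion_scale=0.5):
    # Assumed model: wider baselines see more occluded pixels, so
    # visibility decays with baseline.
    return np.exp(-baseline / occlusion_scale)

def depth_resolvability(baseline, focal=1.0, depth=2.0):
    # Assumed model: disparity, hence depth discrimination, grows
    # linearly with baseline.
    return focal * baseline / depth

def next_view(candidates, alpha=0.5):
    """Pick the candidate baseline that best balances the two criteria."""
    scores = [alpha * visibility(b) + (1 - alpha) * depth_resolvability(b)
              for b in candidates]
    return candidates[int(np.argmax(scores))]

# Online loop: propose candidate camera offsets, capture at the best one.
candidates = np.linspace(0.05, 1.0, 20)
print("next baseline to sample:", next_view(candidates))
```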