Results 1 - 10 of 28
Upsampling Range Data in Dynamic Environments
"... We present a flexible method for fusing information from optical and range sensors based on an accelerated highdimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor sys ..."
Abstract
-
Cited by 34 (0 self)
- Add to MetaCart
(Show Context)
We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.
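The abstract does not spell out the accelerated high-dimensional filter itself; as a rough illustration of the underlying idea of color-guided depth interpolation with per-pixel confidence, here is a minimal (and deliberately unoptimized) joint-bilateral-style upsampling sketch in NumPy. All names and parameters are assumptions for illustration, not the authors' implementation; their method replaces loops like these with an accelerated filtering scheme.

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, rgb_hr, scale, sigma_s=4.0, sigma_r=0.1):
    """Upsample a low-res depth map guided by a high-res RGB image.

    depth_lr : (h, w) float array, NaN where no range measurement exists.
    rgb_hr   : (H, W, 3) float array in [0, 1], with H = h*scale, W = w*scale.
    Returns a (H, W) dense depth map plus a crude confidence map.
    """
    H, W = rgb_hr.shape[:2]
    out = np.zeros((H, W))
    conf = np.zeros((H, W))
    r = int(3 * sigma_s)  # spatial window radius in high-res pixels
    for y in range(H):
        for x in range(W):
            acc = wacc = 0.0
            for dy in range(-r, r + 1, scale):   # step by `scale` to visit
                for dx in range(-r, r + 1, scale):  # roughly one low-res site each
                    yy, xx = y + dy, x + dx
                    if not (0 <= yy < H and 0 <= xx < W):
                        continue
                    d = depth_lr[yy // scale, xx // scale]
                    if np.isnan(d):
                        continue
                    ws = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
                    dc = rgb_hr[y, x] - rgb_hr[yy, xx]
                    wr = np.exp(-np.dot(dc, dc) / (2 * sigma_r ** 2))
                    acc += ws * wr * d
                    wacc += ws * wr
            if wacc > 0:
                out[y, x] = acc / wacc
                conf[y, x] = wacc  # total filter weight as a confidence proxy
    return out, conf
```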
Multi-view Image and ToF Sensor Fusion for Dense 3D Reconstruction
2009
"... Multi-view stereo methods frequently fail to properly reconstruct 3D scene geometry if visible texture is sparse or the scene exhibits difficult self-occlusions. Time-of-Flight (ToF) depth sensors can provide 3D information regardless of texture but with only limited resolution and accuracy. To find ..."
Abstract
-
Cited by 33 (1 self)
- Add to MetaCart
(Show Context)
Multi-view stereo methods frequently fail to properly reconstruct 3D scene geometry if visible texture is sparse or the scene exhibits difficult self-occlusions. Time-of-Flight (ToF) depth sensors can provide 3D information regardless of texture, but with only limited resolution and accuracy. To find an optimal reconstruction, we propose an integrated multi-view sensor fusion approach that combines information from multiple color cameras and multiple ToF depth sensors. First, multi-view ToF sensor measurements are combined to obtain a coarse but complete model. Then, the initial model is refined by means of a probabilistic multi-view fusion framework, optimizing over an energy function that aggregates ToF depth sensor information with multi-view stereo and silhouette constraints. We obtain high-quality, dense, and detailed 3D models of scenes that are challenging for stereo alone, while simultaneously reducing the complex noise of ToF sensors.
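The abstract describes an energy aggregating ToF data with multi-view stereo and silhouette constraints. A generic form such an energy might take (the specific terms, robust losses, and weights below are assumptions, not the paper's exact formulation) is:

```latex
% Hypothetical generic form of the fusion energy over the depth/surface D:
% ToF data fidelity, multi-view photo-consistency, and a silhouette term.
% \rho_* are robust losses; \pi_i projects into view i; \lambda_* are weights.
E(D) = \sum_{k} \rho_{\mathrm{ToF}}\big( D - D^{\mathrm{ToF}}_{k} \big)
     + \lambda_{1} \sum_{i<j} \rho_{\mathrm{photo}}\big( I_{i}(\pi_{i}(D)) - I_{j}(\pi_{j}(D)) \big)
     + \lambda_{2}\, E_{\mathrm{sil}}(D)
```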
Patch based synthesis for single depth image super-resolution
In European Conference on Computer Vision (ECCV), 2012
"... Abstract. We present an algorithm to synthetically increase the reso-lution of a solitary depth image using only a generic database of local patches. Modern range sensors measure depths with non-Gaussian noise and at lower starting resolutions than typical visible-light cameras. While patch based ap ..."
Abstract
-
Cited by 17 (0 self)
- Add to MetaCart
(Show Context)
We present an algorithm to synthetically increase the resolution of a solitary depth image using only a generic database of local patches. Modern range sensors measure depths with non-Gaussian noise and at lower starting resolutions than typical visible-light cameras. While patch-based approaches for upsampling intensity images continue to improve, this is the first exploration of patching for depth images. We match against the height field of each low-resolution input depth patch and search our database for a list of appropriate high-resolution candidate patches. Selecting the right candidate at each location in the depth image is then posed as a Markov random field labeling problem. Our experiments also show how further depth-specific processing, such as noise removal and correct patch normalization, dramatically improves our results. Perhaps surprisingly, even better results are achieved on a variety of real test scenes by providing our algorithm with only synthetic training depth data.
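As a sketch of how the candidate search and a pairwise MRF term might look: the database layout, normalization, and helper names below are hypothetical, and the actual label selection would be handed to an MRF solver such as graph cuts or loopy belief propagation, which is omitted here.

```python
import numpy as np

def patch_candidates(lr_patch, database, k=10):
    """Return indices of the k high-res candidates whose downsampled,
    mean-normalized height fields best match the low-res input patch.

    database["lr_norm"] : assumed (n, p, p) array of mean-normalized
    low-res versions of the database's high-res patches.
    """
    q = lr_patch - lr_patch.mean()          # depth-specific normalization:
    d = database["lr_norm"] - q             # match relative height fields,
    costs = np.einsum("nij,nij->n", d, d)   # not absolute depth values
    idx = np.argsort(costs)[:k]
    return idx, costs[idx]                  # unary costs for the MRF

def pairwise_cost(hr_left, hr_right, overlap):
    """Disagreement of two horizontally adjacent high-res candidates on
    their shared overlap strip (the MRF smoothness term)."""
    diff = hr_left[:, -overlap:] - hr_right[:, :overlap]
    return float((diff ** 2).sum())
```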
Coherent Spatiotemporal Filtering, Upsampling and Rendering of RGBZ Videos
Computer Graphics Forum (Proceedings of Eurographics), 2012
"... Sophisticated video processing effects require both image and geometry information. We explore the possibility to augment a video camera with a recent infrared time-of-flight depth camera, to capture high-resolution RGB and low-resolution, noisy depth at video frame rates. To turn such a setup into ..."
Abstract
-
Cited by 13 (3 self)
- Add to MetaCart
(Show Context)
Sophisticated video processing effects require both image and geometry information. We explore the possibility of augmenting a video camera with a recent infrared time-of-flight depth camera, to capture high-resolution RGB and low-resolution, noisy depth at video frame rates. To turn such a setup into a practical RGBZ video camera, we develop efficient data filtering techniques that are tailored to the noise characteristics of IR depth cameras. We first remove typical artefacts in the RGBZ data and then apply an efficient spatiotemporal denoising and upsampling scheme. This allows us to record temporally coherent RGBZ videos at interactive frame rates and to use them to render a variety of effects in unprecedented quality. We show effects such as video relighting, geometry-based abstraction and stylisation, background segmentation, and rendering in stereoscopic 3D.
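A minimal stand-in for the temporal half of such a spatiotemporal scheme is motion-gated exponential blending of consecutive depth frames. This sketch is an assumption for illustration, not the authors' filter, which also handles spatial upsampling and IR-specific artefacts.

```python
import numpy as np

def temporal_denoise(prev, cur, alpha=0.7, motion_tol=0.05):
    """Blend the previous filtered depth frame into the current raw one,
    but fall back to the raw value where depth changed a lot (motion).

    prev, cur : (H, W) depth maps in meters; prev is already filtered.
    """
    moving = np.abs(cur - prev) > motion_tol  # likely scene/camera motion
    out = alpha * prev + (1 - alpha) * cur    # temporal smoothing
    out[moving] = cur[moving]                 # keep raw depth where moving
    return out
```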
Dense Depth Maps from Low Resolution Time-of-Flight Depth and High Resolution Color Views
"... Abstract. In this paper a systematic approach to the processing and combination of high resolution color images and low resolution time-offlight depth maps is described. The purpose is the calculation of a dense depth map for one of the high resolution color images. Special attention is payed to the ..."
Abstract
-
Cited by 12 (0 self)
- Add to MetaCart
(Show Context)
In this paper a systematic approach to the processing and combination of high-resolution color images and low-resolution time-of-flight depth maps is described. The purpose is the calculation of a dense depth map for one of the high-resolution color images. Special attention is paid to the different nature of the input data and their large difference in resolution. This way the low-resolution time-of-flight measurements are exploited without sacrificing the high-resolution observations in the color data.
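A typical first step in pipelines like this is reprojecting the low-resolution ToF samples into the high-resolution color view before any interpolation. The sketch below assumes known intrinsics and extrinsics, hardcodes an illustrative color resolution, and ignores occlusion handling (no z-buffering); all names are illustrative, not taken from the paper.

```python
import numpy as np

def project_tof_to_color(depth_tof, K_tof, K_rgb, R, t):
    """Scatter low-res ToF depth samples into the high-res color image plane.

    depth_tof    : (h, w) metric depth from the ToF camera.
    K_tof, K_rgb : 3x3 intrinsics; R, t : ToF-to-color extrinsics (assumed known).
    Returns a sparse (H, W) depth map in the color view (zeros = no sample).
    """
    h, w = depth_tof.shape
    H, W = 480, 640  # assumed color resolution for the sketch
    sparse = np.zeros((H, W))
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # homogeneous pixels
    rays = np.linalg.inv(K_tof) @ pix                         # back-project rays
    pts = rays * depth_tof.ravel()                            # 3-D in ToF frame
    pts_c = R @ pts + t[:, None]                              # into color frame
    uvw = K_rgb @ pts_c                                       # project to color
    u = np.round(uvw[0] / uvw[2]).astype(int)
    v = np.round(uvw[1] / uvw[2]).astype(int)
    ok = (uvw[2] > 0) & (0 <= u) & (u < W) & (0 <= v) & (v < H)
    sparse[v[ok], u[ok]] = uvw[2][ok]  # z in color frame; occlusions ignored
    return sparse
```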
High-Resolution Depth Maps Based on TOF-Stereo Fusion
"... Abstract — The combination of range sensors with color cameras can be very useful for robot navigation, semantic perception, manipulation, and telepresence. Several methods of combining range- and color-data have been investigated and successfully used in various robotic applications. Most of these ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
(Show Context)
The combination of range sensors with color cameras can be very useful for robot navigation, semantic perception, manipulation, and telepresence. Several methods of combining range and color data have been investigated and successfully used in various robotic applications. Most of these systems suffer from noise in the range data and from a resolution mismatch between the range sensor and the color cameras, since the resolution of current range sensors is much lower than that of color cameras. High-resolution depth maps can be obtained using stereo matching, but this often fails to construct accurate depth maps of weakly or repetitively textured scenes, or if the scene exhibits complex self-occlusions. Range sensors provide coarse depth information regardless of the presence or absence of texture. The use of a calibrated system, composed of a time-of-flight (TOF) camera and a stereoscopic camera pair, allows data fusion, thus overcoming the weaknesses of both individual sensors. We propose a novel TOF-stereo fusion method based on an efficient seed-growing algorithm which uses the TOF data projected onto the stereo image pair as an initial set of correspondences. These initial “seeds” are then propagated based on a Bayesian model which combines an image similarity score with rough depth priors computed from the low-resolution range data. The overall result is a dense and accurate depth map at the resolution of the color cameras at hand. We show that the proposed algorithm outperforms 2D image-based stereo algorithms and that the results are of higher resolution than those of off-the-shelf color-range sensors, e.g., Kinect. Moreover, the algorithm potentially exhibits real-time performance on a single CPU.
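A simplified version of such seed growing can be written as best-first propagation over a priority queue. The `similarity` callable below is assumed to already combine the matching score with the depth prior, which glosses over the paper's actual Bayesian model; all names here are illustrative.

```python
import heapq
import numpy as np

def grow_seeds(seeds, similarity, H, W, thresh=0.6):
    """Best-first disparity propagation from TOF-initialized seeds.

    seeds      : list of (score, x, y, d) initial correspondences from TOF.
    similarity : callable (x, y, d) -> score in [0, 1], assumed to fold in
                 both photo-consistency and the low-res depth prior.
    """
    disp = np.full((H, W), -1.0)
    heap = [(-s, x, y, d) for s, x, y, d in seeds]  # max-heap via negation
    heapq.heapify(heap)
    while heap:
        neg_s, x, y, d = heapq.heappop(heap)
        if disp[y, x] >= 0:
            continue                 # pixel already claimed by a better seed
        disp[y, x] = d
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < W and 0 <= ny < H and disp[ny, nx] < 0:
                # test disparities near the parent's (smoothness assumption)
                best = max((similarity(nx, ny, dd), dd) for dd in (d - 1, d, d + 1))
                if best[0] > thresh:
                    heapq.heappush(heap, (-best[0], nx, ny, best[1]))
    return disp
```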
Image guided depth upsampling using anisotropic total generalized variation
In IEEE International Conference on Computer Vision, 2013
"... In this work we present a novel method for the chal-lenging problem of depth image upsampling. Modern depth cameras such as Kinect or Time of Flight cameras deliver dense, high quality depth measurements but are limited in their lateral resolution. To overcome this limitation we for-mulate a convex ..."
Abstract
-
Cited by 7 (1 self)
- Add to MetaCart
(Show Context)
In this work we present a novel method for the challenging problem of depth image upsampling. Modern depth cameras such as Kinect or Time-of-Flight cameras deliver dense, high-quality depth measurements but are limited in their lateral resolution. To overcome this limitation we formulate a convex optimization problem using higher-order regularization for depth image upsampling. In this optimization an anisotropic diffusion tensor, calculated from a high-resolution intensity image, is used to guide the upsampling. We derive a numerical algorithm based on a primal-dual formulation that is efficiently parallelized and runs at multiple frames per second. We show that this novel upsampling clearly outperforms state-of-the-art approaches in terms of speed and accuracy on the widely used Middlebury 2007 datasets. Furthermore, we introduce novel datasets with highly accurate ground truth which, for the first time, enable benchmarking of depth upsampling methods using real sensor data.
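A generic anisotropic second-order total-generalized-variation upsampling energy of the kind described looks as follows; the exact data term, tensor construction, and weights here are assumptions, not the paper's precise formulation.

```latex
% Assumed generic anisotropic TGV-2 upsampling energy: u is the high-res
% depth, \mathbf{w} an auxiliary vector field, T the image-driven diffusion
% tensor, D_s the sparse depth samples with indicator m(x).
\min_{u,\,\mathbf{w}} \;
    \alpha_{1} \int_{\Omega} \big| T^{1/2} (\nabla u - \mathbf{w}) \big| \, dx
  + \alpha_{0} \int_{\Omega} \big| \nabla \mathbf{w} \big| \, dx
  + \int_{\Omega} m(x) \, (u - D_{s})^{2} \, dx
```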
Real-time Shading-based Refinement for Consumer Depth Cameras
"... Figure 1: Our method takes as input depth and aligned RGB images from any consumer depth camera (here a PrimeSense Carmine 1.09). Per-frame and in real-time we approximate the incident lighting and albedo, and use these for geometry refinement. From left: Example input depth and RGB image; raw depth ..."
Abstract
-
Cited by 5 (1 self)
- Add to MetaCart
Figure 1: Our method takes as input depth and aligned RGB images from any consumer depth camera (here a PrimeSense Carmine 1.09). Per frame and in real time we approximate the incident lighting and albedo, and use these for geometry refinement. From left: example input depth and RGB image; raw depth input prior to refinement (rendered with normals and Phong shading, respectively); our refined result (note the detail on the eye, top right, compared to the original depth map, bottom right); full 3D reconstruction using our refined depth maps in the real-time scan integration method of [Nießner et al. 2013] (far right).

We present the first real-time method for refinement of depth data using shape-from-shading in general uncontrolled scenes. Per frame, our real-time algorithm takes raw noisy depth data and an aligned RGB image as input, and approximates the time-varying incident lighting, which is then used for geometry refinement. This leads to dramatically enhanced depth maps at 30 Hz. Our algorithm makes few scene assumptions, handling arbitrary scene objects even under motion. To enable this type of real-time depth map enhancement, we contribute a new highly parallel algorithm that reformulates the inverse rendering optimization problem in prior work, allowing us to estimate lighting and shape in a temporally coherent way at video frame rates. Our optimization problem is minimized using a new regular-grid Gauss-Newton solver implemented fully on the GPU. We demonstrate results showing enhanced depth maps, which are comparable to offline methods but are computed orders of magnitude faster, as well as baseline comparisons with online filtering-based methods. We conclude with applications of our higher-quality depth maps for improved real-time surface reconstruction and performance capture.
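Shape-from-shading pipelines of this kind commonly approximate incident lighting with low-order spherical harmonics fit by linear least squares against the current normals. The sketch below uses a first-order, constant-albedo model as a stand-in for the paper's richer lighting/albedo estimation; the function name and simplifications are assumptions.

```python
import numpy as np

def estimate_sh_lighting(intensity, normals, mask):
    """Fit 4 first-order spherical-harmonic lighting coefficients by least
    squares, assuming constant albedo (a common simplification, not the
    paper's full albedo/lighting model).

    intensity : (H, W) grayscale image; normals : (H, W, 3) unit normals;
    mask      : (H, W) bool array of pixels with valid depth/normals.
    """
    n = normals[mask]                # (N, 3) valid normals
    b = intensity[mask]              # (N,) observed intensities
    # SH basis up to order 1 evaluated at each normal: [1, nx, ny, nz]
    A = np.concatenate([np.ones((n.shape[0], 1)), n], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs                    # predicted shading(n) ≈ [1, n] · coeffs
```

The residual between predicted shading and observed intensity is what a Gauss-Newton refinement step would then push into the depth/normal update.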
Joint Geodesic Upsampling of Depth Images
"... We propose an algorithm utilizing geodesic distances to upsample a low resolution depth image using a registered high resolution color image. Specifically, it computes depth for each pixel in the high resolution image using geodesic paths to the pixels whose depths are known from the low resolution ..."
Abstract
-
Cited by 5 (0 self)
- Add to MetaCart
(Show Context)
We propose an algorithm utilizing geodesic distances to upsample a low-resolution depth image using a registered high-resolution color image. Specifically, it computes depth for each pixel in the high-resolution image using geodesic paths to the pixels whose depths are known from the low-resolution one. Though this is closely related to the all-pairs-shortest-path problem, which has O(n² log n) complexity, we develop a novel approximation algorithm whose complexity grows linearly with the image size, achieving real-time performance. We compare our algorithm with the state of the art on the benchmark dataset and show that our approach provides more accurate depth upsampling with fewer artifacts. In addition, we show that the proposed algorithm is well suited for upsampling depth images using binary edge maps, an important sensor fusion application.
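The computation their approximation replaces can be written as a multi-source Dijkstra search whose edge weights mix spatial and color distance, with each pixel inheriting the depth of its geodesically nearest seed. This sketch is meant only to make the problem concrete; it is the exact O(n log n) version, not the paper's linear-time algorithm, and the weight `beta` is an assumed parameter.

```python
import heapq
import numpy as np

def geodesic_upsample(sparse_depth, rgb, beta=10.0):
    """Assign each pixel the depth of its geodesically nearest seed.

    sparse_depth : (H, W) with known depths at seed pixels, 0 elsewhere.
    rgb          : (H, W, 3) float color image; edge weights grow with color
                   difference, so geodesics avoid crossing image edges.
    """
    H, W = sparse_depth.shape
    dist = np.full((H, W), np.inf)
    depth = np.zeros((H, W))
    heap = []
    for y, x in zip(*np.nonzero(sparse_depth)):   # initialize all seeds
        dist[y, x] = 0.0
        depth[y, x] = sparse_depth[y, x]
        heapq.heappush(heap, (0.0, y, x))
    while heap:                                   # multi-source Dijkstra
        d0, y, x = heapq.heappop(heap)
        if d0 > dist[y, x]:
            continue                              # stale heap entry
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < H and 0 <= nx < W:
                step = 1.0 + beta * np.linalg.norm(rgb[y, x] - rgb[ny, nx])
                nd = d0 + step
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    depth[ny, nx] = depth[y, x]   # inherit nearest seed depth
                    heapq.heappush(heap, (nd, ny, nx))
    return depth
```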
Denoising Time-Of-Flight Data with Adaptive Total Variation
"... Abstract. For denoising depth maps from time-of-flight (ToF) cameras we propose an adaptive total variation based approach of first and second order. This approach allows us to take into account the geometric properties of the depth data, such as edges and slopes. To steer adaptivity we utilize a sp ..."
Abstract
-
Cited by 4 (2 self)
- Add to MetaCart
(Show Context)
For denoising depth maps from time-of-flight (ToF) cameras we propose an adaptive total-variation-based approach of first and second order. This approach allows us to take into account the geometric properties of the depth data, such as edges and slopes. To steer adaptivity we utilize a special kind of structure tensor based on both the amplitude and phase of the recorded ToF signal. A comparison to state-of-the-art denoising methods shows the advantages of our approach.
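A generic adaptive TV energy of first and second order, with an anisotropy tensor derived from a structure tensor, might be written as follows; the precise coupling of the two orders and the tensor construction are assumptions, not the paper's exact model.

```latex
% Assumed generic first/second-order adaptive TV denoising energy: f is the
% raw ToF depth, A(x) an anisotropy tensor steered by the structure tensor
% of the ToF amplitude/phase, and \mu balances the second-order term that
% favors piecewise-linear (sloped) regions over staircasing.
\min_{u} \; \frac{1}{2} \int_{\Omega} (u - f)^{2} \, dx
  + \int_{\Omega} \big| A(x)\, \nabla u \big| \, dx
  + \mu \int_{\Omega} \big| \nabla^{2} u \big| \, dx
```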