Joint Geodesic Upsampling of Depth Images
, 2013
Abstract - Cited by 5 (0 self)
We propose an algorithm utilizing geodesic distances to upsample a low-resolution depth image using a registered high-resolution color image. Specifically, it computes depth for each pixel in the high-resolution image using geodesic paths to the pixels whose depths are known from the low-resolution one. Though this is closely related to the all-pairs shortest-path problem, which has O(n² log n) complexity, we develop a novel approximation algorithm whose complexity grows linearly with the image size, achieving real-time performance. We compare our algorithm with the state of the art on the benchmark dataset and show that our approach provides more accurate depth upsampling with fewer artifacts. In addition, we show that the proposed algorithm is well suited for upsampling depth images using binary edge maps, an important sensor fusion application.
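As a rough illustration of the idea (a sketch, not the authors' linear-time algorithm), the code below propagates seed depths with a two-pass raster-scan approximation of the geodesic distance transform over the color image, so each high-resolution pixel inherits the depth of its approximately geodesically nearest low-resolution sample. The function name, the color-weight parameter mu, and the nearest-seed assignment (the paper interpolates among several seeds) are all assumptions for illustration.

```python
import numpy as np

def geodesic_upsample(color, seed_depth, seed_mask, mu=10.0, n_sweeps=2):
    """color: (H, W, 3) float image; seed_depth/seed_mask: (H, W) arrays,
    seed_mask marking pixels whose depth is known from the low-res map."""
    H, W = seed_mask.shape
    dist = np.where(seed_mask, 0.0, np.inf)        # geodesic distance to nearest seed
    depth = np.where(seed_mask, seed_depth, 0.0)   # depth carried from that seed

    def relax(y, x, ny, nx):
        # Edge weight mixes the spatial step with the color difference, so
        # paths avoid crossing strong color edges (likely depth boundaries).
        w = 1.0 + mu * np.linalg.norm(color[y, x] - color[ny, nx])
        if dist[ny, nx] + w < dist[y, x]:
            dist[y, x] = dist[ny, nx] + w
            depth[y, x] = depth[ny, nx]

    for _ in range(n_sweeps):
        for y in range(H):                         # forward raster sweep
            for x in range(W):
                if y > 0: relax(y, x, y - 1, x)
                if x > 0: relax(y, x, y, x - 1)
        for y in reversed(range(H)):               # backward raster sweep
            for x in reversed(range(W)):
                if y < H - 1: relax(y, x, y + 1, x)
                if x < W - 1: relax(y, x, y, x + 1)
    return depth
```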
Spatio-temporal geometry fusion for multiple hybrid cameras using moving least squares surfaces
- Computer Graphics Forum (Eurographics)
, 2014
Abstract - Cited by 2 (2 self)
[Figure 1: a) combined raw geometries from a calibrated setup of two hybrid color+depth cameras, rendered in green and red; b) result of our MLS-based geometry fusion; c) textured geometry from a), showing numerous visual artifacts due to the inaccurate and incomplete geometry, especially near depth discontinuities; d) our optimized textured geometry from b).]
Multiview reconstruction aims at computing the geometry of a scene observed by a set of cameras. Accurate 3D reconstruction of dynamic scenes is a key component in a large variety of applications, ranging from special effects to telepresence and medical imaging. In this paper we propose a method based on moving least squares surfaces which robustly and efficiently reconstructs dynamic scenes captured by a set of hybrid color+depth cameras. Our reconstruction provides spatio-temporal consistency and seamlessly fuses color and geometric information. We illustrate our formulation on a variety of real sequences and demonstrate that it compares favorably to state-of-the-art methods.
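For readers unfamiliar with MLS surfaces, the sketch below shows the basic building block: a single weighted plane-fit projection of a noisy point onto its neighborhood. The paper's spatio-temporal, multi-camera formulation is considerably richer; the function name and bandwidth h are illustrative assumptions.

```python
import numpy as np

def mls_project(p, neighbors, h=0.05):
    """Project a 3D point p onto the weighted least-squares plane of its
    neighbors (a (k, 3) array); h is the Gaussian kernel bandwidth."""
    d2 = np.sum((neighbors - p) ** 2, axis=1)
    w = np.exp(-d2 / h ** 2)                            # Gaussian weights
    c = (w[:, None] * neighbors).sum(axis=0) / w.sum()  # weighted centroid
    X = neighbors - c
    C = (w[:, None] * X).T @ X                          # weighted covariance
    n = np.linalg.eigh(C)[1][:, 0]   # normal = eigenvector of smallest eigenvalue
    return p - np.dot(p - c, n) * n  # drop p onto the fitted plane
```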
Temporal depth video enhancement based on intrinsic static structure
- Proc. IEEE Int. Conf. Image Processing
, 2014
Abstract - Cited by 1 (1 self)
Depth video enhancement is an essential preprocessing step for various 3D applications. Despite extensive studies of spatial enhancement, effective temporal enhancement that both strengthens temporal consistency and preserves correct depth variation needs further research. In this paper, we propose a novel method to enhance depth video by blending each raw depth frame with an estimated intrinsic static structure, which describes the static structure of the captured scene and is estimated iteratively by a probabilistic generative model from sequentially incoming depth frames. Our experimental results show that the proposed method is effective in both static and dynamic scenes and is compatible with various kinds of depth videos. We demonstrate that superior performance can be achieved in comparison with existing temporal enhancement approaches.
Index Terms: depth video enhancement, temporal enhancement, probabilistic model, variational approximation
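A crude per-pixel stand-in for the blending idea is the running-average update below; it replaces the paper's probabilistic generative model and variational estimation with a simple agreement test, so the threshold tau and the update rule are assumptions for illustration only.

```python
import numpy as np

def update_static_structure(frame, structure, count, tau=30.0):
    """frame: raw depth (H, W) floats, zeros mark holes; structure/count hold
    the running static estimate and the number of frames supporting it."""
    valid = frame > 0
    agree = valid & (np.abs(frame - structure) < tau)
    # Pixels consistent with the structure refine it (running mean).
    count = np.where(agree, count + 1, count)
    structure = np.where(
        agree, structure + (frame - structure) / np.maximum(count, 1), structure)
    # Fresh evidence at unsupported pixels (re)initializes the estimate.
    init = valid & (count == 0)
    structure = np.where(init, frame, structure)
    count = np.where(init, 1, count)
    # Output: static pixels take the stable estimate, the rest keep raw depth.
    return np.where(agree, structure, frame), structure, count
```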
Kinect Shadow Detection and Classification
- Proc. IEEE Int'l Conf. Computer Vision Workshops (ICCVW)
Abstract - Cited by 1 (0 self)
Kinect depth maps often contain missing data, or "holes", for various reasons. Most existing Kinect-related research treats these holes as artifacts and tries to minimize them as much as possible. In this paper, we advocate a totally different idea: turning Kinect holes into useful information. In particular, we are interested in the unique type of holes that are caused by occlusion of the Kinect's structured light, resulting in shadows and loss of depth acquisition. We propose a robust detection scheme to detect and classify different types of shadows based on their distinct local shadow patterns as determined from geometric analysis, without assumptions about object geometry. Experimental results demonstrate that the proposed scheme achieves very accurate shadow detection. We also demonstrate the usefulness of the extracted shadow information by successfully applying it to automatic foreground segmentation.
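To make the occlusion-shadow idea concrete: because the Kinect's IR projector is horizontally offset from its IR camera, an occlusion shadow borders a clear foreground-to-background depth jump along the scanline. The toy classifier below encodes only that observation; the margin test and labels are assumptions, and the paper's local shadow patterns and geometric analysis are far more discriminative.

```python
import numpy as np

def classify_holes_on_row(row, margin=50.0):
    """row: 1-D depth scanline, zeros mark holes. Returns (start, end, label)
    runs; smaller depth values are nearer to the camera."""
    runs, x, n = [], 0, len(row)
    while x < n:
        if row[x] == 0:
            start = x
            while x < n and row[x] == 0:
                x += 1                                  # skip over the hole run
            left = row[start - 1] if start > 0 else 0
            right = row[x] if x < n else 0
            if left > 0 and right > 0 and right - left > margin:
                runs.append((start, x, "shadow_fg_left"))   # foreground on the left
            elif left > 0 and right > 0 and left - right > margin:
                runs.append((start, x, "shadow_fg_right"))  # foreground on the right
            else:
                runs.append((start, x, "other"))        # border hole, noise, etc.
        else:
            x += 1
    return runs
```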
Online Temporally Consistent Indoor Depth Video Enhancement via Static Structure
- Submitted to IEEE Transactions on Image Processing
Abstract
In this paper, we propose a new method to enhance the quality of a depth video online, based on the intermediary of a so-called static structure of the captured scene. The static and dynamic regions of the input depth frame are robustly separated by a layer assignment procedure, in which the dynamic part stays in the front while the static part fits and helps to update this structure via a novel online variational generative model with added spatial refinement. The dynamic content is enhanced spatially, while the static region is substituted by the updated static structure so as to favor long-range spatio-temporal enhancement. The proposed method thus enforces long-range temporal consistency in the static region while keeping the necessary depth variations in the dynamic content, and can therefore produce flicker-free and spatially optimized depth videos with reduced motion blur and depth distortion. Our experimental results reveal that the proposed method is effective in both static and dynamic indoor scenes and is compatible with depth videos captured by Kinect and time-of-flight cameras. We also demonstrate that excellent performance can be achieved in comparison with existing spatio-temporal approaches. In addition, our enhanced depth videos and static structures can serve as effective cues to improve various applications, including depth-aided background subtraction and novel view synthesis, yielding satisfactory results with few visual artifacts.
Index Terms: static structure, temporally consistent depth video enhancement, online estimation, layer assignment
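The layer-assignment step might be sketched as below, assuming the static structure is modeled as the farthest surface consistently observed at each pixel; the paper's online variational generative model and spatial refinement are omitted, and the noise scale sigma is an assumed parameter.

```python
import numpy as np

def assign_layers(frame, structure, sigma=25.0):
    """frame, structure: (H, W) float depth maps, zeros mark holes. Returns
    per-pixel labels (0 hole, 1 dynamic, 2 static) and the updated structure."""
    valid = frame > 0
    dynamic = valid & (structure - frame > 3 * sigma)   # clearly in front
    static = valid & ~dynamic
    labels = np.zeros(frame.shape, dtype=np.uint8)
    labels[dynamic] = 1
    labels[static] = 2
    # Static observations refine the structure; farther readings push it back
    # when it was initialized on a surface that later turns out to be transient.
    structure = np.where(static, np.maximum(structure, frame), structure)
    return labels, structure
```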
Color-Guided Depth Recovery From RGB-D Data Using an Adaptive Autoregressive Model
Abstract
This paper proposes an adaptive color-guided autoregressive (AR) model for high-quality depth recovery from low-quality measurements captured by depth cameras. We observe and verify that the AR model tightly fits depth maps of generic scenes. The depth recovery task is formulated as a minimization of AR prediction errors subject to measurement consistency. The AR predictor for each pixel is constructed according to both the local correlation in the initial depth map and the nonlocal similarity in the accompanying high-quality color image. We analyze the stability of our method from a linear-system point of view and design a parameter adaptation scheme to achieve stable and accurate depth recovery. Quantitative and qualitative evaluations against ten state-of-the-art schemes show the effectiveness and superiority of our method. Being able to handle various types of depth degradation, the proposed method is versatile for mainstream depth sensors, both time-of-flight cameras and Kinect, as demonstrated by experiments on real systems.
Index Terms: depth recovery (upsampling, inpainting, denoising), autoregressive model, RGB-D camera
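The stated formulation can be sketched as a sparse linear least-squares problem: minimize a masked measurement-consistency term plus lam times the squared AR prediction error. The toy predictor below uses only a 4-neighbor Gaussian color kernel; the paper's AR weights additionally exploit local depth correlation, nonlocal similarity, and adaptive parameters, none of which are reproduced here.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def ar_depth_recovery(color, depth0, mask, lam=1.0, sigma_c=0.1):
    """color: (H, W, 3); depth0: (H, W) initial depth; mask: (H, W) bools
    marking measured pixels. Returns the recovered dense depth map."""
    H, W = mask.shape
    n = H * W
    idx = np.arange(n).reshape(H, W)
    rows, cols, vals = [], [], []
    for y in range(H):
        for x in range(W):
            ws, ns = [], []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                yy, xx = y + dy, x + dx
                if 0 <= yy < H and 0 <= xx < W:
                    dc = np.linalg.norm(color[y, x] - color[yy, xx])
                    ws.append(np.exp(-dc ** 2 / (2 * sigma_c ** 2)))
                    ns.append(idx[yy, xx])
            ws = np.asarray(ws) / np.sum(ws)        # normalized AR weights
            rows += [idx[y, x]] * len(ns)
            cols += ns
            vals += list(ws)
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    P = sp.eye(n) - A                               # AR prediction-error operator
    M = sp.diags(mask.ravel().astype(float))        # measurement-consistency mask
    # Normal equations of  min ||M^(1/2)(d - d0)||^2 + lam * ||P d||^2.
    d = spla.spsolve((M + lam * (P.T @ P)).tocsc(), M @ depth0.ravel())
    return d.reshape(H, W)
```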