Results 1 - 4 of 4
CP-Census: A Novel Model for Dense Variational Scene Flow from RGB-D Data
"... We present a novel method for dense variational scene flow estimation based a multi-scale Ternary Census Transform in combination with a patchwise Closest Points depth data term. On the one hand, the Ternary Census Transform in the intensity data term is capable of handling illumination changes, low ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
We present a novel method for dense variational scene flow estimation based on a multi-scale Ternary Census Transform in combination with a patchwise Closest Points depth data term. On the one hand, the Ternary Census Transform in the intensity data term is capable of handling illumination changes, low texture and noise. On the other hand, the patchwise Closest Points search in the depth data term increases the robustness in low-structured regions. Further, we utilize higher-order regularization which is weighted and directed according to the input data by an anisotropic diffusion tensor. This allows us to calculate a dense and accurate flow field which supports smooth as well as non-rigid movements while preserving flow boundaries. The numerical algorithm is solved based on a primal-dual formulation and is efficiently parallelized to run at high frame rates. In an extensive qualitative and quantitative evaluation we show that this novel method for scene flow calculation outperforms existing approaches. The method is applicable to any sensor delivering dense depth and intensity data, such as the Microsoft Kinect or the Intel Gesture Camera.
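For concreteness, the following is a minimal sketch of a single-scale, 3x3 ternary census descriptor and the Hamming-style cost one might plug into an intensity data term. It is not the paper's implementation: the tolerance eps, the neighbourhood size and the function names are illustrative, and the multi-scale construction, the depth data term and the primal-dual solver are omitted.

    # Minimal sketch: 3x3 ternary census transform on a grayscale image
    # (float numpy array) with tolerance eps; names are illustrative.
    import numpy as np

    def ternary_census_3x3(img, eps=0.02):
        """Return a per-pixel ternary descriptor (H, W, 8) with values in {-1, 0, +1}."""
        img = img.astype(np.float32)
        h, w = img.shape
        padded = np.pad(img, 1, mode='edge')
        offsets = [(-1, -1), (-1, 0), (-1, 1),
                   ( 0, -1),          ( 0, 1),
                   ( 1, -1), ( 1, 0), ( 1, 1)]
        desc = np.zeros((h, w, len(offsets)), dtype=np.int8)
        for k, (dy, dx) in enumerate(offsets):
            neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            diff = neighbour - img
            # +1 if clearly brighter than the centre, -1 if clearly darker, else 0
            desc[..., k] = np.where(diff > eps, 1, np.where(diff < -eps, -1, 0))
        return desc

    def census_cost(desc_a, desc_b):
        """Hamming-style distance between two ternary descriptors (intensity data cost)."""
        return np.sum(desc_a != desc_b, axis=-1)

Because the descriptor only records signed orderings within a tolerance, it is invariant to additive and multiplicative illumination changes, which is the property the abstract relies on.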
Nonrigid Surface Registration and Completion from RGBD Images
"... Abstract. Nonrigid surface registration is a challenging problem that suffers from many ambiguities. Existing methods typically assume the availability of full volumetric data, or require a global model of the sur-face of interest. In this paper, we introduce an approach to nonrigid registration tha ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
Nonrigid surface registration is a challenging problem that suffers from many ambiguities. Existing methods typically assume the availability of full volumetric data, or require a global model of the surface of interest. In this paper, we introduce an approach to nonrigid registration that operates on relatively low-quality RGBD images and does not assume prior knowledge of the global surface shape. To this end, we model the surface as a collection of patches, and infer the patch deformations by performing inference in a graphical model. Our representation lets us fill in the holes in the input depth maps, thus essentially achieving surface completion. Our experimental evaluation demonstrates the effectiveness of our approach on several sequences, as well as its robustness to missing data and occlusions.
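As a rough illustration of such a patch-based formulation, the sketch below sets up a toy graphical-model energy over patches and minimises it with iterated conditional modes (ICM). The discrete candidate translations, the ICM solver and all names are assumptions made for illustration; the paper's deformation parameterisation and inference procedure are not reproduced here.

    # Toy sketch: patch-graph energy (unary data fit + pairwise smoothness)
    # minimised by ICM; candidate deformations are simple 3D translations.
    import numpy as np

    def icm_patch_registration(unary, neighbours, candidates, smooth_w=1.0, iters=10):
        """unary: (P, C) data cost of assigning candidate c to patch p.
        neighbours: list of (p, q) index pairs of adjacent patches.
        candidates: (C, 3) candidate translations shared by all patches.
        Returns one label per patch minimising unary + pairwise smoothness."""
        P, C = unary.shape
        labels = np.argmin(unary, axis=1)              # independent initialisation
        pair_cost = np.linalg.norm(candidates[:, None] - candidates[None, :], axis=-1)
        adj = [[] for _ in range(P)]
        for p, q in neighbours:
            adj[p].append(q)
            adj[q].append(p)
        for _ in range(iters):                         # iterated conditional modes
            for p in range(P):
                cost = unary[p].copy()
                for q in adj[p]:
                    cost += smooth_w * pair_cost[:, labels[q]]
                labels[p] = np.argmin(cost)
        return labels

The pairwise term penalises neighbouring patches that choose very different deformations, which is the mechanism that lets information propagate into patches whose depth data is missing.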
Similarity-Aware Patchwork Assembly for Depth Image Super-Resolution
"... This paper describes a patchwork assembly algorithm for depth image super-resolution. An input low resolution depth image is disassembled into parts by matching similar regions on a set of high resolution training images, and a super-resolution image is then assembled using these cor-responding matc ..."
Abstract
-
Cited by 1 (0 self)
- Add to MetaCart
(Show Context)
This paper describes a patchwork assembly algorithm for depth image super-resolution. An input low-resolution depth image is disassembled into parts by matching similar regions on a set of high-resolution training images, and a super-resolution image is then assembled using these corresponding matched counterparts. We convert the super-resolution problem into a Markov Random Field (MRF) labeling problem, and propose a unified formulation embedding (1) the consistency between the resolution-enhanced image and the original input, (2) the similarity of disassembled parts with the corresponding regions on training images, (3) the depth smoothness in local neighborhoods, (4) the additional geometric constraints from self-similar structures in the scene, and (5) the boundary coincidence between the resolution-enhanced depth image and an optional aligned high-resolution intensity image. Experimental results on both synthetic and real-world data demonstrate that the proposed algorithm is capable of recovering high-quality depth images with ×4 resolution enhancement along each coordinate direction, and that it outperforms the state of the art [14] in both qualitative and quantitative evaluations.
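The sketch below shows how an MRF energy of this general form might be assembled, assuming each site (a low-resolution patch location) selects one candidate high-resolution patch. Only the consistency term (1) and a crude stand-in for the smoothness term (3) are included, with illustrative weights; terms (2), (4) and (5) and the actual MRF solver (e.g. graph cuts or belief propagation) are left out.

    # Illustrative MRF-style energy for patchwork depth super-resolution;
    # weights, shapes and helper names are assumptions, not the paper's.
    import numpy as np

    def mrf_energy(labels, candidates, lowres_patches, edges,
                   w_data=1.0, w_smooth=0.1, scale=4):
        """labels: (S,) chosen candidate index per site.
        candidates: (S, C, hp, wp) candidate high-res patches per site.
        lowres_patches: (S, hp//scale, wp//scale) observed low-res patches.
        edges: list of (s, t) pairs of neighbouring sites."""
        energy = 0.0
        for s in range(len(labels)):
            hi = candidates[s, labels[s]]
            # term (1): the candidate, downsampled by block averaging,
            # should be consistent with the observed low-res patch
            hp, wp = hi.shape
            down = hi.reshape(hp // scale, scale, wp // scale, scale).mean(axis=(1, 3))
            energy += w_data * np.sum((down - lowres_patches[s]) ** 2)
        for s, t in edges:
            # crude stand-in for term (3): penalise disagreement between the
            # full neighbouring candidates (a real implementation would compare
            # only their overlapping region)
            energy += w_smooth * np.sum((candidates[s, labels[s]] -
                                         candidates[t, labels[t]]) ** 2)
        return energy

A discrete optimiser would then search over the label vector to minimise this energy, which is the "labeling problem" the abstract refers to.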
Depth Enhancement via Low-rank Matrix Completion
"... Depth captured by consumer RGB-D cameras is often noisy and misses values at some pixels, especially around object boundaries. Most existing methods complete the missing depth values guided by the corresponding color im-age. When the color image is noisy or the correlation be-tween color and depth i ..."
Abstract
- Add to MetaCart
(Show Context)
Depth captured by consumer RGB-D cameras is often noisy and misses values at some pixels, especially around object boundaries. Most existing methods complete the missing depth values guided by the corresponding color image. When the color image is noisy or the correlation between color and depth is weak, the depth map cannot be properly enhanced. In this paper, we present a depth map enhancement algorithm that performs depth map completion and de-noising simultaneously. Our method is based on the observation that similar RGB-D patches lie in a very low-dimensional subspace. We can then assemble the similar patches into a matrix and enforce this low-rank subspace constraint. This low-rank subspace constraint essentially captures the underlying structure in the RGB-D patches and enables robust depth enhancement against noise or weak correlation between color and depth. Based on this subspace constraint, our method formulates depth map enhancement as a low-rank matrix completion problem. Since the rank varies from matrix to matrix, we develop a data-driven method to automatically determine the rank for each matrix. Experiments on both public benchmarks and our own captured RGB-D images show that our method can effectively enhance depth maps.
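A minimal sketch of the low-rank completion idea, assuming the similar RGB-D patches have already been found and stacked as the columns of a matrix M with missing depths marked as NaN: iterate a truncated SVD and re-impose the observed entries. The fixed rank r and this particular solver are illustrative; the paper instead determines the rank per matrix with a data-driven method.

    # Sketch: rank-r matrix completion by iterative truncated SVD.
    import numpy as np

    def complete_low_rank(M, r=3, iters=50):
        """Fill the NaN entries of M with a rank-r approximation."""
        observed = ~np.isnan(M)
        X = np.where(observed, M, np.nanmean(M))        # initialise missing entries
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s[r:] = 0.0                                 # enforce the rank constraint
            X_low = (U * s) @ Vt                        # best rank-r approximation of X
            X = np.where(observed, M, X_low)            # keep observed depths fixed
        return np.where(observed, M, X_low)

Because the low-rank fit is computed jointly over all similar patches, noisy observed depths are also regularised toward the shared subspace, which is why completion and de-noising can be handled in one step.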