CiteSeerX

LidarBoost: Depth superresolution for ToF 3D shape scanning. In: CVPR (2009)

by S Schuon, C Theobalt, J Davis, S Thrun

Results 1 - 10 of 29 citing documents

Patch based synthesis for single depth image super-resolution

by Oisin Mac Aodha, Neill D. F. Campbell, Arun Nair, Gabriel J. Brostow - In European Conference on Computer Vision (ECCV), 2012
Abstract - Cited by 17 (0 self)
Abstract. We present an algorithm to synthetically increase the resolution of a solitary depth image using only a generic database of local patches. Modern range sensors measure depths with non-Gaussian noise and at lower starting resolutions than typical visible-light cameras. While patch based approaches for upsampling intensity images continue to improve, this is the first exploration of patching for depth images. We match against the height field of each low resolution input depth patch, and search our database for a list of appropriate high resolution candidate patches. Selecting the right candidate at each location in the depth image is then posed as a Markov random field labeling problem. Our experiments also show how important further depth-specific processing, such as noise removal and correct patch normalization, dramatically improves our results. Perhaps surprisingly, even better results are achieved on a variety of real test scenes by providing our algorithm with only synthetic training depth data.

Citation Context

...boring patches. The data likelihood term, E_d(x̂_i), measures the difference between the normalized input patch and the normalized downsampled high resolution candidate: E_d(x̂_i) = ||x̂_i − (ŷ_i)↓_d||².  (2)  Unlike Freeman et al. [3], we do not upsample the low resolution input using a deterministic interpolation method to then compute matches at the upsampled scale. We found that doing so unnecessarily...
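
As an illustration of the data term in the snippet above (Eq. 2), here is a minimal Python sketch: the cost between a normalized low-resolution input patch and a normalized, downsampled high-resolution candidate. The patch sizes, the block-average downsampling, and the zero-mean/unit-norm normalization are assumptions made for the sketch, not details taken from the paper.

    import numpy as np

    def block_downsample(patch, factor):
        """Average-pool a square patch by an integer factor (assumed observation model)."""
        h, w = patch.shape
        return patch.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def normalize(patch):
        """Zero-mean, unit-norm normalization (one plausible reading of 'normalized')."""
        p = patch - patch.mean()
        n = np.linalg.norm(p)
        return p / n if n > 0 else p

    def data_term(x_lr, y_hr_candidate, factor):
        """E_d = || x_hat - (y_hat) downsampled by d ||^2, following Eq. (2) above."""
        x_hat = normalize(x_lr)
        y_hat_down = block_downsample(normalize(y_hr_candidate), factor)
        return float(np.sum((x_hat - y_hat_down) ** 2))

    # Toy usage: rank five hypothetical 12x12 candidates against one 4x4 input patch.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(4, 4))
    candidates = [rng.normal(size=(12, 12)) for _ in range(5)]
    costs = [data_term(x, c, factor=3) for c in candidates]
    best = int(np.argmin(costs))

In the paper these per-patch costs feed into an MRF labeling problem together with a pairwise term; only the unary cost is sketched here.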

Capturing time-of-flight data with confidence

by Malcolm Reynolds, Leto Peel, Tim Weyrich, Gabriel J Brostow - In Proc. CVPR, 2011
Abstract - Cited by 13 (2 self)
Time-of-Flight cameras provide high-frame-rate depth measurements within a limited range of distances. These readings can be extremely noisy and display unique errors, for instance, where scenes contain depth discontinuities or materials with low infrared reflectivity. Previous works have treated the amplitude of each Time-of-Flight sample as a measure of confidence. In this paper, we demonstrate the shortcomings of this common lone heuristic, and propose an improved per-pixel confidence measure using a Random Forest regressor trained with real-world data. Using an industrial laser scanner for ground truth acquisition, we evaluate our technique on data from two different Time-of-Flight cameras. We argue that an improved confidence measure leads to superior reconstructions in subsequent steps of traditional scan processing pipelines. At the same time, data with confidence reduces the need for point cloud smoothing and median filtering.

Citation Context

...based on color affinity, iteratively reducing the level of up-sampled blur that occurs due to interpolation in discontinuous areas. Our main inspiration, however, comes from the work of Schuon et al. [29]. Their LidarBoost method combines several noisy ToF frames into a single, high-resolution depth image. By assigning zero confidence...
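
The abstract above replaces the lone amplitude heuristic with a Random Forest regressor that predicts per-pixel confidence from real data. Below is a minimal sketch with scikit-learn, assuming hypothetical per-pixel features (amplitude, measured depth, local depth variance) and using the absolute depth error against ground truth as the regression target; the feature set and the error-to-confidence mapping are illustrative choices, not the paper's.

    import numpy as np
    from scipy.ndimage import uniform_filter
    from sklearn.ensemble import RandomForestRegressor

    def per_pixel_features(depth, amplitude):
        """Stack simple per-pixel features: amplitude, depth, 3x3 local depth variance."""
        m = uniform_filter(depth, size=3)
        v = np.maximum(uniform_filter(depth ** 2, size=3) - m ** 2, 0.0)
        return np.stack([amplitude.ravel(), depth.ravel(), v.ravel()], axis=1)

    # Toy training data: one synthetic ToF frame plus ground-truth depth
    # (standing in for the laser-scanned ground truth used in the paper).
    rng = np.random.default_rng(0)
    gt = rng.uniform(1.0, 3.0, size=(64, 64))
    amplitude = rng.uniform(0.1, 1.0, size=(64, 64))
    tof = gt + rng.normal(scale=0.05 / amplitude)          # low amplitude -> noisier depth

    X = per_pixel_features(tof, amplitude)
    y = np.abs(tof - gt).ravel()                           # target: per-pixel absolute error

    rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

    # Confidence for a frame: map the predicted error into (0, 1]; the mapping is arbitrary.
    pred_err = rf.predict(per_pixel_features(tof, amplitude)).reshape(tof.shape)
    confidence = 1.0 / (1.0 + pred_err)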

KinectAvatar: Fully Automatic Body Capture Using a Single Kinect

by Yan Cui, Will Chang, Tobias Nöll, Didier Stricker
"... Abstract. We present a novel scanning system for capturing a full 3D human body model using just a single depth camera and no auxiliary equipment. We claim that data captured from a single Kinect is sufficient to produce a good quality full 3D human model. In this setting, the challenges we face are ..."
Abstract - Cited by 7 (0 self) - Add to MetaCart
Abstract. We present a novel scanning system for capturing a full 3D human body model using just a single depth camera and no auxiliary equipment. We claim that data captured from a single Kinect is sufficient to produce a good quality full 3D human model. In this setting, the challenges we face are the sensor’s low resolution with random noise and the subject’s non-rigid movement when capturing the data. To overcome these challenges, we develop an improved superresolution algorithm that takes color constraints into account. We then align the super-resolved scans using a combination of automatic rigid and non-rigid registration. As the system is of low price and obtains impressive results in several minutes, full 3D human body scanning technology can now become more accessible to everyday users at home. 1
(Show Context)

Citation Context

...ould be applied as a pre-processing step. Newcombe et al. [5] apply a bilateral filter [10] to the raw Kinect depth map to obtain a discontinuity preserved depth map with reduced noise. Schuon et al. [11] develop a super-resolution algorithm (LidarBoost) to improve the depth resolution and data quality of a ToF range scan, and Cui et al. [2] further develop this method. In this paper, we compare these...
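
The context above points to bilateral filtering of the raw Kinect depth map as a standard denoising pre-process before super-resolution. Here is a brute-force sketch of a bilateral filter on a depth image; the window radius and the two sigmas are illustrative, zero depths are treated as missing and skipped, and no attempt is made at an optimized or library implementation.

    import numpy as np

    def bilateral_filter_depth(depth, radius=3, sigma_s=2.0, sigma_r=0.05):
        """Edge-preserving smoothing of a depth map (meters); zeros are treated as missing."""
        h, w = depth.shape
        out = depth.copy()
        ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
        for i in range(radius, h - radius):
            for j in range(radius, w - radius):
                center = depth[i, j]
                if center == 0:                     # missing measurement, leave as-is
                    continue
                window = depth[i - radius:i + radius + 1, j - radius:j + radius + 1]
                valid = window > 0
                range_w = np.exp(-((window - center) ** 2) / (2 * sigma_r ** 2))
                weights = spatial * range_w * valid
                out[i, j] = np.sum(weights * window) / np.sum(weights)
        return out

    # Usage on a noisy toy depth map with one discontinuity to preserve.
    rng = np.random.default_rng(1)
    depth = 2.0 + 0.02 * rng.standard_normal((48, 48))
    depth[:, 24:] += 0.5
    smoothed = bilateral_filter_depth(depth)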

Combining stereo and Time-of-Flight images with application to automatic plant phenotyping

by Yu Song, Chris A. Glasbey, Gerrit Polder, J. Anja Dieleman
"... Abstract. This paper shows how stereo and Time-of-Flight (ToF) images can be combined to estimate dense depth maps in order to automate plant phenotyping. We focus on some challenging plant images captured in a glasshouse environment, and show that even the state-of-the-art stereo methods produce un ..."
Abstract - Cited by 5 (2 self) - Add to MetaCart
Abstract. This paper shows how stereo and Time-of-Flight (ToF) images can be combined to estimate dense depth maps in order to automate plant phenotyping. We focus on some challenging plant images captured in a glasshouse environment, and show that even the state-of-the-art stereo methods produce unsatisfactory results. By developing a geometric approach which transforms depth information in a ToF image to a localised search range for dense stereo, a global optimisation strategy is adopted for producing smooth and discontinuity-preserving results. Since pixel-by-pixel depth data are unavailable for our images and many other applications, a quantitative method accounting for the surface smoothness and the edge sharpness to evaluate estimation results is proposed. We compare our method with and without ToF against other state-ofthe-art stereo methods, and demonstrate that combining stereo and ToF images gives superior results. 1
(Show Context)

Citation Context

...lt to be matched. Using ToF in these situations provided an estimate and reduced ambiguities. Another advantage is that dense stereo can be a super resolution technique for ToF images as discussed by [4, 16], and we have presented discontinuity preserving results by combining ToF and stereo (e.g. Fig. 4). We have not considered environmental effects or measurement uncertainties related to the ToF camera ...
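
To illustrate the geometric idea in the abstract above, turning registered ToF depth into a localized disparity search range for dense stereo, here is a small sketch. It assumes a rectified stereo pair with focal length f in pixels and baseline B in meters, the ToF image already warped into the reference view, and a fixed symmetric depth uncertainty band; all of these values are illustrative assumptions, not the paper's calibration.

    import numpy as np

    def disparity_search_range(tof_depth, focal_px, baseline_m, depth_sigma=0.03):
        """Per-pixel [d_min, d_max] disparity bounds from ToF depth Z via d = f*B/Z.

        A band of +/- depth_sigma (meters) around each reading is mapped to disparities;
        pixels with no ToF reading (Z == 0) get the full range [0, inf) by convention.
        """
        z = np.asarray(tof_depth, dtype=float)
        z_near = np.maximum(z - depth_sigma, 1e-6)
        z_far = z + depth_sigma
        d_max = np.where(z > 0, focal_px * baseline_m / z_near, np.inf)
        d_min = np.where(z > 0, focal_px * baseline_m / z_far, 0.0)
        return d_min, d_max

    # Usage: restrict a dense stereo matcher to [d_min, d_max] at each pixel.
    tof = np.array([[1.5, 2.0], [0.0, 4.0]])        # meters; 0.0 marks a missing reading
    d_min, d_max = disparity_search_range(tof, focal_px=1000.0, baseline_m=0.1)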

A joint intensity and depth co-sparse analysis model for depth map super-resolution

by Martin Kiechle, Simon Hawe, Martin Kleinsteuber - In: Proceedings of the IEEE International Conference on Computer Vision, ICCV, 2013
Abstract - Cited by 4 (2 self)
High-resolution depth maps can be inferred from low-resolution depth measurements and an additional high-resolution intensity image of the same scene. To that end, we introduce a bimodal co-sparse analysis model, which is able to capture the interdependency of registered intensity and depth information. This model is based on the assumption that the co-supports of corresponding bimodal image structures are aligned when computed by a suitable pair of analysis operators. No analytic form of such operators exists and we propose a method for learning them from a set of registered training signals. This learning process is done offline and returns a bimodal analysis operator that is universally applicable to natural scenes. We use this to exploit the bimodal co-sparse analysis model as a prior for solving inverse problems, which leads to an efficient algorithm for depth map super-resolution.

Citation Context

...work well for small upscaling factors. A different approach, which also solely requires depth information is based on fusing multiple displaced LR depth maps into a single HR depth map. Schuon et al. [23] develop a global energy optimization framework employing data fidelity and geometry priors. This idea is extended for better edge-preservation by Bhavsar et al. in [4]. A number of recently introduce...
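
The context above characterizes the cited method as a global energy over several registered low-resolution depth maps with a data-fidelity term and a geometry prior. The following is a deliberately simplified sketch in that spirit, not the actual LidarBoost formulation: block-average downsampling as the observation model, a quadratic smoothness prior, and plain gradient descent; weights, step size, and initialization are all assumptions.

    import numpy as np

    def downsample(hr, factor):
        """Block-average downsampling used as the observation model A(D)."""
        h, w = hr.shape
        return hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

    def upsample(lr, factor):
        """Adjoint of block averaging: spread each LR residual over its HR block."""
        return np.kron(lr, np.ones((factor, factor))) / (factor * factor)

    def fuse(lr_frames, factor, lam=0.1, steps=200, step_size=0.5):
        """Minimize sum_k ||A(D) - Y_k||^2 + lam * ||grad D||^2 by gradient descent."""
        D = np.kron(np.mean(lr_frames, axis=0), np.ones((factor, factor)))  # init
        for _ in range(steps):
            grad = np.zeros_like(D)
            for Y in lr_frames:
                grad += 2 * upsample(downsample(D, factor) - Y, factor)     # data term
            lap = (np.roll(D, 1, 0) + np.roll(D, -1, 0) +
                   np.roll(D, 1, 1) + np.roll(D, -1, 1) - 4 * D)
            grad += -2 * lam * lap                                          # smoothness term
            D -= step_size * grad
        return D

    # Toy usage: four noisy low-resolution observations of the same smooth surface.
    rng = np.random.default_rng(0)
    truth = np.fromfunction(lambda i, j: np.sin(i / 8.0) + np.cos(j / 8.0), (32, 32))
    frames = [downsample(truth, 4) + 0.05 * rng.standard_normal((8, 8)) for _ in range(4)]
    D_hr = fuse(frames, factor=4)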

Depth super resolution by rigid body self-similarity in 3d

by Christoph Rhemann, Margrit Gelautz, Carsten Rother - In CVPR, 2013
Abstract - Cited by 4 (0 self)
We tackle the problem of jointly increasing the spatial resolution and apparent measurement accuracy of an input low-resolution, noisy, and perhaps heavily quantized depth map. In stark contrast to earlier work, we make no use of ancillary data like a color image at the target resolution, multiple aligned depth maps, or a database of high-resolution depth exemplars. Instead, we proceed by identifying and merging patch correspondences within the input depth map itself, exploiting patchwise scene self-similarity across depth such as repetition of geometric primitives or object symmetry. While the notion of ‘single-image’ super resolution has successfully been applied in the context of color and intensity images, we are to our knowledge the first to present a tailored analogue for depth images. Rather than reason in terms of patches of 2D pixels as others have before us, our key contribution is to proceed by reasoning in terms of patches of 3D points, with matched patch pairs related by a respective 6 DoF rigid body motion in 3D. In support of obtaining a dense correspondence field in reasonable time, we introduce a new 3D variant of PatchMatch. A third contribution is a simple, yet effective patch upscaling and merging technique, which predicts sharp object boundaries at the target resolution. We show that our results are highly competitive with those of alternative techniques leveraging even a color image at the target resolution or a database of high-resolution depth exemplars.

Citation Context

...[13] take this idea further and use a non-local, highly-connected smoothness term that better preserves thin structures in the SR output. Multiple Depth Maps. The Lidarboost approach of Schuon et al. [17] combines several depth maps acquired from slightly different viewpoints. The Kinectfusion approach of Izadi et al. [10] produces outstanding results by fusing a sequence of depth maps generated by a ...
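
The abstract above relates matched patches of 3D points by 6 DoF rigid body motions. Independently of the paper's PatchMatch-style search, the motion for one matched patch pair can be estimated with the standard SVD (Kabsch/Procrustes) solution; the sketch below assumes the point-to-point correspondences within the pair are already given, which is the part the paper's search actually has to discover.

    import numpy as np

    def rigid_motion(src, dst):
        """Least-squares R (3x3), t (3,) with dst ~ src @ R.T + t for Nx3 corresponded points."""
        src_c = src - src.mean(axis=0)
        dst_c = dst - dst.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ S @ U.T
        t = dst.mean(axis=0) - R @ src.mean(axis=0)
        return R, t

    # Usage: recover a known rotation/translation from a noisy 3D patch.
    rng = np.random.default_rng(0)
    src = rng.normal(size=(50, 3))
    a = 0.3
    R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                       [np.sin(a),  np.cos(a), 0.0],
                       [0.0, 0.0, 1.0]])
    dst = src @ R_true.T + np.array([0.1, -0.2, 0.05]) + 0.01 * rng.normal(size=(50, 3))
    R_est, t_est = rigid_motion(src, dst)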

Superfaces: A Super-Resolution Model for 3D Faces

by Stefano Berretti, Alberto Del Bimbo, Pietro Pala
"... Abstract. Face recognition based on the analysis of 3D scans has been an active research subject over the last few years. However, the impact of the resolution of 3D scans on the recognition process has not been addressed explicitly yet being of primal importance after the introduction of a new gene ..."
Abstract - Cited by 3 (2 self) - Add to MetaCart
Abstract. Face recognition based on the analysis of 3D scans has been an active research subject over the last few years. However, the impact of the resolution of 3D scans on the recognition process has not been addressed explicitly yet being of primal importance after the introduction of a new generation of low cost 4D scanning devices. These devices are capable of combined depth/rgb acquisition over time with a low resolution compared to the 3D scanners typically used in 3D face recognition benchmarks. In this paper, we define a super-resolution model for 3D faces by which a sequence of low-resolution 3D scans can be processed to extract a higher resolution 3D face model, namely the superface model. The proposed solution relies on the Scaled ICP procedure to align the low-resolution 3D models with each other and estimate the value of the high-resolution 3D model based on the statistics of values of the lowresolution scans in corresponding points. The approach is validated on a data set that includes, for each subject, one sequence of low-resolution 3D face scans and one ground-truth high-resolution 3D face model acquired through a high-resolution 3D scanner. In this way, results of the super-resolution process are evaluated qualitatively and quantitatively by measuring the error between the superface and the ground-truth. 1
(Show Context)

Citation Context

... the relations between depth and intensity data, such as the joint occurrence of depth and intensity edges, and smoothness of geometry in areas of largely uniform color. Also the approach proposed in [12] targets processing of data provided by time-of-flight cameras. However, the proposed solution relies on an energy minimization framework that explicitly takes into account the characteristic of the s...
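
The abstract above builds the superface from statistics of the Scaled-ICP-aligned low-resolution scans at corresponding points. Below is a minimal sketch of that fusion step, assuming the scans are already registered into a common frame, that "corresponding points" are taken as the neighbours within a radius of each grid location, and that the statistic is the median; the radius and the choice of statistic are illustrative, not the paper's.

    import numpy as np
    from scipy.spatial import cKDTree

    def fuse_aligned_scans(scans, grid_xy, radius=0.05):
        """Median z-value of all scan points within `radius` of each (x, y) grid location.

        `scans` is a list of Nx3 arrays already registered to a common frame; grid
        locations with no nearby support are returned as NaN.
        """
        points = np.vstack(scans)
        tree = cKDTree(points[:, :2])
        fused = np.full(len(grid_xy), np.nan)
        for i, (x, y) in enumerate(grid_xy):
            idx = tree.query_ball_point([x, y], r=radius)
            if idx:
                fused[i] = np.median(points[idx, 2])
        return fused

    # Toy usage: three noisy scans of the plane z = 0.5 fused onto a 20x20 grid.
    rng = np.random.default_rng(0)
    scans = [np.column_stack([rng.uniform(0, 1, 400), rng.uniform(0, 1, 400),
                              0.5 + 0.01 * rng.standard_normal(400)]) for _ in range(3)]
    gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
    grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
    z = fuse_aligned_scans(scans, grid_xy)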

Stereo+Kinect for High Resolution Stereo Correspondences

by Gowri Somanath, Scott Cohen, Brian Price, Chandra Kambhamettu
"... In this work, we combine the complementary depth sensors Kinect and stereo image matching to obtain high quality correspondences. Our goal is to obtain a dense disparity map at the spatial and depth resolution of the stereo cameras (4-12 MP). We propose a global optimization scheme, where both the d ..."
Abstract - Cited by 2 (0 self) - Add to MetaCart
In this work, we combine the complementary depth sensors Kinect and stereo image matching to obtain high quality correspondences. Our goal is to obtain a dense disparity map at the spatial and depth resolution of the stereo cameras (4-12 MP). We propose a global optimization scheme, where both the data and smoothness costs are derived using sensor confidences and low resolution geometry from Kinect. A spatially varying search range is used to limit the number of potential disparities at each pixel. The smoothness prior is based on available low resolution depth from Kinect rather than image gradients, thus performing better in both textured areas with smooth depth and textureless areas with depth gradient. We also propose a spatially varying smoothness weight to better handle occlusion areas, and the relative contribution of the two energy terms. We demonstrate how the two sensors can be effectively fused to obtain correct scene depth in ambiguous areas, as well as fine structural details in textured areas. 1.
(Show Context)

Citation Context

...ve employed alternate depth sensors in combination with single or stereo cameras. The first set of schemes obtain multiple samples from a single moving depth sensor to improve the accuracy or density [8, 2, 10]. Structure from Motion and tracking is used to register multiple scans. Though the schemes can provide high quality depth map as a final result, there is no clear relation between the final accuracy ...
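
One concrete piece of the approach above is the smoothness prior driven by the low-resolution Kinect depth rather than by image gradients. The sketch below computes spatially varying per-edge weights from an upsampled guide depth map; the exponential falloff and the sigma are illustrative assumptions, not the paper's exact weighting.

    import numpy as np

    def smoothness_weights(depth_up, sigma=0.02):
        """Per-edge weights for horizontal/vertical neighbour pairs of an upsampled depth map.

        Large weights where the guide depth is locally flat (encourage smooth disparity),
        small weights across depth discontinuities (allow the disparity to jump).
        """
        dz_x = np.abs(np.diff(depth_up, axis=1))     # gradient between horizontal neighbours
        dz_y = np.abs(np.diff(depth_up, axis=0))     # gradient between vertical neighbours
        return np.exp(-dz_x / sigma), np.exp(-dz_y / sigma)

    # Usage inside a pairwise energy: cost += w_x[i, j] * |d[i, j] - d[i, j + 1]|, etc.
    depth_up = np.tile(np.linspace(1.0, 1.2, 64), (64, 1))
    depth_up[:, 32:] += 0.4                          # a step edge -> low weight across it
    w_x, w_y = smoothness_weights(depth_up)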

Depth map up-sampling using cost-volume filtering

by Ji-ho Cho, Satoshi Ikehata, Hyunjin Yoo, Margrit Gelautz, Kiyoharu Aizawa - in Proc. of IVMSP Workshop, 2013
Abstract - Cited by 1 (1 self)
Depth maps captured by active sensors (e.g., ToF cameras and Kinect) typically suffer from poor spatial resolution, considerable amount of noise, and missing data. To overcome these problems, we propose a novel depth map up-sampling method which increases the resolution of the original depth map while effectively suppressing aliasing artifacts. Assuming that a registered high-resolution texture image is available, the cost-volume filtering framework is applied to this problem. Our experiments show that cost-volume filtering can generate the high-resolution depth map accurately and efficiently while preserving discontinuous object boundaries, which is often a challenge when various state-of-the-art algorithms are applied. Index Terms: Depth map super-resolution, cost-volume filtering, up-sampling

Citation Context

...ow-resolution image formation process. Some algorithms in this class reconstruct a high-resolution depth map of a static scene by fusing multiple low-resolution depth maps that were observed together [2, 3]. More recently, learning-based single image super-resolution techniques were integrated into depth map ...
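
To make the cost-volume filtering idea in the abstract above concrete: one cost slice per depth hypothesis, each slice smoothed by an edge-preserving filter guided by the registered texture image, then a per-pixel winner-take-all. In the sketch below a plain box filter stands in for the guided filter (so it is not edge-preserving), and the hypothesis count, truncation threshold, and window size are illustrative assumptions.

    import numpy as np
    from scipy.ndimage import uniform_filter, zoom

    def upsample_depth_costvolume(depth_lr, scale, hypotheses, tau=0.1, win=9):
        """Winner-take-all over filtered cost slices |D_up - h|, truncated at tau.

        A real implementation would filter each slice with a guided filter computed
        from the texture image; uniform_filter keeps the sketch self-contained.
        """
        depth_up = zoom(depth_lr, scale, order=1)                # bilinear upsampling
        costs = np.empty((len(hypotheses),) + depth_up.shape)
        for k, h in enumerate(hypotheses):
            slice_k = np.minimum(np.abs(depth_up - h), tau)      # truncated absolute difference
            costs[k] = uniform_filter(slice_k, size=win)         # stand-in for guided filtering
        best = np.argmin(costs, axis=0)                          # per-pixel winning hypothesis
        return hypotheses[best]

    # Toy usage: upsample a 16x16 depth map 4x over 64 evenly spaced depth hypotheses.
    rng = np.random.default_rng(0)
    depth_lr = 1.0 + 0.5 * (np.arange(16)[None, :] > 8) + 0.01 * rng.standard_normal((16, 16))
    hypotheses = np.linspace(0.8, 1.8, 64)
    depth_hr = upsample_depth_costvolume(depth_lr, scale=4, hypotheses=hypotheses)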

Nonrigid Surface Registration and Completion from RGBD Images

by Weipeng Xu, Mathieu Salzmann, Yongtian Wang, Yue Liu
"... Abstract. Nonrigid surface registration is a challenging problem that suffers from many ambiguities. Existing methods typically assume the availability of full volumetric data, or require a global model of the sur-face of interest. In this paper, we introduce an approach to nonrigid registration tha ..."
Abstract - Cited by 1 (0 self) - Add to MetaCart
Abstract. Nonrigid surface registration is a challenging problem that suffers from many ambiguities. Existing methods typically assume the availability of full volumetric data, or require a global model of the sur-face of interest. In this paper, we introduce an approach to nonrigid registration that performs on relatively low-quality RGBD images and does not assume prior knowledge of the global surface shape. To this end, we model the surface as a collection of patches, and infer the patch deformations by performing inference in a graphical model. Our repre-sentation lets us fill in the holes in the input depth maps, thus essentially achieving surface completion. Our experimental evaluation demonstrates the effectiveness of our approach on several sequences, as well as its ro-bustness to missing data and occlusions.
(Show Context)

Citation Context

...ed with depth sensors are known to be noisy and to suffer from missing data. As a consequence, many techniques perform shape completion and denoising by fusing multiple depth images of the same scene [30, 5]. This was further extended to incorporating appearance information in the fusion process [38, 24]. While the previous methods rely on the availability of multiple depth maps of a rigid scene, depth s...
