Results 11 - 20 of 36
Accelerating defocus blur magnification
Abstract - Cited by 2 (0 self)
Figure 1: Real-world example of the steps of our algorithm. We estimate the blur at edge locations in the image (b), then we interpolate the values to close the gaps (c). This blur map can be used to magnify the defocus blur (d).
A shallow depth-of-field is often used as a creative element in photographs. This, however, comes at the cost of expensive and heavy camera equipment, such as large-sensor DSLR bodies and fast lenses. In contrast, cheap small-sensor cameras with fixed lenses usually exhibit a larger depth-of-field than desirable. In this case a computational solution suggests itself, since a shallow depth-of-field cannot be achieved by optical means. One possibility is to algorithmically increase the defocus blur already present in the image. Yet, existing algorithmic solutions to this problem suffer from poor performance due to its ill-posedness: the amount of defocus blur can be estimated at edges only; homogeneous areas contain no such information. However, to magnify the defocus blur we need to know the amount of blur at every pixel position. Estimating it requires solving an optimization problem with many unknowns. We propose a faster way to propagate the amount of blur from the edges to the entire image by solving the optimization problem on a small scale, followed by edge-aware upsampling using the original image as guide. The resulting approximate defocus map can be used to synthesize images with a shallow depth-of-field of quality comparable to the original approach, as demonstrated by our experimental results.
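The edge-aware guided upsampling this abstract relies on can be illustrated with a minimal 1-D joint bilateral upsampling sketch. This is a toy stand-in, not the authors' implementation; the function name, Gaussian weights, and parameters are illustrative assumptions:

```python
import math

def joint_bilateral_upsample_1d(low, guide, sigma_s=1.0, sigma_r=0.1):
    """Upsample `low` to len(guide) samples. Each low-res neighbor is
    weighted by spatial distance AND by similarity of the high-res guide
    signal, so values do not bleed across guide edges (edge-aware)."""
    scale = len(guide) / len(low)
    out = []
    for i, g in enumerate(guide):
        x = i / scale  # position of pixel i in low-res coordinates
        num = den = 0.0
        for j, v in enumerate(low):
            # guide value at the high-res position corresponding to j
            gj = guide[min(int(j * scale), len(guide) - 1)]
            w = math.exp(-((x - j) ** 2) / (2 * sigma_s ** 2)
                         - ((g - gj) ** 2) / (2 * sigma_r ** 2))
            num += w * v
            den += w
        out.append(num / den)
    return out

guide = [0.0] * 8 + [1.0] * 8   # high-res image with one sharp edge
low = [0.2, 0.2, 0.8, 0.8]      # coarse blur map (4 samples)
up = joint_bilateral_upsample_1d(low, guide)
```

Because the range term suppresses contributions from across the guide edge, the upsampled blur map stays sharp at the discontinuity even though the low-resolution map has only four samples.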
Joint Geodesic Upsampling of Depth Images
, 2013
Abstract - Cited by 2 (0 self)
We propose an algorithm utilizing geodesic distances to upsample a low-resolution depth image using a registered high-resolution color image. Specifically, it computes the depth for each pixel in the high-resolution image using geodesic paths to the pixels whose depths are known from the low-resolution one. Though this is closely related to the all-pairs shortest-path problem, which has O(n² log n) complexity, we develop a novel approximation algorithm whose complexity grows linearly with the image size, achieving real-time performance. We compare our algorithm with the state of the art on a benchmark dataset and show that our approach provides more accurate depth upsampling with fewer artifacts. In addition, we show that the proposed algorithm is well suited for upsampling depth images using binary edge maps, an important sensor-fusion application.
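In 1-D, the core idea, assigning each pixel the depth of its geodesically nearest seed, where paths that cross color edges are expensive, can be sketched with two linear sweeps. This toy version (hypothetical names and cost weighting, not the paper's algorithm) conveys why a sweep-based approximation can run in time linear in the image size:

```python
def geodesic_upsample_1d(seeds, color, alpha=10.0):
    """seeds: dict {index: depth} of known low-res samples.
    Each pixel takes the depth of the seed with the smallest geodesic
    distance, where stepping from pixel i-1 to i costs
    1 + alpha * |color[i] - color[i-1]| (color edges block paths)."""
    n = len(color)
    dist = [float("inf")] * n
    depth = [0.0] * n
    for i, d in seeds.items():
        dist[i], depth[i] = 0.0, d
    # In 1-D one forward and one backward sweep reach every pixel;
    # we iterate twice for safety. Each sweep is O(n).
    for _ in range(2):
        for i in range(1, n):                 # left-to-right
            c = 1.0 + alpha * abs(color[i] - color[i - 1])
            if dist[i - 1] + c < dist[i]:
                dist[i], depth[i] = dist[i - 1] + c, depth[i - 1]
        for i in range(n - 2, -1, -1):        # right-to-left
            c = 1.0 + alpha * abs(color[i] - color[i + 1])
            if dist[i + 1] + c < dist[i]:
                dist[i], depth[i] = dist[i + 1] + c, depth[i + 1]
    return depth

color = [0.0] * 5 + [1.0] * 5          # guide with one color edge
depth = geodesic_upsample_1d({1: 2.0, 8: 5.0}, color)
```

The seed on the far side of the color edge never wins on the near side, because crossing the edge costs more than any same-side path, which is exactly how guide edges keep depth discontinuities sharp.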
Spatio-temporal geometry fusion for multiple hybrid cameras using moving least squares surfaces
- CGF (Eurographics)
, 2014
Abstract - Cited by 2 (2 self)
Figure 1: a) Combined raw geometries obtained by a calibrated setup of two hybrid color+depth cameras rendered in green and red respectively. b) Result of geometry fusion obtained by our MLS-based approach. c) Textured geometry from a). Note the numerous visual artifacts due to the inaccurate and incomplete geometry, especially near depth discontinuities. d) Our optimized textured geometry from b). Multiview reconstruction aims at computing the geometry of a scene observed by a set of cameras. Accurate 3D reconstruction of dynamic scenes is a key component in a large variety of applications, ranging from special effects to telepresence and medical imaging. In this paper we propose a method based on Moving Least Squares surfaces which robustly and efficiently reconstructs dynamic scenes captured by a set of hybrid color+depth cameras. Our reconstruction provides spatio-temporal consistency and seamlessly fuses color and geometric information. We illustrate our formulation on a variety of real sequences and demonstrate that it favorably compares to state-of-the-art methods.
Layer depth denoising and completion for structured-light RGB-D cameras
- In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
, 2013
Abstract - Cited by 2 (1 self)
The recent popularity of structured-light depth sensors has enabled many new applications, from gesture-based user interfaces to 3D reconstruction. The quality of the depth measurements of these systems, however, is far from perfect. Some depth values can have significant errors, while others can be missing altogether. The uncertainty in depth measurements among these sensors can significantly degrade the performance of any subsequent vision processing. In this paper, we propose a novel probabilistic model to capture various types of uncertainty in the depth measurement process of structured-light systems. The key to our model is the use of depth layers to account for the differences between foreground objects and the background scene, the missing-depth-value phenomenon, and the correlation between color and depth channels. The depth-layer labeling is solved as a maximum a posteriori estimation problem, and a Markov Random Field attuned to the uncertainty in measurements is used to spatially smooth the labeling process. Using the depth-layer labels, we propose a depth correction and completion algorithm that outperforms other techniques in the literature.
DCSH - Matching patches in RGBD images
- In: IEEE International Conference on Computer Vision (ICCV)
, 2013
Abstract - Cited by 2 (0 self)
We extend patch-based methods to work on patches in 3D space. We start with Coherency Sensitive Hashing [12] (CSH), which is an algorithm for matching patches between two RGB images, and extend it to work with RGBD images. This is done by warping all 3D patches to a common virtual plane in which CSH is performed. To avoid noise due to warping patches of various normals and depths, we estimate a group of dominant planes and compute CSH on each plane separately, before merging the matching patches. The result is DCSH, an algorithm that matches world (3D) patches in order to guide the search for image-plane matches. An independent contribution is an extension of CSH, which we term Social-CSH. It allows a major speedup of the k-nearest-neighbor (kNN) version of CSH: its runtime grows linearly, rather than quadratically, in k. Social-CSH is used as a subcomponent of DCSH when many NNs are required, as in the case of image denoising. We show the benefits of using depth information for image reconstruction and image denoising, demonstrated on several RGBD images.
Temporal depth video enhancement based on intrinsic static structure
- In: Proc. IEEE Int. Conf. Image Process.
, 2014
Abstract - Cited by 1 (1 self)
Depth video enhancement is an essential preprocessing step for various 3D applications. Despite extensive studies of spatial enhancement, effective temporal enhancement that both strengthens temporal consistency and keeps correct depth variation needs further research. In this paper, we propose a novel method to enhance depth video by blending the raw depth frame with an estimated intrinsic static structure, which defines the static structure of the captured scene and is estimated iteratively by a probabilistic generative model from sequentially incoming depth frames. Our experimental results show that the proposed method is effective in both static and dynamic scenes and is compatible with various kinds of depth videos. We demonstrate that superior performance can be achieved in comparison with existing temporal enhancement approaches.
Index Terms — depth video enhancement, temporal enhancement, probabilistic model, variational approximation
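One plausible reading of the blending idea, maintain a per-pixel running estimate of the static scene and reinforce it only with consistent measurements, can be sketched as follows. This is a toy stand-in with hypothetical names and thresholds, not the paper's probabilistic generative model:

```python
def update_static_structure(static, conf, frame, noise=0.05):
    """Per-pixel running estimate of static scene depth.
    - missing measurement (0): keep the current estimate
    - measurement close to the estimate: fold it in (running average)
    - large deviation (likely dynamic content): leave estimate untouched
    Returns updated (static, confidence-count) lists."""
    out_s, out_c = [], []
    for s, c, z in zip(static, conf, frame):
        if z == 0:                         # hole in the depth frame
            out_s.append(s); out_c.append(c)
        elif abs(z - s) < 3 * noise:       # consistent with static scene
            out_s.append((c * s + z) / (c + 1)); out_c.append(c + 1)
        else:                              # moving object: do not absorb
            out_s.append(s); out_c.append(c)
    return out_s, out_c

static, conf = [1.0, 2.0], [1.0, 1.0]
s2, c2 = update_static_structure(static, conf, [1.02, 5.0])
```

An enhanced output frame would then blend each raw depth value toward the static estimate wherever the two agree, which suppresses temporal flicker without smearing genuinely moving geometry.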
Depth map up-sampling using cost-volume filtering
- In: Proc. IVMSP Workshop
, 2013
Abstract - Cited by 1 (1 self)
Depth maps captured by active sensors (e.g., ToF cameras and Kinect) typically suffer from poor spatial resolution, a considerable amount of noise, and missing data. To overcome these problems, we propose a novel depth map up-sampling method which increases the resolution of the original depth map while effectively suppressing aliasing artifacts. Assuming that a registered high-resolution texture image is available, the cost-volume filtering framework is applied to this problem. Our experiments show that cost-volume filtering can generate the high-resolution depth map accurately and efficiently while preserving discontinuous object boundaries, which is often a challenge when various state-of-the-art algorithms are applied.
Index Terms — Depth map super-resolution, cost-volume filtering, up-sampling
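A stripped-down 1-D version conveys the cost-volume mechanism: build one cost slice per depth hypothesis, aggregate each slice over a window, then take the per-pixel winner. The box filter below is a deliberate simplification; the framework the abstract refers to would use an edge-preserving (guided) filter on each slice, which is what preserves object boundaries. Names and parameters here are illustrative:

```python
def cost_volume_filter_1d(noisy, labels, radius=2):
    """Winner-take-all over a filtered cost volume: for each candidate
    label d, cost(i, d) = |noisy[i] - d|; averaging costs over a window
    makes the per-pixel decision robust to isolated noise."""
    n = len(noisy)
    best = [None] * n
    best_cost = [float("inf")] * n
    for d in labels:                       # one cost slice per hypothesis
        raw = [abs(v - d) for v in noisy]
        for i in range(n):                 # box-filter the slice
            lo, hi = max(0, i - radius), min(n, i + radius + 1)
            agg = sum(raw[lo:hi]) / (hi - lo)
            if agg < best_cost[i]:         # winner-take-all
                best_cost[i] = agg
                best[i] = d
    return best

noisy = [1.0, 1.1, 0.9, 5.0, 1.0, 1.05, 3.0, 2.9, 3.1, 3.0]
labels = [1.0, 3.0, 5.0]
best = cost_volume_filter_1d(noisy, labels)
```

The isolated 5.0 spike is outvoted by its window and snaps back to the surrounding label; near the 1→3 step, a plain box filter bleeds across the boundary, which is exactly the failure mode an edge-preserving filter avoids.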
Nonrigid Surface Registration and Completion from RGBD Images
Abstract - Cited by 1 (0 self)
Abstract. Nonrigid surface registration is a challenging problem that suffers from many ambiguities. Existing methods typically assume the availability of full volumetric data, or require a global model of the surface of interest. In this paper, we introduce an approach to nonrigid registration that operates on relatively low-quality RGBD images and does not assume prior knowledge of the global surface shape. To this end, we model the surface as a collection of patches, and infer the patch deformations by performing inference in a graphical model. Our representation lets us fill in the holes in the input depth maps, thus essentially achieving surface completion. Our experimental evaluation demonstrates the effectiveness of our approach on several sequences, as well as its robustness to missing data and occlusions.
Exploiting Shading Cues in Kinect IR Images for Geometry Refinement
Abstract - Cited by 1 (0 self)
In this paper, we propose a method to refine the geometry of 3D meshes from Kinect Fusion by exploiting shading cues captured by the infrared (IR) camera of the Kinect. A major benefit of using the Kinect IR camera instead of an RGB camera is that the IR images captured by the Kinect are narrow-band images which filter out most undesired ambient light, making our system robust to natural indoor illumination. We define a near-light IR shading model which describes the captured intensity as a function of surface normals, albedo, lighting direction, and the distance between the light source and surface points. To resolve the ambiguity in our model between normals and distance, we utilize an initial 3D mesh from Kinect Fusion and multi-view information to reliably estimate surface details that were not reconstructed by Kinect Fusion. Our approach operates directly on a 3D mesh model for geometry refinement. The effectiveness of our approach is demonstrated through several challenging real-world examples.
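A near-light Lambertian model of the general form the abstract describes (intensity as a function of normal, albedo, light direction, and distance) might look like the following; the paper's exact model may differ, so treat this as an assumed textbook form with hypothetical names:

```python
def near_light_intensity(albedo, normal, light_pos, point):
    """Assumed near-light Lambertian model: intensity falls off with the
    inverse-square distance to the point source and with the cosine
    between the surface normal and the direction toward the light."""
    dx = [l - p for l, p in zip(light_pos, point)]   # vector to light
    r2 = sum(d * d for d in dx)                      # squared distance
    r = r2 ** 0.5
    ndotl = sum(n * d / r for n, d in zip(normal, dx))  # cos(angle)
    return albedo * max(0.0, ndotl) / r2
```

The normal/distance ambiguity the abstract mentions is visible here: scaling the distance and tilting the normal can trade off against each other in the observed intensity, which is why an initial mesh and multi-view constraints are needed to pin the solution down.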
Similarity-Aware Patchwork Assembly for Depth Image Super-Resolution
Abstract - Cited by 1 (0 self)
This paper describes a patchwork assembly algorithm for depth image super-resolution. An input low-resolution depth image is disassembled into parts by matching similar regions on a set of high-resolution training images, and a super-resolution image is then assembled using these corresponding matched counterparts. We convert the super-resolution problem into a Markov Random Field (MRF) labeling problem, and propose a unified formulation embedding (1) the consistency between the resolution-enhanced image and the original input, (2) the similarity of disassembled parts with the corresponding regions on training images, (3) the depth smoothness in local neighborhoods, (4) the additional geometric constraints from self-similar structures in the scene, and (5) the boundary coincidence between the resolution-enhanced depth image and an optional aligned high-resolution intensity image. Experimental results on both synthetic and real-world data demonstrate that the proposed algorithm is capable of recovering high-quality depth images with ×4 resolution enhancement along each coordinate direction, and that it outperforms the state of the art [14] in both qualitative and quantitative evaluations.
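An MRF labeling formulation of this kind amounts to minimizing an energy of per-pixel data costs plus a smoothness penalty on neighbors. A generic skeleton, with the five listed terms collapsed into a single unary cost purely for illustration, is:

```python
def mrf_energy(labels, unary, pairs, lam=1.0):
    """Generic MRF labeling energy: sum of per-pixel data costs
    (unary[i][label]) plus lam times a smoothness penalty on the
    label difference of each neighboring pair (i, j)."""
    data = sum(unary[i][l] for i, l in enumerate(labels))
    smooth = sum(abs(labels[i] - labels[j]) for i, j in pairs)
    return data + lam * smooth

unary = [[0.0, 1.0], [0.0, 1.0], [1.0, 0.0]]  # cost of labels 0/1 per pixel
pairs = [(0, 1), (1, 2)]                      # neighbor pairs on a 1-D grid
e = mrf_energy([0, 0, 1], unary, pairs)
```

In the paper's setting the unary term would combine the five listed costs and the labels index candidate high-resolution patches; minimization would use a standard MRF solver (e.g. graph cuts or belief propagation) rather than the brute-force evaluation shown here.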