Results 1 - 2 of 2
Sampling Based Scene-Space Video Processing
Abstract
Cited by 1 (1 self)
Figure 1: Single frames from video results created with our sampling-based scene-space video processing framework. It enables fundamental video applications such as denoising (left) as well as new artistic results such as action shots (center) and virtual aperture effects (right). Our approach is robust to unavoidable inaccuracies in 3D information, and can be used on casually recorded, moving video.

Many compelling video processing effects can be achieved if per-pixel depth information and 3D camera calibrations are known. However, the success of such methods is highly dependent on the accuracy of this “scene-space” information. We present a novel, sampling-based framework for processing video that enables high-quality scene-space video effects in the presence of inevitable errors in depth and camera pose estimation. Instead of trying to improve the explicit 3D scene representation, the key idea of our method is to exploit the high redundancy of approximate scene information that arises because most scene points are visible multiple times across many frames of video. Based on this observation, we pro- ...
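The redundancy idea in the abstract can be illustrated with a minimal Python sketch: reproject a 3D scene point into every frame, gather the color observed at each projection, and aggregate the samples robustly so that outliers caused by depth/pose errors are suppressed. The helper names (`project`, `gather_samples`, `robust_color`) are hypothetical, occlusion handling is omitted, and the median is a simple stand-in for the paper's actual filtering:

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D point X into a pinhole camera with intrinsics K and pose (R, t)."""
    x = K @ (R @ X + t)
    return x[:2] / x[2]

def gather_samples(X, frames, cams):
    """Collect color observations of scene point X across all frames.

    frames: list of HxWx3 images; cams: list of (K, R, t) tuples.
    A real system would also check visibility/occlusion; omitted here.
    """
    samples = []
    for img, (K, R, t) in zip(frames, cams):
        u, v = project(K, R, t, X)
        ui, vi = int(round(u)), int(round(v))
        if 0 <= vi < img.shape[0] and 0 <= ui < img.shape[1]:
            samples.append(img[vi, ui])
    return np.array(samples)

def robust_color(samples):
    """Aggregate redundant observations; the per-channel median suppresses
    outlier samples arising from inaccurate depth or camera pose."""
    return np.median(samples, axis=0)
```

With many frames observing the same point, even a sizable fraction of bad samples leaves the median close to the true color, which is the robustness property the abstract claims.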
Efficient GPU Based Sampling for Scene-Space Video Processing
Abstract
We describe a method to efficiently collect and filter a large set of 2D pixel observations of unstructured 3D points, with applications to scene-space-aware video processing. One of the main challenges in scene-space video processing is to achieve reasonable computation time despite the very large volumes of data, often on the order of billions of pixels. The bottleneck is determining a suitable set of candidate samples used to compute each output video pixel color. These samples are observations of the same 3D point, and must be gathered from a large number of candidate pixels by volumetric 3D queries in scene-space. Our approach takes advantage of the spatial and temporal continuity inherent to video to greatly reduce the candidate set of samples by solving 3D volumetric queries directly on a series of 2D projections, using out-of-core data streaming and an efficient GPU producer-consumer scheme that maximizes hardware utilization by exploiting memory locality. Our system is capable of processing over a trillion pixel samples, enabling various scene-space video processing applications on full HD video output with hundreds of frames and processing times on the order of a few minutes.
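The core trick described here, solving a 3D volumetric query on a 2D projection, can be sketched in a few lines of Python. The idea: instead of testing every pixel of every frame against a 3D query sphere, project the sphere into the frame and test only the pixels inside its 2D footprint. This is a rough single-frame, CPU illustration under simplifying assumptions (camera at the origin, pinhole intrinsics `K`, a per-pixel depth map), not the paper's GPU streaming pipeline:

```python
import numpy as np

def candidates_in_frame(depth, K, center, radius):
    """Return pixels whose back-projected 3D points lie within `radius`
    of `center`, testing only the 2D footprint of the query sphere.

    depth:  HxW depth map for one frame (camera at origin, assumption)
    K:      3x3 pinhole intrinsics
    center: 3D query point in camera coordinates
    """
    x = K @ center
    u, v, z = x[0] / x[2], x[1] / x[2], x[2]
    # Approximate 2D radius of the sphere's projection (pinhole model),
    # padded by one pixel to be conservative.
    r2d = int(np.ceil(K[0, 0] * radius / z)) + 1
    H, W = depth.shape
    Kinv = np.linalg.inv(K)
    hits = []
    for vi in range(max(0, int(v) - r2d), min(H, int(v) + r2d + 1)):
        for ui in range(max(0, int(u) - r2d), min(W, int(u) + r2d + 1)):
            # Back-project pixel (ui, vi) using its stored depth.
            X = depth[vi, ui] * (Kinv @ np.array([ui, vi, 1.0]))
            if np.linalg.norm(X - center) <= radius:
                hits.append((ui, vi))
    return hits
```

Only the pixels inside the projected disk are ever touched, so the candidate set shrinks from the whole frame to a small window; the paper additionally streams frames out-of-core and runs the per-pixel test in a GPU producer-consumer scheme.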