Results 1–10 of 31
Out-of-core tensor approximation of multidimensional matrices of visual data
 ACM Transactions on Graphics
, 2005
Abstract

Cited by 33 (5 self)
Tensor approximation is necessary to obtain compact multilinear models for multidimensional visual datasets. Traditionally, each multidimensional data item is represented as a vector. Such a scheme flattens the data and partially destroys the internal structures established throughout the multiple dimensions. In this paper, we retain the original dimensionality of the data items to more effectively exploit existing spatial redundancy and allow more efficient computation. Since the size of visual datasets can easily exceed the memory capacity of a single machine, we also present an out-of-core algorithm for higher-order tensor approximation. The basic idea is to partition a tensor into smaller blocks and perform tensor-related operations blockwise. We have successfully applied our techniques to three graphics-related data-driven models, including 6D bidirectional texture functions, 7D dynamic BTFs and 4D volume simulation sequences. Experimental results indicate that our techniques can not only process out-of-core data, but also achieve higher compression ratios and quality than previous methods.
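The block-wise idea in this abstract can be illustrated with a toy sketch: partition a 3D array into bricks and approximate each brick independently with a truncated higher-order SVD (Tucker decomposition), as if each brick were streamed in from disk. This is not the paper's exact algorithm; all function names, block sizes, and ranks below are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: move axis `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd_truncated(T, ranks):
    """Truncated higher-order SVD (Tucker): one factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Project mode onto its factor basis: core <- core x_mode U^T
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# Block-wise processing: each brick is approximated on its own, which is
# what makes an out-of-core (one brick in memory at a time) scheme possible.
rng = np.random.default_rng(0)
data = rng.standard_normal((8, 8, 8))
B = 4  # brick edge length (illustrative)
approx = np.empty_like(data)
for i in range(0, 8, B):
    for j in range(0, 8, B):
        for k in range(0, 8, B):
            brick = data[i:i+B, j:j+B, k:k+B]
            core, facs = hosvd_truncated(brick, (2, 2, 2))
            approx[i:i+B, j:j+B, k:k+B] = reconstruct(core, facs)
```

Keeping each data item as a tensor, rather than flattening it to a vector, is what lets the truncation exploit redundancy along every axis at once.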
Rapid High Quality Compression of Volume Data for Visualization
 Computer Graphics Forum
, 2001
Abstract

Cited by 31 (0 self)
Volume data sets resulting from, e.g., computerized tomography (CT) or magnetic resonance (MR) imaging modalities require enormous storage capacity even at moderate resolution levels. Such large files may require compression for processing in CPU memory, which, however, comes at the cost of decoding times and some loss in reconstruction quality with respect to the original data. For many typical volume visualization applications (rendering of volume slices, subvolumes of interest, or isosurfaces) only a part of the volume data needs to be decoded. Thus, efficient compression techniques are needed that provide random access and rapid decompression of arbitrary parts of the volume data. We propose a technique which is block based and operates in the wavelet transformed domain. We report performance results which compare favorably with previously published methods, yielding large reconstruction quality gains of about 6 to 12 dB in PSNR for a 512³ volume extracted from the Visible Human data set. In terms of compression, our algorithm compressed the data 6 times as much as the previous state-of-the-art block-based coder for a given PSNR quality.
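The random-access property described above comes from transforming and quantizing each block independently, so a single block can be decoded without touching the rest of the stream. A minimal 1D sketch (one level of a Haar transform, uniform quantization) conveys the idea; the paper's actual wavelet filters, block shapes, and coder are more sophisticated, and all names and parameters here are illustrative.

```python
def haar_fwd(x):
    """One level of the orthonormal Haar transform on an even-length list."""
    s = 2 ** -0.5
    avg = [(a + b) * s for a, b in zip(x[0::2], x[1::2])]
    dif = [(a - b) * s for a, b in zip(x[0::2], x[1::2])]
    return avg + dif

def haar_inv(y):
    h = len(y) // 2
    s = 2 ** -0.5
    out = []
    for a, d in zip(y[:h], y[h:]):
        out += [(a + d) * s, (a - d) * s]
    return out

def compress(signal, block=8, q=0.5):
    """Transform and quantize each block independently, so any block can be
    decoded on its own -- the random-access property."""
    blocks = [signal[i:i+block] for i in range(0, len(signal), block)]
    return [[round(c / q) for c in haar_fwd(b)] for b in blocks]

def decode_block(coded, idx, q=0.5):
    """Dequantize and inverse-transform just one block."""
    return haar_inv([c * q for c in coded[idx]])

signal = [float(i % 7) for i in range(32)]
coded = compress(signal)
view = decode_block(coded, 1)   # reconstructs only samples 8..15
```

A slice or isosurface query then touches only the blocks it intersects, which is why decode cost scales with the region of interest rather than the whole volume.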
Random-accessible compressed triangle meshes
 IEEE Transactions on Visualization and Computer Graphics
Abstract

Cited by 19 (4 self)
Abstract—With the exponential growth in size of geometric data, it is becoming increasingly important to make effective use of multilevel caches, limited disk storage, and bandwidth. As a result, recent work in the visualization community has focused either on designing sequential access compression schemes or on producing cache-coherent layouts of (uncompressed) meshes for random access. Unfortunately, combining these two strategies is challenging as they fundamentally assume conflicting modes of data access. In this paper, we propose a novel order-preserving compression method that supports transparent random access to compressed triangle meshes. Our decompression method selectively fetches from disk, decodes, and caches in memory requested parts of a mesh. We also provide a general mesh access API for seamless mesh traversal and incidence queries. While the method imposes no particular mesh layout, it is especially suitable for cache-oblivious layouts, which minimize the number of decompression I/O requests and provide high cache utilization during access to decompressed, in-memory portions of the mesh. Moreover, the transparency of our scheme enables improved performance without the need for application code changes. We achieve compression rates on the order of 20:1 and significantly improved I/O performance due to reduced data transfer. To demonstrate the benefits of our method, we implement two common applications as benchmarks. By using cache-oblivious layouts for the input models, we observe 2–6 times overall speedup compared to using uncompressed meshes. Index Terms—Mesh compression, random access, cache-coherent layouts, mesh data structures, external memory algorithms.
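The core mechanism — order-preserving, independently decodable chunks plus an offset index — can be sketched in a few lines: compress fixed-size runs of triangles separately and record where each run's bytes start, so any run can be fetched and decoded in isolation. This is a toy analogue, not the paper's mesh coder; the run size and function names are assumptions.

```python
import struct
import zlib

def build_compressed_blocks(triangles, block_size=64):
    """Compress fixed-size runs of triangles independently and record their
    byte offsets, so any run can be decoded without the others."""
    payload, index = bytearray(), []
    for i in range(0, len(triangles), block_size):
        run = triangles[i:i + block_size]
        raw = b"".join(struct.pack("<3i", *t) for t in run)   # 3 vertex ids each
        comp = zlib.compress(raw)
        index.append((len(payload), len(comp)))               # (offset, length)
        payload += comp
    return bytes(payload), index

def fetch_block(payload, index, b):
    """Random access: slice out one compressed run and decode only it."""
    off, n = index[b]
    raw = zlib.decompress(payload[off:off + n])
    return [struct.unpack_from("<3i", raw, k) for k in range(0, len(raw), 12)]
```

Because runs keep the original triangle order, a cache-oblivious layout of the input directly translates into few-block (hence few-I/O) access patterns, which is the synergy the abstract points at.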
HIERARCHICAL TENSOR APPROXIMATION OF MULTIDIMENSIONAL IMAGES
Abstract

Cited by 15 (1 self)
Visual data comprises multiscale and inhomogeneous signals. In this paper, we exploit these characteristics and develop an adaptive data approximation technique based on a hierarchical tensor-based transformation. In this technique, an original multidimensional image is transformed into a hierarchy of signals to expose its multiscale structures. The signal at each level of the hierarchy is further divided into a number of smaller tensors to expose its spatially inhomogeneous structures. These smaller tensors are further transformed and pruned using a collective tensor approximation technique. Experimental results indicate that our technique can achieve higher compression ratios than existing functional approximation methods, including wavelet transforms, wavelet packet transforms and single-level tensor approximation.
Rendering of 3D Wavelet Compressed Concentric Mosaic Scenery with Progressive Inverse Wavelet Synthesis (PIWS)
Abstract

Cited by 12 (4 self)
The concentric mosaics offer a quick solution to the construction and navigation of a virtual environment. To reduce the vast data amount of the concentric mosaics, a compression scheme based on a 3D wavelet transform has been proposed in a previous paper. In this work, we investigate the efficient implementation of the renderer. It is preferable not to expand the compressed bitstream as a whole, so that the memory consumption of the renderer can be reduced. Instead, only the data necessary to render the current view are accessed and decoded. The progressive inverse wavelet synthesis (PIWS) algorithm is proposed to provide the random data access and to reduce the calculation for the data access requests to a minimum. A mixed cache is used in PIWS, where the entropy-decoded wavelet coefficient, the intermediate result of lifting, and the fully synthesized pixel are all stored in the same memory unit because of the in-place calculation property of the lifting implementation. PIWS operates with a finite state machine, where each memory unit is attached with a state to indicate what type of content is currently stored. The computation saving achieved by PIWS is demonstrated with extensive experimental results.
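The in-place property that makes PIWS's mixed cache possible is easiest to see in the lifting scheme itself: each predict/update step overwrites a slot in the same buffer, so a coefficient, an intermediate lifting result, and a synthesized sample can all occupy one memory unit at different stages. A minimal integer Haar-lifting sketch (not the paper's filter bank or its cache state machine; names are illustrative):

```python
def lift_forward(x):
    """In-place integer Haar lifting: after the call, even slots hold
    approximations and odd slots hold details, in the same buffer."""
    for i in range(0, len(x) - 1, 2):
        x[i + 1] -= x[i]            # predict: detail = odd - even
        x[i] += x[i + 1] // 2       # update: approx = even + detail/2

def lift_inverse(x):
    """Undo the steps in reverse order; the transform is exactly invertible."""
    for i in range(0, len(x) - 1, 2):
        x[i] -= x[i + 1] // 2
        x[i + 1] += x[i]
```

Since no auxiliary buffer is ever allocated, a renderer can tag each slot with a state ("coefficient", "partially lifted", "pixel") and synthesize lazily, only where a view request lands, which is the essence of PIWS.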
Transform coding for hardware-accelerated volume rendering
 IEEE Transactions on Visualization and Computer Graphics
Abstract

Cited by 9 (0 self)
Abstract—Hardware-accelerated volume rendering using the GPU is now the standard approach for real-time volume rendering, although limited graphics memory can present a problem when rendering large volume data sets. Volumetric compression in which the decompression is coupled to rendering has been shown to be an effective solution to this problem; however, most existing techniques were developed in the context of software volume rendering, and all but the simplest approaches are prohibitive in a real-time hardware-accelerated volume rendering context. In this paper we present a novel block-based transform coding scheme designed specifically with real-time volume rendering in mind, such that the decompression is fast without sacrificing compression quality. This is made possible by consolidating the inverse transform with dequantization in such a way as to allow most of the reprojection to be precomputed. Furthermore, we take advantage of the freedom afforded by offline compression in order to optimize the encoding as much as possible while hiding this complexity from the decoder. In this context we develop a new block classification scheme which allows us to preserve perceptually important features in the compression. The result of this work is an asymmetric transform coding scheme that allows very large volumes to be compressed and then decompressed in real-time while rendering on the GPU.
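"Consolidating the inverse transform with dequantization" can be illustrated concretely: fold the per-coefficient quantizer step sizes into the inverse-transform basis once, offline, so that decoding reduces to a single matrix product on the raw quantized integers. A 1D DCT toy version (the paper works on volume blocks; the sizes and step values below are made up for illustration):

```python
import numpy as np

N = 4
# Orthonormal DCT-II basis (rows = basis functions)
k = np.arange(N)
B = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * N))
B[0] *= np.sqrt(1 / N)
B[1:] *= np.sqrt(2 / N)

qstep = np.array([1.0, 2.0, 4.0, 8.0])  # per-coefficient quantizer steps

# Consolidation: pre-scale the inverse basis by the step sizes, once.
B_deq = B.T * qstep[None, :]

def decode(qcoeffs):
    """One matrix product; dequantization has been folded into B_deq."""
    return B_deq @ qcoeffs

x = np.array([4.0, -1.0, 2.0, 0.5])
q = np.round((B @ x) / qstep)   # encoder: transform, then quantize
rec = decode(q)                 # decoder never multiplies by qstep explicitly
```

The expensive part (building `B_deq`) happens once at load time, while the per-sample decode path stays trivial, which is what makes the scheme asymmetric and GPU-friendly.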
Advanced Techniques for High Quality Multiresolution Volume Rendering
 Computers & Graphics
, 2004
Abstract

Cited by 9 (0 self)
We present several improvements for compression-based multiresolution rendering of very large volume data sets at interactive to real-time frame rates on standard PC hardware. The algorithm accepts scalar or multivariate data sampled on a regular grid as input. The input data is converted into a compressed hierarchical wavelet representation in a preprocessing step. During rendering, the wavelet representation is decompressed on-the-fly and rendered using hardware texture mapping. The level-of-detail used for rendering is adapted to the estimated screen-space error. To increase the rendering performance, additional visibility tests, such as empty-space skipping and occlusion culling, are applied. Furthermore, we discuss how to render the remaining multiresolution blocks efficiently using modern graphics hardware. Using a prototype implementation of this algorithm we are able to perform high-quality interactive rendering of large data sets on a single off-the-shelf PC.
Interactive Multiscale Tensor Reconstruction for Multiresolution Volume Visualization
Abstract

Cited by 6 (2 self)
Abstract—Large-scale and structurally complex volume datasets from high-resolution 3D imaging devices or computational simulations pose a number of technical challenges for interactive visual analysis. In this paper, we present the first integration of a multiscale volume representation based on tensor approximation within a GPU-accelerated out-of-core multiresolution rendering framework. Specific contributions include (a) a hierarchical brick-tensor decomposition approach for preprocessing large volume data, (b) a GPU-accelerated tensor reconstruction implementation exploiting CUDA capabilities, and (c) an effective tensor-specific quantization strategy for reducing data transfer bandwidth and out-of-core memory footprint. Our multiscale representation allows for the extraction, analysis and display of structural features at variable spatial scales, while adaptive level-of-detail rendering methods make it possible to interactively explore large datasets within a constrained memory footprint. The quality and performance of our prototype system is evaluated on large structurally complex datasets, including gigabyte-sized microtomographic volumes. Index Terms—GPU / CUDA, multiscale, tensor reconstruction, interactive volume visualization, multiresolution rendering
Lossy Dictionaries
 In ESA ’01: Proceedings of the 9th Annual European Symposium on Algorithms
, 2001
Abstract

Cited by 5 (2 self)
Bloom filtering is an important technique for space-efficient storage of a conservative approximation of a set S. The set stored may have up to some specified number of false positive members, but all elements of S are included. In this paper we consider lossy dictionaries that are also allowed to have false negatives, i.e., to leave out elements of S. The aim is to maximize the weight of included keys within a given space constraint. This relaxation allows a very fast and simple data structure making almost optimal use of memory. Being more time-efficient than Bloom filters, we believe our data structure to be well suited for replacing Bloom filters in some applications. Also, the fact that our data structure supports information associated with keys paves the way for new uses, as illustrated by an application in lossy image compression.
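The weight-maximizing idea can be sketched with a simple two-choice scheme: each key hashes to two candidate slots, heavier keys claim slots first, and keys that find both slots taken are simply dropped (the allowed false negatives). This greedy toy is not the paper's data structure, which is considerably more refined; all names and the slot count are assumptions.

```python
import hashlib

def _slots(key, m):
    """Two candidate table positions derived from one hash of the key."""
    h = hashlib.sha256(key.encode()).digest()
    return (int.from_bytes(h[:4], "big") % m,
            int.from_bytes(h[4:8], "big") % m)

def build_lossy_dict(items, m):
    """Greedy lossy dictionary over (key, weight, value) triples: heavier
    keys claim slots first; the rest are dropped, within m cells of space."""
    table = [None] * m
    for key, weight, value in sorted(items, key=lambda t: -t[1]):
        for s in _slots(key, m):
            if table[s] is None:
                table[s] = (key, value)
                break               # stored; otherwise the key is lost
    return table

def lookup(table, key):
    """At most two probes; may return None even for an inserted key."""
    for s in _slots(key, len(table)):
        if table[s] is not None and table[s][0] == key:
            return table[s][1]
    return None
```

Lookups cost at most two probes with no hash-chain traversal, which is the sense in which such structures can beat Bloom filters on time while also carrying associated values.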
3D wavelet compression and progressive inverse wavelet synthesis rendering of concentric mosaic
 IEEE Transactions on Image Processing
, 2002
Abstract

Cited by 4 (1 self)
Abstract—Using an array of photo shots, the concentric mosaic offers a quick way to capture and model a realistic three-dimensional (3D) environment. In this work, we compress the concentric mosaic image array with a 3D wavelet transform and coding scheme. Our compression algorithm and bitstream syntax are designed to ensure that a local view rendering of the environment requires only a partial bitstream, thereby eliminating the need to decompress the entire compressed bitstream before rendering. By exploiting the ladder-like structure of the wavelet lifting scheme, the progressive inverse wavelet synthesis (PIWS) algorithm is proposed to maximally reduce the computational cost of selective data accesses on such wavelet-compressed datasets. Experimental results show that the 3D wavelet coder achieves high compression performance. With the PIWS algorithm, a 3D environment can be rendered in real time from a compressed dataset. Index Terms—Compression, concentric mosaic, lifting, progressive inverse wavelet synthesis (PIWS), rate-distortion optimization, 3D wavelet.