Results 1 - 10 of 10
Two-level volume rendering - fusing MIP and DVR
, 2000
Abstract

Cited by 28 (10 self)
In this paper we present a two-level approach for fusing direct volume rendering (DVR) and maximum-intensity projection (MIP) within a joint rendering method. Different structures within the dataset are rendered locally by either MIP or DVR on an object-by-object basis. Globally, the results of the subsequent object renderings are combined in a merging step (usually compositing, in our case). This makes it possible to select the most suitable technique for depicting each object within the data, while keeping the amount of information contained in the image at a reasonable level. This is especially useful when inner structures should be visualized together with semi-transparent outer parts, similar to the focus-and-context approach known from information visualization. We also present an implementation of our approach, which allows volumetric data to be explored using two-level rendering at interactive frame rates.
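The local/global split described above can be illustrated along a single ray. This is a minimal Python sketch, not the authors' implementation; the segment structure, the uniform per-object opacity `obj_alpha`, and the fixed DVR sample opacity are assumptions for illustration:

```python
# Sketch of two-level rendering along one ray: each object's samples
# are rendered locally with MIP or DVR, then the per-object results
# are merged globally by front-to-back compositing.

def mip(samples):
    # Local MIP: keep the maximum sample of the object.
    return max(samples)

def dvr(samples, alpha=0.5):
    # Local DVR: front-to-back alpha blending with a fixed
    # per-sample opacity (an assumption for this sketch).
    C, A = 0.0, 0.0
    for s in samples:
        C += (1.0 - A) * alpha * s
        A += (1.0 - A) * alpha
    return C

def two_level(ray_segments):
    # ray_segments: list of (method, samples, obj_alpha) tuples,
    # ordered front to back; obj_alpha is the object's opacity in
    # the global compositing step.
    C, A = 0.0, 0.0
    for method, samples, obj_alpha in ray_segments:
        local = mip(samples) if method == "mip" else dvr(samples)
        C += (1.0 - A) * obj_alpha * local
        A += (1.0 - A) * obj_alpha
    return C

# A MIP-rendered object in front of a DVR-rendered one:
print(two_level([("mip", [0.2, 0.9, 0.4], 0.5),
                 ("dvr", [0.6, 0.6], 0.5)]))
```

With a fully opaque single object, the ray reduces to plain MIP, which is a quick sanity check on the global merging step.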
Advanced Visualization Techniques for Vessel Investigation
, 2001
Abstract

Cited by 8 (4 self)
Different approaches to visualization techniques and segmentation methods for computed tomography angiography datasets are investigated, and the particular characteristics of this data are addressed. A global path-optimisation method for a reliability-enhanced vessel-tracking method is introduced. Furthermore, an interactive segmentation technique focusing on clinical use is proposed.
VOTS: VOlume doTS as a Point-Based Representation of Volumetric Data
Abstract

Cited by 3 (1 self)
We present Volume dots (Vots), a new primitive for volumetric data modelling, processing, and rendering. Vots are a point-based representation of volumetric data. An individual Vot is specified by the coefficients of a Taylor series expansion, i.e. the function value and higher-order derivatives at a specific point. A Vot does not merely represent a single sample point; it represents the underlying function within a region. With the Vots representation we have a more intuitive and high-level description of the volume data. This allows direct analytical examination and manipulation of volumetric datasets. Vots enable the representation of the underlying scalar function with a specified precision. User-centric importance sampling is also possible, i.e., unimportant volume parts are still present but are represented by just a few Vots. As a proof of concept, we show Maximum Intensity Projection based on Vots.
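The Taylor-series idea can be made concrete in one dimension. This is a hypothetical 1D sketch (real Vots store multi-dimensional partial derivatives together with a region of validity):

```python
from math import factorial

def eval_vot(center, coeffs, x):
    # Evaluate a 1D "Vot": a truncated Taylor expansion whose stored
    # coefficients coeffs[k] are the k-th derivatives of the
    # underlying function at `center`.
    d = x - center
    return sum(c * d ** k / factorial(k) for k, c in enumerate(coeffs))

# f(x) = x^2 around center 1: f(1)=1, f'(1)=2, f''(1)=2.
# The truncated series is exact for this polynomial:
print(eval_vot(1.0, [1.0, 2.0, 2.0], 3.0))  # 9.0
```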
An Implementation of Frequency Domain Volume Rendering Using the Hartley Transform
 In The Course of Special Topics in Computer Graphics
, 1999
Abstract

Cited by 1 (0 self)
Frequency domain volume rendering is a technique which allows projections of n^3-sized volume data to be generated in O(n^2 log n) time. This is achieved by exploiting the projection-slice theorem, which states that a projection in the spatial domain can be obtained by extracting a slice in the frequency domain. This paper first gives an overview of the theoretical principles and then concentrates on the Hartley transform as a means of changing between the spatial and frequency domains. The Hartley transform is more suitable in this context than the more familiar Fourier transform, since it generates real output for real data. As an extension, depth cueing to improve spatial perception is described. Finally, some results achieved by an actual implementation are given.
Keywords: volume rendering, Fourier transform, Hartley transform, projection-slice theorem, scientific visualization, depth cueing
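The projection-slice theorem this abstract relies on is easy to verify numerically with NumPy's FFT (a sketch using the Fourier transform; for real input, the Hartley transform is simply Re(F) - Im(F) of the Fourier transform, which is why it avoids complex arithmetic):

```python
import numpy as np

rng = np.random.default_rng(0)
vol = rng.random((8, 8, 8))          # a small synthetic n^3 volume

# Spatial-domain projection: integrate (sum) along the z axis.
direct = vol.sum(axis=2)

# Frequency-domain projection: the k_z = 0 slice of the 3D DFT is
# the 2D DFT of the z projection, so one inverse 2D FFT of that
# slice reproduces the projection -- O(n^2 log n) per view once the
# 3D transform has been precomputed.
F = np.fft.fftn(vol)
projected = np.fft.ifft2(F[:, :, 0]).real

print(np.allclose(direct, projected))  # True
```

For an arbitrary view direction the extracted slice is oblique and must be resampled in the frequency domain, which is where the practical difficulty of the method lies.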
A view-dependent approach to MIP for very large data
Abstract

Cited by 1 (0 self)
A simple and yet useful approach for visualizing a variety of structures from sampled data is Maximum Intensity Projection (MIP). Higher-valued structures of interest project over occluding structures. This can make MIP images difficult to interpret due to the loss of depth information. Animating around the data is one key way to try to resolve such ambiguities. The challenge is that MIP is inherently expensive, so high frame rates are difficult to achieve. Variations on the original MIP algorithm and classification can help to further alleviate ambiguities and provide improved image quality. Unfortunately, these improved techniques are even more expensive. In addition, they require substantial parameter searching and tweaking. As today's datasets grow ever larger, current methods allow only very limited interaction. We explore a view-dependent approach using concepts from image-based rendering. A novel multi-layered image representation storing scalar information is computed at a view sample and then warped to the user's view. We present algorithms using OpenGL to quickly compute MIP and its variations on commodity off-the-shelf graphics hardware, achieving near-interactive rates.
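For reference, the baseline MIP operator itself is a per-ray maximum; for an axis-aligned view it reduces to a single NumPy reduction. This sketches plain MIP only, not the paper's layered warping scheme:

```python
import numpy as np

rng = np.random.default_rng(1)
vol = rng.random((16, 16, 32))   # synthetic volume; z is the view axis

# MIP: each output pixel keeps the maximum sample along its ray.
mip = vol.max(axis=2)

print(mip.shape)  # (16, 16)
```

The expense the abstract refers to comes from arbitrary view directions, where each ray must be resampled through the volume instead of reading one memory-contiguous axis.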
View-independent Contour Culling of 3D Density Maps for Far-field Viewing of Isosurfaces
Abstract
In many applications, isosurfaces are the primary method for visualizing the structure of 3D density maps. We consider a common scenario in which the user views the isosurfaces from a distance and varies the level associated with the isosurface, as well as the view direction, to gain a sense of the general 3D structure of the density map. For many types of density data, the isosurfaces associated with a particular threshold may be nested and never visible during this type of viewing. In this paper, we discuss a simple, conservative culling method that avoids the generation of interior portions of isosurfaces at the contouring stage. Unlike existing methods that perform culling based on the current view direction, our culling is performed once for all views and requires no additional computation as the view changes. By precomputing a single visibility map, culling can be done at any isovalue with little overhead in contouring. We demonstrate the effectiveness of the algorithm on a range of biomedical data and discuss a practical application in online visualization.
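The interior-culling idea can be sketched for a single fixed isovalue with a flood fill from the volume boundary. This is a simplified illustration, not the paper's method, which precomputes one visibility map valid for all isovalues:

```python
import numpy as np
from collections import deque

def exterior_reachable(vol, iso):
    # Voxels below the isovalue that are 6-connected to the volume
    # boundary; isosurface pieces not adjacent to this region lie in
    # interior pockets and can be culled conservatively.
    below = vol < iso
    seen = np.zeros(vol.shape, dtype=bool)
    q = deque()
    for idx in np.ndindex(vol.shape):
        on_boundary = any(i == 0 or i == s - 1 for i, s in zip(idx, vol.shape))
        if on_boundary and below[idx]:
            seen[idx] = True
            q.append(idx)
    while q:
        x, y, z = q.popleft()
        for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0),
                           (0,-1,0), (0,0,1), (0,0,-1)):
            n = (x + dx, y + dy, z + dz)
            if all(0 <= c < s for c, s in zip(n, vol.shape)) \
                    and below[n] and not seen[n]:
                seen[n] = True
                q.append(n)
    return seen

# A dense 3x3x3 block with a hidden empty voxel at its center: the
# center is below the isovalue but unreachable from outside.
vol = np.zeros((5, 5, 5))
vol[1:4, 1:4, 1:4] = 2.0
vol[2, 2, 2] = 0.0
mask = exterior_reachable(vol, 1.0)
print(bool(mask[0, 0, 0]), bool(mask[2, 2, 2]))  # True False
```

Any isosurface cell bordering the unreached center pocket would be skipped at the contouring stage, which is exactly the kind of never-visible nested geometry the abstract describes.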
Feature Enhancement using Locally Adaptive Volume Rendering
Abstract
Classical direct volume rendering techniques accumulate color and opacity contributions using the standard volume rendering equation, approximated by alpha blending. However, such standard rendering techniques, often also aiming at visual realism, are not always adequate for efficient data exploration, especially when large opaque areas are present in a dataset, since such areas can occlude important features and make them invisible. On the other hand, the use of highly transparent transfer functions allows viewing all the features at once, but often makes these features barely visible. In this paper we introduce a new, straightforward rendering technique called locally adaptive volume rendering, which slightly modifies the traditional volume rendering equation in order to improve the visibility of features, independently of any transfer function. Our approach is fully automatic and based only on an initial binary classification of empty areas. This classification is used to dynamically adjust the opacity of the contributions per pixel, depending on the number of non-empty contributions to that pixel. As shown by our comparative study with standard volume rendering, this makes our rendering method much more suitable for interactive data exploration at a low extra cost. In this way, our method avoids feature visibility restrictions without relying on a transfer function and yet maintains visual similarity with standard volume rendering.
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Picture/Image Generation - Display Algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism
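One possible reading of the per-pixel adjustment can be sketched along a single ray. The 1/count modulation below is an assumption for illustration only; the abstract states just that opacity is adjusted from the number of non-empty contributions, not the exact formula:

```python
import numpy as np

def composite(colors, alphas):
    # Standard front-to-back alpha blending along one ray;
    # returns the accumulated opacity.
    C, A = 0.0, 0.0
    for c, a in zip(colors, alphas):
        C += (1.0 - A) * a * c
        A += (1.0 - A) * a
    return A

def adaptive_composite(colors, alphas, empty):
    # Locally adaptive sketch: scale each non-empty sample's opacity
    # by the count of non-empty samples on this ray, so pixels with
    # many occluding contributions become more transparent.
    n = max(1, int(np.count_nonzero(~empty)))
    mod = np.where(empty, alphas, alphas / n)
    return composite(colors, mod)

colors = np.array([1.0, 1.0, 1.0])
alphas = np.array([0.9, 0.9, 0.9])
empty = np.array([False, False, False])

# The adaptive ray ends up less opaque, letting deeper
# features show through:
print(composite(colors, alphas) > adaptive_composite(colors, alphas, empty))
```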
Per-pixel Opacity Modulation for Feature Enhancement in Volume Rendering
Abstract
Classical direct volume rendering techniques accumulate color and opacity contributions using the standard volume rendering equation, approximated by alpha blending. However, such standard rendering techniques, often also aiming at visual realism, are not always adequate for efficient data exploration, especially when large opaque areas are present in a dataset, since such areas can occlude important features and make them invisible. On the other hand, the use of highly transparent transfer functions allows viewing all the features at once, but often makes these features barely visible. In order to enhance feature visibility, we present in this paper a straightforward rendering technique that modifies the traditional volume rendering equation independently of any transfer function. Our approach is fully automatic and based on a function, called the relevance function, that quantifies the relative importance of each voxel in the final rendering. This function is subsequently used to dynamically adjust the opacity of the contributions per pixel. As shown by our comparative study with standard volume rendering, this makes our rendering method much more suitable for interactive data exploration at a low extra cost. In this way, our method avoids feature visibility restrictions without relying on a transfer function and yet maintains visual similarity with standard volume rendering.