Results 1–10 of 41
Semi-Automatic Generation of Transfer Functions for Direct Volume Rendering
In IEEE Symposium on Volume Visualization, 1998
Cited by 244 (7 self)
Abstract:
Although direct volume rendering is a powerful tool for visualizing complex structures within volume data, the size and complexity of the parameter space controlling the rendering process make generating an informative rendering challenging. In particular, the specification of the transfer function, the mapping from data values to renderable optical properties, is frequently a time-consuming and unintuitive task. Ideally, the data being visualized should itself suggest an appropriate transfer function that brings out the features of interest without obscuring them with elements of little importance. We demonstrate that this is possible for a large class of scalar volume data, namely that where the regions of interest are the boundaries between different materials. A transfer function which makes boundaries readily visible can be generated from the relationship between three quantities: the data value and its first and second directional derivatives along the gradient direction. ...
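The three quantities named in the abstract can be estimated from a sampled volume with plain central differences. The sketch below is an illustration under that assumption, not the paper's implementation: it computes the gradient magnitude f' and the second directional derivative along the gradient, f'' = (gᵀHg)/|g|², with H the Hessian.

```python
import numpy as np

def directional_derivatives(f, spacing=1.0):
    """Estimate, at every voxel of a 3-D scalar field, the first and second
    derivatives along the gradient direction using central differences.
    Illustrative sketch only; the paper's measurement scheme may differ."""
    g0, g1, g2 = np.gradient(f, spacing)          # gradient along the 3 axes
    grad_mag = np.sqrt(g0**2 + g1**2 + g2**2)     # f' = |grad f|
    # Second directional derivative along the gradient:
    #   f'' = (g^T H g) / |g|^2,  H = Hessian of f
    h00, h01, h02 = np.gradient(g0, spacing)
    h10, h11, h12 = np.gradient(g1, spacing)
    h20, h21, h22 = np.gradient(g2, spacing)
    num = (g0 * (h00*g0 + h01*g1 + h02*g2)
         + g1 * (h10*g0 + h11*g1 + h12*g2)
         + g2 * (h20*g0 + h21*g1 + h22*g2))
    second = num / np.maximum(grad_mag**2, 1e-12)  # guard against |g| = 0
    return grad_mag, second
```

A boundary-emphasizing transfer function would then assign high opacity where f' is large and f'' is near zero.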
Volume Illustration: Non-Photorealistic Rendering of Volume Models
IEEE Transactions on Visualization and Computer Graphics, 2001
Cited by 158 (14 self)
Abstract:
Accurately and automatically conveying the structure of a volume model is a problem not fully solved by existing volume rendering approaches. Physics-based volume rendering approaches create images which may match the appearance of translucent materials in nature, but may not embody important structural details. Transfer function approaches allow flexible design of the volume appearance, but generally require substantial hand tuning for each new data set in order to be effective. We introduce the volume illustration approach, combining the familiarity of a physics-based illumination model with the ability to enhance important features using non-photorealistic rendering techniques. Since features to be enhanced are defined on the basis of local volume characteristics rather than volume sample value, the application of volume illustration techniques requires less manual tuning than the design of a good transfer function. Volume illustration provides a flexible unified framework for enhancing structural perception of volume models through the amplification of features and the addition of illumination effects.
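As a rough illustration of enhancement driven by local volume characteristics, the sketch below scales per-sample opacity by the local gradient magnitude, a boundary-enhancement style modulation. The parameter names kc, ks, ke are illustrative placeholders, not values or identifiers taken from the paper.

```python
import numpy as np

def boundary_enhance(opacity, grad_mag, kc=0.5, ks=2.0, ke=1.0):
    """Modulate sample opacities by local gradient magnitude so that
    boundary regions (strong gradients) are emphasized.  A sketch of
    gradient-based feature enhancement; parameters are illustrative."""
    g = grad_mag / max(grad_mag.max(), 1e-12)      # normalize to [0, 1]
    return np.clip(opacity * (kc + ks * g**ke), 0.0, 1.0)
```

Because the modulation depends on a derived local quantity rather than the raw sample value, the same parameter setting tends to transfer between data sets more readily than a hand-tuned transfer function.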
A chronology of interpolation: From ancient astronomy to modern signal and image processing
Proceedings of the IEEE, 2002
Cited by 61 (0 self)
Abstract:
This paper presents a chronological overview of the developments in interpolation theory, from the earliest times to the present date. It brings out the connections between the results obtained in different ages, thereby putting the techniques currently used in signal and image processing into historical perspective. A summary of the insights and recommendations that follow from relatively recent theoretical as well as experimental studies concludes the presentation. Keywords: approximation, convolution-based interpolation, history, image processing, polynomial interpolation, signal processing, splines. "It is an extremely useful thing to have knowledge of the true origins of memorable discoveries, especially those that have been found not by accident but by dint of meditation. It is not so much that thereby history may attribute to each man his own discoveries and others should be encouraged to earn like commendation, as that the art of making discoveries should be extended by considering noteworthy examples of it."
Evaluation and Design of Filters Using a Taylor Series Expansion
IEEE Transactions on Visualization and Computer Graphics, 1997
Cited by 60 (6 self)
Abstract:
We describe a new method for analyzing, classifying, and evaluating filters that can be applied to interpolation filters as well as to arbitrary derivative filters of any order. Our analysis is based on the Taylor series expansion of the convolution sum. Our analysis shows the need for, and derives a method for, the normalization of derivative filter weights. Under certain minimal restrictions of the underlying function, we are able to compute tight absolute error bounds of the reconstruction process. We demonstrate the application of our methods to the analysis of the class of cubic BC-spline filters. As our technique is not restricted to interpolation filters, we are able to show that the Catmull-Rom spline filter and its derivative are the most accurate reconstruction and derivative filters, respectively, among the class of BC-spline filters. We also present a new derivative filter which features better spatial accuracy than any derivative BC-spline filter, and is optimal within our fra...
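For reference, the BC-spline family analyzed here is the standard two-parameter Mitchell-Netravali cubic kernel. The sketch below evaluates that well-known published formula directly; it is not code from the paper.

```python
def bc_spline(x, B, C):
    """Mitchell-Netravali BC-spline reconstruction kernel (support width 4).
    (B, C) = (0, 0.5) gives the Catmull-Rom spline; (1, 0) the cubic B-spline.
    Standard published formula, not code from the paper."""
    x = abs(x)
    if x < 1.0:
        return ((12 - 9*B - 6*C) * x**3
                + (-18 + 12*B + 6*C) * x**2
                + (6 - 2*B)) / 6.0
    if x < 2.0:
        return ((-B - 6*C) * x**3
                + (6*B + 30*C) * x**2
                + (-12*B - 48*C) * x
                + (8*B + 24*C)) / 6.0
    return 0.0
```

Catmull-Rom is interpolating (value 1 at 0, value 0 at nonzero integers), whereas the cubic B-spline is an approximating filter with value 2/3 at the origin; both satisfy partition of unity.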
Using distance maps for accurate surface representation in sampled volumes
In IEEE Vol. Vis, 1998
Cited by 59 (3 self)
Abstract:
Figure 1: Shaded, volume rendered spheres stored with two values per voxel: a value indicating the distance to the closest surface point, and a binary intensity value. The sphere in a) has radius 30 voxels and is stored in an array of size. The spheres in b), c), and d) have radii 3 voxels, 2 voxels and 1.5 voxels respectively and are stored in arrays of size. The surface normal used in surface shading was calculated using a 6-point central difference operator on the distance values. Remarkably smooth shading can be achieved for these low-resolution data volumes because the distance-to-closest-surface function varies smoothly across surfaces. (See color plate.) High-quality rendering and physics-based modeling in volume graphics have been limited because intensity-based volumetric data do not represent surfaces well. High spatial frequencies due to abrupt intensity changes at object surfaces result in jagged or terraced surfaces in rendered images. The use of a distance-to-closest-surface function to encode object surfaces is proposed. This function varies smoothly across surfaces and hence can be accurately reconstructed from sampled data. The zero-value isosurface of the distance map yields the object surface, and the derivative of the distance map yields the surface normal. Examples of rendered images are presented along with a new method for calculating distance maps from sampled binary data.
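The sphere example is easy to reproduce in sketch form: sample a signed distance function on a voxel grid and recover surface normals as the normalized gradient via central differences. This is an illustration only; the paper computes distance maps from sampled binary data rather than analytically.

```python
import numpy as np

def signed_distance_sphere(shape, center, radius):
    """Signed distance-to-closest-surface map for a sphere, sampled on a
    voxel grid.  Negative inside, zero on the surface, positive outside.
    (Analytic stand-in for distance maps derived from binary data.)"""
    zz, yy, xx = np.indices(shape).astype(float)
    r = np.sqrt((xx - center[0])**2 + (yy - center[1])**2 + (zz - center[2])**2)
    return r - radius

def central_diff_normals(d):
    """Surface normals as the normalized gradient of the distance map,
    using the 6-point central difference operator mentioned above."""
    gz, gy, gx = np.gradient(d)                    # axis order (z, y, x)
    n = np.stack([gx, gy, gz], axis=-1)
    mag = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(mag, 1e-12)
```

Because the distance function varies smoothly across the zero isosurface, the central-difference gradient yields smooth normals even on very low-resolution grids.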
A practical evaluation of popular volume rendering algorithms
In Proceedings of the 2000 IEEE Symposium on Volume Visualization
Cited by 25 (2 self)
Abstract:
This paper evaluates and compares four volume rendering algorithms that have become rather popular for rendering datasets described on uniform rectilinear grids: ray casting, splatting, shear-warp, and hardware-assisted 3D texture-mapping. In order to assess both the strengths and the weaknesses of these algorithms in a wide variety of scenarios, a set of real-life benchmark datasets with different characteristics was carefully selected. In the rendering, all algorithm-independent image synthesis parameters, such as the viewing matrix, transfer functions, and optical model, were kept constant to enable a fair comparison of the rendering results. Both image quality and computational complexity were evaluated and compared, with the aim of providing both researchers and practitioners with guidelines on which algorithm is most suited to which scenario. Our analysis also indicates the current weaknesses in each algorithm's pipeline, and possible solutions to these, as well as pointers for future research, are offered.
Ray Casting Architectures for Volume Visualization
1999
Cited by 21 (2 self)
Abstract:
Real-time visualization of large volume datasets demands high-performance computation, pushing the storage, processing, and data communication requirements to the limits of current technology. General-purpose parallel processors have been used to visualize moderate-size datasets at interactive frame rates; however, the cost and size of these supercomputers inhibit their widespread use for real-time visualization. This paper surveys several special-purpose architectures that seek to render volumes at interactive rates. These specialized visualization accelerators have cost, performance, and size advantages over parallel processors. All architectures implement ray casting using parallel and pipelined hardware. We introduce a new metric that normalizes performance to compare these architectures. The architectures included in this survey are VOGUE, VIRIM, Array Based Ray Casting, EM-Cube, and VIZARD II. We also discuss future applications of special-purpose accelerators.
Reconstruction Error Characterization and Control: A Sampling Theory Approach
IEEE Transactions on Visualization and Computer Graphics, 1996
Cited by 20 (3 self)
Abstract:
Reconstruction is a prerequisite whenever a discrete signal needs to be resampled as a result of transformations such as texture mapping, image manipulation, volume slicing, and rendering. We present a new method for the characterization and measurement of reconstruction error in the spatial domain. Our method uses the classical Shannon sampling theorem as a basis to develop error bounds. We use this formulation to provide, for the first time, an efficient way to guarantee an error bound at every point by varying the size of the reconstruction filter. We go further to support position-adaptive reconstruction and data-adaptive reconstruction, which adjust the filter size to the location of the reconstruction point and to the data values in its vicinity. We demonstrate the effectiveness of our methods with 1D signals, 2D signals (images), and 3D signals (volumes). ...
Mastering Windows: Improving Reconstruction
In Proc. of the IEEE/ACM SIGGRAPH Volume Visualization and Graphics Symposium, 2000
Cited by 17 (7 self)
Abstract:
Ideal reconstruction filters, for function or arbitrary derivative reconstruction, have to be bounded in order to be practicable, since they are infinite in their spatial extent. This can be accomplished by multiplying them with windowing functions. In this paper we discuss and assess the quality of commonly used windows and show that most of them are unsatisfactory in terms of numerical accuracy. The best-performing windows are the Blackman, Kaiser and Gaussian windows. The latter two are particularly useful since both have a parameter to control their shape, which, on the other hand, requires finding appropriate values for these parameters. We show how to derive optimal parameter values for Kaiser and Gaussian windows using a Taylor series expansion of the convolution sum. Optimal values for function and first derivative reconstruction for window widths of two, three, four and five are presented explicitly. Keywords: ideal reconstruction, wind...
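A windowed ideal reconstruction filter of the kind discussed here can be sketched as a Kaiser-windowed sinc. The width and beta values below are arbitrary placeholders for illustration, not the optimal parameters the paper derives.

```python
import numpy as np

def windowed_sinc(x, width=4.0, beta=8.0):
    """Kaiser-windowed sinc reconstruction kernel (sketch).
    `width` is the total filter support and `beta` the Kaiser shape
    parameter; both are illustrative, not the paper's derived optima."""
    x = np.asarray(x, dtype=float)
    half = width / 2.0
    inside = np.abs(x) < half
    w = np.zeros_like(x)
    # Kaiser window: I0(beta * sqrt(1 - (x/half)^2)) / I0(beta) on the support
    arg = beta * np.sqrt(np.clip(1.0 - (x[inside] / half)**2, 0.0, 1.0))
    w[inside] = np.i0(arg) / np.i0(beta)
    return np.sinc(x) * w        # numpy's sinc is sin(pi x) / (pi x)
```

Truncating the sinc abruptly (a rectangular window) causes heavy ringing; smoother windows such as Kaiser trade a little main-lobe sharpness for far better numerical accuracy, which is what the paper quantifies.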
An Anti-Aliasing Technique for Splatting
1997
Cited by 14 (1 self)
Abstract:
Splatting is a popular direct volume rendering algorithm. However, the algorithm does not correctly render cases where the volume sampling rate is higher than the image sampling rate (e.g., more than one voxel maps into a pixel). This situation arises with orthographic projections of high-resolution volumes, as well as with perspective projections of volumes of any resolution. The result is potentially severe spatial and temporal aliasing artifacts. Some volume ray casting algorithms avoid these artifacts by employing reconstruction kernels which vary in width as rays diverge. Unlike ray casting algorithms, existing splatting algorithms do not have an equivalent mechanism for avoiding these artifacts. In this paper we propose such a mechanism, which delivers high-quality splatted images and has the potential for a very efficient hardware implementation.