Results 1–10 of 30
Hypergraph-Partitioning-Based Remapping Models for Image-Space-Parallel Direct Volume Rendering of Unstructured Grids
IEEE Transactions on Parallel and Distributed Systems, 2005
Abstract
Cited by 13 (5 self)
In this work, image-space-parallel direct volume rendering (DVR) of unstructured grids is investigated for distributed-memory architectures. A hypergraph-partitioning-based model is proposed for the adaptive screen partitioning problem in this context. The proposed model aims to balance the rendering loads of processors while trying to minimize the amount of data replication. In the parallel DVR framework we adopted, each data primitive is statically owned by its home processor, which is responsible for replicating its primitives on other processors. Two appropriate remapping models are proposed by enhancing the above model for use within this framework. These two remapping models aim to minimize the total volume of communication in data replication while balancing the rendering loads of processors. Based on the proposed models, a parallel DVR algorithm is developed. The experiments conducted on a PC cluster show that the proposed remapping models achieve better speedup values compared to the remapping models previously suggested for image-space-parallel DVR.
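A hypothetical sketch (not the paper's hypergraph model, whose details are not given in the abstract) of the replication cost such remapping models minimize: each primitive lives on a fixed home processor, and every other processor whose screen region needs it must receive a copy.

```python
# Hedged illustration: total communication volume incurred when each home
# processor replicates its primitives to the other processors that need them.
# Names and the toy data below are invented for the example.

def replication_volume(primitive_sizes, homes, needed_by):
    """Total bytes sent during data replication.

    primitive_sizes: {prim_id: size_in_bytes}
    homes:           {prim_id: home_processor}
    needed_by:       {prim_id: set of processors whose screen region needs it}
    """
    total = 0
    for prim, procs in needed_by.items():
        # The home already holds its copy; every other requester costs one send.
        total += primitive_sizes[prim] * len(procs - {homes[prim]})
    return total

sizes  = {"a": 100, "b": 40}
homes  = {"a": 0, "b": 1}
needed = {"a": {0, 1, 2}, "b": {1}}
print(replication_volume(sizes, homes, needed))  # "a" is sent to procs 1 and 2 -> 200
```

Balancing rendering load while keeping this quantity small is the trade-off the abstract describes.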
Fast data parallel polygon rendering
In Proceedings of Supercomputing '93, 1993
Abstract
Cited by 11 (3 self)
This paper describes a data parallel method for polygon rendering on a massively parallel machine. This method, based on a simple shading model, is targeted for applications which require very fast rendering of extremely large sets of polygons. Such sets are found in many scientific visualization applications. The renderer can handle arbitrarily complex polygons which need not be meshed. Issues involving load balancing are addressed and a data parallel load balancing algorithm is presented. The rendering and load balancing algorithms are implemented on both the CM-200 and the CM-5. Experimental results are presented. This rendering toolkit enables a scientist to display 3D shaded polygons directly from a parallel machine, avoiding the transmission of huge amounts of data to a post-processing rendering system.
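The paper presents its own data parallel load balancer for the CM-200/CM-5; the abstract gives no details, so shown here instead is a generic greedy heaviest-first assignment that illustrates the load-balancing goal of evening out per-processor rendering work.

```python
# Hedged sketch: greedy longest-processing-time-first assignment of work
# items (e.g. estimated polygon-batch costs) to processors. This is a
# standard technique, not the algorithm from the paper.
import heapq

def balance(costs, n_procs):
    """Assign each cost to the currently least-loaded processor, heaviest first."""
    heap = [(0, p) for p in range(n_procs)]          # (current load, processor)
    assignment = {p: [] for p in range(n_procs)}
    for cost in sorted(costs, reverse=True):
        load, p = heapq.heappop(heap)
        assignment[p].append(cost)
        heapq.heappush(heap, (load + cost, p))
    return assignment

work = [7, 5, 4, 3, 3, 2]
print(balance(work, 2))  # both processors end up with total load 12
```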
Robust and Efficient Surface Reconstruction from Contours
The Visual Computer, 2001
Abstract
Cited by 11 (0 self)
In this paper, we propose a new approach for surface recovery from planar sectional contours. The surface is reconstructed based on the so-called "Equal Importance Criterion", which suggests that every point in the region contributes equally to the reconstruction process. The problem is then formulated in terms of a partial differential equation, and the solution is efficiently calculated from a distance transform. To make the algorithm valid for different application purposes, both the isosurface and the primitive representations of the object surface are derived. The isosurface is constructed by interpolating between sectional distance transforms. The primitives are approximated by a Voronoi diagram transformation of the surface space. Isosurfaces have the advantage that subsequent geometric analysis of the object can be easily carried out, while the primitive representation is easy to visualize. The proposed technique allows for surface recovery at any desired resolution; thus, inherent problems due to correspondence, tiling, and branching are avoided.
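A hypothetical, dependency-free illustration of the interpolation idea: blend the signed distance transforms of two planar sections and take the zero level set as an intermediate contour. The brute-force distance computation and the tiny masks below are invented for the example and are not the paper's formulation.

```python
# Hedged sketch: interpolating between two contour sections via their
# signed distance fields (negative inside, positive outside).

def signed_distance(mask):
    """Brute-force signed distance on a small binary grid."""
    h, w = len(mask), len(mask[0])
    inside  = [(i, j) for i in range(h) for j in range(w) if mask[i][j]]
    outside = [(i, j) for i in range(h) for j in range(w) if not mask[i][j]]
    def dist(p, pts):
        return min(((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5 for q in pts)
    return [[-dist((i, j), outside) if mask[i][j] else dist((i, j), inside)
             for j in range(w)] for i in range(h)]

def interpolate(mask_a, mask_b, t):
    """Blend the two fields; cells where the blend is <= 0 are 'inside'."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    return [[(1 - t) * da[i][j] + t * db[i][j] <= 0
             for j in range(len(da[0]))] for i in range(len(da))]

small = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
big   = [[0, 1, 1, 0], [1, 1, 1, 1], [1, 1, 1, 1], [0, 1, 1, 0]]
mid = interpolate(small, big, 0.5)   # a shape between the two sections
```

Because the blend varies smoothly with t, intermediate sections can be extracted at any resolution, which is the property the abstract highlights.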
Volume visualizing high-resolution turbulence computations
Theor. Comput. Fluid Dyn., 1998
Abstract
Cited by 7 (3 self)
Using several volume visualization packages, including a new package we developed called Volsh, we investigate a 25 GB dataset from a 256³ computation of decaying quasi-geostrophic turbulence. We compare surface fitting and direct volume rendering approaches, as well as a number of techniques for producing feature-revealing spatial cues. We also study the pros and cons of using batch and interactive tools for visualizing the data and discuss the relative merits of using each approach. We find that each tool has its own advantages and disadvantages, and a combination of tools is most effective at exploring large four-dimensional scalar datasets. The resulting visualizations show several new phenomena in the dynamics of coherent vortices.
Network Requirements for 3D Flying in a Zoomable Brain Database
IEEE JSAC Special Issue on Gigabit Networking, 1995
Abstract
Cited by 5 (3 self)
... this paper's analysis, realistic presentation of these visualizations across computer networks will stress current and proposed gigabit networks. Image compression can reduce network loads, but widespread use of the visualizations will still require networks capable of sustaining terabits per second of throughput. The analysis techniques presented in this paper apply to other applications that can be described with analogous user requirements.
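A back-of-the-envelope check of the terabit claim, with hypothetical parameters (the abstract does not give the paper's exact figures): streaming pre-rendered frames is manageable, but shipping raw volume data per update is not.

```python
# Hedged arithmetic sketch; resolutions, bit depths, and rates below are
# illustrative assumptions, not values from the paper.

def required_gbit_per_s(width, height, bits_per_pixel, fps):
    """Uncompressed bandwidth for a stream of rendered frames."""
    return width * height * bits_per_pixel * fps / 1e9

# A hypothetical 1024x1024, 24-bit, 30 frame/s rendered stream:
print(required_gbit_per_s(1024, 1024, 24, 30))   # about 0.75 Gbit/s per view

# Shipping the raw voxels instead: a 2048^3 8-bit volume at 30 updates/s
voxels = 2048 ** 3
print(round(voxels * 8 * 30 / 1e12, 2), "Tbit/s")  # about 2.06 Tbit/s
```

The three-orders-of-magnitude gap between the two numbers is why compression alone, as the abstract notes, does not remove the need for terabit-class networks once raw data must move.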
Volume Graphics: Field-Based Modelling and Rendering
2002
Abstract
Cited by 4 (2 self)
The main contributions of this work are summarised as follows: a flexible and low-cost object modelling framework, with rendering methods, for intermixing discrete and continuous volume data. Image-swept volumes: a new modelling paradigm in which attribute fields of volume objects are defined by sweeping discrete image or volume templates along arbitrary trajectories. A projection-based texture mapping method for volume objects. A method for rendering Bézier volumes and free-form deformations of volume objects. vlib: a volume graphics API, including its detailed design and implementation. The field-based modelling framework addresses the limitations of using discrete data for representing volume objects. It not only results in very high quality images (with shadows, reflection and refraction) while supporting "traditional" volume graphics, which we demonstrate using several examples, but also frequently reduces the significant memory overhead that is normally associated
Multiresolution and hierarchical methods for the visualization of volume data
Future Generation Computer Systems, 1999
Abstract
Cited by 4 (0 self)
As three-dimensional data sets resulting from simulations or measurements become available at ever-growing sizes, the need for visualization tools which allow the inspection and analysis of these data sets at interactive rates is increasing. One way to deal with the complexity is compression of the data in such a way that the number of cells which have to be processed by the visualization mapping is reduced. Since this compression will be lossy, it is up to the user to choose between quality and speed. The decision will usually be made interactively, requiring fast access to a complete hierarchy of representations of the data set at various levels of resolution. Two different approaches, and visualization algorithms based upon them, are presented in this paper: wavelet analysis, deriving a hierarchy of coarser representations from the original data set, and multilevel finite elements, generating successively refined tetrahedral grids from an initially coarse triangulation.
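The first approach can be illustrated with the simplest wavelet, Haar, in one dimension: each coarser level stores pairwise averages of the level below it, yielding exactly the kind of multiresolution hierarchy the abstract describes. This is a generic sketch, not the paper's construction.

```python
# Hedged sketch: Haar-style coarsening hierarchy. Assumes the input length
# is a power of two so every level has an even number of samples.

def haar_hierarchy(data):
    """Return [finest, ..., coarsest] levels by repeated pairwise averaging."""
    levels = [list(data)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([(prev[i] + prev[i + 1]) / 2
                       for i in range(0, len(prev), 2)])
    return levels

for level in haar_hierarchy([1, 3, 2, 6, 5, 5, 0, 2]):
    print(level)   # each level halves the cell count; the last is the mean
```

A viewer can then trade quality for speed by rendering from a coarser level, exactly the interactive quality/speed choice the abstract mentions.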
Massively Parallel Visualization: Parallel Rendering
1995
Abstract
Cited by 4 (1 self)
This paper presents rendering algorithms, developed for massively parallel processors (MPPs), for polygon, sphere, and volumetric data. The polygon algorithm uses a data parallel approach, whereas the sphere and volume renderers use a MIMD approach. Implementations of these algorithms are presented for the Thinking Machines Corporation CM-5 MPP.
1 Introduction
In recent years, massively parallel processors (MPPs) have proven to be a valuable tool for performing scientific computation. Available memory on this type of computer is far greater than that found on traditional vector supercomputers. For example, a 1024-node CM-5 contains 32 gigabytes of physical memory. As a result, scientists who utilize these MPPs can execute their three-dimensional simulation models with much greater detail than previously possible. Molecular dynamics simulations can consist of over 100 million atoms [8] and CFD simulations can contain over 23 million cells with numerous variables [9]. Wh...
Volume Visualization in a Collaborative Computing Environment
Computers & Graphics, 1996
Abstract
Cited by 3 (0 self)
Introduction
1.1 Volume rendering
One popular technique for imaging scientific data is volume visualization: the process of projecting (rendering) a three-dimensional data lattice onto a two-dimensional image plane [Dreb88] [Elvi92]. Volume visualization is most often used to gain an understanding of, or to perform measurements on, the structure contained within the three-dimensional data. To be useful for such data analysis, a volume rendering system should be capable of producing multiple high-fidelity images per second. Direct volume rendering of just one image of a three-dimensional grid of scalar values (a data volume), however, requires a multi-second (sometimes multi-minute) computation on a large-memory computer.
1.2 Future rendering systems
In future visualization environments, fast desktop computers and efficient volume rendering algorithms will enable full-screen renderings of large data volumes at interactive rates. This will allow scientists
Reviewing data visualization: an analytical taxonomical study
In IV '06: Proceedings of the Conference on Information Visualization, 2006
Abstract
Cited by 2 (0 self)
This paper presents an analytical taxonomy that can suitably describe, rather than simply classify, techniques for data presentation. Unlike previous works, we do not consider particular aspects of visualization techniques, but their mechanisms and their foundations in visual perception. Instead of just adjusting visualization research to a classification system, our aim is to better understand its process. To do so, we depart from elementary concepts to reach a model that can describe how visualization techniques work and how they convey meaning.