Results 1–10 of 10
Interactive View-Dependent Rendering of Large Isosurfaces
 Proceedings of the IEEE Conference on Visualization 2002
, 2002
Abstract

Cited by 48 (10 self)
We present an algorithm for interactively extracting and rendering isosurfaces of large volume datasets in a view-dependent fashion. A recursive tetrahedral mesh refinement scheme, based on longest-edge bisection, is used to hierarchically decompose the data into a multiresolution structure. This data structure allows fast extraction of arbitrary isosurfaces to within user-specified view-dependent error bounds. A data layout scheme based on hierarchical space-filling curves provides access to the data in a cache-coherent manner that follows the data access pattern indicated by the mesh refinement.
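The longest-edge bisection step that drives the refinement hierarchy above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' implementation): a tetrahedron, stored as four vertex tuples, is split at the midpoint of its longest edge into two children that together exactly tile the parent.

```python
import itertools

def longest_edge_bisection(tet):
    """Split a tetrahedron (a list of four (x, y, z) vertex tuples)
    into two children by bisecting its longest edge at the midpoint."""
    def d2(a, b):  # squared distance between two vertices
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # find the vertex pair forming the longest edge
    i, j = max(itertools.combinations(range(4), 2),
               key=lambda e: d2(tet[e[0]], tet[e[1]]))
    mid = tuple((x + y) / 2.0 for x, y in zip(tet[i], tet[j]))
    others = [tet[k] for k in range(4) if k not in (i, j)]
    # each child keeps one endpoint of the split edge plus the midpoint
    return [tet[i], mid] + others, [tet[j], mid] + others
```

Applied recursively, this produces the tetrahedral multiresolution hierarchy; the paper's contribution lies in the extraction criteria and data layout on top of it, which this sketch does not cover.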
Level Set Modeling and Segmentation of DT-MRI Brain Data
 JOURNAL OF ELECTRONIC IMAGING
, 2003
Abstract

Cited by 19 (2 self)
Segmentation of anatomical regions of the brain is one of the fundamental problems in medical image analysis. It is traditionally solved by isosurfacing or through the use of active contours/deformable models on grayscale MRI data. In this paper we develop a technique that uses the anisotropic diffusion properties of brain tissue available from DT-MRI to segment out brain structures. We develop a computational pipeline starting from raw diffusion tensor data, through computation of invariant anisotropy measures, to construction of geometric models of the brain structures. This provides an environment for user-controlled 3D segmentation of DT-MRI datasets. We use a level set approach to remove noise from the data and to produce smooth, geometric models. We apply our technique to DT-MRI data of a human subject and build models of the isotropic and strongly anisotropic regions of the brain. Once geometric models have been constructed, they may be combined to study spatial relationships and quantitatively analyzed to produce the volume and surface area of the segmented regions.
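Fractional anisotropy (FA) is one standard rotation-invariant anisotropy measure of the kind such a pipeline derives from the diffusion tensor's eigenvalues; the abstract does not specify which measures the authors use, so this is only a representative sketch.

```python
import math

def fractional_anisotropy(l1, l2, l3):
    """Fractional anisotropy (FA) of a diffusion tensor, computed from
    its three eigenvalues: 0 for fully isotropic diffusion, approaching
    1 for strongly anisotropic (fiber-like) diffusion."""
    mean = (l1 + l2 + l3) / 3.0
    num = (l1 - mean) ** 2 + (l2 - mean) ** 2 + (l3 - mean) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    if den == 0.0:
        return 0.0  # zero tensor: treat as isotropic
    return math.sqrt(1.5 * num / den)
```

Thresholding such a per-voxel measure yields the scalar field on which isosurfacing or level set segmentation can then operate.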
Edge transformations for improving mesh quality of marching cubes
 IEEE TVCG
Abstract

Cited by 9 (5 self)
Marching Cubes is a popular choice for isosurface extraction from regular grids due to its simplicity, robustness, and efficiency. One of the key shortcomings of this approach is the quality of the resulting meshes, which tend to have many poorly shaped and degenerate triangles. This issue is often addressed through post-processing operations such as smoothing. As we demonstrate in experiments with several data sets, while these improve the mesh, they do not remove all degeneracies and incur an increased and unbounded error between the resulting mesh and the original isosurface. Rather than modifying the resulting mesh, we propose a method to modify the grid on which Marching Cubes operates. This modification greatly increases the quality of the extracted mesh. In our experiments, our method did not create a single degenerate triangle, unlike any other method we experimented with. Our method incurs minimal computational overhead, requiring at most twice the execution time of the original Marching Cubes algorithm in our experiments. Most importantly, it can be readily integrated in existing Marching Cubes implementations and is orthogonal to many Marching Cubes enhancements (particularly, performance enhancements such as out-of-core and acceleration structures). Index Terms—Meshing, marching cubes.
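The authors' edge transformations are not detailed in the abstract; as a simpler illustration of the underlying problem, the sketch below shows the standard linear-interpolation placement of a Marching Cubes edge vertex, with a hypothetical clamp parameter (t_min, not from the paper) that keeps the vertex away from the grid corners, where near-coincident vertices produce the degenerate triangles discussed above.

```python
def edge_intersection(p0, p1, v0, v1, iso, t_min=0.05):
    """Place a Marching Cubes vertex on the cell edge p0-p1 by linear
    interpolation of the sample values v0, v1 against the isovalue,
    clamping the interpolation parameter away from the endpoints so the
    vertex can never (nearly) coincide with a grid corner.  The clamp
    value t_min is a made-up illustration parameter."""
    t = (iso - v0) / (v1 - v0)
    t = min(max(t, t_min), 1.0 - t_min)
    return tuple(a + t * (b - a) for a, b in zip(p0, p1))
```

Clamping trades a small, bounded geometric error for guaranteed separation of vertices, which is the same kind of trade-off the paper's grid modification makes ahead of extraction.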
Meshing Non-uniformly Sampled and Incomplete Data Based on Displaced T-spline Level Sets
, 2007
Abstract

Cited by 7 (5 self)
We propose a new method for constructing a piecewise smooth mesh from a set of unorganized data points, which may be non-uniformly sampled, noisy, and may even contain holes. The method is based on the construction of an implicit representation of the surface, by using smooth (C² in our case) T-spline scalar functions. We first generate the T-spline control grid, and use an evolution process such that the resulting T-spline level sets capture the topology and outline of the object to be reconstructed. A high-quality initial mesh is obtained from the implicit T-spline function through the marching triangulation method. Then we project each data point to the initial mesh, and obtain a scalar displacement field. Detailed features are captured by the displaced mesh. We also propose an additional evolution process, which combines data-driven velocities and feature-preserving bilateral filters, in order to reproduce sharp features.
Evolution of T-spline Level Sets for Meshing Non-uniformly Sampled and Incomplete Data
, 2008
Abstract

Cited by 5 (2 self)
Given a large set of unorganized point sample data, we propose a new framework for computing a triangular mesh representing an approximating piecewise smooth surface. The data may be non-uniformly distributed, noisy, and may contain holes. This framework is based on the combination of two types of surface representations: triangular meshes, and T-spline level sets, which are implicit surfaces defined by refinable spline functions allowing T-junctions. Our method contains three main steps. Firstly, we construct an implicit representation of a smooth (C² in our case) surface, by using an evolution process of T-spline level sets, such that the implicit surface captures the topology and outline of the object to be reconstructed. A high-quality initial mesh is obtained through the marching triangulation of the implicit surface. Secondly, we project each data point to the initial mesh, and obtain a scalar displacement field. Detailed features are captured by the displaced mesh. Finally, we present an additional evolution process, which combines data-driven velocities and feature-preserving bilateral filters, in order to reproduce sharp features. We also show that various shape constraints, such as distance field constraints, range constraints, and volume constraints, can be naturally added to our framework, which helps to obtain a desired reconstruction result, especially when the given data contains noise and inaccuracies.
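The feature-preserving bilateral filter used in the final step can be illustrated on a 1-D signal; this is a generic textbook bilateral filter, not the authors' mesh variant, with made-up parameter names sigma_s (spatial width) and sigma_r (range width).

```python
import math

def bilateral_filter(vals, sigma_s=1.0, sigma_r=0.1):
    """Feature-preserving bilateral smoothing of a 1-D signal: each
    sample is replaced by a weighted average of all samples, weighted
    both by spatial distance (sigma_s) and by value difference
    (sigma_r), so large jumps (sharp features) survive smoothing."""
    out = []
    for i, vi in enumerate(vals):
        wsum = vsum = 0.0
        for j, vj in enumerate(vals):
            w = math.exp(-((i - j) ** 2) / (2.0 * sigma_s ** 2)
                         - ((vi - vj) ** 2) / (2.0 * sigma_r ** 2))
            wsum += w
            vsum += w * vj
        out.append(vsum / wsum)
    return out
```

A plain Gaussian filter would blur a step edge; the range weight suppresses averaging across the jump, which is the property that lets such filters reproduce sharp mesh features.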
Adaptive Multivalued Volume Data Visualization Using Data-dependent Error Metrics
 in Proceedings of the Third IASTED International Conference on Visualization, Imaging, and Image Processing (VIIP 2003), The International Association of Science and Technology for Development (IASTED)
, 2003
Abstract

Cited by 3 (2 self)
Adaptive, and especially view-dependent, volume visualization is used to display large volume data at interactive frame rates while preserving high visual quality in specified or implied regions of importance. In typical approaches, the error metrics and refinement oracles used for view-dependent rendering are based on viewing parameters only. The approach presented in this paper considers viewing parameters together with parameters for data exploration such as isovalues, velocity field magnitude, gradient magnitude, curl, or divergence. Error metrics are described for scalar fields, vector fields, and more general multivalued combinations of scalar and vector field data. The number of fields considered in these combinations is limited not by the error metric but by the ability to use them to create meaningful visualizations. Our framework supports the application of visualization methods such as isosurface extraction to adaptively refined meshes. For multivalued data exploration purposes, we combine extracted isosurfaces with color information and/or streamlines mapped onto an isosurface. Such a combined visualization seems advantageous, as scalar and vector field quantities can be combined visually in a highly expressive manner.
Wavelets for Adaptively Refined ∛2-Subdivision Meshes
Abstract

Cited by 1 (0 self)
For view-dependent visualization, adaptively refined volumetric meshes are used to adapt resolution to given error constraints. A mesh hierarchy based on the ∛2-subdivision scheme produces structured grids with highest adaptivity. Downsampling filters reduce aliasing effects and lead to higher-quality data representation (in terms of lower approximation error) at coarser levels of resolution. We present a method for applying wavelet-based downsampling filters to adaptively refined meshes. We use a linear B-spline wavelet lifting scheme to derive narrow filter masks. Using these narrow masks, the wavelet filters are applicable to adaptively refined meshes without imposing any restrictions on the adaptivity of the meshes, i.e., all wavelet filtering operations can be performed without further subdivision steps. We define rules for vertex dependencies in wavelet-based adaptive refinement and resolve them in an unambiguous manner. We use the wavelet filters for view-dependent visualization in order to demonstrate the functionality and the benefits of our approach. When using wavelet filters, the approximation quality is higher at each resolution level. Thus, fewer polyhedra need to be traversed by a visualization method to meet given error bounds or quality measures.
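The linear B-spline wavelet lifting scheme mentioned above can be sketched in 1-D. The paper applies it on adaptively refined volumetric meshes; this uniform, periodic toy version only illustrates the predict/update structure of the lifting steps and their exact invertibility.

```python
def lbspline_forward(x):
    """One level of the linear B-spline (CDF 2,2) lifting transform on
    an even-length signal with periodic boundaries.
    Predict: each odd sample minus the mean of its even neighbors.
    Update:  even samples corrected so the coarse signal preserves the
    average of the input."""
    even, odd = x[0::2], x[1::2]
    m = len(odd)
    detail = [odd[i] - 0.5 * (even[i] + even[(i + 1) % m]) for i in range(m)]
    coarse = [even[i] + 0.25 * (detail[i - 1] + detail[i]) for i in range(m)]
    return coarse, detail

def lbspline_inverse(coarse, detail):
    """Exact inverse: undo the update step, then the predict step."""
    m = len(coarse)
    even = [coarse[i] - 0.25 * (detail[i - 1] + detail[i]) for i in range(m)]
    odd = [detail[i] + 0.5 * (even[i] + even[(i + 1) % m]) for i in range(m)]
    x = [0.0] * (2 * m)
    x[0::2], x[1::2] = even, odd
    return x
```

Because each lifting step only touches a sample's immediate neighbors, the filter masks stay narrow, which is what makes the scheme applicable on adaptively refined meshes without forcing extra subdivision.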
GPU-Accelerated Surface Denoising and Morphing with Lattice Boltzmann Scheme
Abstract
In this paper, we introduce a parallel numerical scheme, the lattice Boltzmann method (LBM), to shape modeling applications. The motivation for using this solver, originally designed for fluid dynamics, in surface modeling is the simplicity, locality, and parallelism of its cellular-automata-derived updating rules, which can be mapped directly onto modern graphics hardware. A surface is implicitly represented by a signed distance field. The distances are then used as the computing primitive in a modified LBM scheme, instead of the densities in traditional LBM. The scheme can simulate curvature motion to smooth the surface with a diffusion process. Furthermore, an initial value level set method can be implemented for surface morphing. The distance difference between a morphing surface and a target surface defines the speed function of the evolving level sets, and is used as the driving force in the LBM. Our GPU-accelerated LBM algorithm has achieved outstanding performance for the denoising and morphing examples. It has great potential to be further applied as a general GPU computing framework to many other solid and shape modeling applications.
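A minimal 1-D sketch of an LBM diffusion step (D1Q3 lattice, BGK collision, periodic boundaries) illustrates the locality and parallelism the paper exploits. The paper's modified scheme operates on signed distances in 3-D on the GPU; this toy CPU version does not attempt that, it only shows the collision-then-streaming structure.

```python
def lbm_diffusion_step(f0, fp, fm, tau=1.0):
    """One collision + streaming step of a D1Q3 BGK lattice Boltzmann
    diffusion solver on a periodic 1-D lattice.  f0, fp, fm are the
    rest, right-moving, and left-moving populations; the macroscopic
    field (densities in classic LBM, signed distances in the paper's
    modified scheme) is their per-cell sum."""
    w0, w1 = 2.0 / 3.0, 1.0 / 6.0  # D1Q3 lattice weights
    n = len(f0)
    rho = [f0[i] + fp[i] + fm[i] for i in range(n)]
    # BGK collision: relax each population toward the diffusive
    # equilibrium w_i * rho; this conserves rho in every cell
    f0 = [f0[i] + (w0 * rho[i] - f0[i]) / tau for i in range(n)]
    fp = [fp[i] + (w1 * rho[i] - fp[i]) / tau for i in range(n)]
    fm = [fm[i] + (w1 * rho[i] - fm[i]) / tau for i in range(n)]
    # streaming: shift the moving populations to neighboring cells
    fp = fp[-1:] + fp[:-1]
    fm = fm[1:] + fm[:1]
    return f0, fp, fm
```

Both phases touch only a cell and its immediate neighbors, which is why the update maps so directly onto per-texel GPU kernels.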
Computer Graphics Group
Abstract
Figure 1: Samples generated with our diffusion tensor visualization application. (a) shows automatically clustered whole-brain tracking with assigned colors. (b) displays the optic tract (orange) and the pyramidal tract (blue) in combination with direct volume rendering of high-resolution magnetic resonance data. (c) is an example of safety estimation with the use of staggered hulls. (d) shows color-encoded cluster segments of the corpus callosum combined with MRI T1 data. Diffusion tensor imaging is a magnetic resonance imaging method which has gained increasing importance in neuroscience and especially in neurosurgery. It acquires diffusion properties represented by a symmetric second-order tensor for each voxel in the gathered dataset. From the medical point of view, the data is of special interest due to the different diffusion characteristics of varying brain tissue, allowing conclusions about underlying structures such as white matter tracts. An obvious way to visualize this data is to focus on the anisotropic areas, using the major eigenvector for tractography and rendering lines to visualize the simulation results. Our approach extends this technique to avoid line representations, since lines lead to very complex illustrations and can furthermore be misinterpreted. Instead, we generate surfaces wrapping bundles of lines. Thereby, a more intuitive representation of different tracts is achieved.
Visual Comput (2008) 24: 435–448, DOI 10.1007/s00371-008-0222-3
, 2008
Abstract
Given a large set of unorganized point sample data, we propose a new framework for computing a triangular mesh representing an approximating piecewise smooth surface. The data may be non-uniformly distributed, noisy, and may contain holes. This framework is based on the combination of two types of surface representations, triangular meshes and T-spline level sets, which are implicit surfaces defined by refinable spline functions allowing T-junctions. Our method contains three main steps. Firstly, we construct an implicit representation of a smooth (C² in our case) surface, by using an evolution process of T-spline level sets, such that the implicit surface captures the topology and outline of the object to be reconstructed. A high-quality initial mesh is obtained through the marching triangulation of the implicit surface. Secondly, we project each data point to the initial mesh, and obtain a scalar displacement field. Detailed features are captured by the displaced mesh. Finally, we present an additional evolution process, which combines data-driven velocities and feature-preserving bilateral filters, in order to reproduce sharp features. We also show that various shape constraints, such as distance field constraints, range constraints, and volume constraints, can be naturally added to our framework, which helps to obtain a desired reconstruction result, especially when the given data contains noise and inaccuracies.