Results 1–10 of 76
Optimal Coding and Sampling of Triangulations
, 2003
Abstract

Cited by 42 (5 self)
We present a simple encoding of plane triangulations (aka. maximal planar graphs) by plane trees with two leaves per inner node. Our encoding is a bijection taking advantage of the minimal Schnyder tree decomposition of a plane triangulation. Coding and decoding take linear time. As a byproduct we derive: (i) a simple interpretation of the formula for the number of plane triangulations with n vertices, (ii) a linear random sampling algorithm, (iii) an explicit and simple information theory optimal encoding.
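Byproduct (i) refers to a classical counting formula for rooted plane triangulations due to Tutte, which the bijection explains combinatorially. A minimal numeric sketch, assuming the usual convention that a(n) counts rooted plane triangulations with n + 3 vertices (the sequence 1, 1, 3, 13, 68, …):

```python
from math import factorial

# Tutte's counting formula for rooted plane triangulations, under the
# convention that a(n) counts triangulations with n + 3 vertices.
# Illustrative check only; the abstract's bijection gives a direct
# combinatorial interpretation of this closed form.
def tutte_triangulations(n):
    return 2 * factorial(4 * n + 1) // (factorial(n + 1) * factorial(3 * n + 2))

print([tutte_triangulations(n) for n in range(5)])  # [1, 1, 3, 13, 68]
```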
Progressive Encoding of Complex Isosurfaces
, 2003
Abstract

Cited by 32 (3 self)
Some of the largest and most intricate surfaces result from isosurface extraction of volume data produced by 3D imaging modalities and scientific simulations. Such surfaces often possess both complicated geometry and topology (i.e., many connected components and high genus). Because of their sheer size, efficient compression algorithms, in particular progressive encodings, are critical in working with these surfaces. Most standard mesh compression algorithms have been designed to deal with generally smooth surfaces of low topological complexity. Much better results can be achieved with algorithms that are specifically designed for isosurfaces arising from volumetric datasets.
Compact representations of simplicial meshes in two and three dimensions
 International Journal of Computational Geometry and Applications
, 2003
Abstract

Cited by 25 (6 self)
We describe data structures for representing simplicial meshes compactly while supporting online queries and updates efficiently. Our data structure requires about a factor of five less memory than the most efficient standard data structures for triangular or tetrahedral meshes, while efficiently supporting traversal among simplices, storing data on simplices, and insertion and deletion of simplices. Our implementation of the data structures uses about 5 bytes/triangle in two dimensions (2D) and 7.5 bytes/tetrahedron in three dimensions (3D). We use the data structures to implement 2D and 3D incremental algorithms for generating a Delaunay mesh. The 3D algorithm can generate 100 million tetrahedra with 1 Gbyte of memory, including the space for the coordinates and all data used by the algorithm. The runtime of the algorithm is as fast as Shewchuk's Pyramid code, the most efficient we know of, and uses a factor of 3.5 less memory overall.
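To make the trade-off concrete: the memory savings come from not storing explicit per-simplex adjacency pointers. A toy sketch (not the paper's actual layout) of a triangle mesh stored as bare index triples, with edge adjacency recovered from a dictionary keyed by undirected edges:

```python
# Illustrative compact storage: triangles as vertex-index triples only;
# adjacency is reconstructed from a map edge -> incident triangles
# instead of being stored per simplex. Names here are hypothetical.

def build_adjacency(triangles):
    """Map each undirected edge to the list of triangles sharing it."""
    edge_to_tris = {}
    for t, (a, b, c) in enumerate(triangles):
        for u, v in ((a, b), (b, c), (c, a)):
            edge_to_tris.setdefault(frozenset((u, v)), []).append(t)
    return edge_to_tris

def neighbors(triangles, edge_to_tris, t):
    """Triangles sharing an edge with triangle t."""
    a, b, c = triangles[t]
    out = set()
    for u, v in ((a, b), (b, c), (c, a)):
        for s in edge_to_tris[frozenset((u, v))]:
            if s != t:
                out.add(s)
    return out

# Two triangles glued along edge (1, 2):
tris = [(0, 1, 2), (1, 3, 2)]
adj = build_adjacency(tris)
print(neighbors(tris, adj, 0))  # {1}
```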
Statistical Point Geometry
, 2003
Abstract

Cited by 23 (1 self)
We propose a scheme for modeling point sample geometry with statistical analysis. In our scheme we depart from the current schemes that deterministically represent the attributes of each point sample. We show how the statistical analysis of a densely sampled point model can be used to improve the geometry bandwidth bottleneck and to do randomized rendering without sacrificing visual realism. We first carry out a hierarchical principal component analysis (PCA) of the model. This stage partitions the model into compact local geometries by exploiting local coherence. Our scheme handles vertex coordinates, normals, and color. The input model is reconstructed and rendered using a probability distribution derived from the PCA. We demonstrate the benefits of this approach in all stages of the graphics pipeline: (1) orders of magnitude improvement in the storage and transmission complexity of point geometry, (2) direct rendering from compressed data, and (3) view-dependent randomized rendering.
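The per-cluster step of such a scheme can be sketched as: fit a Gaussian to a dense point cluster via PCA (mean plus covariance eigendecomposition), then resample from that distribution instead of storing every point. A minimal sketch with NumPy, under those assumptions (not the paper's actual hierarchy):

```python
import numpy as np

# Fit a Gaussian to an anisotropic point cluster and resample from it.
# Illustrative only: the paper builds a *hierarchical* PCA over many
# local clusters and also handles normals and color.
rng = np.random.default_rng(0)
points = rng.normal(size=(500, 3)) * np.array([5.0, 1.0, 0.1])

mean = points.mean(axis=0)
cov = np.cov(points, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # principal axes, ascending variance

# Randomized "rendering": draw fresh samples from the fitted Gaussian
# rather than replaying the stored points.
resampled = rng.multivariate_normal(mean, cov, size=500)
print(eigvals)  # variances along the three principal axes
```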
A survey on data structures for level-of-detail models
 Advances in Multiresolution for Geometric Modelling, Series in Mathematics and Visualization
, 2004
Abstract

Cited by 15 (2 self)
Summary. In this paper we survey some of the major data structures for encoding Level Of Detail (LOD) models. We classify LOD data structures according to the dimensionality of the basic structural element they represent into point-, triangle-, and tetrahedron-based data structures. Within each class we will review single-level data structures, general data structures for LOD models based on irregular meshes, as well as more specialized data structures that assume a certain (semi-)regularity of the data.
FreeLence: coding with free valences
 EUROGRAPHICS’05 PROCEEDINGS
, 2005
Abstract

Cited by 14 (4 self)
We introduce FreeLence, a novel and simple single-rate compression coder for triangle manifold meshes. Our method uses free valences and exploits geometric information for connectivity encoding. Furthermore, we introduce a novel linear prediction scheme for geometry compression of 3D meshes. Together, these approaches yield a significant entropy reduction for mesh encoding with an average of 20–30% over leading single-rate region-growing coders, both for connectivity and geometry.
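For context on what a linear geometry predictor does: the classic baseline that schemes like this refine is the parallelogram rule, where the vertex opposite a known triangle edge is predicted by completing the parallelogram and only the residual is entropy-coded. A hedged sketch of that baseline (not FreeLence's own predictor):

```python
# Parallelogram-rule prediction: given known triangle (a, b, c), predict
# the vertex d opposite edge (b, c) as b + c - a, and encode only the
# residual d - prediction. Illustrative baseline, not FreeLence's scheme.

def predict(a, b, c):
    """Parallelogram completion, componentwise: b + c - a."""
    return tuple(bi + ci - ai for ai, bi, ci in zip(a, b, c))

a, b, c = (0.0, 0.0), (1.0, 0.0), (0.0, 1.0)
d = (1.1, 0.9)                                  # actual next vertex
residual = tuple(di - pi for di, pi in zip(d, predict(a, b, c)))
print(residual)  # small correction: the only data that needs coding
```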
A Generic Scheme for Progressive Point Cloud Coding
Abstract

Cited by 10 (1 self)
In this paper, we propose a generic point cloud encoder that provides a unified framework for compressing different attributes of point samples corresponding to 3D objects with arbitrary topology. In the proposed scheme, the coding process is led by an iterative octree cell subdivision of the object space. At each level of subdivision, positions of point samples are approximated by the geometry centers of all tree-front cells, while normals and colors are approximated by their statistical average within each of the tree-front cells. With this framework, we employ attribute-dependent encoding techniques to exploit different characteristics of various attributes. All of these have led to significant improvement in the rate-distortion (R-D) performance and a computational advantage over the state of the art. Furthermore, given sufficient levels of octree expansion, normal space partitioning, and resolution of color quantization, the proposed point cloud encoder can be potentially used for lossless coding of 3D point clouds.
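The core progressive idea can be sketched in a few lines: at level L, each point's position is approximated by the center of its containing octree cell, so each subdivision adds one bit of precision per axis. A minimal sketch of that approximation (illustrative; not the paper's coder):

```python
# Approximate a point by the center of the level-`level` octree cell
# containing it, assuming a unit bounding cube per axis. Each extra
# level halves the cell and adds one bit of precision per coordinate.

def cell_center(point, level, lo=0.0, hi=1.0):
    """Center of the octree cell containing `point` after `level` splits."""
    center = []
    for x in point:
        a, b = lo, hi
        for _ in range(level):
            mid = 0.5 * (a + b)
            if x < mid:
                b = mid
            else:
                a = mid
        center.append(0.5 * (a + b))
    return tuple(center)

p = (0.3, 0.7, 0.1)
for lvl in (1, 2, 4):
    print(lvl, cell_center(p, lvl))  # approximation tightens with level
```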
Rate-distortion optimization for progressive compression of 3D mesh with color attributes
 VIS COMPUT
, 2011
Abstract

Cited by 10 (4 self)
We propose a new lossless progressive compression algorithm based on rate-distortion optimization for meshes with color attributes; the quantization precision of both the geometry and the color information is adapted to each intermediate mesh during the encoding/decoding process. This quantization precision can either be optimally determined with the use of a mesh distortion measure or quasi-optimally decided based on an analysis of the mesh complexity in order to reduce the calculation time. Furthermore, we propose a new metric which estimates the geometry and color importance of each vertex during the simplification in order to faithfully preserve the feature elements. Experimental results show that our method outperforms the state-of-the-art algorithm for colored meshes and competes with the most efficient algorithms for non-colored meshes.
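The underlying mechanism being adapted here is plain uniform scalar quantization: coarse intermediate meshes get few bits per coordinate, the final mesh full precision. A minimal sketch of that mechanism under a unit range (illustrative; the paper chooses the bit depth per level via a distortion measure or a complexity analysis):

```python
# Uniform scalar quantizer at a chosen bit depth: reconstruction error
# shrinks as bits increase, which is the knob the R-D optimization turns.

def quantize(x, bits, lo=0.0, hi=1.0):
    """Map x to the midpoint of its bin among 2**bits uniform bins."""
    step = (hi - lo) / (2 ** bits)
    q = int((x - lo) / step)          # bin index
    q = min(q, 2 ** bits - 1)         # clamp x == hi into the last bin
    return lo + (q + 0.5) * step      # reconstructed (dequantized) value

x = 0.62
for bits in (2, 4, 8):
    print(bits, quantize(x, bits))    # error shrinks with bit depth
```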