Results 1–10 of 168
Edgebreaker: Connectivity compression for triangle meshes
 IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS
, 1999
Cited by 265 (22 self)
Abstract
Edgebreaker is a simple scheme for compressing the triangle/vertex incidence graphs (sometimes called connectivity or topology) of three-dimensional triangle meshes. Edgebreaker improves upon the worst-case storage required by previously reported schemes, most of which require O(n log n) bits to store the incidence graph of a mesh of n triangles. Edgebreaker requires only 2n bits or less for simple meshes and can also support fully general meshes by using additional storage per handle and hole. Edgebreaker's compression and decompression processes perform the same traversal of the mesh from one triangle to an adjacent one. At each stage, compression produces an opcode describing the topological relation between the current triangle and the boundary of the remaining part of the mesh. Decompression uses these opcodes to reconstruct the entire incidence graph. Because Edgebreaker's compression and decompression are independent of the vertex locations, they may be combined with a variety of vertex-compressing techniques that exploit topological information about the mesh to better estimate vertex locations. Edgebreaker may be used to compress the connectivity of an entire mesh bounding a 3D polyhedron or the connectivity of a triangulated surface patch whose boundary need not be encoded. Its superior compression capabilities, the simplicity of its implementation, and its versatility make Edgebreaker particularly suitable for the emerging 3D data exchange standards for interactive graphics applications. The paper also offers a comparative survey of the rapidly growing field of geometric compression.
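The opcode stream described in this abstract is Edgebreaker's CLERS sequence: each triangle is labeled C, L, E, R, or S. The 2n-bit bound can be met with a prefix code that spends 1 bit on C (which labels about half the triangles in a simple mesh) and 3 bits on each of the other four opcodes. The sketch below uses one illustrative assignment of the 3-bit codewords, not necessarily the paper's:

```python
# Illustrative prefix code for a CLERS opcode stream: "C" costs 1 bit,
# the other four opcodes cost 3 bits each. The exact 3-bit codewords
# below are an assumption for this sketch, not taken from the paper.
CODE = {"C": "0", "L": "110", "E": "111", "R": "101", "S": "100"}
DECODE = {bits: op for op, bits in CODE.items()}

def encode(clers: str) -> str:
    """Encode a CLERS string as a bit string."""
    return "".join(CODE[op] for op in clers)

def decode(bits: str) -> str:
    """Decode by greedy prefix matching (the code is prefix-free)."""
    out, buf = [], ""
    for b in bits:
        buf += b
        if buf in DECODE:
            out.append(DECODE[buf])
            buf = ""
    return "".join(out)
```

Since roughly every second opcode is a C, the average cost is about (1 + 3) / 2 = 2 bits per triangle, matching the 2n-bit figure quoted above.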
Geometric Compression through Topological Surgery
 ACM TRANSACTIONS ON GRAPHICS
, 1998
Cited by 250 (26 self)
Abstract
... this article introduces a new compressed representation for complex triangulated models and simple, yet efficient, compression and decompression algorithms. In this scheme, vertex positions are quantized within the desired accuracy, a vertex spanning tree is used to predict the position of each vertex from 2, 3, or 4 of its ancestors in the tree, and the correction vectors are entropy encoded. Properties, such as normals, colors, and texture coordinates, are compressed in a similar manner. The connectivity is encoded with no loss of information to an average of less than two bits per triangle. The vertex spanning tree and a small set of jump edges are used to split the model into a simple polygon. A triangle spanning tree and a sequence of marching bits are used to encode the triangulation of the polygon. Our approach improves on Michael Deering's pioneering results by exploiting the geometric coherence of several ancestors in the vertex spanning tree, preserving the connectivity with no loss of information, avoiding vertex repetitions, and using about three times fewer bits for the connectivity. However, since decompression requires random access to all vertices, this method must be modified for hardware rendering with limited on-board memory. Finally, we demonstrate implementation results for a variety of VRML models with up to two orders of magnitude compression.
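The ancestor-based prediction step can be sketched as follows. The predictor weights, the tiny example tree, and the integer rounding rule are illustrative assumptions, not the paper's tuned values; the point is that compression stores only the (typically small, entropy-codable) corrections, and decompression replays the same predictions:

```python
# Sketch of spanning-tree vertex prediction: each quantized vertex is
# predicted from up to three ancestors in the vertex spanning tree and
# only the correction vector is stored. Weights are hypothetical.
WEIGHTS = [0.6, 0.3, 0.1]

def predict(ancestors):
    w = WEIGHTS[: len(ancestors)]
    return tuple(round(sum(wi * a[i] for wi, a in zip(w, ancestors)))
                 for i in range(3))

def to_corrections(verts, ancestors_of):
    """Root is stored raw; every other vertex stores vertex - prediction."""
    corr = [verts[0]]
    for j in range(1, len(verts)):
        p = predict([verts[k] for k in ancestors_of[j]])
        corr.append(tuple(v - q for v, q in zip(verts[j], p)))
    return corr

def from_corrections(corr, ancestors_of):
    """Decompression replays the same predictions and adds corrections."""
    verts = [corr[0]]
    for j in range(1, len(corr)):
        p = predict([verts[k] for k in ancestors_of[j]])
        verts.append(tuple(c + q for c, q in zip(corr[j], p)))
    return verts
```

Because both sides run the identical deterministic predictor on the already-reconstructed ancestors, the quantized round trip is exact.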
Approximation Algorithms for Disjoint Paths Problems
, 1996
Cited by 140 (0 self)
Abstract
The construction of disjoint paths in a network is a basic issue in combinatorial optimization: given a network, and specified pairs of nodes in it, we are interested in finding disjoint paths between as many of these pairs as possible. This leads to a variety of classical NP-complete problems for which very little is known from the point of view of approximation algorithms. It has recently been brought into focus in work on problems such as VLSI layout and routing in high-speed networks; in these settings, the current lack of understanding of the disjoint paths problem is often an obstacle to the design of practical heuristics.
Cluster algebras II: Finite type classification, Invent
 Department of Mathematics, Northeastern University
Cited by 105 (16 self)
Abstract
1.2. Basic definitions. 1.3. Finite type classification.
Removing excess topology from isosurfaces
 ACM Trans. Graph
Cited by 74 (1 self)
Abstract
Many high-resolution surfaces are created through isosurface extraction from volumetric representations, obtained by 3D photography, CT, or MRI. Noise inherent in the acquisition process can lead to geometrical and topological errors. Reducing geometrical errors during reconstruction is well studied. However, isosurfaces often contain many topological errors in the form of tiny handles. These nearly invisible artifacts hinder subsequent operations like mesh simplification, remeshing, and parametrization. In this article we present a practical method for removing handles in an isosurface. Our algorithm makes an axis-aligned sweep through the volume to locate handles, compute their sizes, and selectively remove them. The algorithm is designed to facilitate out-of-core execution. It finds the handles by incrementally constructing and analyzing a Reeb graph. The size of a handle is measured by a short non-separating cycle. Handles are removed robustly by modifying the volume rather than attempting “mesh surgery.” Finally, the volumetric modifications are spatially localized to preserve geometrical detail. We demonstrate topology simplification on several complex models, and show its benefits for subsequent surface processing.
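A cheap way to confirm that a closed mesh carries excess topology at all (though not to locate the handles, which is what the Reeb-graph sweep above is for) is the Euler characteristic. This helper is a generic textbook illustration, not the paper's algorithm:

```python
def handle_count(num_vertices: int, num_edges: int, num_faces: int) -> int:
    """Genus of a closed, connected, orientable surface mesh.

    Euler characteristic chi = V - E + F equals 2 - 2g for such a
    surface, so the number of handles is g = (2 - chi) / 2.
    """
    chi = num_vertices - num_edges + num_faces
    return (2 - chi) // 2
```

A tetrahedron (a topological sphere, V=4, E=6, F=4) has no handles; the minimal triangulated torus (V=7, E=21, F=14) has one.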
Convex partitions of polyhedra: a lower bound and worst-case optimal algorithm
 SIAM J. Comput
, 1984
Cited by 69 (3 self)
Abstract
The problem of partitioning a polyhedron into a minimum number of convex pieces is known to be NP-hard. We establish here a quadratic lower bound on the complexity of this problem, and we describe an algorithm that produces a number of convex parts within a constant factor of optimal in the worst case. The algorithm is linear in the size of the polyhedron and cubic in the number of reflex angles. Since in most application areas the former quantity greatly exceeds the latter, the algorithm is viable in practice. Key words: computational geometry, convex decompositions, data structures, lower bounds, polyhedra.
Graph-cover decoding and finite-length analysis of message-passing iterative decoding of LDPC codes
 IEEE TRANS. INFORM. THEORY
, 2005
Cited by 67 (12 self)
Abstract
The goal of the present paper is the derivation of a framework for the finite-length analysis of message-passing iterative decoding of low-density parity-check codes. To this end we introduce the concept of graph-cover decoding. Whereas in maximum-likelihood decoding all codewords in a code are competing to be the best explanation of the received vector, under graph-cover decoding all codewords in all finite covers of a Tanner graph representation of the code are competing to be the best explanation. We are interested in graph-cover decoding because it is a theoretical tool that can be used to show connections between linear programming decoding and message-passing iterative decoding. Namely, on the one hand it turns out that graph-cover decoding is essentially equivalent to linear programming decoding. On the other hand, because iterative, locally operating decoding algorithms like message-passing iterative decoding cannot distinguish the underlying Tanner graph from any covering graph, graph-cover decoding can serve as a model to explain the behavior of message-passing iterative decoding. Understanding the behavior of graph-cover decoding is tantamount to understanding ...
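For context, the codes in question are defined by a parity-check matrix H, and the Tanner graph is simply the bipartite graph of H: a codeword is any binary vector satisfying every check modulo 2. The brute-force enumeration below is purely illustrative of that definition and is unrelated to the decoders the paper analyzes:

```python
from itertools import product

def codewords(H):
    """All x in {0,1}^n with H x = 0 (mod 2), by exhaustive search.

    Exponential in n -- only usable for tiny illustrative codes.
    """
    n = len(H[0])
    return [x for x in product((0, 1), repeat=n)
            if all(sum(h * xi for h, xi in zip(row, x)) % 2 == 0
                   for row in H)]
```

For the toy matrix H = [[1,1,0],[0,1,1]], the two checks force all three bits to be equal, so the code is the length-3 repetition code.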
Disjoint Paths in Densely Embedded Graphs
 in Proceedings of the 36th Annual Symposium on Foundations of Computer Science
, 1995
Cited by 60 (6 self)
Abstract
We consider the following maximum disjoint paths problem (mdpp). We are given a large network, and pairs of nodes that wish to communicate over paths through the network; the goal is to simultaneously connect as many of these pairs as possible in such a way that no two communication paths share an edge in the network. This classical problem has been brought into focus recently in papers discussing applications to routing in high-speed networks, where the current lack of understanding of the mdpp is an obstacle to the design of practical heuristics. We consider the class of densely embedded, nearly-Eulerian graphs, which includes the two-dimensional mesh and many other planar and locally planar interconnection networks. We obtain a constant-factor approximation algorithm for the maximum disjoint paths problem for this class of graphs; this improves on an O(log n)-approximation for the special case of the two-dimensional mesh due to Aumann and Rabani and to the authors. For networks that ...
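A naive baseline for the mdpp, far weaker than the constant-factor algorithm the abstract describes, is to route the pairs greedily and delete the edges each accepted path consumes. The sketch below is a generic illustration of that baseline, not the paper's method:

```python
from collections import deque

def bfs_path(adj, s, t):
    """Shortest path from s to t in an undirected adjacency map, or None."""
    prev = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in sorted(adj[u]):  # sorted for deterministic tie-breaking
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def greedy_disjoint_paths(edges, pairs):
    """Route each pair in turn, removing used edges (edge-disjointness)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    routed = []
    for s, t in pairs:
        path = bfs_path(adj, s, t) if s in adj and t in adj else None
        if path:
            routed.append(path)
            for a, b in zip(path, path[1:]):
                adj[a].discard(b)
                adj[b].discard(a)
    return routed
```

On a 4-cycle, for example, two (0, 2) requests are both satisfiable because the second is routed around the opposite side of the cycle once the first path's edges are gone.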
Geometry Coding and VRML
, 1998
Cited by 57 (10 self)
Abstract
The Virtual Reality Modeling Language (VRML) is rapidly becoming the standard file format for transmitting 3D virtual worlds across the Internet. Static and dynamic descriptions of 3D objects, multimedia content, and a variety of hyperlinks can be represented in VRML files. Both VRML browsers and authoring tools for the creation of VRML files are widely available for several different platforms. In this paper we describe the topologically assisted geometric compression technology included in our proposal for the VRML Compressed Binary Format. This technology produces significant reduction of file sizes and, subsequently, of the time required for transmission of such files across the Internet. Compression ratios of up to 50:1 or more are achieved for large models. The proposal also includes a binary encoding to create compact, rapidly parsable binary VRML files. The proposal is currently being evaluated by the Compressed Binary Format Working Group of the VRML Consortium as a ...