Results 11–20 of 138
User-Guided Simplification
, 2003
Abstract

Cited by 25 (1 self)
While many effective automatic surface simplification algorithms have been developed, they often produce poor approximations when a model is simplified to a very low level of detail. Furthermore, previous algorithms are not sensitive to semantic or high-level meanings of models. In this paper, we present a user-guided approach for mesh simplification that aims to overcome such limitations. Our proposed method allows users to selectively control the relative importance of different surface regions and preserve various features through the imposition of geometric constraints. Using our system, users can produce perceptually improved approximations with very little effort.
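The selective importance control described above can be sketched as a simple weighting of an ordinary collapse cost. This is a hypothetical illustration, not the paper's formulation: the `base_cost` and `weight` callables are invented names.

```python
def user_weighted_cost(edge, base_cost, weight):
    """Scale a geometric collapse cost by user-painted importance.

    `weight(v)` is a user-assigned importance factor per vertex (> 1
    preserves the region longer); `base_cost` is any standard error
    metric. Hypothetical sketch of selective importance control; the
    paper's actual formulation may differ.
    """
    v0, v1 = edge
    return base_cost(edge) * max(weight(v0), weight(v1))
```

With this weighting, edges inside regions the user marked as important rise in cost and are collapsed later, so those regions survive to lower levels of detail.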
TetFusion: An Algorithm for Rapid Tetrahedral Mesh Simplification
 In IEEE Visualization ’02 (2002), IEEE Computer Society
Abstract

Cited by 24 (1 self)
(cells that intersect a vertical cutting plane in the XY plane at a specific Z value) is rendered to show the interior elements. (Dataset courtesy: Peter Williams, Lawrence Livermore National Laboratory). This paper introduces an algorithm for rapid progressive simplification of tetrahedral meshes: TetFusion. We describe how a simple geometry decimation operation steers a rapid and controlled progressive simplification of tetrahedral meshes, while also taking care of complex mesh-inconsistency problems. The algorithm features a high decimation ratio per step, and inherently discourages any cases of self-intersection of the boundary, element-boundary intersection at concave boundary regions, and negative-volume tetrahedra (flipping). We achieved rigorous reduction ratios of up to 98% for meshes consisting of 827,904 elements in less than 2 minutes, progressing through a series of levels of detail (LoDs) of the mesh in a controlled manner. We describe how the approach supports a balanced redistribution of space between tetrahedral elements, and explain some useful control parameters that make it faster and more intuitive than 'edge collapse'-based decimation methods for volumetric meshes [3, 19, 21, 22]. Finally, we discuss how this approach can be employed for rapid LoD prototyping of large time-varying datasets as an aid to interactive visualization.
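The negative-volume ("flipping") test mentioned in the abstract reduces to a signed-volume sign check per affected tetrahedron. A minimal sketch, with illustrative function names not taken from the paper:

```python
import numpy as np

def signed_volume(a, b, c, d):
    """Signed volume of the tetrahedron (a, b, c, d)."""
    a, b, c, d = (np.asarray(p, float) for p in (a, b, c, d))
    return float(np.linalg.det(np.stack([b - a, c - a, d - a]))) / 6.0

def flips(tet_before, tet_after):
    """True if a decimation step inverts a tetrahedron's orientation.

    A simplification operation that makes any surviving tet's signed
    volume change sign (or vanish) would create an inverted element
    and should be rejected.
    """
    return signed_volume(*tet_before) * signed_volume(*tet_after) <= 0.0
```

A decimation step would run this check over every tetrahedron incident to the moved vertices before committing the operation.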
A Stream Algorithm for the Decimation of Massive Meshes
, 2003
Abstract

Cited by 23 (1 self)
We present an out-of-core mesh decimation algorithm that is able to handle input and output meshes of arbitrary size. The algorithm reads the input from a data stream in a single pass and writes the output to another stream while using only a fixed-size in-core buffer. By applying randomized multiple-choice optimization, we are able to use incremental mesh decimation based on edge collapses and the quadric error metric. The quality of our results is comparable to state-of-the-art high-quality mesh decimation schemes (which are slower than our algorithm), and the decimation performance matches the performance of the most efficient out-of-core techniques (which generate meshes of inferior quality).
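The quadric error metric the abstract refers to is the standard Garland–Heckbert construction: each triangle contributes a rank-one plane quadric, a vertex's quadric is the sum over its incident triangles, and the collapse cost is the quadratic form evaluated at the merged vertex position. A minimal sketch:

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental quadric K = v v^T for the triangle's supporting plane,
    where v = [a, b, c, d] is the unit plane (ax + by + cz + d = 0)."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    v = np.append(n, d)          # homogeneous plane coefficients
    return np.outer(v, v)        # 4x4 symmetric quadric

def collapse_cost(Q, pos):
    """Quadric error (sum of squared plane distances) at position `pos`."""
    v = np.append(pos, 1.0)
    return float(v @ Q @ v)
```

In a full decimator, `Q` for an edge is the sum of the two endpoint quadrics, and `pos` is either an endpoint, the midpoint, or the minimizer of the quadratic form.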
Content-Based Retrieval of 3D Models
 ACM Trans. Multimedia Computing Comm. Applications
, 2006
Abstract

Cited by 22 (0 self)
In the past few years, there has been an increasing availability of technologies for the acquisition of digital 3D models of real objects and the consequent use of these models in a variety of applications, in medicine, engineering, and cultural heritage. In this framework, content-based retrieval of 3D objects is becoming an important subject of research, and finding adequate descriptors to capture global or local characteristics of the shape has become one of the main investigation goals. In this article, we present a comparative analysis of a few different solutions for description and retrieval by similarity of 3D models that are representative of the principal classes of approaches proposed. We have developed an experimental analysis by comparing these methods according to their robustness to deformations, the ability to capture an object’s structural complexity, and the resolution at which models are considered.
Survey on Semi-Regular Multiresolution Models for Interactive Terrain Rendering
Abstract

Cited by 21 (1 self)
Rendering high-quality digital terrains at interactive rates requires carefully crafted algorithms and data structures able to balance the competing requirements of realism and frame rates, while taking into account the memory and speed limitations of the underlying graphics platform. In this survey, we analyze multiresolution approaches that exploit a certain semi-regularity of the data. These approaches have produced some of the most efficient systems to date. After providing a short background and motivation for the methods, we focus on illustrating models based on tiled blocks and nested regular grids, quadtrees and triangle bintree triangulations, as well as cluster-based approaches. We then discuss LOD error metrics and system-level data management aspects of interactive terrain visualization, including dynamic scene management, out-of-core data organization and compression, as well as numerical accuracy.
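A representative LOD error metric of the kind surveyed projects a node's object-space error bound to screen space and refines while the projection exceeds a pixel tolerance. A generic sketch, not tied to any particular surveyed system:

```python
import math

def needs_refinement(geom_error, distance, fov_y, viewport_h, pixel_tol):
    """Standard screen-space error test for terrain LOD refinement.

    Projects an object-space error bound (world units) to pixels for a
    node at `distance` from the eye, given the vertical field of view
    and viewport height; refine the node while the projected error
    exceeds the pixel tolerance.
    """
    pixels = geom_error * viewport_h / (2.0 * distance * math.tan(fov_y / 2.0))
    return pixels > pixel_tol
```

The same test drives both quadtree and bintree refinement: traversal descends into a node exactly when this predicate holds.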
Permission Grids: Practical, Error-Bounded Simplification
 ACM Transactions on Graphics
, 2002
Abstract

Cited by 21 (1 self)
We introduce the permission grid, a spatial occupancy grid used to guide almost any standard polygonal surface simplification algorithm into generating an approximation with a guaranteed geometric error bound. In particular, the distance between any point on the approximation and the original surface is bounded by a user-specified tolerance. Such bounds are notably absent from most current simplification methods, and are becoming increasingly important for applications such as collision detection and scientific computing. Conceptually simple, the permission grid defines a volume in which the approximation must lie, and does not permit the underlying simplification algorithm to generate approximations outside of this volume. The permission grid makes three important, practical improvements over current error-bounded simplification methods. First, it works on arbitrary triangular models, handling all manner of mesh degeneracies gracefully. Further, the error tolerance may be expanded as simplification proceeds, allowing the construction ...
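The core occupancy test can be sketched as follows. This is a simplified, point-sampled version under assumed names: a real implementation would conservatively rasterize triangles into the voxels they overlap rather than sample points.

```python
import numpy as np

class PermissionGrid:
    """Boolean voxel grid; simplified geometry must stay inside it.

    Voxels within the user tolerance of the original surface are
    marked permitted; a candidate triangle passes only if every voxel
    it occupies is permitted. Sketch only: occupancy is tested at
    sample points, not by exact triangle/voxel overlap.
    """
    def __init__(self, shape, cell):
        self.grid = np.zeros(shape, dtype=bool)
        self.cell = cell  # voxel edge length in world units

    def _index(self, p):
        return tuple((np.asarray(p) / self.cell).astype(int))

    def permit(self, points):
        """Mark the voxels containing `points` as permitted."""
        for p in points:
            self.grid[self._index(p)] = True

    def allows(self, points):
        """True if every sample point lies in a permitted voxel."""
        return all(self.grid[self._index(p)] for p in points)
```

A simplification operation (e.g. an edge collapse) is accepted only if the resulting triangles pass `allows`, which is what keeps the approximation inside the tolerance volume.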
Mining Scale-free Networks Using Geodesic Clustering
 In Proc. 10th ACM SIGKDD Int. Conf.
, 2004
Abstract

Cited by 18 (2 self)
Many real-world graphs have been shown to be scale-free: vertex degrees follow power-law distributions, vertices tend to cluster, and the average length of all shortest paths is small. We present a new model for understanding scale-free networks based on multi-level geodesic approximation, using a new data structure called a multi-level mesh. Using this ...
Fast Mesh Decimation by Multiple-Choice Techniques
, 2002
Abstract

Cited by 17 (1 self)
We present a new mesh decimation framework which is based on the probabilistic optimization technique of Multiple-Choice algorithms. While producing the same expected quality of the output meshes, the Multiple-Choice approach leads to a significant speedup compared to the well-established standard framework for mesh decimation as a greedy optimization scheme. Moreover, Multiple-Choice decimation does not require a global priority queue data structure, which reduces the memory overhead and simplifies the algorithmic structure. We explain why and how the Multiple-Choice optimization works well for the mesh decimation problem and give a detailed CPU profile analysis to explain where the speedup comes from.
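The multiple-choice step itself is compact: instead of maintaining a global priority queue over all edges, each decimation step draws a small random sample of candidate edges and collapses the cheapest one. A minimal sketch with illustrative names:

```python
import random

def multiple_choice_collapse(edges, cost, d=8, rng=random):
    """Pick d random candidate edges and return the cheapest.

    Replaces a global priority queue: each decimation step samples a
    small random subset and greedily takes its best member. With a
    modest d the expected output quality is close to fully greedy
    selection, at a fraction of the bookkeeping cost.
    """
    candidates = rng.sample(edges, min(d, len(edges)))
    return min(candidates, key=cost)
```

The caller would collapse the returned edge (skipping it if the collapse is topologically illegal) and repeat until the target complexity is reached; no queue updates are needed after each collapse.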
Marching Intersections: an Efficient Resampling Algorithm for Surface Management
 In Proceedings of Shape Modeling International (SMI
, 2001
Abstract

Cited by 16 (0 self)
The paper presents a simple and efficient algorithm for the removal of small topological inconsistencies and high-frequency details from surface models. The method, called Marching Intersections (MI), adopts a volumetric approach and acts as a resampling filter: all the intersection points between the input model and the lines of a user-selected 3D reference grid are located and then, beginning from these intersections, an output surface is reconstructed. MI, which presents good characteristics in terms of efficiency, compactness, and quality of the output models, can also be used: for the conversion between different representation schemes; to perform logical operations on geometric models; for the topological simplification of surfaces; and for the simplification of huge meshes, i.e. meshes too large to be allocated in main memory during the simplification process. All these aspects are discussed in the paper, and timing and graphic results are presented.
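The grid-line sampling at the heart of MI can be sketched for one vertical grid line: solve for the barycentric coordinates of the line's XY position inside each triangle's XY projection and, on a hit, interpolate the z value. An illustrative sketch, not the paper's implementation:

```python
import numpy as np

def line_triangle_z(x, y, tri):
    """Intersection z of the vertical line (x, y, t) with a triangle, or None.

    Solves p0 + u*(p1 - p0) + v*(p2 - p0) = (x, y, z) in the XY plane;
    a valid hit needs u >= 0, v >= 0, u + v <= 1. One such test runs
    per triangle and per grid line; the hits collected along each line
    drive the surface reconstruction.
    """
    p0, p1, p2 = (np.asarray(p, float) for p in tri)
    A = np.array([[p1[0] - p0[0], p2[0] - p0[0]],
                  [p1[1] - p0[1], p2[1] - p0[1]]])
    if abs(np.linalg.det(A)) < 1e-12:      # triangle edge-on to the line
        return None
    u, v = np.linalg.solve(A, np.array([x - p0[0], y - p0[1]]))
    if u < 0 or v < 0 or u + v > 1:        # line misses the triangle
        return None
    return p0[2] + u * (p1[2] - p0[2]) + v * (p2[2] - p0[2])
```

The full method repeats this along all three axis directions and sorts the hits per line before reconstructing the output surface.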
A Multi-Resolution Topological Representation for Non-Manifold Meshes
 In 7th ACM Symposium on Solid Modeling and Applications
, 2004
Abstract

Cited by 15 (9 self)
We address the problem of representing and processing 3D objects, described through simplicial meshes, which consist of parts of mixed dimensions and with a non-manifold topology, at different levels of detail. First, we describe a multiresolution model, which we call a Non-manifold Multi-Tessellation (NMT), and we consider the selective refinement query, which is at the heart of several analysis operations on multiresolution meshes. Next, we focus on a specific instance of an NMT, generated by simplifying simplicial meshes based on vertex-pair contraction, and we describe a compact data structure for encoding such a model. We also propose a new data structure for two-dimensional simplicial meshes, capable of representing both connectivity and adjacency information with a small memory overhead, which is used to describe the mesh extracted from an NMT through selective refinement. Finally, we present algorithms to efficiently perform updates on such a data structure.