Results 11–20 of 227
Semi-Regular Mesh Extraction from Volumes
, 2000
Abstract

Cited by 104 (13 self)
We present a novel method to extract isosurfaces from distance volumes. It generates high quality semi-regular multiresolution meshes of arbitrary topology. Our technique proceeds in two stages. First, a very coarse mesh with guaranteed topology is extracted. Subsequently an iterative multiscale force-based solver refines the initial mesh into a semi-regular mesh with geometrically adaptive sampling rate and good aspect ratio triangles. The coarse mesh extraction is performed using a new approach we call surface wavefront propagation. A set of discrete isodistance ribbons are rapidly built and connected while respecting the topology of the isosurface implied by the data. Subsequent multiscale refinement is driven by a simple force-based solver designed to combine good isosurface fit and high quality sampling through reparameterization. In contrast to the Marching Cubes technique, our output meshes adapt gracefully to the isosurface geometry, have a natural multiresolution structure and good aspect ratio triangles, as demonstrated with a number of examples.
Silhouette Clipping
, 2000
Abstract

Cited by 102 (8 self)
Approximating detailed models with coarse, texture-mapped meshes results in polygonal silhouettes. To eliminate this artifact, we introduce silhouette clipping, a framework for efficiently clipping the rendering of coarse geometry to the exact silhouette of the original model. The coarse mesh is obtained using progressive hulls, a novel representation with the nesting property required for proper clipping. We describe an improved technique for constructing texture and normal maps over this coarse mesh. Given a perspective view, silhouettes are efficiently extracted from the original mesh using a precomputed search tree. Within the tree, hierarchical culling is achieved using pairs of anchored cones. The extracted silhouette edges are used to set the hardware stencil buffer and alpha buffer, which in turn clip and antialias the rendered coarse geometry. Results demonstrate that silhouette clipping can produce renderings of similar quality to high-resolution meshes in less rendering time.
Geometric approximation via coresets
 COMBINATORIAL AND COMPUTATIONAL GEOMETRY, MSRI
, 2005
Abstract

Cited by 83 (9 self)
The paradigm of coresets has recently emerged as a powerful tool for efficiently approximating various extent measures of a point set P. Using this paradigm, one quickly computes a small subset Q of P, called a coreset, that approximates the original set P, and then solves the problem on Q using a relatively inefficient algorithm. The solution for Q is then translated to an approximate solution to the original point set P. This paper describes the ways in which this paradigm has been successfully applied to various optimization and extent measure problems.
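The extent-measure idea in the abstract above can be made concrete with a tiny, illustrative sketch (our own toy, not the paper's algorithm; all names are hypothetical): keep only the extreme points of P along a handful of sampled directions, then answer an extent query such as diameter on the small subset Q.

```python
import math

def directional_coreset(points, k=16):
    """Illustrative extent coreset: keep the two extreme points of the
    set along each of k sampled directions.  The directional width (and
    hence the diameter) of the small subset Q approximates that of P."""
    coreset = set()
    for i in range(k):
        theta = math.pi * i / k                    # directions span a half-circle
        dx, dy = math.cos(theta), math.sin(theta)
        proj = [(x * dx + y * dy, (x, y)) for x, y in points]
        coreset.add(max(proj)[1])                  # extreme point in +direction
        coreset.add(min(proj)[1])                  # extreme point in -direction
    return list(coreset)

def diameter(points):
    """Brute-force diameter; run it on the small coreset, not the full set."""
    return max(math.dist(p, q) for p in points for q in points)
```

The coreset has at most 2k points regardless of |P|, so the quadratic diameter scan becomes cheap, at the cost of a small, direction-resolution-dependent error.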
Segmenting Time Series: A Survey and Novel Approach
 In an Edited Volume, Data mining in Time Series Databases. Published by World Scientific
, 1993
Abstract

Cited by 82 (0 self)
In recent years, there has been an explosion of interest in mining time series databases. As with most computer science problems, representation of the data is the key to efficient and effective solutions. One of the most commonly used representations is piecewise linear approximation. This representation has been used by various researchers to support clustering, classification, indexing and association rule mining of time series data. A variety of algorithms have been proposed to obtain this representation, with several algorithms having been independently rediscovered several times. In this paper, we undertake the first extensive review and empirical comparison of all proposed techniques. We show that all these algorithms have fatal flaws from a data mining perspective. We introduce a novel algorithm that we empirically show to be superior to all others in the literature.
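One of the classic algorithms such surveys compare is bottom-up piecewise linear segmentation; the following is a minimal, illustrative sketch of that idea (names are our own, not from the paper): start from the finest segmentation and greedily merge the cheapest adjacent pair until every remaining merge would exceed the error budget.

```python
def linear_fit_error(ts, start, end):
    """Sum of squared residuals of the least-squares line over ts[start..end]."""
    n = end - start + 1
    if n < 3:
        return 0.0  # two points always fit a line exactly
    xs = range(start, end + 1)
    ys = ts[start:end + 1]
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx if sxx else 0.0
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))

def bottom_up_segment(ts, max_error):
    """Greedy bottom-up segmentation: repeatedly merge the cheapest
    adjacent pair of segments while the merged fit stays within max_error."""
    segs = [[i, i + 1] for i in range(0, len(ts) - 1, 2)]
    if segs and segs[-1][1] != len(ts) - 1:
        segs[-1][1] = len(ts) - 1  # absorb a trailing odd point
    while len(segs) > 1:
        costs = [linear_fit_error(ts, segs[i][0], segs[i + 1][1])
                 for i in range(len(segs) - 1)]
        i = min(range(len(costs)), key=costs.__getitem__)
        if costs[i] > max_error:
            break  # every remaining merge would exceed the budget
        segs[i][1] = segs[i + 1][1]
        del segs[i + 1]
    return segs
```

On a tent-shaped series (a linear ramp up, then down) the zero-cost merges consume each ramp first, leaving two segments that meet at the peak.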
Progressive lossless compression of arbitrary simplicial complexes
 ACM Trans. Graphics (Proc. ACM SIGGRAPH 2002)
, 2002
Abstract

Cited by 76 (0 self)
Efficient algorithms for compressing geometric data have been widely developed in recent years, but they are mainly designed for closed polyhedral surfaces which are manifold or “nearly manifold”. We propose here a progressive geometry compression scheme which can handle manifold models as well as “triangle soups” and 3D tetrahedral meshes. The method is lossless when the decompression is complete, which is extremely important in some domains such as medical imaging or finite element analysis. While most existing methods enumerate the vertices of the mesh in an order depending on the connectivity, we use a kd-tree technique [8] which does not depend on the connectivity. Then we compute a compatible sequence of meshes which can be encoded using edge expansion [14] and vertex split [24]. The main contributions of this paper are: the idea of using the kd-tree encoding of the geometry to drive the construction of a sequence of meshes, an improved coding of the edge expansion and vertex split since the vertices to split are implicitly defined, a prediction scheme which reduces the code for simplices incident to the split vertex, and a new generalization of the edge expansion operation to tetrahedral meshes.
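The connectivity-independent kd-tree geometry coding of [8] that the abstract leans on can be sketched roughly as follows (a simplified 2D toy on integer coordinates, not the paper's implementation): recursively split the bounding cell and transmit only how many points land in the lower half; a decoder that knows the total count and the split rule recovers every position to cell precision without touching connectivity.

```python
def kd_encode(points, lo, hi, out=None):
    """Emit the per-cell point counts of a kd subdivision: at each split,
    record how many points fall in the lower half-cell.  Each count needs
    only ceil(log2(n+1)) bits, where n is the cell's point count."""
    if out is None:
        out = []
    n = len(points)
    if n <= 1 or (hi[0] - lo[0] <= 1 and hi[1] - lo[1] <= 1):
        return out                                 # cell resolved: nothing to emit
    axis = 0 if hi[0] - lo[0] >= hi[1] - lo[1] else 1  # split the longest side
    mid = (lo[axis] + hi[axis]) // 2
    left = [p for p in points if p[axis] < mid]
    right = [p for p in points if p[axis] >= mid]
    out.append(len(left))                          # the only data transmitted
    hi_l, lo_r = list(hi), list(lo)
    hi_l[axis], lo_r[axis] = mid, mid
    kd_encode(left, lo, hi_l, out)
    kd_encode(right, lo_r, hi, out)
    return out
```

Because the split order is deterministic, the count stream alone fixes the tree shape; no vertex indices or mesh ordering are ever sent.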
Visibility Preprocessing with Occluder Fusion for Urban Walkthroughs
 EUROGRAPHICS WORKSHOP ON RENDERING
, 2000
Abstract

Cited by 72 (8 self)
This paper presents an efficient algorithm for occlusion culling of urban environments. It is conservative and accurate in finding all significant occlusion. It discretizes the scene into view cells, for which cell-to-object visibility is precomputed, making online overhead negligible. Unlike other precomputation methods for view cells, it is able to conservatively compute all forms of occluder interaction for an arbitrary number of occluders. To speed up preprocessing, standard graphics hardware is exploited and occluder occlusion is considered. A walkthrough application running an 8 million polygon model of the city of Vienna on consumer-level hardware illustrates our results.
Algorithms for reverse engineering boundary representation models
 Computer-Aided Design
, 2001
Abstract

Cited by 67 (11 self)
A procedure for reconstructing solid models of conventional engineering objects from a multiple-view, 3D point cloud is described. (Conventional means bounded by simple analytical surfaces, swept surfaces and blends.) Emphasis is put on producing accurate and topologically consistent boundary representation models, ready to be used in computer-aided design and manufacture. The basic phases of our approach to reverse engineering are summarised, and related computational difficulties are analysed. Four key algorithmic components are presented in more detail: efficiently segmenting point data into regions; creating translational and rotational surfaces with smooth, constrained profiles; creating the topology of B-rep models; and finally adding blends. The application of these algorithms in an integrated system is illustrated by means of various examples, including a well-known reverse engineering benchmark.
Near-linear time approximation algorithms for curve simplification
 Proc. of the 10th European Symposium on Algorithms, 2002
, 2002
Abstract

Cited by 64 (8 self)
We consider the problem of approximating a polygonal curve P under a given error criterion by another polygonal curve P′ whose vertices are a subset of the vertices of P. The goal is to minimize the number of vertices of P′ while ensuring that the error between P′ and P is below a certain threshold. We consider two different error measures: Hausdorff and Fréchet. For both error criteria, we present near-linear time approximation algorithms that, given a parameter ε > 0, compute a simplified polygonal curve P′ whose error is less than ε and size at most the size of an optimal simplified polygonal curve with error ε/2. We consider monotone curves in R^2 in the case of the Hausdorff error measure under the uniform distance metric, and arbitrary curves …
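The greedy skeleton behind such near-linear simplification — from each kept vertex, jump to the farthest vertex still within error ε, found by doubling search plus binary search — can be sketched as follows. This is our own toy: it uses a simple max point-to-segment distance as the error oracle, standing in for the paper's Hausdorff/Fréchet oracles, and all names are hypothetical.

```python
import math

def seg_error(P, i, j):
    """Max distance from the interior vertices of P[i..j] to segment
    P[i]P[j] -- a simple stand-in for the paper's error oracles."""
    ax, ay = P[i]
    bx, by = P[j]
    dx, dy = bx - ax, by - ay
    L2 = dx * dx + dy * dy
    worst = 0.0
    for x, y in P[i + 1:j]:
        t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((x - ax) * dx + (y - ay) * dy) / L2))
        worst = max(worst, math.hypot(x - (ax + t * dx), y - (ay + t * dy)))
    return worst

def greedy_simplify(P, eps):
    """From each kept vertex, jump to the farthest vertex reachable
    within error eps, located by doubling search plus binary search."""
    keep = [0]
    i, n = 0, len(P)
    while i < n - 1:
        step = 1                                   # exponential (doubling) phase
        while i + step < n - 1 and seg_error(P, i, i + step) <= eps:
            step *= 2
        lo = i + max(1, step // 2)                 # last jump known to be good
        hi = min(i + step, n - 1)
        while lo < hi:                             # binary-search the frontier
            mid = (lo + hi + 1) // 2
            if seg_error(P, i, mid) <= eps:
                lo = mid
            else:
                hi = mid - 1
        keep.append(lo)
        i = lo
    return keep
```

Each kept vertex is found with O(log n) oracle calls, which is what makes the overall running time near-linear; the approximation guarantee in the abstract additionally depends on properties of the Hausdorff/Fréchet oracles that this toy does not reproduce.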
A Multi-Sensor Approach to Creating Accurate Virtual Environments
, 1998
Abstract

Cited by 55 (10 self)
Creating virtual environment models often requires geometric data from range sensors as well as photometric data from CCD cameras. The model must be geometrically correct, visually realistic, and small enough in size to allow real-time rendering. We present an approach based on 3D range sensor data, multiple CCD cameras, and a colour high-resolution digital still camera. The multiple CCD cameras provide images for a photogrammetric bundle adjustment with constraints. The results of the bundle adjustments are used to register the 3D images from the range sensor in one coordinate system. The images from the high-resolution still camera provide the texture for the final model. The paper describes the system, the techniques for the registration of the 3D images, the building of the efficient geometric model, and the registration and integration of the texture with a simplified geometric model.