Results 1–10 of 42
Real-time shape editing using radial basis functions
Computer Graphics Forum, 2005
"... Current surfacebased methods for interactive freeform editing of high resolution 3D models are very powerful, but at the same time require a certain minimum tessellation or sampling quality in order to guarantee sufficient robustness. In contrast to this, space deformation techniques do not depend ..."
Abstract

Cited by 44 (8 self)
Current surface-based methods for interactive free-form editing of high-resolution 3D models are very powerful, but at the same time require a certain minimum tessellation or sampling quality in order to guarantee sufficient robustness. In contrast to this, space deformation techniques do not depend on the underlying surface representation and hence are affected neither by its complexity nor by its quality aspects. However, while high-quality deformations can be derived from variational optimization analogously to surface-based methods, the major drawback lies in the computation and evaluation, which is considerably more expensive for volumetric space deformations. In this paper we present techniques which allow us to use triharmonic radial basis functions for real-time free-form shape editing. An incremental least-squares method enables us to approximately solve the involved linear systems in a robust and efficient manner, and by precomputing a special set of deformation basis functions we are able to significantly reduce the per-frame costs. Moreover, evaluating these linear basis functions on the GPU finally allows us to deform highly complex polygon meshes or point-based models at a rate of 30M vertices or 13M splats per second, respectively.
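The core of such a space deformation can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the paper's method: the function name is ours, the linear polynomial term a full triharmonic RBF setup would include is omitted, and the system is solved directly rather than with the paper's incremental least-squares solver.

```python
import numpy as np

def rbf_deform(handles, displacements, vertices):
    """Space deformation with triharmonic RBFs, phi(r) = r^3 in 3D.

    handles:       (m, 3) constraint point positions
    displacements: (m, 3) prescribed displacements at the handles
    vertices:      (n, 3) points of the model to deform
    """
    phi = lambda r: r ** 3
    # Interpolation matrix from pairwise handle distances.
    A = phi(np.linalg.norm(handles[:, None] - handles[None, :], axis=-1))
    # One weight vector per coordinate axis (solved jointly).
    w = np.linalg.solve(A, displacements)
    # Evaluate the deformation field at every vertex.
    B = phi(np.linalg.norm(vertices[:, None] - handles[None, :], axis=-1))
    return vertices + B @ w
```

Because the same basis matrix B is reused for every frame of an edit, precomputing it (as the paper does with its deformation basis functions) moves almost all per-frame cost into a single matrix product.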
A surface-growing approach to multi-view stereo reconstruction
In CVPR, 2007
"... We present a new approach to reconstruct the shape of a 3D object or scene from a set of calibrated images. The central idea of our method is to combine the topological flexibility of a pointbased geometry representation with the robust reconstruction properties of scenealigned planar primitives. ..."
Abstract

Cited by 33 (1 self)
We present a new approach to reconstruct the shape of a 3D object or scene from a set of calibrated images. The central idea of our method is to combine the topological flexibility of a point-based geometry representation with the robust reconstruction properties of scene-aligned planar primitives. This can be achieved by approximating the shape with a set of surface elements (surfels) in the form of planar disks which are independently fitted such that their footprints in the input images match. Instead of using an artificial energy functional to promote the smoothness of the recovered surface during fitting, we use the smoothness assumption only to initialize planar primitives and to check the feasibility of the fitting result. After an initial disk has been found, the recovered region is iteratively expanded by growing further disks in the tangent direction. The expansion stops when a disk rotates by more than a given threshold during the fitting step. A global sampling strategy guarantees that eventually the whole surface is covered. Our technique does not depend on a shape prior or silhouette information for the initialization, and it can automatically and simultaneously recover geometry, topology, and visibility information, which makes it superior to other state-of-the-art techniques. We demonstrate with several high-quality reconstruction examples that our algorithm performs robustly and is tolerant to a wide range of image capture modalities.
Accurate, dense and robust multi-view stereopsis
2007
"... Abstract: This paper proposes a novel algorithm for calibrated multiview stereopsis that outputs a (quasi) dense set of rectangular patches covering the surfaces visible in the input images. This algorithm does not require any initialization in the form of a bounding volume, and it detects and disc ..."
Abstract

Cited by 16 (0 self)
Abstract: This paper proposes a novel algorithm for calibrated multi-view stereopsis that outputs a (quasi) dense set of rectangular patches covering the surfaces visible in the input images. This algorithm does not require any initialization in the form of a bounding volume, and it automatically detects and discards outliers and obstacles. It does not perform any smoothing across nearby features, yet is currently the top performer in terms of both coverage and accuracy for four of the six benchmark datasets presented in [20]. The keys to its performance are effective techniques for enforcing local photometric consistency and global visibility constraints. Stereopsis is implemented as a match, expand, and filter procedure, starting from a sparse set of matched keypoints and repeatedly expanding these to nearby pixel correspondences before using visibility constraints to filter away false matches. A simple but effective method for turning the resulting patch model into a mesh appropriate for image-based modeling is also presented. The proposed approach is demonstrated on various datasets including objects with fine surface details, deep concavities, and thin structures, outdoor scenes observed from a restricted set of viewpoints, and "crowded" scenes where moving obstacles appear in different places in multiple images of a static structure of interest.
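The local photometric consistency such patch-based methods enforce is commonly scored with normalized cross-correlation between the patch's projections into two images. A minimal sketch of that score (function name ours, operating on patches already sampled into arrays; the full method would sample the patch through the camera projections):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equally sized pixel patches.

    Returns a value in [-1, 1]: 1 for identical appearance up to an
    affine intensity change, -1 for inverted appearance.
    """
    a = patch_a.ravel() - patch_a.mean()
    b = patch_b.ravel() - patch_b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0
```

A patch is typically accepted during the expand step only if this score exceeds a threshold in enough views, and filtered otherwise.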
Anisotropic point set surfaces
In AfriGraph ’06: Proceedings of the 4th International Conference on Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa, ACM, 2006
"... Point Set Surfaces define smooth surfaces from regular samples based on weighted averaging of the points. Because weighting is done based on a spatial scale parameter, point set surfaces apply basically only to regular samples. We suggest to attach individual weight functions to each sample rather t ..."
Abstract

Cited by 13 (0 self)
Point Set Surfaces define smooth surfaces from regular samples based on weighted averaging of the points. Because weighting is done based on a spatial scale parameter, point set surfaces essentially apply only to regular samples. We suggest attaching individual weight functions to each sample rather than to the location in space. This extends Point Set Surfaces to irregular settings, including anisotropic sampling that adjusts to the principal curvatures of the surface. In particular, we describe how to represent surfaces with ellipsoidal weight functions per sample. We discuss the details of deriving such a representation from typical inputs and of computing points on the surface.
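The per-sample ellipsoidal weighting idea can be illustrated with a small sketch. This is our own minimal reading, not the paper's exact formulation: each sample carries a local orthonormal frame and per-axis radii, the weight is a Gaussian over the resulting ellipsoidal distance, and the function names are ours.

```python
import numpy as np

def ellipsoidal_weight(x, center, frame, radii):
    """Anisotropic weight of one sample at query point x.

    frame: (3, 3) orthonormal rows spanning the sample's local axes
    radii: (3,) extents of the ellipsoid along those axes
    """
    d = frame @ (x - center)          # coordinates in the sample's frame
    return np.exp(-np.sum((d / radii) ** 2))

def weighted_centroid(x, points, frames, radii):
    """Centroid of the samples under their individual weight functions,
    the basic averaging step of a point set surface definition."""
    w = np.array([ellipsoidal_weight(x, p, F, r)
                  for p, F, r in zip(points, frames, radii)])
    return (w[:, None] * points).sum(axis=0) / w.sum()
```

Replacing the single global scale parameter of standard Point Set Surfaces with these per-sample ellipsoids is what lets the weighting follow anisotropic, irregular sampling.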
DuoDecim – A Structure for Point Scan Compression and Rendering
2005
"... In this paper we present a compression scheme for large point scans including perpoint normals. For the encoding of such scans we introduce a particular type of closest sphere packing grids, the hexagonal close packing (HCP). HCP grids provide a structure for an optimal packing of 3D space, and for ..."
Abstract

Cited by 10 (1 self)
In this paper we present a compression scheme for large point scans including per-point normals. For the encoding of such scans we introduce a particular type of closest sphere packing grids, the hexagonal close packing (HCP). HCP grids provide a structure for an optimal packing of 3D space, and for a given sampling error they result in a minimal number of cells if geometry is sampled into these grids. To compress the data, we extract linear sequences (runs) of filled cells in HCP grids. The problem of determining optimal runs is turned into a graph theoretical one. Point positions and normals in these runs are incrementally encoded. At a grid spacing close to the point sampling distance, the compression scheme only requires slightly more than 3 bits per point position. Incrementally encoded per-point normals are quantized at high fidelity using only 5 bits per normal (see Figure 1). The compressed data stream can be decoded in the graphics processing unit (GPU). Decoded point positions are saved in graphics memory, and they are then used on the GPU again to render point primitives. In this way we render gigantic point scans from their compressed representation in local GPU memory at interactive frame rates (see Figure 2).
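The HCP lattice underlying this scheme has a simple closed form for cell centers. The sketch below uses the standard HCP lattice coordinates (it is not taken from the paper, and the function name is ours); every cell center sits at distance 2r from its nearest neighbors, which is what makes the packing optimal for a given sampling error.

```python
import numpy as np

def hcp_center(i, j, k, r=1.0):
    """Center of cell (i, j, k) in a hexagonal close packing of
    spheres with radius r, using the standard HCP lattice layout:
    rows shifted within a layer, layers shifted in an ABAB stacking."""
    x = (2 * i + (j + k) % 2) * r
    y = np.sqrt(3.0) * (j + (k % 2) / 3.0) * r
    z = (2.0 * np.sqrt(6.0) / 3.0) * k * r
    return np.array([x, y, z])
```

Runs of filled cells along a lattice direction then reduce each point position to a small index delta, which is why the scheme gets close to 3 bits per position.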
Interpolatory refinement for real-time processing of point-based geometry
Computer Graphics Forum, 2005
"... The point set is a flexible surface representation suitable for both geometry processing and realtime rendering. In most applications, the control of the point cloud density is crucial and being able to refine a set of points appears to be essential. In this paper, we present a new interpolatory re ..."
Abstract

Cited by 10 (3 self)
The point set is a flexible surface representation suitable for both geometry processing and real-time rendering. In most applications, control of the point cloud density is crucial, and being able to refine a set of points is essential. In this paper, we present a new interpolatory refinement framework for point-based geometry. First, we carefully select an appropriate one-ring neighborhood around the central interpolated point. Then new points are locally inserted where the density is too low using a √3-like refinement procedure, and they are displaced onto the corresponding curved Point Normal (PN) triangle. Thus, a smooth surface is reconstructed by combining the smoothing property produced by the rotational effect of √3-like refinements with the point/normal interpolation of PN triangles. In addition, we show how to handle sharp features and how our algorithm naturally fills large holes in the geometry. Finally, we illustrate the robustness of our approach, its real-time capabilities, and the smoothness of the reconstructed surface on a large set of input models, including irregular and sparse point clouds.
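The PN-triangle displacement step can be made concrete. The sketch below evaluates the cubic point-normal patch of Vlachos et al. from three positions and unit normals, which is where a newly inserted √3 point would be placed; the function name and array layout are our own.

```python
import numpy as np

def pn_point(P, N, u, v):
    """Evaluate a curved point-normal (PN) triangle at barycentric
    coordinates (u, v, w = 1 - u - v).  P: (3, 3) vertex positions,
    N: (3, 3) unit vertex normals (P[0] pairs with u, P[1] with v)."""
    w = 1.0 - u - v
    P1, P2, P3 = P
    N1, N2, N3 = N

    def edge(Pi, Pj, Ni):
        # Edge control point, projected into Pi's tangent plane.
        return (2 * Pi + Pj - np.dot(Pj - Pi, Ni) * Ni) / 3

    b210, b120 = edge(P1, P2, N1), edge(P2, P1, N2)
    b021, b012 = edge(P2, P3, N2), edge(P3, P2, N3)
    b102, b201 = edge(P3, P1, N3), edge(P1, P3, N1)
    E = (b210 + b120 + b021 + b012 + b102 + b201) / 6
    V = (P1 + P2 + P3) / 3
    b111 = E + (E - V) / 2            # lift the center control point
    return (P1 * u**3 + P2 * v**3 + P3 * w**3
            + 3 * (b210 * u**2 * v + b120 * u * v**2
                   + b021 * v**2 * w + b012 * v * w**2
                   + b102 * u * w**2 + b201 * u**2 * w)
            + 6 * b111 * u * v * w)
```

For a flat configuration (coplanar points, common normal) the patch reduces to the triangle itself; curved normals bend the inserted points off the plane, giving the smooth interpolated surface.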
Interpolatory Point Set Surfaces – Convexity and Hermite Data
ACM Transactions on Graphics, 2007
"... Point Set Surfaces define a (typically) manifold surface from a set of scattered points. The definition involves weighted centroids and a gradient field. The data points are interpolated if singular weight functions are used to define the centroids. While this way of deriving an interpolatory scheme ..."
Abstract

Cited by 9 (0 self)
Point Set Surfaces define a (typically) manifold surface from a set of scattered points. The definition involves weighted centroids and a gradient field. The data points are interpolated if singular weight functions are used to define the centroids. While this way of deriving an interpolatory scheme appears natural, we show that it has two deficiencies: convexity of the input is not preserved, and the extension to Hermite data is numerically unstable. We present a generalization of the standard scheme that we call Hermite Point Set Surface. It allows interpolating given normal constraints in a stable way. In addition, it yields an intuitive parameter for shape control and preserves convexity in most situations.
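The singular-weight centroid that makes the standard scheme interpolatory is easy to illustrate. A minimal sketch under our own assumptions (inverse-squared-distance weights, function name ours; the full definition also involves the gradient field, which is omitted here):

```python
import numpy as np

def singular_centroid(x, points, eps=1e-12):
    """Weighted centroid with singular weights w_i = 1 / ||x - p_i||^2.

    As x approaches a data point, that point's weight diverges and
    dominates the average, which makes the scheme interpolatory."""
    points = np.asarray(points)
    d2 = np.sum((points - x) ** 2, axis=1)
    if np.any(d2 < eps):              # exactly at a sample: return it
        return points[np.argmin(d2)]
    w = 1.0 / d2
    return (w[:, None] * points).sum(axis=0) / w.sum()
```

The divergence of the weight near a sample is exactly the numerical sensitivity the paper identifies: small perturbations near a data point swing the average sharply, which is what destabilizes the naive Hermite extension.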
Point-Sampled Cell Complexes
"... A piecewise smooth surface, possibly with boundaries, sharp edges, corners, or other features is defined by a set of samples. The basic idea is to model surface patches, curve segments and points explicitly, and then to glue them together based on explicit connectivity information. The geometry is d ..."
Abstract

Cited by 9 (1 self)
A piecewise smooth surface, possibly with boundaries, sharp edges, corners, or other features is defined by a set of samples. The basic idea is to model surface patches, curve segments and points explicitly, and then to glue them together based on explicit connectivity information. The geometry is defined as the set of stationary points of a projection operator, which is generalized to allow modeling curves with samples, and extended to account for the connectivity information. Additional tangent constraints can be used to model shapes with continuous tangents across edges and corners.
Point-based Stream Surfaces and Path Surfaces
In Proceedings of Graphics Interface 2007
"... Figure 1: Visualization of the flow field of a tornado with: (left) a pointbased stream surface; (right) the combination of a stream surface and texturebased flow visualization to show the vector field within the surface. Each stream surface is seeded along a straight line in the center of the res ..."
Abstract

Cited by 8 (1 self)
Figure 1: Visualization of the flow field of a tornado with: (left) a point-based stream surface; (right) the combination of a stream surface and texture-based flow visualization to show the vector field within the surface. Each stream surface is seeded along a straight line in the center of the respective image. We introduce a point-based algorithm for computing and rendering stream surfaces and path surfaces of a 3D flow. The points are generated by particle tracing, and an even distribution of those particles on the surfaces is achieved by selective particle removal and creation. Texture-based surface flow visualization is added to show inner flow structure on those surfaces. We demonstrate that our visualization method is designed for steady and unsteady flow alike: both the path surface component and the texture-based flow representation are capable of processing time-dependent data. Finally, we show that our algorithms lend themselves to an efficient GPU implementation that allows the user to interactively visualize and explore stream surfaces and path surfaces, even when seed curves are modified and even for time-dependent vector fields.
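The particle-tracing step that generates the surface points is ordinary numerical integration of the flow. A minimal sketch using a fourth-order Runge-Kutta step for a steady field (our own illustration; the paper's method additionally removes and creates particles to keep the surface evenly sampled):

```python
import numpy as np

def advect_rk4(positions, velocity, dt):
    """Advance particles one RK4 step through a steady vector field.

    positions: (n, 3) particle positions
    velocity:  callable mapping an (n, 3) array to (n, 3) velocities
    """
    k1 = velocity(positions)
    k2 = velocity(positions + 0.5 * dt * k1)
    k3 = velocity(positions + 0.5 * dt * k2)
    k4 = velocity(positions + dt * k3)
    return positions + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Seeding a line of particles and repeatedly applying such a step sweeps out the point set of a stream surface; for unsteady flow the velocity callable would also take the current time.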
Anisotropic Sampling for Differential Point Rendering of Implicit Surfaces
In WSCG (Winter School of Computer Graphics), 2005
"... In this paper, we propose a solution to adapt the differential point rendering technique developed by Kalaiah and Varshney to implicit surfaces. Differential point rendering was initially designed for parametric surfaces as a twostage sampling process that strongly relies on an adjacency relationshi ..."
Abstract

Cited by 7 (2 self)
In this paper, we propose a solution for adapting the differential point rendering technique developed by Kalaiah and Varshney to implicit surfaces. Differential point rendering was initially designed for parametric surfaces as a two-stage sampling process that strongly relies on an adjacency relationship between the samples, which does not naturally exist for implicit surfaces. This made it particularly challenging to adapt the technique to implicit surfaces. To overcome this difficulty, we extended the particle sampling technique developed by Witkin and Heckbert to locally account for the principal curvature directions of the implicit surface. The final result of our process is a curvature-driven anisotropic sampling in which each sample "rules" a rectangular or elliptical surrounding domain and is oriented according to the directions of maximal and minimal curvature. As in the differential point rendering technique, these samples can then be efficiently rendered using a specific shader on a programmable GPU.
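The basic operation in Witkin-Heckbert style particle sampling is keeping particles on the zero set of the implicit function. A minimal sketch of that constraint step as a Newton-like projection along the gradient (our own illustration and naming; the full technique also adds repulsion forces between particles, and here curvature-aware anisotropy, which are omitted):

```python
import numpy as np

def project_to_surface(x, f, grad_f, iters=20):
    """Pull a point onto the zero set of f with Newton steps along
    the gradient: x <- x - f(x) * grad_f(x) / ||grad_f(x)||^2."""
    for _ in range(iters):
        g = grad_f(x)
        x = x - f(x) * g / np.dot(g, g)
    return x
```

Each particle in the sampler alternates between such a projection and tangential motion (repulsion, or curvature-aligned spacing in the anisotropic variant), so the population spreads over the surface while staying on it.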