Results 1–10 of 85
Visual Simulation of Smoke
, 2001
Abstract

Cited by 264 (20 self)
In this paper, we propose a new approach to numerical smoke simulation for computer graphics applications. The method proposed here exploits physics unique to smoke in order to design a numerical method that is both fast and efficient on the relatively coarse grids traditionally used in computer graphics applications (as compared to the much finer grids used in the computational fluid dynamics literature). We use the inviscid Euler equations in our model, since they are usually more appropriate for gas modeling and less computationally intensive than the viscous Navier-Stokes equations used by others. In addition, we introduce a physically consistent vorticity confinement term to model the small-scale rolling features characteristic of smoke that are absent in most coarse grid simulations. Our model also correctly handles the interaction of smoke with moving objects.
Keywords: Smoke, computational fluid dynamics, Navier-Stokes equations, Euler equations, semi-Lagrangian methods, stable fluids, vorticity confinement, participating media
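The vorticity confinement idea described above can be sketched in a few lines: locate where vorticity is concentrated, then apply a body force that pushes velocity back around those centers. This is a minimal 2D illustration assuming a uniform grid, central differences via NumPy, and a hypothetical strength parameter `eps` standing in for the paper's confinement coefficient; it is a sketch, not the paper's implementation.

```python
import numpy as np

def vorticity_confinement_2d(u, v, h=1.0, eps=0.5):
    """Sketch of a vorticity confinement force on a 2D uniform grid.

    u, v : velocity components sampled on a grid of spacing h
    eps  : confinement strength (assumed parameter for illustration)
    Returns (fx, fy), a body force to add to the momentum equation.
    """
    # Scalar vorticity omega = dv/dx - du/dy (central differences)
    w = np.gradient(v, h, axis=1) - np.gradient(u, h, axis=0)

    # N = normalized gradient of |omega|, pointing toward vortex centers
    eta_x = np.gradient(np.abs(w), h, axis=1)
    eta_y = np.gradient(np.abs(w), h, axis=0)
    mag = np.sqrt(eta_x**2 + eta_y**2) + 1e-10  # avoid division by zero
    nx, ny = eta_x / mag, eta_y / mag

    # f = eps * h * (N x omega); in 2D the cross product reduces to:
    fx = eps * h * ny * w
    fy = -eps * h * nx * w
    return fx, fy
```

The `h` factor makes the added force vanish under grid refinement, which is what makes the term physically consistent rather than an arbitrary energy source.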
Perspective Shadow Maps
 ACM Transactions on Graphics
, 2002
Abstract

Cited by 152 (8 self)
Figure 1: (Left) Uniform 512×512 shadow map and resulting image. (Right) The same with a perspective shadow map of the same size. Shadow maps are probably the most widely used means for the generation of shadows, despite their well-known aliasing problems. In this paper we introduce perspective shadow maps, which are generated in normalized device coordinate space, i.e., after perspective transformation. This results in an important reduction of shadow map aliasing with almost no overhead. We correctly treat light source transformations and show how to include all objects which cast shadows in the transformed space. Perspective shadow maps can directly replace standard shadow maps for interactive hardware-accelerated rendering as well as in high-quality, offline renderers. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation - Bitmap and framebuffer operations; I.3.7 [Computer
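The key change relative to standard shadow mapping is the order of transforms: the scene is first taken into the camera's post-perspective (NDC) space, and only then projected from the light. A minimal sketch, assuming hypothetical 4×4 matrices `M_cam` (camera view-projection) and `M_light_ndc` (a light frustum fitted to the NDC unit cube); this only illustrates the composition order, not the paper's handling of light transformations:

```python
import numpy as np

def perspective_divide(p):
    # Homogeneous 4-vector -> 3D coordinates
    return p[:3] / p[3]

def shadow_map_coord(world_pt, M_cam, M_light_ndc):
    """Map a world-space point into perspective-shadow-map coordinates:
    camera perspective transform first, then the light's projection
    applied in post-perspective (NDC) space."""
    p = np.append(np.asarray(world_pt, float), 1.0)   # to homogeneous
    ndc = np.append(perspective_divide(M_cam @ p), 1.0)
    return perspective_divide(M_light_ndc @ ndc)
```

Because the camera transform enlarges nearby geometry in NDC space, a shadow map rendered there automatically spends more texels on objects close to the eye, which is the source of the aliasing reduction.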
Using Generative Models for Handwritten Digit Recognition
 IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
, 1996
Abstract

Cited by 69 (8 self)
We describe a method of recognizing handwritten digits by fitting generative models that are built from deformable B-splines with Gaussian "ink generators" spaced along the length of the spline. The splines are adjusted using a novel elastic matching procedure based on the Expectation-Maximization (EM) algorithm that maximizes the likelihood of the model generating the data. This approach has many advantages. (1) After identifying the model most likely to have generated the data, the system not only produces a classification of the digit but also a rich description of the instantiation parameters, which can yield information such as the writing style. (2) During the process of explaining the image, generative models can perform recognition-driven segmentation. (3) The method involves a relatively small number of parameters and hence training is relatively easy and fast. (4) Unlike many other recognition schemes, it does not rely on some form of pre-normalization of input images, but can ...
Evaluation and Design of Filters Using a Taylor Series Expansion
 IEEE Transactions on Visualization and Computer Graphics
, 1997
Abstract

Cited by 60 (6 self)
We describe a new method for analyzing, classifying, and evaluating filters that can be applied to interpolation filters as well as to arbitrary derivative filters of any order. Our analysis is based on the Taylor series expansion of the convolution sum. Our analysis shows the need for, and derives the method of, normalization of derivative filter weights. Under certain minimal restrictions of the underlying function, we are able to compute tight absolute error bounds for the reconstruction process. We demonstrate the application of our methods to the analysis of the class of cubic BC-spline filters. As our technique is not restricted to interpolation filters, we are able to show that the Catmull-Rom spline filter and its derivative are the most accurate reconstruction and derivative filters, respectively, among the class of BC-spline filters. We also present a new derivative filter which features better spatial accuracy than any derivative BC-spline filter, and is optimal within our fra...
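The normalization of derivative filter weights mentioned above follows from matching the first terms of the Taylor expansion: a first-derivative filter must annihilate constants (weights sum to 0) and reproduce unit slope on linear ramps (first moment equals 1). A minimal sketch of that normalization, with the function name chosen here for illustration:

```python
import numpy as np

def normalize_first_derivative_filter(w, offsets):
    """Normalize first-derivative filter weights using the Taylor-series
    conditions: sum(w) = 0 (constants map to zero) and the first moment
    sum(m * w) = 1 (linear ramps map to slope 1)."""
    w = np.asarray(w, dtype=float)
    m = np.asarray(offsets, dtype=float)
    assert abs(w.sum()) < 1e-12, "zeroth moment must vanish for a derivative filter"
    return w / (m * w).sum()   # scale so the first moment equals 1
```

For example, the unnormalized central difference [-1, 0, 1] at offsets [-1, 0, 1] has first moment 2, so normalization recovers the familiar weights [-0.5, 0, 0.5].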
Opacity Shadow Maps
 In Proceedings of the 12th Eurographics Workshop on Rendering Techniques
, 2001
Abstract

Cited by 45 (2 self)
Opacity shadow maps approximate light transmittance inside a complex volume with a set of planar opacity maps. A volume made of standard primitives (points, lines, and polygons) is sliced and rendered with graphics hardware to each opacity map, which stores alpha values instead of the traditionally used depth values. The alpha values are sampled in the maps enclosing each primitive point and interpolated for shadow computation. The algorithm is memory efficient and extensively exploits existing graphics hardware. The method is suited for the generation of self-shadows in discontinuous volumes with explicit geometry, such as foliage, fur, and hair. Continuous volumes such as clouds and smoke may also benefit from the approach.
Perceptually Modulated Level of Detail for Virtual Environments
, 1997
Abstract

Cited by 35 (2 self)
This thesis presents a generic and principled solution for optimising the visual complexity of any arbitrary computer-generated virtual environment (VE). This is performed with the ultimate goal of reducing the inherent latencies of current virtual reality (VR) technology. Effectively, we wish to remove extraneous detail from an environment which the user cannot perceive, and thus modulate the graphical complexity of a VE with little or no perceptual artifacts. The work proceeds by investigating contemporary models and theories of visual perception and then applying these to the field of real-time computer graphics. Subsequently, a technique is devised to assess the perceptual content of a computer-generated image in terms of spatial frequency (c/deg), and a model of contrast sensitivity is formulated to describe a user's ability to perceive detail under various conditions in terms of this metric. This allows us to base the level of detail (LOD) of each object in a VE on a measure of ...
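The kind of contrast-sensitivity model the abstract describes maps spatial frequency (c/deg) to visibility, so that detail at frequencies where sensitivity is low can be dropped from an object's LOD. As a stand-in for the thesis's own model, here is the well-known Mannos-Sakrison fit; using it here is an assumption for illustration, not the thesis's formulation:

```python
import math

def contrast_sensitivity(f):
    """Mannos-Sakrison contrast sensitivity fit as a function of
    spatial frequency f in cycles/degree:
        A(f) = 2.6 * (0.0192 + 0.114*f) * exp(-(0.114*f)**1.1)
    Sensitivity peaks at mid frequencies and falls off at both ends.
    """
    return 2.6 * (0.0192 + 0.114 * f) * math.exp(-((0.114 * f) ** 1.1))
```

An LOD controller would then keep only the geometric detail whose projected spatial frequency falls where this curve predicts the user can actually see it.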
Object-Based Coding of Stereo Image Sequences Using Joint 3D Motion/Disparity Segmentation
 in Proc. SPIE Conf. on Visual Communications and Image Processing
, 1995
Abstract

Cited by 34 (22 self)
An object-based coding scheme is proposed for the coding of a stereoscopic image sequence, using motion and disparity information. A hierarchical block-based motion estimation approach is used for initialization, while disparity estimation is performed using a pixel-based hierarchical dynamic programming algorithm. A split-and-merge segmentation procedure based on 3D motion modeling is then used to determine regions with similar motion parameters. The segmentation part of the algorithm is interleaved with the estimation part in order to optimize the coding performance of the procedure. Furthermore, a technique is examined for propagating the segmentation information with time. A 3D motion compensated prediction technique is used for both intensity and depth image sequence coding. Error images and depth maps are encoded using DCT and Huffman methods. Alternatively, an efficient wireframe depth modeling technique may be used to convey depth information to the receiver. Motion and wireframe model parameters are then quantized and transmitted to the decoder, along with the segmentation information. As a straightforward application, the use of the depth map information for the generation of intermediate views at the receiver is also discussed. The performance of the proposed compression methods is evaluated experimentally and is compared to other stereoscopic image sequence coding schemes.
Parallel Volume Visualization on a Hypercube Architecture
 Workshop on Volume Visualization
, 1992
Abstract

Cited by 30 (0 self)
A parallel solution to the visualization of high-resolution volume data is presented. Based on the ray tracing (RT) visualization technique, the system works on a distributed-memory MIMD architecture. A hybrid strategy for ray tracing parallelization is applied, using ray dataflow within an image partition approach. This strategy allows the flexible and effective management of huge datasets on architectures with limited local memory. The dataset is distributed over the nodes using a slice-partitioning technique. The simple data partition chosen implies a straightforward communication pattern among the visualization processes, and this improves both software design and efficiency, while providing deadlock prevention. The partitioning technique used and the network interconnection topology allow for the efficient implementation of a static load balancing technique through pre-rendering of a low-resolution image. Details related to the practical issues involved in the parallelization of volumetric RT are discussed, with particular reference to deadlock and termination issues.
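The slice-partitioning step is simple to illustrate: the volume's slices along one axis are divided as evenly as possible among the nodes. A minimal sketch (function and parameter names are chosen here for illustration):

```python
def slice_partition(num_slices, num_nodes):
    """Divide num_slices volume slices among num_nodes processors as
    evenly as possible. Returns one half-open (start, end) slice-index
    range per node; early nodes absorb any remainder."""
    base, extra = divmod(num_slices, num_nodes)
    ranges, start = [], 0
    for i in range(num_nodes):
        end = start + base + (1 if i < extra else 0)
        ranges.append((start, end))
        start = end
    return ranges
```

Because each node holds a contiguous run of slices, a ray only ever crosses between neighboring partitions, which is what keeps the communication pattern of the ray-dataflow scheme straightforward.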
Easily Adding Animations to Interfaces Using Constraints
 In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST '96)
, 1996
Abstract

Cited by 30 (6 self)
Adding animation to interfaces is a very difficult task with today's toolkits, even though there are many situations in which it would be useful and effective. The Amulet toolkit contains a new form of animation constraint that allows animations to be added to interfaces extremely easily without changing the logic of the application or the graphical objects themselves. An animation constraint detects changes to the value of the slot to which it is attached, and causes the slot to instead take on a series of values interpolated between the original and new values. The advantage over previous approaches is that animation constraints provide significantly better modularity and reuse. The programmer has independent control over the graphics to be animated, the start and end values of the animation, the path through value space, and the timing of the animation. Animations can be attached to any object, even existing widgets from the toolkit, and any type of value can be animated: scalars, ...
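The animation-constraint idea (a slot that, when set, reports interpolated values rather than jumping) can be sketched as a toy class. The class and method names here are hypothetical, not Amulet's actual API, and only linear interpolation of scalars is shown:

```python
import time

class AnimatedSlot:
    """Toy sketch of an animation constraint on a slot: setting the slot
    does not jump to the new value; reads return values interpolated from
    the old value to the new one over `duration` seconds."""

    def __init__(self, value, duration=1.0):
        self._start = self._end = value
        self._t0 = 0.0
        self.duration = duration

    def set(self, new_value, now=None):
        now = time.monotonic() if now is None else now
        self._start = self.get(now)   # animate from the currently displayed value
        self._end = new_value
        self._t0 = now

    def get(self, now=None):
        now = time.monotonic() if now is None else now
        t = min(max((now - self._t0) / self.duration, 0.0), 1.0)
        return self._start + t * (self._end - self._start)
```

The application logic still just calls `set`; the interpolation path and timing live entirely in the constraint, which is the modularity benefit the abstract claims.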
A comparison of Gaussian and mean curvatures estimation methods on triangular meshes
 In: ICRA
, 2003
Abstract

Cited by 30 (0 self)
Estimating intrinsic geometric properties of a surface from a polygonal mesh obtained from range data is an important stage of numerous algorithms in computer and robot vision, computer graphics, geometric modeling, and industrial and biomedical engineering. This work considers different computational schemes for local estimation of intrinsic curvature geometric properties. Five different algorithms and their modifications were tested on triangular meshes that represent tessellations of synthetic geometric models. The results were compared with the analytically computed values of the Gaussian and mean curvatures of the non-uniform rational B-spline (NURBS) surfaces from which these meshes originated. This work identifies the algorithms best suited for total (Gaussian) and mean curvature estimation, and shows that different algorithms should indeed be employed to compute the Gaussian and mean curvatures.
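One of the classic schemes papers like this compare is the Gauss-Bonnet (angle-deficit) estimate of Gaussian curvature at a mesh vertex: K ≈ (2π − Σ incident angles) / (A/3), where A is the total area of the incident triangles. A self-contained sketch, assuming `neighbors` is the ordered one-ring around the vertex forming a closed fan:

```python
import math

def gaussian_curvature_angle_deficit(vertex, neighbors):
    """Angle-deficit estimate of Gaussian curvature at `vertex`, whose
    ordered one-ring `neighbors` closes into a fan of triangles."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return math.sqrt(dot(a, a))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1],
                a[2]*b[0] - a[0]*b[2],
                a[0]*b[1] - a[1]*b[0])

    angle_sum, area = 0.0, 0.0
    n = len(neighbors)
    for i in range(n):
        e1 = sub(neighbors[i], vertex)
        e2 = sub(neighbors[(i + 1) % n], vertex)
        angle_sum += math.acos(dot(e1, e2) / (norm(e1) * norm(e2)))
        area += 0.5 * norm(cross(e1, e2))  # area of this incident triangle
    return (2.0 * math.pi - angle_sum) / (area / 3.0)  # deficit over barycentric area
```

On a flat patch the incident angles sum to exactly 2π, so the estimate correctly returns zero curvature; on a convex corner the angles sum to less than 2π, giving positive curvature.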