Results 1 - 10 of 11
Pyramidal parametrics
- Computer Graphics (SIGGRAPH ’83 Proceedings), 1983
Abstract - Cited by 304 (1 self)
The mapping of images onto surfaces may substantially increase the realism and information content of computer-generated imagery. The projection of a flat source image onto a curved surface may involve sampling difficulties, however, which are compounded as the view of the surface changes. As the projected scale of the surface increases, interpolation between the original samples of the source image is necessary; as the scale is reduced, approximation of multiple samples in the source is required. Thus a constantly changing sampling window of view-dependent shape must traverse the source image. To reduce the computation implied by these requirements, a set of prefiltered source images may be created. This approach can be applied to particular advantage in animation, where a large number of frames using the same source image must be generated. This paper advances a "pyramidal parametric " prefiltering and sampling geometry which minimizes aliasing effects and assures continuity within and between target images. Although the mapping of texture onto surfaces is an excellent example of the process and provided the original motivation for its development, pyramidal parametric data structures admit of wider application. The aliasing of not only surface texture, but also highlights and even the surface representations themselves, may be minimized by pyramidal parametric means.
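The pyramid and its view-dependent sampling can be sketched in a few lines. This is a minimal grayscale sketch assuming square power-of-two images and a simple box prefilter; the function names are illustrative, not from the paper:

```python
import math

def build_pyramid(image):
    """Prefilter a 2^n x 2^n grayscale image (list of rows) into a pyramid
    by repeated 2x2 box-filter downsampling, ending at a 1x1 level."""
    levels = [image]
    while len(levels[-1]) > 1:
        src = levels[-1]
        half = len(src) // 2
        dst = [[(src[2 * y][2 * x] + src[2 * y][2 * x + 1] +
                 src[2 * y + 1][2 * x] + src[2 * y + 1][2 * x + 1]) / 4.0
                for x in range(half)] for y in range(half)]
        levels.append(dst)
    return levels

def sample_pyramid(levels, u, v, footprint):
    """Sample at (u, v) in [0,1)^2, choosing the pyramid level whose texel
    size best matches `footprint` (source texels covered per target pixel).
    Linearly blending the two bracketing levels is what gives continuity
    within and between target images as the footprint changes."""
    d = max(0.0, min(math.log2(max(footprint, 1.0)), len(levels) - 1))
    lo, frac = int(d), d - int(d)
    def point(level):
        n = len(levels[level])
        return levels[level][min(int(v * n), n - 1)][min(int(u * n), n - 1)]
    hi = min(lo + 1, len(levels) - 1)
    return (1 - frac) * point(lo) + frac * point(hi)
```

A full implementation would also interpolate within each level (trilinear rather than point sampling), but the level-selection-and-blend step above is the core of the pyramidal scheme.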
Survey Of Texture Mapping
- IEEE Computer Graphics and Applications, 1986
Abstract - Cited by 227 (3 self)
This paper appeared in IEEE Computer Graphics and Applications, Nov. 1986, pp. 56-67. An earlier version of this paper appeared in Graphics Interface '86, May 1986, pp. 207-212. This PostScript version is missing all of the paste-up.
Constant-Time Filtering with Space-Variant Kernels
- Computer Graphics (SIGGRAPH '88 Proceedings)
Abstract - Cited by 23 (2 self)
Filtering is an essential but costly step in many computer graphics applications, most notably in texture mapping. Several techniques have been previously developed which allow prefiltering of a texture (or in general an image) in time that is independent of the number of texture elements under the filter kernel. These are limited, however, to space-invariant kernels whose shape in texture space is the same independently of their positions, and usually are also limited to a small range of filters. We present here a technique that permits constant-time filtering for space-variant kernels. The essential step is to approximate a filter surface in texture space by a sum of suitably-chosen basis functions. The convolution of a filter with a texture is replaced by the weighted sum of the convolution of the basis functions with the texture, which can be precomputed. To achieve constant time, convolutions with the basis functions are computed and stored in a pyramidal fashion, and the right le...
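The basis-function decomposition described above can be outlined in a 1-D sketch. This assumes a small fixed basis set and omits the pyramidal storage of the basis convolutions that the paper uses to reach constant time; names are illustrative:

```python
def convolve(signal, kernel):
    """Zero-padded convolution, same length as `signal`."""
    r = len(kernel) // 2
    return [sum(kernel[k] * signal[i + k - r]
                for k in range(len(kernel))
                if 0 <= i + k - r < len(signal))
            for i in range(len(signal))]

def prefilter(signal, bases):
    """Precompute signal * basis_j once per basis kernel.
    This is the expensive step, done a single time."""
    return [convolve(signal, b) for b in bases]

def filter_at(prefiltered, weights, i):
    """Evaluate a space-variant kernel at position i: the kernel is
    approximated as sum_j weights[j] * basis_j, and the weights may
    differ at every position, yet the per-sample cost is O(#bases),
    independent of the kernel's width."""
    return sum(w * p[i] for w, p in zip(weights, prefiltered))
```

The accuracy of the result depends entirely on how well the chosen basis spans the family of kernels needed, which is the central design question the paper addresses.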
Texture On Demand, 1990
Abstract - Cited by 11 (0 self)
Texture On Demand (TOD) is a technique for organizing large amounts of stored texture data in disk files and accessing it efficiently. Simply reading entire texture images into memory is not a good solution for real memory systems or for virtual memory systems. Texture data should be read from disk files only on demand. In the TOD technique, each texture image is stored as a sequence of fixed-size rectangular regions called tiles, rather than in the conventional raster scan-line order. Tiles are an appropriate unit of texture data to read into memory on demand. As fixed-size units with good locality of reference in a variety of rendering schemes, tiles can be cached in main memory using the paging algorithms common in virtual memory systems. Good results have been obtained using an LRU tile replacement algorithm to select a tile to be deleted when main memory space is required. Prefiltered textures are an important means of limiting bandwidth. TOD uses a set of prefiltered texture images called an r-set, a generalization of the texture pyramid (‘‘mip map’’). Texture filtering methods are reconsidered in terms of their performance in the TOD environment. Efficient filtering methods using the r-set are described. The paper describes various implementations of TOD, including a virtual memory implementation and a distributed implementation on a 16-processor multicomputer.
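The demand-loaded tile cache with LRU replacement described above can be sketched as follows. The class and loader callback are illustrative, not Peachey's code:

```python
from collections import OrderedDict

class TileCache:
    """Fixed-size texture tiles, read from disk only on demand and
    evicted least-recently-used when main memory space is required."""

    def __init__(self, load_tile, capacity):
        self.load_tile = load_tile      # callback: (tx, ty) -> tile data
        self.capacity = capacity        # max resident tiles
        self.tiles = OrderedDict()      # (tx, ty) -> tile, in LRU order

    def get(self, tx, ty):
        key = (tx, ty)
        if key in self.tiles:
            self.tiles.move_to_end(key)     # mark most recently used
            return self.tiles[key]
        tile = self.load_tile(tx, ty)       # on-demand disk read
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:
            self.tiles.popitem(last=False)  # evict least recently used
        return tile
```

Because tiles have good locality of reference under most rendering orders, a small capacity captures most accesses; only the misses pay the disk-read cost.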
Illumination and Reflection Maps: Simulated Objects In . . ., 1984
Abstract - Cited by 7 (0 self)
Blinn and Newell introduced reflection maps for computer simulated mirror highlights. This paper extends their method to cover a wider class of reflectance models. Panoramic images of real, painted and simulated environments are used as illumination maps that are convolved (blurred) and transformed to create reflection maps. These tables of reflected light values are used to efficiently shade objects in an animation sequence. Shaders based on point illumination may be improved in a straightforward manner to use reflection maps. Shading is by table-lookup, and the number of calculations per pixel is constant regardless of the complexity of the reflected scene. Antialiased mapping further improves image quality. The resulting pictures have many of the reality cues associated with ray-tracing but at greatly reduced computational cost. The geometry of highlights is less exact than in ray-tracing, and multiple surface reflections are not explicitly handled. The color of diffuse reflections can be rendered more accurately than in ray-tracing.
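The constant-cost-per-pixel lookup step can be sketched as follows. The latitude-longitude parameterization and point sampling here are illustrative assumptions, not the paper's exact tables:

```python
import math

def reflect(view, normal):
    """Mirror the view direction about the surface normal:
    r = v - 2 (v . n) n, with unit-length 3-vectors as tuples."""
    d = sum(v * n for v, n in zip(view, normal))
    return tuple(v - 2 * d * n for v, n in zip(view, normal))

def lookup(env_map, direction):
    """Index a rows x cols latitude-longitude map by a direction (x, y, z).
    Shading is a table lookup, so per-pixel cost stays constant no matter
    how complex the reflected environment is."""
    x, y, z = direction
    rows, cols = len(env_map), len(env_map[0])
    u = math.atan2(x, -z) / (2 * math.pi) + 0.5      # longitude -> [0, 1]
    v = math.acos(max(-1.0, min(1.0, y))) / math.pi  # latitude  -> [0, 1]
    return env_map[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]
```

Convolving the illumination map with a reflectance lobe before storage, as the abstract describes, turns the same lookup into a glossy or diffuse shading step.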
Quantum Quackery, 2012
Abstract
Peer review is no panacea; every generation must reevaluate empirical evidence in the context of its own time. For the past century, quantum mechanics has defied common sense. Consistent in every detail with holographic virtual images, Young’s double-slit experiment generates diffraction patterns with coherent light, even one photon at a time, while incoherent light does not. Hence, interference pattern analogies are flagrant misrepresentations of the facts. Bohr’s quantum leap scenario violates the second law of thermodynamics and contradicts phase transition temperatures. Common sense dictates that the stability of molecular bonds contradicts probabilistic, leaping electrons. Bell’s inequality, a specious proof of quantum mechanics, derives from a false premise whose revelation by Joy Christian went widely unnoticed. Misinterpreting the facts, the blind-leading-the-blind faith of quantum mechanics twisted inductive speculation into a Gordian knot of mass delusions. As long as peer review science chooses to legitimize theoretical speculation, the demarcation between science and pseudoscience will remain indefensible.
Title: Quantum Quackery, 2012
Abstract
Abstract: Peer review is no panacea; every generation must reevaluate empirical evidence in the context of its own time. For a century, quantum mechanics has defied common sense. However, common sense dictates that the stability of molecular bonds contradicts probabilistic, leaping electrons. In fact, Bohr’s quantum leap scenario violates the second law of thermodynamics and contradicts phase transition temperatures. Bell’s inequality, a specious proof of quantum mechanics, derives from a false premise whose revelation by Joy Christian went widely unnoticed. Consistent in every detail with holographic virtual images, Young’s double-slit experiment generates diffraction patterns with coherent light, even one photon at a time, while incoherent light does not. Hence, interference pattern analogies are flagrant misrepresentations of the facts. With no basis in fact, the blind-leading-the-blind faith of quantum mechanics twisted inductive speculation into a Gordian knot of mass delusions. As long as peer review science chooses to legitimize theoretical speculation, the demarcation between science and pseudoscience will remain indefensible. One Sentence Summary: Bohr was wrong and Einstein was right, “God doesn’t play dice with the world.”
Filtering By Repeated Integration
Abstract
Many applications of digital filtering require a space variant filter: one whose shape or size varies with position. The usual algorithm for such filters, direct convolution, is very costly for wide kernels. Image prefiltering provides an efficient alternative. We explore one prefiltering technique, repeated integration, which is a generalization of Crow’s summed area table. We find that convolution of a signal with any piecewise polynomial kernel of degree n − 1 can be computed by integrating the signal n times and point sampling it several times for each output sample. The use of second or higher order integration permits relatively high quality filtering. The advantage over direct convolution is that the cost of repeated integration filtering does not increase with filter width. Generalization to two-dimensional image filtering is straightforward. Implementations of the simple technique are presented in both preprocessing and stream processing styles.
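A 1-D sketch of the first-order case (n = 1, a box kernel of degree 0, which is Crow's summed-area table in one dimension): one integration pass, then each output sample costs two lookups regardless of filter width. Higher n would integrate repeatedly and take more point samples per output:

```python
from itertools import accumulate

def integrate(signal):
    """One integration pass: running sum with a leading zero,
    so I[i] = sum(signal[:i])."""
    return [0] + list(accumulate(signal))

def box_filter(integral, i, radius):
    """Mean of signal[i - radius : i + radius + 1] from just two
    point samples of the integral; the radius can vary per output
    position at no extra cost, giving a space-variant box filter."""
    lo = max(0, i - radius)
    hi = min(len(integral) - 1, i + radius + 1)
    return (integral[hi] - integral[lo]) / (hi - lo)
```

Integrating a second time would allow triangle (degree-1) kernels from three samples, and so on up the polynomial degrees, which is the generalization the abstract describes.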
Texture Resampling While Ray-Tracing: Approximating the Convolution Region Using Caching, 1994
Abstract
We present a cache-based approach to handling the difficult problem of performing visually acceptable texture resampling/filtering while ray-tracing. While many good methods have been proposed to handle the error introduced by the ray-tracing algorithm when sampling in screen space, handling this error in texture space has been less adequately addressed. Our solution is to introduce the Convolution Mask Approximation Module (CMAM). The CMAM locally approximates the convolution region in texture space as a set of overlapping texture triangles by using a texture sample caching system and ray tagging. Since the caching mechanism is hidden within the CMAM, the ray-tracing algorithm itself is unchanged while achieving an adequate level of texture filtering (area sampling as opposed to point sampling/interpolation in texture space). The CMAM is easily adapted to incorporate prefiltering methods such as MIP mapping and summed-area