Results 1-10 of 23
An introduction to parallel rendering
 Parallel Computing
, 1997
Abstract

Cited by 44 (2 self)
In computer graphics, rendering is the process by which an abstract description of a scene is converted to an image. When the scene is complex, or when high-quality images or high frame rates are required, the rendering process becomes computationally demanding. To provide the necessary levels of performance, parallel computing techniques must be brought to bear. Although parallelism has been exploited in computer graphics since the early days of the field, its initial use was primarily in specialized applications. The VLSI revolution of the late 1970s and the advent of scalable parallel computers during the late 1980s changed this situation. Today, parallel hardware is routinely used in graphics workstations, and numerous software-based rendering systems have been developed for general-purpose parallel architectures. This article provides a broad introduction to the subject of parallel rendering, encompassing both hardware and software systems. The focus is on the underlying concepts and the issues which arise in the design of parallel rendering algorithms and systems. We examine the different types of parallelism and how they can be applied in rendering applications. Concepts from parallel computing, such as data decomposition, task granularity, scalability, and load balancing, are considered in relation to the rendering ...
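The data decomposition and load-balancing concepts this abstract lists can be illustrated with a toy image-space decomposition: the frame is split into tiles, each worker shades its tiles independently, and the results are composited into the final image. This is a minimal sketch, not code from the article; the frame size, tile size, and the trivial `shade` function are invented for the example.

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 64, 64, 16   # hypothetical frame and tile sizes

def shade(x, y):
    # Stand-in for an expensive per-pixel shading computation.
    return (31 * x + 17 * y) % 256

def render_tile(origin):
    # Each task renders one tile independently: image-space (data) parallelism.
    tx, ty = origin
    return origin, [[shade(tx + i, ty + j) for i in range(TILE)]
                    for j in range(TILE)]

def render_frame():
    tiles = [(tx, ty) for ty in range(0, HEIGHT, TILE)
                      for tx in range(0, WIDTH, TILE)]
    frame = [[0] * WIDTH for _ in range(HEIGHT)]
    with ThreadPoolExecutor() as pool:             # static tile assignment
        for (tx, ty), pixels in pool.map(render_tile, tiles):
            for j in range(TILE):                  # composite tile into frame
                frame[ty + j][tx:tx + TILE] = pixels[j]
    return frame
```

Here the tile size is the task granularity: smaller tiles balance load better but raise scheduling and compositing overhead, which is exactly the trade-off the article examines.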
Coarse-grained parallelism for hierarchical radiosity using group iterative methods
 In ACM Computer Graphics (Proc. of SIGGRAPH '96)
, 1996
Efficient Parallel Global Illumination using Density Estimation
 Proceedings of ACM Parallel Rendering Symposium (Atlanta)
, 1995
Abstract

Cited by 11 (2 self)
This paper presents a multicomputer, parallel version of the recently proposed "Density Estimation" (DE) global illumination method, designed for computing solutions of environments with high geometric complexity (as many as hundreds of thousands of initial surfaces). In addition to the diffuse interreflections commonly handled by conventional radiosity methods, this new method can also handle energy transport involving arbitrary non-diffuse surfaces. Output can either be Gouraud-shaded elements for interactive walkthroughs, or ray-traced images for higher quality still frames. The key difference of the DE algorithm from conventional radiosity, in terms of its ability to parallelize efficiently, is its microscopic view of energy transport, which avoids the O(n^2) pairwise surface interactions of most previous macroscopic radiosity algorithms (i.e., those without clustering). Parallel DE is implemented as two separate parallel programs which perform different phases of the DE metho...
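The "microscopic view" the abstract credits for easy parallelization can be illustrated with a toy one-dimensional version of the estimation phase: particles deposited on a surface are turned into an irradiance estimate with a kernel, using only that surface's own hit points. All names here are hypothetical; the real method works on 3D surfaces with more careful kernels.

```python
def estimate_density(hits, x, h, total_energy):
    # Box-kernel density estimate of energy per unit length at position x,
    # from particle hit positions on one surface. No pairwise surface terms:
    # each surface can be processed independently and in parallel.
    per_particle = total_energy / len(hits)
    inside = sum(1 for p in hits if abs(p - x) <= h)
    return inside * per_particle / (2 * h)
```

Because each surface's estimate depends only on its own hit list, surfaces can be distributed across processors without the O(n^2) pairwise interactions of classical radiosity, which is the property the abstract highlights.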
Load Balancing for a Parallel Radiosity Algorithm
 In Proc. of ACM Parallel Rendering Symposium '95
, 1995
Abstract

Cited by 10 (0 self)
The radiosity method models the interaction of light between diffuse surfaces, thereby accurately predicting global illumination effects. Due to the high computational effort required to calculate the transfer of light between surfaces, and the memory requirements of the scene description, a distributed, parallelized version of the algorithm is needed for scenes consisting of thousands of surfaces. We present ...
Coherence in Computer Graphics
, 1992
Abstract

Cited by 8 (1 self)
Coherence denotes similarities between items or entities. It describes the extent to which these items or entities are locally constant. An introduction to coherence and a survey of the various types of coherence used in computer graphics are given. Techniques and data structures for exploiting coherence in computer graphics are described. Incremental techniques, bounding volume schemes, subdivision techniques and several geometric data structures are discussed in more detail. Applications of coherence principles to computer graphics are treated and a survey of previous research is given.

INTRODUCTION

General remarks. The widespread application of coherence principles allows only a vague definition of coherence. Without giving a formal definition, coherence denotes, in the context of this paper, similarities between items or entities. It describes the extent to which these items or entities are locally constant. In many situations properties do not change drastically but rather in ...
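A classic instance of the incremental techniques mentioned above is scanline coherence in shading: a linearly varying quantity differs between adjacent pixels by a constant step, so one addition per pixel replaces a full interpolation. A minimal sketch, with names invented for illustration:

```python
def interpolate_scanline(c0, c1, n):
    # Incremental (forward-difference) evaluation along a scanline of n pixels:
    # exploit the coherence that neighboring pixels differ by a constant delta,
    # so each value costs a single addition instead of a full lerp.
    delta = (c1 - c0) / (n - 1)
    values, c = [], c0
    for _ in range(n):
        values.append(c)
        c += delta
    return values
```

The same forward-differencing idea underlies incremental edge walking in polygon rasterization and DDA line drawing.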
Efficient Use of Parallel & Distributed Systems: From Theory to Practice
, 1995
Abstract

Cited by 6 (4 self)
This article focuses on principles for the design of efficient parallel algorithms for distributed memory computing systems. We describe the general trend in the development of architectural properties and evaluate the state of the art in a number of basic primitives like graph embedding, partitioning, dynamic load distribution, and communication, which are used, to some extent, within all parallel applications. We discuss possible directions for future work on the design of universal basic primitives, able to perform efficiently on a broad range of parallel systems and applications, and we also give certain examples of specific applications which demand specialized basic primitives in order to obtain efficient parallel implementations. Finally, we show that programming frames can offer a convenient way to encapsulate algorithmic know-how on applications and basic primitives and to offer this knowledge to non-specialist users in a very effective way.

1 Introduction

Parallel processi...
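Of the basic primitives listed, dynamic load distribution is the easiest to sketch: in a nearest-neighbor diffusion scheme, each processor repeatedly shifts a fraction of its load imbalance to its ring neighbors, using only local information. This is a generic textbook scheme illustrating the primitive, not the specific algorithms surveyed in the article.

```python
def diffusion_step(load, alpha=0.5):
    # One diffusion step on a ring of processors: node i exchanges alpha/2 of
    # each pairwise imbalance with its two neighbors. Total load is conserved,
    # and repeated steps converge to the uniform distribution for 0 < alpha < 1.
    n = len(load)
    return [load[i]
            + alpha / 2 * (load[(i - 1) % n] - load[i])
            + alpha / 2 * (load[(i + 1) % n] - load[i])
            for i in range(n)]
```

Each step needs only neighbor-to-neighbor communication, which is why diffusion schemes scale on the distributed-memory machines the article targets; the price is geometric rather than immediate convergence to balance.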
Implementation Results and Analysis of a Parallel Progressive Radiosity
, 1995
Abstract

Cited by 4 (0 self)
The quality of synthetic images depends, first, on the quality of the modelling of the three-dimensional scenes to visualize: the more numerous the geometrical and optical details, the more realistic the resulting images. Unfortunately, such scene descriptions require a large amount of memory as well as long computation times. In order to deal with these restrictions, we propose a parallel implementation of an extended stochastic progressive radiosity method, where form factors are computed with a ray tracing scheme, on a network of processors with distributed memory and a message passing mechanism. Our program has already processed very large scenes (more than one million patches, for example).

Keywords: Image Synthesis, Realistic Rendering, Progressive Radiosity, Parallelism, MIMD, Multi-Threading.

1 Introduction

Image synthesis is decomposed into two parts. First, the modelling step describes the geometrical and optical properties of the scene to be rendered. By geometrical descriptio...
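The core of progressive radiosity, independent of any parallelization, is the shooting iteration: repeatedly pick the patch with the most unshot energy and distribute it to every other patch through form factors. A minimal sequential sketch with invented names; the form factors are assumed precomputed here, whereas the paper estimates them with ray tracing:

```python
def shoot_once(radiosity, unshot, form_factor, reflectance):
    # One progressive-refinement step: the patch with maximum unshot energy
    # shoots it; each receiver gains energy scaled by its form factor and
    # reflectance, and the shooter's unshot energy drops to zero.
    i = max(range(len(unshot)), key=lambda k: unshot[k])
    for j in range(len(unshot)):
        if j != i:
            dE = reflectance[j] * form_factor[i][j] * unshot[i]
            radiosity[j] += dE
            unshot[j] += dE
    unshot[i] = 0.0
    return i
```

Each call refines the whole solution, which is why the iteration can be stopped early and still yield a usable image; a stochastic variant replaces the stored form factors with ray-traced estimates.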
Progressive Refinement Radiosity on Ring-Connected Multicomputers
, 1993
Abstract

Cited by 3 (1 self)
The progressive refinement method is investigated for parallelization on ring-connected multicomputers. A synchronous scheme, based on static task assignment, is proposed in order to achieve better coherence during the parallel light distribution computations. An efficient global circulation scheme is proposed for the parallel light distribution computations, which reduces the total volume of concurrent communication by an asymptotic factor. The proposed parallel algorithm is implemented on a ring-embedded Intel iPSC/2 hypercube multicomputer. The load balance quality of the proposed static assignment schemes is evaluated experimentally. The effect of coherence in the parallel light distribution computations on the shooting patch selection sequence is also investigated.

Keywords: Progressive refinement radiosity, parallel computing, multicomputers, ring interconnection topology.

1 Introduction

Radiosity [7] is an increasingly popular method for generating realistic images of nonex...
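The global circulation idea can be simulated in a few lines: with P processors on a ring, P-1 nearest-neighbor shift steps are enough for every data block to visit every processor, replacing an all-to-all exchange with purely local traffic. A toy single-process simulation (the actual scheme also overlaps computation with each shift):

```python
def ring_circulate(blocks):
    # Simulate P-1 shift steps on a ring of P processors: in each step every
    # processor forwards its current block to its right neighbor and receives
    # a new block from its left neighbor.
    P = len(blocks)
    seen = [[b] for b in blocks]     # blocks each processor has processed
    current = list(blocks)
    for _ in range(P - 1):
        current = [current[(i - 1) % P] for i in range(P)]   # shift right
        for i in range(P):
            seen[i].append(current[i])
    return seen
```

After P-1 steps each processor has seen all P blocks, yet each step moved only one block per link, which is the kind of reduction in concurrent communication volume the abstract refers to.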
Parallel visibility computations for parallel radiosity
 In Parallel Processing
, 1994
Abstract

Cited by 3 (0 self)
The radiosity method models the interaction of light between diffuse reflecting surfaces, thereby accurately predicting global illumination effects. Due to the high computational effort required to calculate the transfer of light between surfaces, and the memory requirements of the scene description, a distributed, parallelized version of the algorithm is needed for scenes consisting of thousands of surfaces. We present a distributed, parallel radiosity algorithm which can subdivide the surfaces adaptively. Additionally, we present a scheme for parallel visibility calculations. Adaptive load redistribution is also discussed.