Results 1–10 of 76
Wavelet Radiosity, 1993
Abstract

Cited by 149 (10 self)
Radiosity methods have been shown to be an effective means to solve the global illumination problem in Lambertian diffuse environments. These methods approximate the radiosity integral equation by projecting the unknown radiosity function onto a set of basis functions with limited support, resulting in a set of n linear equations, where n is the number of discrete elements in the scene. Classical radiosity methods require the evaluation of n² interaction coefficients. Efforts to reduce the number of required coefficients without compromising error bounds have focused on raising the order of the basis functions, on meshing, on accounting for discontinuities, and on developing hierarchical approaches, which have been shown to reduce the required interactions to O(n). In this paper we show that the hierarchical radiosity formulation is an instance of a more general set of methods based on wavelet theory. This general framework offers a unified view of both higher order element approaches to...
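The classical formulation this abstract contrasts against can be sketched numerically: projecting onto piecewise-constant basis functions yields the dense n × n system (I − RF)b = e. Everything below (the stand-in form-factor matrix, reflectivity, and emission) is assumed illustrative data, not the paper's method:

```python
import numpy as np

# Classical (non-hierarchical) radiosity baseline: all n^2 interaction
# coefficients are formed explicitly. F is a stand-in form-factor matrix.
n = 64
rng = np.random.default_rng(0)

F = rng.random((n, n))
np.fill_diagonal(F, 0.0)                   # an element does not light itself
F /= F.sum(axis=1, keepdims=True) * 1.25   # row sums 0.8: some energy is lost

rho = 0.7 * np.ones(n)                     # diffuse reflectivity per element
e = np.zeros(n)
e[:4] = 1.0                                # a few emitting elements

# Solve (I - R F) b = e for the equilibrium radiosity b
b = np.linalg.solve(np.eye(n) - rho[:, None] * F, e)
```

Hierarchical and wavelet methods aim to avoid forming all n² entries of F, which is exactly the cost this dense solve makes explicit.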
On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning
Journal of Machine Learning Research, 2005
Abstract

Cited by 108 (7 self)
A problem for many kernel-based methods is that the amount of computation required to find the solution scales as O(n³), where n is the number of training examples. We develop and analyze an algorithm to compute an easily-interpretable low-rank approximation to an n×n Gram matrix G such that computations of interest may be performed more rapidly. The approximation is of the form G̃_k = C W_k⁺ Cᵀ, where C is a matrix consisting of a small number c of columns of G and W_k is the best rank-k approximation to W, the matrix formed by the intersection between those c columns of G and the corresponding c rows of G. An important aspect of the algorithm is the probability distribution used to randomly sample the columns; we will use a judiciously-chosen and data-dependent nonuniform probability distribution. Let ‖·‖₂ and ‖·‖_F denote the spectral norm and the Frobenius norm, respectively, of a matrix, and let G_k be the best rank-k approximation to G. We prove that by choosing O(k/ε⁴) columns, ‖G − C W_k⁺ Cᵀ‖_ξ ≤ ‖G − G_k‖_ξ + ε Σᵢ Gᵢᵢ², both in expectation and with high probability, for both ξ = 2, F, and for all k: 0 ≤ k ≤ rank(W). This approximation can be computed using O(n) additional space and time, after making two passes over the data from external storage. The relationships between this algorithm, other related matrix decompositions, and the Nyström method from integral equation theory are discussed.
Discontinuity Meshing for Radiosity
Third Eurographics Workshop on Rendering, 1992
Abstract

Cited by 90 (2 self)
The radiosity method is the most popular algorithm for simulating interreflection of light between diffuse surfaces. Most existing radiosity algorithms employ simple meshes and piecewise constant approximations, thereby constraining the radiosity function to be constant across each polygonal element. Much more accurate simulations are possible if linear, quadratic, or higher-degree approximations are used. In order to realize the potential accuracy of higher-degree approximations, however, it is necessary for the radiosity mesh to resolve discontinuities such as shadow edges in the radiosity function. A discontinuity meshing algorithm is presented that places mesh boundaries directly along discontinuities. Such algorithms offer the potential of faster, more accurate simulations. Results are shown for three-dimensional scenes.
Keywords: global illumination, diffuse interreflection, adaptive mesh, shadow.
Efficient spatiotemporal grouping using the Nyström method
In Proc. IEEE Conf. Comput. Vision and Pattern Recognition, 2001
Abstract

Cited by 43 (5 self)
Spectral graph theoretic methods have recently shown great promise for the problem of image segmentation, but due to the computational demands, applications of such methods to spatiotemporal data have been slow to appear. For even a short video sequence, the set of all pairwise voxel similarities is a huge quantity of data: one second of video captured at standard frame rates entails an enormous number of pairwise similarities. The contribution of this paper is a method that substantially reduces the computational requirements of grouping algorithms based on spectral partitioning, making it feasible to apply them to very large spatiotemporal grouping problems. Our approach is based on a technique for the numerical solution of eigenfunction problems known as the Nyström method. This method allows extrapolation of the complete grouping solution using only a small number of “typical” samples. In doing so, we successfully exploit the fact that there are far fewer coherent groups in an image sequence than pixels.
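The Nyström extension at the heart of this approach can be illustrated on a toy grouping problem. The two synthetic 2-D clusters (standing in for voxels), the Gaussian affinity, and the sample size are all assumed choices for illustration, not the paper's pipeline:

```python
import numpy as np

# Eigenvectors are computed on m sampled points only, then extrapolated to
# all n points via the Nystrom extension v(x) ~ (1/lam) sum_j k(x, x_j) v_j.
rng = np.random.default_rng(2)
n, m, sigma = 400, 20, 1.0

pts = np.vstack([rng.normal(0.0, 0.3, (n // 2, 2)),
                 rng.normal(0.0, 0.3, (n // 2, 2)) + [5.0, 0.0]])
labels = np.repeat([0, 1], n // 2)

samp = rng.choice(n, m, replace=False)       # the "typical" samples
rest = np.setdiff1d(np.arange(n), samp)

def affinity(a, b):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

A = affinity(pts[samp], pts[samp])           # m x m sample block
B = affinity(pts[samp], pts[rest])           # m x (n - m) cross block

lam, V = np.linalg.eigh(A)
lam, V = lam[::-1], V[:, ::-1]               # descending eigenvalue order

V2, lam2 = V[:, :2], lam[:2]                 # top-2 sample eigenvectors
emb = np.empty((n, 2))
emb[samp] = V2
emb[rest] = B.T @ V2 / lam2                  # Nystrom extrapolation

# A median split on the second coordinate recovers the two groups
pred = (emb[:, 1] > np.median(emb[:, 1])).astype(int)
acc = max((pred == labels).mean(), ((1 - pred) == labels).mean())
```

Only the m × m block A and the m × (n − m) block B are ever formed, which is what makes the method feasible when the full n × n affinity matrix would not fit in memory.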
Wavelet Projections for Radiosity
Computer Graphics Forum, 1994
Abstract

Cited by 41 (5 self)
One important goal of image synthesis research is to accelerate the process of obtaining realistic images using the radiosity method. Two important concepts recently introduced are the general framework of projection methods and the hierarchical radiosity method. Wavelet theory, which explores the space of hierarchical basis functions, offers an elegant framework that unites these two concepts and allows us to more formally understand the hierarchical radiosity method. Wavelet expansions of the radiosity kernel have negligible entries in regions where high frequency/fine detail information is not needed. A sparse system remains if these entries are ignored. This is similar to applying a lossy compression scheme to the form factor matrix. The sparseness of the system allows for asymptotically faster radiosity algorithms by limiting the number of matrix terms that need to be computed. The application of these methods to 3D environments is described in [9]. Due to space limitations in tha...
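The compression step described above — transform the kernel, drop the negligible entries — can be sketched with a 2-D Haar transform on a synthetic smooth kernel. The kernel, grid size, and threshold are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

def haar_1d(x):
    """Full 1-D Haar wavelet transform along axis 0 (length a power of 2)."""
    x = x.astype(float)
    n = x.shape[0]
    while n > 1:
        a = (x[0:n:2] + x[1:n:2]) / np.sqrt(2)   # averages (coarse part)
        d = (x[0:n:2] - x[1:n:2]) / np.sqrt(2)   # details
        x[:n // 2], x[n // 2:n] = a, d
        n //= 2
    return x

n = 128
t = np.linspace(0.0, 1.0, n)
K = 1.0 / (1.0 + np.abs(t[:, None] - t[None, :]))  # smooth, kernel-like matrix

W = haar_1d(haar_1d(K).T).T              # separable transform in both axes
thresh = 1e-3 * np.abs(W).max()
sparsity = (np.abs(W) > thresh).mean()   # fraction of coefficients retained
```

For this smooth kernel only a small fraction of coefficients survive the threshold, which is the sparsity the abstract's lossy-compression analogy refers to.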
Radiosity in Flatland
Computer Graphics Forum, 1992
Abstract

Cited by 30 (2 self)
The radiosity method for the simulation of interreflection of light between diffuse surfaces is such a common image synthesis technique that its derivation is worthy of study. We here examine the radiosity method in a two-dimensional, flatland world. It is shown that the radiosity method is a simple finite element method for the solution of the integral equation governing global illumination. These two-dimensional studies help explain the radiosity method in general and suggest a number of improvements to existing algorithms. In particular, radiosity solutions can be improved using a priori discontinuity meshing, placing mesh boundaries on discontinuities such as shadow edges. When discontinuity meshing is used along with piecewise-linear approximations instead of the current piecewise-constant approximations, the accuracy of radiosity simulations can be greatly increased.
Keywords: integral equation, adaptive mesh, finite element method, discontinuity, shadow, global illumination, di...
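A compact worked example of the flatland setting, under an assumed geometry: two parallel unit segments a distance 1 apart, discretized into piecewise-constant elements, with the flatland kernel cos θ₁ cos θ₂ / (2r) integrated by a midpoint rule. A single matrix-vector product then gives the receiver's direct radiosity (interreflection and discontinuity meshing are omitted):

```python
import numpy as np

# Emitter on y = 0, receiver on y = 1, both spanning x in [0, 1].
m = 32
h = 1.0 / m
x = (np.arange(m) + 0.5) * h             # element midpoints

dx = x[:, None] - x[None, :]
r = np.sqrt(dx ** 2 + 1.0)               # distance between element pairs
cos_t = 1.0 / r                          # both normals face each other
F = cos_t * cos_t / (2.0 * r) * h        # flatland form factors, midpoint rule

e = np.ones(m)                           # the emitter radiates uniformly
b_direct = F @ e                         # direct radiosity on the receiver
```

The profile is symmetric about the segment midpoint and brightest at the center, as the unoccluded geometry dictates; a shadow-casting blocker would introduce exactly the derivative discontinuities that motivate discontinuity meshing.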
Wavelet Methods for Radiance Computations, 1994
Abstract

Cited by 27 (3 self)
This paper describes a new algorithm to compute radiance in a synthetic environment. Motivated by the success of wavelet methods for radiosity computations, we have applied multiwavelet bases to the computation of radiance in the presence of glossy reflectors. We have implemented this algorithm and report on some experiments performed with it. In particular we show that the convergence properties of basis functions with 1–4 vanishing moments are in accordance with theoretical predictions. As in the case of wavelet radiosity we find higher order bases to have advantages. However, the cost scaling due to the higher dimensionality of the problem is such that the higher order bases only become competitive for very high precision requirements. In practice we rarely go beyond piecewise linear functions.
Structured Sampling and Reconstruction of Illumination for Image Synthesis, 1994
Abstract

Cited by 16 (3 self)
An important goal of image synthesis is to achieve accurate, efficient and consistent sampling and reconstruction of illumination varying over surfaces in an environment. A new approach is introduced for the treatment of diffuse polyhedral environments lit by area light sources, based on the identification of important properties of illumination structure. The properties of unimodality and curvature of illumination in unoccluded environments are used to develop a high quality sampling algorithm which includes error bounds. An efficient algorithm is presented to partition the scene polygons into a mesh of cells, in which the visible part of the source has the same topology. A fast incremental algorithm is presented to calculate the backprojection, which is an abstract representation of this topology. The behaviour of illumination in the penumbral regions is carefully studied, and is shown to be monotonic and well behaved within most of the mesh cells. An algorithm to reduce the mesh siz...
Existence and Stability of Standing Pulses in Neural Networks I: Existence
SIAM Journal on Applied Dynamical Systems, 2003
Abstract

Cited by 15 (1 self)
We analyze the stability of standing pulse solutions of a neural network integro-differential equation. The network consists of a coarse-grained layer of neurons synaptically connected by lateral inhibition with a non-saturating nonlinear gain function. When two standing single-pulse solutions coexist, the small pulse is unstable, and the large pulse is stable. The large single pulse is bistable with the “all-off” state. This bistable localized activity may have strong implications for the mechanism underlying working memory. We show that dimple pulses have similar stability properties to large pulses but double pulses are unstable.
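The stable localized activity described above can be illustrated by directly simulating an Amari-type field u_t = −u + w ∗ f(u). The Heaviside gain, Mexican-hat connectivity, and every parameter below are illustrative simplifications (the paper analyzes a non-saturating gain), not the authors' model:

```python
import numpy as np

# Forward-Euler simulation of a 1-D neural field with lateral inhibition.
L, dx, dt, steps, theta = 10.0, 0.05, 0.1, 300, 0.2
x = np.arange(-L, L, dx)

def w(d):
    """Local excitation minus broader inhibition (lateral inhibition)."""
    return np.exp(-d ** 2) - 0.5 * np.exp(-d ** 2 / 4)

W = w(x[:, None] - x[None, :]) * dx          # discretized convolution operator

u = np.where(np.abs(x) < 1.0, 0.5, 0.0)      # initial localized activity
for _ in range(steps):
    f = (u > theta).astype(float)            # Heaviside gain (simplification)
    u = u + dt * (-u + W @ f)

active = x[u > theta]                        # support of the standing pulse
width = active.max() - active.min()
```

With these parameters the activity settles into a localized standing pulse rather than dying out or spreading, a discrete analogue of the stable large pulse described above.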
Wavelet Algorithms for Illumination Computations, 1994
Abstract

Cited by 12 (0 self)
One of the core problems of computer graphics is the computation of the equilibrium distribution of light in a scene. This distribution is given as the solution to a Fredholm integral equation of the second kind involving an integral over all surfaces in the scene. In the general case such solutions can only be numerically approximated, and are generally costly to compute, due to the geometric complexity of typical computer graphics scenes. For this computation both Monte Carlo and finite element techniques (or hybrid approaches) are typically used. A simplified version of the illumination problem is known as radiosity, which assumes that all surfaces are diffuse reflectors. For this case hierarchical techniques, first introduced by Hanrahan et al. [32], have recently gained prominence. The hierarchical approaches lead to an asymptotic improvement when only finite precision is required. The resulting algorithms have cost proportional to O(k² + n) versus the usual O(n²) (k is the nu...