Results 1 – 10 of 222
Physically Based Lighting Calculations for Computer Graphics
, 1991
Cited by 69 (12 self)
Realistic image generation is presented in a theoretical formulation that builds from previous work on the rendering equation. Previous and new solution techniques for the global illumination are discussed in the context of this formulation. The basic ...
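For context, the rendering equation that this formulation builds on (Kajiya's standard hemispherical form, reproduced here from general knowledge rather than quoted from the paper) is:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

where \(L_o\) is outgoing radiance at surface point \(x\), \(L_e\) is emitted radiance, \(f_r\) is the BRDF, and the integral runs over the hemisphere \(\Omega\) about the surface normal \(n\).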
Discrepancy as a Quality Measure for Sample Distributions
, 1991
Cited by 67 (7 self)
Discrepancy, a scalar measure of sample point equidistribution, is discussed in the context of Computer Graphics sampling problems. Several sampling strategies and their discrepancy characteristics are examined. The relationship between image error and the discrepancy of the sampling patterns used to generate the image is established. The definition of discrepancy is extended to nonuniform sampling patterns.
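To make the notion concrete (this is an illustration, not code from the paper): in one dimension the star discrepancy of a point set in [0, 1) has a simple closed form over the sorted samples. The helper name below is hypothetical.

```python
def star_discrepancy_1d(samples):
    """Star discrepancy of a 1D point set in [0, 1): the largest gap
    between the empirical distribution of the points and the uniform
    distribution, computed in closed form over the sorted samples."""
    xs = sorted(samples)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))
```

The equidistributed midpoint set x_i = (2i+1)/(2n) attains the minimal value 1/(2n), while typical random samples score noticeably worse, which is the sense in which discrepancy measures sample quality.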
The Multilevel Finite Element Method for Adaptive Mesh Optimization and Visualization of Volume Data
 In Proceedings Visualization
, 1997
Cited by 43 (5 self)
Multilevel representations and mesh reduction techniques have been used for accelerating the processing and the rendering of large datasets representing scalar or vector valued functions defined on complex 2 or 3 dimensional meshes. We present a method based on finite element approximations which combines these two approaches in a new and unique way that is conceptually simple and theoretically sound. The main idea is to consider mesh reduction as an approximation problem in appropriate finite element spaces. Starting with a very coarse triangulation of the functional domain a hierarchy of highly nonuniform tetrahedral (or triangular in 2D) meshes is generated adaptively by local refinement. This process is driven by controlling the local error of the piecewise linear finite element approximation of the function on each mesh element. A reliable and efficient computation of the global approximation error combined with a multilevel preconditioned conjugate gradient solver are the key co...
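The error-driven local refinement described above can be sketched in one dimension (a simplification of the paper's tetrahedral/triangular setting; the function and tolerance names are hypothetical): bisect an element whenever the piecewise linear interpolant's midpoint error exceeds a tolerance.

```python
def adaptive_refine(f, a, b, tol, max_depth=20):
    """1D analogue of error-driven mesh refinement: recursively bisect
    [a, b] until the midpoint error of the linear interpolant of f is
    below tol. Returns the sorted list of mesh nodes."""
    mid = (a + b) / 2
    linear_mid = (f(a) + f(b)) / 2  # piecewise linear approximation at mid
    if abs(f(mid) - linear_mid) <= tol or max_depth == 0:
        return [a, b]
    left = adaptive_refine(f, a, mid, tol, max_depth - 1)
    right = adaptive_refine(f, mid, b, tol, max_depth - 1)
    return left + right[1:]  # merge, dropping the duplicated midpoint
```

For a smooth function the resulting mesh is near-uniform; for a function with a singular derivative (e.g. sqrt near 0) the mesh automatically concentrates nodes where the local interpolation error is large, which is the highly nonuniform hierarchy the abstract refers to.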
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems
 Statistical Science
Cited by 35 (5 self)
This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories; namely, asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature and Markov chain methods. Each method is discussed giving an outline of the basic supporting theory and particular features of the technique. Conclusions are drawn concerning the relative merits of the methods based on the discussion and their application to three examples. The following broad recommendations are made. Asymptotic methods should only be considered in contexts where the integrand has a dominant peak with approximate ellipsoidal symmetry. Importance sampling, and preferably adaptive importance sampling, based on a multivariate Student distribution should be used instead of asymptotic methods in such a context. Multiple quadrature, and in particular subregion adaptive integration, are the algorithms of choice for...
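A minimal sketch of the recommended approach, assuming a one-dimensional setting for clarity (the survey's context is multivariate): self-normalized importance sampling with a heavy-tailed Student-t proposal, so the weights stay bounded even when the target has lighter tails. All function names here are hypothetical.

```python
import math
import random

def student_t_pdf(x, df):
    """Density of the Student-t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def sample_student_t(df, rng):
    """Draw t = Z / sqrt(V/df) with Z standard normal, V chi-square(df)."""
    z = rng.gauss(0, 1)
    v = sum(rng.gauss(0, 1) ** 2 for _ in range(df))
    return z / math.sqrt(v / df)

def importance_estimate(f, target_pdf, df, n, seed=0):
    """Self-normalized importance sampling estimate of E_target[f] using a
    Student-t proposal; target_pdf may be unnormalized."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        x = sample_student_t(df, rng)
        w = target_pdf(x) / student_t_pdf(x, df)
        num += w * f(x)
        den += w
    return num / den
```

For example, estimating E[x^2] under a standard normal target (true value 1) with an unnormalized target density exp(-x^2/2) illustrates why self-normalization removes the need for the normalizing constant.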
Exposure in Wireless Sensor Networks: Theory and Practical Solutions
 Wireless Networks
, 2002
Cited by 31 (2 self)
Wireless ad hoc sensor networks have the potential to provide the missing interface between the physical world and the Internet, thus impacting a large number of users. This connection will enable computational treatments of the physical world in ways never before possible. In this far reaching scenario, quality of service can be expressed in terms of accuracy and/or latency of observing events and overall state of the physical world. Consequently, one of the fundamental problems in sensor networks is the calculation of coverage, which can be defined as a measure of the ability to detect objects within a sensor field. Exposure is directly related to coverage in that it is an integral measure of how well the sensor network can observe an object, moving on an arbitrary path, over a period of time.
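Since exposure is defined as a path integral of sensor intensity, it can be sketched numerically (a simplified illustration, not the paper's algorithm: the inverse-square intensity model and the straight-line path are assumptions made here for concreteness).

```python
import math

def intensity(point, sensors):
    """Field intensity at a point: sum of 1/d^2 contributions from all
    sensors (an assumed sensing model; the small floor avoids division
    by zero when the path passes through a sensor)."""
    return sum(1.0 / max(1e-9, (point[0] - sx) ** 2 + (point[1] - sy) ** 2)
               for sx, sy in sensors)

def path_exposure(start, end, sensors, steps=1000):
    """Exposure of a straight-line path: midpoint-rule approximation of
    the integral of intensity along the path, weighted by arc length."""
    (x0, y0), (x1, y1) = start, end
    length = math.hypot(x1 - x0, y1 - y0)
    ds = length / steps
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        p = (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
        total += intensity(p, sensors) * ds
    return total
```

With a single sensor at the origin and the path from (-1, 1) to (1, 1), the exposure reduces to the integral of 1/(x^2 + 1) over [-1, 1], i.e. pi/2, which makes the sketch easy to sanity-check.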
The Natural Element Method In Solid Mechanics
, 1998
Cited by 31 (12 self)
The application of the Natural Element Method (NEM) (Traversoni, 1994; Braun and Sambridge, 1995) to boundary value problems in two-dimensional small displacement elastostatics is presented. The discrete model of the domain Ω consists of a set of distinct nodes N, and a polygonal description of the boundary ∂Ω. In the Natural Element Method, the trial and test functions are constructed using natural neighbor interpolants. These interpolants are based on the Voronoi tessellation of the set of nodes N. The interpolants are smooth (C∞) everywhere except at the nodes, where they are C⁰; in one dimension, NEM is identical to linear finite elements. The NEM interpolant is strictly linear between adjacent nodes on the boundary of the convex hull, which facilitates imposition of essential boundary conditions. A methodology to model material discontinuities and nonconvex bodies (cracks) using NEM is also described.
Orthogonal polynomials and cubature formulae on spheres and on balls
 Department of Mathematics, University of Oregon
Cited by 28 (20 self)
Abstract. Orthogonal polynomials on the standard simplex Σd in Rd are shown to be related to the spherical orthogonal polynomials on the unit sphere Sd in Rd+1 that are invariant under the group Z2 × · · · × Z2. For a large class of measures on Sd, cubature formulae invariant under Z2 × · · · × Z2 are shown to be characterized by cubature formulae on Σd. Moreover, it is also shown that there is a correspondence between orthogonal polynomials and cubature formulae on Σd and those invariant on the unit ball Bd in Rd. The results provide a new approach to study orthogonal polynomials and cubature formulae on spheres and on simplices.
The Stochastic Inventory Routing Problem with Direct Deliveries
 Transportation Science
, 2000
Cited by 22 (5 self)
Vendor managed inventory replenishment is a business practice in which vendors monitor their customers' inventories, and decide when and how much inventory should be replenished. The inventory routing problem addresses the coordination of inventory management and transportation. The ability to solve the inventory routing problem contributes to the realization of the potential savings in inventory and transportation costs brought about by vendor managed inventory replenishment. The inventory routing problem is hard, especially if a large number of customers is involved. We formulate the inventory routing problem as a Markov decision process, and we propose approximation methods to find good solutions with reasonable computational effort. Computational results are presented for the inventory routing problem with direct deliveries. (Supported by the National Science Foundation under grant DMI-9875400.) The inventory routing problem (IRP) is one of the core problems that has to be solved when implementing the emerging business practice called vendor managed inventory replenishment (VMI). VMI refers to the situation where the replenishment of inventory at a number of locations is controlled by a central decision maker (vendor). The central decision maker can be the supplier and the inventory can be kept at independent customers, or the central decision maker can be a manager responsible for inventory replenishment at a number of warehouses or retail outlets of the same company. Often the central decision maker manages a fleet of vehicles that make the deliveries. In this paper the central decision maker is called the supplier and the inventory locations are referred to as the customers. VMI differs from conventional inventory management in the following way. I...
Importance Sampled Learning Ensembles
, 2003
Cited by 18 (5 self)
Learning a function of many arguments is viewed from the perspective of high-dimensional numerical quadrature. It is shown that many of the popular ensemble learning procedures can be cast in this framework. In particular, randomized methods, including bagging and random forests, are seen to correspond to random Monte Carlo integration methods, each based on particular importance sampling strategies. Non-random boosting methods are seen to correspond to deterministic quasi-Monte Carlo integration techniques. This view helps explain some of their properties and suggests modifications to them that can substantially improve their accuracy while dramatically improving computational performance.
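The bagging-as-Monte-Carlo correspondence can be illustrated with a toy ensemble (an illustration of the general idea, not the paper's method; the stump learner and function names are hypothetical): the ensemble prediction is a Monte Carlo average of base learners fit to randomly perturbed (bootstrap) versions of the data.

```python
import random

def fit_stump(points):
    """Fit a one-split regression stump (threshold, left/right means)
    by scanning candidate splits for minimum squared error."""
    best = None
    xs = sorted(set(x for x, _ in points))
    for i in range(len(xs) - 1):
        t = (xs[i] + xs[i + 1]) / 2
        left = [y for x, y in points if x <= t]
        right = [y for x, y in points if x > t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x <= t else mr

def bagged_predict(data, x, n_models=50, seed=0):
    """Bagging as Monte Carlo integration: average base-learner
    predictions over random bootstrap perturbations of the data."""
    rng = random.Random(seed)
    preds = []
    for _ in range(n_models):
        boot = [rng.choice(data) for _ in data]
        if len(set(px for px, _ in boot)) < 2:
            continue  # degenerate resample: no split possible, skip it
        preds.append(fit_stump(boot)(x))
    return sum(preds) / len(preds)
```

Each bootstrap resample plays the role of a Monte Carlo sample point, and the uniform average over resamples corresponds to the particular importance sampling strategy the abstract attributes to bagging.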
Efficient automatic quadrature in 3d Galerkin BEM
, 1996
Cited by 17 (9 self)
We present cubature methods approximating the surface integrals arising from Galerkin discretization of boundary integral equations on surfaces in R³. This numerical integrator does not depend on the explicit form of the kernel function, the trial and test space, or the surface parametrization. Thus, it is possible to generate the system matrix for a broad class of integral equations just by replacing the subroutine for evaluating the kernel function. We will present formulae to determine the minimal order of the cubature methods for a required accuracy. Emphasis is placed on numerical experiments confirming the theoretical results.