Results 1-10 of 96
Computer Experiments
, 1996
Cited by 67 (5 self)
Introduction: Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates steps such as oxidation, etching and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the ...
Explicit Cost Bounds of Algorithms for Multivariate Tensor Product Problems
 J. Complexity
, 1994
Cited by 44 (10 self)
We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form

  (c(d) + 2) β₁ (β₂ + β₃ ln(1/ε)/(d − 1))^{β₄(d−1)} (1/ε)^{β₅}.

Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the βᵢ do not depend on d; they are determined by the properties of the problem for d = 1. For certain tensor product problems, these cost bounds do not exceed c(d) K ε^{−p} for some numbers K and p, both independent of d. We apply these general estimates to certain integration and approximation problems in the worst and average case settings. We also obtain an upper bound, which is independent of d, on the number n(ε, d) of points for which the discrepancy (with unequal weights) is at most ε: n(ε, d) ≤ 7.26 ...
Numerical Integration using Sparse Grids
 NUMER. ALGORITHMS
, 1998
Cited by 40 (16 self)
We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suitable one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest the use of extended Gauss (Patterson) quadrature formulas as the one-dimensional basis of the construction and show their superiority in comparison to previously used sparse grid approaches based on the trapezoidal, Clenshaw-Curtis and Gauss rules in several numerical experiments and applications.
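To make the Smolyak construction described in this abstract concrete, here is a minimal sketch (our own illustration, not code from the cited paper) of the combination formula for multivariate quadrature on [0,1]^d, using nested one-dimensional trapezoidal rules; all function and parameter names are our own choices.

```python
import numpy as np
from math import comb
from itertools import product

def trap_1d(level):
    """Composite trapezoidal rule on [0,1] with 2**level + 1 points (nested)."""
    n = 2**level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5
    w[-1] *= 0.5
    return x, w

def smolyak_integrate(f, d, q):
    """Smolyak combination formula:
    A(q,d) = sum over multi-indices i with q-d+1 <= |i|_1 <= q of
             (-1)^(q-|i|_1) * C(d-1, q-|i|_1) * (tensor-product rule of levels i)."""
    total = 0.0
    for i in product(range(1, q + 1), repeat=d):
        s = sum(i)
        if not (q - d + 1 <= s <= q):
            continue
        coeff = (-1) ** (q - s) * comb(d - 1, q - s)
        grids = [trap_1d(level) for level in i]
        # evaluate the tensor-product quadrature for this level combination
        val = 0.0
        for idx in product(*[range(len(g[0])) for g in grids]):
            pt = [grids[k][0][idx[k]] for k in range(d)]
            wt = np.prod([grids[k][1][idx[k]] for k in range(d)])
            val += wt * f(pt)
        total += coeff * val
    return total
```

Since the trapezoidal rule is exact for linear functions, the sparse rule reproduces multilinear integrands exactly; for smooth functions with bounded mixed derivatives, far fewer points are used than on the full tensor grid.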
Adaptive sparse grid multilevel methods for elliptic PDEs based on finite differences
, 1998
Cited by 29 (15 self)
We present a multilevel approach for the solution of partial differential equations. It is based on a multiscale basis which is constructed from a one-dimensional multiscale basis by the tensor product approach. Together with the use of hash tables as the data structure, this allows for adaptive refinement in a simple way and is, due to the tensor product approach, well suited for higher-dimensional problems. Also, the adaptive treatment of partial differential equations, the discretization (involving finite differences) and the solution (here by preconditioned BiCG) can be programmed easily. We describe the basic features of the method, discuss the discretization, the solution and the refinement procedures, and report on the results of different numerical experiments.
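The hash-table storage of hierarchical coefficients mentioned in this abstract can be illustrated with a minimal one-dimensional sketch (our own illustration, not the paper's code): hierarchical surpluses of the piecewise-linear hat-function basis are stored in a dictionary keyed by (level, index), which is exactly the access pattern that makes adaptive refinement simple.

```python
def hierarchize_1d(f, max_level):
    """Store hierarchical surpluses of f on [0,1] in a hash table (dict)
    keyed by (level, index). Level-l hat functions are centered at
    x = i / 2**l for odd i, with support width 2 / 2**l."""
    surplus = {}

    def interp(x):
        # evaluate the current hierarchical interpolant at x
        val = 0.0
        for (l, i), s in surplus.items():
            h = 2.0 ** -l
            val += s * max(0.0, 1.0 - abs(x - i * h) / h)
        return val

    for l in range(1, max_level + 1):
        for i in range(1, 2 ** l, 2):   # odd indices: the new points of level l
            x = i / 2.0 ** l
            # surplus = function value minus what coarser levels already capture
            surplus[(l, i)] = f(x) - interp(x)
    return surplus
```

For a smooth function the surpluses decay rapidly with the level (for f(x) = x(1 − x) they are exactly 4^{−l}), so small entries can simply be dropped from the table, which is the adaptive refinement/coarsening mechanism in a nutshell.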
Optimized Tensor-Product Approximation Spaces
Cited by 25 (15 self)
This paper is concerned with the construction of optimized grids and approximation spaces for elliptic differential and integral equations. The main result is the analysis of the approximation of the embedding of the intersection of classes of functions with bounded mixed derivatives in standard Sobolev spaces. Based on the framework of tensor-product biorthogonal wavelet bases and stable subspace splittings, the problem is reduced to diagonal mappings between Hilbert sequence spaces. We construct operator-adapted finite-element subspaces with a lower dimension than the standard full-grid spaces. These new approximation spaces preserve the approximation order of the standard full-grid spaces, provided that certain additional regularity assumptions are fulfilled. The form of the approximation spaces is governed by the ratios of the smoothness exponents of the considered classes of functions. We show in which cases the so-called curse of dimensionality can be broken. The theory covers e...
Weighted Tensor Product Algorithms for Linear Multivariate Problems
, 1998
Cited by 22 (8 self)
We study the ε-approximation of linear multivariate problems defined over weighted tensor product Hilbert spaces of functions f of d variables. A class of weighted tensor product (WTP) algorithms is defined which depends on a number of parameters. Two classes of permissible information are studied: Λ^all consists of all linear functionals, while Λ^std consists of evaluations of f or its derivatives. We show that these multivariate problems are sometimes tractable even with a worst-case assurance. We study problem tractability by investigating when a WTP algorithm is a polynomial-time algorithm, that is, when the minimal number of information evaluations is polynomial in 1/ε and d. For Λ^all we construct an optimal WTP algorithm and provide a necessary and sufficient condition for tractability in terms of the sequence of weights and the sequence of singular values for d = 1. For Λ^std we obtain a weaker result by constructing a WTP algorithm which is optimal only for some weight se...
Efficient collocational approach for parametric uncertainty analysis
 Commun. Comput. Phys
, 2007
Cited by 21 (3 self)
A numerical algorithm for effective incorporation of parametric uncertainty into mathematical models is presented. The uncertain parameters are modeled as random variables, and the governing equations are treated as stochastic. The solutions, or quantities of interest, are expressed as convergent series of orthogonal polynomial expansions in terms of the input random parameters. A high-order stochastic collocation method is employed to solve for the solution statistics and, more importantly, to reconstruct the polynomial expansion. While retaining the high accuracy of the polynomial expansion, the resulting "pseudospectral"-type algorithm is straightforward to implement, as it requires only repetitive deterministic simulations. An estimate on the error bound is presented, along with numerical examples for problems with relatively complicated forms of governing equations. Key words: collocation methods; pseudospectral methods; stochastic inputs; random differential equations; uncertainty quantification.
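A minimal sketch of the pseudo-spectral collocation idea for a single Gaussian random parameter (our own illustration; the paper treats far more general governing equations): the gPC coefficients of a solution u(Z), Z ~ N(0,1), are reconstructed by Gauss-Hermite quadrature against probabilists' Hermite polynomials, after which the solution statistics are read off from the coefficients.

```python
import numpy as np
from math import factorial, exp
from numpy.polynomial.hermite_e import hermegauss, hermeval

def gpc_coefficients(u, order, n_quad):
    """Pseudo-spectral gPC projection: c_k = E[u(Z) He_k(Z)] / k!
    for Z ~ N(0,1), computed by Gauss-Hermite quadrature. Here u plays the
    role of the deterministic solver evaluated at each collocation node."""
    x, w = hermegauss(n_quad)            # weight exp(-x^2/2); sum(w) = sqrt(2*pi)
    w = w / np.sqrt(2.0 * np.pi)         # normalize to the standard Gaussian density
    uvals = u(x)                         # "repetitive deterministic simulations"
    coeffs = []
    for k in range(order + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0
        Hk = hermeval(x, ek)             # He_k at the quadrature nodes
        coeffs.append(np.dot(w, uvals * Hk) / factorial(k))  # E[He_k^2] = k!
    return np.array(coeffs)

# toy "model": u(Z) = exp(Z), whose exact gPC coefficients are e^{1/2}/k!
c = gpc_coefficients(np.exp, order=6, n_quad=20)
mean = c[0]                                                  # E[u]
var = sum(c[k] ** 2 * factorial(k) for k in range(1, 7))     # Var[u], truncated
```

The mean and variance come directly from the orthogonality of the basis; for u(Z) = exp(Z) they converge to e^{1/2} and e² − e, matching the exact lognormal moments.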
A note on the complexity of solving Poisson's equation for spaces of bounded mixed derivatives
, 1998
Cited by 18 (12 self)
We study the complexity of solving the d-dimensional Poisson equation on ]0, 1[^d. We restrict ourselves to cases where the solution u lies in some space of functions of bounded mixed derivatives (with respect to the L∞- or the L2-norm) up to ∂^{2d}/∏_{j=1}^{d} ∂x_j². An upper bound for the complexity of computing a solution of some prescribed accuracy ε with respect to the energy norm is given, which is proportional to ε^{−1}. We show this result in a constructive manner by proposing a finite element method on a special sparse grid space [Bu1], which is obtained by an a priori grid optimization process based on the energy norm. Thus, the result of this paper is twofold: first, from a theoretical point of view concerning the complexity of solving elliptic PDEs, a strong tractability result of the order O(ε^{−1}) is given; and, second, we provide a practically usable hierarchical basis finite element method of this complexity O(ε^{−1}), i.e. without logarithm...
The Curse of Dimension and a Universal Method For Numerical Integration
 in Multivariate Approximation and Splines
, 1998
Cited by 14 (1 self)
Many high-dimensional problems are difficult to solve for any numerical method. This curse of dimension means that the computational cost must increase exponentially with the dimension of the problem. A high dimension, however, can be compensated for by a high degree of smoothness. We study numerical integration and prove that such a compensation is possible by a recently invented method. The method is shown to be universal, i.e., simultaneously optimal up to logarithmic factors, on two different smoothness scales. The first scale is defined by isotropic smoothness conditions, while the second scale involves anisotropic smoothness and is related to partially separable functions.

1. Introduction. Several applications require the computation of high-dimensional integrals. They are present, for example, in statistical mechanics; see [28] for an introduction. Another important example is the fast valuation of financial derivatives; see [16]. Some applications even require approxima...
Fast Numerical Methods for Stochastic Computations: A Review
, 2009
Cited by 14 (1 self)
This paper presents a review of the current state of the art of numerical methods for stochastic computations. The focus is on efficient high-order methods suitable for practical applications, with a particular emphasis on those based on the generalized polynomial chaos (gPC) methodology. The framework of gPC is reviewed, along with its Galerkin and collocation approaches for solving stochastic equations. Properties of these methods are summarized using results from the literature. This paper also attempts to present the gPC-based methods in a unified framework based on an extension of classical spectral methods into multidimensional random spaces.