Results 1–10 of 15
Computer Experiments
, 1996
Abstract

Cited by 68 (5 self)
Introduction Deterministic computer simulations of physical phenomena are becoming widely used in science and engineering. Computers are used to describe the flow of air over an airplane wing, combustion of gases in a flame, behavior of a metal structure under stress, safety of a nuclear reactor, and so on. Some of the most widely used computer models, and the ones that lead us to work in this area, arise in the design of the semiconductors used in the computers themselves. A process simulator starts with a data structure representing an unprocessed piece of silicon and simulates the steps such as oxidation, etching and ion injection that produce a semiconductor device such as a transistor. A device simulator takes a description of such a device and simulates the flow of current through it under varying conditions to determine properties of the device such as its switching speed and the critical voltage at which it switches. A circuit simulator takes a list of devices and the
Explicit Cost Bounds of Algorithms for Multivariate Tensor Product Problems
 J. Complexity
, 1994
Abstract

Cited by 44 (10 self)
We study multivariate tensor product problems in the worst case and average case settings. They are defined on functions of d variables. For arbitrary d, we provide explicit upper bounds on the costs of algorithms which compute an ε-approximation to the solution. The cost bounds are of the form (c(d) + 2) β₁ (β₂ + β₃ ln(1/ε)/(d − 1))^{β₄(d−1)} (1/ε)^{β₅}. Here c(d) is the cost of one function evaluation (or one linear functional evaluation), and the β_i's do not depend on d; they are determined by the properties of the problem for d = 1. For certain tensor product problems, these cost bounds do not exceed c(d) K ε^{−p} for some numbers K and p, both independent of d. We apply these general estimates to certain integration and approximation problems in the worst and average case settings. We also obtain an upper bound, which is independent of d, for the number n(ε, d) of points for which the discrepancy (with unequal weights) is at most ε: n(ε, d) ≤ 7.26 ...
Numerical Integration using Sparse Grids
 NUMER. ALGORITHMS
, 1998
Abstract

Cited by 40 (16 self)
We present and review algorithms for the numerical integration of multivariate functions defined over d-dimensional cubes using several variants of the sparse grid method first introduced by Smolyak [51]. In this approach, multivariate quadrature formulas are constructed using combinations of tensor products of suitable one-dimensional formulas. The computing cost is almost independent of the dimension of the problem if the function under consideration has bounded mixed derivatives. We suggest the use of extended Gauss (Patterson) quadrature formulas as the one-dimensional basis of the construction and show their superiority in comparison to previously used sparse grid approaches based on the trapezoidal, Clenshaw-Curtis and Gauss rules in several numerical experiments and applications.
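The Smolyak combination technique described in this abstract builds a sparse grid quadrature from low-order tensor products of one-dimensional rules. As a rough illustration only (using nested trapezoidal rules, not the Patterson-based variant the paper advocates; all function and level names are our own), a minimal sketch of the combination formula A(q, d) = Σ_{q−d+1 ≤ |i| ≤ q} (−1)^{q−|i|} C(d−1, q−|i|) (Q_{i₁} ⊗ ... ⊗ Q_{i_d}) might look like:

```python
import itertools
from math import comb

import numpy as np

def trap_rule(level):
    """1D composite trapezoidal rule on [0,1] with 2^level + 1 points."""
    n = 2**level + 1
    x = np.linspace(0.0, 1.0, n)
    w = np.full(n, 1.0 / (n - 1))
    w[0] *= 0.5   # boundary weights are halved
    w[-1] *= 0.5
    return x, w

def smolyak_quadrature(f, d, q):
    """Smolyak combination formula of level q for a d-dim integrand on [0,1]^d."""
    total = 0.0
    # sum over multi-indices i (i_k >= 1) with q - d + 1 <= |i| <= q
    for i in itertools.product(range(1, q + 1), repeat=d):
        s = sum(i)
        if q - d + 1 <= s <= q:
            coeff = (-1)**(q - s) * comb(d - 1, q - s)
            rules = [trap_rule(ik) for ik in i]
            # tensor product of the selected 1D rules
            for pt in itertools.product(*[list(zip(x, w)) for x, w in rules]):
                xs = [p[0] for p in pt]
                ws = np.prod([p[1] for p in pt])
                total += coeff * ws * f(xs)
    return total
```

For a smooth integrand such as exp(x₁ + x₂) on [0,1]², the level-7 sparse grid already reproduces the exact value (e − 1)² to a few digits, while using far fewer nodes than the full tensor grid.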
Average Case Complexity Of Linear Multivariate Problems Part I: Theory
 APPLICATIONS; J. COMPLEXITY
, 1991
Abstract

Cited by 13 (4 self)
We study the average case complexity of linear multivariate problems, that is, the approximation of continuous linear operators on functions of d variables. The function spaces are equipped with Gaussian measures. We consider two classes of information. The first class Λ^std consists of function values, and the second class Λ^all consists of all continuous linear functionals. Tractability of a linear multivariate problem means that the average case complexity of computing an ε-approximation is O((1/ε)^p) with p independent of d. The smallest such p is called the exponent of the problem. Under mild assumptions, we prove that tractability in Λ^all is equivalent to tractability in Λ^std, and that the difference of the exponents is at most 2. The proof of this result is not constructive. We provide a simple condition to check tractability in Λ^all. We also address the issue of how to construct optimal (or nearly optimal) sample points for linear multivariate problems. We use rela...
Computing Discrepancies of Smolyak Quadrature Rules
 J. COMPLEXITY
, 1996
Abstract

Cited by 13 (1 self)
In recent years, Smolyak quadrature rules (also called quadratures on hyperbolic cross points or sparse grids) have gained interest as a possible competitor to number-theoretic quadratures for high-dimensional problems. A standard way of comparing the quality of multivariate quadrature formulas consists of computing their L2-discrepancy. Especially for larger dimensions, such computations are a highly complex task. In this paper we develop a fast recursive algorithm for computing the L2-discrepancy (and related quality measures) of general Smolyak quadratures. We carry out numerical comparisons between the discrepancies of certain Smolyak rules, Hammersley and Monte Carlo sequences.
Multivariate Integration and Approximation for Random Fields satisfying Sacks-Ylvisaker Conditions
 Ann. Appl. Prob
, 1995
Abstract

Cited by 12 (7 self)
We present sharp bounds on the minimal errors of linear estimators for multivariate integration and L2-approximation. This is done for a random field whose covariance kernel is a tensor product of one-dimensional kernels that satisfy the Sacks-Ylvisaker regularity conditions. 1. Introduction We study multivariate integration and L2-approximation for random fields Y which are defined on the d-dimensional unit cube, D = [0,1]^d, and which have mean zero and known covariance kernel K. We assume that K is at least continuous, and hence we may assume that Y is a measurable random field whose realizations are in L2(D) with probability one. For integration we want to estimate the integral ∫_D Y(t) dt, whereas for L2-approximation we want to estimate the values Y(t) for all t, and we study the distance of the estimate and the realization of the field in the space L2(D). For both problems we mainly consider linear estimators that use n observations of the random field. These e...
Efficient algorithms for computing the L2-discrepancy
 Math. Comp
, 1995
Abstract

Cited by 12 (0 self)
The L2-discrepancy is a quantitative measure of precision for multivariate quadrature rules. It can be computed explicitly. Previously known algorithms needed O(m^2) operations, where m is the number of nodes. In this paper we present algorithms which require O(m (log m)^d) operations.
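For concreteness, the O(m^2) baseline this abstract refers to follows directly from the closed-form expression for the L2 star discrepancy of an equal-weight point set (often attributed to Warnock). The sketch below assumes unweighted points in [0,1]^d and is the naive pairwise evaluation, not the paper's O(m (log m)^d) algorithm:

```python
import numpy as np

def l2_star_discrepancy(points):
    """O(m^2) evaluation of the L2 star discrepancy via Warnock's formula:
    D2^2 = 3^{-d} - (2/m) sum_i prod_k (1 - x_ik^2)/2
                  + (1/m^2) sum_{i,j} prod_k (1 - max(x_ik, x_jk))."""
    x = np.asarray(points, dtype=float)
    m, d = x.shape
    term1 = 3.0**(-d)
    term2 = (2.0 / m) * np.prod((1.0 - x**2) / 2.0, axis=1).sum()
    # all pairwise maxima, shape (m, m, d); this is the O(m^2) bottleneck
    mx = np.maximum(x[:, None, :], x[None, :, :])
    term3 = np.prod(1.0 - mx, axis=2).sum() / m**2
    return np.sqrt(term1 - term2 + term3)
```

As a sanity check, a single point at 0.5 in one dimension has local discrepancy t − 1[t > 0.5], whose squared L2 norm integrates to 1/12, so the discrepancy is sqrt(1/12) ≈ 0.2887.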
Control variates for quasi-Monte Carlo
, 2003
Abstract

Cited by 8 (3 self)
Quasi-Monte Carlo (QMC) methods have begun to displace ordinary Monte Carlo (MC) methods in many practical problems. It is natural and obvious to combine QMC methods with traditional variance reduction techniques used in MC sampling, such as control variates. There can, ...
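The combination this abstract describes can be illustrated with a small sketch: evaluate the integrand on a QMC point set (here the one-dimensional van der Corput sequence) and subtract a regression-fitted multiple of a control variate with known integral. The function names and the least-squares choice of coefficient are our own illustration, not the paper's construction:

```python
import numpy as np

def van_der_corput(n, base=2):
    """First n points of the base-b van der Corput sequence (a 1D QMC sequence)."""
    seq = np.empty(n)
    for i in range(n):
        q, denom, k = 0.0, 1.0, i + 1
        while k > 0:
            denom *= base
            k, rem = divmod(k, base)
            q += rem / denom   # radical-inverse digit expansion
        seq[i] = q
    return seq

def qmc_control_variate(f, g, g_mean, n=1024):
    """QMC estimate of the integral of f over [0,1], using g (with known
    integral g_mean) as a control variate."""
    x = van_der_corput(n)
    fx, gx = f(x), g(x)
    c = np.cov(fx, gx)               # 2x2 sample covariance matrix
    beta = c[0, 1] / c[1, 1]         # least-squares control-variate coefficient
    return fx.mean() - beta * (gx.mean() - g_mean)
```

With f = exp and the linear control variate g(x) = x (whose integral over [0,1] is 1/2), the estimate lands close to the exact value e − 1.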
Dealing with the complexity of economic calculations
, 1996
Abstract

Cited by 7 (1 self)
Abstract: This essay is a response to a growing negative literature that suggests that neoclassical economic theories based on hypotheses of rationality and equilibrium are of limited practical relevance because they require an infeasibly large number of calculations. Many of the negative results are translations of abstract complexity bounds from the computer science literature. I show that these bounds do not constitute proofs that difficult economic calculations are “impossible” and discuss the type of hardware and software that can make it possible to solve very hard problems. I discuss four different ways to break the curse of dimensionality of economic problems: 1) by exploiting special structure, 2) by decomposition, 3) by randomization, and 4) by taking advantage of “knowledge capital.” However, these four methods may not be enough. I offer some speculations on the role of decentralization for harnessing the power of massively parallel processors. I conjecture that decentralization is an efficient “operating system” for organizing large-scale computations on massively parallel systems. Economies, immune systems and brains are all types of massively parallel processors that use decentralization to solve difficult computational problems. However, knowledge capital, in the form of effective institutions, is necessary to ensure that decentralization leads to effective cooperation rather than anarchy and chaos. I suggest that
Asymptotic Optimality Of Regular Sequence Designs
 Ann. Statist
, 1995
Abstract

Cited by 4 (2 self)
We study linear estimators for the weighted integral of a stochastic process. The process may only be observed on a finite sampling design. The error is defined in the mean square sense, and the process is assumed to satisfy Sacks-Ylvisaker regularity conditions of order r ∈ ℕ₀. We show that sampling at the quantiles of a particular density already yields asymptotically optimal estimators. Hereby we extend results by Sacks and Ylvisaker for regularity r = 0 or 1, and we confirm a conjecture by Eubank, Smith, and Smith. 1. Introduction Let X(t), t ∈ [0,1], be a centered stochastic process which is at least continuous in quadratic mean. For a known function ρ ∈ L2([0,1]) we want to estimate the weighted integral Int_ρ(X) = ∫₀¹ X(t) · ρ(t) dt. We consider linear estimators I_n which are based on n observations of X. Hence I_n(X) = Σ_{i=1}^n a_i X(t_i) with sampling points 0 ≤ t₁ < ··· < t_n ≤ 1 and coefficients a_i ∈ ℝ. The error of I_n is de...
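The quantile-based designs this abstract refers to can be sketched in a few lines: given a design density on [0,1], place the n sampling points at its midpoint quantiles by numerically inverting the CDF. The helper name, the midpoint quantile levels, and the grid-based inversion are our own illustrative choices, not the paper's construction (which ties the density to the regularity order r):

```python
import numpy as np

def quantile_design(density, n, grid_size=10_000):
    """Design points t_1 < ... < t_n at the midpoint quantiles of a density on [0,1]."""
    t = np.linspace(0.0, 1.0, grid_size)
    pdf = density(t)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]                              # normalize to a CDF
    probs = (np.arange(1, n + 1) - 0.5) / n     # midpoint quantile levels
    return np.interp(probs, cdf, t)             # numerical inverse CDF
```

For the uniform density this reduces to the equispaced midpoints (i − 1/2)/n; a density with more mass in one region places correspondingly more design points there.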