Results 1–10 of 32
MULTIPROCESS PARALLEL ANTITHETIC COUPLING FOR BACKWARD AND FORWARD MARKOV CHAIN MONTE CARLO
, 2005
"... Antithetic coupling is a general stratification strategy for reducing Monte Carlo variance without increasing the simulation size. The use of the antithetic principle in the Monte Carlo literature typically employs two strata via antithetic quantile coupling. We demonstrate here that further stratif ..."
Abstract

Cited by 15 (6 self)
 Add to MetaCart
Antithetic coupling is a general stratification strategy for reducing Monte Carlo variance without increasing the simulation size. The use of the antithetic principle in the Monte Carlo literature typically employs two strata via antithetic quantile coupling. We demonstrate here that further stratification, obtained by using k > 2 (e.g., k = 3–10) antithetically coupled variates, can offer substantial additional gain in Monte Carlo efficiency, in terms of both variance and bias. The reason for reduced bias is that antithetically coupled chains can provide a more dispersed search of the state space than multiple independent chains. The emerging area of perfect simulation provides a perfect setting for implementing the k-process parallel antithetic coupling for MCMC because, without antithetic coupling, this class of methods delivers genuine independent draws. Furthermore, antithetic backward coupling provides a very convenient theoretical tool for investigating antithetic forward coupling. However, the generation of k > 2 antithetic variates that are negatively associated, that is, they preserve negative correlation under monotone …
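The two-strata (k = 2) antithetic quantile coupling mentioned above can be sketched with a toy Monte Carlo estimate. The integrand, sample size, and seed below are illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(u):
    # A monotone test integrand; E[f(U)] = 1/3 for U ~ Uniform(0, 1).
    return u ** 2

n = 10_000
u = rng.random(n)

# Plain estimator from 2n independent draws.
est_indep = f(np.concatenate([u, rng.random(n)])).mean()

# Antithetic quantile coupling (k = 2): pair each draw u with 1 - u.
# For a monotone f the pair is negatively correlated, so pair averages
# have lower variance than averages of two independent draws.
pairs = 0.5 * (f(u) + f(1.0 - u))
est_anti = pairs.mean()
```

Both estimators use 2n function evaluations; the antithetic one trades the second batch of random draws for the coupled batch 1 - u.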
Monte-Carlo-type techniques for processing interval uncertainty, and their potential engineering applications
 Reliable Computing
, 2007
"... Abstract. In engineering applications, we need to make decisions under uncertainty. Traditionally, in engineering, statistical methods are used, methods assuming that we know the probability distribution of different uncertain parameters. Usually, we can safely linearize the dependence of the desire ..."
Abstract

Cited by 12 (6 self)
 Add to MetaCart
In engineering applications, we need to make decisions under uncertainty. Traditionally, in engineering, statistical methods are used, i.e., methods that assume we know the probability distribution of the different uncertain parameters. Usually, we can safely linearize the dependence of the desired quantities y (e.g., stress at different structural points) on the uncertain parameters x_i, thus enabling sensitivity analysis. Often, the number n of uncertain parameters is huge, so sensitivity analysis leads to a lot of computation time. To speed up the processing, we propose to use special Monte-Carlo-type simulations.
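A minimal sketch of the linearization step described above, on a hypothetical toy model with made-up interval half-widths (the paper's own Monte-Carlo-type method is more sophisticated):

```python
import numpy as np

# Toy model and interval uncertainty; both are illustrative stand-ins
# for the engineering quantities discussed in the abstract.
def y(x):
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[2]

x0 = np.array([1.0, 2.0, -1.0])      # nominal parameter values
delta = np.array([0.1, 0.05, 0.2])   # half-widths of input intervals

# Linearization: estimate partial derivatives numerically; the
# half-width of y's range is then sum_i |dy/dx_i| * delta_i.
h = 1e-6
grad = np.array([
    (y(x0 + h * e) - y(x0 - h * e)) / (2 * h)
    for e in np.eye(3)
])
halfwidth_linear = np.abs(grad) @ delta

# Monte Carlo check: sampled deviations should not exceed the linear
# bound (exact here only because the toy model is itself linear).
rng = np.random.default_rng(1)
xs = x0 + rng.uniform(-1, 1, size=(5000, 3)) * delta
mc_dev = np.abs(np.apply_along_axis(y, 1, xs) - y(x0)).max()
```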
Input Model Uncertainty: Why Do We Care And What Should We Do About It?
, 2003
"... An input model is a collection of distributions together with any associated parameters that are used as primitive inputs in a simulation model. Input model uncertainty arises when one is not completely certain what distributions and/or parameters to use. This tutorial attempts to provide a sense of ..."
Abstract

Cited by 8 (1 self)
 Add to MetaCart
An input model is a collection of distributions together with any associated parameters that are used as primitive inputs in a simulation model. Input model uncertainty arises when one is not completely certain what distributions and/or parameters to use. This tutorial attempts to provide a sense of why one should consider input uncertainty and what methods can be used to deal with it.
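One common way to expose input-model uncertainty, bootstrapping the data used to fit the input model, can be sketched as follows. The data, the fitted exponential model, and the stand-in `simulate` function are all hypothetical, chosen only to illustrate the idea:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical observed interarrival times (minutes); in practice this
# would be real data used to fit the simulation's input model.
data = rng.exponential(scale=2.0, size=50)

def simulate(mean_interarrival, n=1000):
    # Stand-in simulation whose output depends on the fitted input model.
    return rng.exponential(scale=mean_interarrival, size=n).mean()

# A single point estimate ignores input-model uncertainty ...
point = simulate(data.mean())

# ... while refitting the input model to bootstrap resamples of the
# data before each run propagates it into the output.
boots = [
    simulate(rng.choice(data, size=data.size, replace=True).mean())
    for _ in range(200)
]
spread = np.std(boots)  # output variability due to input uncertainty
```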
Sensitivity in risk analyses with uncertain numbers
, 2006
"... Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers ..."
Abstract

Cited by 7 (0 self)
 Add to MetaCart
Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a “pinching” strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
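A crude variance-based sketch of the "pinching" idea, using a made-up two-input model rather than the report's Dempster-Shafer or probability-bounds machinery:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

def model(a, b):
    # Toy dike-style response; purely illustrative.
    return a + 2.0 * b

# a: epistemically uncertain input, here sampled over its interval;
# b: aleatory input (e.g., a random load).
a = rng.uniform(0.0, 1.0, n)
b = rng.normal(0.0, 0.5, n)

base_var = np.var(model(a, b))

# "Pinch" a to its midpoint, i.e. hypothetically remove its
# uncertainty, and measure how much output uncertainty disappears.
pinched_var = np.var(model(np.full(n, 0.5), b))

reduction = 1.0 - pinched_var / base_var
```

Ranking inputs by this reduction indicates where reducing uncertainty would pay off most.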
The economics (or lack thereof) of aerosol geoengineering
"... Anthropogenic greenhouse gas emissions are changing the Earth’s climate and impose substantial risks for current and future generations. What are scientifically sound, economically viable, and ethically defendable strategies to manage these climate risks? Ratified international agreements call for a ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
Anthropogenic greenhouse gas emissions are changing the Earth’s climate and impose substantial risks for current and future generations. What are scientifically sound, economically viable, and ethically defensible strategies to manage these climate risks? Ratified international agreements call for a reduction of greenhouse gas emissions to avoid dangerous anthropogenic interference with the climate system. Recent proposals, however, call for the deployment of a different approach: to geoengineer climate by injecting aerosol precursors into the stratosphere. Published economic studies typically suggest that substituting aerosol geoengineering for abatement of carbon dioxide emissions results in large net monetary benefits. However, these studies neglect the risks of aerosol geoengineering due to (i) the potential for a failure to sustain the aerosol forcing and (ii) the negative impacts associated with the aerosol forcing. Here we use a simple integrated assessment model of climate change to analyze potential economic impacts of aerosol geoengineering strategies over a wide range of uncertain parameters such as climate sensitivity, the economic damages due to climate change, and the economic damages due to aerosol geoengineering forcing. The simplicity of the model provides the advantages of …
Computational Methods for Decision Making Based on Imprecise Information
, 2006
"... In this paper, we investigate computational methods for decision making based on imprecise information in the context of engineering design. The goal is to identify the subtleties of engineering design problems that impact the choice of computational solution methods, and to evaluate some existing s ..."
Abstract

Cited by 4 (3 self)
 Add to MetaCart
In this paper, we investigate computational methods for decision making based on imprecise information in the context of engineering design. The goal is to identify the subtleties of engineering design problems that impact the choice of computational solution methods, and to evaluate some existing solution methods to determine their suitability and limitations. Although several approaches for propagating imprecise probabilities have been published in the literature, these methods are insufficient for practical engineering analysis. The dependency bounds convolution approach of Williamson and Downs and the distribution envelope determination approach of Berleant work sufficiently well only for open models (that is, models with known mathematical operations). Both of these approaches rely on interval arithmetic and are therefore limited to problems with few repeated variables. In an attempt to overcome the difficulties faced by these deterministic methods, we propose an alternative approach that utilizes both Monte Carlo simulation and optimization. The Monte Carlo/optimization hybrid approach has its own drawbacks in that it assumes that the uncertain inputs can be parameterized, that it requires the solution of a global optimization problem, and that it assumes independence between the uncertain inputs.
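The Monte Carlo/optimization hybrid described above can be illustrated on a toy problem where the imprecise input is parameterized by a single mean lying in an interval. The model, the interval, and the grid search (standing in for a global optimizer) are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def response(x):
    # Toy performance function; a stand-in for the engineering model.
    return np.maximum(x - 1.0, 0.0)

# Imprecise input: Normal(mu, 1) with mu known only to lie in [0, 0.5].
z = rng.normal(0.0, 1.0, 50_000)  # common random numbers across mu

def expected_response(mu):
    # Monte Carlo step: estimate the expectation for a fixed parameter.
    return response(mu + z).mean()

# Optimization step: search the parameter interval for bounds on the
# expectation (a coarse grid stands in for a global optimizer).
mus = np.linspace(0.0, 0.5, 51)
vals = [expected_response(m) for m in mus]
lower, upper = min(vals), max(vals)
```

Reusing the same draws `z` for every candidate `mu` keeps the optimization surface smooth, which is one reason the hybrid is workable despite the global optimization it requires.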
Numerical and visual evaluation of hydrological and environmental models using the Monte Carlo Analysis Toolbox
, 2007
"... The detailed evaluation of mathematical models and the consideration of uncertainty in the modeling of hydrological and environmental systems are of increasing importance, and are sometimes even demanded by decision makers. At the same time, the growing complexity of models to represent realworld s ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
The detailed evaluation of mathematical models and the consideration of uncertainty in the modeling of hydrological and environmental systems are of increasing importance, and are sometimes even demanded by decision makers. At the same time, the growing complexity of models to represent real-world systems makes it more and more difficult to understand model behavior, sensitivities and uncertainties. The Monte Carlo Analysis Toolbox (MCAT) is a Matlab library of visual and numerical analysis tools for the evaluation of hydrological and environmental models. Input to the MCAT is the result of a Monte Carlo or population-evolution-based sampling of the parameter space of the model structure under investigation. The MCAT can be used offline, i.e., it does not have to be connected to the evaluated model, and can thus be used for any model for which an appropriate sampling can be performed. The MCAT contains tools for the evaluation of performance, identifiability, sensitivity, predictive uncertainty and also allows for the testing of hypotheses with respect to the model structure used. In addition to research applications, the MCAT can be used as a teaching tool in courses that include the use of mathematical models.
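Although MCAT itself is a Matlab library, the offline input it expects, a Monte Carlo sample of the parameter space together with the corresponding objective values, can be sketched in Python on a made-up two-parameter model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "observations" from a toy two-parameter decay model; an
# MCAT-style analysis would use a real hydrological model instead.
t = np.linspace(0.0, 10.0, 50)
true_a, true_b = 1.5, 0.3
obs = true_a * np.exp(-true_b * t)

# Offline input: a Monte Carlo sample of the parameter space plus the
# corresponding objective values (here, RMSE against the observations).
a = rng.uniform(0.5, 3.0, 2000)
b = rng.uniform(0.05, 1.0, 2000)
sim = a[:, None] * np.exp(-b[:, None] * t)
rmse = np.sqrt(((sim - obs) ** 2).mean(axis=1))

best = np.argmin(rmse)  # best-performing parameter set in the sample
```

The arrays `a`, `b`, and `rmse` are exactly the kind of parameter-sample/objective matrix a toolbox like MCAT would then visualize (dotty plots, identifiability, prediction bounds).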
DESIGN OF AND COMPARISON WITH VERIFICATION AND VALIDATION BENCHMARKS
"... Verification and validation (V&V) are the primary means to assess accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several highconsequence application areas, such as, nuclear reactor safety, underground s ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Verification and validation (V&V) are the primary means to assess accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence application areas, such as nuclear reactor safety, underground storage of nuclear waste, and safety of nuclear weapons. Although the terminology is not uniform across engineering disciplines, code verification deals with the assessment of the reliability of the software coding, and solution verification deals with the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparing with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. Some fields, such as nuclear reactor safety, place little emphasis on code verification benchmarks and great emphasis on validation benchmarks that are closely related to actual reactors operating near safety-critical conditions. This paper proposes recommendations for the optimum design and use of code verification benchmarks based on classical analytical solutions, manufactured solutions, and highly accurate numerical solutions. It is believed that these benchmarks will prove useful to in-house developed codes as well as commercially licensed codes. In addition, this paper proposes recommendations for the design and use of validation benchmarks, with emphasis on careful design of building-block experiments, estimation of experiment measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation.
It is argued that predictive capability of a computational model is built both on the measurement of achievement in V&V and on how closely related the V&V benchmarks are to the actual application of interest, e.g., the magnitude of extrapolation beyond a …
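The manufactured-solutions approach to code verification mentioned above can be sketched on a toy 1-D Poisson solver; the solver, the chosen solution, and the grid sizes are illustrative:

```python
import numpy as np

# Method of manufactured solutions: choose u(x) = sin(pi x), derive
# the source f = -u'' = pi^2 sin(pi x), solve -u'' = f numerically
# with u(0) = u(1) = 0, and check that the error shrinks at the
# discretization's theoretical (second) order.
def max_error(n):
    x = np.linspace(0.0, 1.0, n + 1)
    h = x[1] - x[0]
    f = np.pi ** 2 * np.sin(np.pi * x[1:-1])
    # Tridiagonal matrix of the central-difference operator -u''.
    A = (np.diag(np.full(n - 1, 2.0))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h ** 2
    u = np.linalg.solve(A, f)
    return np.max(np.abs(u - np.sin(np.pi * x[1:-1])))

e_coarse, e_fine = max_error(32), max_error(64)
observed_order = np.log2(e_coarse / e_fine)  # should be close to 2
```

An observed order that falls short of the theoretical one is the benchmark's signal of a coding or discretization defect.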
COMPARISON OF SAMPLING TECHNIQUES ON THE PERFORMANCE OF MONTE CARLO BASED SENSITIVITY ANALYSIS
"... Sensitivity analysis is a key part of a comprehensive energy simulation study. MonteCarlo techniques have been successfully applied to many simulation tools. Several sampling techniques have been proposed in the literature; however to date there has been no comparison of their performance for typic ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Sensitivity analysis is a key part of a comprehensive energy simulation study. Monte Carlo techniques have been successfully applied to many simulation tools. Several sampling techniques have been proposed in the literature; however, to date there has been no comparison of their performance for typical building simulation applications. This paper examines the performance of simple random, stratified, and Latin Hypercube sampling (LHS) when applied to a typical building simulation problem. An integrated natural ventilation problem was selected because it has an inexpensive calculation time, allowing multiple sensitivity analyses to be undertaken, while remaining realistic in that wind and temperature effects are both modeled. The research shows that, compared to simple random sampling, LHS and stratified sampling produce results that are not significantly different (at the 5% level) but with increased robustness (less variance in the mean prediction). However, it should not be inferred from this that fewer simulation runs are required for LHS and stratified sampling. The results presented here and in previous work indicate that, for practical purposes, Monte Carlo uncertainty analysis in typical building simulation applications should use about 100 runs and simple random sampling.
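The comparison of simple random sampling with Latin Hypercube sampling can be sketched on a toy integrand (not the paper's building simulation); sample size, repetition count, and integrand are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

def lhs(n):
    # Latin Hypercube sample of size n on (0, 1): one uniform draw per
    # stratum [i/n, (i+1)/n), returned in shuffled order.
    return rng.permutation((np.arange(n) + rng.random(n)) / n)

def spread_of_mean(sampler, n=100, reps=500):
    # Std dev of the estimated mean of f(U) = U**2 across repeated runs,
    # i.e. the robustness measure discussed in the abstract.
    return np.std([np.mean(sampler(n) ** 2) for _ in range(reps)])

sd_simple = spread_of_mean(lambda n: rng.random(n))
sd_lhs = spread_of_mean(lhs)
# Both samplers target the same mean (1/3), but the LHS estimate
# varies less from run to run.
```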