Results 1–5 of 5
Measures of agreement between computation and experiment: Validation metrics (2006)
Abstract

Cited by 12 (2 self)
With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables to sharpen assessment of computational accuracy. This type of measure has recently been referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric, as well as features that we believe should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
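As a rough illustration of a confidence-interval-based metric in this spirit, the sketch below brackets the estimated model error at a single measurement condition with a t-based confidence interval derived from experimental scatter. The replicate data, model value, and t quantile are invented, and this is not necessarily the paper's exact formulation.

```python
import math

# Hypothetical replicate measurements at one condition, and a model prediction
# at the same condition (all numbers are illustrative).
y_exp = [10.1, 9.8, 10.3, 10.0, 9.9]
y_model = 10.5

n = len(y_exp)
mean_exp = sum(y_exp) / n
s = math.sqrt(sum((y - mean_exp) ** 2 for y in y_exp) / (n - 1))  # sample std

error = y_model - mean_exp              # estimated model error
t_975 = 2.776                           # t quantile, 95% two-sided, n-1 = 4 dof
half_width = t_975 * s / math.sqrt(n)   # confidence half-width on the error

# With ~95% confidence, the true model error lies in
# [error - half_width, error + half_width].
```

The interval width makes the role of experimental uncertainty explicit: noisier or fewer replicates widen the band around the estimated error.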
Validation of imprecise probability models
Abstract

Cited by 1 (1 self)
Validation is the assessment of the match between a model’s predictions and any empirical observations relevant to those predictions. This comparison is straightforward when the data and predictions are deterministic, but is complicated when either or both are expressed in terms of uncertain numbers (i.e., intervals, probability distributions, p-boxes, or more general imprecise probability structures). There are two obvious ways such comparisons might be conceptualized. Validation could measure the discrepancy between the shapes of the uncertain numbers representing prediction and data, or it could characterize the differences between realizations drawn from the respective uncertain numbers. When both prediction and data are represented with probability distributions, comparing shapes would seem to be the most intuitive choice because it sidesteps the issue of stochastic dependence between the prediction and the data values which would accompany a comparison between realizations. However, when prediction and observation are represented as intervals, comparing their shapes seems overly strict as a measure for validation. Intuition demands that the measure of mismatch between two intervals be zero whenever the intervals overlap at all. Thus, overlapping intervals are in perfect agreement even though they may have very different shapes. The unification between these two concepts relies on ...
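The interval intuition above can be made concrete with a small sketch: the mismatch between two intervals is zero whenever they overlap at all, and otherwise the size of the gap between them. The function name and form are my own, not taken from the paper.

```python
def interval_mismatch(a, b):
    """Mismatch between intervals a = (lo, hi) and b = (lo, hi):
    zero if they overlap at all, otherwise the gap between them."""
    a_lo, a_hi = a
    b_lo, b_hi = b
    if a_lo <= b_hi and b_lo <= a_hi:      # any overlap: perfect agreement
        return 0.0
    return max(a_lo - b_hi, b_lo - a_hi)   # gap between the nearer endpoints
```

For example, (1, 3) and (2, 5) overlap and score 0.0 despite different widths, while (1, 2) and (4, 6) score 2, the distance between their nearer endpoints.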
DESIGN OF AND COMPARISON WITH VERIFICATION AND VALIDATION BENCHMARKS
Abstract

Cited by 1 (0 self)
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence application areas, such as nuclear reactor safety, underground storage of nuclear waste, and safety of nuclear weapons. Although the terminology is not uniform across engineering disciplines, code verification deals with the assessment of the reliability of the software coding, and solution verification deals with the numerical accuracy of the solution to a computational model. Validation addresses the physics-modeling accuracy of a computational simulation by comparing with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. Some fields, such as nuclear reactor safety, place little emphasis on code verification benchmarks and great emphasis on validation benchmarks that are closely related to actual reactors operating near safety-critical conditions. This paper proposes recommendations for the optimum design and use of code verification benchmarks based on classical analytical solutions, manufactured solutions, and highly accurate numerical solutions. It is believed that these benchmarks will prove useful both to in-house developed codes and to commercially licensed codes. In addition, this paper proposes recommendations for the design and use of validation benchmarks, with emphasis on careful design of building-block experiments, estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation.
It is argued that the predictive capability of a computational model is built both on the measurement of achievement in V&V and on how closely related the V&V benchmarks are to the actual application of interest, e.g., the magnitude of extrapolation beyond a ...
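A common exercise with code-verification benchmarks of the kind recommended here is checking the observed order of accuracy against an exact or manufactured solution. The sketch below does this for an illustrative 1-D Poisson problem of my own choosing (not from the paper): u'' = f on (0,1) with f manufactured so that u(x) = sin(pi x) is exact, discretized with second-order central differences.

```python
import math

def solve_poisson(n):
    """Solve u'' = f on (0,1), u(0) = u(1) = 0, with n interior points.
    f is manufactured so the exact solution is u(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    # -h^2 * f_i with f(x) = -pi^2 sin(pi x)
    rhs = [math.pi ** 2 * math.sin(math.pi * (i + 1) * h) * h * h
           for i in range(n)]
    # Thomas algorithm for the tridiagonal system (-1, 2, -1) u = rhs
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = -1.0 / 2.0
    dp[0] = rhs[0] / 2.0
    for i in range(1, n):
        m = 2.0 + cp[i - 1]
        cp[i] = -1.0 / m
        dp[i] = (rhs[i] + dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return h, u

def max_error(n):
    h, u = solve_poisson(n)
    return h, max(abs(u[i] - math.sin(math.pi * (i + 1) * h)) for i in range(n))

h1, e1 = max_error(16)
h2, e2 = max_error(32)
# Observed order of accuracy; ~2 for this second-order scheme.
order = math.log(e1 / e2) / math.log(h1 / h2)
```

Agreement between the observed and formal order of accuracy is the usual pass criterion for such a benchmark; a mismatch signals a coding or discretization error.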
DETC2004-57363 A WEIGHTED THREE-POINT-BASED STRATEGY FOR VARIANCE ESTIMATION
Abstract
In manufacturing processes, it is widely accepted that uncertainty plays an important role and should be taken into account during analysis and design processes. However, quantification of its effects on an end-product is a very challenging task, especially when an expensive computational effort is already needed in deterministic models such as sheet metal forming simulations. In this paper, we focus our work on the variance estimation of the system response. A weighted three-point-based strategy is proposed to efficiently and effectively estimate the variance of the system response. Three first-order derivatives for each variable are used to estimate the nonlinear behavior and variance of the system. The details of the derivation of the approach are presented in the paper. The optimal locations of the three points along each axis in the standard normal space and the weights for input variables following normal distributions are proposed as (−1.8257, 0.0, +1.8257) and (0.075, 0.850, 0.075), respectively. For input variables following uniform distributions U(−1, 1), the optimal locations and weights are proposed as (−0.84517, 0.0, +0.84517) and (0.04667, 0.90666, 0.04667), respectively. The proposed approach is applicable to nonlinear and multivariable systems, as well as to problems having no explicit function, such as design simulations based on finite element methods. The significant accuracy improvement over the traditional first-order approximation is demonstrated with a number of test problems. The proposed method requires significantly less computational effort compared with Monte Carlo simulations. Discussions and conclusions of this work are given at the end of the paper. Key words: weighted three-point-based strategy; first-order approximation; uncertainty propagation; variance estimation; design under uncertainty.
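As a rough illustration of how a weighted three-point scheme of this kind can work, the sketch below combines central-difference first derivatives at the three stated locations with the stated weights to estimate the response variance for standard-normal inputs. The combination formula is my own reading of the abstract, not the paper's derivation.

```python
# Locations and weights for standard-normal inputs, as stated in the abstract.
POINTS = (-1.8257, 0.0, +1.8257)
WEIGHTS = (0.075, 0.850, 0.075)

def three_point_variance(f, n_vars, h=1e-5):
    """Estimate Var[f(X)] for X ~ N(0, I): for each input axis, take
    first-order derivatives at the three points (other inputs at their
    means) and sum the weighted squared derivatives."""
    var = 0.0
    base = [0.0] * n_vars
    for j in range(n_vars):
        for x, w in zip(POINTS, WEIGHTS):
            up = list(base); up[j] = x + h
            dn = list(base); dn[j] = x - h
            d = (f(up) - f(dn)) / (2 * h)   # central-difference df/dx_j at x
            var += w * d * d                # sigma_j = 1 in standard space
    return var
```

For a linear response the three derivatives coincide and the weights (which sum to 1) reproduce the usual first-order result, while a nonlinear response produces derivatives that vary across the three points, which is how the scheme picks up behavior the single-point first-order approximation misses.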
VALIDATING DESIGNS THROUGH SEQUENTIAL SIMULATION-BASED OPTIMIZATION
Abstract
Computational simulation models support a rapid design process. Given uncertainty in model approximations and operating conditions, designers must have confidence that the designs obtained using simulations will perform as expected. This paper presents a methodology for validating designs as they are generated during a simulation-based optimization process. Current practice focuses on validation of simulation models throughout the entire design space. In contrast, the proposed methodology requires validation only at design points generated during optimization. The goal of such validation is confidence in the resulting design rather than in the underlying simulation model. The proposed methodology is illustrated on a simple cantilever beam design subject to vibration.
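A toy sketch of the workflow described above, with all models, constants, and tolerances invented: each design visited by a simple search is accepted only if the simulation agrees with reference data at that design point, so confidence attaches to the resulting design rather than to the model over the whole design space.

```python
def f_sim(t):
    """Stand-in simulation model: first natural frequency (Hz) of a
    cantilever beam as a function of thickness t (m). Invented."""
    return 100.0 * t

def f_ref(t):
    """Stand-in reference (e.g., experimental) frequency at design t."""
    return 97.0 * t

def validated(t, tol=0.05):
    """Accept the simulation at design point t if its relative error
    against the reference is within tol."""
    return abs(f_sim(t) - f_ref(t)) / f_ref(t) <= tol

# Minimize mass (proportional to t) subject to a frequency constraint,
# validating the model only at the design points actually visited.
best_t = None
for i in range(1, 41):                      # candidate thicknesses 0.005..0.200 m
    t = 0.005 * i
    if f_sim(t) >= 8.0 and validated(t):    # constraint + point validation
        best_t = t                          # thinnest feasible, validated design
        break
```

The key point of the sketch is that `validated` is called only along the optimizer's trajectory, not on a space-filling sample of the design space.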