Results 1–10 of 11
Measures of agreement between computation and experiment: Validation metrics
, 2006
"... With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, ..."
Abstract

Cited by 12 (2 self)
 Add to MetaCart
With the increasing role of computational modeling in engineering design, performance estimation, and safety assessment, improved methods are needed for comparing computational results and experimental measurements. Traditional methods of graphically comparing computational and experimental results, though valuable, are essentially qualitative. Computable measures are needed that can quantitatively compare computational and experimental results over a range of input, or control, variables to sharpen assessment of computational accuracy. This type of measure has recently been referred to as a validation metric. We discuss various features that we believe should be incorporated in a validation metric, as well as features that we believe should be excluded. We develop a new validation metric that is based on the statistical concept of confidence intervals. Using this fundamental concept, we construct two specific metrics: one that requires interpolation of experimental data and one that requires regression (curve fitting) of experimental data. We apply the metrics to three example problems: thermal decomposition of a polyurethane foam, a turbulent buoyant plume of helium, and compressibility effects on the growth rate of a turbulent free-shear layer. We discuss how the present metrics are easily interpretable for assessing computational model accuracy, as well as the impact of experimental measurement uncertainty on the accuracy assessment.
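To make the flavor of such a metric concrete, here is a minimal Python sketch of a confidence-interval-based comparison at a single control-variable point. It is an illustration in the spirit of the abstract, not the paper's actual metric; the function name `validation_metric` and the caller-supplied Student-t critical value are assumptions made here.

```python
import math
import statistics

def validation_metric(model_value, measurements, t_crit):
    """Relative error of a model prediction against replicate experimental
    measurements, plus a confidence half-width on that error.

    t_crit is the Student-t critical value for len(measurements) - 1
    degrees of freedom at the caller's chosen confidence level.
    """
    y_bar = statistics.mean(measurements)
    s = statistics.stdev(measurements)  # sample standard deviation (n - 1)
    n = len(measurements)
    rel_error = abs(model_value - y_bar) / abs(y_bar)
    # Half-width of the confidence interval on the relative error,
    # driven by the experimental measurement uncertainty.
    half_width = t_crit * s / (math.sqrt(n) * abs(y_bar))
    return rel_error, half_width
```

For example, with five replicate measurements averaging 10.0 and a model prediction of 10.4, the metric reports a 4% model error together with an experimental uncertainty band of comparable size, indicating the disagreement is within the measurement noise.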
Computational Sensor Networks
, 2007
"... We propose Computational Sensor Networks as a methodology to exploit models of physical phenomena in order to better understand the structure of the sensor network. To do so, it is necessary to relate changes in the sensed variables (e.g., temperature) to the aspect of interest in the sensor network ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
We propose Computational Sensor Networks as a methodology to exploit models of physical phenomena in order to better understand the structure of the sensor network. To do so, it is necessary to relate changes in the sensed variables (e.g., temperature) to the aspect of interest in the sensor network (e.g., sensor node position, sensor bias, etc.), and to develop a computational method for its solution. As examples, we describe the use of the heat equation to solve (1) the sensor localization problem, and (2) the sensor bias problem. Simulation and physical experiments are described.
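A toy version of the localization idea: in the steady-state special case, the 1-D heat equation reduces to a linear temperature profile, which can be inverted to recover a sensor's position from its reading. This is only an illustrative sketch of relating a sensed variable to node position (the name `localize_from_temperature` is made up here); the paper addresses the full time-dependent problem.

```python
def localize_from_temperature(reading, t_left, t_right, length=1.0):
    """Estimate a sensor's position on a 1-D rod of the given length from
    its temperature reading, assuming the steady-state conduction profile
    T(x) = t_left + (t_right - t_left) * x / length.
    Solving T(x) = reading for x inverts the physical model."""
    return length * (reading - t_left) / (t_right - t_left)
```

A sensor reading 15 °C on a rod held at 10 °C and 20 °C at its ends is estimated to sit at the midpoint.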
Validation of computational models in biomechanics
, 2009
"... Abstract: The topics of verification and validation have increasingly been discussed in the field of computational biomechanics, and many recent articles have applied these concepts in an attempt to build credibility for models of complex biological systems. Verification and validation are evolving ..."
Abstract

Cited by 3 (0 self)
 Add to MetaCart
The topics of verification and validation have increasingly been discussed in the field of computational biomechanics, and many recent articles have applied these concepts in an attempt to build credibility for models of complex biological systems. Verification and validation are evolving techniques that, if used improperly, can lead to false conclusions about a system under study. In basic science, these erroneous conclusions may lead to failure of a subsequent hypothesis, but they can have more profound effects if the model is designed to predict patient outcomes. While several authors have reviewed verification and validation as they pertain to traditional solid and fluid mechanics, it is the intent of this paper to present them in the context of computational biomechanics. Specifically, the task of model validation will be discussed, with a focus on current techniques. It is hoped that this review will encourage investigators to engage and adopt the verification and validation process in an effort to increase peer acceptance of computational biomechanics models.
Keywords: biomechanics, computation, validation, verification, modelling
CamiTK: A Modular Framework Integrating Visualization, Image Processing and Biomechanical Modeling
"... Abstract In this paper, we present CamiTK, a specific modular framework that helps researchers and clinicians to collaborate in order to prototype Computer Assisted Medical Intervention (CAMI) applications by using the best knowledge and knowhow during all the required steps. CamiTK is an opensour ..."
Abstract

Cited by 3 (2 self)
 Add to MetaCart
In this paper, we present CamiTK, a specific modular framework that helps researchers and clinicians collaborate in order to prototype Computer Assisted Medical Intervention (CAMI) applications using the best knowledge and know-how during all the required steps. CamiTK is an open-source, cross-platform generic tool, written in C++, which can handle medical images, surgical navigation and biomechanical simulations. This paper first gives an overview of the CamiTK core architecture and how it can be extended to fit particular scientific needs. The MML extension is then presented: it is an environment for comparing and evaluating soft-tissue simulation models and algorithms. Specifically designed as a soft-tissue simulation benchmark and a reference database for validation, it can compare models and algorithms built from different modeling techniques or biomechanical software. This article demonstrates the use of CamiTK on a textbook but complete example, where the medical image and MML extensions collaborate in order to process and analyze MR brain images, reconstruct a patient-specific mesh of the brain, and simulate a basic brain-shift with different biomechanical models from ANSYS, SOFA and ArtiSynth.
Validation of imprecise probability models
"... Abstract: Validation is the assessment of the match between a model’s predictions and any empirical observations relevant to those predictions. This comparison is straightforward when the data and predictions are deterministic, but is complicated when either or both are expressed in terms of uncerta ..."
Abstract

Cited by 1 (1 self)
 Add to MetaCart
Validation is the assessment of the match between a model’s predictions and any empirical observations relevant to those predictions. This comparison is straightforward when the data and predictions are deterministic, but is complicated when either or both are expressed in terms of uncertain numbers (i.e., intervals, probability distributions, p-boxes, or more general imprecise probability structures). There are two obvious ways such comparisons might be conceptualized. Validation could measure the discrepancy between the shapes of the uncertain numbers representing prediction and data, or it could characterize the differences between realizations drawn from the respective uncertain numbers. When both prediction and data are represented with probability distributions, comparing shapes would seem to be the most intuitive choice because it sidesteps the issue of stochastic dependence between the prediction and the data values which would accompany a comparison between realizations. However, when prediction and observation are represented as intervals, comparing their shapes seems overly strict as a measure for validation. Intuition demands that the measure of mismatch between two intervals be zero whenever the intervals overlap at all. Thus, overlapping intervals are in perfect agreement even though they may have very different shapes. The unification between these two concepts relies on
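The interval case described in this abstract is easy to state in code. The following sketch (names assumed here, not taken from the paper) returns zero exactly when the two intervals overlap at all, and the width of the separating gap otherwise.

```python
def interval_mismatch(a, b):
    """Mismatch between intervals a = (a_lo, a_hi) and b = (b_lo, b_hi):
    zero whenever they overlap at all, otherwise the width of the gap
    separating them."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    # If the larger of the lower bounds exceeds the smaller of the upper
    # bounds, the intervals are disjoint and that difference is the gap.
    return max(0.0, max(a_lo, b_lo) - min(a_hi, b_hi))
```

Note the measure is symmetric in its arguments and insensitive to interval width once the intervals touch, which is exactly the "overly strict shape comparison" issue the abstract raises.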
DESIGN OF AND COMPARISON WITH VERIFICATION AND VALIDATION BENCHMARKS
"... Verification and validation (V&V) are the primary means to assess accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several highconsequence application areas, such as, nuclear reactor safety, underground s ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Verification and validation (V&V) are the primary means to assess the accuracy and reliability of computational simulations. V&V methods and procedures have fundamentally improved the credibility of simulations in several high-consequence application areas, such as nuclear reactor safety, underground storage of nuclear waste, and safety of nuclear weapons. Although the terminology is not uniform across engineering disciplines, code verification deals with assessing the reliability of the software coding, and solution verification deals with the numerical accuracy of the solution to a computational model. Validation addresses the physics modeling accuracy of a computational simulation by comparison with experimental data. Code verification benchmarks and validation benchmarks have been constructed for a number of years in every field of computational simulation. However, no comprehensive guidelines have been proposed for the construction and use of V&V benchmarks. Some fields, such as nuclear reactor safety, place little emphasis on code verification benchmarks and great emphasis on validation benchmarks that are closely related to actual reactors operating near safety-critical conditions. This paper proposes recommendations for the optimum design and use of code verification benchmarks based on classical analytical solutions, manufactured solutions, and highly accurate numerical solutions. It is believed that these benchmarks will prove useful both to in-house developed codes and to commercially licensed codes. In addition, this paper proposes recommendations for the design and use of validation benchmarks, with emphasis on careful design of building-block experiments, estimation of experimental measurement uncertainty for both inputs and outputs to the code, validation metrics, and the role of model calibration in validation.
It is argued that predictive capability of a computational model is built on both the measurement of achievement in V&V, as well as how closely related are the V&V benchmarks to the actual application of interest, e.g., the magnitude of extrapolation beyond a
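One of the code-verification benchmark types recommended above, the manufactured solution, can be sketched in a few lines: choose an exact solution, derive the source term it implies, and confirm that the discrete solver converges at its formal order. The Poisson problem, solver, and function names below are illustrative choices made here, not taken from the paper.

```python
import math

def solve_poisson_dirichlet(n):
    """Second-order finite-difference solve of -u'' = f on [0, 1] with
    u(0) = u(1) = 0, using the manufactured solution u(x) = sin(pi x),
    whose implied source term is f(x) = pi**2 * sin(pi x)."""
    h = 1.0 / n
    m = n - 1  # number of interior unknowns
    rhs = [h * h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h)
           for i in range(m)]
    # Thomas algorithm for the constant tridiagonal stencil (-1, 2, -1).
    c = [0.0] * m  # modified superdiagonal
    d = [0.0] * m  # modified right-hand side
    c[0] = -0.5
    d[0] = rhs[0] / 2.0
    for i in range(1, m):
        denom = 2.0 + c[i - 1]
        c[i] = -1.0 / denom
        d[i] = (rhs[i] + d[i - 1]) / denom
    u = [0.0] * m
    u[-1] = d[-1]
    for i in range(m - 2, -1, -1):
        u[i] = d[i] - c[i] * u[i + 1]
    return u

def max_error(n):
    """Max-norm discretization error against the manufactured solution."""
    h = 1.0 / n
    u = solve_poisson_dirichlet(n)
    return max(abs(u[i] - math.sin(math.pi * (i + 1) * h))
               for i in range(n - 1))

# Halving h should cut the error by roughly 4 for a second-order scheme,
# so the observed convergence order should be close to 2.
order = math.log(max_error(16) / max_error(32)) / math.log(2.0)
```

An observed order far from 2 would flag a coding or discretization error, which is precisely the role such a benchmark plays in code verification.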
Computer Methods in Biomechanics and Biomedical Engineering
, 2006
The use of kernel densities and confidence intervals to cope with insufficient data in validation experiments
, 2008
Abstract:
H.J. Pradlwarter and G.I. Schuëller. The use of kernel densities and confidence intervals to cope with insufficient data in validation experiments. Computer Methods in Applied Mechanics and Engineering, 197(29–32):2550–2560, 2008.
Computer Models
"... Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract DEAC0494AL85000. Why are we here? • We want to build credibility into computational simulations (Devel ..."
Abstract
 Add to MetaCart
Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000. Why are we here?
• We want to build credibility into computational simulations (Development)
• We want to understand and quantify the credibility of our existing computational simulations (Assessment)
• We want to use our computational simulations to predict phenomena for conditions in which we do not have data (Utilization)
Uncertainty quantification via codimension-one partitioning
, 2010
"... We consider uncertainty quantification in the context of certification, i.e. showing that that the probability of some “failure ” event is acceptably small. In this paper, we derive a new method for rigorous uncertainty quantification and conservative certification by combining McDiarmid’s inequalit ..."
Abstract
 Add to MetaCart
We consider uncertainty quantification in the context of certification, i.e. showing that the probability of some “failure” event is acceptably small. In this paper, we derive a new method for rigorous uncertainty quantification and conservative certification by combining McDiarmid’s inequality with input domain partitioning and a new concentration-of-measure inequality. We show that arbitrarily sharp upper bounds on the probability of failure can be obtained by partitioning the input parameter space appropriately; in contrast, the bound provided by McDiarmid’s inequality alone is usually not sharp. We prove an error estimate for the method (Proposition 3.2); we define a codimension-one recursive partitioning scheme and prove its convergence properties (Theorem 4.1); finally, we apply a new concentration-of-measure inequality to give confidence levels when empirical means are used in place of exact ones (Section 5).
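A minimal sketch of the starting point, McDiarmid's inequality used as a conservative certification bound: if the mean performance clears the failure threshold by a margin M, and varying input i alone can change the performance by at most c_i, then P[failure] ≤ exp(-2 M² / Σ c_i²). The function name and the performance/threshold framing below are assumptions made here, not the paper's notation.

```python
import math

def mcdiarmid_failure_bound(mean_performance, threshold, oscillations):
    """Conservative upper bound on P[performance <= threshold] via
    McDiarmid's inequality.  oscillations[i] is the maximum change in
    the performance function when only input i varies over its range."""
    margin = mean_performance - threshold
    if margin <= 0.0:
        return 1.0  # the inequality gives no information here
    diameter_sq = sum(c * c for c in oscillations)
    return min(1.0, math.exp(-2.0 * margin ** 2 / diameter_sq))
```

This also shows why partitioning helps: restricting to a subdomain shrinks the per-input oscillations, and with a margin of 2, oscillations of [1, 1] give a bound of exp(-4) ≈ 0.018 while [0.5, 0.5] give exp(-16) ≈ 1.1e-7, i.e. the bound sharpens dramatically as the partition refines.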