Results 1-10 of 22
Experimental Uncertainty Estimation and Statistics for Data Having Interval Uncertainty
11733, SAND2007-0939. hal-00839639, version 1, 28 Jun 2013
Abstract

Cited by 39 (20 self)
Sandia is a multiprogram laboratory operated by Sandia Corporation,
Optimal Sensitivity Analysis of Linear Least Squares
, 2003
Abstract

Cited by 9 (1 self)
Results from the many years of work on linear least squares problems are combined with a new approach to perturbation analysis to explain in a definitive way the sensitivity of these problems to perturbation. Simple expressions are found for the asymptotic size of optimal backward errors for least squares problems. It is shown that such formulas can be used to evaluate condition numbers. For full-rank problems, Frobenius-norm condition numbers are determined exactly, and spectral-norm condition numbers are determined within a factor of √2. As a result, the necessary and sufficient criteria for well conditioning are established. A source of ill conditioning is found that helps explain the failure of simple iterative refinement. Some textbook discussions of ill conditioning are found to be fallacious, and some error bounds in the literature are found to unnecessarily overestimate the error. Finally, several open questions are described.
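The sensitivity this abstract describes can be illustrated numerically. The sketch below uses the commonly quoted first-order bound for least squares, kappa_LS ≈ kappa + kappa² · ||r|| / (||A|| ||x||), which may differ in detail from the paper's exact expressions; the matrix and right-hand side are made up for the demonstration.

```python
import numpy as np

def ls_sensitivity(A, b):
    """Solve min ||Ax - b||_2 and report the quantities governing its
    sensitivity: kappa(A), and an effective least-squares condition
    number from the commonly quoted first-order bound
        kappa_LS ~ kappa + kappa^2 * ||r|| / (||A|| * ||x||)."""
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    r = b - A @ x                       # least squares residual
    kappa = np.linalg.cond(A)
    kappa_ls = kappa + kappa**2 * np.linalg.norm(r) / (
        np.linalg.norm(A, 2) * np.linalg.norm(x))
    return x, kappa, kappa_ls

# Nearly collinear columns: kappa is large, and a nonzero residual
# inflates the effective condition number by roughly kappa^2.
A = np.array([[1.0, 1.0], [1.0, 1.0001], [1.0, 1.0002]])
b = np.array([2.0, 2.0001, 2.0005])
x, kappa, kappa_ls = ls_sensitivity(A, b)
print(kappa < kappa_ls)   # → True whenever the residual is nonzero
```

With a nonzero residual the kappa² term dominates, which is one way the gap between textbook bounds and actual sensitivity shows up in practice.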
Fault Tolerant Matrix Operations for Parallel and Distributed Systems
, 1996
Abstract

Cited by 9 (1 self)
With the proliferation of parallel and distributed systems, it is an increasingly important problem to render parallel applications fault-tolerant because such applications are more prone to failures with an increasing number of processors. This dissertation explores fault tolerance in a wide variety of matrix operations for parallel and distributed scientific computing. It proposes a novel computing paradigm to provide fault tolerance for numerical algorithms. This fault-tolerant computing paradigm relies on checkpointing and rollback recovery using processor and memory redundancy. The paradigm is an algorithm-based approach, in which fault tolerance techniques are tailored into each numerical algorithm without redesigning the algorithm and replicating the processes. The paradigm tolerates the changing and failure-prone nature of a computing platform, thereby allowing users to run their parallel codes dynamically and efficiently. This dissertation describes the fault-tolerant implementation ...
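The checkpoint/rollback idea can be sketched generically. The toy loop below is illustrative only (its names and structure are not from the dissertation): it saves state every few steps, and a simulated fault discards in-flight work and resumes from the last checkpoint instead of starting over.

```python
import copy
import random

def iterate_with_rollback(state, steps, ckpt_every, fail_prob, rng):
    """Toy checkpoint/rollback loop: checkpoint the state every
    `ckpt_every` steps; on a simulated fault, restore the last
    checkpoint and redo only the lost steps."""
    checkpoint = copy.deepcopy(state)              # checkpoint at step 0
    done = 0
    while done < steps:
        if rng.random() < fail_prob:               # simulated processor fault
            state = copy.deepcopy(checkpoint)
            done = (done // ckpt_every) * ckpt_every   # last checkpointed step
            continue
        state = [v + 1 for v in state]             # one step of "real" work
        done += 1
        if done % ckpt_every == 0:
            checkpoint = copy.deepcopy(state)
    return state

print(iterate_with_rollback([0, 0], steps=100, ckpt_every=10,
                            fail_prob=0.05, rng=random.Random(1)))
# → [100, 100]: the net result is as if no fault had occurred
```

The checkpoint interval trades checkpoint overhead against the amount of work redone per fault, the same trade-off a real diskless-checkpointing scheme must tune.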
Sensitivity in risk analyses with uncertain numbers
, 2006
Abstract

Cited by 8 (0 self)
Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a “pinching” strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
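A minimal sketch of the “pinching” strategy, using a made-up two-input model in place of the dike assessment and brute-force sampling in place of proper interval or probability-bounds propagation:

```python
def f(a, b):
    """Hypothetical two-input model standing in for the dike assessment."""
    return a * b + a

def interval_image(f, a_lo, a_hi, b_lo, b_hi, n=200):
    """Brute-force the range of f over the input box
    [a_lo, a_hi] x [b_lo, b_hi] on a regular grid."""
    vals = [f(a_lo + (a_hi - a_lo) * i / n, b_lo + (b_hi - b_lo) * j / n)
            for i in range(n + 1) for j in range(n + 1)]
    return min(vals), max(vals)

base_lo, base_hi = interval_image(f, 1.0, 2.0, 3.0, 5.0)

# "Pinch" input a to a point value (its midpoint) and measure how much
# the width of the output range shrinks:
pinch_lo, pinch_hi = interval_image(f, 1.5, 1.5, 3.0, 5.0)
reduction = 1 - (pinch_hi - pinch_lo) / (base_hi - base_lo)
print(reduction)   # → 0.625: pinching a removes 62.5% of the output width
```

Ranking inputs by how much pinching each one shrinks the output range is exactly the kind of sensitivity measure the report describes for uncertain numbers.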
How ordinary elimination became Gaussian elimination
 Historia Math
Abstract

Cited by 8 (1 self)
Newton, in an unauthorized textbook, described a process for solving simultaneous equations that later authors applied specifically to linear equations. This method — that Newton did not want to publish, that Euler did not recommend, that Legendre called “ordinary,” and that Gauss called “common” — is now named after Gauss: “Gaussian” elimination. (One suspects he would not be amused.) Gauss’s name became associated with elimination through the adoption, by professional computers, of a specialized notation that Gauss devised for his own least squares calculations. The notation allowed elimination to be viewed as a sequence of arithmetic operations that were repeatedly optimized for hand computing and eventually were described by matrices.
Performance Evaluation of Checksum-Based ABFT
 In 16th IEEE International Symposium on Defect and Fault Tolerance in VLSI Systems (DFT'01)
, 2001
Abstract

Cited by 7 (1 self)
In Algorithm-based fault tolerance (ABFT), fault tolerance is tailored to the algorithm performed. Most of the previous studies that compared ABFT schemes considered only error detection and correction capabilities. Some previous studies looked at the overhead, but no previous work, as far as we know, compared different recovery schemes for data-processing applications considering throughput as the main metric. In this work, we compare the performance of two recovery schemes, recomputing and ABFT correction, for different error rates. We consider errors that occur during computation as well as those that occur during the error detection, location, and correction processes. A metric for performance evaluation of different design alternatives is defined. Results show that multiple error correction using ABFT has poorer performance than single error correction even at high error rates. We also present, implement, and evaluate early detection in ABFT. In early detection, we try to detect the errors that occur in the checksum calculation before starting the actual computation. Early detection improves throughput in cases of intensive computations and cases of high error rates.
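The checksum encoding underlying such ABFT schemes can be sketched for square matrix multiplication. This is the classic row/column-checksum construction, not necessarily the exact scheme the paper evaluates: encode A with a column-checksum row and B with a row-checksum column, and a single corrupted entry in the product's data part is located by the intersecting row and column mismatches.

```python
import numpy as np

def encode(A, B):
    """Append a column-checksum row to A and a row-checksum column to B,
    so the product of the encoded operands carries both checksums."""
    Ac = np.vstack([A, A.sum(axis=0)])
    Br = np.hstack([B, B.sum(axis=1, keepdims=True)])
    return Ac, Br

def detect_and_correct(Cf):
    """Locate and fix a single corrupted entry in the data part of the
    full-checksum product, using row/column checksum mismatches.
    (Errors in the checksum row/column themselves need extra handling.)"""
    n = Cf.shape[0] - 1
    row_err = Cf[:n, :n].sum(axis=1) - Cf[:n, n]   # mismatch per data row
    col_err = Cf[:n, :n].sum(axis=0) - Cf[n, :n]   # mismatch per data column
    i = int(np.argmax(np.abs(row_err)))
    j = int(np.argmax(np.abs(col_err)))
    if abs(row_err[i]) > 1e-8:                     # an error was detected
        Cf[i, j] -= row_err[i]                     # subtract it out
    return Cf[:n, :n]

A = np.arange(9.0).reshape(3, 3)
B = np.eye(3)
Ac, Br = encode(A, B)
Cf = Ac @ Br               # 4x4 full-checksum product
Cf[1, 2] += 5.0            # inject a single fault into the data part
C = detect_and_correct(Cf)
print(np.allclose(C, A @ B))   # → True
```

ABFT correction here costs one checksum recomputation and one subtraction, which is why the paper can meaningfully compare it against full recomputation at various error rates.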
Interval Computations as an Important Part of Granular Computing: An Introduction
 in Handbook of Granular Computing, Chapter 1
, 2008
Abstract

Cited by 3 (1 self)
This chapter provides a general introduction to interval computations, especially to interval computations as an important part of granular computing.
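A minimal illustration of interval computations, where each value is a granule [lo, hi] guaranteed to contain the imperfectly known quantity. (This toy class ignores the outward rounding of endpoints that a production interval library would perform.)

```python
class Interval:
    """Minimal interval arithmetic: a value is a pair [lo, hi] that is
    guaranteed to contain the true, imperfectly known quantity."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Sum of intervals: add endpoints componentwise.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # Product: the extremes occur at some pair of endpoints.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(1, 2)
y = Interval(-1, 3)
print(x + y)   # → [0, 5]
print(x * y)   # → [-2, 6]
```

The multiplication rule checks all four endpoint products because the sign of either operand can flip which combination is extreme, the basic reason interval arithmetic needs case analysis that point arithmetic does not.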
Algorithm-Based Fault Tolerance: A Performance Perspective Based on Error Rate
Abstract

Cited by 2 (2 self)
In Algorithm-based fault tolerance (ABFT), the fault tolerance scheme is tailored to the algorithm performed. Most of the previous studies that compared various ABFT schemes considered only their error detection and correction capabilities. Some previous studies looked at the overhead in general, but no previous work, as far as we know, compared different ABFT schemes considering performance as the main metric. In this work, we compare the performance of two ABFT error recovery schemes, recomputing vs. correction, for different error rates. We consider errors that happen during computation as well as those that happen during the error detection, location, and correction process. The metrics we use are success ratio and completion time. Results show that multiple error correction using ABFT has worse performance than single error correction. They also show that error rate is an essential factor in making one scheme better than another in terms of performance.
LU Factoring of Non-Invertible Matrices
 ACM COMMUNICATIONS IN COMPUTER ALGEBRA, TBA
Abstract

Cited by 1 (0 self)
The definition of the LU factoring of a matrix usually requires that the matrix be invertible. Current software systems have extended the definition to non-square and rank-deficient matrices, but each has chosen a different extension. Two new extensions, both of which could serve as useful standards, are proposed here: the first combines LU factoring with full-rank factoring, and the second combines full-rank factoring with fraction-free methods. Amongst other applications, the extension to full-rank, fraction-free factoring is the basis for a fraction-free computation of the Moore-Penrose inverse.
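The role of full-rank factoring in computing the Moore-Penrose inverse can be sketched numerically. The factorization below is obtained via the SVD rather than the paper's fraction-free LU route, but the pseudoinverse formula it feeds is the standard one for any full-rank factorization A = F G.

```python
import numpy as np

def full_rank_factors(A, tol=1e-10):
    """Full-rank factorization A = F @ G, with F of full column rank and
    G of full row rank.  Built here from the SVD; the paper's own
    construction is LU-based and fraction-free."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int((s > tol * s[0]).sum())   # numerical rank
    F = U[:, :r] * s[:r]              # m x r, full column rank
    G = Vt[:r, :]                     # r x n, full row rank
    return F, G

def pinv_from_factors(F, G):
    """Moore-Penrose inverse from a full-rank factorization:
       A+ = G^T (G G^T)^{-1} (F^T F)^{-1} F^T."""
    return G.T @ np.linalg.inv(G @ G.T) @ np.linalg.inv(F.T @ F) @ F.T

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # row 2 = 2 * row 1, so A is rank-deficient
              [1.0, 0.0, 1.0]])
F, G = full_rank_factors(A)
Ap = pinv_from_factors(F, G)
print(F.shape, np.allclose(Ap, np.linalg.pinv(A)))   # → (3, 2) True
```

The r-by-r inverses inside the formula are of full-rank matrices even when A itself is singular, which is what makes the full-rank factorization a workable basis for a pseudoinverse computation.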