Software Verification and System Assurance
2009
Cited by 7 (2 self)
Abstract
Littlewood [1] introduced the idea that software may be possibly perfect and that we can contemplate its probability of (im)perfection. We review this idea and show how it provides a bridge between correctness, which is the goal of software verification (and especially formal verification), and the probabilistic properties, such as reliability, that are the targets for system-level assurance. We enumerate the hazards to formal verification, consider how each of these may be countered, and propose relative weightings that an assessor may employ in assigning a probability of perfection.
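As one hypothetical illustration of how per-hazard weightings might be combined, not the paper's actual scheme, a weighted additive combination turns per-hazard probabilities into an overall probability of imperfection; all names and numbers below are invented:

```python
# One hypothetical way relative weightings might enter a simple additive
# combination; this is a heuristic illustration, not the paper's scheme.
# With all weights equal to 1 the sum is just the union bound on the
# probability that at least one hazard leaves the verification imperfect.
hazards = {                       # hypothetical per-hazard probabilities
    "formalisation gap": 0.02,
    "unsound tooling": 0.005,
    "incorrect assumptions": 0.01,
}
weights = {                       # assessor's relative weightings
    "formalisation gap": 1.0,
    "unsound tooling": 0.5,
    "incorrect assumptions": 0.8,
}

p_imperfect = min(1.0, sum(weights[h] * p for h, p in hazards.items()))
p_perfect = 1 - p_imperfect
print(round(p_perfect, 4))
```

Weights below 1 encode the assessor's judgment that a hazard has been largely countered; with unit weights the combination is conservative.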
Bayesian inference, Monte Carlo sampling and operational risk
Journal of Operational Risk
Cited by 5 (3 self)
Abstract
Operational risk is an important quantitative topic as a result of the Basel II regulatory requirements. Operational risk models need to incorporate internal and external loss data observations in combination with expert opinion surveyed from business specialists. Following the Loss Distributional Approach, this article considers three aspects of the Bayesian approach to the modeling of operational risk. Firstly, we provide an overview of the Bayesian approach to operational risk, before expanding on the current literature through consideration of general families of non-conjugate severity distributions, the g-and-h and GB2 distributions. Bayesian model selection is presented as an alternative to popular frequentist tests, such as Kolmogorov–Smirnov or Anderson–Darling. We present a number of examples and develop techniques for parameter estimation for general severity and frequency distribution models from a Bayesian perspective. Finally, we introduce and evaluate recently developed stochastic sampling techniques and highlight their application to operational risk through the models developed.
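The stochastic sampling techniques referred to can be illustrated with a minimal random-walk Metropolis–Hastings sketch; the lognormal severity model, its parameters, and the synthetic losses below are hypothetical stand-ins for the paper's g-and-h and GB2 families:

```python
import math
import random

random.seed(0)

# Synthetic operational losses from a lognormal severity model; in practice
# these would be the bank's internal and external loss observations, and the
# severity family would be g-and-h or GB2 rather than this simple stand-in.
true_mu, sigma = 2.0, 0.5
losses = [random.lognormvariate(true_mu, sigma) for _ in range(200)]

def log_posterior(mu):
    """Log posterior for mu under a vague N(0, 10^2) prior, sigma known."""
    log_prior = -mu ** 2 / (2 * 10.0 ** 2)
    log_lik = sum(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2) for x in losses)
    return log_prior + log_lik

# Random-walk Metropolis-Hastings over the severity parameter mu.
mu, samples = 0.0, []
for _ in range(5000):
    prop = mu + random.gauss(0, 0.1)
    diff = log_posterior(prop) - log_posterior(mu)
    if diff >= 0 or random.random() < math.exp(diff):
        mu = prop
    samples.append(mu)

burned = samples[1000:]
posterior_mean = sum(burned) / len(burned)
print(round(posterior_mean, 2))  # close to the true mu of 2.0
```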
Building knowledge-based systems by credal networks: a tutorial
ADVANCES IN MATHEMATICS RESEARCH, 2010
Cited by 3 (3 self)
Abstract
Knowledge-based systems are computer programs achieving expert-level competence in solving problems for specific task areas. This chapter is a tutorial on the implementation of systems of this kind in the framework of credal networks. Credal networks are a generalization of Bayesian networks in which credal sets, i.e., closed convex sets of probability measures, are used instead of precise probabilities. This allows for a more flexible model of the knowledge, which can represent ambiguity, contrast and contradiction in a natural and realistic way. The discussion guides the reader through the different steps involved in the specification of a system, from the evocation and elicitation of the knowledge to the interaction with the system by adequate inference algorithms. Our approach is characterized by a sharp distinction between the domain knowledge and the process linking this knowledge to the perceived evidence, which we call the observational process. This distinction leads to a very flexible representation of both the domain knowledge and the knowledge about the way the information is collected, together with a technique for aggregating information coming from different sources. The overall procedure is illustrated throughout the chapter by a simple knowledge-based system for the prediction of the result of a football match.
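A minimal sketch of inference with credal sets, assuming a toy two-node diagnosis model with hypothetical numbers: because the posterior here is monotone in each parameter, its lower and upper values are attained at the extreme points of the credal sets, so enumerating vertices suffices.

```python
from itertools import product

# Extreme points of the (hypothetical) credal sets.
priors = [0.02, 0.05]          # P(disease)
sensitivities = [0.90, 0.95]   # P(positive | disease)
specificities = [0.80, 0.90]   # P(negative | no disease)

posteriors = []
for p, sens, spec in product(priors, sensitivities, specificities):
    # Bayes' rule for P(disease | positive test) at this vertex.
    num = sens * p
    den = sens * p + (1 - spec) * (1 - p)
    posteriors.append(num / den)

lower, upper = min(posteriors), max(posteriors)
print(f"P(disease | positive) in [{lower:.3f}, {upper:.3f}]")
```

The width of the resulting interval is how a credal network expresses ambiguity that a precise Bayesian network would have to suppress.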
A Psychological Model for Aggregating Judgments of Magnitude
Cited by 1 (1 self)
Abstract
In this paper, we develop and illustrate a psychologically motivated model for aggregating judgments of magnitude across experts. The model assumes that experts’ judgments are perturbed from the truth by both systematic biases and random error, and it provides aggregated estimates that are implicitly based on the application of nonlinear weights to individual judgments. The model is also easily extended to situations where experts report multiple quantile judgments. We apply the model to expert judgments concerning flange leaks in a chemical plant, illustrating its use and comparing it to baseline measures.
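As a simplified, hypothetical stand-in for the paper's hierarchical model, the sketch below simulates experts whose log judgments carry bias and noise, estimates each expert's bias and variance from calibration items with known answers, then pools debiased log judgments by precision weighting:

```python
import math
import random

random.seed(1)

# Hypothetical setup: an expert's judgment of a magnitude x is modelled as
# log y = log x + bias + noise. The true biases and noise sds below are
# unknown to the analyst and are used only to simulate the experts' answers.
calibration = [5.0, 20.0, 80.0, 150.0, 400.0]   # items with known answers
biases = [0.3, -0.2, 0.05]
sds = [0.2, 0.4, 0.1]

def judge(expert, x):
    """Simulate one expert's judgment of the magnitude x."""
    return math.exp(math.log(x) + biases[expert] + random.gauss(0, sds[expert]))

# Step 1: estimate each expert's bias and error variance from the
# calibration items, for which the analyst knows the truth.
est_bias, est_var = [], []
for e in range(3):
    errors = [math.log(judge(e, x)) - math.log(x) for x in calibration]
    m = sum(errors) / len(errors)
    est_bias.append(m)
    est_var.append(sum((r - m) ** 2 for r in errors) / (len(errors) - 1) + 1e-6)

# Step 2: debias each expert's judgment of the target quantity and pool
# the debiased log judgments by precision (inverse variance) weighting.
target = 120.0
logs = [math.log(judge(e, target)) - est_bias[e] for e in range(3)]
weights = [1 / v for v in est_var]
pooled = math.exp(sum(w * l for w, l in zip(weights, logs)) / sum(weights))
print(round(pooled, 1))  # pooled estimate of the (hidden) target of 120
```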
Reasoning about the Reliability of Diverse Two-Channel Systems in Which One Channel Is “Possibly Perfect”
2009
Cited by 1 (0 self)
Abstract
This report refines and extends an earlier paper by the first author [25]. It considers the problem of reasoning about the reliability of fault-tolerant systems with two “channels” (i.e., components) of which one, A, because it is conventionally engineered and presumed to contain faults, supports only a claim of reliability, while the other, B, by virtue of extreme simplicity and extensive analysis, supports a plausible claim of “perfection.” We begin with the case where either channel can bring the system to a safe state. The reasoning about system probability of failure on demand (pfd) is divided into two steps. The first concerns aleatory uncertainty about (i) whether channel A will fail on a randomly selected demand and (ii) whether channel B is imperfect. It is shown that, conditional upon knowing pA (the probability that A fails on a randomly selected demand) and pB (the probability that channel B is imperfect), a conservative bound on the probability that the system fails on a randomly selected demand is simply pA × pB. That is, there is conditional independence between the events “A fails” and “B is imperfect.” The second …
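The conservative bound stated in the abstract is just a product of the two aleatory quantities; a minimal sketch with purely illustrative numbers:

```python
# Illustrative numbers only, not taken from the report.
p_A = 1e-3   # P(channel A fails on a randomly selected demand)
p_B = 1e-2   # P(channel B is imperfect)

# Conditional on knowing p_A and p_B, the events "A fails" and "B is
# imperfect" are independent, so for a 1-out-of-2 system in which either
# channel can reach a safe state, the system pfd is bounded by the product.
system_pfd_bound = p_A * p_B
print(system_pfd_bound)
```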
Elicitation of Multivariate Prior Distributions: A Nonparametric Bayesian Approach
Cited by 1 (0 self)
Abstract
In the context of Bayesian statistical analysis, elicitation is the process of formulating a prior density f(·) for one or more uncertain quantities to represent a person’s knowledge and beliefs. Several methods of eliciting prior distributions for one unknown parameter have been proposed. However, there are relatively few methods for specifying a multivariate prior distribution, and most are applicable only to specific classes of problems and/or based on restrictive conditions, such as independence of variables. Moreover, many of these procedures require the elicitation of variances and correlations, and sometimes of hyperparameters, which are difficult for experts to specify in practice. Garthwaite, Kadane and O’Hagan (2005) discuss the methods proposed in the literature and the difficulties of eliciting multivariate prior distributions. We describe a flexible method of eliciting multivariate prior distributions that is applicable to a wide class of practical problems. Our approach does not assume a parametric form for the unknown prior density f(·); instead we use nonparametric Bayesian inference, modelling f(·) by a Gaussian process prior distribution. The expert is asked to specify certain summaries of his/her distribution, such as the mean, mode, marginal quantiles and a small number of joint probabilities. The analyst receives that information, treating it as a data set D with which to update his/her prior beliefs and obtain the posterior distribution for f(·). Theoretical properties of joint and marginal priors are derived, and numerical illustrations demonstrating our approach are given.
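A minimal sketch of the nonparametric idea: hypothetical elicited values are treated as near-noiseless observations of f under a GP prior with an RBF covariance, and the posterior mean smooths between them. (The paper elicits richer summaries such as quantiles and joint probabilities; plain density values are a simplification, and the tiny linear solver is included only to keep the sketch self-contained.)

```python
import math

# Hypothetical elicited values of the unknown prior density f at a few points.
xs = [0.0, 1.0, 2.0, 3.0]
fs = [0.05, 0.30, 0.25, 0.05]

def rbf(a, b, scale=1.0, length=1.0):
    """RBF (squared-exponential) covariance between inputs a and b."""
    return scale * math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

noise = 1e-4   # small jitter: the elicited summaries are treated as precise
K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
     for i, a in enumerate(xs)]
alpha = solve(K, fs)

def posterior_mean(x):
    """GP posterior mean k(x, X) K^{-1} f at a new input x."""
    return sum(rbf(x, xi) * a for xi, a in zip(xs, alpha))

print(round(posterior_mean(1.5), 3))  # smooth value near the elicited points
```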
Default Estimation, Correlated Defaults, and Expert Information
2008
Cited by 1 (0 self)
Abstract
Capital allocation decisions are made on the basis of an assessment of creditworthiness. Default is a rare event for most segments of a bank’s portfolio, and the available data can be minimal. Inference about default rates is essential for efficient capital allocation, for risk management and for compliance with the requirements of the Basel II rules on capital standards for banks. Expert information is crucial in inference about defaults. A Bayesian approach is proposed and illustrated using prior distributions assessed from industry experts. A maximum entropy approach is used to represent expert information. The binomial model, most common in applications, is extended to allow correlated defaults yet remain consistent with Basel II. The application shows that probabilistic information can be elicited from experts and that econometric methods can be useful even when data are sparse.
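The paper itself builds the prior by maximum entropy; as a simpler conjugate stand-in, an expert's assessed mean and an assumed effective sample size can define a Beta prior for the binomial default model, which sparse data then update. All numbers below are hypothetical:

```python
# Hypothetical expert opinion and portfolio data throughout.
expert_mean = 0.01    # expert's assessed default probability
effective_n = 50      # weight (pseudo-observations) given to the expert

a0 = expert_mean * effective_n        # Beta(a0, b0) prior
b0 = (1 - expert_mean) * effective_n

defaults, obligors = 1, 400           # sparse data: 1 default among 400
a1, b1 = a0 + defaults, b0 + obligors - defaults

posterior_mean = a1 / (a1 + b1)
print(round(posterior_mean, 5))       # 0.00333
```

Even one observed default moves the estimate well away from the expert's 1%, which is the point of combining opinion with data.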
How much can we learn about missing data?: an exploration of a clinical trial in psychiatry
2008
Cited by 1 (0 self)
Abstract
Summary. When a randomized controlled trial has missing outcome data, any analysis is based on untestable assumptions, e.g. that the data are missing at random, or, less commonly, on other assumptions about the missing data mechanism. Given such assumptions, there is an extensive literature on suitable methods of analysis. However, little is known about what assumptions are appropriate. We use two sources of ancillary data to explore the missing data mechanism in a trial of adherence therapy in patients with schizophrenia: carer-reported (proxy) outcomes and the number of contact attempts. This requires additional assumptions to be made, whose plausibility we discuss. Proxy outcomes are found to be unhelpful in this trial because they are insufficiently associated with patient outcome and because the ancillary assumptions are implausible. The number of attempts required to achieve a follow-up interview is helpful and suggests that these data are unlikely to depart far from being missing at random. We also perform sensitivity analyses to departures from missingness at random, based on the investigators’ prior beliefs elicited at the start of the trial. Wider use of techniques such as these will help to inform the choice of suitable assumptions for the analysis of randomized controlled trials.
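Sensitivity analyses of this kind are often run as delta adjustments; a hypothetical sketch, with invented outcome data:

```python
# Impute each missing outcome as the observed mean shifted by delta, the
# assumed average difference between missing and observed patients; delta = 0
# corresponds to missing at random. All numbers are illustrative.
observed = [4.0, 5.0, 3.5, 6.0, 4.5, 5.5]   # outcomes of followed-up patients
n_missing = 4                                # patients lost to follow-up

obs_mean = sum(observed) / len(observed)
for delta in [-1.0, 0.0, 1.0]:
    imputed = [obs_mean + delta] * n_missing
    overall = (sum(observed) + sum(imputed)) / (len(observed) + n_missing)
    print(f"delta={delta:+.1f}: estimated mean outcome = {overall:.2f}")
```

Reading the estimate across a plausible range of delta, ideally one informed by elicited prior beliefs as in the trial above, shows how far conclusions depend on the missingness assumption.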
An Experimental Procedure for Evaluating User-Centered Methods for Rapid Bayesian Network Construction
Abstract
Bayesian networks (BNs) are excellent tools for reasoning about uncertainty and capturing detailed domain knowledge. However, the complexity of BN structures can pose a challenge to domain experts without a background in artificial intelligence or probability when they construct or analyze BN models. Several canonical models have been developed to reduce the complexity of BN structures, but there is little research on the accessibility and usability of these canonical models, their associated user interfaces, and the contents of the models, including their probabilistic relationships. In this paper, we present an experimental procedure to evaluate our novel Causal Influence Model structure by measuring users’ ability to construct new models from scratch and their ability to comprehend previously constructed models. [Results of our experiment will be presented at the workshop.]
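One widely used canonical model of the kind the abstract refers to is the noisy-OR gate, which needs only one parameter per cause instead of a full conditional table (this is a standard example, not necessarily the paper's Causal Influence Model; the causes and numbers are hypothetical):

```python
# Hypothetical causes and parameters for a diagnosis-style model.
causes = {"overload": 0.8, "bad_config": 0.6, "disk_full": 0.3}
leak = 0.05   # probability the effect occurs with no modelled cause active

def p_effect(active):
    """P(effect | given set of active causes) under the noisy-OR gate."""
    p_none = 1 - leak   # probability the effect is NOT produced
    for c in active:
        p_none *= 1 - causes[c]
    return 1 - p_none

print(round(p_effect({"overload"}), 3))                # 0.81
print(round(p_effect({"overload", "bad_config"}), 3))  # 0.924
```

Three parameters plus a leak replace the 2^3-row conditional table, which is exactly what makes such canonical models attractive for rapid construction by domain experts.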