Results 11–20 of 31
Formalization of Standard Uniform Random Variable
 Theoretical Computer Science
, 2006
Abstract

Cited by 5 (5 self)
Continuous random variables are widely used to mathematically describe random phenomena in engineering and the physical sciences. In this paper, we present a higher-order-logic formalization of the Standard Uniform random variable. We show the correctness of this specification by proving the corresponding probability distribution properties within the HOL theorem prover, and we summarize the proof steps. This formalized Standard Uniform random variable can be transformed to formalize other continuous random variables, such as Uniform, Exponential, and Normal, by using various non-uniform random number generation techniques. The formalization of these continuous random variables will enable us to perform error-free probabilistic analysis of systems within the framework of a higher-order-logic theorem prover. For illustration purposes, we present the formalization of the Continuous Uniform random variable based on our Standard Uniform random variable and then utilize it to perform a simple probabilistic analysis of round-off error in HOL.
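The transformations the abstract alludes to can be illustrated outside the theorem prover. Below is a minimal Python sketch (my own illustration, not code from the paper; the function names are hypothetical) of the inverse-transform and scale-and-shift techniques that turn a Standard Uniform sample into Exponential and Continuous Uniform samples:

```python
import math
import random

def exponential_from_uniform(rate, u=None):
    # Inverse-transform sampling: if U ~ Uniform(0, 1), then
    # -ln(1 - U) / rate ~ Exponential(rate).
    if u is None:
        u = random.random()
    return -math.log(1.0 - u) / rate

def continuous_uniform_from_uniform(a, b, u=None):
    # Scale and shift: if U ~ Uniform(0, 1), then
    # a + (b - a) * U ~ Uniform(a, b).
    if u is None:
        u = random.random()
    return a + (b - a) * u
```

Passing an explicit `u` makes both maps deterministic, which mirrors how such transformations are verified against the underlying uniform source rather than against sampled output.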
Propagation Networks: A Flexible and Expressive Substrate for Computation
, 2009
Abstract

Cited by 4 (1 self)
In this dissertation I propose a shift in the foundations of computation. Modern programming systems are not expressive enough. The traditional image of a single computer that has global effects on a large memory is too restrictive. The propagation paradigm replaces this with computing by networks of local, independent, stateless machines interconnected with stateful storage cells. In so doing, it offers great flexibility and expressive power, and has therefore been much studied, but has not yet been tamed for general-purpose computation. The novel insight that should finally permit computing with general-purpose propagation is that a cell should not be seen as storing a value, but as accumulating information about a value. Various forms of the general idea of propagation have been used with great success for various special purposes; perhaps the most immediate example is constraint propagation in constraint satisfaction systems. This success is evidence both …
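The cell-as-accumulator idea can be made concrete with a small sketch. The Python below is my own illustration, not code from the dissertation: a cell accumulates interval information about a number, and each new piece of partial information can only narrow it, never widen it.

```python
class IntervalCell:
    """A propagator-style cell: rather than storing one value, it
    accumulates partial information as an interval [lo, hi]."""
    def __init__(self):
        self.lo, self.hi = float("-inf"), float("inf")

    def add_content(self, lo, hi):
        # Merge new information by intersecting intervals; the cell
        # becomes monotonically more informative.
        self.lo, self.hi = max(self.lo, lo), min(self.hi, hi)
        if self.lo > self.hi:
            raise ValueError("contradictory information accumulated")
```

In a full propagation network, stateless propagators would watch such cells and call `add_content` on their neighbors whenever their inputs gain information.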
Non-uniform distributions in quantitative information-flow
 In ASIACCS 2011
, 2011
Abstract

Cited by 3 (0 self)
Quantitative information-flow analysis (QIF) determines the amount of information that a program leaks about its secret inputs. For this, QIF requires an assumption about the distribution of the secret inputs. Existing techniques either consider the worst case over a (sub)set of all input distributions and thereby over-approximate the amount of leaked information; or they are tailored to reasoning about uniformly distributed inputs and are hence not directly applicable to non-uniform use cases; or they deal with explicitly represented distributions, for which suitable abstraction techniques are only now emerging. In this paper we propose a novel approach for precise QIF with respect to non-uniform input distributions: we present a reduction technique that transforms the problem of QIF w.r.t. non-uniform distributions into the problem of QIF for the uniform case. This reduction enables us to directly apply existing techniques for uniform QIF to the non-uniform case. We furthermore show that quantitative information flow is robust with respect to variations of the input distribution. This result allows us to perform QIF based on approximate input distributions, which can significantly simplify the analysis. Finally, we perform a case study in which we illustrate our techniques by using them to analyze an integrity check on non-uniformly distributed PINs, as they are used in banking.
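As a concrete illustration of the quantity a QIF analysis computes (my own sketch of standard Shannon-entropy leakage, not the paper's reduction technique), the Python below measures how much a deterministic program leaks about a secret drawn from an arbitrary, possibly non-uniform, distribution:

```python
import math
from collections import defaultdict

def shannon_leakage(prior, program):
    """Leakage = H(secret) - H(secret | observation) for a
    deterministic program mapping secrets to observable outputs."""
    def H(dist):
        return -sum(p * math.log2(p) for p in dist.values() if p > 0)
    # Group the prior mass by the observable output it produces.
    posterior = defaultdict(dict)
    for secret, p in prior.items():
        posterior[program(secret)][secret] = p
    # Conditional entropy: weight each posterior's entropy by its mass.
    cond = 0.0
    for dist in posterior.values():
        mass = sum(dist.values())
        cond += mass * H({s: p / mass for s, p in dist.items()})
    return H(prior) - cond
```

For example, revealing the parity of a uniformly distributed 2-bit secret (`program = lambda s: s % 2`) leaks exactly 1 bit.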
From Bayesian notation to pure Racket, via measure-theoretic probability in λZFC
 In: Implementation and Application of Functional Languages
, 2010
Abstract

Cited by 1 (1 self)
Bayesian practitioners build models of the world without regard for how difficult it will be to answer questions about them. When answering questions, they put off approximating as long as possible, and usually must write programs to compute converging approximations. Writing the programs is distracting, tedious, and error-prone, and we wish to relieve them of it by providing languages and compilers. Their style constrains our work: the tools we provide cannot approximate early. Our approach to meeting this constraint is to 1) determine their notation's meaning in a suitable theoretical framework; 2) generalize our interpretation in an uncomputable, exact semantics; 3) approximate the exact semantics and prove convergence; and 4) implement the approximating semantics in Racket (formerly PLT Scheme). In this way, we define languages with at least as much exactness as Bayesian practitioners have in mind, and also put off approximating as long as possible. In this paper, we demonstrate the approach using our preliminary work on discrete (countably infinite) Bayesian models.
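The idea of "converging approximations" for a countably infinite discrete model can be sketched simply: enumerate outcomes exactly and truncate only once the remaining probability mass falls below a tolerance. This Python sketch is my own illustration of that idea, not the paper's Racket implementation:

```python
def approx_expectation(pmf, value, tol=1e-9):
    """Approximate E[value(X)] over a countably infinite discrete model
    by enumerating outcomes 0, 1, 2, ... until the un-enumerated
    probability mass is below tol (approximate as late as possible)."""
    total_mass, acc, n = 0.0, 0.0, 0
    while 1.0 - total_mass > tol:
        p = pmf(n)
        total_mass += p
        acc += p * value(n)
        n += 1
    return acc
```

For instance, with the geometric pmf `p(n) = 0.5 ** (n + 1)`, the approximated mean converges to 1 as the tolerance shrinks.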
Modeling Genome Evolution with a DSEL for Probabilistic Programming
 In 8th Int. Symp. on Practical Aspects of Declarative Languages
, 2006
Abstract

Cited by 1 (1 self)
Many scientific applications benefit from simulation. However, programming languages used in simulation, such as C++ or Matlab, approach problems from a deterministic, procedural view, which seems to differ, in general, from many scientists' mental representation. We apply a domain-specific language for probabilistic programming to the biological field of gene modeling, showing how the mental-model gap may be bridged. Our system assisted biologists in developing a model for genome evolution by separating the concerns of model and simulation and providing implicit probabilistic nondeterminism.
Toward Interactive Statistical Modeling
Abstract
When solving machine learning problems, there is currently little automated support for easily experimenting with alternative statistical models or solution strategies. This is because this activity often requires expertise from several different fields (e.g., statistics, optimization, linear algebra), and the level of formalism required for automation is much higher than for a human solving problems on paper. We present a system aimed at addressing these issues, which we achieve by (1) formalizing a type theory for probability and optimization, and (2) providing an interactive rewrite system for applying problem-reformulation theorems. Automating solution strategies this way enables not only manual experimentation but also higher-level, automated activities, such as auto-tuning. Keywords: machine learning, algorithm derivation, interactive modeling, type theory
Stochastic Reasoning in Hybrid Linear Logic
, 2009
Abstract
Ordinary linear implication can represent unconstrained state transition, but stateful systems often operate under temporal and stochastic constraints that impede the use of linear logic as a framework for representing stateful computations. We propose a general modal extension of linear logic where the worlds represent the constraints, and hybrid connectives combine constraint reasoning with ordinary logical reasoning. Among the merits of this logic is a generic focused sequent calculus that can be used to internalize the rules of particular stateful systems; we illustrate this with a simple, adequate encoding of the synchronous stochastic pi-calculus.
Abstract
Abstract
In this paper, we present a simple way to write Z-style specifications of randomized programs. To interpret the written specifications, we propose to use a constructive set theory, called CZ set theory, instead of the classical set theory Z. Since CZ has an interpretation in Martin-Löf's theory of types, it enables us to derive probabilistic programs from correctness proofs of the written specifications. In this work, we give the elementary ideas; the details will be covered in future work.
General Terms
Abstract
There has been great interest in creating probabilistic programming languages to simplify the coding of statistical tasks; however, there still does not exist a formal language that simultaneously provides (1) continuous probability distributions, (2) the ability to naturally express custom probabilistic models, and (3) probability density functions (PDFs). This collection of features is necessary for mechanizing fundamental statistical techniques. We formalize the first probabilistic language that exhibits these features, and it serves as a foundational framework for extending the ideas to more general languages. Particularly novel are our type system for absolutely continuous (AC) distributions (those which permit PDFs) and our PDF calculation procedure, which calculates PDFs for a large class of AC distributions. Our formalization paves the way toward the rigorous encoding of powerful statistical reformulations.
A Model-Learner Pattern for Bayesian Reasoning, by Andrew D. Gordon (Microsoft Research and University of Edinburgh) and Mihhail Aizatulin (Open University)
Abstract
A Bayesian model is based on a pair of probability distributions, known as the prior and sampling distributions. A wide range of fundamental machine learning tasks, including regression, classification, clustering, and many others, can all be seen as Bayesian models. We propose a new probabilistic programming abstraction, a typed Bayesian model, based on a pair of probabilistic expressions for the prior and sampling distributions. A sampler for a model is an algorithm to compute synthetic data from its sampling distribution, while a learner for a model is an algorithm for probabilistic inference on the model. Models, samplers, and learners form a generic programming pattern for model-based inference. They support the uniform expression of common tasks including model testing, and generic compositions such as mixture models, evidence-based model averaging, and mixtures of experts. A formal semantics supports reasoning about model equivalence and implementation correctness. By developing a series of examples and three learner implementations based on exact inference, factor graphs, and Markov chain Monte Carlo, we demonstrate the broad applicability of this new programming pattern.
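A minimal sketch of the model/sampler/learner decomposition, assuming a finite parameter space so the learner can do exact inference. The names and the coin-flip example below are my own illustration, not the paper's typed calculus or its learner implementations:

```python
import math
import random

def make_coin_model():
    # A model pairs a prior over parameters with a sampling distribution
    # over data given a parameter: here, a coin bias and a single flip.
    prior = {0.3: 0.5, 0.7: 0.5}          # two candidate biases, equally likely
    def sample(bias):                      # sampler: synthetic data from the model
        return 1 if random.random() < bias else 0
    def likelihood(bias, flip):
        return bias if flip == 1 else 1.0 - bias
    return prior, sample, likelihood

def learn(prior, likelihood, data):
    # A learner for the model: exact posterior over the finite
    # parameter space by Bayes' rule, normalized to sum to one.
    post = {b: p * math.prod(likelihood(b, d) for d in data)
            for b, p in prior.items()}
    z = sum(post.values())
    return {b: p / z for b, p in post.items()}
```

Testing a model, in this pattern, amounts to drawing data with the sampler under a known parameter and checking that the learner's posterior concentrates on it.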