Results 1 - 9 of 9
Bisimulation for Labelled Markov Processes
Information and Computation, 1997
Cited by 139 (23 self)
In this paper we introduce a new class of labelled transition systems, Labelled Markov Processes, and define bisimulation for them.
Probabilistic Game Semantics
Computer Science Society, 2000
Cited by 31 (1 self)
A category of HO/N-style games and probabilistic strategies is developed, where the possible choices of a strategy are quantified so as to give a measure of the likelihood of seeing a given play. A 2-sided die is shown to be universal in this category, in the sense that any strategy breaks down into a composition between some deterministic strategy and that die. The interpretative power of the category is then demonstrated by delineating a Cartesian closed subcategory which provides a fully abstract model of a probabilistic extension of Idealized Algol.
A probabilistic language based upon sampling functions
In Conference Record of the 32nd Annual ACM Symposium on Principles of Programming Languages, 2005
Cited by 26 (1 self)
As probabilistic computations play an increasing role in solving various problems, researchers have designed probabilistic languages which treat probability distributions as primitive datatypes. Most probabilistic languages, however, focus only on discrete distributions and have limited expressive power. This paper presents a probabilistic language, called λ○, whose expressive power goes beyond discrete distributions. The rich expressiveness of λ○ is due to its use of sampling functions, i.e., mappings from the unit interval (0.0, 1.0] to probability domains, in specifying probability distributions. As such, λ○ enables programmers to formally express and reason about sampling methods developed in simulation theory. The use of λ○ is demonstrated with three applications in robotics: robot localization, people tracking, and robotic mapping. All experiments have been carried out with real robots.
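The sampling-function idea described in this abstract can be sketched in Python (a hypothetical illustration, not the paper's λ○ syntax): a distribution is represented as a function from a uniform draw in (0.0, 1.0] to an outcome, so continuous distributions such as the exponential are expressed as easily as discrete ones via the inverse CDF.

```python
import math
import random

# A "sampling function" represents a distribution as a map from a
# uniform draw in (0.0, 1.0] to an outcome (inverse-CDF style).

def bernoulli(p):
    # Discrete distribution: returns True with probability p.
    return lambda u: u <= p

def exponential(rate):
    # Continuous distribution via the inverse CDF: -ln(u) / rate.
    return lambda u: -math.log(u) / rate

def sample(dist):
    # Drawing from a distribution = applying it to a fresh uniform draw.
    # random.random() yields [0.0, 1.0); 1 - u maps it into (0.0, 1.0].
    return dist(1.0 - random.random())

draws = [sample(exponential(2.0)) for _ in range(100_000)]
print(sum(draws) / len(draws))  # mean should be near 1/rate = 0.5
```

Under this representation, one representation scheme covers discrete, continuous, and even distributions with no closed-form density, since any sampling procedure driven by uniform randomness qualifies.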
A Monadic Probabilistic Language
In Proceedings of the 2003 ACM SIGPLAN International Workshop on Types in Languages Design and Implementation, 2003
Cited by 10 (5 self)
Motivated by many practical applications that have to compute in the presence of uncertainty, we propose a monadic probabilistic language based upon the mathematical notion of sampling function. Our language provides a unified representation scheme for probability distributions, enjoys rich expressiveness, and offers high versatility in encoding probability distributions. We also develop a novel style of operational semantics, called a horizontal operational semantics, under which an evaluation returns not a single outcome but multiple outcomes. We have preliminary evidence that the horizontal operational semantics improves on the ordinary operational semantics with respect to both execution time and accuracy in representing probability distributions.
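The monadic structure this abstract refers to can be sketched in Python (names here are illustrative, not the paper's API): `unit` builds a point-mass distribution, and `bind` sequences a distribution with a function producing a dependent distribution, each stage consuming its own random draws.

```python
import random

# A minimal probability monad over samplers: a distribution is a
# zero-argument sampling thunk; unit/bind give monadic composition.

def unit(x):
    # Point-mass distribution: always returns x.
    return lambda: x

def bind(dist, f):
    # Sample from dist, then from the distribution f produces for
    # that outcome; this sequences dependent random choices.
    return lambda: f(dist())()

def uniform_int(a, b):
    return lambda: random.randint(a, b)

# Roll a die, then flip that many fair coins and count the heads.
heads = bind(uniform_int(1, 6),
             lambda n: lambda: sum(random.random() < 0.5 for _ in range(n)))

draws = [heads() for _ in range(100_000)]
print(sum(draws) / len(draws))  # E[heads] = E[n]/2 = 3.5/2 = 1.75
```

Because `bind` only ever composes samplers, compound distributions like `heads` remain ordinary samplers themselves and can be nested further.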
A probabilistic language based on sampling functions
ACM Transactions on Programming Languages and Systems, 2006
Cited by 9 (0 self)
As probabilistic computations play an increasing role in solving various problems, researchers have designed probabilistic languages which treat probability distributions as primitive datatypes. Most probabilistic languages, however, focus only on discrete distributions and have limited expressive power. This article presents a probabilistic language, called λ○, whose expressive power goes beyond discrete distributions. The rich expressiveness of λ○ is due to its use of sampling functions, that is, mappings from the unit interval (0.0, 1.0] to probability domains, in specifying probability distributions. As such, λ○ enables programmers to formally express and reason about sampling methods developed in simulation theory. The use of λ○ is demonstrated with three applications in robotics: robot localization, people tracking, and robotic mapping. All experiments have been carried out with real robots.
Approximating Continuous Markov Processes
2000
Cited by 9 (3 self)
Markov processes with continuous state spaces arise in the analysis of stochastic physical systems or stochastic hybrid systems. The standard logical and algorithmic tools for reasoning about discrete (finite-state) systems are, of course, inadequate for reasoning about such systems. In this work we develop three related ideas for making such reasoning principles applicable to continuous systems:
- We show how to approximate continuous systems by a countable family of finite-state probabilistic systems, and show that we can reconstruct the full system from these finite approximants;
- we define a metric between processes and show that the approximants converge in this metric to the full process;
- we show that reasoning about properties definable in a rich logic can be carried out in terms of the approximants.
The systems that we consider are Markov processes where the state space is continuous but the time steps are discrete. We allow such processes to interact with the environment by syn...
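A crude version of the finite-approximation idea can be sketched in Python (a toy Monte Carlo discretization, not the paper's construction): partition the continuous state space into bins and estimate, by simulating the kernel from each bin's midpoint, a finite transition matrix between bins.

```python
import random

def kernel(x):
    # A continuous-state Markov kernel on [0, 1]: a small Gaussian
    # step, clamped so the process stays inside the state space.
    return min(1.0, max(0.0, x + random.gauss(0.0, 0.1)))

def approximate(n_bins, samples_per_bin=10_000):
    # Finite-state approximant: from each bin's midpoint, estimate
    # the probability of landing in every bin by simulation.
    width = 1.0 / n_bins
    matrix = []
    for i in range(n_bins):
        mid = (i + 0.5) * width
        counts = [0] * n_bins
        for _ in range(samples_per_bin):
            j = min(int(kernel(mid) / width), n_bins - 1)
            counts[j] += 1
        matrix.append([c / samples_per_bin for c in counts])
    return matrix

P = approximate(10)
print([round(p, 2) for p in P[5]])  # mass concentrated around bin 5
```

Refining the partition (larger `n_bins`) yields the countable family of finite approximants; the paper's contribution is showing in what metric, and for which logical properties, such approximants faithfully stand in for the full process.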
Measure Transformer Semantics for Bayesian Machine Learning
"... Abstract. The Bayesian approach to machine learning amounts to inferring posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expres ..."
Abstract

Cited by 2 (2 self)
The Bayesian approach to machine learning amounts to inferring posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions and observing variables. We define combinators for measure transformers, based on theorems in measure theory, and use these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid measures, and, in particular, for observations of zero-probability events. We compile our core language to a small imperative language that has a straightforward semantics via factor graphs, data structures that enable many efficient inference algorithms. We use an existing inference engine for efficient approximate inference of posterior marginal distributions, treating thousands of observations per second for large instances of realistic models.
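One standard way to give operational meaning to sample/observe primitives like those in this abstract is likelihood weighting, a common inference baseline (this is an illustrative sketch, not the paper's measure-transformer semantics): each `observe` multiplies a running weight by the density of the observed value, and posterior expectations are weighted averages.

```python
import math
import random

def normal_pdf(x, mu, sigma):
    # Density of N(mu, sigma^2) at x.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def run(observation, n=100_000):
    # Likelihood weighting: sample the prior, then weight each sample
    # by the density of the observed value under the noise model.
    total_w = total_wx = 0.0
    for _ in range(n):
        x = random.gauss(0.0, 1.0)           # sample: x ~ N(0, 1)
        w = normal_pdf(observation, x, 1.0)  # observe: y = x + N(0, 1)
        total_w += w
        total_wx += w * x
    return total_wx / total_w                # weighted posterior mean

# Conjugacy gives the exact posterior N(y/2, 1/2), so the mean is y/2.
print(run(2.0))  # should be near 1.0
```

Note that weighting by a density is exactly what makes observations of zero-probability events (here, an exact real-valued reading) meaningful, which is the case the measure-transformer semantics is designed to handle rigorously.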
A Programming Language for Probabilistic Computation
2005
Cited by 1 (0 self)
As probabilistic computations play an increasing role in solving various problems, researchers have designed probabilistic languages to facilitate their modeling. Most of the existing probabilistic languages, however, focus only on discrete distributions, and there has been little effort to develop probabilistic languages whose expressive power is beyond discrete distributions. This dissertation presents a probabilistic language, called PTP (ProbabilisTic Programming), which supports all kinds of probability distributions.
Bayesian Machine Learning
2011
The Bayesian approach to machine learning amounts to inferring posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions and observing variables. We define combinators for measure transformers, based on theorems in measure theory, and use these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid measures, and, in particular, for observations of zero-probability events. We compile our core language to a small imperative language that, in addition to the measure transformer semantics, also has a straightforward semantics via factor graphs, data structures that enable many efficient inference algorithms. We use an existing inference engine for efficient approximate inference of posterior marginal distributions, treating thousands of observations per second for large instances of realistic models.