Results 1–10 of 34
Temporal Concurrent Constraint Programming: Denotation, Logic and Applications
, 2002
Cited by 88 (30 self)
Abstract:
The tcc model is a formalism for reactive concurrent constraint programming. We present a model of temporal concurrent constraint programming which adds to tcc the capability of modeling asynchronous and nondeterministic timed behavior. We call this tcc extension the ntcc calculus. We also give a denotational semantics for the strongest postcondition of ntcc processes and, based on this semantics, we develop a proof system for linear-temporal properties of these processes. The expressiveness of ntcc is illustrated by modeling cells, timed systems such as RCX controllers, multi-agent systems such as the Predator/Prey game, and musical applications such as the generation of rhythm patterns and controlled improvisation.
Using Hybrid Concurrent Constraint Programming to Model Dynamic Biological Systems
 18th International Conference on Logic Programming
, 2002
Cited by 37 (0 self)
Abstract:
Systems biology is a new area in biology that aims at achieving a systems-level understanding of biological systems. While current genome projects provide a huge amount of data on genes or proteins, much research is still necessary to understand how the different parts of a biological system interact in order to perform complex biological functions.
A probabilistic language based upon sampling functions
 In Conference Record of the 32nd Annual ACM Symposium on Principles of Programming Languages
, 2005
Cited by 34 (0 self)
Abstract:
As probabilistic computations play an increasing role in solving various problems, researchers have designed probabilistic languages which treat probability distributions as primitive datatypes. Most probabilistic languages, however, focus only on discrete distributions and have limited expressive power. This paper presents a probabilistic language, called λ○, whose expressive power goes beyond discrete distributions. The rich expressiveness of λ○ is due to its use of sampling functions, i.e., mappings from the unit interval (0.0, 1.0] to probability domains, in specifying probability distributions. As such, λ○ enables programmers to formally express and reason about sampling methods developed in simulation theory. The use of λ○ is demonstrated with three applications in robotics: robot localization, people tracking, and robotic mapping. All experiments have been carried out with real robots.
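The sampling-function idea behind λ○ can be sketched in ordinary Python (a minimal stand-in, not λ○'s actual syntax; `unit`, `bind`, `uniform`, and `bernoulli` are illustrative names, and reusing one generator for both halves of `bind` simplifies λ○'s treatment of the sample stream):

```python
import random

# A sampling function maps a source of randomness to an outcome;
# composing sampling functions gives a small monadic language.

def unit(x):
    # Deterministic distribution: always yields x.
    return lambda rng: x

def bind(d, f):
    # Sequence two sampling functions: sample from d, feed into f.
    return lambda rng: f(d(rng))(rng)

def uniform():
    # The primitive λ○ builds on: uniform on the unit interval (0.0, 1.0].
    return lambda rng: 1.0 - rng.random()

def bernoulli(p):
    # A discrete distribution derived from the continuous primitive.
    return bind(uniform(), lambda u: unit(1 if u <= p else 0))

rng = random.Random(42)
samples = [bernoulli(0.5)(rng) for _ in range(10000)]
print(sum(samples) / len(samples))  # ≈ 0.5
```

Under this encoding, discrete, continuous, and mixed distributions all share one representation, which is what gives the approach its expressive reach.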
A probabilistic language based on sampling functions
 ACM Transactions on Programming Languages and Systems
, 2006
Cited by 16 (0 self)
Abstract:
As probabilistic computations play an increasing role in solving various problems, researchers have designed probabilistic languages which treat probability distributions as primitive datatypes. Most probabilistic languages, however, focus only on discrete distributions and have limited expressive power. This article presents a probabilistic language, called λ○, whose expressive power goes beyond discrete distributions. The rich expressiveness of λ○ is due to its use of sampling functions, that is, mappings from the unit interval (0.0, 1.0] to probability domains, in specifying probability distributions. As such, λ○ enables programmers to formally express and reason about sampling methods developed in simulation theory. The use of λ○ is demonstrated with three applications in robotics: robot localization, people tracking, and robotic mapping. All experiments have been carried out with real robots.
Measure Transformer Semantics for Bayesian Machine Learning
Cited by 15 (3 self)
Abstract:
The Bayesian approach to machine learning amounts to inferring posterior distributions of random variables from a probabilistic model of how the variables are related (that is, a prior distribution) and a set of observations of variables. There is a trend in machine learning towards expressing Bayesian models as probabilistic programs. As a foundation for this kind of programming, we propose a core functional calculus with primitives for sampling prior distributions and observing variables. We define combinators for measure transformers, based on theorems in measure theory, and use these to give a rigorous semantics to our core calculus. The original features of our semantics include its support for discrete, continuous, and hybrid measures, and, in particular, for observations of zero-probability events. We compile our core language to a small imperative language that has a straightforward semantics via factor graphs, data structures that enable many efficient inference algorithms. We use an existing inference engine for efficient approximate inference of posterior marginal distributions, treating thousands of observations per second for large instances of realistic models.
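For the discrete case only, the measure-transformer view can be sketched as follows (a hedged Python illustration; `extend` and `observe` are assumed names, and the paper's semantics additionally covers continuous and hybrid measures and zero-probability observations, which this sketch does not):

```python
from fractions import Fraction

# A finite measure as a dict from outcome to mass. A transformer maps
# measures to measures; conditioning is restriction plus renormalization.

def extend(mu, f):
    # Monadic extension: push each outcome through a probabilistic kernel f.
    out = {}
    for x, m in mu.items():
        for y, my in f(x).items():
            out[y] = out.get(y, Fraction(0)) + m * my
    return out

def observe(mu, pred):
    # Condition on the (positive-probability) event pred.
    restricted = {x: m for x, m in mu.items() if pred(x)}
    total = sum(restricted.values())
    return {x: m / total for x, m in restricted.items()}

coin = {0: Fraction(1, 2), 1: Fraction(1, 2)}
# Toss a second fair coin alongside the first.
two = extend(coin, lambda a: {(a, b): Fraction(1, 2) for b in (0, 1)})
# Observe that at least one toss came up heads.
post = observe(two, lambda ab: ab[0] + ab[1] >= 1)
print(post[(1, 1)])  # 1/3
```

The factor-graph compilation the abstract mentions replaces this naive enumeration with representations that scale to large models.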
A Monadic Probabilistic Language
 In Proceedings of the 2003 ACM SIGPLAN international workshop on Types in languages design and implementation
, 2003
Cited by 11 (6 self)
Abstract:
Motivated by many practical applications that have to compute in the presence of uncertainty, we propose a monadic probabilistic language based upon the mathematical notion of sampling function. Our language provides a unified representation scheme for probability distributions, enjoys rich expressiveness, and offers high versatility in encoding probability distributions. We also develop a novel style of operational semantics called a horizontal operational semantics, under which an evaluation returns not a single outcome but multiple outcomes. We have preliminary evidence that the horizontal operational semantics improves the ordinary operational semantics with respect to both execution time and accuracy in representing probability distributions.
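The "multiple outcomes per evaluation" idea can be sketched as running one term over a vector of independent sample streams (an assumption-laden Python analogy, not the paper's formal semantics):

```python
import random

# Horizontal evaluation: evaluate the same probabilistic term against a
# whole vector of independent random streams at once, so one evaluation
# yields many outcomes. Function names here are illustrative.

def horizontal_eval(term, n, seed=0):
    rngs = [random.Random(seed + i) for i in range(n)]
    return [term(r) for r in rngs]

def die(rng):
    # A six-sided die as a sampling function.
    return rng.randint(1, 6)

outcomes = horizontal_eval(die, 1000)
print(min(outcomes), max(outcomes))  # range of observed outcomes
```

A single horizontal pass amortizes evaluation overhead across all outcomes, which is consistent with the execution-time improvement the abstract reports.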
Approximating Continuous Markov Processes
, 2000
Cited by 9 (3 self)
Abstract:
Markov processes with continuous state spaces arise in the analysis of stochastic physical systems or stochastic hybrid systems. The standard logical and algorithmic tools for reasoning about discrete (finite-state) systems are, of course, inadequate for reasoning about such systems. In this work we develop three related ideas for making such reasoning principles applicable to continuous systems:
- we show how to approximate continuous systems by a countable family of finite-state probabilistic systems, and how to reconstruct the full system from these finite approximants;
- we define a metric between processes and show that the approximants converge in this metric to the full process;
- we show that reasoning about properties definable in a rich logic can be carried out in terms of the approximants.
The systems that we consider are Markov processes where the state space is continuous but the time steps are discrete. We allow such processes to interact with the environment by syn...
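The first idea, discretizing a continuous-state kernel into a finite-state chain, might look like this in Python (the truncated-Gaussian kernel and the equal-width binning scheme are illustrative assumptions, not the paper's construction):

```python
import math

# Discretize a continuous-state transition kernel on [0, 1] into an
# n-state chain: partition the interval into equal bins and integrate
# the transition density numerically over each bin.

def density(x, y, sigma=0.1):
    # Unnormalized Gaussian step from x to y, truncated to [0, 1].
    return math.exp(-((y - x) ** 2) / (2 * sigma ** 2))

def approximate(n, grid=50):
    P = []
    for i in range(n):
        x = (i + 0.5) / n                       # bin representative
        ys = [(j + 0.5) / grid for j in range(grid)]
        w = [density(x, y) for y in ys]
        z = sum(w)                              # normalizing constant
        row = [0.0] * n
        for y, wy in zip(ys, w):
            row[min(int(y * n), n - 1)] += wy / z
        P.append(row)
    return P

P = approximate(4)
print([round(sum(row), 6) for row in P])  # each row sums to 1
```

Refining the partition yields the countable family of finite approximants; the paper's metric makes the sense of convergence precise.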
Formalization of Continuous Probability Distributions
 In Conference on Automated Deduction, volume 4603 of LNAI
, 2007
Cited by 8 (8 self)
Abstract:
In order to overcome the limitations of state-of-the-art simulation-based probabilistic analysis, we propose to perform probabilistic analysis within the environment of a higher-order-logic theorem prover. The foremost requirement for conducting such analysis is the formalization of probability distributions. In this report, we present a methodology for the formalization of continuous probability distributions for which the inverse of the cumulative distribution function can be expressed in a closed mathematical form. Our methodology is primarily based on the formalization of the Standard Uniform random variable, cumulative distribution function properties, and the Inverse Transform method. The report presents all this formalization using the HOL theorem prover. In order to illustrate the practical effectiveness of our methodology, the formalization of a few continuous probability distributions has also been included.
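Outside the theorem prover, the Inverse Transform method the report formalizes reads as follows (a Python sketch for the Exponential distribution, whose CDF has a closed-form inverse; this is a stand-in for intuition, not the HOL development):

```python
import math
import random

# Inverse Transform method: if U is uniform on (0.0, 1.0] and F is a
# CDF with a closed-form inverse, then F^{-1}(U) is distributed with
# CDF F. For Exponential(lam), F(x) = 1 - exp(-lam * x), so
# F^{-1}(u) = -ln(1 - u) / lam; below we sample u in (0, 1] directly.

def exponential(lam, rng):
    u = 1.0 - rng.random()       # sample in (0.0, 1.0], so log is defined
    return -math.log(u) / lam    # apply the inverse CDF

rng = random.Random(1)
xs = [exponential(2.0, rng) for _ in range(100000)]
print(sum(xs) / len(xs))  # mean ≈ 1/lam = 0.5
```

The closed-form-inverse requirement in the abstract is exactly what makes the `-math.log(u) / lam` step expressible; distributions without it need other generation techniques.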
Formalization of Standard Uniform Random Variable
 Theoretical Computer Science
, 2006
Cited by 5 (5 self)
Abstract:
Continuous random variables are widely used to mathematically describe random phenomena in engineering and physical sciences. In this paper, we present a higher-order-logic formalization of the Standard Uniform random variable. We show the correctness of this specification by proving the corresponding probability distribution properties within the HOL theorem prover, and the proof steps have been summarized. This formalized Standard Uniform random variable can be transformed to formalize other continuous random variables, such as Uniform, Exponential, Normal, etc., by using various non-uniform random number generation techniques. The formalization of these continuous random variables will enable us to perform error-free probabilistic analysis of systems within the framework of a higher-order-logic theorem prover. For illustration purposes, we present the formalization of the Continuous Uniform random variable based on our Standard Uniform random variable and then utilize it to perform a simple probabilistic analysis of round-off error in HOL.
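The transformation from the Standard Uniform variable to the Continuous Uniform one is the affine map X = a + (b - a) * U; a Python stand-in (not the HOL script) is:

```python
import random

# Build Continuous Uniform(a, b) from the Standard Uniform variable by
# the affine map X = a + (b - a) * U, mirroring the construction the
# formalization carries out inside HOL.

def std_uniform(rng):
    # Standard Uniform on [0.0, 1.0).
    return rng.random()

def uniform(a, b, rng):
    # Stretch and shift the standard variable onto [a, b).
    return a + (b - a) * std_uniform(rng)

rng = random.Random(7)
xs = [uniform(2.0, 5.0, rng) for _ in range(100000)]
print(min(xs), max(xs))  # bounds approach 2.0 and 5.0
```

Other transformations (e.g., the Inverse Transform method) play the same role for non-uniform targets such as Exponential or Normal.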
Boosting probabilistic choice operators
 In Proceedings of Principles and Practices of Constraint Programming, Springer Verlag, LNCS 4741
, 2007
Cited by 5 (4 self)
Abstract:
Probabilistic Choice Operators (PCOs) are convenient tools to model uncertainty in CP. They are useful to implement randomized algorithms and stochastic processes in the concurrent constraint framework. Their implementation is based on the random selection of a value inside a finite domain according to a given probability distribution. Unfortunately, the probabilistic choice of a PCO is usually delayed until the probability distribution is completely known. This is inefficient and penalizes their broader adoption in real-world applications. In this paper, we associate with a PCO a filtering algorithm that prunes the variation domain of its random variable during constraint propagation. Our algorithm runs in O(n), where n denotes the size of the domain of the probabilistic choice. Experimental results show the practical interest of this approach.
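The single-pass pruning can be sketched as follows (a hypothetical Python data layout; the paper's algorithm runs inside a CP solver's propagation loop, which this sketch does not model):

```python
# Prune a probabilistic choice's domain in one O(n) pass: drop values
# whose probability mass is zero or that other constraints have removed,
# then renormalize the distribution over the surviving domain.

def filter_pco(domain, weights, allowed):
    # domain: candidate values; weights: unnormalized masses;
    # allowed: set of values still consistent with the other constraints.
    kept = [(v, w) for v, w in zip(domain, weights)
            if w > 0.0 and v in allowed]          # single pass: O(n)
    mass = sum(w for _, w in kept)
    return {v: w / mass for v, w in kept}

# Value 3 has zero mass and value 1 was removed by another constraint.
dist = filter_pco([1, 2, 3, 4], [0.1, 0.4, 0.0, 0.5], {2, 3, 4})
print(dist)  # masses of 2 and 4, renormalized to sum to 1
```

Pruning before the full distribution is fixed is what avoids the delay the abstract identifies as the bottleneck.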