Results 1–6 of 6
Probabilistic Constraint Handling Rules, 2002
Abstract (Cited by 21, 5 self):
Classical Constraint Handling Rules (CHR) provide a powerful tool for specifying and implementing constraint solvers and programs. The rules of CHR rewrite constraints (nondeterministically) into simpler ones until they are solved. In this paper we introduce an extension of Constraint Handling Rules (CHR), namely Probabilistic CHRs (PCHR). These allow the probabilistic “weighting” of rules, specifying the probability of their application. In this way we are able to formalise various randomised algorithms, such as, for example, Simulated Annealing. The implementation is based on source-to-source transformation (STS). Using a recently developed prototype for STS for CHR, we could implement probabilistic CHR concisely, with a few lines of code, in less than one hour.
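The weighted rule choice described in this abstract can be sketched as a tiny interpreter step: among the rules applicable to the current constraint store, one is selected with probability proportional to its weight. The rule representation and the coin example below are invented for illustration; this is not the actual STS-based CHR implementation.

```python
import random

def pchr_step(store, rules, rng=random):
    """Apply one probabilistically chosen applicable rule to the store."""
    # A rule is (weight, guard, body); only rules whose guard holds compete.
    applicable = [(w, guard, body) for (w, guard, body) in rules if guard(store)]
    if not applicable:
        return store, False  # no rule applies: the store is final
    weights = [w for (w, _, _) in applicable]
    _, _, body = rng.choices(applicable, weights=weights, k=1)[0]
    return body(store), True

# Toy example: a fair coin as two equally weighted rules rewriting "coin".
rules = [
    (0.5, lambda s: "coin" in s, lambda s: (s - {"coin"}) | {"head"}),
    (0.5, lambda s: "coin" in s, lambda s: (s - {"coin"}) | {"tail"}),
]
store, progressed = pchr_step({"coin"}, rules)
```

Iterating `pchr_step` until no rule applies gives the probabilistic rewriting-to-normal-form behaviour the abstract describes.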
Concurrent Constraint Programming: Towards Probabilistic Abstract Interpretation
Proc. of the 23rd International Symposium on Mathematical Foundations of Computer Science, MFCS'98, Lecture Notes in Computer Science, 2000
Abstract (Cited by 20, 8 self):
We present a method for approximating the semantics of probabilistic programs for the purpose of constructing semantics-based analyses of such programs. The method resembles the one based on Galois connections as developed in the Cousot framework for abstract interpretation. The main difference between our approach and the standard theory of abstract interpretation is the choice of linear space structures instead of order-theoretic ones as semantic (concrete and abstract) domains. We show that our method generates "best approximations" according to an appropriate notion of precision defined in terms of a norm. Moreover, if recast in an order-theoretic setting, these approximations are correct in the sense of classical abstract interpretation theory. We use Concurrent ...
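The idea of approximating a probabilistic semantics can be illustrated, under simplifying assumptions, by pushing a distribution over concrete states forward along an abstraction map; in the linear-operator view sketched in the abstract, this corresponds to applying the abstraction matrix to the distribution vector. The states and the parity abstraction below are hypothetical, not taken from the paper.

```python
def abstract_distribution(dist, alpha):
    """Push a concrete distribution forward along abstraction map alpha:
    the probability of an abstract state is the sum of the probabilities
    of the concrete states it covers."""
    abs_dist = {}
    for state, p in dist.items():
        a = alpha(state)
        abs_dist[a] = abs_dist.get(a, 0.0) + p
    return abs_dist

# Concrete states 0..3 with a distribution; abstract each state by parity.
concrete = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}
parity = lambda n: "even" if n % 2 == 0 else "odd"
abstracted = abstract_distribution(concrete, parity)
```

In matrix terms, `alpha` here is a 2×4 zero-one matrix, and the induced abstract distribution is its product with the concrete distribution vector; the norm-based "best approximation" of the paper generalises this to arbitrary linear operators.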
Probabilistic KLAIM
In Proc. of 7th International Conference on Coordination Models and Languages (Coordination 04), volume 2949 of LNCS, 2004
Abstract (Cited by 14, 5 self):
We introduce a probabilistic extension of KLAIM, where the behaviour of networks and individual nodes is determined by a probabilistic scheduler for processes and by probabilistic allocation environments which describe the logical neighbourhood of each node. The resulting language has two variants, which are modelled respectively as discrete- and continuous-time Markov processes. We suggest that Poisson processes are a natural probabilistic model for the coordination of discrete processes communicating asynchronously in continuous time, and we use them to define the operational semantics of the continuous-time variant. This framework allows for the implementation of networks with independent clocks on each site.
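The Poisson-process view of asynchronous coordination mentioned above can be sketched as a race of independent exponential clocks: each site samples a waiting time from its own rate, and the site whose clock fires first performs the next action. Site names and rates below are illustrative assumptions, not taken from the paper.

```python
import random

def next_event(rates, rng=random):
    """Sample an exponential waiting time for every site and return the
    earliest-firing site together with the elapsed time. This realises the
    'independent clocks on each site' behaviour of the continuous-time
    variant: the race of exponentials is again a Poisson process."""
    samples = {site: rng.expovariate(rate) for site, rate in rates.items()}
    site = min(samples, key=samples.get)
    return site, samples[site]

# Two sites with independent clocks; node_b acts on average twice as often.
site, dt = next_event({"node_a": 1.0, "node_b": 2.0})
```

Repeatedly calling `next_event` and advancing a global time by `dt` yields a simple continuous-time simulation of such a network.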
Probabilistic Linda-based Coordination Languages
Abstract (Cited by 2, 0 self):
Coordination languages are intended to simplify the development of complex software systems by separating the coordination aspects of an application from its computation aspects. Coordination refers to the ways the independent active pieces of a program (e.g. a process, a task, a thread, etc.) communicate and synchronise with each other. We review various approaches to introducing probabilistic or stochastic features into coordination languages. The main objective of such a study is to develop a semantic basis for a quantitative analysis of systems of interconnected or interacting components, which allows us to address not only the functional (qualitative) aspects of a system's behaviour but also its non-functional aspects, typically considered in the realm of performance modelling and evaluation.
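One way such probabilistic features enter Linda-style coordination is in the matching primitive: when several tuples in the shared space match a template, one is removed with probability proportional to an attached weight rather than being chosen nondeterministically. The tuple-space representation and weights below are assumptions for illustration, not any paper's actual semantics.

```python
import random

def prob_in(space, template, rng=random):
    """Probabilistic variant of Linda's `in`: remove and return one
    template-matching tuple, chosen with weight-proportional probability.
    Entries in the space are (weight, tuple) pairs."""
    matches = [entry for entry in space if template(entry[1])]
    if not matches:
        return None  # a real implementation would block instead
    entry = rng.choices(matches, weights=[w for (w, _) in matches], k=1)[0]
    space.remove(entry)
    return entry[1]

# A tuple space where the "fast" job is three times as likely to be taken.
space = [(3.0, ("job", "fast")), (1.0, ("job", "slow")), (1.0, ("log", "x"))]
taken = prob_in(space, lambda tup: tup[0] == "job")
```

The non-matching `("log", "x")` tuple is never selected, mirroring ordinary Linda matching; only the choice among matches is quantitative.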
Quantum Constraint Programming
Abstract (Cited by 1, 0 self):
Quantum computers are hypothetical machines which can perform many calculations simultaneously, based on quantum-mechanical principles that allow a single bit to coexist in many states at once. This enormous potential of quantum computing has attracted substantial interest, especially during the last decade, and initiated a whole new field of research. As a contribution to this research we address the problem of the design of high-level languages for programming quantum computers, and the definition of an appropriate formal semantics for such languages. To this purpose we consider the Constraint Programming paradigm and we show how computations in this paradigm can be seen as physical processes obeying the laws of quantum mechanics.

1 Introduction

Quantum Computing is currently one of the hottest topics in computer science, physics and engineering. One of the reasons for such interest is the dramatic miniaturisation in computer technology over the past 40 years, which will inevitably force (if the trend continues) the use of quantum physics to describe the elementary operations of a computer. The by now already flourishing research in quantum computing has arisen in anticipation of this inevitable step. If technological issues started the study of the new computational model, the discovery of its potential has made the interest grow continuously, especially after the presentation of Shor's quantum factorisation algorithm [30] and Grover's search algorithm [14]. These algorithms heavily exploit quantum-mechanical phenomena. Thanks to quantum effects like superposition and entanglement, a function can be evaluated on a quantum machine so that all the outputs are computed in the time taken to evaluate just one output classically. Although a measurement of the final superposed state can yield only one output, it is still possible to obtain certain joint properties of all the outputs. As a consequence of this quantum parallelism, Shor's and Grover's algorithms are faster than any known classical algorithm designed for solving the same problems.
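The quantum parallelism described above can be illustrated with a minimal amplitude-map sketch: the unitary U_f maps |x, y⟩ to |x, y ⊕ f(x)⟩, so a single application evaluates f on every basis input of a superposition at once. This is a toy pure-Python model, not a real quantum simulator, and the function f used below is hypothetical.

```python
def uniform_superposition(n):
    """Equal-amplitude superposition over all n-bit inputs, with a 0 ancilla.
    A state is a map from basis labels (x, y) to complex amplitudes."""
    amp = (1.0 / 2 ** n) ** 0.5
    return {(x, 0): amp for x in range(2 ** n)}

def apply_uf(state, f):
    """U_f |x, y> = |x, y XOR f(x)>, applied linearly to each basis term.
    U_f permutes basis states, so no two terms collide."""
    return {(x, y ^ f(x)): a for (x, y), a in state.items()}

# One application of U_f writes f(x) for all four inputs x into the
# (single) superposed state -- the quantum parallelism described above.
state = apply_uf(uniform_superposition(2), lambda x: x % 2)
```

A measurement would collapse this state to just one `(x, f(x))` pair, which is why extracting joint properties of all outputs (as Shor's and Grover's algorithms do) requires interference, not mere parallel evaluation.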
Abstract:
We address the problem of characterising the security of a program against unauthorised information flows. Classical approaches are based on noninterference models, which depend ultimately on the notion of process equivalence. In these models confidentiality is an absolute property stating the absence of any illegal information flow. We present a model in which the notion of noninterference is approximated, in the sense that it allows for some exactly quantified leakage of information. This is characterised via a notion of process similarity which replaces the indistinguishability of processes by a quantitative measure of their behavioural difference. Such a quantity is related to the number of statistical tests needed to distinguish two behaviours. We also present two semantics-based analyses of approximate noninterference, and we show that one is a correct abstraction of the other.
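The quantitative reading of noninterference can be illustrated by measuring the distance between the output distributions a program induces for two different secret inputs: zero distance is classical noninterference, and a small positive distance is exactly quantified leakage. Total variation distance is used below as one concrete choice of measure (the paper's own notion of similarity may differ), and the distributions are invented for illustration.

```python
def total_variation(p, q):
    """Half the L1 distance between two output distributions: 0.0 means the
    behaviours are indistinguishable, 1.0 means they never overlap."""
    outputs = set(p) | set(q)
    return 0.5 * sum(abs(p.get(o, 0.0) - q.get(o, 0.0)) for o in outputs)

# Publicly observable output distributions when the secret is 0 vs. 1.
behaviour_secret0 = {"low": 0.9, "high": 0.1}
behaviour_secret1 = {"low": 0.8, "high": 0.2}
leakage = total_variation(behaviour_secret0, behaviour_secret1)
```

A smaller distance means an attacker needs more statistical tests (more observed runs) to tell the two secrets apart, matching the abstract's link between the quantity and the number of tests needed to distinguish two behaviours.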