Results 1–10 of 25
Learning Stochastic Logic Programs
, 2000
Abstract

Cited by 1181 (79 self)
Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform probability labels on each definition and near-maximal Bayes posterior probability and then 2) alters the probability labels to further increase the posterior probability. Stage 1) is implemented within CProgol4.5, which differs from previous versions of Progol by allowing user-defined evaluation functions written in Prolog. It is shown that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples. Search pruning with the Bayesian evaluation function is carried out in the same way as in previous versions of CProgol. The system is demonstrated with worked examples involving the learning of probability distributions over sequences as well as the learning of simple forms of uncertain knowledge.
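The labelled clauses p:C described above can be illustrated with a small sampler. This is a minimal sketch, not Progol's actual syntax: the predicate name `s`, the clause labels, and the head/tail alphabet are all made up for illustration. The probability labels on the clauses for one predicate sum to 1 and define a distribution over derivations, here over binary sequences, as in a stochastic grammar.

```python
import random

# Hypothetical SLP over coin-toss sequences, one "clause" per tuple.
# The three labels play the role of p in p:C and sum to 1:
#   0.3 : s([]).           (terminate)
#   0.4 : s([h|T]) :- s(T). (emit h, recurse)
#   0.3 : s([t|T]) :- s(T). (emit t, recurse)
SLP = {
    "s": [
        (0.3, "stop"),
        (0.4, "head"),
        (0.3, "tail"),
    ]
}

def sample(pred="s"):
    """Draw one sequence by repeatedly choosing a clause with probability p."""
    seq = []
    while True:
        r, acc = random.random(), 0.0
        for p, action in SLP[pred]:
            acc += p
            if r < acc:
                break
        if action == "stop":
            return seq
        seq.append("h" if action == "head" else "t")

print(sample())
```

Repeated calls draw independent sequences; short sequences are more probable because each recursion pays the 0.7 continuation probability again.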
Recursively Enumerable Reals and Chaitin Ω Numbers
Abstract

Cited by 38 (3 self)
A real is called recursively enumerable if it is the limit of a recursive, increasing, converging sequence of rationals. Following Solovay [23] and Chaitin [10] we say that an r.e. real α dominates an r.e. real β if from a good approximation of α from below one can compute a good approximation of β from below. We shall study this relation and characterize it in terms of relations between r.e. sets. Solovay's Ω-like numbers [23] are the maximal r.e. real numbers with respect to this order. They are random r.e. real numbers. The halting probability of a universal self-delimiting Turing machine (Chaitin's Ω number, [9]) is also a random r.e. real. Solovay showed that any Chaitin Ω number is Ω-like. In this paper we show that the converse implication is true as well: any Ω-like real in the unit interval is the halting probability of a universal self-delimiting Turing machine.
Relations between varieties of Kolmogorov complexity
 Mathematical Systems Theory
, 1996
Abstract

Cited by 13 (2 self)
Abstract. There are several sorts of Kolmogorov complexity, better to say several Kolmogorov complexities: decision complexity, simple complexity, prefix complexity, monotonic complexity, a priori complexity. The last three can and the first two cannot be used for defining randomness of an infinite binary sequence. All those five versions of Kolmogorov complexity were considered, from a unified point of view, in a paper by the first author which appeared in Watanabe’s book [23]. Upper and lower bounds for those complexities and also for their differences were announced in that paper without proofs. (Some of those bounds are mentioned in Section 4.4.5 of [16].) The purpose of this paper (which can be read independently of [23]) is to give proofs for the bounds from [23]. The terminology used in this paper is somewhat nonstandard: we call “Kolmogorov entropy” what is usually called “Kolmogorov complexity.” This is a Moscow tradition suggested by Kolmogorov himself. By this tradition the term “complexity” relates to any mode of description and “entropy” is the complexity related to an optimal mode (i.e., to a mode that, roughly speaking, gives the shortest descriptions).
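The notion of a "mode of description" in the abstract above has a simple computable counterpart: although Kolmogorov complexity (the paper's "entropy") is uncomputable, any lossless compressor yields a computable upper bound up to an additive constant, since the decompressor is one fixed mode of description. A minimal sketch, assuming zlib as the stand-in mode:

```python
import os
import zlib

def complexity_upper_bound(s: bytes) -> int:
    """Length of a zlib description of s: an upper bound on its
    Kolmogorov complexity up to an additive constant."""
    return len(zlib.compress(s, 9))

regular = b"ab" * 500          # highly regular: admits a short description
random_ish = os.urandom(1000)  # incompressible with high probability

print(complexity_upper_bound(regular))     # far below 1000
print(complexity_upper_bound(random_ish))  # near (or slightly above) 1000
```

The gap between the two bounds mirrors the gap between ordered and random strings; no computable procedure can give a matching lower bound in general.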
From Heisenberg to Gödel via Chaitin
, 2008
Abstract

Cited by 11 (9 self)
In 1927 Heisenberg discovered that the “more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa”. Four years later Gödel showed that a finitely specified, consistent formal system which is large enough to include arithmetic is incomplete. As both results express some kind of impossibility it is natural to ask whether there is any relation between them, and, indeed, this question has been repeatedly asked for a long time. The main interest seems to have been in possible implications of incompleteness to physics. In this note we will take interest in the converse implication and will offer a positive answer to the question: Does uncertainty imply incompleteness? We will show that algorithmic randomness is equivalent to a “formal uncertainty principle” which implies Chaitin’s information-theoretic incompleteness. We also show that the derived uncertainty relation, for many computers, is physical. This fact supports the conjecture that uncertainty implies randomness not only in mathematics, but also in physics.
Global and local complexity in weakly chaotic dynamical systems, ArXiv math.DS/0210378
Abstract

Cited by 7 (2 self)
In a topological dynamical system the complexity of an orbit is a measure of the amount of information (algorithmic information content) that is necessary to describe the orbit. This indicator is invariant up to topological conjugation. We consider this indicator of local complexity of the dynamics and provide different examples of its behavior, showing how it can be useful to characterize various kinds of weakly chaotic dynamics. We also provide criteria to find systems with nontrivial orbit complexity (systems where the description of the whole orbit requires an infinite amount of information). We also consider a global indicator of the complexity of the system. This global indicator generalizes the topological entropy, taking into account systems where the number of essentially different orbits increases less than exponentially. Then we prove that if the system is constructive (roughly speaking: if the map can be defined up to any given accuracy using a
Compression and diffusion: a joint approach to detect complexity
 Chaos, Solitons & Fractals
, 2003
Abstract

Cited by 7 (3 self)
The adoption of the Kolmogorov-Sinai (KS) entropy is becoming a popular research tool among physicists, especially when applied to a dynamical system fitting the conditions of validity of the Pesin theorem. The study of time series that are a manifestation of system dynamics whose rules are either unknown or too complex for a mathematical treatment is still a challenge, since the KS entropy is not computable, in general, in that case. Here we present a plan of action based on the joint action of two procedures, both related to the KS entropy, but compatible with computer implementation through fast and efficient programs. The former procedure, called Compression Algorithm Sensitive To Regularity (CASToRe), establishes the amount of order by the numerical evaluation of algorithmic compressibility. The latter, called Complex Analysis of Sequences via Scaling AND Randomness Assessment (CASSANDRA), establishes the complexity degree through the numerical evaluation of the
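The compressibility idea behind CASToRe can be illustrated with a generic compressor. This is a sketch, not the paper's algorithm: zlib stands in for the purpose-built compressor, and the logistic map x → r·x·(1−x), binarised about x = 0.5, stands in for the measured time series. The compression ratio of the symbolic orbit then serves as a crude, computable proxy for its regularity:

```python
import zlib

def symbolic_orbit(r, n=4000, x=0.3):
    """Iterate the logistic map and record '1' when x > 0.5, else '0'."""
    out = bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(ord("1") if x > 0.5 else ord("0"))
    return bytes(out)

def compression_ratio(s):
    return len(zlib.compress(s, 9)) / len(s)

ordered = compression_ratio(symbolic_orbit(3.5))  # periodic regime
chaotic = compression_ratio(symbolic_orbit(4.0))  # fully chaotic regime
print(ordered, chaotic)  # the periodic orbit compresses far better
```

At r = 3.5 the orbit settles onto a periodic cycle and its symbolic sequence compresses to almost nothing; at r = 4.0 the symbolic dynamics is effectively random and the ratio is markedly higher.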
Orbit complexity, initial data sensitivity and weakly chaotic dynamical systems
, 2008
Abstract

Cited by 6 (3 self)
We give a definition of generalized indicators of sensitivity to initial conditions and orbit complexity (a measure of the information that is necessary to describe the orbit of a given point). The well known Ruelle-Pesin and Brin-Katok theorems, combined with Brudno’s theorem, give a relation between initial data sensitivity and orbit complexity that is generalized in the present work. The generalized relation implies that the set of points where the sensitivity to initial conditions is more than exponential in all directions is a 0-dimensional set. The generalized relation is then applied to the study of an important example of weakly chaotic dynamics: the Manneville map.
The Manneville map: topological, metric and algorithmic entropy
, 2001
Abstract

Cited by 6 (5 self)
We study the Manneville map f(x) = x + x^z (mod 1), with z > 1, from a computational point of view, studying the behaviour of the Algorithmic Information Content. In particular, we consider a family of piecewise linear maps that gives examples of algorithmic behaviour ranging from the fully to the mildly chaotic, and show that the Manneville map is a member of this family.
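The intermittent behaviour of the map above is easy to observe numerically. This is a quick sketch, not code from the paper: for z > 1 the fixed point at x = 0 is neutral, so orbits spend long "laminar" stretches near 0 separated by chaotic bursts, which is the mildly chaotic behaviour whose Algorithmic Information Content the paper analyses.

```python
def manneville_orbit(z, x0=0.1, n=100_000):
    """Iterate f(x) = x + x**z (mod 1) and return the orbit."""
    x, orbit = x0, []
    for _ in range(n):
        x = (x + x ** z) % 1.0
        orbit.append(x)
    return orbit

orbit = manneville_orbit(z=2.0)
laminar_fraction = sum(1 for x in orbit if x < 0.1) / len(orbit)
print(laminar_fraction)  # a large share of the time is spent near x = 0
```

Near x = 0 the increment x**z is tiny, so escape from the laminar region takes on the order of 1/x steps, and these episodes dominate the time average as z grows.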
Measure of uncertainty and information
Imprecise Probability Project, 1999 (http://ippserv.rug.ac.be/home/ipp.html)
, 1999
Abstract

Cited by 4 (0 self)
Abstract. This contribution overviews the approaches, results and history of attempts at measuring uncertainty and information in the various theories of imprecise probabilities. The main focus, however, is on the theory of belief functions (or the Dempster-Shafer theory) [62] and possibility theory [7], as most of the development so far has happened there. Due to the limited space I am focusing on the main ideas and point to references for details. There are
P.: Vortex dynamics in evolutive flows: A weakly chaotic phenomenon
 Phys. Rev. E
, 2003