Results 11–20 of 29
Sequential Decision Making Based on Direct Search
, 2001
Abstract

Cited by 4 (3 self)
Credit Assignment Hierarchical learning of macros and reusable subprograms is of interest but limited. Often there are nonhierarchical (nevertheless exploitable) regularities in solution space. For instance, suppose we can obtain solution B by replacing every action "turn(right)" in solution A by "turn(left)." B will then be regular in the sense that it conveys little additional conditional algorithmic information, given A (Solomonoff, 1964; Kolmogorov, 1965; Chaitin, 1969; Li & Vitanyi, 1993), that is, there is a short algorithm computing B from A. Hence B should not be hard to learn by a smart RL system that already found A. While DPRL cannot exploit such regularities in any obvious manner, DS in general algorithm spaces does not encounter any fundamental problems in this context. For instance, all that is necessary to find B may be a modification of the parameter "right" of a single instruction "turn(right)" in a repetitive loop computing A (Schmidhuber et al., 1997b). 2.5 DS Advant...
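The regularity described above can be made concrete with a minimal sketch (the action names and list encoding are illustrative assumptions, not taken from the paper):

```python
# Solution A, encoded as a hypothetical action sequence for illustration.
solution_a = ["forward", "turn(right)", "forward", "turn(right)", "pickup"]

def derive_b(solution):
    """The short program computing B from A: a single token substitution.
    Its length is constant in len(A), so B adds little conditional
    algorithmic information given A."""
    return ["turn(left)" if a == "turn(right)" else a for a in solution]

solution_b = derive_b(solution_a)
```

Because `derive_b` is a constant-size program, a direct-search method that already represents A as a program only needs to flip one parameter to reach B, which is the advantage the passage attributes to DS over DPRL.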
A Kolmogorov Complexity-based Genetic Programming Tool for String Compression
 in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2000), Darrell Whitley, David Goldberg, Erick Cantu-Paz, Lee Spector, Ian Parmee, and Hans-Georg Beyer, Eds., Las Vegas
, 2000
Abstract

Cited by 3 (0 self)
Following the guidelines set in one of our previous papers, in this paper we address the problem of estimating the Kolmogorov complexity of binary strings by making use of a Genetic Programming approach. This consists in evolving a population of Lisp programs looking for the "optimal" program that generates a given string. By taking into account several target binary strings belonging to different formal languages, we show the effectiveness of our approach in obtaining an approximation from above of the Kolmogorov complexity function. Moreover, the adequate choice of "similar" target strings allows our system to show very interesting computational strategies. Experimental results indicate that our tool achieves promising compression rates for binary strings belonging to formal languages. Furthermore, even for more complicated strings our method can work, provided that some degree of loss is accepted. These results constitute a first step in using Kolmogorov complexit...
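The approximation-from-above idea can be illustrated with a much cruder stand-in for the paper's GP search: any program found that generates the target string gives an upper bound on its Kolmogorov complexity. The toy "repeat a prefix pattern" program space below is an assumption for illustration, not the paper's Lisp setup:

```python
def k_upper_bound(target: str) -> int:
    """Crude upper bound on K(target): the cheapest program found in a toy
    space of 'repeat a prefix pattern' programs.  The literal string itself
    is always an admissible fallback program."""
    best = len(target)  # fallback: print the string verbatim
    for plen in range(1, len(target)):
        pattern = target[:plen]
        reps = -(-len(target) // plen)  # ceiling division
        if (pattern * reps)[: len(target)] == target:
            # cost: the pattern plus a binary repetition counter
            best = min(best, plen + reps.bit_length())
            break  # the shortest generating pattern is found first
    return best

# Highly regular strings get a small bound; irregular ones stay near len().
assert k_upper_bound("01" * 16) < k_upper_bound("01101000100110101101")
```

Like the GP tool, this only ever bounds K from above: a smarter search over a richer program space can lower the bound, never invalidate it.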
Very simple Chaitin machines for concrete AIT
 Fundamenta Informaticae
, 2005
Abstract

Cited by 3 (2 self)
Abstract. In 1975, Chaitin introduced his celebrated Omega number, the halting probability of a universal Chaitin machine, a universal Turing machine with a prefix-free domain. The Omega number’s bits are algorithmically random—there is no reason the bits should be the way they are, if we define “reason” to be a computable explanation smaller than the data itself. Since that time, only two explicit universal Chaitin machines have been proposed, both by Chaitin himself. Concrete algorithmic information theory involves the study of particular universal Turing machines, about which one can state theorems with specific numerical bounds, rather than include terms like O(1). We present several new tiny Chaitin machines (those with a prefix-free domain) suitable for the study of concrete algorithmic information theory. One of the machines, which we call Keraia, is a binary encoding of lambda calculus based on a curried lambda operator. Source code is included in the appendices. We also give an algorithm for restricting the domain of blank-endmarker machines to a prefix-free domain over an alphabet that does not include the endmarker; this allows one to take many universal Turing machines and construct universal Chaitin machines from them.
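Two small checks make the "prefix-free domain" requirement concrete (the codeword sets below are illustrative examples, not the paper's machines):

```python
def is_prefix_free(domain):
    """A Chaitin machine's domain must be prefix-free: no halting program
    is a proper prefix of another halting program."""
    return not any(p != q and q.startswith(p) for p in domain for q in domain)

def kraft_sum(domain):
    """Sum of 2^-|p| over programs p.  Prefix-freeness keeps this <= 1;
    taken over all halting programs of a universal Chaitin machine, this
    sum is exactly the Omega number."""
    return sum(2.0 ** -len(p) for p in domain)

domain = ["00", "01", "10", "110", "111"]  # a complete binary prefix code
assert is_prefix_free(domain)
assert kraft_sum(domain) == 1.0
assert not is_prefix_free(["0", "01"])  # "0" is a prefix of "01"
```

The blank-endmarker construction mentioned in the abstract is exactly a way of forcing a machine's halting programs into a set that passes the first check.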
SUBJECTIVITY IN INDUCTIVE INFERENCE
, 2009
Abstract

Cited by 1 (0 self)
This paper examines circumstances under which subjectivity enhances the effectiveness of inductive reasoning. We consider a game in which Fate chooses a data generating process and agents are characterized by inference rules that may be purely objective (or data-based) or may incorporate subjective considerations. The basic intuition is that agents who invoke no subjective considerations are doomed to “overfit” the data and therefore engage in ineffective learning. The analysis places no computational or memory limitations on the agents—the role for subjectivity emerges in the presence of unlimited reasoning powers.
Algorithmic Information Theory and Machine Learning
, 2000
Abstract
In this paper we only consider the context of concept learning: Let X be a set called the instance space. A concept is a subset of X. Usually concepts are identified with their indicating function (by abuse of notation, c(x) = 1 iff x ∈ c). A concept class is a set C ⊆ 2^X ...
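The definitions in the truncated abstract translate directly into code (the instance space and the threshold concept class below are illustrative choices):

```python
# Instance space X, a concept c as a subset of X, and its indicator.
X = set(range(10))
c = {x for x in X if x % 2 == 0}  # the concept "even instances"

def indicator(concept):
    """Identify a concept with its indicating function: c(x) = 1 iff x in c."""
    return lambda x: 1 if x in concept else 0

c_fn = indicator(c)
assert c_fn(4) == 1 and c_fn(5) == 0

# A concept class C is a subset of the power set 2^X; here, threshold concepts.
C = [frozenset(x for x in X if x < t) for t in range(len(X) + 1)]
assert all(concept <= frozenset(X) for concept in C)
```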
Extensions of Linear Independent Component Analysis: Neural and Information-Theoretic Methods
Abstract
Obtaining information from measured data is a general problem which is encountered in numerous applications and fields of science. A goal of many data analysis methods is to transform the observed data into a representation which reveals the information contained in the data. Methods for obtaining such representations include principal component analysis, projection pursuit, cluster analysis, and neural unsupervised learning methods.
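As one concrete instance of the representation idea, here is a minimal PCA sketch (synthetic data; NumPy assumed available — not the thesis's ICA methods themselves):

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                 # 1-D hidden source
mixing = np.array([[2.0, 1.0]])                    # how it is observed
data = latent @ mixing + 0.1 * rng.normal(size=(200, 2))

# PCA: re-express the centered data in the basis of its singular vectors.
centered = data - data.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt.T

# The new representation reveals the structure: variance concentrates in
# the first component, recovering the one-dimensional source.
var = scores.var(axis=0)
assert var[0] > 10 * var[1]
```

ICA goes further than this sketch by seeking statistically independent (not merely uncorrelated) components, which is the extension the abstract is about.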
Recent Progress in the Fields of Universal Learning Algorithms and Optimal Search
Abstract
We briefly review recent results in the field of theoretically optimal algorithms for prediction, search, decision making, and reinforcement learning in environments of a very general type. The results may be relevant not only for computer science but also for physics.
Geometric Complexity and Minimum Description Length Principle
, 1999
Abstract
The question of how one should decide among competing explanations of data is at the heart of the scientific enterprise. Quantitative methods of selecting among models have been advanced over the years without an underlying theoretical framework to guide the enterprise and evaluate new developments. In this paper, we show that differential geometry provides a unified understanding of the model selection problem. Foremost among its contributions is a reconceptualization of the problem as one of counting probability distributions. This reconceptualization naturally leads to development of a "geometric" complexity measure, which turns out to be equal to the Minimum Description Length (MDL) complexity measure Rissanen (1996) recently proposed. We demonstrate an application of the geometric complexity measure to model selection in cognitive psychology, with models of cognitive modeling in three different areas (psychophysics, information integration, categorization).
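The fit-versus-complexity trade-off the abstract refers to can be sketched with a simple two-part MDL score (a generic BIC-style stand-in for illustration, not the geometric complexity measure itself):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x + 0.1 * rng.normal(size=x.size)   # truly linear data

def mdl_score(degree):
    """Two-part description length: a Gaussian data-fit term plus a
    complexity penalty growing with the number of parameters."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    k, n = degree + 1, x.size
    fit = 0.5 * n * np.log(resid @ resid / n)   # code length for the data
    return fit + 0.5 * k * np.log(n)            # code length for the model

# Higher-degree polynomials fit better but describe the data worse overall;
# the linear model minimizes the total description length here.
assert mdl_score(1) < mdl_score(0)
assert mdl_score(1) < mdl_score(5)
```

The geometric measure discussed in the paper refines the penalty term by counting distinguishable probability distributions rather than raw parameters.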
Inferring gene-gene interactions from microarray data series using unbiased pattern detection
 Pages 1–5