Results 1–10 of 12
Dynamic Algorithm Portfolios
 ANNALS OF MATHEMATICS AND ARTIFICIAL INTELLIGENCE, 2006
Abstract
Cited by 29 (5 self)
Traditional Meta-Learning requires long training times, and is often focused on optimizing performance quality while neglecting computational complexity. Algorithm Portfolios are more robust, but present similar limitations. We reformulate algorithm selection as a time allocation problem: all candidate algorithms are run in parallel, and their relative priorities are continually updated based on runtime information, with the aim of minimizing the time to reach a desired performance level. Each algorithm's priority is set based on its current time to solution, estimated according to a parametric model that is trained and used while solving a sequence of problems, gradually increasing its impact on the priority assignment. The use of …
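The priority scheme this abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's exact equations: the inverse-time share and the blending weight are assumptions made here for clarity.

```python
def priorities(estimated_times, model_weight):
    """Share machine time among algorithms running in parallel.

    estimated_times: predicted time-to-solution for each algorithm
    model_weight:    in [0, 1]; how much the trained model influences
                     the allocation (grows as more problems are solved)
    """
    n = len(estimated_times)
    # Model-based share: favor algorithms predicted to finish sooner.
    inv = [1.0 / t for t in estimated_times]
    total = sum(inv)
    model_share = [v / total for v in inv]
    # Blend with a uniform share so an untrained model cannot starve anyone.
    return [model_weight * m + (1 - model_weight) / n for m in model_share]

print([round(p, 3) for p in priorities([10.0, 40.0], model_weight=0.5)])
# → [0.65, 0.35]
```

As `model_weight` approaches 1, the allocation is driven entirely by the learned time-to-solution estimates; at 0 it reduces to plain parallel portfolio execution.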
Learning dynamic algorithm portfolios
 ANN MATH ARTIF INTELL (2006) 47:295–328
Abstract
Cited by 23 (1 self)
Algorithm selection can be performed using a model of runtime distributions, learned during a preliminary training phase. There is a tradeoff between the performance of model-based algorithm selection and the cost of learning the model. In this paper, we treat this tradeoff in the context of bandit problems. We propose a fully dynamic and online algorithm selection technique, with no separate training phase: all candidate algorithms are run in parallel, while a model incrementally learns their runtime distributions. A redundant set of time allocators uses the partially trained model to propose machine time shares for the algorithms. A bandit problem solver mixes the model-based shares with a uniform share, gradually increasing the impact of the best time allocators as the model improves. We present experiments with a set of SAT solvers on a mixed SAT-UNSAT benchmark, and with a set of solvers for the Auction Winner Determination problem.
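The bandit component in this abstract — mixing model-based shares with a uniform share and reweighting the best allocators — follows the general shape of an Exp3-style bandit. The sketch below is a generic Exp3 variant under assumptions made here, not the paper's exact formulation:

```python
import math
import random

def exp3_pick(weights, gamma):
    """Exp3-style selection among time allocators: mix the weight-based
    distribution with a uniform one, so every allocator keeps being explored."""
    total = sum(weights)
    k = len(weights)
    probs = [(1 - gamma) * w / total + gamma / k for w in weights]
    r, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i, probs
    return k - 1, probs

def exp3_update(weights, chosen, reward, prob, gamma):
    """Importance-weighted exponential update: allocators whose proposed
    shares led to fast solutions gain weight over time."""
    k = len(weights)
    weights[chosen] *= math.exp(gamma * (reward / prob) / k)
```

Because the mixing probabilities always include the uniform term `gamma / k`, no allocator's share drops to zero, matching the abstract's requirement that the model's influence grows only gradually.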
Machine Learning for Digital Document Processing: From Layout Analysis To Metadata Extraction
Abstract
Cited by 14 (7 self)
Summary. In recent years, the spread of computers and the Internet has made a significant number of documents available in digital format. Collecting them in digital repositories raised problems that go beyond simple acquisition issues, and created the need to organize and classify them in order to improve the effectiveness and efficiency of retrieval. The success of such a process is tightly related to the ability to understand the semantics of the document components and content. Since the obvious solution of manually creating and maintaining an updated index is clearly infeasible, due to the huge amount of data under consideration, there is a strong interest in methods that can automatically acquire such knowledge. This work presents a framework that intensively exploits intelligent techniques to support different tasks of automatic document processing: from acquisition to indexing, from categorization to storing and retrieval. The prototypical version of the system DOMINUS is presented, whose main characteristic is the use of a Machine Learning Server, a suite of different inductive learning methods and systems, among which the most suitable for each specific document processing phase is chosen and applied. The core system is the incremental first-order logic learner INTHELEX. Thanks to incrementality, it can continuously update and refine the learned theories, dynamically extending its knowledge to handle even completely new classes of documents. Since DOMINUS is general and flexible, it can be embedded as a document management engine into many different Digital Library systems. Experiments in a real-world domain scenario, scientific conference management, confirmed the good performance of the proposed prototype.
Adaptive Online Time Allocation to Search Algorithms
 MACHINE LEARNING: ECML 2004. PROCEEDINGS OF THE 15TH EUROPEAN CONFERENCE ON MACHINE LEARNING, 2004
Abstract
Cited by 7 (6 self)
Given a search problem or a sequence of search problems, as well as a set of potentially useful search algorithms, we propose a general framework for online allocation of computation time to search algorithms based on experience with their performance so far. In an example instantiation, we use simple linear extrapolation of performance for allocating time to various simultaneously running genetic algorithms characterized by different parameter values. Despite the large number of searchers tested in parallel, on various tasks this rather general approach compares favorably to a more specialized state-of-the-art heuristic; in one case it is nearly two orders of magnitude faster.
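The "linear extrapolation of performance" instantiation can be sketched as below. The two-point extrapolation and the stalled-searcher handling are simplifying assumptions made here; the framework itself admits any predictor.

```python
def allocate_next_slice(histories, target):
    """Pick which searcher gets the next time slice by linearly
    extrapolating each one's progress toward `target` fitness.

    histories: one list of (time, best_fitness) pairs per searcher,
               each with at least two entries.
    Returns the index of the searcher predicted to reach the target first.
    """
    predicted = []
    for hist in histories:
        (t0, f0), (t1, f1) = hist[-2], hist[-1]
        slope = (f1 - f0) / (t1 - t0)
        if slope <= 0:
            # No recent progress: predicted time to target is infinite.
            predicted.append(float("inf"))
        else:
            predicted.append(t1 + (target - f1) / slope)
    return min(range(len(predicted)), key=predicted.__getitem__)
```

Run repeatedly, this greedy rule keeps shifting time toward whichever parameter setting currently looks fastest, while stalled searchers are effectively suspended until others stall too.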
Algorithmic Probability, Heuristic Programming and AGI
Abstract
Cited by 5 (0 self)
This paper is about Algorithmic Probability (ALP) and Heuristic Programming and how they can be combined to achieve AGI. It is an update of a 2003 report describing a system of this kind (Sol03). We first describe …
Stochastic Grammar Based Incremental Machine Learning Using Scheme
Abstract
Gigamachine is our initial implementation of an Artificial General Intelligence (AGI) system in the O’Caml language, with the goal of building Solomonoff’s “Phase 1 machine” that he proposed as the basis of a quite powerful incremental …
Deep Knowledge: Inductive Programming as an Answer
, 2013
Abstract
Inductive programming has focused on problems where data are not necessarily big, but representation and patterns may be deep (including recursion and complex structures). In this context, we discuss what really makes some problems hard and whether this difficulty is related to what humans consider hard. We highlight the relevance of background knowledge to this difficulty, and how it motivates a preference for inferring small hypotheses that are added incrementally. When dealing with the techniques to acquire, maintain, revise and use this knowledge, we argue that symbolic approaches (featuring powerful construction, abstraction and/or higher-order features) have several advantages over non-symbolic approaches, especially when knowledge becomes complex. Also, inductive programming hypotheses (in contrast to many other machine learning paradigms) are usually related to the solutions that humans would find for the same problem, as the constructs that are given as background knowledge are explicit and shared by users and the inductive programming system. This makes inductive programming a very appropriate paradigm for addressing and better understanding many challenging problems that humans can solve but machines are still struggling with. Important issues for the discussion are the relevance of pattern intelligibility, and the concept of scalability in terms of incrementality, learning to learn, constructive induction, bias, etc.
Raymond J. Solomonoff 1926–2009
, 2010
Abstract
Ray Solomonoff, the first inventor of some of the fundamental ideas of Algorithmic Information Theory, died in December 2009. His original ideas helped start the thriving research areas of algorithmic information theory and algorithmic inductive inference. His scientific legacy is enduring and important. He was also a highly original, colorful personality, warmly remembered by everybody whose life he touched. We outline his contributions, placing them into their historical context and the context of other research in algorithmic information theory.
A Comparison of Three Fitness Prediction Strategies for Interactive Genetic Algorithms
Abstract
The human fatigue problem is one of the most significant problems encountered by interactive genetic algorithms (IGAs). Different strategies have been proposed to address this problem, such as easing evaluation methods, accelerating IGA convergence via speedup algorithms, and fitness prediction. This paper studies the performance of fitness prediction strategies. Three prediction schemes are examined: the neural network (NN), the Bayesian learning algorithm (BLA), and a novel prediction method based on algorithmic probability (ALP). Numerical simulations are performed to compare the performance of these three schemes.
Inductive Inference on Noisy Data by Genetic Programming
Abstract
In this paper a Genetic Programming algorithm based on Solomonoff's probabilistic induction concepts is designed and used to tackle an Inductive Inference task, i.e. symbolic regression. To this aim, the Schwefel function is corrupted with increasing levels of additive noise, and the algorithm is employed to denoise the resulting function and recover the original one. The proposed algorithm is compared against a classical parsimony-based GP. Early results seem to show the superiority of the Solomonoff-based approach.