Results 1–10 of 17
Optimal Ordered Problem Solver, 2002
"... We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, eciently searching not only the ..."
Abstract

Cited by 73 (20 self)
 Add to MetaCart
(Show Context)
We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal non-incremental universal search to build an incremental universal learner that is able to improve itself through experience.
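OOPS builds on the time-allocation idea behind non-incremental universal (Levin-style) search: in successive phases of doubling total budget, each candidate program receives runtime proportional to its prior probability. The sketch below illustrates only that allocation scheme, not OOPS itself; the candidate "programs" and the task are made-up assumptions.

```python
# Toy sketch of the time allocation behind universal (Levin-style) search,
# which OOPS extends incrementally. Not the authors' implementation; the
# candidate "programs" and the task below are illustrative assumptions.

def universal_search(programs, is_solution, max_phase=20):
    """programs: list of (prior_prob, step_function) pairs.
    In phase k the total budget is 2**k steps, shared among programs
    in proportion to their prior probability."""
    for phase in range(max_phase):
        budget = 2 ** phase
        for prob, prog in programs:
            steps = int(budget * prob)
            result = prog(steps)        # run prog for at most `steps` steps
            if result is not None and is_solution(result):
                return result
    return None

# Example: each "program" needs `target` unit steps before it halts.
def make_counter(target):
    def run(steps):
        return target if steps >= target else None
    return run

progs = [(0.5, make_counter(3)), (0.25, make_counter(10)), (0.25, make_counter(6))]
print(universal_search(progs, lambda r: r == 10))  # prints 10
```

Note that the total time spent is dominated by the last phase, which is why the scheme is optimal up to a constant factor depending on the solver's prior probability.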
Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results, Machine Learning, 2003
"... Abstract. We present a metalearning method to support selection of candidate learning algorithms. It uses a kNearest Neighbor algorithm to identify the datasets that are most similar to the one at hand. The distance between datasets is assessed using a relatively small set of data characteristics, ..."
Abstract

Cited by 69 (7 self)
 Add to MetaCart
We present a meta-learning method to support selection of candidate learning algorithms. It uses a k-Nearest Neighbor algorithm to identify the datasets that are most similar to the one at hand. The distance between datasets is assessed using a relatively small set of data characteristics, which was selected to represent properties that affect algorithm performance. The performance of the candidate algorithms on those datasets is used to generate a recommendation to the user in the form of a ranking. The performance is assessed using a multicriteria evaluation measure that takes not only accuracy, but also time into account. As it is not common in Machine Learning to work with rankings, we had to identify and adapt existing statistical techniques to devise an appropriate evaluation methodology. Using that methodology, we show that the meta-learning method presented leads to significantly better rankings than the baseline ranking method. The evaluation methodology is general and can be adapted to other ranking problems. Although here we have concentrated on ranking classification algorithms, the meta-learning framework presented can provide assistance in the selection of combinations of methods or more complex problem solving strategies.
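The k-NN selection step described above can be sketched in a few lines; the meta-features (dataset size, dimensionality, class entropy), the stored performance table, and the choice of k below are illustrative assumptions, not the authors' actual meta-feature set or evaluation measure.

```python
import math

# Minimal sketch of k-NN-based algorithm ranking; all data are made up.
def knn_ranking(query_features, meta_db, k=2):
    """meta_db: list of (features, {algorithm: score}) for past datasets.
    Rank algorithms by their average score on the k nearest datasets."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    nearest = sorted(meta_db, key=lambda e: dist(query_features, e[0]))[:k]
    algos = nearest[0][1].keys()
    avg = {a: sum(e[1][a] for e in nearest) / k for a in algos}
    return sorted(avg, key=avg.get, reverse=True)  # best algorithm first

# Three stored datasets, described by (n_examples, n_features, class_entropy):
meta_db = [
    ((1000, 10, 0.9), {"knn": 0.81, "tree": 0.77, "svm": 0.85}),
    ((500, 50, 0.4), {"knn": 0.70, "tree": 0.88, "svm": 0.79}),
    ((2000, 12, 0.8), {"knn": 0.83, "tree": 0.75, "svm": 0.86}),
]
print(knn_ranking((1100, 11, 0.85), meta_db, k=2))
```

In practice the meta-features would be normalized before computing distances, since raw scales (e.g. number of examples vs. entropy) would otherwise dominate; the sketch omits this.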
Aggregate Nearest Neighbor Queries in Spatial Databases, TODS, 2005
"... Given two spatial datasets P (e.g., facilities) and Q (queries), an aggregate nearest neighbor (ANN) query retrieves the point(s) of P with the smallest aggregate distance(s) to points in Q. Assuming, for example, n users at locations q1,... qn,anANN query outputs the facility p ∈ P that minimizes t ..."
Abstract

Cited by 59 (6 self)
 Add to MetaCart
Given two spatial datasets P (e.g., facilities) and Q (queries), an aggregate nearest neighbor (ANN) query retrieves the point(s) of P with the smallest aggregate distance(s) to points in Q. Assuming, for example, n users at locations q1, ..., qn, an ANN query outputs the facility p ∈ P that minimizes the sum of distances |pqi| for 1 ≤ i ≤ n that the users have to travel in order to meet there. Similarly, another ANN query may report the point p ∈ P that minimizes the maximum distance that any user has to travel, or the minimum distance from some user to his/her closest facility. If Q fits in memory and P is indexed by an R-tree, we develop algorithms for aggregate nearest neighbors that capture several versions of the problem, including weighted queries and incremental reporting of results. Then, we analyze their performance and propose cost models for query optimization. Finally, we extend our techniques for disk-resident queries and approximate ANN retrieval. The efficiency of the algorithms and the accuracy of the cost models are evaluated through extensive experiments with real and synthetic datasets.
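The query semantics (sum vs. max aggregates) can be made concrete with a brute-force baseline; the paper's contribution is R-tree algorithms that avoid this full scan, and the facilities and user locations below are made-up examples.

```python
import math

# Brute-force illustration of sum/max ANN query semantics.
# The paper's R-tree-based algorithms prune most of P instead of scanning it.
def ann_query(P, Q, agg=sum):
    """Return the point p in P minimizing agg of distances from p to all q in Q."""
    return min(P, key=lambda p: agg(math.dist(p, q) for q in Q))

facilities = [(0, 0), (5, 5), (10, 0)]
users = [(4, 4), (6, 4), (5, 7)]
print(ann_query(facilities, users, agg=sum))  # minimizes total travel: (5, 5)
print(ann_query(facilities, users, agg=max))  # minimizes the farthest user's travel
```

The same `agg` hook covers the min-aggregate variant mentioned in the abstract (`agg=min`), where the answer is the facility closest to any single user.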
Ultimate Cognition à la Gödel, COGN COMPUT, 2009
"... "All life is problem solving," said Popper. To deal with arbitrary problems in arbitrary environments, an ultimate cognitive agent should use its limited hardware in the "best" and "most efficient" possible way. Can we formally nail down this informal statement, and der ..."
Abstract

Cited by 30 (12 self)
 Add to MetaCart
"All life is problem solving," said Popper. To deal with arbitrary problems in arbitrary environments, an ultimate cognitive agent should use its limited hardware in the "best" and "most efficient" possible way. Can we formally nail down this informal statement, and derive a mathematically rigorous blueprint of ultimate cognition? Yes, we can, using Kurt Gödel’s celebrated selfreference trick of 1931 in a new way. Gödel exhibited the limits of mathematics and computation by creating a formula that speaks about itself, claiming to be unprovable by an algorithmic theorem prover: either the formula is true but unprovable, or math itself is flawed in an algorithmic sense. Here we describe an agentcontrolling program that speaks about itself, ready to rewrite itself in arbitrary fashion once it has found a proof that the rewrite is useful according to a userdefined utility function. Any such a rewrite is necessarily globally optimal—no local maxima!—since this proof necessarily must have demonstrated the uselessness of continuing the proof search for even better rewrites. Our selfreferential program will optimally speed up its proof searcher and other program parts, but only if the speed up’s utility is indeed provable—even ultimate cognition has limits of the Gödelian kind.
Gödel Machines: Self-Referential Universal Problem Solvers Making Provably Optimal Self-Improvements, 2003
"... An old dream of computer scientists is to build an optimally efficient universal problem solver. We show how to solve arbitrary computational problems in an optimal fashion inspired by Kurt Gödel's celebrated selfreferential formulas (1931). Our Gödel machine's initial software includes ..."
Abstract

Cited by 19 (8 self)
 Add to MetaCart
(Show Context)
An old dream of computer scientists is to build an optimally efficient universal problem solver. We show how to solve arbitrary computational problems in an optimal fashion inspired by Kurt Gödel's celebrated self-referential formulas (1931). Our Gödel machine's initial software includes an axiomatic description of: the Gödel machine's hardware, the problem-specific utility function (such as the expected future reward of a robot), known aspects of the environment, costs of actions and computations, and the initial software itself (this is possible without introducing circularity). It also includes a typically suboptimal initial problem-solving policy and an asymptotically optimal proof searcher searching the space of computable proof techniques, that is, programs whose outputs are proofs. Unlike previous approaches, the self-referential Gödel machine will rewrite any part of its software, including axioms and proof searcher, as soon as it has found a proof that this will improve its future performance, given its typically limited computational resources. We show that self-rewrites are globally optimal (no local minima!), since provably none of the alternative rewrites and proofs (those that could be found by continuing the proof search) are worth waiting for.
The New AI: General & Sound & Relevant for Physics, ARTIFICIAL GENERAL INTELLIGENCE (ACCEPTED 2002), 2003
"... Most traditional artificial intelligence (AI) systems of the past 50 years are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in the field of theoretically optimal and practically feasible algorithms for prediction, search, induct ..."
Abstract

Cited by 18 (9 self)
 Add to MetaCart
Most traditional artificial intelligence (AI) systems of the past 50 years are either very limited, or based on heuristics, or both. The new millennium, however, has brought substantial progress in the field of theoretically optimal and practically feasible algorithms for prediction, search, inductive inference based on Occam’s razor, problem solving, decision making, and reinforcement learning in environments of a very general type. Since inductive inference is at the heart of all inductive sciences, some of the results are relevant not only for AI and computer science but also for physics, provoking nontraditional predictions based on Zuse’s thesis of the computergenerated universe.
Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets, 2002
"... The Long ShortTerm Memory (LSTM) network trained by gradient descent solves difficult problems which traditional recurrent neural networks in general cannot. We have recently observed that the decoupled extended Kalman filter training algorithm allows for even better performance, reducing significa ..."
Abstract

Cited by 16 (8 self)
 Add to MetaCart
The Long Short-Term Memory (LSTM) network trained by gradient descent solves difficult problems which traditional recurrent neural networks in general cannot. We have recently observed that the decoupled extended Kalman filter training algorithm allows for even better performance, significantly reducing the number of training steps compared to the original gradient descent training algorithm. In this paper we present a set of experiments on tasks that are unsolvable by classical recurrent networks but are solved elegantly, robustly, and quickly by LSTM combined with Kalman filters.
Probabilistic Group Nearest Neighbor Queries in Uncertain Databases
"... Abstract—The importance of query processing over uncertain data has recently arisen due to its wide usage in many realworld applications. In the context of uncertain databases, previous works have studied many query types such as nearest neighbor query, range query, topk query, skyline query, and ..."
Abstract

Cited by 13 (0 self)
 Add to MetaCart
(Show Context)
The importance of query processing over uncertain data has recently arisen due to its wide usage in many real-world applications. In the context of uncertain databases, previous works have studied many query types such as nearest neighbor query, range query, top-k query, skyline query, and similarity join. In this paper, we focus on another important query, namely, probabilistic group nearest neighbor (PGNN) query, in the uncertain database, which also has many applications. Specifically, given a set, Q, of query points, a PGNN query retrieves data objects that minimize the aggregate distance (e.g., sum, min, and max) to query set Q. Due to the inherent uncertainty of data objects, previous techniques to answer group nearest neighbor (GNN) query cannot be directly applied to our PGNN problem. Motivated by this, we propose effective pruning methods, namely, spatial pruning and probabilistic pruning, to reduce the PGNN search space, which can be seamlessly integrated into our PGNN query procedure. Extensive experiments have demonstrated the efficiency and effectiveness of our proposed approach, in terms of the wall-clock time and the speedup ratio against linear scan.
Index Terms: Probabilistic group nearest neighbor queries, uncertain database.
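The PGNN semantics can be illustrated by brute force over discretely uncertain objects, each given as a set of (location, probability) instances; the paper's spatial and probabilistic pruning exists precisely to avoid this exponential enumeration. The objects, query points, and probability threshold below are made-up assumptions.

```python
import math
from itertools import product

# Brute-force sketch of a PGNN query over discretely uncertain objects.
# Each object is a list of (location, probability) instances summing to 1.
def pgnn(objects, Q, alpha, agg=sum):
    """Return names of objects whose probability of minimizing the
    aggregate distance to query set Q is at least alpha."""
    def adist(p):
        return agg(math.dist(p, q) for q in Q)
    names = list(objects)
    win_prob = {n: 0.0 for n in names}
    # Enumerate every joint instantiation of all objects' locations.
    for combo in product(*(objects[n] for n in names)):
        joint_p = math.prod(pr for _, pr in combo)
        best = min(range(len(combo)), key=lambda i: adist(combo[i][0]))
        win_prob[names[best]] += joint_p
    return [n for n in names if win_prob[n] >= alpha]

objects = {
    "a": [((1, 1), 0.5), ((8, 8), 0.5)],   # uncertain: two possible locations
    "b": [((2, 1), 1.0)],                  # certain location
}
Q = [(0, 0), (3, 3)]
print(pgnn(objects, Q, alpha=0.4))  # both objects qualify at this threshold
```

Here each object wins in the instantiations where its sampled location has the smallest sum-distance to Q, so the cost is the product of all instance counts; pruning bounds that probability without full enumeration.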
The principle of presence: A heuristic for growing knowledge structured neural networks, In Proceedings of the NeuroSymbolic Workshop at IJCAI (NeSy'05), 2005
"... Fully connected neural networks such as multilayer perceptrons can approximate any given bounded function provided they have sufficient time. But this time grows quickly with the number of connections. In lifelong learning, the agent must acquire more and more knowledge in order to solve problems ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Fully connected neural networks such as multilayer perceptrons can approximate any given bounded function, provided they have sufficient time. But this time grows quickly with the number of connections. In lifelong learning, the agent must acquire more and more knowledge in order to solve problems of growing complexity. For this purpose, it is not reasonable to fully connect huge networks. Applying the point of view of locality, we hypothesize that memorization takes into account only what one perceives and thinks. Based on this principle of presence, a neural network is constructed for structuring knowledge online. Advantages and limitations are discussed.
Institute of Biology I; Institute of Biology III
"... When we have learned a motor skill, such as cycling or iceskating, we can rapidly generalize to novel tasks, such as motorcycling or rollerblading [1–8]. Such facilitation of learning could arise through two distinct mechanisms by which the motor system might adjust its control parameters. First, fa ..."
Abstract
 Add to MetaCart
(Show Context)
When we have learned a motor skill, such as cycling or ice-skating, we can rapidly generalize to novel tasks, such as motorcycling or rollerblading [1–8]. Such facilitation of learning could arise through two distinct mechanisms by which the motor system might adjust its control parameters. First, fast learning could simply be a consequence of the proximity of the original and final settings of the control parameters. Second, by structural learning [9–14], the motor system could constrain the parameter adjustments to conform to the control parameters' covariance structure. Thus, facilitation of learning would rely on the novel task parameters' lying on the structure of a lower-dimensional subspace that can be explored more efficiently. To test between these two hypotheses, we exposed subjects to randomly varying visuomotor tasks of fixed structure.