Results 11–20 of 70
Tuning & Simplifying Heuristical Optimization
, 2010
Abstract

Cited by 4 (0 self)
This thesis is about the tuning and simplification of black-box (direct-search, derivative-free) optimization methods, which by definition do not use gradient information to guide their search for an optimum but merely need a fitness (cost, error, objective) measure for each candidate solution to the optimization problem. Such optimization methods often have parameters that influence their behaviour and efficacy. A Meta-Optimization technique is presented here for tuning the behavioural parameters of an optimization method by employing an additional layer of optimization. This is used in a number of experiments on two popular optimization methods, Differential Evolution and Particle Swarm Optimization, and unveils the true performance capabilities of an optimizer in different usage scenarios. It is found that state-of-the-art optimizer variants with their supposedly adaptive behavioural parameters do not have a general and consistent performance advantage but are outperformed in several cases by simplified optimizers, if only the behavioural parameters are tuned properly.
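The two-layer scheme can be sketched as an outer search over the behavioural parameters of an inner optimizer. The toy version below tunes the F and CR parameters of a bare-bones Differential Evolution on a sphere benchmark via plain random search; the population size, parameter ranges, and benchmark are illustrative choices, not the thesis's actual setup:

```python
import random

def sphere(x):
    """Benchmark fitness: sum of squares, minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def de(fitness, dim, pop_size, F, CR, gens, rng):
    """Bare-bones DE/rand/1/bin; returns the best fitness found."""
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [a[d] + F * (b[d] - c[d]) if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            if fitness(trial) <= fitness(pop[i]):
                pop[i] = trial
        best = min(best, min(pop, key=fitness), key=fitness)
    return fitness(best)

def meta_optimize(trials, rng):
    """Outer layer: random search over DE's behavioural parameters (F, CR)."""
    best_params, best_score = None, float("inf")
    for _ in range(trials):
        F, CR = rng.uniform(0.1, 1.0), rng.uniform(0.0, 1.0)
        # Average over a few runs so noisy inner results don't mislead the tuner.
        score = sum(de(sphere, 5, 20, F, CR, 50, rng) for _ in range(3)) / 3
        if score < best_score:
            best_params, best_score = (F, CR), score
    return best_params, best_score

rng = random.Random(0)
params, score = meta_optimize(10, rng)
print(params, score)
```

The key point is that the inner optimizer is treated as a black box itself: the outer layer only sees the fitness it achieves, averaged over repeated runs to smooth out stochastic noise.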
New games related to old and new sequences
 in Heinz (Eds.), Proc. 10th Advances in Computer Games Conference (ACG10)
, 2003
Abstract

Cited by 2 (2 self)
We define an infinite class of 2-pile subtraction games in which the amount that can be subtracted from both piles simultaneously is a function f of the size of the piles. Wythoff’s game is a special case. For each game, the 2nd-player winning positions form a pair of complementary sequences, some of which are related to well-known sequences, but most are new. The main result is a theorem giving necessary and sufficient conditions on f for the sequences to be 2nd-player winning positions. Sample games are presented, strategy complexity questions are discussed, and possible further studies are indicated.
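For the special case of Wythoff’s game, the 2nd-player winning positions have a well-known closed form as a pair of complementary Beatty sequences built from the golden ratio φ: the kth position is (⌊kφ⌋, ⌊kφ²⌋). A minimal sketch (the function name is ours):

```python
import math

def wythoff_p_positions(n):
    """First n 2nd-player winning positions of Wythoff's game:
    (floor(k*phi), floor(k*phi^2)) for k = 1..n, phi the golden ratio."""
    phi = (1 + math.sqrt(5)) / 2
    return [(math.floor(k * phi), math.floor(k * phi * phi))
            for k in range(1, n + 1)]

positions = wythoff_p_positions(5)
print(positions)  # → [(1, 2), (3, 5), (4, 7), (6, 10), (8, 13)]

# The two coordinate sequences are complementary: disjoint, and together
# covering the positive integers as n grows.
a = {p for p, _ in wythoff_p_positions(10)}
b = {q for _, q in wythoff_p_positions(10)}
assert a.isdisjoint(b)
```

The paper's generalization replaces the fixed simultaneous-subtraction rule by a function f of the pile sizes, for which no such closed form is available in general.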
A glimpse at the metaphysics of Bongard problems
, 2000
Abstract

Cited by 2 (1 self)
Bongard problems present an outstanding challenge to artificial intelligence. They consist of visual pattern understanding problems in which the task of the pattern perceiver is to find an abstract aspect of distinction between two classes of figures. This paper examines the philosophical question of whether objects in Bongard problems can be ascribed an a priori, metaphysical existence – the ontological question of whether objects, and their boundaries, come predefined, independently of any understanding or context. This is an essential issue, because it determines whether a priori symbolic representations can be of use for solving Bongard problems. The conclusion of this analysis is that in the case of Bongard problems there can be no units ascribed an a priori existence – and thus the objects dealt with in any specific problem must be found by the solution methods (rather than given to them). This view ultimately leads to the emerging alternatives to the philosophical doc...
Coordination of Distributed Knowledge Networks Using Contract Net Protocol
 in IEEE Information Technology Conference
, 1998
Abstract

Cited by 2 (1 self)
Tools for selective proactive as well as reactive information retrieval, information extraction, information organization and assimilation, and knowledge discovery using heterogeneous, distributed knowledge and data sources constitute some of the key enabling technologies for managing the data overload and for translating recent advances in automated data acquisition, digital storage, computers and communications into advances in decision support, scientific discovery and related applications. Such distributed knowledge networks (DKN) have to be able to effectively utilize multiple autonomous, often independently owned and operated information systems. Given the complexity of such systems and the need for autonomy of the components, multi-agent systems, because of their modularity, offer an attractive framework for the design of DKN. In such multi-agent systems, satisfactory completion of the tasks at hand depends critically on effective communication and coordination among the agents. This...
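The Contract Net Protocol referenced in the title coordinates such agents through an announce–bid–award cycle: a manager broadcasts a task, contractor agents respond with bids, and the manager awards the contract to the best bidder. A minimal sketch, in which the agent names and the load-based cost model are hypothetical illustrations rather than the paper's design:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """A contractor agent that bids its estimated cost for a task."""
    name: str
    load: int  # current workload; an idle agent bids lower

    def bid(self, task):
        # Hypothetical cost model: base task cost plus current load.
        return task["cost"] + self.load

def contract_net(task, agents):
    """One announce-bid-award cycle: broadcast the task, collect bids,
    and award the contract to the lowest bidder."""
    bids = {agent.name: agent.bid(task) for agent in agents}
    winner = min(bids, key=bids.get)
    return winner, bids

agents = [Agent("retriever", 3), Agent("extractor", 1), Agent("assimilator", 5)]
winner, bids = contract_net({"name": "fetch-records", "cost": 10}, agents)
print(winner)  # → extractor
```

Because each agent computes its own bid locally, the protocol preserves component autonomy while still letting the manager allocate tasks to the currently least-loaded agent.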
Semantic Referencing – Determining Context Weights for Similarity Measurement
Abstract

Cited by 1 (1 self)
Semantic similarity measurement is a key methodology in various domains ranging from cognitive science to geographic information retrieval on the Web. Meaningful notions of similarity, however, cannot be determined without taking additional contextual information into account. One way to make similarity measures context-aware is to introduce weights for specific characteristics. Existing approaches to automatically determine such weights are rather limited or require application-specific adjustments. In the past, the possibility of tweaking similarity theories until they fit a specific use case has been one of the major criticisms raised against their evaluation. In this work, we propose a novel approach to semi-automatically adapt similarity theories to the user’s needs and hence make them context-aware. Our methodology is inspired by the process of georeferencing images, in which known control points between the image and geographic space are used to compute a suitable transformation. We propose to semi-automatically calibrate the weights used to compute inter-instance and inter-concept similarities by allowing the user to adjust precomputed similarity rankings. These known control similarities are then used to reference other similarity values.

Keywords: Semantic Similarity, Geo-Semantics, Information Retrieval
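The calibration step can be illustrated with a toy least-squares fit: given user-adjusted overall similarities for a few control pairs (the analogue of ground control points in georeferencing), solve for the characteristic weights that best reproduce them, then apply those weights to other pairs. All numbers, and the linear weighted-sum model itself, are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

# Per-characteristic similarity scores for three "control" concept pairs
# (rows) across three characteristics (columns), e.g. shape, theme, location.
S = np.array([
    [0.9, 0.2, 0.4],
    [0.1, 0.8, 0.5],
    [0.6, 0.6, 0.9],
])
# User-adjusted overall similarities for the control pairs (the "known
# control similarities" of the abstract).
target = np.array([0.7, 0.4, 0.8])

# Calibrate context weights by least squares, clip to non-negative,
# and normalise so they sum to one.
w, *_ = np.linalg.lstsq(S, target, rcond=None)
w = np.clip(w, 0, None)
w = w / w.sum()

def weighted_similarity(scores, weights=w):
    """Context-aware similarity: weighted sum of per-characteristic scores."""
    return float(np.dot(scores, weights))

# With normalised weights, a pair scoring 0.5 everywhere scores 0.5 overall.
uniform = weighted_similarity(np.array([0.5, 0.5, 0.5]))
print(np.round(w, 3), round(uniform, 3))
```

The calibrated weights then "reference" every other similarity computation, just as a georeferencing transformation fitted on control points is applied to the whole image.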
Order and chaos in Hofstadter's Q(n) sequence
 Complexity
, 1999
Abstract

Cited by 1 (0 self)
A number of observations are made on Hofstadter’s integer sequence defined by Q(n) = Q(n − Q(n − 1)) + Q(n − Q(n − 2)) for n > 2, with Q(1) = Q(2) = 1. On short scales the sequence looks chaotic. It turns out, however, that the Q(n) can be grouped into a sequence of generations. The kth generation has 2^k members, which have “parents” mostly in generation k − 1 and a few in generation k − 2. In this sense the sequence becomes Fibonacci-like on a logarithmic scale. The variance of S(n) = Q(n) − n/2, averaged over generations, is ≃ 2^{αk}, with exponent α = 0.88(1). The probability distribution p*(x) of x = R(n) = S(n)/n^α, n ≫ 1, is well defined and strongly non-Gaussian, with tails well described by the error function erfc. The probability distribution of x_m = R(n) − R(n − m) is given by ...

In his famous book Gödel, Escher, Bach: an Eternal Golden Braid
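Hofstadter's recurrence is easy to reproduce directly with a memoized recursion; a minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def Q(n):
    """Hofstadter's Q sequence: Q(1) = Q(2) = 1 and
    Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2)) for n > 2."""
    if n <= 2:
        return 1
    return Q(n - Q(n - 1)) + Q(n - Q(n - 2))

# Evaluating in increasing order keeps the recursion shallow.
first = [Q(n) for n in range(1, 17)]
print(first)  # → [1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9]
```

Note the double self-reference that makes the sequence hard to analyse: the *arguments* of the recursive calls themselves depend on earlier values of Q, which is what produces the chaotic short-scale behaviour described above.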