Results 1–10 of 19
Conquering hierarchical difficulty by explicit chunking: Substructural chromosome compression
 In Proceedings of the 2006 Genetic and Evolutionary Computation Conference (GECCO 2006) Workshops: International Workshop on Learning Classifier Systems
, 2006
Abstract

Cited by 10 (2 self)
This paper proposes a chromosome compression scheme which represents subsolutions by the most expressive schemata. The proposed chromosome compression scheme is combined with the dependency structure matrix genetic algorithm and the restricted tournament replacement to create a scalable optimization tool which optimizes problems via hierarchical decomposition. One important feature of the proposed method is that at the end of the run, the problem structure obtained from the proposed method is comprehensible to human researchers and is reusable for larger-scale problems. Empirical results show that the proposed method scales subquadratically with the problem size on hierarchical problems and is able to capture the problem structures accurately.
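The idea of representing a subsolution by a schema can be illustrated with a minimal sketch. The wildcard scheme below is the generic textbook notion of a schema, assumed here for illustration only; it is not the paper's exact compression operator:

```python
# Hypothetical illustration: compress a set of observed subsolutions
# (equal-length bit strings over one linkage group) into a single schema.
# Positions where all strings agree keep their symbol; the rest become '*'.
def to_schema(substrings):
    """Return the most specific schema matching every given substring."""
    return ''.join(col[0] if len(set(col)) == 1 else '*'
                   for col in zip(*substrings))

print(to_schema(["1101", "1001", "1111"]))  # -> 1**1
```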
A Matrix Approach for Finding Extrema: PROBLEMS WITH MODULARITY, HIERARCHY, AND OVERLAP
, 2006
Abstract

Cited by 9 (0 self)
Unlike most simple textbook examples, the real world is full of complex systems, and researchers in many different fields are often confronted by problems arising from such systems. Simple heuristics or even enumeration works quite well on small and easy problems; however, to efficiently solve large and difficult problems, proper decomposition according to the complex system is the key. In this research project, investigating and analyzing interactions between components of complex systems sheds some light on problem decomposition. By recognizing three bare-bones types of interaction (modularity, hierarchy, and overlap), theories and models are developed to dissect and inspect problem decomposition in the context of genetic algorithms. This dissertation presents a research project to develop a competent optimization method that solves boundedly difficult problems with modularity, hierarchy, and overlap by explicit problem decomposition. The proposed genetic algorithm design utilizes a matrix representation of an interaction graph to analyze and decompose the problem. The results of this thesis should benefit research both technically and scientifically. Technically, this thesis develops an automated dependency structure matrix clustering technique and utilizes it to design a competent black-box problem solver. Scientifically, the explicit interaction …
Robust inference of trees
 IDSIA, Manno (Lugano), CH, 2003. Marcus Hutter is with the AI research institute IDSIA, Galleria 2, CH-6928 Manno-Lugano, Switzerland. Email: marcus@idsia.ch, HP: http://www.idsia.ch/~marcus/idsia
, 2003
Abstract

Cited by 7 (7 self)
This paper is concerned with the reliable inference of optimal tree approximations to the dependency structure of an unknown distribution generating data. The traditional approach to the problem measures the dependency strength between random variables by the index called mutual information. In this paper reliability is achieved by Walley’s imprecise Dirichlet model, which generalizes Bayesian learning with Dirichlet priors. Adopting the imprecise Dirichlet model results in posterior interval expectations for mutual information, and in a set of plausible trees consistent with the data. Reliable inference about the actual tree is achieved by focusing on the substructure common to all the plausible trees. We develop an exact algorithm that infers the substructure in time O(m^4), m being the number of random variables. The new algorithm is applied to a set of data sampled from a known distribution. The method is shown to reliably infer edges of the actual tree even when the data are very scarce, unlike the traditional approach. Finally, we provide lower and upper credibility limits for mutual information under the imprecise Dirichlet model. These enable the previous developments to be extended to a full inferential method for trees.
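The "traditional approach" that the abstract contrasts with is commonly realized as a Chow-Liu tree: estimate pairwise mutual information from data, then keep a maximum-weight spanning tree over the variables. A self-contained sketch of that baseline, assuming toy binary data (the data and names are illustrative, not from the paper):

```python
# Baseline sketch: empirical mutual information + maximum spanning tree
# (Chow-Liu). This illustrates the point-estimate approach the paper
# improves on, not the paper's imprecise-Dirichlet method.
from collections import Counter
from itertools import combinations
from math import log

def mutual_information(samples, i, j):
    """Empirical mutual information (in nats) between columns i and j."""
    n = len(samples)
    pij = Counter((s[i], s[j]) for s in samples)
    pi = Counter(s[i] for s in samples)
    pj = Counter(s[j] for s in samples)
    return sum((c / n) * log((c / n) / ((pi[a] / n) * (pj[b] / n)))
               for (a, b), c in pij.items())

def chow_liu_tree(samples, m):
    """Kruskal's algorithm over pairwise MI weights, with union-find."""
    parent = list(range(m))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    edges = sorted(combinations(range(m), 2),
                   key=lambda e: mutual_information(samples, *e),
                   reverse=True)
    tree = []
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree

# Toy data: X1 copies X0, while X2 is independent of both.
data = [(0, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1)] * 5
tree = chow_liu_tree(data, 3)
```

With scarce data, the MI point estimates above become unreliable, which is exactly the failure mode the interval-valued approach addresses.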
On the use of mutual information in data analysis: An overview
 Proc Int Symp Appl Stochastic Models Data Anal
, 2005
Minimum Mutual Information and Non-Gaussianity Through the Maximum Entropy Method: Theory and Properties
, 2012
Modular Bayesian Inference and Learning of Decision Networks as Stand-Alone Mechanisms of the MABEL Model: Implications for Visualization, Comprehension, and Policy-Making
, 2006
Abstract

Cited by 2 (0 self)
This paper describes a modular component of the MABEL model agents’ cognitive inference mechanism. The probabilistic and probabilogic representation of the agents’ environment and state space is coupled with a Bayesian belief and decision network functionality, which in fact holds Markovian semiparametric properties. Different approaches to modeling multi-agent systems are described and analyzed; problem-, model-, and knowledge-driven approaches to agent inference and learning are emphasized. The notion of modularity in agent-based modeling components is conceptualized. The modular architecture of the decision inference mechanism allows for a flexible architectural design that can be either endogenous or exogenous to the agent-based simulation model. A suite of decision support tools for modular network inference in the MABEL model is showcased; the emphasis is on the component object model versus interoperability development interfaces. These tools provide the complex functionality of developing “models within models,” thus simplifying the need for extensive research support and for a high-end level of knowledge acquisition from the end-users’ perspective. Finally, the paper assesses the validity of visual modeling interfaces for data- and knowledge-acquisition mechanisms that can provide an essential link between an in vitro research model and the complex realities that are observed and processed by decision-makers, policy-makers, communities, and stakeholders. Keywords: Agent-based model, MABEL, Bayesian belief networks, Bayesian decision networks, visualization, decision-theoretic inference, policy making
Bayesian and Quasi-Bayesian Estimators for Mutual Information from Discrete Data
, 2013
Exact NonParametric Bayesian Inference on Infinite Trees
, 2009
Abstract

Cited by 1 (0 self)
Given i.i.d. data from an unknown distribution, we consider the problem of predicting future items. An adaptive way to estimate the probability density is to recursively subdivide the domain to an appropriate data-dependent granularity. In Bayesian inference one assigns a data-independent prior probability to “subdivide”, which leads to a prior over infinite(ly many) trees. We derive an exact, fast, and simple inference algorithm for such a prior, for the data evidence, the predictive distribution, the effective model dimension, moments, and other quantities. We prove asymptotic convergence and consistency results, and illustrate the behavior of our model on some prototypical functions.
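The recursive-subdivision idea can be sketched as a simple adaptive histogram. The stopping rule and constants below are assumptions for illustration; the paper's Bayesian treatment averages over all trees rather than committing to one partition:

```python
# Minimal sketch of data-dependent recursive subdivision: split an interval
# in half while it holds more than `max_pts` points, then assign each leaf
# the histogram density count / (n * width). Threshold is illustrative.
def subdivide(points, lo, hi, n, max_pts=4):
    """Return a list of (lo, hi, density) leaves partitioning [lo, hi)."""
    inside = [x for x in points if lo <= x < hi]
    if len(inside) <= max_pts or hi - lo < 1e-6:
        return [(lo, hi, len(inside) / (n * (hi - lo)))]
    mid = (lo + hi) / 2
    return (subdivide(inside, lo, mid, n, max_pts)
            + subdivide(inside, mid, hi, n, max_pts))

# Data clustered near 0: the partition becomes finer where data are dense.
data = [0.01, 0.02, 0.03, 0.06, 0.07, 0.1, 0.3, 0.9]
leaves = subdivide(data, 0.0, 1.0, len(data))
```

The leaf densities integrate to one over [0, 1), and the leaves near 0 are narrower than those covering the sparse right half, which is the adaptivity the abstract describes.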
Algorithms
, 2006
Abstract
This paper presents a population-sizing model for entropy-based model building in genetic algorithms. Specifically, the population size required for building an accurate model is investigated. The effect of selection pressure on population sizing is also incorporated. The proposed model indicates that the population size required for building an accurate model scales as Θ(m log m), where m is the number of substructures, which is proportional to the problem size. Experiments are conducted to verify the derivations, and the results agree with the proposed model.