Results 11–20 of 793
On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms
, 1989
Abstract

Cited by 186 (10 self)
Short abstract, isn't it? P.A.C.S. numbers 05.20, 02.50, 87.10 1 Introduction Large Numbers "...the optimal tour displayed (see Figure 6) is the possible unique tour having one arc fixed from among 10^655 tours that are possible among 318 points and have one arc fixed. Assuming that one could possibly enumerate 10^9 tours per second on a computer it would thus take roughly 10^639 years of computing to establish the optimality of this tour by exhaustive enumeration." This quote shows the real difficulty of a combinatorial optimization problem. The huge number of configurations is the primary difficulty when dealing with one of these problems. The quote belongs to M. W. Padberg and M. Grötschel, Chap. 9, "Polyhedral computations", from the book The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization [124]. It is interesting to compare the number of configurations of real-world problems in combinatorial optimization with those large numbers arising in Cosmol...
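The quoted figure can be sanity-checked with a few lines of arithmetic, taking the quote's 10^655 tours and 10^9 tours per second at face value:

```python
import math

# Back-of-the-envelope check of the quoted enumeration time.
log10_tours = 655                      # ~10^655 tours among 318 points, one arc fixed
log10_rate = 9                         # 10^9 tours enumerated per second
seconds_per_year = 365.25 * 24 * 3600  # ~3.16e7 seconds

log10_seconds = log10_tours - log10_rate
log10_years = log10_seconds - math.log10(seconds_per_year)
print(f"~10^{log10_years:.0f} years")  # roughly 10^639 years, as quoted
```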
Metaheuristics in combinatorial optimization: Overview and conceptual comparison
 ACM COMPUTING SURVEYS
, 2003
Abstract

Cited by 169 (14 self)
The field of metaheuristics for the application to combinatorial optimization problems is a rapidly growing field of research. This is due to the importance of combinatorial optimization problems for the scientific as well as the industrial world. We give a survey of the most important metaheuristics of today from a conceptual point of view. We outline the different components and concepts that are used in the different metaheuristics in order to analyze their similarities and differences. Two very important concepts in metaheuristics are intensification and diversification. These are the two forces that largely determine the behaviour of a metaheuristic. They are in some way contrary but also complementary to each other. We introduce a framework, which we call the I&D frame, in order to put different intensification and diversification components into relation with each other. Outlining the advantages and disadvantages of different metaheuristic approaches, we conclude by pointing out the importance of hybridization of metaheuristics as well as the integration of metaheuristics and other methods for optimization.
Modelling gene expression data using dynamic Bayesian networks
, 1999
Abstract

Cited by 157 (1 self)
Recently, there has been much interest in reverse engineering genetic networks from time series data. In this paper, we show that most of the proposed discrete time models — including the boolean network model [Kau93, SS96], the linear model of D’haeseleer et al. [DWFS99], and the nonlinear model of Weaver et al. [WWS99] — are all special cases of a general class of models called Dynamic Bayesian Networks (DBNs). The advantages of DBNs include the ability to model stochasticity, to incorporate prior knowledge, and to handle hidden variables and missing data in a principled way. This paper provides a review of techniques for learning DBNs. Keywords: Genetic networks, boolean networks, Bayesian networks, neural networks, reverse engineering, machine learning.
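As a rough illustration of the deterministic special case mentioned above, here is a toy Boolean network update; the gene names and update rules are invented for the sketch, not taken from the paper:

```python
# A 3-gene Boolean network: the deterministic special case of a DBN
# transition model, where each gene's next state is a Boolean function
# of its parents' current states. Rules here are hypothetical.
def step(state):
    a, b, c = state
    return (
        b and not c,  # A at t+1 depends on parents B, C at time t
        a,            # B at t+1 copies A at time t
        a or b,       # C at t+1 depends on A, B at time t
    )

state = (True, False, False)
trajectory = [state]
for _ in range(4):
    state = step(state)
    trajectory.append(state)
# this toy network settles into the all-off fixed point
```

A DBN generalizes this by replacing each deterministic update with a conditional probability table over the parents at the previous time slice.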
Qualitative Simulation of Genetic Regulatory Networks Using Piecewise-Linear Models
, 2001
Abstract

Cited by 130 (21 self)
In order to cope with the large amounts of data that have become available in genomics, mathematical tools for the analysis of networks of interactions between genes, proteins, and other molecules are indispensable. We present a method for the qualitative simulation of genetic regulatory networks, based on a class of piecewise-linear (PL) differential equations that has been well-studied in mathematical biology. The simulation method is well-adapted to state-of-the-art measurement techniques in genomics, which often provide qualitative and coarse-grained descriptions of genetic regulatory networks. Given a qualitative model of a genetic regulatory network, consisting of a system of PL differential equations and inequality constraints on the parameter values, the method produces a graph of qualitative states and transitions between qualitative states, summarizing the qualitative dynamics of the system. The qualitative simulation method has been implemented in Java in the computer tool Genetic Network Analyzer.
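A minimal sketch of the kind of PL equation this model class builds on, with hypothetical parameter values (the method itself works with inequality constraints on such parameters rather than numbers):

```python
# Toy one-gene piecewise-linear (PL) model:
#   dx/dt = kappa * s(x, theta) - gamma * x
# where s is a step regulation function that switches production on
# below a threshold. Parameter values are made up for illustration.
def s(x, theta):
    # production is on only while concentration x is below threshold theta
    return 1.0 if x < theta else 0.0

def dx_dt(x, kappa=2.0, theta=1.0, gamma=1.0):
    return kappa * s(x, theta) - gamma * x

# below the threshold the gene is produced (dx/dt > 0);
# above it, only linear degradation remains (dx/dt < 0)
```

Within each region delimited by the thresholds the dynamics are linear, which is what makes a qualitative state-transition graph derivable from the inequalities alone.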
Sensitivity and specificity of inferring genetic regulatory interactions from microarray experiments with dynamic Bayesian networks
 Bioinformatics
, 2003
Abstract

Cited by 109 (3 self)
Motivation: Bayesian networks have been applied to infer genetic regulatory interactions from microarray gene expression data. This inference problem is particularly hard in that interactions between hundreds of genes have to be learned from very small data sets, typically containing only a few dozen time points during a cell cycle. Most previous studies have assessed the inference results on real gene expression data by comparing predicted genetic regulatory interactions with those known from the biological literature. This approach is controversial due to the absence of known gold standards, which renders the estimation of the sensitivity and specificity, that is, the true and (complementary) false detection rate, unreliable and difficult. The objective of the present study is to test the viability of the Bayesian network paradigm in a realistic simulation study. First, gene expression data are simulated from a realistic biological network involving DNAs, mRNAs, inactive protein monomers and active protein dimers. Then, interaction networks are inferred from these data in a reverse engineering approach, using Bayesian networks and Bayesian learning with Markov chain Monte Carlo.
Results: The simulation results are presented as receiver operating characteristic (ROC) curves. This allows estimating the proportion of spurious gene interactions incurred for a specified target proportion of recovered true interactions. The findings demonstrate how the network inference performance varies with the training set size, the degree of inadequacy of prior assumptions, the experimental sampling strategy and the inclusion of further, sequence-based information.
The Artificial Life Roots of Artificial Intelligence
, 1993
Abstract

Cited by 101 (5 self)
Behavior-oriented AI is a scientific discipline that studies how behavior of agents emerges and becomes intelligent and adaptive. Success of the field is defined in terms of success in building physical agents that are capable of maximising their own self-preservation in interaction with a dynamically changing environment. The paper addresses this artificial life route towards artificial intelligence and reviews some of the results obtained so far. (Official reference: Steels, L. (1994) The artificial life roots of artificial intelligence. Artificial Life Journal, Vol. 1(1). MIT Press, Cambridge.) 1 Introduction For several decades, the field of Artificial Intelligence has been pursuing the study of intelligent behavior using the methodology of the artificial [104]. But the focus of this field, and hence the successes, have mostly been on higher order cognitive activities such as expert problem solving. The inspiration for AI theories has mostly come from logic and the cognitive...
From Boolean to Probabilistic Boolean Networks as Models of Genetic Regulatory Networks
 Proc. IEEE
, 2002
Abstract

Cited by 84 (17 self)
Mathematical and computational modeling of genetic regulatory networks promises to uncover the fundamental principles governing biological systems in an integrative and holistic manner. It also paves the way toward the development of systematic approaches for effective therapeutic intervention in disease. The central theme in this paper is the Boolean formalism as a building block for modeling complex, large-scale, and dynamical networks of genetic interactions. We discuss the goals of modeling genetic networks as well as the data requirements. The Boolean formalism is justified from several points of view. We then introduce Boolean networks and discuss their relationships to nonlinear digital filters. The role of Boolean networks in understanding cell differentiation and cellular functional states is discussed. The inference of Boolean networks from real gene expression data is considered from the viewpoints of computational learning theory and nonlinear signal processing, touching on computational complexity of learning and robustness. Then, a discussion of the need to handle uncertainty in a probabilistic framework is presented, leading to an introduction of probabilistic Boolean networks and their relationships to Markov chains. Methods for quantifying the influence of genes on other genes are presented. The general question of the potential effect of individual genes on the global dynamical network behavior is considered using stochastic perturbation analysis. This discussion then leads into the problem of target identification for therapeutic intervention via the development of several computational tools based on first-passage times in Markov chains. Examples from biology are presented throughout the paper.
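A minimal sketch of the probabilistic Boolean network idea, with hypothetical genes and predictor functions; random selection among predictors is what turns the dynamics into a Markov chain over the state space:

```python
import random

# Toy two-gene probabilistic Boolean network (illustrative only):
# each gene has a list of (probability, predictor) pairs and draws one
# predictor at random each step, so successive states form a Markov
# chain over {0, 1}^2.
PREDICTORS = {
    "A": [(0.7, lambda a, b: b), (0.3, lambda a, b: 1 - a)],
    "B": [(1.0, lambda a, b: a & b)],
}

def step(state, rng):
    a, b = state
    nxt = []
    for gene in ("A", "B"):
        r, acc = rng.random(), 0.0
        chosen = PREDICTORS[gene][-1][1]  # fallback for rounding error
        for p, f in PREDICTORS[gene]:
            acc += p
            if r < acc:
                chosen = f
                break
        nxt.append(int(chosen(a, b)))
    return tuple(nxt)

rng = random.Random(0)
state = (1, 1)
path = [state]
for _ in range(5):
    state = step(state, rng)
    path.append(state)
```

Quantities such as the first-passage times mentioned above are then standard Markov-chain computations on the induced transition matrix.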
Coevolution of A Backgammon Player
 Proceedings Artificial Life V
Abstract

Cited by 78 (11 self)
One of the persistent themes in Artificial Life research is the use of coevolutionary arms races in the development of specific and complex behaviors. However, other than Sims’s work on artificial robots, most of the work has attacked very simple games of prisoner’s dilemma or predator and prey. Following Tesauro’s work on TD-Gammon, we used a 4000-parameter feedforward neural network to develop a competitive backgammon evaluation function. Play proceeds by a roll of the dice, application of the network to all legal moves, and choosing the move with the highest evaluation. However, no backpropagation, reinforcement ...
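The greedy move-selection loop described above can be sketched as follows; `choose_move` and the toy scoring function are hypothetical stand-ins for the authors' network, not their code:

```python
# After a dice roll, score every board position reachable by a legal
# move and play the move leading to the highest-rated position.
def choose_move(candidate_positions, evaluate):
    return max(candidate_positions, key=evaluate)

# toy usage: positions as (my_pips, opponent_pips) with a made-up
# linear "network" standing in for the trained evaluation function
positions = [(1, 2), (3, 0), (0, 5)]
best = choose_move(positions, evaluate=lambda p: p[0] - p[1])
```

Note that only the evaluation function is learned; the search is a plain one-ply maximization over legal moves.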
The calculi of emergence: Computation, dynamics, and induction
 Physica D
, 1994
Abstract

Cited by 77 (14 self)
Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analyzed in terms of how model-building observers infer from measurements the computational capabilities embedded in nonlinear processes. An observer’s notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer’s chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data. This paper presents an overview of an inductive framework — hierarchical machine reconstruction — in which the emergence of complexity is associated with the innovation of new computational model classes. Complexity metrics for detecting structure and quantifying emergence, along with an analysis of the constraints on the dynamics of innovation, are outlined. Illustrative examples are drawn from the onset of unpredictability in nonlinear systems, finitary nondeterministic processes, and ...
Optimization-Based Reconstruction of a 3D Object From a Single Freehand Line Drawing
 Computer-Aided Design
, 1996
Abstract

Cited by 74 (9 self)
This paper describes an optimization-based algorithm for reconstructing a 3D model from a single, inaccurate, 2D edge-vertex graph. The graph, which serves as input for the reconstruction process, is obtained from an inaccurate freehand sketch of a 3D wireframe object. Compared with traditional reconstruction methods based on line labeling, the proposed approach is more tolerant of faults in handling both inaccurate vertex positioning and sketches with missing entities. Furthermore, the proposed reconstruction method supports a wide scope of general (manifold and non-manifold) objects containing flat and cylindrical faces. Sketches of wireframe models usually include enough information to reconstruct the complete body. The optimization algorithm is discussed, and examples from a working implementation are given.