Results 1 to 10 of 80
Niching Methods for Genetic Algorithms
, 1995
Abstract

Cited by 232 (1 self)
Niching methods extend genetic algorithms to domains that require the location and maintenance of multiple solutions. Such domains include classification and machine learning, multimodal function optimization, multiobjective function optimization, and simulation of complex and adaptive systems. This study presents a comprehensive treatment of niching methods and the related topic of population diversity. Its purpose is to analyze existing niching methods and to design improved niching methods. To achieve this purpose, it first develops a general framework for the modelling of niching methods, and then applies this framework to construct models of individual niching methods, specifically crowding and sharing methods. Using a constructed model of crowding, this study determines why crowding methods over the last two decades have not made effective niching methods. A series of tests and design modifications results in the development of a highly effective form of crowding, called determin...
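The truncated name at the end refers to deterministic crowding. As a rough illustration of the crowding idea (not the study's exact procedure; the operator names are placeholders), the sketch below pairs parents at random and lets each offspring compete for a population slot only against the parent it most resembles, so solutions in distinct niches are not crowded out:

```python
import random

def deterministic_crowding_step(pop, fitness, distance, crossover, mutate):
    """One generation of a deterministic-crowding sketch.

    Offspring replace only the parent they most resemble, and only when
    strictly fitter. Assumes an even-sized population; fitness, distance,
    crossover, and mutate are user-supplied (illustrative) operators.
    """
    pop = pop[:]
    random.shuffle(pop)
    next_pop = []
    for p1, p2 in zip(pop[0::2], pop[1::2]):
        c1, c2 = crossover(p1, p2)
        c1, c2 = mutate(c1), mutate(c2)
        # Pair each child with its nearer parent before the tournaments.
        if distance(p1, c1) + distance(p2, c2) <= distance(p1, c2) + distance(p2, c1):
            matches = [(p1, c1), (p2, c2)]
        else:
            matches = [(p1, c2), (p2, c1)]
        for parent, child in matches:
            next_pop.append(child if fitness(child) > fitness(parent) else parent)
    return next_pop
```

Because replacement is restricted to the most similar parent, individuals sitting on different optima rarely compete with each other, which is what maintains multiple solutions over time.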
SEARCH, polynomial complexity, and the fast messy genetic algorithm
, 1995
Abstract

Cited by 58 (10 self)
Black-box optimization, that is, optimization in the presence of limited knowledge about the objective function, has recently enjoyed a large increase in interest because of demand from practitioners. This has triggered a race for new high-performance algorithms for solving large, difficult problems. Simulated annealing, genetic algorithms, and tabu search are some examples. Unfortunately, each of these algorithms is becoming a separate field in itself, and their use in practice is often guided by personal discretion rather than scientific reasons. The primary reason behind this confusing situation is the lack of any comprehensive understanding of black-box search. This dissertation takes a step toward clearing some of the confusion. Its main objectives are: 1. to present SEARCH (Search Envisioned As Relation & Class Hierarchizing), an alternate perspective on black-box optimization, with a quantitative analysis that lays the foundation essential for transcending the limits of random enumerative search; 2. to design and test the fast messy genetic algorithm. SEARCH is a general framework for understanding black-box optimization in terms of relations,
Rule-based Machine Learning Methods for Functional Prediction
 Journal of Artificial Intelligence Research
, 1995
Abstract

Cited by 48 (3 self)
We describe a machine learning method for predicting the value of a real-valued function, given the values of multiple input variables. The method induces solutions from samples in the form of ordered disjunctive normal form (DNF) decision rules. A central objective of the method and representation is the induction of compact, easily interpretable solutions. This rule-based decision model can be extended to search efficiently for similar cases prior to approximating function values. Experimental results on real-world data demonstrate that the new techniques are competitive with existing machine learning and statistical methods and can sometimes yield superior regression performance.
1. Introduction
The problem of approximating the values of a continuous variable is described in the statistical literature as regression. Given samples of an output (response) variable y and input (predictor) variables x = {x_1, ..., x_n}, the regression task is to find a mapping y = f(x). Relative to the spac...
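The ordered rule representation described above can be pictured as a first-match rule list; the sketch below (the rule contents are invented for illustration, not taken from the paper) returns the value of the first matching rule and falls back to a default:

```python
def predict(rules, default, x):
    """Evaluate an ordered rule list: each rule is a (condition, value)
    pair, and the first condition that holds for input x fixes the
    prediction. A schematic of ordered DNF-style regression rules."""
    for condition, value in rules:
        if condition(x):
            return value
    return default

# Hypothetical rule list over one predictor variable x1:
rules = [
    (lambda x: x["x1"] > 5.0, 10.0),  # if x1 > 5 then y = 10
    (lambda x: x["x1"] > 2.0, 4.0),   # else if x1 > 2 then y = 4
]
```

Because the rules are ordered, each later condition is implicitly conjoined with the negations of the earlier ones, which is what keeps the printed rules short and interpretable.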
Ariadne: a dynamic indoor signal map construction and localization system. In MobiSys ’06
Abstract

Cited by 41 (1 self)
Location determination of mobile users within a building has attracted much attention lately due to its many applications in mobile networking, including network intrusion detection problems. However, it is challenging due to the complexities of indoor radio propagation characteristics, exacerbated by the mobility of the user. A common practice is to mechanically generate a table showing the radio signal strength at different known locations in the building. A mobile user’s location at an arbitrary point in the building is then determined by measuring the signal strength at the location in question and looking it up in the above table using an LMSE (least mean square error) criterion. Obviously, building such a table is a very tedious and time-consuming task. This paper proposes a novel and automated location determination method called ARIADNE. Using a two-dimensional construction floor plan and only a single actual signal strength measurement, ARIADNE generates an estimated signal strength map comparable to those generated manually from actual measurements. Given the signal measurements for a mobile, a proposed clustering algorithm searches that signal strength map to determine the current mobile’s location. The results from ARIADNE are comparable and may even be superior to those from existing localization schemes.
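The table-lookup step described above amounts to a nearest-neighbour search under a least-mean-square-error criterion; a minimal sketch (the data layout and names are illustrative, not ARIADNE's code):

```python
def locate(signal_map, measurement):
    """Return the map location whose recorded signal strengths best match
    the observed measurement under the LMSE criterion.

    signal_map maps a location label to a list of signal strengths (dBm),
    one entry per access point, in the same order as `measurement`."""
    def mse(recorded):
        return sum((r - m) ** 2 for r, m in zip(recorded, measurement)) / len(measurement)
    return min(signal_map, key=lambda loc: mse(signal_map[loc]))

# Hypothetical two-room map with two access points:
signal_map = {"roomA": [-40.0, -60.0], "roomB": [-70.0, -50.0]}
```

ARIADNE's contribution is in generating `signal_map` from the floor plan and a single measurement rather than by exhaustive manual surveying; the lookup itself stays this simple.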
ON THE HARDWARE-SOFTWARE PARTITIONING PROBLEM: system modeling and partitioning techniques
 ACM Transactions on Design Automation of Electronic Systems
, 2003
Abstract

Cited by 27 (0 self)
This paper presents an in-depth study of several system partitioning procedures. It is based on the appropriate formulation of a general system model and is therefore independent of both the particular codesign problem and the specific partitioning procedure. The techniques under study are a knowledge-based system and three classical circuit partitioning algorithms (Simulated Annealing, Kernighan & Lin, and Hierarchical Clustering). The former was proposed entirely by the authors in previous works, while the latter have been properly extended to deal with system-level issues. We show how the way the problem is solved biases the results obtained, regarding both quality and convergence rate. Consequently, it is extremely important to choose the most suitable technique for the particular codesign problem being confronted.
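Of the classical algorithms listed, simulated annealing is the easiest to sketch for a binary hardware/software partition. The cost function and annealing parameters below are placeholders, not the paper's formulation; each task is mapped to 0 (software) or 1 (hardware):

```python
import math
import random

def anneal_partition(n_tasks, cost, t0=10.0, cooling=0.95, steps=500, seed=0):
    """Simulated-annealing sketch for a binary HW/SW partition.

    cost is an assumed user-supplied objective over an assignment list.
    Worse moves are accepted with probability exp(-delta / t), and the
    temperature t follows a geometric cooling schedule."""
    rng = random.Random(seed)
    assign = [rng.randint(0, 1) for _ in range(n_tasks)]
    cur = cost(assign)
    best, best_cost = assign[:], cur
    t = t0
    for _ in range(steps):
        i = rng.randrange(n_tasks)
        assign[i] ^= 1                       # move: flip one task's mapping
        new = cost(assign)
        if new <= cur or rng.random() < math.exp((cur - new) / t):
            cur = new                        # accept (always, if not worse)
            if cur < best_cost:
                best, best_cost = assign[:], cur
        else:
            assign[i] ^= 1                   # reject: undo the flip
        t *= cooling                         # geometric cooling
    return best, best_cost
```

In a real codesign setting, `cost` would combine estimated area, execution time, and communication across the partition boundary; that weighting is exactly where the system model from the paper comes in.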
Convex Sets Of Probabilities Propagation By Simulated Annealing
 In Proceedings of the Fifth International Conference IPMU'94
, 1994
Abstract

Cited by 27 (5 self)
An approximate simulation algorithm is presented for the propagation of convex sets of probabilities. It is assumed that the graph is such that an exact probabilistic propagation is feasible. The algorithm is a simulated annealing procedure, which randomly selects probability distributions among the possible ones, performing at the same time an exact probabilistic propagation. The algorithm can be applied to general directed acyclic graphs and is carried out on a tree of cliques. Some experimental tests are shown.
1. Introduction
One of the main problems with probabilistic propagation algorithms on graphical structures is the introduction of the initial exact conditional probabilities. A number of authors have tried to overcome this difficulty by allowing the use of intervals on the specified probabilities [5, 10, 11, 3, 14, 13, 19, 23, 2]. Some of these works [10, 11, 3, 13, 23] focus on the use of convex sets of probabilities. Convex sets are a more general tool for representing un...
3D face reconstruction from video using a generic model
 in International Conference on Multimedia and Expo
, 2002
Abstract

Cited by 22 (3 self)
Reconstructing a 3D model of a human face from a video sequence is an important problem in computer vision, with applications to recognition, surveillance, multimedia etc. However, the quality of 3D reconstructions using structure from motion (SfM) algorithms is often not satisfactory. One common method of overcoming this problem is to use a generic model of a face. Existing work using this approach initializes the reconstruction algorithm with this generic model. The problem with this approach is that the algorithm can converge to a solution very close to this initial value, resulting in a reconstruction which resembles the generic model rather than the particular face in the video which needs to be modeled. In this paper, we propose a method of 3D reconstruction of a human face from video in which the 3D reconstruction algorithm and the generic model are handled separately. A 3D estimate is obtained purely from the video sequence using SfM algorithms without use of the generic model. The final 3D model is obtained after combining the SfM estimate and the generic model using an energy function that corrects for the errors in the estimate by comparing local regions in the two models. The optimization is done using a Markov Chain Monte Carlo (MCMC) sampling strategy. The main advantage of our algorithm over others is that it is able to retain the specific features of the face in the video sequence even when these features are different from those of the generic model. The evolution of the 3D model through the various stages of the algorithm is presented. 1.
Improving Clustering Technique for Functional Approximation Problem Using Fuzzy Logic: ICFA algorithm
 Lecture Notes in Computer Science
, 2005
Abstract

Cited by 22 (13 self)
To date, clustering techniques have always been oriented toward solving classification and pattern recognition problems. However, some authors have applied them unchanged to construct initial models for function approximators. Nevertheless, classification and function approximation problems present quite different objectives. Therefore, it is necessary to design new clustering algorithms specialized for the problem of function approximation. This paper presents a new clustering technique, specially designed for function approximation problems, which improves the performance of the resulting approximator system compared with other models derived from traditional classification-oriented clustering algorithms and input–output clustering techniques.
Index Terms: Clustering techniques, function approximation, model initialization.
Linear upper bounds for random walk on small density random 3-CNFs
 In Proc. 44th IEEE Symp. on Foundations of Computer Science
, 2003
Abstract

Cited by 22 (1 self)
We analyze the efficiency of the random walk algorithm on random 3-CNF instances, and prove linear upper bounds on the running time of this algorithm for small clause density, less than 1.63. Our upper bound matches the observed running time to within a multiplicative factor. This is the first subexponential upper bound on the running time of a local improvement algorithm on random instances. Our proof introduces a simple, yet powerful tool for analyzing such algorithms, which may be of further use. This object, called a terminator, is a weighted satisfying assignment. We show that any CNF having a good (small weight) terminator is assured to be solved quickly by the random walk algorithm. This raises the natural question of the terminator threshold, which is the maximal clause density for which such assignments exist (with high probability). We use the analysis of the pure literal heuristic presented by Broder, Frieze and Upfal [12, 22] and show that for small clause densities good terminators exist. Thus we show that the pure literal threshold (≈ 1.63) is a lower bound on the terminator threshold. (We conjecture the terminator threshold to be in fact higher.) One nice property of terminators is that they can be found efficiently, via linear programming. This makes
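The random walk algorithm analyzed above is the classical local-improvement procedure: start from a uniformly random assignment, then repeatedly pick an unsatisfied clause at random and flip a random variable in it. A minimal sketch (the clause encoding as signed integers is an illustrative convention):

```python
import random

def random_walk_sat(clauses, n_vars, max_flips=10000, seed=0):
    """Random walk on a CNF formula.

    Literals are nonzero ints: +v means variable v is True, -v means
    False. Returns a satisfying assignment dict, or None if max_flips
    is exhausted without finding one."""
    rng = random.Random(seed)
    assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                     # all clauses satisfied
        lit = rng.choice(rng.choice(unsat))   # random literal of a random unsat clause
        assign[abs(lit)] = not assign[abs(lit)]
    return None
```

The paper's terminator argument bounds how many such flips this loop needs on random 3-CNFs below the stated clause density; the loop itself does no lookahead at all, which is what makes the linear bound notable.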
Gene Expression and Fast Construction of Distributed Evolutionary Representation
 Evolutionary Computation
, 2001
Abstract

Cited by 21 (0 self)
The gene expression process in nature produces different proteins in different cells from different portions of the DNA. Since proteins control almost every important activity in a living organism, at an abstract level, gene expression can be viewed as a process that evaluates the merit or "fitness" of the DNA. This distributed evaluation of the DNA would not be possible without a decomposed representation of the fitness function defined over the DNAs. This paper argues that, unless the living body was provided with such a representation, we have every reason to believe that it must have an efficient mechanism to construct this distributed representation. This paper demonstrates polynomial-time computability of such a representation by proposing a class of efficient algorithms. The main contribution of this paper is twofold. On the algorithmic side, it offers a way to scale up evolutionary search by detecting the underlying structure of the search space. On the biological side, it proves that the distributed representation of the evolutionary fitness function in gene expression can be computed in polynomial time.