Results 1–10 of 55
Designing Efficient And Accurate Parallel Genetic Algorithms
, 1999
Abstract

Cited by 222 (5 self)
Parallel implementations of genetic algorithms (GAs) are common, and, in most cases, they succeed in reducing the time required to find acceptable solutions. However, the effects of the parameters of parallel GAs on the quality of their search and on their efficiency are not well understood. This insufficient knowledge limits our ability to design fast and accurate parallel GAs that reach the desired solutions in the shortest time possible. The goal of this dissertation is to advance the understanding of parallel GAs and to provide rational guidelines for their design. The research reported here considered three major types of parallel GAs: simple master-slave algorithms with one population, more sophisticated algorithms with multiple populations, and a hierarchical combination of the first two types. The investigation formulated simple models that accurately predict the quality of the solutions with different parameter settings. The quality predictors were transformed into population-sizing equations, which in turn were used to estimate the execution time of the algorithms.
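The master-slave scheme described above keeps a single population and only distributes the fitness evaluations. A minimal sketch of that idea (the OneMax objective, worker-pool size, and operator choices here are illustrative assumptions, not the dissertation's actual setup):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(bits):
    # Toy objective (OneMax): number of ones in the bitstring.
    return sum(bits)

def master_slave_ga(n_bits=32, pop_size=20, generations=30, workers=4, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(generations):
            # Master farms fitness evaluations out to the workers ("slaves").
            fits = list(pool.map(fitness, pop))

            def select():
                # Binary tournament selection.
                i, j = rng.randrange(pop_size), rng.randrange(pop_size)
                return pop[i] if fits[i] >= fits[j] else pop[j]

            new_pop = []
            while len(new_pop) < pop_size:
                a, b = select(), select()
                cut = rng.randrange(1, n_bits)          # one-point crossover
                child = a[:cut] + b[cut:]
                # Bit-flip mutation with rate 1/n.
                child = [bit ^ (rng.random() < 1.0 / n_bits) for bit in child]
                new_pop.append(child)
            pop = new_pop
        fits = list(pool.map(fitness, pop))
    return max(fits)
```

Only the evaluation step is parallel; selection and variation stay on the master, which is why a master-slave GA searches exactly like its serial counterpart.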
Random-walk computation of similarities between nodes of a graph, with application to collaborative recommendation
 IEEE Transactions on Knowledge and Data Engineering
, 2006
Abstract

Cited by 116 (16 self)
Abstract—This work presents a new perspective on characterizing the similarity between elements of a database or, more generally, nodes of a weighted and undirected graph. It is based on a Markov-chain model of random walk through the database. More precisely, we compute quantities (the average commute time, the pseudoinverse of the Laplacian matrix of the graph, etc.) that provide similarities between any pair of nodes, having the nice property of increasing when the number of paths connecting those elements increases and when the “length” of paths decreases. It turns out that the square root of the average commute time is a Euclidean distance and that the pseudoinverse of the Laplacian matrix is a kernel matrix (its elements are inner products closely related to commute times). A principal component analysis (PCA) of the graph is introduced for computing the subspace projection of the node vectors in a manner that preserves as much variance as possible in terms of the Euclidean commute-time distance. This graph PCA provides a nice interpretation of the “Fiedler vector,” widely used for graph partitioning. The model is evaluated on a collaborative-recommendation task where suggestions are made about which movies people should watch based upon what they watched in the past. Experimental results on the MovieLens database show that the Laplacian-based similarities perform well in comparison with other methods. The model, which nicely fits into the so-called “statistical relational learning” framework, could also be used to compute document or word similarities, and, more generally, it could be applied to machine-learning and pattern-recognition tasks involving a relational database. Index Terms—Graph analysis, graph and database mining, collaborative recommendation, graph kernels, spectral clustering, Fiedler vector, proximity measures, statistical relational learning.
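The commute-time quantity the abstract describes takes only a few lines to compute: with L+ the Moore-Penrose pseudoinverse of the graph Laplacian, the average commute time is n(i, j) = V_G (l+_ii + l+_jj - 2 l+_ij), where V_G is the volume of the graph (the sum of degrees, 2|E| for an unweighted graph). A numpy sketch; the 3-node path graph is just a checkable example:

```python
import numpy as np

def commute_times(adjacency):
    """Average commute times between all node pairs of a connected,
    undirected graph, via the pseudoinverse of its Laplacian."""
    A = np.asarray(adjacency, dtype=float)
    degrees = A.sum(axis=1)
    L = np.diag(degrees) - A
    L_plus = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse
    volume = degrees.sum()              # 2|E| for an unweighted graph
    d = np.diag(L_plus)
    # n(i, j) = volume * (l+_ii + l+_jj - 2 l+_ij)
    return volume * (d[:, None] + d[None, :] - 2.0 * L_plus)

# Path graph 0-1-2: commute time between the endpoints is
# 2|E| * R_eff(0, 2) = 4 * 2 = 8 steps.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
N = commute_times(A)
```

The matrix N is symmetric, and (per the abstract) sqrt(N) is a Euclidean distance matrix between the nodes.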
Local characteristics, entropy and limit theorems for spanning trees and domino tilings via transfer-impedances
, 1993
Abstract

Cited by 81 (0 self)
Let G be a finite graph or an infinite graph on which Z^d acts with finite fundamental domain. If G is finite, let T be a random spanning tree chosen uniformly from all spanning trees of G; if G is infinite, methods from [Pem] show that this still makes sense, producing a random essential spanning forest of G. A method for calculating local characteristics (i.e. finite-dimensional marginals) of T from the transfer-impedance matrix is presented. This differs from the classical matrix-tree theorem in that only small pieces of the matrix (n-dimensional minors) are needed to compute small (n-dimensional) marginals. Calculation of the matrix entries relies on the calculation of the Green’s function for G, which is not a local calculation. However, it is shown how the calculation of the Green’s function may be reduced to a finite computation in the case when G is an infinite graph admitting a Z^d-action with finite quotient. The same computation also gives the entropy of the law of T. These results are applied to the problem of tiling certain lattices by dominos – the so-called dimer problem. Another application of these results is to prove modified versions of conjectures of Aldous [Al2] on the limiting distribution of degrees of a vertex and on the local structure near a vertex of a uniform random spanning tree in a lattice whose dimension is going to infinity. Included is a generalization of moments to tree-valued random variables and criteria for these generalized moments to determine a distribution.
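On a finite graph the transfer-impedance matrix can be built from the Laplacian pseudoinverse (playing the role of the Green's function): H(e, f) = chi_e^T L+ chi_f, where chi_e is the signed incidence vector of an oriented edge, and determinants of minors of H give the finite-dimensional marginals P(e_1, ..., e_k in T) of a uniform spanning tree T. A sketch, using K4 only because its 16 spanning trees make the answer easy to check by hand:

```python
import numpy as np

def transfer_impedance(adjacency, edges):
    """Transfer-impedance matrix H(e, f) = chi_e^T L+ chi_f for the listed
    oriented edges; det of a k x k minor gives the k-dimensional marginal
    of a uniform spanning tree."""
    A = np.asarray(adjacency, dtype=float)
    L = np.diag(A.sum(axis=1)) - A
    L_plus = np.linalg.pinv(L)           # finite-graph Green's function
    n = len(A)
    chis = []
    for (u, v) in edges:
        chi = np.zeros(n)
        chi[u], chi[v] = 1.0, -1.0       # signed incidence vector of (u, v)
        chis.append(chi)
    X = np.array(chis)
    return X @ L_plus @ X.T

# Complete graph K4: 16 spanning trees, each of the 6 edges lies in 8 of
# them, so P(edge in T) = H(e, e) = 1/2.
A4 = np.ones((4, 4)) - np.eye(4)
H = transfer_impedance(A4, [(0, 1)])
```

For two disjoint edges of K4 the 2x2 determinant gives the joint marginal 1/4, matching a direct count (4 of the 16 trees contain both).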
Convergence rates of Markov chains
, 1995
Abstract

Cited by 62 (4 self)
In this paper, we attempt to describe various mathematical techniques which have been used to bound such rates of convergence. In particular, we describe eigenvalue analysis, random walks on groups, coupling, and minorization conditions. Connections are made to modern areas of research wherever possible. Elements of linear algebra, probability theory, group theory, and measure theory are used, but efforts are made to keep the presentation elementary and accessible. Acknowledgements. I thank Eric Belsley for comments and corrections, and thank Persi Diaconis for introducing me to this subject and teaching me so much.
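The first of the surveyed techniques, eigenvalue analysis, is easy to check numerically: for a reversible chain the standard l2 bound gives ||P^t(x, .) - pi||_TV <= (1/2) sqrt((1 - pi(x)) / pi(x)) * lam*^t, where lam* is the second-largest eigenvalue modulus. A sketch on the lazy random walk on a 4-cycle (an illustrative chain, not one from the paper):

```python
import numpy as np

def tv_distance(p, q):
    # Total-variation distance between two probability vectors.
    return 0.5 * np.abs(p - q).sum()

# Lazy simple random walk on a 4-cycle: reversible, uniform stationary law.
n = 4
P = np.zeros((n, n))
for i in range(n):
    P[i, i] = 0.5
    P[i, (i - 1) % n] += 0.25
    P[i, (i + 1) % n] += 0.25
pi = np.full(n, 1.0 / n)

# Second-largest eigenvalue modulus governs the convergence rate.
lam_star = np.sort(np.abs(np.linalg.eigvals(P)))[-2]

def tv_at(t, start=0):
    # Distance to stationarity after t steps from a point mass at `start`.
    dist = np.linalg.matrix_power(P, t)[start]
    return tv_distance(dist, pi)
```

For this chain lam* = 1/2, so the distance to stationarity at least halves each step.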
Comparison of perturbation bounds for the stationary distribution of a Markov chain
 In Proceedings of the Twenty-Sixth International Conference on Very Large Databases
, 2000
Abstract

Cited by 34 (2 self)
The purpose of this paper is to review and compare the existing perturbation bounds for the stationary distribution of a finite, irreducible, homogeneous Markov chain.
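Perturbation bounds of this kind all flow from one exact identity: with A = I - P, the group inverse A# = Z - 1 pi^T (where Z = (I - P + 1 pi^T)^{-1} is the fundamental matrix) and E the perturbation of P, one has pi_tilde^T - pi^T = pi_tilde^T E A#. A numerical check on a toy 3-state chain (the chain and the perturbation E are illustrative, not from the paper):

```python
import numpy as np

def stationary(P):
    """Stationary distribution of an irreducible chain: solve
    (I - P^T) pi = 0 together with sum(pi) = 1, by least squares."""
    n = len(P)
    M = np.vstack([np.eye(n) - P.T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(M, b, rcond=None)
    return pi

def group_inverse(P, pi):
    """Group inverse A# of A = I - P, via the fundamental matrix
    Z = (I - P + 1 pi^T)^{-1}, so that A# = Z - 1 pi^T."""
    n = len(P)
    one_pi = np.outer(np.ones(n), pi)
    return np.linalg.inv(np.eye(n) - P + one_pi) - one_pi

# A 3-state chain and a small row-sum-zero perturbation of it.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
E = np.array([[ 0.01, -0.01, 0.0],
              [ 0.00,  0.00, 0.0],
              [-0.01,  0.01, 0.0]])
pi = stationary(P)
pi_t = stationary(P + E)
A_sharp = group_inverse(P, pi)
# Exact identity: pi_tilde - pi = pi_tilde^T E A#.
```

Taking norms of the identity yields the condition-number bounds the paper compares, each corresponding to a different way of bounding ||E A#||.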
Stochastic Sampling Algorithms for State Estimation of Jump Markov Linear Systems
 IEEE Transactions on Automatic Control
, 2000
Abstract

Cited by 23 (2 self)
Jump Markov linear systems are linear systems whose parameters evolve with time according to a finite-state Markov chain. Given a set of observations, our aim is to estimate the states of the finite-state Markov chain and the continuous (in space) states of the linear system. The computational cost of computing conditional mean or maximum a posteriori (MAP) state estimates of the Markov chain or the state of the jump Markov linear system grows exponentially in the number of observations.
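Such a system is easy to simulate, and the source of the exponential cost is visible directly: exact conditioning must sum over all possible mode sequences, of which there are 2^T for a two-mode chain over T observations. A minimal scalar two-mode sketch (all parameter values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-mode jump Markov linear system:
#   s_t follows a Markov chain with transition matrix Pi,
#   x_t = a[s_t] * x_{t-1} + w_t,   y_t = x_t + v_t.
a = np.array([0.9, -0.5])            # mode-dependent dynamics
Pi = np.array([[0.95, 0.05],
               [0.10, 0.90]])

def simulate(T, q=0.1, r=0.1):
    s, x = 0, 0.0
    modes, ys = [], []
    for _ in range(T):
        s = rng.choice(2, p=Pi[s])                       # mode switch
        x = a[s] * x + rng.normal(scale=np.sqrt(q))      # linear dynamics
        modes.append(int(s))
        ys.append(x + rng.normal(scale=np.sqrt(r)))      # noisy observation
    return np.array(modes), np.array(ys)

def num_mode_sequences(T, n_modes=2):
    # Exact filtering must weight every one of these sequences.
    return n_modes ** T
```

The exponential count is what motivates the paper's stochastic sampling (MCMC) approach in place of exact enumeration.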
Time complexity of evolutionary algorithms for combinatorial optimization: A decade of results
 International Journal of Automation and Computing
, 2007
Abstract

Cited by 22 (10 self)
Abstract: Computational time complexity analyses of Evolutionary Algorithms (EAs) have been performed since the mid-nineties. The first results were related to very simple algorithms, such as the (1+1) EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, and general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years, it has been possible to analyse the (1+1) EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their respective applications are highlighted. Solved problems that were previously open are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered. Keywords: Evolutionary algorithms, computational complexity, combinatorial optimization, evolutionary computation theory.
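The starting point of that line of work, the (1+1) EA, fits in a dozen lines: keep a single parent, flip each bit independently with probability 1/n, and accept the offspring if it is at least as fit. On OneMax its expected optimization time is Theta(n log n). A sketch (the parameter values are the usual textbook choices, not tied to any one result in the survey):

```python
import random

def one_plus_one_ea(n=50, max_iters=100_000, seed=1):
    """(1+1) EA on OneMax: returns (best fitness found, iterations used)."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx = sum(x)
    iters = 0
    while fx < n and iters < max_iters:
        # Standard bit mutation: flip each bit with probability 1/n.
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = sum(y)
        if fy >= fx:              # elitist acceptance
            x, fx = y, fy
        iters += 1
    return fx, iters
```

With n = 50 the expected runtime is on the order of e * n * ln n (roughly 500 iterations), far below the budget above.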
Simulated Annealing with Extended Neighbourhood
, 1991
Abstract

Cited by 21 (14 self)
Simulated Annealing (SA) is a powerful stochastic search method applicable to a wide range of problems for which little prior knowledge is available. It can produce very high quality solutions for hard combinatorial optimization problems. However, the computation time required by SA is very large. Various methods have been proposed to reduce the computation time, but they mainly deal with the careful tuning of SA's control parameters. This paper first analyzes the impact of SA's neighbourhood on its performance and shows that SA with a larger neighbourhood is better than SA with a smaller one. The paper also gives a general model of SA, with both a dynamic generation probability and a dynamic acceptance probability, and proves its convergence. All variants of SA can be unified under such a generalization. Finally, a method of extending SA's neighbourhood is proposed, which uses a discrete approximation to a continuous probability function as the generation function in SA, and several impo...
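The role of the neighbourhood is easiest to see in a generic SA skeleton where the generation function is a parameter: a wider generation distribution covers more of the search space per proposal. A sketch on a toy one-dimensional objective (the objective, cooling schedule, and step sizes are illustrative assumptions, not the paper's setup):

```python
import math
import random

def simulated_annealing(f, x0, neighbour, T0=1.0, alpha=0.95, iters=2000, seed=0):
    """Generic SA: propose from `neighbour`, always accept improvements,
    accept a worsening move with probability exp(-delta / T)."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = T0
    for _ in range(iters):
        y = neighbour(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        T *= alpha                     # geometric cooling
    return best, fbest

# Two generation functions differing only in neighbourhood width.
f = lambda x: (x - 3.0) ** 2
narrow = lambda x, rng: x + rng.uniform(-0.1, 0.1)
wide   = lambda x, rng: x + rng.uniform(-1.0, 1.0)
```

Swapping `narrow` for `wide` changes only the generation function, which is exactly the knob the paper's extended-neighbourhood method turns.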
Learning and Behavioral Stability: An Economic Interpretation of Genetic Algorithms
 Journal of Evolutionary Economics
, 1998
Abstract

Cited by 18 (3 self)
This article tries to connect two separate strands of literature concerning genetic algorithms. On the one hand, extensive research has taken place in mathematics and closely related sciences in order to find out more about the properties of genetic algorithms as stochastic processes. On the other hand, recent economic literature uses genetic algorithms as a metaphor for social learning. This paper addresses the question of what an economist can learn from the mathematical branch of research, especially concerning the convergence and stability properties of the genetic algorithm. It is shown that genetic algorithm learning is a compound of three different learning schemes. First, every particular scheme is analyzed. Then it is pointed out that it is the combination of the three schemes that gives genetic algorithm learning its special flair: a kind of stability somewhere in between asymptotic convergence and explosion.
A Novel Way of Computing Dissimilarities between Nodes of a Graph, with Application to Collaborative Filtering
, 2004
Abstract

Cited by 18 (0 self)
This work presents some general procedures for computing dissimilarities between elements of a database or, more generally, nodes of a weighted, undirected graph. It is based on a Markov-chain model of random walk through the database. The model assigns transition probabilities to the links between elements, so that a random walker can jump from element to element. A quantity called the average first-passage cost measures the average cost incurred by a random walker for reaching element k for the first time when starting from element i.
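With unit step costs, the average first-passage cost reduces to the average first-passage time, which satisfies m(k|k) = 0 and m(k|i) = 1 + sum_j p_ij m(k|j), a linear system solvable directly. A sketch (the 3-node path graph is an illustrative check, not from the paper):

```python
import numpy as np

def first_passage_times(adjacency, k):
    """Average first-passage times m(k|i) of the natural random walk on an
    undirected graph: m(k|k) = 0, m(k|i) = 1 + sum_j p_ij m(k|j) otherwise."""
    A = np.asarray(adjacency, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)   # transition probabilities p_ij
    n = len(A)
    others = [i for i in range(n) if i != k]
    Q = P[np.ix_(others, others)]          # walk restricted to non-target nodes
    # Solve (I - Q) m = 1 for the non-target nodes.
    m_others = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    m = np.zeros(n)
    m[others] = m_others
    return m

# Path graph 0-1-2: from node 0 the walker needs 4 steps on average to
# first reach node 2 (and 3 steps from node 1).
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
m = first_passage_times(A, 2)
```

Edge costs generalize this by replacing the constant 1 on the right-hand side with the expected cost of the next step.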