Results 11–20 of 64
Irregular Coarse-Grain Data Parallelism Under LPARX
 Journal of Scientific Programming
Abstract

Cited by 17 (7 self)
LPARX is a software development tool for implementing dynamic, irregular scientific applications, such as multilevel finite difference methods and particle methods, on high performance MIMD parallel architectures. It supports coarse grain data parallelism and gives the application complete control over specifying arbitrary block decompositions. LPARX provides structural abstraction, representing data decompositions as first-class objects that can be manipulated and modified at run time. LPARX, implemented as a C++ class library, is currently running on diverse MIMD platforms, including the Intel Paragon, Cray C90, IBM SP2, and networks of workstations running under PVM. Software may be developed and debugged on a single processor workstation. 1 Introduction An outstanding problem in scientific computation is how to manage the complexity of converting mathematical descriptions of dynamic, irregular numerical algorithms into high performance applications software. Nonunifo...
Combining competent crossover and mutation operators: A probabilistic model building approach
 In
, 2005
Abstract

Cited by 15 (9 self)
This paper presents an approach to combine competent crossover and mutation operators via probabilistic model building. Both operators are based on the probabilistic model building procedure of the extended compact genetic algorithm (eCGA). The model sampling procedure of eCGA, which mimics the behavior of an idealized recombination—where the building blocks (BBs) are exchanged without disruption—is used as the competent crossover operator. On the other hand, a recently proposed BB-wise mutation operator—which uses the BB partition information to perform local search in the BB space—is used as the competent mutation operator. The resulting algorithm, called the hybrid extended compact genetic algorithm (heCGA), makes use of the problem decomposition information for (1) effective recombination of BBs and (2) effective local search in the BB neighborhood. The proposed approach is tested on different problems that combine the core of three well-known problem difficulty dimensions: deception, scaling, and noise. The results show that, in the absence of domain knowledge, the hybrid approach is more robust than either single-operator-based approach.
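The block-respecting crossover the abstract describes can be sketched as follows: given a building-block partition (supplied by hand here; in eCGA it would come from the learned marginal product model), each whole block is copied intact from one parent or the other, so no block is ever disrupted. The function name and partition format are illustrative, not taken from the paper.

```python
import random

def bb_crossover(parent_a, parent_b, partition, rng=None):
    """Exchange whole building blocks between two parents.

    `partition` is a list of index tuples, e.g. [(0, 1), (2, 3)],
    standing in for the linkage groups an eCGA-style model would
    discover. Each block is taken intact from one parent, so building
    blocks are never split by the crossover point.
    """
    rng = rng or random.Random()
    child = list(parent_a)
    for block in partition:
        if rng.random() < 0.5:          # take this whole block from parent_b
            for i in block:
                child[i] = parent_b[i]
    return child
```

Because blocks move as units, every block in the child is an exact copy of the corresponding block in one of the two parents.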
Analysis of the Numerical Effects of Parallelism on a Parallel Genetic Algorithm
, 1996
Abstract

Cited by 13 (3 self)
This paper examines the effects of relaxed synchronization on both the numerical and parallel efficiency of parallel genetic algorithms (GAs). We describe a coarse-grain geographically structured parallel genetic algorithm. Our experiments provide preliminary evidence that asynchronous versions of these algorithms have a lower run time than synchronous GAs. Our analysis shows that this improvement is due to (1) decreased synchronization costs and (2) high numerical efficiency (e.g., fewer function evaluations) for the asynchronous GAs. This analysis includes a critique of the utility of traditional parallel performance measures for parallel GAs. 1. Introduction Genetic algorithms (GAs) are stochastic search algorithms that have been successfully applied to a variety of optimization problems [5]. Unlike most other optimization procedures, GAs maintain a population of individuals (set of solutions) that are competitively selected to generate new candidates for the global optima. Parallel...
A Comparison of Global and Local Search Methods in Drug Docking
 In Proceedings of the Seventh International Conference on Genetic Algorithms
, 1997
Abstract

Cited by 12 (3 self)
Molecular docking software makes computational predictions of the interaction of molecules. This can be useful, for example, in evaluating the binding of candidate drug molecules to a target molecule from a virus. In the Autodock docking software (Morris et al. 1996), a physical model is used to evaluate the energy of candidate docked configurations, and heuristic search is used to minimize this energy. Previous versions of Autodock used simulated annealing to do this heuristic search. We evaluate the use of the genetic algorithm with local search in Autodock. We investigate several GA-local search (GA-LS) hybrids and compare results with those obtained from simulated annealing. This comparison is done in terms of optimization success, and absolute success in finding the true physical docked configuration. We use these results to test the energy function and evaluate the success of the application. 1 THE DOCKING PROBLEM When two molecules are in close proximity, it can be energeticall...
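A minimal sketch of the GA-LS hybrid pattern this abstract describes, on a toy objective rather than Autodock's energy model: each offspring is refined by greedy hill-climbing, and the improved genotype is written back into the population (the Lamarckian variant). All names, operators, and constants below are illustrative assumptions, not the paper's actual implementation.

```python
import random

def local_search(x, f, step=0.2, iters=30, rng=None):
    """Greedy hill-climbing: accept random perturbations that lower f."""
    rng = rng or random.Random()
    best, best_f = list(x), f(x)
    for _ in range(iters):
        cand = [v + rng.uniform(-step, step) for v in best]
        cf = f(cand)
        if cf < best_f:
            best, best_f = cand, cf
    return best

def ga_ls_step(pop, f, rng=None):
    """One hybrid generation: binary tournament selection, blend
    crossover, then Lamarckian refinement of each child (the locally
    improved genotype replaces the raw offspring)."""
    rng = rng or random.Random()
    def tourney():
        a, b = rng.sample(pop, 2)
        return a if f(a) < f(b) else b
    children = []
    for _ in range(len(pop)):
        p, q = tourney(), tourney()
        child = [(u + v) / 2 for u, v in zip(p, q)]   # blend crossover
        children.append(local_search(child, f, rng=rng))
    return children
```

In a Baldwinian variant the refined fitness would guide selection but the unrefined genotype would be kept; here the local-search result itself survives.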
Resource Allocation for Steerable Parallel Parameter Searches: An Experimental Study
, 2002
Abstract

Cited by 10 (4 self)
Abstract. Computational Grids lend themselves well to parameter sweep applications, in which independent tasks calculate results for points in a parameter space. It is possible for a parameter space to become so large as to pose prohibitive system requirements. In these cases, user-directed steering promises to reduce overall computation time. In this paper, we address an interesting challenge posed by these user-directed searches: how should compute resources be allocated to application tasks as the overall computation is being steered by the user? We present a model for user-directed searches, and then propose a number of resource allocation strategies and evaluate them in simulation. We find that prioritizing the assignments of tasks to compute resources throughout the search can lead to substantial performance improvements. 1
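One simple instance of the prioritized-assignment idea the abstract reports on can be sketched as a greedy list scheduler: tasks carrying user-assigned priorities are dispatched in priority order, each to the worker that becomes free earliest. The data shapes and function name are assumptions for illustration; the paper's actual strategies differ.

```python
import heapq

def assign(tasks, n_workers):
    """Greedy prioritized assignment.

    `tasks` is a list of (priority, est_time, name) tuples; higher
    priority is dispatched first. Each task goes to the worker with the
    earliest free time, tracked in a min-heap of (free_time, worker_id).
    Returns {worker_id: [task names in dispatch order]}.
    """
    workers = [(0.0, w) for w in range(n_workers)]
    heapq.heapify(workers)
    schedule = {w: [] for w in range(n_workers)}
    for prio, est, name in sorted(tasks, key=lambda t: -t[0]):
        free, w = heapq.heappop(workers)       # earliest-available worker
        schedule[w].append(name)
        heapq.heappush(workers, (free + est, w))
    return schedule
```

In a steered search the priorities would be updated as the user redirects the sweep, and unstarted tasks re-sorted before each dispatch round.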
A Study of the Lamarckian Evolution of Recurrent Neural Networks
, 1999
Abstract

Cited by 9 (1 self)
Many frustrating experiences have been encountered when the training of neural networks by local search methods becomes stagnant at local optima. This calls for the development of more satisfactory search methods such as evolutionary search. However, training by evolutionary search can require a long computation time. In certain situations, using Lamarckian evolution, local search and evolutionary search can complement each other to yield a better training algorithm. This paper demonstrates the potential of this evolutionary-learning synergy by applying it to train recurrent neural networks in an attempt to resolve a long-term dependency problem and the inverted pendulum problem. This work also aims at investigating the interaction between local search and evolutionary search when they are combined. It is found that the combinations are particularly efficient when the local search is simple. In the case where no teacher signal is available for the local search to learn the desired task...
Adding Learning to Cellular Genetic Algorithms for Training Recurrent Neural Networks
, 1998
Abstract

Cited by 9 (2 self)
This paper proposes a hybrid optimization algorithm which combines the efforts of local search (individual learning) and cellular genetic algorithms (GAs) for training recurrent neural networks (RNNs). Each weight of an RNN is encoded as a floating point number, and a concatenation of the numbers forms a chromosome. Reproduction takes place locally in a square grid with each grid point representing a chromosome. Two approaches, Lamarckian and Baldwinian mechanisms, for combining cellular GAs and learning have been compared. Different hill-climbing algorithms are incorporated into the cellular GAs as learning methods. These include the real-time recurrent learning (RTRL) algorithm and its simplified versions, and the delta rule. The RTRL algorithm has been successively simplified by freezing some of the weights to form simplified versions. The delta rule, which is the simplest form of learning, has been implemented by considering the RNNs as feedforward networks during learning. The hybrid algori...
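The delta rule mentioned as the simplest learning operator amounts to a single-unit gradient step: for a linear unit, each weight moves in proportion to the error times its input. A minimal sketch, with the function name, learning rate, and single-unit restriction as illustrative assumptions (the paper applies it across an RNN treated as feedforward during learning):

```python
def delta_rule_update(weights, inputs, target, lr=0.1):
    """One delta-rule step for a single linear unit.

    Computes the unit's output y = w . x, then nudges each weight by
    lr * (target - y) * x_i, the classic Widrow-Hoff update.
    """
    y = sum(w * x for w, x in zip(weights, inputs))
    err = target - y
    return [w + lr * err * x for w, x in zip(weights, inputs)]
```

Repeated on a fixed input/target pair, the error shrinks geometrically whenever lr times the squared input norm is below 2, which is why such a cheap rule can serve as the "simple" local search the abstract finds most efficient in combination with the cellular GA.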
An introduction and survey of estimation of distribution algorithms
 SWARM AND EVOLUTIONARY COMPUTATION
, 2011
SEARCH, Black-box Optimization, and Sample Complexity
 In R.K. Belew & M. Vose (Eds.) Foundations of Genetic Algorithms 4
, 1997
Abstract

Cited by 8 (1 self)
The SEARCH (Search Envisioned As Relation & Class Hierarchizing) framework developed elsewhere (Kargupta, 1995) offered an alternate perspective toward black-box optimization (BBO): optimization in the presence of little domain knowledge. The SEARCH framework investigated the conditions essential for transcending the limits of random enumerative search using a framework developed in terms of relations, classes and partial ordering. This paper presents a summary of some of the main results of that work. A closed form bound on the sample complexity in terms of the cardinality of the relation space, class space, desired quality of the solution and the reliability is presented. The two primary lessons of this work are that a BBO algorithm (1) must search for appropriate relations and (2) can only solve the so-called class of order-k delineable problems in polynomial sample complexity. These results are applicable to any black-box search algorithm, including evolutionary optimization techniques. 1 Introducti...
An Indexed Bibliography of Distributed Genetic Algorithms
, 1999
Abstract

Cited by 7 (1 self)
s: Jan. 1995 – Sep. 1998
ACM: ACM Guide to Computing Literature: 1979 – 1993/4
BA: Biological Abstracts: July 1996 – Aug. 1998
CA: Computer Abstracts: Jan. 1993 – Feb. 1995
CCA: Computer & Control Abstracts: Jan. 1992 – Apr. 1998 (except May 95)
ChA: Chemical Abstracts: Jan. 1997 – Dec. 1998
CTI: Current Technology Index: Jan./Feb. 1993 – Jan./Feb. 1994
DAI: Dissertation Abstracts International: Vol. 53 No. 1 – Vol. 56 No. 10 (Apr. 1996)
EEA: Electrical & Electronics Abstracts: Jan. 1991 – Apr. 1998
EI A: The Engineering Index Annual: 1987 – 1992
EI M: The Engineering Index Monthly: Jan. 1993 – Apr. 1998 (except May 1997)
N: Scientific and Technical Aerospace Reports: Jan. 1993 – Dec. 1995 (except Oct. 1995)
P: Index to Scientific & Technical Proceedings: Jan. 1986 – May 1998 (except Nov. 1994)
PA: Physics Abstracts: Jan. 1997 – Sep. 1998
1.1 Your contributions erroneous or missing? The bibliography database is updated on a regular basis and certain...