Results 1–10 of 139
Connectionist Learning Procedures
 ARTIFICIAL INTELLIGENCE
, 1989
Abstract

Cited by 339 (6 self)
A major goal of research on networks of neuronlike processing units is to discover efficient learning procedures that allow these networks to construct complex internal representations of their environment. The learning procedures must be capable of modifying the connection strengths in such a way that internal units which are not part of the input or output come to represent important features of the task domain. Several interesting gradient-descent procedures have recently been discovered. Each connection computes the derivative, with respect to the connection strength, of a global measure of the error in the performance of the network. The strength is then adjusted in the direction that decreases the error. These relatively simple, gradient-descent learning procedures work well for small tasks and the new challenge is to find ways of improving their convergence rate and their generalization abilities so that they can be applied to larger, more realistic tasks.
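The update rule this abstract describes, each connection strength moving against the derivative of a global error measure, can be sketched minimally. This is illustrative code, not from the paper; the single linear unit, learning rate, and task are all assumptions:

```python
# Minimal gradient-descent sketch: one linear unit, squared error.
# Each step adjusts the weight against dE/dw, as the abstract describes.

def train(samples, lr=0.1, epochs=100):
    """samples: list of (x, target) pairs; returns the learned weight w."""
    w = 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = w * x - target   # unit's output minus desired output
            grad = error * x         # dE/dw for E = 0.5 * error**2
            w -= lr * grad           # step in the direction that decreases E
    return w

# Fit y = 2x from a few examples.
w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(w, 3))
```

With this toy task the weight converges to 2.0; the abstract's point is that the same per-connection rule scales to hidden units, where convergence rate and generalization become the hard problems.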
The Use of Active Shape Models For Locating Structures in Medical Images
, 1994
Abstract

Cited by 292 (23 self)
This paper describes a technique for building compact models of the shape and appearance of flexible objects (such as organs) seen in 2D images. The models are derived from the statistics of sets of labelled images of examples of the objects.
Strongly Typed Genetic Programming
 Evolutionary Computation
, 1994
Abstract

Cited by 233 (1 self)
Genetic programming is a powerful method for automatically generating computer programs via the process of natural selection [Koza 92]. However, it has the limitation known as "closure", i.e. that all the variables, constants, arguments for functions, and values returned from functions must be of the same data type. To correct this deficiency, we introduce a variation of genetic programming called "strongly typed" genetic programming (STGP). In STGP, variables, constants, arguments, and returned values can be of any data type with the provision that the data type for each such value be specified beforehand. This allows the initialization process and the genetic operators to only generate syntactically correct parse trees. Key concepts for STGP are generic functions, which are not true strongly typed functions but rather templates for classes of such functions, and generic data types, which are analogous. To illustrate STGP, we present four examples involving vector/matrix manip...
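The core STGP constraint, that tree generation only ever produces type-correct parse trees, can be illustrated with a toy primitive set. This is a hypothetical sketch, not the paper's implementation; the primitives and types are invented for illustration:

```python
# STGP-style tree growth: every primitive declares its return type and
# argument types, and a slot of type T only accepts nodes returning T.
import random

# (name, return_type, arg_types); terminals take no arguments.
FUNCTIONS = [
    ("add", "float", ("float", "float")),
    ("dot", "float", ("vec", "vec")),
    ("scale", "vec", ("float", "vec")),
]
TERMINALS = [("x", "float", ()), ("v", "vec", ())]

def grow(wanted_type, depth, rng):
    """Return a random type-correct parse tree (nested tuples)."""
    pool = TERMINALS if depth == 0 else FUNCTIONS + TERMINALS
    choices = [p for p in pool if p[1] == wanted_type]
    name, _, arg_types = rng.choice(choices)
    return (name,) + tuple(grow(t, depth - 1, rng) for t in arg_types)

rng = random.Random(0)
tree = grow("float", depth=3, rng=rng)
print(tree)
```

Because candidates are filtered by the slot's type before sampling, no ill-typed tree can ever be built, which is exactly the property the abstract claims for initialization and the genetic operators.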
Domain-Independent Extensions to GSAT: Solving Large Structured Satisfiability Problems
 PROC. IJCAI-93
, 1993
Abstract

Cited by 215 (11 self)
GSAT is a randomized local search procedure for solving propositional satisfiability problems (Selman et al. 1992). GSAT can solve hard, randomly generated problems that are an order of magnitude larger than those that can be handled by more traditional approaches such as the Davis-Putnam procedure. GSAT also efficiently solves encodings of graph coloring problems, N-queens, and Boolean induction. However, GSAT does not perform as well on handcrafted encodings of blocks-world planning problems and formulas with a high degree of asymmetry. We present three strategies that dramatically improve GSAT's performance on such formulas. These strategies, in effect, manage to uncover hidden structure in the formula under consideration, thereby significantly extending the applicability of the GSAT algorithm.
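The basic GSAT procedure this paper extends can be sketched compactly. This follows Selman et al.'s original greedy-flip scheme, not the three extended strategies of this paper; the clause encoding and parameter values are assumptions:

```python
# Basic GSAT sketch: clauses are tuples of nonzero ints, DIMACS-style;
# literal v means variable v is true, -v means it is false.
import random

def gsat(clauses, n_vars, max_tries=10, max_flips=100, seed=0):
    rng = random.Random(seed)
    sat = lambda a, cl: any((lit > 0) == a[abs(lit)] for lit in cl)
    for _ in range(max_tries):
        # start each try from a fresh random truth assignment
        a = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(max_flips):
            if all(sat(a, cl) for cl in clauses):
                return a
            def score(v):
                # satisfied-clause count if variable v were flipped
                a[v] = not a[v]
                s = sum(sat(a, cl) for cl in clauses)
                a[v] = not a[v]
                return s
            # greedily flip the variable that satisfies the most clauses
            best = max(range(1, n_vars + 1), key=score)
            a[best] = not a[best]
    return None

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): only x1 = x2 = True works.
model = gsat([(1, 2), (-1, 2), (1, -2)], n_vars=2)
print(model)
```

The restart loop (`max_tries`) is what makes the procedure randomized rather than a pure hill-climber: when greedy flipping stalls, GSAT abandons the assignment and starts over.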
On Evolution, Search, Optimization, Genetic Algorithms and Martial Arts: Towards Memetic Algorithms
, 1989
Abstract

Cited by 187 (10 self)
Short abstract, isn't it? P.A.C.S. numbers 05.20, 02.50, 87.10 1 Introduction Large Numbers "...the optimal tour displayed (see Figure 6) is the possible unique tour having one arc fixed from among 10^655 tours that are possible among 318 points and have one arc fixed. Assuming that one could possibly enumerate 10^9 tours per second on a computer it would thus take roughly 10^639 years of computing to establish the optimality of this tour by exhaustive enumeration." This quote shows the real difficulty of a combinatorial optimization problem. The huge number of configurations is the primary difficulty when dealing with one of these problems. The quote belongs to M. W. Padberg and M. Grötschel, Chap. 9, "Polyhedral computations", from the book The Traveling Salesman Problem: A Guided Tour of Combinatorial Optimization [124]. It is interesting to compare the number of configurations of real-world problems in combinatorial optimization with those large numbers arising in Cosmol...
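The quoted figure checks out with a one-line computation (a sketch; 365-day years are assumed):

```python
# Enumerating 10^655 tours at 10^9 tours per second, expressed in years.
import math

seconds_per_year = 60 * 60 * 24 * 365           # ~3.15e7
log10_years = 655 - 9 - math.log10(seconds_per_year)
print(round(log10_years, 1))                    # ~10^638.5, i.e. roughly 10^639 years
```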
Test-Data Generation Using Genetic Algorithms
 Software Testing, Verification And Reliability
, 1999
Abstract

Cited by 128 (0 self)
This paper presents a technique that uses a genetic algorithm for automatic test-data generation. A genetic algorithm is a heuristic that mimics the evolution of natural species in searching for the optimal solution to a problem. In the test-data generation application, the solution sought by the genetic algorithm is test data that causes execution of a given statement, branch, path, or definition-use pair in the program under test. The test-data-generation technique was implemented in a tool called TGen in which parallel processing was used to improve the performance of the search. To experiment with TGen, a random test-data generator, called Random, was also implemented. Both TGen and Random were used to experiment with the generation of test data for statement and branch coverage of six programs.
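The search described here can be sketched in miniature. This is a hypothetical illustration in the spirit of the approach, not the TGen tool itself; the target branch, branch-distance fitness, and all parameters are assumptions:

```python
# GA-based test-data generation sketch: evolve an integer input until
# execution reaches the target branch of the program under test.
import random

def takes_target_branch(x):
    return x == 42                      # the branch we want covered

def fitness(x):
    return abs(x - 42)                  # branch distance: 0 when covered

def ga_search(pop_size=20, generations=500, seed=1):
    rng = random.Random(seed)
    pop = [rng.randint(-1000, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)           # best (smallest distance) first
        if fitness(pop[0]) == 0:
            return pop[0]               # found a covering test datum
        survivors = pop[: pop_size // 2]
        # refill the population with mutated copies of the survivors
        pop = survivors + [s + rng.randint(-20, 20) for s in survivors]
    return None

x = ga_search()
print(x, takes_target_branch(x))
```

The fitness function is the key design choice: it turns "did we execute the branch" (a yes/no signal) into a graded distance the genetic search can descend.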
Theoretical and Numerical Constraint-Handling Techniques used with Evolutionary Algorithms: A Survey of the State of the Art
, 2002
Abstract

Cited by 102 (21 self)
This paper provides a comprehensive survey of the most popular constraint-handling techniques currently used with evolutionary algorithms. We review approaches ranging from simple variations of a penalty function to more sophisticated, biologically inspired ones that emulate the immune system, culture, or ant colonies. Besides briefly describing each of these approaches (or groups of techniques), we provide some criticism regarding their highlights and drawbacks. A small comparative study is also conducted in order to assess the performance of several penalty-based approaches with respect to a dominance-based technique proposed by the author, and with respect to some mathematical programming approaches. Finally, we provide some guidelines regarding how to select the most appropriate constraint-handling technique for a certain application, and we conclude with some of the most promising paths of future research in this area.
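The simplest family the survey covers, static penalty functions, can be sketched as follows. The problem, constraint, and penalty coefficient are illustrative assumptions, not from the paper:

```python
# Static penalty sketch: add weighted constraint violations to the
# objective, so any unconstrained optimizer can handle the constraint.

def objective(x):
    return (x - 3.0) ** 2              # minimize; unconstrained optimum at x = 3

def violation(x):
    return max(0.0, x - 2.0)           # constraint: x <= 2

def penalized(x, r=100.0):
    return objective(x) + r * violation(x) ** 2

# A crude grid search stands in for the evolutionary algorithm here,
# just to show the penalty pulling the optimum back toward x = 2.
best = min((i / 100.0 for i in range(-500, 501)), key=penalized)
print(best)
```

Note the result lands slightly above the bound (x = 2.01 on this grid): a finite penalty coefficient trades a small violation for a better objective, which is one of the drawbacks the survey's criticism of penalty methods addresses.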
What Makes a Problem Hard for a Genetic Algorithm? Some Anomalous Results and Their Explanation
 Machine Learning
, 1993
Abstract

Cited by 102 (3 self)
Abstract. What makes a problem easy or hard for a genetic algorithm (GA)? This question has become increasingly important as people have tried to apply the GA to ever more diverse types of problems. Much previous work on this question has studied the relationship between GA performance and the structure of a given fitness function when it is expressed as a Walsh polynomial. The work of Bethke, Goldberg, and others has produced certain theoretical results about this relationship. In this article we review these theoretical results, and then discuss a number of seemingly anomalous experimental results reported by Tanese concerning the performance of the GA on a subclass of Walsh polynomials, some members of which were expected to be easy for the GA to optimize. Tanese found that the GA was poor at optimizing all functions in this subclass, that a partitioning of a single large population into a number of smaller independent populations seemed to improve performance, and that hillclimbing outperformed both the original and partitioned forms of the GA on these functions. These results seemed to contradict several commonly held expectations about GAs. We begin by reviewing schema processing in GAs. We then give an informal description of how Walsh analysis and Bethke's Walsh-schema transform relate to GA performance, and we discuss the relevance of this analysis for GA applications in optimization and machine learning. We then describe Tanese's surprising results, examine them experimentally and theoretically, and propose and evaluate some explanations. These explanations lead to a more fundamental question about GAs: what are the features of problems that determine the likelihood of successful GA performance?
Genetic Hybrids for the Quadratic Assignment Problem
 DIMACS Series in Mathematics and Theoretical Computer Science
, 1993
Abstract

Cited by 89 (0 self)
A new hybrid procedure that combines genetic operators with existing heuristics is proposed to solve the Quadratic Assignment Problem (QAP). Genetic operators are found to improve the performance of both local search and tabu search. Some guidelines are also given to design good hybrid schemes. These hybrid algorithms are then used to improve on the best known solutions of many test problems in the literature. 1. Introduction The quadratic assignment problem (QAP) can be stated as: min_{φ ∈ P(n)} Σ_{i=1}^{n} Σ_{j=1}^{n} a_{ij} b_{φ(i)φ(j)}, where A = (a_{ij}) and B = (b_{kl}) are two n × n matrices and P(n) is the set of all permutations of {1, ..., n}. Matrix A is often referred to as a distance matrix between sites, and B as a flow matrix between objects. In most cases, the matrices A and B are symmetrical with a null diagonal. A permutation may then be interpreted as an assignment of objects to sites with a quadratic cost associated with it. There are many applications that can be fo...
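The QAP objective can be evaluated directly for a given permutation φ, which is the inner loop any of these hybrids would call. The tiny instance below is illustrative, not from the paper:

```python
# QAP cost: sum over site pairs (i, j) of distance a[i][j] times the flow
# b[phi(i)][phi(j)] between the objects assigned to those sites.
from itertools import permutations

def qap_cost(A, B, phi):
    n = len(A)
    return sum(A[i][j] * B[phi[i]][phi[j]]
               for i in range(n) for j in range(n))

# Tiny symmetric instance with null diagonals, as described above.
A = [[0, 2, 3], [2, 0, 1], [3, 1, 0]]   # distances between sites
B = [[0, 5, 1], [5, 0, 2], [1, 2, 0]]   # flows between objects
best = min(permutations(range(3)), key=lambda p: qap_cost(A, B, p))
print(best, qap_cost(A, B, best))
```

At n = 3 exhaustive enumeration is trivial; the paper's hybrids exist because |P(n)| = n! makes this loop infeasible for realistic instances, so the search over permutations must be heuristic.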
An Overview of Genetic Algorithms: Part 1, Fundamentals
, 1993
Abstract

Cited by 81 (1 self)
this article may be reproduced for commercial purposes. 1 Introduction