Results 11-20 of 20
Learning in Multi-Agent Systems
, 2000
Abstract

Cited by 2 (1 self)
There is increased interest in multiagent systems (MASs) for computing robust solutions to complex real-world problems. In this paper we analyze different aspects of multiagent systems, in particular multiagent architectures, multiagent problems, and optimization algorithms for MASs. Furthermore, we present a scheme for mapping multiagent problems to architectures that can be used to solve them, and a mapping from multiagent problem features to optimization algorithms. Finally, we review the solutions of previous work on many different multiagent problems. 1 Introduction The study of multiagent systems enables us to come up with robust solutions to complex problems. In the past, many monolithic approaches were constructed to solve such tasks. As problems have become more complex during the last decades, more modular systems have been developed for solving them. The study of multiagent systems (MASs) is becoming an active field of res...
Improving Constrained Nonlinear Search Algorithms Through Constraint Relaxation
, 2001
Abstract

Cited by 1 (0 self)
In this thesis we study constraint relaxations of various nonlinear programming (NLP) algorithms in order to improve their performance. For both stochastic and deterministic algorithms, we study the relationship between the expected time to find a feasible solution and the constraint relaxation level, build an exponential model based on this relationship, and develop a constraint relaxation schedule in such a way that the total time spent to find a feasible solution for all the relaxation levels is of the same order of magnitude as the time spent for finding a solution of similar quality using the last relaxation level alone. When the ...
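The exponential relationship described here can be turned into a concrete schedule. The sketch below is illustrative, not the thesis's code: it assumes a model T(eps) = a * exp(-b * eps) for the expected time to reach feasibility at relaxation level eps (the names `a`, `b`, and `rho` are assumptions). Choosing the levels so that each one costs a fixed fraction `rho` of the next, tighter one makes them equally spaced in eps and bounds the total time by a constant multiple of the time at the final level alone.

```python
import math

def relaxation_schedule(b, eps_final, n_levels, rho=0.5):
    """Relaxation levels, loosest first, under the assumed exponential model
    T(eps) = a * exp(-b * eps).  Levels are spaced so that
    T(level k) = rho * T(level k+1); the total time over all levels is then
    at most T(eps_final) / (1 - rho), i.e. the same order of magnitude as
    solving at the final (tightest) level alone."""
    step = math.log(1.0 / rho) / b
    return [eps_final + (n_levels - 1 - k) * step for k in range(n_levels)]

def expected_time(a, b, eps):
    """Assumed model of expected search time at relaxation level eps."""
    return a * math.exp(-b * eps)
```

For example, with `rho = 0.5` and five levels, the per-level times form the geometric series (1/16 + 1/8 + 1/4 + 1/2 + 1) times the final-level time, so the whole schedule costs less than twice the last level.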
Optimizing Neural Networks For The Generation Of Block Designs
 JAGOTA A
, 1997
Abstract
This work describes the evaluation of several search algorithms, based on optimizing neural networks, as applied to a family of problems: the generation of block designs. Given a set (v, b, u) of parameters (v rows, b columns and u ones), a block design is any v × b binary configuration that has the following properties: u ones, r ones per row, k ones per column, and correlation λ between pairs of rows. The values [u, r, k, λ] are called here the descriptors of the design and, since they have to be integers, they impose admissibility constraints on the independent parameters. Admissibility, though, does not imply existence. An optimizing algorithm can be decomposed into a cost function, which shapes the search landscape, and a search strategy that defines the way to explore it. This work proposes a set of cost functions, based on the number of pairs as a measure of the distribution of each of the properties of a design. The resulting structure, then, is straightforwardly mapped onto a...
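The admissibility constraints on (v, b, u) can be made concrete. The helper below is hypothetical (not the paper's code): the descriptors must satisfy r = u/v, k = u/b, and λ = r(k-1)/(v-1), all integers, and, as the abstract notes, passing these checks does not guarantee that a design exists.

```python
def descriptors(v, b, u):
    """Derive the descriptors [u, r, k, lam] of a candidate (v, b, u) block
    design, or return None if the admissibility constraints fail.  r is the
    number of ones per row, k the ones per column, and lam the pairwise row
    correlation; all three must be integers."""
    if u % v != 0 or u % b != 0:
        return None
    r, k = u // v, u // b
    lam, rem = divmod(r * (k - 1), v - 1)
    return [u, r, k, lam] if rem == 0 else None
```

For instance, `descriptors(7, 7, 21)` yields `[21, 3, 3, 1]`, the parameters of the Fano plane, while (7, 7, 20) is inadmissible because 20 ones cannot be spread evenly over 7 rows.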
Improvement Strategies for the F-Race Algorithm: Sampling Design and Iterative Refinement
Abstract
Abstract. Finding appropriate values for the parameters of an algorithm is a challenging, important, and time-consuming task. While typically parameters are tuned by hand, recent studies have shown that automatic tuning procedures can effectively handle this task and often find better parameter settings. F-Race has been proposed specifically for this purpose, and it has proven to be very effective in a number of cases. F-Race is a racing algorithm that starts by considering a number of candidate parameter settings and eliminates inferior ones as soon as enough statistical evidence arises against them. In this paper, we propose two modifications to the usual way of applying F-Race that, on the one hand, make it suitable for tuning tasks with a very large number of initial candidate parameter settings and, on the other hand, allow a significant reduction of the number of function evaluations without any major loss in solution quality. We evaluate the proposed modifications on a number of stochastic local search algorithms and we show their effectiveness.
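The racing idea can be sketched in a few lines. The code below is a simplified illustration in the spirit of F-Race, not the algorithm itself: it uses a crude paired z-style test per candidate instead of the Friedman test F-Race actually applies, and `evaluate(cand, inst)` is an assumed user-supplied cost function (lower is better).

```python
import statistics

def race(candidates, evaluate, instances, min_blocks=5, z=2.0):
    """Evaluate candidates one instance (block) at a time; once at least
    `min_blocks` blocks are seen, drop any candidate whose mean paired cost
    difference against the current best exceeds z standard errors."""
    costs = {c: [] for c in candidates}
    for n, inst in enumerate(instances, start=1):
        for c in costs:
            costs[c].append(evaluate(c, inst))
        if n < min_blocks or len(costs) == 1:
            continue
        best = min(costs, key=lambda c: statistics.mean(costs[c]))
        survivors = {}
        for c, xs in costs.items():
            diffs = [x - y for x, y in zip(xs, costs[best])]
            margin = 0.0 if c == best else z * statistics.stdev(diffs) / n ** 0.5
            if c == best or statistics.mean(diffs) <= margin:
                survivors[c] = xs
        costs = survivors
    return min(costs, key=lambda c: statistics.mean(costs[c]))
```

Evaluating candidates in lockstep over the same instances (blocking) is what lets the paired differences be compared directly, mirroring the blocked design of the real F-Race.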
Using Optimal Dependency Trees for Combinatorial Optimization: Learning the Structure of the Search Space
, 1997
Abstract
Many combinatorial optimization algorithms have no mechanism to capture inter-parameter dependencies. However, modeling such dependencies may allow an algorithm to concentrate its sampling more effectively on regions of the search space which have appeared promising in the past. We present an algorithm which incrementally learns second-order probability distributions from good solutions seen so far, uses these statistics to generate optimal (in terms of maximum likelihood) dependency trees to model these distributions, and then stochastically generates new candidate solutions from these trees. We test this algorithm on a variety of optimization problems. Our results indicate superior performance over other tested algorithms that either (1) do not explicitly use these dependencies, or (2) use these dependencies to generate a more restricted class of dependency graphs. Scott Davies was supported by a Graduate Student Research Fellowship from the National Science Foundation. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the National Science Foundation.
et de Développements en Intelligence Artificielle
, 2009
Abstract
A comparison to progressive approximation, the aggregation approach, and simulated annealing
Reinforcement Learning and Local Search: A Case Study
, 1997
Abstract
We describe a reinforcement learning-based variation to the combinatorial optimization technique known as local search. The hill-climbing aspect of local search uses the problem's primary cost function to guide search via local neighborhoods to high-quality solutions. In complicated optimization problems, however, other problem characteristics can also help guide the search process. In this report we present an approach to constructing more general, derived cost functions for combinatorial optimization problems using reinforcement learning. Such derived cost functions integrate a variety of problem characteristics into a single hill-climbing function. We illustrate our technique by developing several such functions for the Dial-A-Ride Problem, a variant of the well-known Traveling Salesman Problem. 1 Introduction Combinatorial optimization problems are fundamental in many areas of computer science, engineering, and operations research. Solving such problems involves searching a discrete...
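The idea of a derived cost function can be sketched generically. In the snippet below (all names hypothetical), `derived_cost` folds several problem features into one hill-climbing objective as a weighted sum; in the report the weights would be learned by reinforcement learning, whereas here they are fixed by hand purely for illustration.

```python
def derived_cost(weights, features):
    """Combine several problem characteristics into a single hill-climbing
    objective.  `features` are functions mapping a solution to a number;
    in the report, the weights would be learned by reinforcement learning."""
    return lambda s: sum(w * f(s) for w, f in zip(weights, features))

def hill_climb(start, neighbors, cost, max_steps=1000):
    """Plain local search: repeatedly move to the cheapest neighbor until
    no neighbor improves the derived cost."""
    current = start
    for _ in range(max_steps):
        best = min(neighbors(current), key=cost, default=current)
        if cost(best) >= cost(current):
            break
        current = best
    return current
```

On a toy integer problem with primary cost x² and a secondary feature |x|, `hill_climb(10, lambda x: [x - 1, x + 1], derived_cost([1.0, 0.1], [lambda x: x * x, abs]))` descends to 0.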
Design Space Characterization in Micro-Architecture Design and Implementation
Abstract
Abstract—Modern VLSI designs contain both micro-architecture parameters and implementation parameters. These can be used to facilitate verification and relaxed design specifications. We concentrate on extending prior work in understanding design parameterization and using those design knobs to make global optimizations. This paper discusses the application of machine learning techniques to improve the efficiency and quality of design space characterization and optimization. Specifically, we propose improvements to the circuit energy-versus-delay characterization.
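One concrete step in energy-versus-delay characterization is extracting the Pareto-optimal design points from a swept design space. The helper below is an illustrative sketch (not from the paper), treating lower energy and lower delay as better:

```python
def pareto_front(points):
    """Return the (energy, delay) points not dominated by any other point:
    a point is dropped when some other point is at least as good in both
    dimensions (lower is better) and not identical to it."""
    def dominated(p, q):
        return q[0] <= p[0] and q[1] <= p[1] and q != p
    return [p for p in points if not any(dominated(p, q) for q in points)]
```

Each surviving point represents a design configuration where energy cannot be reduced without paying delay, which is the curve the characterization aims to trace.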
Placement and Routing for 3D-FPGAs using Reinforcement Learning and Support Vector Machines
Abstract
The primary advantage of a 3D-FPGA over a 2D-FPGA is that the vertical stacking of active layers reduces the Manhattan distance between components relative to a 2D placement. This results in a considerable reduction in total interconnect length. Reduced wire length in turn leads to reduced delay and hence improved performance and speed. The design of an efficient placement and routing algorithm for 3D-FPGAs that fully exploits this advantage is a problem of deep research and commercial interest. In this paper, an efficient placement and routing algorithm is proposed for 3D-FPGAs which yields better results in terms of total interconnect length and channel width. The proposed algorithm employs two important techniques, namely Reinforcement Learning (RL) and Support Vector Machines (SVMs), to perform the placement. The proposed algorithm is implemented and tested on standard benchmark circuits, and the results obtained are encouraging. This is one of the very few instances where reinforcement learning has been used to solve a problem in the area of VLSI.