Results 1–10 of 61
Very Large-Scale Neighborhood Search for the Quadratic Assignment Problem
Discrete Applied Mathematics, 2002
Abstract

Cited by 143 (13 self)
The Quadratic Assignment Problem (QAP) consists of assigning n facilities to n locations so as to minimize the total weighted cost of interactions between facilities. The QAP arises in many diverse settings, is known to be NP-hard, and can be solved to optimality only for fairly small instances (typically, n < 25). Neighborhood search algorithms are the most popular heuristic algorithms for solving larger instances of the QAP. The most extensively used neighborhood structure for the QAP is the 2-exchange neighborhood. This neighborhood is obtained by swapping the locations of two facilities and thus has size O(n²). Previous efforts to explore larger neighborhoods (such as 3-exchange or 4-exchange neighborhoods) were not very successful, as it took too long to evaluate the larger set of neighbors. In this paper, we propose very large-scale neighborhood (VLSN) search algorithms where the size of the neighborhood is very large, and we propose a novel search procedure to heuristically enumerate good neighbors. Our search procedure relies on the concept of an improvement graph, which allows us to evaluate neighbors much faster than existing methods. We present extensive computational results of our algorithms on standard benchmark instances. These investigations reveal that very large-scale neighborhood search algorithms give consistently better solutions than the popular 2-exchange neighborhood algorithms, considering both solution time and solution accuracy.
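As an illustration of the 2-exchange neighborhood described in this abstract (not the paper's improvement-graph VLSN procedure), a minimal first-improvement descent might look like the following sketch; the function names and data layout are hypothetical:

```python
import itertools

def qap_cost(F, D, p):
    """QAP objective: F[i][j] is the flow between facilities i and j,
    D[a][b] the distance between locations a and b, and p[i] the
    location assigned to facility i."""
    n = len(p)
    return sum(F[i][j] * D[p[i]][p[j]] for i in range(n) for j in range(n))

def two_exchange_descent(F, D, p):
    """First-improvement local search over the O(n^2) 2-exchange
    neighborhood: each neighbor swaps the locations of two facilities."""
    p = list(p)
    best = qap_cost(F, D, p)
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(p)), 2):
            p[i], p[j] = p[j], p[i]          # tentative swap
            cost = qap_cost(F, D, p)
            if cost < best:
                best, improved = cost, True  # keep the swap
            else:
                p[i], p[j] = p[j], p[i]      # undo
    return p, best
```

Note that each neighbor is evaluated in O(n²) time here; the improvement-graph machinery the paper proposes exists precisely to avoid this per-neighbor cost in much larger neighborhoods.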
VLSI cell placement techniques
ACM Computing Surveys, 1991
Abstract

Cited by 88 (0 self)
The VLSI cell placement problem is known to be NP-complete. A wide repertoire of heuristic algorithms exists in the literature for efficiently arranging the logic cells on a VLSI chip. The objective of this paper is to present a comprehensive survey of the various cell placement techniques, with emphasis on standard cell and macro ...
Dragon2000: Standard-Cell Placement Tool for Large Industry Circuits
In Proc. Int. Conf. on Computer-Aided Design, 2000
Abstract

Cited by 84 (0 self)
In this paper, we develop a new standard-cell placement tool, Dragon2000, to solve the large-scale placement problem effectively. A top-down hierarchical approach is used in Dragon2000. State-of-the-art partitioning tools are tightly integrated with wirelength-minimization techniques to achieve superior performance. We argue that net-cut minimization is a good and important shortcut for solving the large-scale placement problem. Experimental results show that minimizing net-cut is more important than greedily obtaining a wirelength-optimal placement at intermediate hierarchical levels. We run Dragon2000 on the recently released large benchmark suite ISPD98 as well as on MCNC circuits. For circuits with more than 100k cells, compared to iTools1.4.0, Dragon2000 produces slightly better placement results (1.4%) while spending much less time (2× speedup). This is also the first published placement result on the publicly available large industrial circuits.
Optimal Partitioners and End-Case Placers for Standard-Cell Layout
IEEE Trans. on CAD, 2000
Abstract

Cited by 57 (21 self)
We study alternatives to classic FM-based partitioning algorithms in the context of end-case processing for top-down standard-cell placement. While the divide step in the top-down divide-and-conquer is usually performed heuristically, we observe that optimal solutions can be found for many sufficiently small partitioning instances. Our main motivation is that small partitioning instances frequently contain multiple cells that are larger than the prescribed partitioning tolerance, and that cannot be moved iteratively while preserving the legality of a solution. To sample the suboptimality of FM-based partitioning algorithms, we focus on optimal partitioning and placement algorithms based on either enumeration or branch-and-bound that are invoked for instances below prescribed size thresholds, ...
Improved Algorithms for Hypergraph Bipartitioning
In Proceedings of the Asia and South Pacific Design Automation Conference, 2000
Abstract

Cited by 57 (15 self)
Multilevel Fiduccia-Mattheyses (MLFM) hypergraph partitioning [3, 22, 24] is a fundamental optimization in VLSI CAD physical design. The leading implementation, hMetis [23], has since 1997 proved itself substantially superior in both runtime and solution quality to even very recent works (e.g., [13, 17, 25]). In this work, we present two sets of results: (i) new techniques for flat FM-based hypergraph partitioning (which is the core of multilevel implementations), and (ii) a new multilevel implementation that offers leading-edge performance. Our new techniques for flat partitioning confirm the conjecture from [10] that specialized partitioning heuristics may be able to actively exploit fixed nodes in partitioning instances arising in the driving top-down placement context. Our FM variant is competitive with traditional FM on instances without terminals [1] and considerably superior on instances with fixed nodes (i.e., arising during top-down placement [8]). Our multilevel ...
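For readers unfamiliar with the flat FM core this abstract refers to, the two basic quantities it manipulates are the hyperedge cut size and the per-vertex move gain. A minimal sketch, with a hypothetical data layout and no relation to the paper's own implementation:

```python
def cut_size(nets, part):
    """Number of hyperedges with vertices on both sides of a 2-way
    partition; part[v] in {0, 1} is the side of vertex v."""
    return sum(1 for net in nets if len({part[v] for v in net}) > 1)

def fm_gain(nets, part, v):
    """FM gain of moving v to the other side, i.e. the resulting cut-size
    reduction: +1 for each incident net where v is the only vertex on its
    side (the net leaves the cut), -1 for each incident net lying entirely
    on v's side (the net enters the cut)."""
    gain = 0
    for net in nets:
        if v not in net:
            continue
        same = sum(1 for u in net if part[u] == part[v])
        if same == 1:
            gain += 1
        elif same == len(net):
            gain -= 1
    return gain
```

A real FM pass maintains these gains incrementally in a bucket structure rather than recomputing them from scratch after each move.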
Congestion Minimization During Placement
In International Symposium on Physical Design, 2000
Abstract

Cited by 54 (10 self)
Typical placement objectives involve reducing net-cut cost or minimizing wirelength. Congestion minimization is the least understood objective; however, it models routability most accurately. In this paper, we study the congestion-minimization problem during placement. First, we show that a global placement with minimum wirelength has minimum total congestion. We also show that minimizing wirelength may (and in general, will) create locally congested regions. We test seven different congestion-minimization objectives, and we also propose a post-processing stage to minimize congestion. Our main contributions and results can be summarized as follows: 1. Among a variety of cost functions and methods for congestion minimization (including several currently used in industry), wirelength alone followed by post-processing congestion minimization works the best and is one of the fastest. 2. Cost functions such as a hybrid of length plus congestion (commonly believed to be very effective) do not always work very we...
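The wirelength objective this abstract compares congestion against is commonly the half-perimeter wirelength (HPWL). A minimal sketch, with hypothetical data structures rather than the paper's own code:

```python
def hpwl(nets, pos):
    """Half-perimeter wirelength: for each net, the half-perimeter of the
    bounding box of its pins; pos[v] is the (x, y) placement of cell v."""
    total = 0.0
    for net in nets:
        xs = [pos[v][0] for v in net]
        ys = [pos[v][1] for v in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total
```

HPWL aggregates length globally and says nothing about where routing demand piles up, which is consistent with the abstract's observation that a wirelength-minimal placement can still contain locally congested regions.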
Routability-Driven White Space Allocation for Fixed-Die Standard-Cell Placement
ISPD, 2002
Abstract

Cited by 44 (0 self)
The use of white space in fixed-die standard-cell placement is an effective way to improve routability. In this paper, we present a white space allocation approach that dynamically assigns white space according to the congestion distribution of the placement. In the top-down placement flow, white space is assigned to congested regions using a smooth allocating function. A post-allocation optimization step further improves placement quality. Experimental results show that the proposed allocation approach, combined with a multilevel placement flow, significantly improves placement routability and layout quality.
Design tools for 3D integrated circuits
In Proceedings of the Asia and South Pacific Design Automation Conference, 2003
Abstract

Cited by 36 (2 self)
We present a set of design tools for 3D integration. Using these tools (a 3D standard-cell placement tool, a global routing tool, and a layout editor), we have targeted existing standard-cell circuit netlists for fabrication using wafer bonding. We have analyzed the performance of several circuits using these tools and find that 3D integration provides significant benefits. For example, relative to single-die placement, we observe on average a 28% to 51% reduction in total wire length.
A very large scale neighborhood search algorithm for the quadratic assignment problem
Journal on Computing, 2002
Abstract

Cited by 23 (4 self)
Many optimization problems of practical interest are computationally intractable. Therefore, a practical approach to solving such problems is to employ heuristic (approximation) algorithms that can find nearly optimal solutions within a reasonable amount of computation time. An improvement algorithm generally starts with a feasible solution and iteratively tries to obtain a better solution. Neighborhood search algorithms (also called local search algorithms) are a wide class of improvement heuristics in which, at each iteration, an improving solution is found by searching the “neighborhood” of the current solution. A critical issue in the design of a neighborhood search approach is the choice of the neighborhood structure, that is, the manner in which the neighborhood is defined. As a rule of thumb, the larger the neighborhood, the better the quality of the locally optimal solutions and the greater the accuracy of the final solution. At the same time, the larger the neighborhood, the longer it takes to search at each iteration. For this reason, a larger neighborhood does not necessarily produce a more effective heuristic unless the larger neighborhood can be searched very efficiently. This paper concentrates on neighborhood search algorithms where the size of the neighborhood is “very large” with respect to the size of the input data and in which the neighborhood is searched in an efficient manner. We survey three broad classes of very large scale neighborhood search (VLSN) algorithms: (1) variable depth ...
Min-Max Placement for Large-Scale Timing Optimization
In ACM International Symposium on Physical Design, 2002
Abstract

Cited by 23 (8 self)
With feature sizes below 0.25µm, interconnect delays account for over 40% of worst-case delays [12]. Transitions to 0.18µm and 0.13µm further increase this figure, and thus the relative importance of timing-driven placement for VLSI. Our work introduces a novel minimization of maximal path delay that improves upon previously known algorithms for timing-driven placement. Our placement algorithms have provable properties and are fast in practice. Our empirical validation is based on extending a scalable min-cut placer with a proven empirical record in wirelength- and congestion-driven placement [4]. The overhead of timing-driven placement was within 50% of CPU time. We placed industrial circuits and evaluated the layouts with a commercial static timing analyzer.
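For a fixed placement, the "maximal path delay" objective this abstract minimizes is the longest path through the timing DAG, which a static timing analyzer computes by propagating arrival times in topological order. A minimal sketch with a hypothetical edge-list representation (not the paper's placement algorithm):

```python
from collections import defaultdict, deque

def max_path_delay(n, edges):
    """Longest source-to-sink delay in a timing DAG with nodes 0..n-1;
    edges are (u, v, delay) triples. Arrival times are propagated in a
    Kahn topological order."""
    adj = defaultdict(list)
    indeg = [0] * n
    for u, v, d in edges:
        adj[u].append((v, d))
        indeg[v] += 1
    queue = deque(i for i in range(n) if indeg[i] == 0)
    arrival = [0.0] * n
    while queue:
        u = queue.popleft()
        for v, d in adj[u]:
            arrival[v] = max(arrival[v], arrival[u] + d)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(arrival)
```

In timing-driven placement, the edge delays themselves depend on cell positions, which is what makes minimizing this quantity over placements a hard optimization rather than a single DAG traversal.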