Results 1–10 of 193
Parameterized block-based statistical timing analysis with non-Gaussian and nonlinear parameters
Proc. Design Automation Conf., 2005
Cited by 69 (9 self)
Variability of process parameters makes prediction of digital circuit timing characteristics an important and challenging problem in modern chip design. Recently, statistical static timing analysis (statistical STA) has been proposed as a solution. Unfortunately, the existing approaches either do not consider explicit gate delay dependence on process parameters [3], [6] or restrict analysis to linear Gaussian parameters only [1], [2]. Here we extend the capabilities of parameterized block-based statistical STA [1] to handle nonlinear functions of delays and non-Gaussian parameters, while retaining maximum efficiency of processing linear Gaussian parameters. Our novel technique improves accuracy in predicting circuit timing characteristics and retains such benefits of parameterized block-based statistical STA as an incremental mode of operation and computation of criticality probabilities and sensitivities to process parameter variations. We implemented our technique in an industrial statistical timing analysis tool. Our experiments with large digital blocks showed both the efficiency and accuracy of the proposed technique.
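Block-based statistical STA of this kind propagates arrival times as first-order "canonical" forms and repeatedly takes their sums and maxima. For the linear Gaussian case that this paper generalizes, the max of two Gaussian arrival times is typically approximated with Clark's moment-matching formulas; a minimal sketch (function names and numbers are illustrative, not from the paper):

```python
import math

def pdf(x):
    """Standard normal probability density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def cdf(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def clark_max(mu_a, var_a, mu_b, var_b, cov_ab):
    """Approximate mean/variance of max(A, B) for jointly Gaussian
    arrival times A and B (Clark's formulas), plus the tightness
    probability P(A > B) used for criticality bookkeeping."""
    theta = math.sqrt(max(var_a + var_b - 2 * cov_ab, 1e-12))
    alpha = (mu_a - mu_b) / theta
    t = cdf(alpha)                                # tightness probability
    mean = mu_a * t + mu_b * (1 - t) + theta * pdf(alpha)
    var = ((var_a + mu_a ** 2) * t + (var_b + mu_b ** 2) * (1 - t)
           + (mu_a + mu_b) * theta * pdf(alpha) - mean ** 2)
    return mean, var, t

# Example: two competing arrival times with some correlation
mu, var, tightness = clark_max(10.0, 1.0, 9.0, 2.25, 0.5)
```

The non-Gaussian/nonlinear extension the abstract describes replaces these closed-form moments with more general moment computations while keeping the same block-based propagation.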
Statistical Timing Analysis Under Spatial Correlations
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2005
Cited by 60 (4 self)
Abstract — Process variations are of increasing concern in today's technologies and can significantly affect circuit performance. We present an efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay considering both inter-die and intra-die variations, while accounting for the effects of spatial correlations of intra-die parameter variations. The procedure uses a first-order Taylor series expansion to approximate the gate and interconnect delays. Next, principal component analysis techniques are employed to transform the set of correlated parameters into an uncorrelated set. The statistical timing computation is then easily performed with a PERT-like circuit graph traversal. The runtime of our algorithm is linear in the number of gates and interconnects, as well as the number of varying parameters and grid partitions that are used to model spatial correlations. The accuracy of the method is verified with Monte Carlo simulation. On average, for 100-nm technology, the errors of the mean and standard deviation values computed by the proposed method are […] respectively, and the errors of predicting the […] confidence points are […] and […] respectively. A test case with about 17,800 gates was solved in about […] seconds, with high accuracy as compared to a Monte Carlo simulation that required more than […] hours.
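The PCA step can be sketched numerically: eigendecompose a spatial covariance matrix and express the correlated grid-cell variations as linear combinations of uncorrelated principal components. The 3×3 matrix below is a made-up illustration, not data from the paper:

```python
import numpy as np

# Hypothetical spatial correlation matrix for one process parameter
# over 3 grid cells (nearby cells are more strongly correlated)
Sigma = np.array([[1.0, 0.6, 0.3],
                  [0.6, 1.0, 0.6],
                  [0.3, 0.6, 1.0]])

# Eigendecomposition: Sigma = V diag(w) V^T, w >= 0 for a valid covariance
w, V = np.linalg.eigh(Sigma)

# Model the correlated variations as x = A @ y with y ~ N(0, I):
# each grid-cell variation becomes a linear combination of independent
# principal components, so timing propagation can treat them separately
A = V @ np.diag(np.sqrt(w))

# The transform reproduces the original covariance: A @ A.T == Sigma
```

After this change of variables, delay expressions are rewritten in terms of the independent components y, which is what makes the PERT-like traversal straightforward.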
Correlation-Aware Statistical Timing Analysis with Non-Gaussian Delay Distributions
In DAC ’05: Proceedings of the 42nd Annual Conference on Design Automation, 2005
Cited by 54 (0 self)
Process variations have a growing impact on circuit performance for today's integrated circuit (IC) technologies. The non-Gaussian delay distributions, as well as the correlations among delays, make statistical timing analysis more challenging than ever. In this paper, we present an efficient block-based statistical timing analysis approach with linear complexity with respect to the circuit size, which can accurately predict non-Gaussian delay distributions from realistic nonlinear gate and interconnect delay models. This approach accounts for all correlations, from manufacturing process dependence to reconvergent circuit paths, to produce more accurate statistical timing predictions. With this approach, circuit designers can have increased confidence in the variation estimates, at a low additional computation cost.
Statistical Gate Sizing for Timing Yield Optimization
In ICCAD, 2005
Cited by 38 (9 self)
Abstract — Variability in the chip design process has been increasing as technology scales to smaller dimensions. Using worst-case analysis for circuit optimization severely overconstrains the system and results in solutions with excessive penalties. Statistical timing analysis and optimization have consequently emerged as a refinement of the traditional static timing approach for circuit design optimization. In this paper, we propose a statistical gate sizing methodology for timing yield improvement. We build statistical models for gate delays from library characterizations at multiple process corners and operating conditions. Statistical timing analysis is performed, which drives gate sizing for timing yield optimization. Experimental results are reported for the ISCAS and MCNC benchmarks. In addition, we provide insight into statistical properties of gate delays for a given technology library, which intuitively explains when and why statistical optimization improves over static timing optimization.
Gate Sizing Using Incremental Parameterized Statistical Timing Analysis
In ICCAD, 2005
Cited by 36 (2 self)
Abstract — As technology scales into the sub-90-nm domain, manufacturing variations become an increasingly significant portion of circuit delay. As a result, delays must be modeled as statistical distributions during both analysis and optimization. This paper uses incremental, parametric statistical static timing analysis (SSTA) to perform gate sizing with a required yield target. Both correlated and uncorrelated process parameters are considered by using a first-order linear delay model with fitted process sensitivities. The fitted sensitivities are verified to be accurate with circuit simulations. Statistical information in the form of criticality probabilities is used to actively guide the optimization process, which reduces runtime and improves area and performance. The gate sizing results show a significant improvement in worst slack at 99.86% yield over deterministic optimization.
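The 99.86% yield target corresponds to the mean-plus-three-sigma point of a Gaussian delay distribution. With a first-order linear delay model, the sigma follows directly from the fitted sensitivities; a small sketch under the simplifying assumption of independent parameters (the paper also handles correlated ones; all numbers are illustrative):

```python
import math

def delay_stats(nominal, sens, var):
    """First-order linear delay model d = nominal + sum_i sens_i * dX_i,
    with independent dX_i ~ N(0, var_i). Returns (mean, sigma)."""
    sigma = math.sqrt(sum(a * a * v for a, v in zip(sens, var)))
    return nominal, sigma

# Hypothetical gate: nominal delay 100 ps, two fitted sensitivities
mu, sigma = delay_stats(100.0, [2.0, 1.5], [4.0, 1.0])

# 99.86% yield = the mu + 3*sigma point of the delay distribution,
# the quantity the sizing loop drives down
d_9986 = mu + 3 * sigma
```

Correlated parameters would add cross-covariance terms to the variance sum rather than change the structure of the model.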
Thermal modeling, analysis, and management in VLSI circuits: principles and methods
Proceedings of the IEEE, 2006
Cited by 31 (3 self)
The growing packing density and power consumption of VLSI circuits have made thermal effects one of the most important concerns of VLSI designers. The increasing variability of key process parameters in nanometer CMOS technologies has resulted in a larger impact of substrate and metal-line temperatures on the reliability and performance of devices and interconnections. Recent data shows that more than 50% of all IC failures are related to thermal issues. This article presents a brief discussion of the key sources of power dissipation and their temperature relation in CMOS VLSI circuits, and of techniques for full-chip temperature calculation, with special attention to the implications for the design of high-performance, low-power VLSI circuits. The article concludes with an overview of techniques to improve full-chip thermal integrity by means of off-chip vs. on-chip and static vs. adaptive methods.
Joint Design-Time and Post-Silicon Minimization of Parametric Yield Loss using Adjustable Robust Optimization
In Proceedings of the IEEE/ACM International Conference on Computer-Aided Design, 2006
Cited by 26 (1 self)
Parametric yield loss due to variability can be effectively reduced both by design-time optimization strategies and by adjusting circuit parameters to the realizations of variable parameters. The two levels of tuning operate within a single variability budget, and because their effectiveness depends on the magnitude and the spatial structure of variability, their joint co-optimization is required. In this paper we develop a formal optimization algorithm for such co-optimization and link it to the control and measurement overhead via the formal notions of measurement and control complexity. We describe an optimization strategy that unifies design-time gate-level sizing and post-silicon adaptation using adaptive body bias at the chip level. The statistical formulation utilizes adjustable robust linear programming to derive the optimal policy for assigning body bias once the uncertain variables, such as gate length and threshold voltage, are known. Computational tractability is achieved by restricting the optimal body-bias selection policy to be an affine function of the uncertain variables. We demonstrate good runtime and show that 5–35% savings in leakage power across the benchmark circuits are possible. The dependence of the results on measurement and control complexity is studied, and points of diminishing returns for both metrics are identified.
Finding Deterministic Solution from Underdetermined Equation: Large-Scale Performance Modeling by Least Angle Regression
2009
Cited by 22 (8 self)
The aggressive scaling of IC technology results in high-dimensional, strongly nonlinear performance variability that cannot be efficiently captured by traditional modeling techniques. In this paper, we adapt a novel L1-norm regularization method to address this modeling challenge. Our goal is to solve for a large number of (e.g., 10^4–10^6) model coefficients from a small set of (e.g., 10^2–10^3) sampling points without over-fitting. This is facilitated by exploiting the underlying sparsity of the model coefficients. Namely, although numerous basis functions are needed to span the high-dimensional, strongly nonlinear variation space, only a few of them play an important role for a given performance of interest. An efficient algorithm, least angle regression (LAR), is applied to automatically select these important basis functions based on a limited number of simulation samples. Several circuit examples designed in a commercial 65-nm process demonstrate that LAR achieves up to 25× speedup compared with traditional least-squares fitting.
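The "few important basis functions out of many" structure is what makes the underdetermined fit well posed. As a stand-in for LAR (the solver the paper actually uses), the sketch below recovers a 3-sparse coefficient vector from 30 samples of a 100-term model via L1-regularized least squares solved with proximal gradient descent (ISTA); problem sizes and the solver choice are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Underdetermined fit: 30 sampling points, 100 basis functions,
# only 3 of which are truly active
n, p = 30, 100
A = rng.standard_normal((n, p))       # basis functions at sample points
x_true = np.zeros(p)
x_true[[5, 17, 42]] = [2.0, -1.5, 1.0]
b = A @ x_true                        # observed performance values

# ISTA for min_x 0.5*||A x - b||^2 + lam*||x||_1
lam = 0.1
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(p)
for _ in range(5000):
    z = x - A.T @ (A @ x - b) / L     # gradient step on the quadratic
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold

# The soft-threshold zeroes out inactive coefficients, exposing the support
support = sorted(np.flatnonzero(np.abs(x) > 0.1).tolist())
```

LAR reaches a similar sparse solution by adding basis functions one at a time along the regularization path, which is what enables the automatic selection the abstract describes.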
“Defining statistical sensitivity for timing optimization of logic circuits with large-scale process and environmental variations,” Docket MC06172004P, Filed with the US Patent Office, 2005
Cited by 21 (3 self)
The large-scale process and environmental variations of today's nanoscale ICs require statistical approaches for timing analysis and optimization. Significant research has recently focused on developing new statistical timing analysis algorithms, but often without consideration for how one should interpret the statistical timing results for optimization. In this paper we demonstrate why the traditional concepts of slack and critical path become ineffective under large-scale variations, and we propose a novel sensitivity-based metric to assess the “criticality” of each path and/or arc in the statistical timing graph. We define statistical sensitivities for both paths and arcs, and theoretically prove that our path sensitivity is equivalent to the probability that a path is critical, and that our arc sensitivity is equivalent to the probability that an arc sits on the critical path. An efficient algorithm with incremental analysis capability is described for fast sensitivity computation with linear runtime complexity in circuit size. The efficacy of the proposed sensitivity analysis is demonstrated on both standard benchmark circuits and large industry examples.
Criticality computation in parameterized statistical timing
in Proc. Design Automation Conf., 2006
Cited by 20 (3 self)
Chips manufactured in 90-nm technology have shown large parametric variations, and a worsening trend is predicted. These parametric variations make circuit optimization difficult, since different paths are frequency-limiting in different parts of the multidimensional process space. Therefore, it is desirable to have a new diagnostic metric for robust circuit optimization. This paper presents a novel algorithm to compute the criticality probability of every edge in the timing graph of a design with linear complexity in the circuit size. Using industrial benchmarks, we verify the correctness of our criticality computation via Monte Carlo simulation. We also show that for large industrial designs with 442,000 gates, our algorithm computes all edge criticalities in less than 160 seconds.
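Edge criticality (the probability that an edge lies on the longest path) can be estimated with exactly the kind of Monte Carlo run the paper uses for verification; a toy two-path sketch with made-up delay distributions:

```python
import random

random.seed(1)

# Toy timing graph: two disjoint source-to-sink paths, one through gate
# g1 and one through g2. The criticality of an edge is the probability
# that it lies on the longest (frequency-limiting) path.
N = 20_000
wins = {"g1": 0, "g2": 0}
for _ in range(N):
    d1 = random.gauss(10.0, 1.0)   # sampled path delay through g1
    d2 = random.gauss(9.5, 1.5)    # sampled path delay through g2
    wins["g1" if d1 > d2 else "g2"] += 1

crit_g1 = wins["g1"] / N           # Monte Carlo criticality estimate for g1
```

The algorithm in the paper computes the same probabilities analytically in a single linear-time traversal, which is why it scales to the 442,000-gate designs mentioned above while Monte Carlo serves only as a correctness check.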