Results 1–10 of 15
First-order incremental block-based statistical timing analysis
In DAC, 2004
"... Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first order delay model is proposed that takes into account both correlated and independent randomness. A novel lineartime blockbased statistical timing algorithm is emplo ..."
Abstract

Cited by 128 (4 self)
 Add to MetaCart
Variability in digital integrated circuits makes timing verification an extremely challenging task. In this paper, a canonical first-order delay model is proposed that takes into account both correlated and independent randomness. A novel linear-time block-based statistical timing algorithm is employed to propagate timing quantities such as arrival times and required arrival times through the timing graph in this canonical form. At the end of the statistical timing analysis, the sensitivities of all timing quantities to each of the sources of variation are available. Excessive sensitivities can then be targeted by manual or automatic optimization methods to improve the robustness of the design. The statistical timing analysis is incremental, and is therefore suitable for use in the inner loop of physical synthesis or other optimization programs. The second novel contribution of this paper is the computation of local and global criticality probabilities. For a very small cost in CPU time, the probability of each edge or node of the timing graph being critical is computed. These criticality probabilities provide additional useful diagnostics to synthesis, optimization, test generation and path enumeration programs. Numerical results are presented on industrial ASIC chips with over two million logic gates.
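The canonical delay form and the Clark-style max this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' implementation; in particular, the tightness-probability-weighted re-linearization of the sensitivities is one common choice and is assumed here.

```python
import math

# Sketch (assumed interface, not the paper's code) of the canonical
# first-order delay form
#     d = a0 + sum_i a_i * dX_i + a_r * dR,
# where the dX_i are unit-normal global variation sources shared
# between timing quantities and dR is independent randomness.
class Canonical:
    def __init__(self, a0, sens, ar):
        self.a0 = a0      # mean value
        self.sens = sens  # sensitivities to the global sources
        self.ar = ar      # independent-randomness sensitivity

    def sigma(self):
        # standard deviation implied by the canonical form
        return math.sqrt(sum(s * s for s in self.sens) + self.ar ** 2)

    def add(self, other):
        # The sum of two canonical forms is again canonical: global
        # coefficients add; independent parts combine root-sum-square.
        return Canonical(self.a0 + other.a0,
                         [a + b for a, b in zip(self.sens, other.sens)],
                         math.hypot(self.ar, other.ar))

def _phi(x):  # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def _Phi(x):  # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def stat_max(x, y):
    # Clark-style max of two canonical forms. Returns a re-linearized
    # canonical result and the tightness probability T = P(x > y).
    cov = sum(a * b for a, b in zip(x.sens, y.sens))  # shared sources only
    theta = math.sqrt(max(x.sigma() ** 2 + y.sigma() ** 2 - 2.0 * cov, 1e-12))
    z = (x.a0 - y.a0) / theta
    t = _Phi(z)                                        # tightness probability
    mean = x.a0 * t + y.a0 * (1.0 - t) + theta * _phi(z)  # Clark's E[max]
    out = Canonical(mean,
                    [a * t + b * (1.0 - t) for a, b in zip(x.sens, y.sens)],
                    x.ar * t + y.ar * (1.0 - t))
    return out, t
```

Because `add` and `stat_max` both return canonical forms, arrival times can be propagated through a timing graph edge by edge, which is what makes the block-based traversal linear-time.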
A General Framework for Accurate Statistical Timing Analysis Considering Correlations
In DAC, 2005
"... The impact of parameter variations on timing due to process and environmental variations has become significant in recent years. With each new technology node this variability is becoming more prominent. In this work, we present a general Statistical Timing Analysis (STA) framework that captures spa ..."
Abstract

Cited by 28 (6 self)
 Add to MetaCart
The impact of parameter variations on timing due to process and environmental variations has become significant in recent years. With each new technology node this variability is becoming more prominent. In this work, we present a general Statistical Timing Analysis (STA) framework that captures spatial correlations between gate delays. Our technique does not make any assumption about the distributions of the parameter variations, gate delays and arrival times. We propose a Taylor-series-expansion-based polynomial representation of gate delays and arrival times which is able to effectively capture the nonlinear dependencies that arise due to increasing parameter variations. In order to reduce the computational complexity introduced by polynomial modeling during STA, we propose an efficient linear-modeling-driven polynomial STA scheme. On average, the degree-2 polynomial scheme achieved a 7.3× speedup compared to Monte Carlo, with 0.049 units of RMS error with respect to Monte Carlo. Our technique is generic and can be applied to arbitrary variations in the underlying parameters.
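As an illustration of the polynomial representation mentioned above, a degree-2 (second-order Taylor) model of a gate delay in the parameter variations can be evaluated like this; the coefficient values in the example are purely hypothetical, not taken from the paper.

```python
# Hedged sketch of a degree-2 polynomial delay model:
#     d(p) ~= d0 + grad . dp + 0.5 * dp^T H dp
# where dp is the vector of parameter variations, grad the first-order
# sensitivities, and H the (symmetric) second-order sensitivity matrix.
def poly_delay(d0, grad, hess, dp):
    lin = sum(g * x for g, x in zip(grad, dp))
    quad = 0.5 * sum(hess[i][j] * dp[i] * dp[j]
                     for i in range(len(dp)) for j in range(len(dp)))
    return d0 + lin + quad
```

With all second-order terms set to zero this collapses to the linear model, which is why a linear-modeling pass can cheaply screen where the polynomial evaluation is actually needed.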
Gate Sizing Using Incremental Parameterized Statistical Timing Analysis
In ICCAD, 2005
"... Abstract — As technology scales into the sub90nm domain, manufacturing variations become an increasingly significant portion of circuit delay. As a result, delays must be modeled as statistical distributions during both analysis and optimization. This paper uses incremental, parametric statistical ..."
Abstract

Cited by 25 (1 self)
 Add to MetaCart
As technology scales into the sub-90 nm domain, manufacturing variations become an increasingly significant portion of circuit delay. As a result, delays must be modeled as statistical distributions during both analysis and optimization. This paper uses incremental, parametric statistical static timing analysis (SSTA) to perform gate sizing with a required yield target. Both correlated and uncorrelated process parameters are considered by using a first-order linear delay model with fitted process sensitivities. The fitted sensitivities are verified to be accurate with circuit simulations. Statistical information in the form of criticality probabilities is used to actively guide the optimization process, which reduces runtime and improves area and performance. The gate sizing results show a significant improvement in worst slack at 99.86% yield over deterministic optimization.
A New Statistical Max Operation for Propagating Skewness in Statistical Timing Analysis
"... Statistical static timing analysis (SSTA) is emerging as a solution for predicting the timing characteristics of digital circuits under process variability. For computing the statistical max of two arrival time probability distributions, existing analytical SSTA approaches use the results given by C ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
Statistical static timing analysis (SSTA) is emerging as a solution for predicting the timing characteristics of digital circuits under process variability. For computing the statistical max of two arrival-time probability distributions, existing analytical SSTA approaches use the results given by Clark in [8]. These analytical results are exact when the two operand arrival-time distributions are jointly Gaussian. Due to the nonlinear max operation, arrival-time distributions are typically skewed. Furthermore, nonlinear dependence of gate delays and non-Gaussian process parameters also make the arrival-time distributions asymmetric. Therefore, for computing the max accurately, a new approach is required that accounts for the inherent skewness in arrival-time distributions. In this work, we present an analytical solution for computing the statistical max operation. First, the skewness in the arrival-time distribution is modeled by matching its first three moments to a so-called skewed-normal distribution. Then, by extending Clark's work to handle skewed-normal distributions, we derive analytical expressions for computing the moments of the max. We then show, using initial simulation results, that a skewness-based max operation has significant potential to improve the accuracy of the statistical max operation in SSTA while retaining its computational efficiency.
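The three-moment match to a skewed-normal distribution that this abstract describes can be sketched using the standard skew-normal SN(ξ, ω, α) moment formulas; this is a hedged illustration of the fitting step only, not the paper's extension of Clark's max.

```python
import math

# Fit SN(xi, omega, alpha) to a target mean m, variance v and
# skewness g by matching the first three moments. Uses the standard
# skew-normal moments with delta = alpha / sqrt(1 + alpha^2) and
# b = sqrt(2/pi):
#     mean = xi + omega*b*delta
#     var  = omega^2 * (1 - b^2 delta^2)
#     skew = (4-pi)/2 * (b*delta)^3 / (1 - b^2 delta^2)^1.5
def fit_skew_normal(m, v, g):
    b = math.sqrt(2.0 / math.pi)
    # invert the skewness formula: with r = b*delta/sqrt(1-b^2 delta^2),
    # g = (4-pi)/2 * r^3, so r = (2|g|/(4-pi))^(1/3)
    r = (2.0 * abs(g) / (4.0 - math.pi)) ** (1.0 / 3.0)
    d = math.copysign(r / (b * math.sqrt(1.0 + r * r)), g)
    if abs(d) >= 1.0:  # |skewness| beyond the skew-normal's range (~0.995)
        raise ValueError("skewness not representable by a skew-normal")
    omega = math.sqrt(v / (1.0 - b * b * d * d))
    xi = m - omega * b * d
    alpha = d / math.sqrt(1.0 - d * d)
    return xi, omega, alpha

def sn_moments(xi, omega, alpha):
    # forward moment formulas, useful for checking the fit
    d = alpha / math.sqrt(1.0 + alpha * alpha)
    b = math.sqrt(2.0 / math.pi)
    mean = xi + omega * b * d
    var = omega * omega * (1.0 - b * b * d * d)
    skew = (4.0 - math.pi) / 2.0 * (b * d) ** 3 / (1.0 - b * b * d * d) ** 1.5
    return mean, var, skew
```

Note the guard: a skew-normal can only represent skewness of magnitude below about 0.995, so arrival times with heavier skew would need a different surrogate.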
Set of Gaussian Random Variables
"... Abstract—This paper quantifies the approximation error when results obtained by Clark (Oper. Res., vol. 9, p. 145, 1961) are employed to compute the maximum (max) of Gaussian random variables, which is a fundamental operation in statistical timing. We show that a finite lookup table can be used to s ..."
Abstract
 Add to MetaCart
This paper quantifies the approximation error when the results obtained by Clark (Oper. Res., vol. 9, p. 145, 1961) are employed to compute the maximum (max) of Gaussian random variables, which is a fundamental operation in statistical timing. We show that a finite lookup table can be used to store these errors. Based on the error computations, approaches to different orderings for pairwise max operations on a set of Gaussians are proposed. Experimental results show accuracy improvements in the computation of the max of multiple Gaussians, in comparison to the traditional approach. In addition, we present an approach to compute the tightness probabilities of Gaussian random variables with dynamic runtime-accuracy tradeoff options. We replace the required numerical computations for their estimation with closed-form expressions based on Taylor-series expansion that involve table lookup and a few fundamental arithmetic operations. Experimental results demonstrate an average speedup of 2× using our approach for computing the maximum of two Gaussians, in comparison to the traditional approach, without any accuracy penalty.

Index Terms — Computer-aided design (CAD), Gaussian approximation, statistical timing, very large-scale integration
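The table-plus-Taylor idea for the tightness probability T = Φ((μ₁ − μ₂)/θ) can be illustrated as follows; the grid step and the first-order correction are arbitrary choices for the sketch, not the paper's actual table design.

```python
import math

# Precompute Phi on a uniform grid over [0, 8]; an arbitrary step of
# 0.01 is assumed here for illustration.
STEP = 0.01
GRID = [0.5 * (1.0 + math.erf(STEP * i / math.sqrt(2.0))) for i in range(801)]

def _phi(x):  # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi_lookup(x):
    # Normal CDF via table lookup plus a first-order Taylor correction
    # around the nearest lower grid point:
    #     Phi(x0 + h) ~= Phi(x0) + h * phi(x0)
    s = abs(x)
    if s >= 8.0:
        p = 1.0
    else:
        i = int(s / STEP)
        p = GRID[i] + (s - i * STEP) * _phi(i * STEP)
    return p if x >= 0 else 1.0 - p

def tightness(mu1, s1, mu2, s2, rho=0.0):
    # tightness probability P(X > Y) for correlated Gaussians X, Y
    theta = math.sqrt(s1 * s1 + s2 * s2 - 2.0 * rho * s1 * s2)
    return Phi_lookup((mu1 - mu2) / theta)
```

With a 0.01 grid step the first-order correction keeps the error near 1e-5, trading table size against accuracy; a coarser table with a second-order term would be another point on the same runtime-accuracy tradeoff.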
Construction of Skew-Normal Random Variables: Are They Linear Combinations of Normal and Half-Normal?
"... Skewnormal distributions extend the normal distributions through a shape parameter α; they reduce to the standard normal random variable Z for α = 0 and to Z  or the halfnormal when α → ∞. In spite of the skewness they (dis)inherit some properties of normal random variables: Square of a skewnor ..."
Abstract
 Add to MetaCart
Skew-normal distributions extend the normal distributions through a shape parameter α; they reduce to the standard normal random variable Z for α = 0 and to |Z|, the half-normal, as α → ∞. In spite of the skewness they (dis)inherit some properties of normal random variables: the square of a skew-normal random variable has a chi-square distribution with one degree of freedom, but the sum of two independent skew-normal random variables is not generally skew-normal. We review and explain this lack of closure and other properties of skew-normal random variables via their representations as a special linear combination of independent normal and half-normal random variables. Analogues of such representations are used to define multivariate skew-normal distributions with a closure property similar to that of multivariate normal distributions.
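The linear-combination representation discussed in this abstract is easy to simulate: with δ = α/√(1 + α²) and independent standard normals Z₁, Z₂, the variable X = δ|Z₁| + √(1 − δ²)Z₂ has the standard skew-normal distribution with shape α. A small Monte Carlo sketch:

```python
import math
import random

# Sample the standard skew-normal SN(alpha) via its representation as
# a linear combination of a half-normal and an independent normal.
def sample_skew_normal(alpha, n, seed=0):
    rng = random.Random(seed)
    delta = alpha / math.sqrt(1.0 + alpha * alpha)
    coef = math.sqrt(1.0 - delta * delta)
    return [delta * abs(rng.gauss(0.0, 1.0)) + coef * rng.gauss(0.0, 1.0)
            for _ in range(n)]
```

The sample mean should approach the known value δ·√(2/π), which gives a quick sanity check on the construction.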
Gaming and Strategic Ambiguity in Incentive Provision
"... A central tenet of economics is that people respond to incentives. While an appropriately crafted incentive scheme can achieve the secondbest optimum in the presence of moral hazard, the principal must be very well informed about the environment (e.g. the agent’s preferences and the production tech ..."
Abstract
 Add to MetaCart
A central tenet of economics is that people respond to incentives. While an appropriately crafted incentive scheme can achieve the second-best optimum in the presence of moral hazard, the principal must be very well informed about the environment (e.g., the agent's preferences and the production technology) in order to achieve this. Indeed, it is often suggested that incentive schemes can be gamed by an agent with superior knowledge of the environment, and furthermore that lack of transparency about the nature of the incentive scheme can reduce gaming. We provide a formal theory of these phenomena. We show that random or ambiguous incentive schemes induce more balanced efforts from an agent who performs multiple tasks and who is better informed about the environment than the principal is. On the other hand, such random schemes impose more risk on the agent per unit of effort induced. By identifying settings in which random schemes are especially effective in inducing balanced efforts, we show that, if tasks are sufficiently complementary for the principal, random incentive schemes can dominate the best deterministic scheme. (JEL L13, L22)
Notes on Using Control Variates for Estimation with Reversible MCMC Samplers
2008
"... A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal co ..."
Abstract
 Add to MetaCart
A general methodology is presented for the construction and effective use of control variates for reversible MCMC samplers. The values of the coefficients of the optimal linear combination of the control variates are computed, and adaptive, consistent MCMC estimators are derived for these optimal coefficients. All methodological and asymptotic arguments are rigorously justified. Numerous MCMC simulation examples from Bayesian inference applications demonstrate that the resulting variance reduction can be quite dramatic.
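In the single-control case, the optimal linear combination described above reduces to the familiar coefficient β* = Cov(f, g)/Var(g). A generic sketch of that estimator (not the paper's MCMC-specific construction, which handles reversibility and adaptivity):

```python
# Control-variate estimate of E[f(X)] given paired evaluations of f
# and a control g whose mean g_mean is known exactly. The optimal
# coefficient beta* = Cov(f, g) / Var(g) is estimated from the sample.
def cv_estimate(f_vals, g_vals, g_mean):
    n = len(f_vals)
    fbar = sum(f_vals) / n
    gbar = sum(g_vals) / n
    cov = sum((f - fbar) * (g - gbar) for f, g in zip(f_vals, g_vals)) / n
    var = sum((g - gbar) ** 2 for g in g_vals) / n
    beta = cov / var
    # subtracting beta * (gbar - g_mean) cancels correlated noise
    return fbar - beta * (gbar - g_mean)
```

For example, estimating E[e^Z] for Z ~ N(0, 1) with the control g(Z) = Z (known mean 0) sharpens the plain sample mean, since e^Z and Z are strongly correlated.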
European Central Bank, 2007
"... We propose and compare procedures for inference about mean squared prediction error (MSPE) when comparing a benchmark model against a small number of alternatives that nest the benchmark. We evaluate two procedures that adjust MSPE differences in accordance with Clark and West (2007); one examines t ..."
Abstract
 Add to MetaCart
We propose and compare procedures for inference about mean squared prediction error (MSPE) when comparing a benchmark model against a small number of alternatives that nest the benchmark. We evaluate two procedures that adjust MSPE differences in accordance with Clark and West (2007); one examines the maximum t-statistic, the other computes a chi-squared statistic. We also examine two procedures that do not adjust the MSPE differences: a chi-squared statistic, and White's (2000) reality check. In our simulations, the two statistics that adjust MSPE differences have the most accurate size, and the procedure that looks at the maximum t-statistic has the best power. We illustrate our procedures by comparing forecasts of different models for U.S. inflation.
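The Clark–West MSPE adjustment for a single pair of nested models, on which the procedures above build, can be sketched as follows. This is a simplified t-statistic using the plain sample variance, ignoring any HAC correction the paper may apply.

```python
import math

# Clark-West adjusted MSPE difference for nested models:
#     f_t = (y_t - f_small_t)^2 - [(y_t - f_big_t)^2 - (f_small_t - f_big_t)^2]
# where f_small is the benchmark forecast and f_big the larger model's.
# A positive mean of f_t favors the larger model.
def clark_west_stat(y, f_small, f_big):
    adj = [(yt - a) ** 2 - ((yt - b) ** 2 - (a - b) ** 2)
           for yt, a, b in zip(y, f_small, f_big)]
    n = len(adj)
    mean = sum(adj) / n
    var = sum((x - mean) ** 2 for x in adj) / (n - 1)
    # t-statistic for H0: adjusted MSPE difference equals zero
    return mean / math.sqrt(var / n)
```

The subtraction of (f_small − f_big)² removes the noise penalty the larger model pays for estimating parameters that are zero under the null, which is what makes the adjusted statistic better sized than a raw MSPE comparison.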