Statistical Timing Analysis Under Spatial Correlations
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
, 2005
"... Abstract — Process variations are of increasing concern in today’s technologies, and can significantly affect circuit performance. We present an efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay considering both interdie and intradie va ..."
Abstract

Cited by 42 (4 self)
 Add to MetaCart
Abstract — Process variations are of increasing concern in today's technologies and can significantly affect circuit performance. We present an efficient statistical timing analysis algorithm that predicts the probability distribution of the circuit delay, considering both inter-die and intra-die variations while accounting for the effects of spatial correlations of intra-die parameter variations. The procedure uses a first-order Taylor series expansion to approximate the gate and interconnect delays. Next, principal component analysis techniques are employed to transform the set of correlated parameters into an uncorrelated set. The statistical timing computation is then easily performed with a PERT-like circuit graph traversal. The runtime of our algorithm is linear in the number of gates and interconnects, as well as in the number of varying parameters and grid partitions used to model spatial correlations. The accuracy of the method is verified with Monte Carlo simulation. On average, for 100-nm technology, the errors of the mean and standard deviation values computed by the proposed method are small, as are the errors in predicting the confidence points. A test case with about 17,800 gates was solved in seconds, with high accuracy as compared to a Monte Carlo simulation that required hours of runtime.
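The PCA step described above can be sketched in a few lines. This is a minimal illustration with assumed numbers (the covariance matrix and delay sensitivities are invented, not from the paper): two correlated parameters are rotated into uncorrelated principal components, after which the delay variance is just a weighted sum of the component variances, which is what makes the PERT-like traversal cheap.

```python
import math

# Minimal sketch of the PCA decorrelation step (all numbers assumed).
# A 2x2 symmetric covariance matrix is diagonalized analytically.
def pca_2x2(cov):
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    mid, rad = (a + c) / 2.0, math.hypot((a - c) / 2.0, b)
    lam = [mid + rad, mid - rad]              # eigenvalues = PC variances
    v1 = (b, lam[0] - a)                      # eigenvector for lam[0] (b != 0 here)
    n = math.hypot(*v1)
    v1 = (v1[0] / n, v1[1] / n)
    v2 = (-v1[1], v1[0])                      # orthogonal second eigenvector
    return lam, [v1, v2]

# First-order delay model: d = d0 + s . p, with p ~ N(0, cov)
cov = [[0.04, 0.018], [0.018, 0.09]]          # correlated parameter covariance (assumed)
s = [1.5, 0.8]                                # delay sensitivities (assumed)
lam, vecs = pca_2x2(cov)
# Re-express the sensitivities in the uncorrelated PC basis
c_pc = [s[0] * v[0] + s[1] * v[1] for v in vecs]
var_pca = sum(ci * ci * li for ci, li in zip(c_pc, lam))
# Direct computation s^T cov s for comparison
var_direct = sum(s[i] * cov[i][j] * s[j] for i in range(2) for j in range(2))
print(round(var_pca, 6), round(var_direct, 6))  # the two variances agree
```

In the uncorrelated basis, variances of independent components simply add, so each node of the timing graph can carry a small vector of PC coefficients instead of a full correlation matrix.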
Application-level correctness and its impact on fault tolerance
 In Proceedings of the 13th International Symposium on High Performance Computer Architecture
, 2007
"... Traditionally, fault tolerance researchers have required architectural state to be numerically perfect for program execution to be correct. However, in many programs, even if execution is not 100 % numerically correct, the program can still appear to execute correctly from the user’s perspective. He ..."
Abstract

Cited by 28 (1 self)
 Add to MetaCart
Traditionally, fault tolerance researchers have required architectural state to be numerically perfect for program execution to be correct. However, in many programs, even if execution is not 100% numerically correct, the program can still appear to execute correctly from the user's perspective. Hence, whether a fault is unacceptable or benign may depend on the level of abstraction at which correctness is evaluated, with more faults being benign at higher levels of abstraction (i.e., at the user or application level) than at lower levels of abstraction (i.e., at the architecture level). The extent to which programs are more fault resilient at higher levels of abstraction is application dependent. Programs that produce inexact and/or approximate outputs can be very resilient at the application level. We call such programs soft computations, and we find they are common in multimedia workloads as well as artificial intelligence (AI) workloads. Programs that compute exact numerical outputs offer less error resilience at the application level. However, we find that all programs studied in this paper exhibit some enhanced fault resilience at the application level, including those that are traditionally considered exact computations, e.g., SPECInt CPU2000. This paper investigates definitions of program correctness that view correctness from the application's standpoint rather than the architecture's standpoint. Under application-level correctness, a program's execution is deemed correct as long as the result it produces is acceptable to the user. To quantify user satisfaction, we rely on application-level fidelity metrics that capture user-perceived program solution quality. We conduct a detailed fault susceptibility study that measures how much more fault resilient programs are when defining correctness at the application level.
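A fidelity metric of the kind the abstract mentions can be illustrated with a small sketch. Everything here is assumed for illustration (the PSNR-style metric, the 30 dB threshold, and the sample outputs are not from the paper): a fault that perturbs a soft output slightly is judged benign if the fidelity score stays above a user-chosen threshold, even though the architectural state differs from the golden run.

```python
import math

# Hypothetical application-level fidelity check (PSNR-style metric, assumed).
def psnr(golden, faulty, peak=255.0):
    mse = sum((g - f) ** 2 for g, f in zip(golden, faulty)) / len(golden)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak * peak / mse)

def fault_is_benign(golden, faulty, threshold_db=30.0):
    # Outputs differ numerically, but the user may not perceive the difference.
    return psnr(golden, faulty) >= threshold_db

golden = [100, 120, 130, 140]
slightly_off = [100, 121, 130, 139]   # small numerical error: benign
badly_off = [0, 255, 0, 255]          # gross corruption: unacceptable
print(fault_is_benign(golden, slightly_off), fault_is_benign(golden, badly_off))
```

The same fault thus gets classified differently at the architecture level (state mismatch) and at the application level (fidelity above threshold), which is exactly the gap the paper measures.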
Statistical timing analysis of flip-flops considering codependent setup and hold times
 Proc. of Great Lakes Symposium on VLSI
, 2008
"... Statistical static timing analysis (SSTA) plays a key role in determining performance of the VLSI circuits implemented in stateoftheart CMOS technology. A prerequisite for employing SSTA is the characterization of the setup and hold times of the latches and flipflops in the cell library. This p ..."
Abstract

Cited by 7 (2 self)
 Add to MetaCart
Statistical static timing analysis (SSTA) plays a key role in determining the performance of VLSI circuits implemented in state-of-the-art CMOS technology. A prerequisite for employing SSTA is the characterization of the setup and hold times of the latches and flip-flops in the cell library. This paper presents a methodology to exploit the statistical codependence of the setup and hold times. The approach comprises three steps. In the first step, the probability mass function (pmf) of the codependent setup and hold time (CSHT) contours is approximated with piecewise-linear curves by considering the probability density functions of the sources of variability. In the second step, the pmf of the required setup and hold times for each flip-flop in the design is computed. Finally, these pmf values are used to compute the probability of individual flip-flops in the design passing the timing constraints and to report the overall pass probability of the flip-flops in the design as a histogram. We applied the proposed method to true single-phase clocking flip-flops to generate the piecewise-linear curves for CSHT. The characterized flip-flops were instantiated in an example design, on which timing verification was successfully performed.
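The final step above, turning a pmf of required times into a pass probability, reduces to summing pmf mass. The sketch below uses invented numbers (the pmf values and the 30 ps slack are assumptions, not from the paper): a flip-flop passes its setup constraint whenever its required setup time is no larger than the slack actually available.

```python
# Illustrative sketch (values assumed): pass probability of one flip-flop
# given a discrete pmf of its required setup time and the available slack.
def pass_probability(pmf, available_slack):
    # pmf: list of (required_setup_time_ps, probability) pairs summing to 1
    return sum(p for t, p in pmf if t <= available_slack)

setup_pmf = [(20, 0.1), (25, 0.3), (30, 0.4), (35, 0.2)]  # assumed pmf
print(pass_probability(setup_pmf, 30))  # mass with requirement <= 30 ps passes
```

Repeating this per flip-flop and binning the results gives the pass-probability histogram the paper reports for the whole design.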
Statistical Verification of Power Grids Considering Process-Induced Leakage Current Variations
In Proceedings of ICCAD 2003
, 2003
"... Transistor threshold voltages (V th ) have been reduced as part of ongoing technology scaling. The smaller V th values feature increased variations due to underlying process variations, with a strong withindie component. Correspondingly, given the exponential dependence of leakage on V th , circui ..."
Abstract

Cited by 5 (0 self)
 Add to MetaCart
Transistor threshold voltages (Vth) have been reduced as part of ongoing technology scaling. The smaller Vth values feature increased variations due to underlying process variations, with a strong within-die component. Correspondingly, given the exponential dependence of leakage on Vth, circuit leakage currents are increasing significantly and have strong within-die statistical variations. With these leakage currents loading the power grid, the grid develops correspondingly large statistical voltage drops. This leakage-induced voltage drop is an unavoidable background level of noise on the grid. Any additional non-leakage currents due to circuit activity will lead to voltage drop that adds to this background noise. We propose a technique for checking whether the statistical voltage drop on every node is within user-specified bounds, given user-specified statistics of the leakage currents.
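The exponential dependence of leakage on Vth makes a lognormal model natural for leakage currents, and the resulting check can be sketched with Monte Carlo sampling. All numbers here are assumed (the effective resistance, lognormal parameters, and bound are invented, and the single-resistance node model is a simplification of a real grid):

```python
import math, random

# Monte Carlo sketch (all numbers assumed): sample lognormal leakage
# currents, compute the IR drop at one node through an assumed effective
# resistance, and estimate the probability the drop meets a user bound.
random.seed(1)
r_eff = 0.05                      # effective resistance to supply (ohms, assumed)
mu, sigma = math.log(2.0), 0.4    # lognormal leakage parameters (amps, assumed)
bound = 0.15                      # user-specified drop bound (volts, assumed)

samples = [r_eff * random.lognormvariate(mu, sigma) for _ in range(100000)]
p_ok = sum(d <= bound for d in samples) / len(samples)
print(f"P(drop <= {bound} V) = {p_ok:.3f}")
```

A real grid couples nodes through a resistance network, so the paper's technique checks every node against its bound given the joint leakage statistics rather than one node in isolation.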
Reducing the Impact of Intra-Core Process Variability with Criticality-Based Resource Allocation and Prefetching
"... We develop architectural techniques for mitigating the impact of process variability. Our techniques hide the performance effects of slow components—including registers, functional units, and L1I and L1D cache frames—without slowing the clock frequency or pessimistically assuming that all components ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
We develop architectural techniques for mitigating the impact of process variability. Our techniques hide the performance effects of slow components, including registers, functional units, and L1-I and L1-D cache frames, without slowing the clock frequency or pessimistically assuming that all components are slow. Using ideas previously developed for other purposes (criticality-based allocation of resources, prefetching, and prefetch buffering), we allow design engineers to aggressively set the clock frequency without worrying about the subset of components that cannot meet this frequency. Our techniques outperform speed binning, because the clock frequency benefits outweigh slight losses in IPC.
Clustering-Based Pruning for Statistical Criticality Computation under Process Variations
"... Abstract — We present a new linear time technique to compute criticality information in a timing graph by dividing it into “zones”. Errors in using tightness probabilities for criticality computation are dealt with using a new clustering based pruning algorithm which greatly reduces the size of circ ..."
Abstract

Cited by 2 (2 self)
 Add to MetaCart
Abstract — We present a new linear-time technique to compute criticality information in a timing graph by dividing it into "zones". Errors in using tightness probabilities for criticality computation are dealt with using a new clustering-based pruning algorithm, which greatly reduces the size of circuit-level cutsets. Our clustering algorithm gives a 150X speedup compared to a pairwise pruning strategy, in addition to ordering edges in a cutset to reduce errors due to Clark's MAX formulation. The clustering-based pruning strategy, coupled with a localized sampling technique, reduces errors to within 5% of Monte Carlo simulations with large speedups in runtime. I. INTRODUCTION AND PREVIOUS WORK. With the scaling of technology, process parameter variations render circuit delay unpredictable [6], making sign-off ineffective in assuring against chip failure. Recent works on Statistical Static Timing Analysis (SSTA) in [1], [9] deal with this issue by treating the delay of gates and
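Clark's MAX formulation, whose error the edge ordering above tries to reduce, approximates the maximum of two correlated Gaussian arrival times by matching its first two moments. This sketch implements the standard Clark formulas with invented example numbers (the means, sigmas, and correlation are assumptions, not from the paper) and cross-checks the mean against direct sampling:

```python
import math, random

# Clark's moment-matching formulas for max(X, Y) of two correlated
# Gaussians; example numbers are assumed, not from the paper.
def clark_max(m1, s1, m2, s2, rho):
    phi = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)
    Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    theta = math.sqrt(s1 * s1 + s2 * s2 - 2 * rho * s1 * s2)
    a = (m1 - m2) / theta
    mean = m1 * Phi(a) + m2 * Phi(-a) + theta * phi(a)
    second = ((m1 * m1 + s1 * s1) * Phi(a) + (m2 * m2 + s2 * s2) * Phi(-a)
              + (m1 + m2) * theta * phi(a))
    return mean, math.sqrt(second - mean * mean)

mean, std = clark_max(10.0, 1.0, 9.5, 1.5, 0.3)

# Cross-check the mean with Monte Carlo sampling of the same max
random.seed(0)
n = 200000
acc = 0.0
for _ in range(n):
    x = random.gauss(0, 1)
    y = 0.3 * x + math.sqrt(1 - 0.3 ** 2) * random.gauss(0, 1)
    acc += max(10.0 + 1.0 * x, 9.5 + 1.5 * y)
mc_mean = acc / n
print(round(mean, 2), round(mc_mean, 2))  # analytic and sampled means should be close
```

The mean formula is exact for a bivariate Gaussian pair; the error the paper fights comes from treating the resulting max as Gaussian again when it is chained through a cutset.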
Exploiting Application-Level Correctness for Low-Cost Fault Tolerance
"... Traditionally, fault tolerance researchers have required architectural state to be numerically perfect for program execution to be correct. However, in many programs, even if execution is not 100 % numerically correct, the program can still appear to execute correctly from the user’s perspective. He ..."
Abstract

Cited by 2 (0 self)
 Add to MetaCart
Traditionally, fault tolerance researchers have required architectural state to be numerically perfect for program execution to be correct. However, in many programs, even if execution is not 100% numerically correct, the program can still appear to execute correctly from the user's perspective. Hence, whether a fault is unacceptable or benign may depend on the level of abstraction at which correctness is evaluated, with more faults being benign at higher levels of abstraction (i.e., at the user or application level) than at lower levels of abstraction (i.e., at the architecture level). The extent to which programs are more fault resilient at higher levels of abstraction is application dependent. Programs that produce inexact and/or approximate outputs can be very resilient at the application level. We call such programs soft computations, and we find they are common in multimedia workloads as well as artificial intelligence (AI) workloads. Programs that compute exact numerical outputs offer less error resilience at the application level. However, we find that all programs studied in this paper exhibit some enhanced fault resilience at the application level, including those that are traditionally considered exact computations.
A Case for Computer Architecture Performance Metrics that Reflect Process Variability
"... As computer architects, we frequently analyze the performance of systems, and we have developed wellunderstood metrics for reporting and comparing system performances. The dominant textbook in our field [7] is subtitled ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
As computer architects, we frequently analyze the performance of systems, and we have developed well-understood metrics for reporting and comparing system performance. The dominant textbook in our field [7] is subtitled
Analysis and Verification of Power Grids Considering Process-Induced Leakage Current Variations
"... Abstract — The ongoing trends in technology scaling imply a reduction in the transistor threshold voltage (Vth). With smaller feature lengths and smaller parameters, variability becomes increasingly important, for ignoring it may lead to chip failure and assuming worstcase renders almost any design ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract — The ongoing trends in technology scaling imply a reduction in the transistor threshold voltage (Vth). With smaller feature lengths and smaller parameters, variability becomes increasingly important: ignoring it may lead to chip failure, and assuming the worst case renders almost any design unachievable. This work presents a methodology for the analysis and verification of the power grid of integrated circuits considering variations in leakage currents. These variations are large due to the exponential relation between leakage current and transistor threshold voltage, and they appear as random background noise on the nodes of the grid. We propose a lognormal distribution to model the grid voltage drops, derive bounds on the voltage-drop variances, and develop a numerical Monte Carlo method to estimate the variance of each node voltage on the grid. This model is used toward the solution of a statistical formulation of the power grid verification problem.
Regular Analog/RF Integrated Circuits Design Using Optimization With Recourse Including Ellipsoidal Uncertainty
, 2008
"... Abstract—Long design cycles due to the inability to predict silicon realities are a wellknown problem that plagues analog/RF integrated circuit product development. As this problem worsens for nanoscale IC technologies, the high cost of design and multiple manufacturing spins causes fewer products ..."
Abstract

Cited by 1 (0 self)
 Add to MetaCart
Abstract—Long design cycles due to the inability to predict silicon realities are a well-known problem that plagues analog/RF integrated circuit product development. As this problem worsens for nanoscale IC technologies, the high cost of design and multiple manufacturing spins causes fewer products to have the volume required to support full-custom implementation. Design reuse and analog synthesis make analog/RF design more affordable; however, increasing process variability and a lack of modeling accuracy remain extremely challenging for nanoscale analog/RF design. We propose ORACLE (Optimization with Recourse of Analog Circuits including Layout Extraction), a regular analog/RF IC design methodology using metal-mask configurability that combines reuse and shared use by formulating the synthesis problem as an optimization-with-recourse problem. Using a two-stage geometric programming with recourse approach, ORACLE solves for both the globally optimal shared variables and the application-specific variables. Furthermore, robust optimization is proposed to treat the design-under-variability problem, further enhancing the ORACLE methodology by providing a yield bound for each configuration of regular designs. The statistical variations of the process parameters are captured by a confidence ellipsoid. We demonstrate ORACLE for regular low-noise amplifier designs using metal-mask configurability, where a range of applications share a common underlying structure and application-specific customization is performed using the metal-mask layers. Two RF oscillator design examples are shown to achieve robust designs with guaranteed yield bounds. Index Terms—Configurable design, optimization with recourse, robustness, statistical optimization.
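The confidence-ellipsoid idea admits a compact worst-case check for a linearized performance. The sketch below is an assumption-laden illustration, not the paper's formulation: for f(p) = f0 + g . dp with dp confined to the ellipsoid dp' P^-1 dp <= k^2, the worst case over the ellipsoid is f0 - k * sqrt(g' P g); the nominal gain, sensitivities, covariance, and spec are all invented.

```python
import math

# Robust worst-case check under an ellipsoidal uncertainty set
# (linearized performance; all numbers assumed for illustration).
def worst_case(f0, g, P, k):
    n = len(g)
    gPg = sum(g[i] * P[i][j] * g[j] for i in range(n) for j in range(n))
    return f0 - k * math.sqrt(gPg)

f0 = 15.0                             # nominal gain in dB (assumed)
g = [2.0, -1.0]                       # sensitivities to two process parameters (assumed)
P = [[0.01, 0.002], [0.002, 0.04]]    # parameter covariance (assumed)
spec = 13.0                           # minimum acceptable gain (assumed)
wc = worst_case(f0, g, P, k=3.0)      # 3-sigma confidence ellipsoid
print(wc >= spec)                     # True: spec holds over the whole ellipsoid
```

If the worst case clears the spec, the design meets it for every parameter point inside the confidence ellipsoid, which is the sense in which the ellipsoid yields a guaranteed yield bound.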