### Table 3. Estimation error (compared to Monte Carlo with 10^6 samples) and computation cost: linear regression, MC (10^4 runs), and APEX

2004

"... In PAGE 7: ... E. Comparison of Accuracy and Speed Table 3 compares the accuracy and speed for three different probability extraction approaches: linear regression, Monte Carlo analysis with 10^4 samples, and the proposed APEX approach. Several specific points on the cumulative distribution function are utilized for comparing the accuracy.... In PAGE 7: ... After the cumulative distribution function is explicitly obtained in the closed-form expression (10), the best-case delay, worst-case delay and any other specific points on the CDF can be easily found using a binary search algorithm. The error values in Table 3 are calculated against the exact CDF obtained by Monte Carlo simulation with 10^6 samples. Note from Table 3 that the linear regression approach has the largest error.... In PAGE 7: ... The error values in Table 3 are calculated against the exact CDF obtained by Monte Carlo simulation with 10^6 samples. Note from Table 3 that the linear regression approach has the largest error. APEX achieves more than 200x speedup over the Monte Carlo analysis with 10^4 samples, while still providing better accuracy.... ..."

Cited by 16
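The APEX excerpt above describes reading specific CDF points (best-case and worst-case delay) off a closed-form CDF with a binary search. A minimal sketch of that idea, using a stand-in Gaussian CDF rather than APEX's actual expression (10):

```python
import math

def cdf(x, mu=1.0, sigma=0.1):
    # Placeholder closed-form CDF (Gaussian); APEX would supply its own expression.
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def cdf_point(p, lo=-10.0, hi=10.0, tol=1e-9):
    """Binary search for the delay d with cdf(d) == p, given a monotone CDF."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

worst_case = cdf_point(0.99)   # e.g. the 99th-percentile delay
```

Because the CDF is monotone, the search converges in about log2(range/tol) evaluations, which is why such points are cheap once a closed form exists.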

### Table A.4: Monte Carlo Analysis of Measurement Error in Nonlinear Regressions.

### Table 1: Empirical size and power for T = 30 and N = 20 (columns: LL, IPS, UB)

2000

"... In PAGE 17: ...05. TABLE 1 ABOUT HERE Table 1 presents the rejection frequencies for the different tests. For p > 0 the LL test turns out to be quite conservative.... ..."

Cited by 3

### Table 2: Monte Carlo power for Qn at 5% significance level

"... In PAGE 9: ... For sample sizes n = 25, 40 and 100, power numbers at the approximate 5% significance level are obtained by generating m = 10,000 samples from each alternative, computing Qn for each sample, and comparing with the approximate 95% quantile from Table 1. The results are displayed in Table 2, where we observe that our statistic offers very good power against this set of alternatives (as compared, for instance, with the three statistics presented in Coles, 1989, Table 2), with the exception of the χ²₁ alternative, which constitutes a rather difficult alternative in testing for the Weibull family. Although in this article the discussion has been centered on fully observed data (uncensored), we believe that it is possible to adapt the methodology proposed here to the context of type I right censored data.... ..."
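The power-estimation recipe in the excerpt (generate m samples from each alternative, compute the statistic for each, and compare with the null 95% quantile) can be sketched generically. The sample-mean statistic and the quantile 0.329 below are stand-ins for illustration, not the paper's Qn or its critical values:

```python
import random

def power(statistic, draw_sample, null_q95, m=10000, n=25):
    """Fraction of m simulated size-n samples whose statistic exceeds the 95% null quantile."""
    rejections = 0
    for _ in range(m):
        sample = [draw_sample() for _ in range(n)]
        if statistic(sample) > null_q95:
            rejections += 1
    return rejections / m

random.seed(0)
mean_stat = lambda s: sum(s) / len(s)
# 0.329 ~ 1.645 / sqrt(25): approximate 95% null quantile of the mean for N(0,1) data
size = power(mean_stat, lambda: random.gauss(0, 1), null_q95=0.329)  # should be near 0.05
pwr = power(mean_stat, lambda: random.gauss(1, 1), null_q95=0.329)   # should be near 1
```

Drawing from the null itself recovers the empirical size, which is the usual sanity check before reading off power numbers.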

### Table 1: Comparison of Monte Carlo and quasi-Monte Carlo methods used to value a coupon bond

1998

"... In PAGE 21: ... For random Monte Carlo, the constant c is the standard deviation, and α = 0.5. Table 1 summarizes the results. For each method, the estimated size of the error at N = 10000 (based on the linear fit), the convergence rate α, and the approximate computation time for one run with this N are given.... In PAGE 25: ... Figure 2 displays these results in terms of the estimated computation time. In Table 1 it can be seen that there is in fact a computational advantage to using quasi-random sequences over random for this problem. This is due to the time required for sequence generation.... ..."

Cited by 15
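The random vs. quasi-random comparison in the excerpt (error roughly c·N^(-α), with α = 0.5 for random Monte Carlo and a faster rate for low-discrepancy sequences) can be illustrated on a toy integral. The integrand is a stand-in for the bond payoff, and the van der Corput sequence stands in for whatever quasi-random generator the paper used:

```python
import random

def van_der_corput(i, base=2):
    # Radical inverse of the integer i: mirrors its base-`base` digits across the point.
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, rem = divmod(i, base)
        x += rem / denom
    return x

def estimate(points, f):
    return sum(f(x) for x in points) / len(points)

f = lambda x: x * x      # toy integrand; the exact integral over [0,1] is 1/3
N = 10000
random.seed(0)
mc = estimate([random.random() for _ in range(N)], f)
qmc = estimate([van_der_corput(i + 1) for i in range(N)], f)
```

At the same N, the low-discrepancy estimate is typically an order of magnitude or more closer to 1/3 than the pseudo-random one, which is the computational advantage the excerpt refers to.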

### Table 7 Monte Carlo Analysis of Return Predictability

2003

"... In PAGE 21: ... For each replication, we estimate the unconstrained cross-sectionally restricted trivariate VAR(1) for returns, liquidity, and dividend yields using the pooled MLE methodology presented above. Table 7 presents some relevant percentiles of the empirical distribution for the coefficients comprising the first row of A, and for their corresponding t-statistics. First we focus on the relation between returns and the lagged liquidity variable.... In PAGE 21: ... In sum, the Monte Carlo evidence shows that the impact of market liquidity on future returns is not a statistical artifact. Table 7 also presents some relevant percentiles for the coefficient describing the relationship between returns and the lagged dividend yields. The median coefficient is 1.... ..."

Cited by 5

### Table 5: Monte Carlo Simulation Analysis

"... In PAGE 8: ... As stated above, the purpose of this analysis is to establish confidence levels of results. The summary in Table 5 shows that significant percent returns on investment can be expected with high confidence over a broad range of assumptions. In particular, there is a 50% probability that the ROI will exceed 1440 percent, and a 90% probability that it will exceed 615%.... In PAGE 9: ... Table 5: All input parameters and nominal values set for the base case 100,000 300 10.6 26% $7.... ..."
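The confidence statements in the excerpt ("a 90% probability that the ROI will exceed 615%") come from reading empirical percentiles off simulated outcomes. A sketch with an invented ROI model and input ranges, not the paper's parameters:

```python
import random

def simulate_roi():
    # Hypothetical inputs: one-draw cost and benefit from invented ranges.
    cost = random.uniform(80, 120)
    benefit = random.uniform(200, 2000)
    return 100.0 * (benefit - cost) / cost   # ROI in percent

random.seed(0)
rois = sorted(simulate_roi() for _ in range(10000))
p50 = rois[len(rois) // 2]     # ROI exceeded with ~50% probability
p10 = rois[len(rois) // 10]    # ROI exceeded with ~90% probability
```

Sorting once and indexing gives every exceedance level at no extra cost, which is how a table like the paper's Table 5 is filled in from a single batch of runs.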

### Table 1. Prediction errors and run-time for Monte Carlo and for the bounding algorithm (double entries are values for 20/50 paths).

"... In PAGE 5: ... The bounds generated by the algorithm follow the Monte Carlo distribution very closely (Figure 1 and 2a). Table 1 shows the errors of both upper and lower bounds with respect to the exact distribution. The bounds were computed for N=20 and 50 longest paths.... In PAGE 5: ...benchmark circuits is 0.7%. Figure 2b demonstrates that accurately accounting for node delay correlations is crucial in predicting the shape of the cdf: the mean value of an uncorrelated case is larger than that of the correlated one, while the spread is much smaller. Table 1 also contains the evaluation of the run time of the algorithm. The Monte Carlo (for 1000 samples) is substantially slower than our algorithm.... ..."
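The correlation effect noted in the excerpt (a larger mean but much smaller spread for the maximum of uncorrelated path delays than for correlated ones) is easy to reproduce with a toy model in which path delays share a common component; all numbers below are invented:

```python
import random, statistics

def max_delay(n_paths=20, rho=0.0):
    # Each path delay = nominal + shared component (weight rho) + private noise.
    common = random.gauss(0, 1)
    return max(10 + rho * common + (1 - rho) * random.gauss(0, 1)
               for _ in range(n_paths))

random.seed(0)
uncorr = [max_delay(rho=0.0) for _ in range(5000)]  # independent path delays
corr = [max_delay(rho=0.9) for _ in range(5000)]    # strongly correlated delays
```

With independent paths the max picks out extreme private noise every time (higher mean, tight distribution); with a dominant shared component the max tracks the common draw (lower mean, wide distribution), matching the cdf-shape argument in the excerpt.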

### Table 2: Computational complexity

6 Conclusion. Iterative Monte Carlo algorithms are presented and studied. These algorithms can be used for solving inverse matrix problems. The following conclusions can be drawn:

- Every element of the inverse matrix A^{-1} can be evaluated independently from the other elements (this illustrates the inherent parallelism of the algorithms under consideration);
- Parallel computations of every column of the inverse matrix A^{-1} with different iterative procedures can be realized;
- It is possible to improve the algorithm using the error estimate criterion "column by column".

1998

"... In PAGE 16: ... During the numerical tests we compute the Frobenius norm (17) of the residual matrix. Some of the numerical results are shown in Figures 1-8 and in Table 2. In all Figures the value of the Frobenius norm of the residual matrix E is denoted by F.... In PAGE 17: ... When the coarse stop criterion is used, ε = 0.0001. When the fine stop criterion is used, different values ε₁, ..., ε₇ of ε are applied such that the computational complexity is smaller than with the coarse stop criterion (see also Table 2). The values of the Frobenius norm for both cases when the number of realizations n is equal to 400 are also given.... In PAGE 18: ... F.N. 0.07305 (coarse stop criterion); F.N. 0.05654 (fine stop criterion); Figure 3: Non-controlled balance; Figure 4: Controlled balance. When the computational complexity for the different values ε = ε_i, i = 1, ..., 7 is denoted by R_f, we consider the case when R_c ≥ R_f. The results presented in Figures 3 and 4 show that, apart from its smaller computational complexity R_f, the fine stop criterion algorithm gives better results than the coarse stop criterion algorithm with complexity R_c (see Table 2). This fact is observed in both cases - balanced (Figure 3) and non-balanced (Figure 4).... ..."

Cited by 2
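The conclusion's parallelism claim (every column of A^{-1} solves an independent system A x = e_j) and the Frobenius-norm residual check from the excerpt can be sketched with a plain Jacobi iteration standing in for the paper's Monte Carlo iterative scheme; the matrix below is an invented diagonally dominant example:

```python
def solve_column(A, j, n_iter=200):
    """Jacobi iteration for A x = e_j; the result approximates column j of A^{-1}."""
    n = len(A)
    x = [0.0] * n
    for _ in range(n_iter):
        x = [(float(i == j) - sum(A[i][k] * x[k] for k in range(n) if k != i))
             / A[i][i] for i in range(n)]
    return x

def frobenius_residual(A, cols):
    # Frobenius norm of I - A*X, where cols[j] is the j-th column of X ~ A^{-1}.
    n = len(A)
    s = 0.0
    for i in range(n):
        for j in range(n):
            r = float(i == j) - sum(A[i][k] * cols[j][k] for k in range(n))
            s += r * r
    return s ** 0.5

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 4.0]]  # diagonally dominant
cols = [solve_column(A, j) for j in range(3)]  # each column computed independently
```

Because the column solves share nothing, they can run in parallel, and the residual norm gives exactly the kind of coarse/fine stop criterion the excerpt compares.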