### Table 1: GA parameters that lead to fast convergence

"... In PAGE 5: ... maximum fitness vs. number of iterations on problem 7 in Table 1 for both. CIGAR replaces fifteen percent (15%) of the population with individuals from the case base every four (4) generations.... ..."
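The snippet describes CIGAR's injection policy: every few generations, a fixed fraction of the population is replaced by individuals drawn from a case base. A minimal Python sketch of that policy follows; the function names, the tuple representation `(genome, fitness)`, and the replace-the-weakest choice are illustrative assumptions, not details from the paper.

```python
import random

def inject_cases(population, case_base, fraction=0.15, rng=random):
    """Replace the weakest `fraction` of the population with individuals
    drawn from the case base (a sketch of CIGAR-style case injection).
    Individuals are assumed to be (genome, fitness) tuples."""
    n_replace = max(1, int(len(population) * fraction))
    # Sort ascending by fitness so the weakest individuals come first.
    survivors = sorted(population, key=lambda ind: ind[1])[n_replace:]
    injected = [rng.choice(case_base) for _ in range(n_replace)]
    return survivors + injected

def evolve(population, case_base, generations=20, period=4):
    """Skeleton generational loop: injection fires every `period`
    generations, matching the every-four-generations schedule above."""
    for gen in range(1, generations + 1):
        # ... selection / crossover / mutation would go here ...
        if gen % period == 0:
            population = inject_cases(population, case_base)
    return population
```

The population size is preserved: injected case-base individuals displace exactly the individuals that were removed.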

### Table 4: Results for different converging algorithms using the fast-converging backoff. The invariance and values found for convergent profiling and random sampling are compared to a value profile of the complete execution of the program.

1999

"... In PAGE 30: ... The more times the instruction converges, the longer the backoff period. Table 4 shows the performance of the convergent profiler when using the upper and lower change-in-invariance bounds for determining convergence. The Prof column shows the percentage of executed load instructions profiled.... In PAGE 31: ... This sampling and random backoff continues until the program quits. Table 4 shows that when using the Conv(Inc/Dec) heuristic, better results are achieved for the difference in invariance. However, both the convergent and random algorithms found all the top values when compared to the full-length profile.... ..."

Cited by 68

### Table 6: Results for different converging algorithms using the fast-converging backoff. The invariance and values found for convergent profiling and random sampling are compared to a value profile of the complete execution of the program.

1999

"... In PAGE 28: ... The more times the instruction converges, the longer the backoff period. Table 6 shows the performance of the convergent profiler when using the upper and lower change-in-invariance bounds for determining convergence. The Prof column shows the percentage of executed load instructions that had profiling turned on.... In PAGE 29: ...s taken. Then profiling for the instruction is turned off for a random amount of time using the two bounds. This sampling and random backoff continues until the program finishes execution. Table 6 shows that when using the Conv(Inc/Dec) heuristic, better results are achieved for the difference in invariance. However, both the convergent and random algorithms found all the top values when compared to the full-length profile.... In PAGE 29: ... It depends on the sampling regimen and the amount of time the program is profiled. We actually examine several different backoff algorithms and different sampling periods, but only showed results in Table 6 for a sampling configuration that gave good performance and the same profiling time as convergent profiling. Results for additional random sampling algorithms and random backoff thresholds can be found in [29].... ..."

Cited by 68
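The excerpts describe the core loop of convergent profiling: profile a load's values, check whether the invariance (fraction of samples hitting the top value) has stopped changing, and back off for a random period that grows with each successive convergence. A minimal Python sketch of that loop is below; the threshold, backoff bounds, and check interval are illustrative guesses, not the paper's settings.

```python
import random
from collections import Counter

def convergent_profile(stream, conv_threshold=0.01, backoff_bounds=(50, 200),
                       check_every=100, rng=random):
    """Sketch of convergence-triggered backoff profiling for a single
    load instruction. `stream` yields the values the load produces."""
    counts = Counter()
    profiled = 0            # samples actually profiled (the "Prof" fraction)
    prev_invariance = 0.0
    convergences = 0
    skip = 0
    for value in stream:
        if skip:            # profiling turned off: ignore this sample
            skip -= 1
            continue
        counts[value] += 1
        profiled += 1
        if profiled % check_every == 0:
            # invariance = fraction of profiled samples hitting the top value
            invariance = counts.most_common(1)[0][1] / profiled
            if abs(invariance - prev_invariance) < conv_threshold:
                convergences += 1
                # the more times it converges, the longer the random backoff
                skip = rng.randint(*backoff_bounds) * convergences
            else:
                convergences = 0
            prev_invariance = invariance
    top_value = counts.most_common(1)[0][0] if counts else None
    return top_value, profiled
```

Because the backoff multiplier grows with each convergence, a highly invariant load is profiled for only a small fraction of its executions while its top value is still identified.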


### Table 6.3: Example 1. Stable fast local convergence of Algorithm 3.2.

### Table 3: Number of Iterations for Convergence

Although in both cases our PCG has an order-of-log(l) overhead per iteration compared with the BGS, the fast convergence rate of our method more than compensates and does much better overall. We conclude that among all the methods and numerical examples, our preconditioner C gives both the fastest convergence rate and the least total cost to convergence. 6 Concluding Remarks. In this paper, we study circulant preconditioners for common stochastic automata networks. The construction cost of our preconditioner is low, and practical examples are given to demonstrate the fast convergence rate of our method. Further research will be done for more general stochastic automata networks.

### Table 1: Comparison of the convergence rate for the fast Maxnet and the Maxnet. Columns: Data set | Fast Maxnet | Maxnet | Ratio

"... In PAGE 4: ... 1. Comparison of the Convergence Rate If the Maximum is Unique. After both algorithms are applied, the results for uniform distributions are listed in Table 1. These results include (1) the required iterations for convergence of both models, and (2) their respective ratios.... In PAGE 5: ... The effects of mutual inhibitions decrease gradually. Furthermore, Table 1 gives another indication that the convergence ratio (processor nodes / iterations) of the Maxnet decreases as the input data increase. So, the convergence rate is very slow if the input data are very numerous.... In PAGE 5: ... Comparison of Computational Steps. The steps that must be performed serially at each iteration are 4 and 2 for the fast Maxnet and the Maxnet, respectively. The total computational steps for uniform distributions are listed in the parentheses of Table 1. The slow convergence rate of the Maxnet when the input data are very numerous is very conspicuous.... ..."
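For context, the baseline being compared against is the standard Maxnet: a winner-take-all network in which every node inhibits all the others each iteration until only one stays positive. A minimal Python sketch of that baseline follows (the fast-Maxnet variant from the table is not reproduced here); the inhibition weight `eps` must be below 1/(number of nodes) for convergence, and the default chosen below is an illustrative assumption.

```python
def maxnet(inputs, eps=None):
    """Standard Maxnet winner-take-all iteration: each node subtracts
    eps times the sum of the *other* activations, clipped at zero,
    until a single node remains positive."""
    a = list(inputs)
    n = len(a)
    if eps is None:
        eps = 1.0 / (n + 1)          # a safe mutual-inhibition weight
    iterations = 0
    while sum(x > 0 for x in a) > 1:
        total = sum(a)
        # mutual inhibition: each node loses eps * (sum of the others)
        a = [max(0.0, x - eps * (total - x)) for x in a]
        iterations += 1
    winner = max(range(n), key=lambda i: a[i])
    return winner, iterations
```

The iteration count returned here is exactly the quantity tabulated in Table 1, and it grows with the number of competing nodes, which is the slow-convergence effect the snippet describes.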


### Table 2. The NLMS algorithm is a special implementation of the LMS algorithm with more stable and faster convergence properties. It takes the variation in the level of the input signals into account when selecting the normalized step size μ. Convergence of the NLMS algorithm is guaranteed for a stationary process when 0 < μ < 2.

2005

"... In PAGE 21: ... Therefore, when the magnitude of x[n − k] is large, the LMS algorithm suffers from a gradient noise amplification [52, 53] problem. To overcome this problem, the normalized LMS (NLMS), shown in Table 2, was introduced. The NLMS algorithm is a special implementation of the LMS algorithm, which has more stable and faster convergence properties.... ..."
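The normalization described above can be made concrete: the NLMS update divides the LMS step by the instantaneous input energy, so large-magnitude inputs no longer amplify the gradient noise. A minimal Python sketch, assuming a transversal (FIR) adaptive filter and the usual small regularizer to avoid division by zero:

```python
import numpy as np

def nlms_filter(x, d, num_taps=8, mu=0.5, eps=1e-8):
    """NLMS adaptive filter: w is updated by the error times the input
    vector, scaled by mu over the input energy ||x_n||^2 (plus eps).
    Stability requires 0 < mu < 2 for a stationary input."""
    w = np.zeros(num_taps)
    y = np.zeros(len(x))            # filter output
    e = np.zeros(len(x))            # error signal d - y
    for n in range(num_taps, len(x)):
        x_n = x[n - num_taps:n][::-1]     # most recent samples first
        y[n] = w @ x_n
        e[n] = d[n] - y[n]
        # normalized step: the division by x_n @ x_n is what makes it NLMS
        w = w + (mu / (eps + x_n @ x_n)) * e[n] * x_n
    return w, e
```

A standard sanity check is system identification: feed white noise through a known FIR filter to produce the desired signal d, and the adapted weights should recover the filter's taps.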

### Table 2: Number of Iterations for PCG and BGS

Although in both cases our PCG has an order-of-log(l) overhead per iteration compared with the BGS, the fast convergence rate of our method more than compensates. We conclude that among all the methods and numerical examples, our preconditioner C gives both the fastest convergence rate and the least total cost to convergence. 6 Concluding Remarks. In this paper, we study circulant preconditioners for common stochastic automata networks. The construction cost of our preconditioner is low, and practical examples are given to demonstrate the fast convergence rate of our method. Further research will be done for more general stochastic automata networks.

"... In PAGE 9: ... The preconditioned matrix has singular values clustered around one when h tends to infinity [9]. In our numerical examples, we set the parameters to 1, 3/2, and 3 respectively. Table 2 lists the number of iterations required for convergence for each of the methods.... ..."
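The appeal of a circulant preconditioner is that applying its inverse costs only O(n log n), because any circulant matrix is diagonalized by the FFT. A minimal Python sketch combining textbook PCG with an FFT-based circulant solve is below; the tridiagonal Toeplitz test matrix and the Strang-style first column used in the test are illustrative assumptions, not the paper's preconditioner C.

```python
import numpy as np

def pcg(A, b, solve_M, tol=1e-10, maxiter=200):
    """Textbook preconditioned conjugate gradient; solve_M applies
    the inverse of the preconditioner M to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = solve_M(r)
    p = z.copy()
    rz = r @ z
    for it in range(1, maxiter + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, it
        z = solve_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

def circulant_solver(c):
    """Return a function computing M^{-1} r for the circulant M whose
    first column is c, in O(n log n) via the FFT (the FFT diagonalizes
    every circulant, so the solve is an elementwise division)."""
    lam = np.fft.fft(c)
    return lambda r: np.real(np.fft.ifft(np.fft.fft(r) / lam))
```

With a well-chosen circulant M, the preconditioned spectrum clusters around one and PCG converges in far fewer iterations than the matrix dimension, which is the trade-off the excerpt weighs against the log(l) per-iteration overhead.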