### Table 1: Lyapunov exponent estimates

in On the estimation of invariant measures and Lyapunov exponents arising from iid compositions of maps

"... In PAGE 16: ... 0, 1, 2, where D_m(k) is an approximation of the action of A_k on RP^1. Using (28) we produce estimates λ_m of λ(0) = log(2)/3, shown in Table 1. These estimates are compared with plots of the partial sums S_N on an iteration-for-iteration basis (N = 3m) in Figure 3. From log-log fits of columns 3 and 4 of Table 1, the error in our estimates appears to be O(m^{-2}), while we expect the errors from the random iteration method to be O(m^{-1/2}). This first example is not really a fair comparison, as the invariant measure is easily approximated by the eigenvectors d_m because of its simple structure. ... ..."

Cited by 1
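The random iteration method mentioned in the excerpt estimates the top Lyapunov exponent from partial sums of log-norms along a randomly iterated direction. A minimal sketch of that Monte Carlo scheme, assuming a hypothetical iid pair of 2x2 matrices (these are illustrative stand-ins, not the maps or the λ(0) = log(2)/3 example from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical iid family of two 2x2 matrices (illustrative only).
A0 = np.array([[2.0, 0.0], [0.0, 0.5]])
A1 = np.array([[0.0, 0.5], [2.0, 0.0]])

def lyapunov_partial_sum(N, mats, rng):
    """Random iteration estimate of the top Lyapunov exponent:
    S_N / N, where S_N accumulates log ||A v|| along a randomly
    chosen product of matrices applied to a unit vector."""
    v = np.array([1.0, 0.0])
    S = 0.0
    for _ in range(N):
        A = mats[rng.integers(len(mats))]  # pick a matrix uniformly iid
        w = A @ v
        norm_w = np.linalg.norm(w)
        S += np.log(norm_w)
        v = w / norm_w  # renormalize to avoid overflow
    return S / N

est = lyapunov_partial_sum(100_000, [A0, A1], rng)
```

As the excerpt notes, such Monte Carlo estimates converge slowly (error of order N^{-1/2} in the number of iterations), which is what the paper compares its transfer-operator estimates against. As a sanity check, iterating the single diagonal matrix A0 from the unit vector (1, 0) gives exactly log 2 per step.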

### Table 5.2: Error rates for the different architectures on the 20-Class Gaussian problem, partially disjoint data

2003

### Table 1. Sequentialization results

"... In PAGE 6: ... 8 GHz PC running Linux with 2 GB RAM. Table 1 shows the results of sequentializing all the designs according to the algorithm described in Algorithm 2. For each design benchmark, we give the code size in LOC (lines of code), the total number of behaviors and channels utilized, the reported deadlock and race-condition errors, and the total runtime in seconds. ... In PAGE 6: ... This is for the symbolic simulation purpose. The results of equivalence checking of the designs in Table 1 are presented in Table 2. The first two columns present a pair of designs for equivalence checking. ... ..."

### Table 2 Error rates for the different architectures on the 20-class Gaussian problem

in www.elsevier.com/locate/patcog Adaptive fusion and co-operative training for classifier ensembles

2006

"... In PAGE 9: ... 3. 20-class Gaussian problem For the 20-class Gaussian data, the training of each network was repeated 10 times with different initializations. Table 2 presents the error rates of the various combining methods for the 20-class data. This table also summarizes the results for partially disjoint and disjoint training subsets. ... In PAGE 10: ... This was probably due to the biased classifiers obtained from the disjoint data, and could be analogous to the poor performance attained by combining disjoint modular neural networks by averaging. Table 2 also illustrates that trained approaches performed better than standard combining approaches. However, the differences between the various approaches within these two groups were marginal due to the highly correlated ensembles. ... In PAGE 11: ... Clearly, both the detectors and aggregation modules contributed to the reduction of classification error. Table 2 lists the average classification errors achieved by the best classifier in the ensemble. Note that the classifiers trained by the co-operative algorithm displayed a higher individual error rate, yet the combination approaches yielded an improved performance. ... ..."
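The excerpt contrasts standard (untrained) combining, such as averaging the ensemble members' outputs, with trained combining approaches. A minimal sketch of both, assuming each classifier emits class-posterior probabilities (the function names and the weighting-by-validation-accuracy choice are illustrative, not the paper's algorithms):

```python
import numpy as np

def average_combine(probs):
    """Standard (untrained) combiner: average the class-posterior
    outputs of all ensemble members, then take the argmax class.
    probs: array of shape (n_classifiers, n_samples, n_classes)."""
    return np.mean(probs, axis=0).argmax(axis=1)

def weighted_combine(probs, weights):
    """A minimally 'trained' combiner: weight each member (e.g. by
    its validation accuracy) before averaging the posteriors."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize weights to sum to 1
    # Contract the classifier axis: result has shape (n_samples, n_classes).
    return np.tensordot(w, probs, axes=1).argmax(axis=1)
```

When the ensemble members are highly correlated, as the excerpt notes, the weighted and unweighted combiners tend to produce nearly identical decisions, which is consistent with the marginal differences reported there.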

### Table 1: Gaussian errors

"... In PAGE 25: ... The Monte Carlo results lead to four preliminary conclusions: i) The ARCH parameters (ω and α) are very badly estimated by OLS. This inefficiency is more and more striking as one goes from Table 1 to Table 3. While the heteroskedasticity parameter is underestimated by OLS by almost 20 percent in the Gaussian case, it is underestimated by almost 50 percent in the gamma case, that is, when both leptokurtosis and skewness are present. ... ..."
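The OLS approach criticized in the excerpt rests on the AR(1) representation of an ARCH(1) process: squared innovations satisfy eps_t^2 = ω + α·eps_{t-1}^2 + v_t, so (ω, α) can be estimated by regressing eps_t^2 on a constant and eps_{t-1}^2. A minimal sketch of that regression on simulated Gaussian ARCH(1) data (the function names and parameter values are illustrative; this reproduces the estimator, not the paper's Monte Carlo design):

```python
import numpy as np

def simulate_arch1(T, omega, alpha, rng):
    """Simulate an ARCH(1) process: eps_t = sigma_t * z_t with
    sigma_t^2 = omega + alpha * eps_{t-1}^2 and z_t ~ N(0, 1)."""
    eps = np.zeros(T)
    sigma2 = omega / (1.0 - alpha)  # start at the unconditional variance
    for t in range(T):
        eps[t] = np.sqrt(sigma2) * rng.standard_normal()
        sigma2 = omega + alpha * eps[t] ** 2
    return eps

def ols_arch1(eps):
    """OLS fit of eps_t^2 on a constant and eps_{t-1}^2,
    returning estimates (omega_hat, alpha_hat)."""
    y = eps[1:] ** 2
    X = np.column_stack([np.ones(len(y)), eps[:-1] ** 2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [omega_hat, alpha_hat]

rng = np.random.default_rng(1)
eps = simulate_arch1(20_000, omega=0.2, alpha=0.5, rng=rng)
omega_hat, alpha_hat = ols_arch1(eps)
```

This regression is the simple estimator whose poor finite-sample behavior, especially under the heavy-tailed and skewed innovations, motivates the comparison in the paper's tables.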

### Table II: The performance of the partial channel sharing scheme

### Table 6: Partial Dialogue Function Determination in Words (Feedback)

### Table 6. Correlation Matrix, Wage Feedback Model (Standard Errors)

### Table 4: Feedback behavior of the simulated users with complete (user 0) and partial feedback (users 1 to 4)

1999

Cited by 2