### Table 4: Convergence table for Example 3 (plate with a hole), displaying the relative error in displacements (75) and the variation of plastic zones VPZ (76) per iteration step for various uniformly refined meshes.

A Newton-Like Solver for Elastoplastic Problems with Hardening and its Local Super-Linear Convergence

"... In PAGE 31: ... The displacement is multiplied by 100. Table 4 reports on the convergence of the Newton-like method.... ..."

### Table 2. Comparison of methods: N, FN, MRV, MRVF, S, BP, RS

"... In PAGE 10: ... For E and E_R the best possible result is 1, and larger values of the indices indicate a better result. The results from Table 1 are summarized in Table 2. The new methods appear to be fairly competitive with the considered Newton-like methods.... ..."

### Table 3. Example 3, Scheme 1

1993

"... In PAGE 12: ... projecting on span{u_0}. In (2.3) we set M_1 = ⟨s/|s|_H, ·⟩_H. The results illustrating the behaviour of k_n are given in Table 3. An interesting comparison can be made with the Newton-like method proposed in [19].... ..."

### Table 2. Exact entropy compression limits using greedy fill algorithm

"... In PAGE 3: ... These are the same test sets used for experiments in [Chandra 01ab], [Gonciari 03], and [Jas 03]. The compression values in Table 2 are calculated from the exact values of minimum entropy that were generated using the greedy fill algorithm. As can be seen from the table, the percentage compression that can be achieved increases with the symbol length.... In PAGE 3: ... As can be seen from the table, the percentage compression that can be achieved increases with the symbol length. No fixed-to-fixed or fixed-to-variable code using these particular symbol lengths can achieve greater compression than the bounds shown in Table 2 for these particular test sets. Note, however, that these entropy bounds would be different for a different test set for these circuits, e.... ..."
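The excerpt describes entropy bounds on achievable compression for a given symbol length. A minimal sketch of how such a bound can be computed from symbol frequencies (the function name and toy bitstream are my own; the paper's greedy fill algorithm computes exact minimum-entropy values, which this simple first-order frequency estimate only approximates):

```python
from collections import Counter
import math

def entropy_bound_percent(bitstream, symbol_len):
    """Upper bound on compression (in percent) for a fixed symbol length,
    from the first-order entropy of the symbol distribution."""
    # split the bitstream into non-overlapping symbols of symbol_len bits
    symbols = [bitstream[i:i + symbol_len]
               for i in range(0, len(bitstream) - symbol_len + 1, symbol_len)]
    counts = Counter(symbols)
    n = len(symbols)
    # Shannon entropy in bits per symbol
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    # each symbol costs symbol_len bits uncompressed, at best `entropy` bits coded
    return 100.0 * (1.0 - entropy / symbol_len)

# a highly skewed stream compresses well; longer symbols capture more structure
print(entropy_bound_percent("0000" * 6 + "1111" * 2, 4))  # ≈ 79.7 %
```

As in the excerpt, no fixed-to-fixed or fixed-to-variable code over these symbols can beat the entropy bound, and the bound depends on the particular test set.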

### Table 2. Illustration of cross-entropy calculation

1999

"... In PAGE 5: ... Crucially, given the hypothetical case above, System 1 would get much of the credit for assigning high probability, even if not the highest, to the correct sense. Just as crucially, an algorithm would be penalized heavily for assigning very low probability to the correct sense, as illustrated in Table 2. Optimal performance is achieved under this measure by systems that assign accurate probabilities, neither too conservative (System 3) nor too overconfident (Systems 2 and 4).... ..."

Cited by 1
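The scoring idea in the excerpt above — credit for assigning high probability to the correct sense, heavy penalty for near-zero probability — can be sketched as follows (a hypothetical illustration; the function name, sense labels, and probabilities are invented, not the paper's data):

```python
import math

def cross_entropy_score(assigned_probs, correct_sense):
    """Negative log probability assigned to the correct sense (lower is better).

    assigned_probs: dict mapping sense label -> probability.
    """
    p = assigned_probs.get(correct_sense, 0.0)
    if p <= 0.0:
        # assigning zero probability to the truth incurs an unbounded penalty
        return float("inf")
    return -math.log2(p)

# a system that hedges but still ranks the correct sense high scores well...
system1 = {"bank/finance": 0.6, "bank/river": 0.4}
# ...while an overconfident system betting on the wrong sense is punished
system2 = {"bank/finance": 0.01, "bank/river": 0.99}

print(cross_entropy_score(system1, "bank/finance"))  # ≈ 0.737 bits
print(cross_entropy_score(system2, "bank/finance"))  # ≈ 6.644 bits
```

This matches the excerpt's point: the measure rewards accurate probabilities rather than just picking the argmax sense.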

### Table 4. Minimum-cost schedule algorithms.

2006

"... In PAGE 6: ... The computation time of our proposed algorithm is small even if LP cannot handle the system of inequalities. Table 4 compares the computation times of our proposed minimum-cost schedule algorithm and the LP solvers. Our proposed algorithm consistently obtains optimal solutions faster than the others.... ..."

### Table 1: Results showing the mutual information, high-band entropies, and the ratio between the mutual information and the high-band entropies for different sound classes and different high-band parameterizations.

"... In PAGE 74: ... Each pair of sentences was played twice and in random order to increase the statistical significance of the test. The results from the listening test are displayed in Table 1 and show that our system is rated between equal and slightly worse compared to the original.... In PAGE 74: ... Table 1: Mean scores together with 95% confidence intervals of the CCR listening test when comparing CRoS, Q.25, and Q.... In PAGE 92: ... Table 1: Estimated manifold dimension, differential entropy (in bits), and the number of observations available for the different vowel classes. ... the linear prediction analysis a pre-emphasis filter (finite impulse response filter with transfer function F(z) = 1 - 0.97z^-1) was applied to the downsampled signal.... In PAGE 92: ... For the experiment we used K = 10 random codebooks ranging in size from M_1 = 2^⌊log2(N-1)-2⌋ to M_10 = N - 1, where ⌊·⌋ denotes rounding down to the nearest integer and N is the maximum number of available observations for each vowel class. From the results displayed in Table 1 we observe that the dimensionality of the space varies, ranging from five to seven. Thus, according to our experiments, the maximum number of parameters needed to describe a vowel is seven if the non-linear, approximately deterministic dependencies of the data are known.... In PAGE 93: ... The envelopes of the vowel spectra usually show three clear resonances (so-called formants) in the frequency range 300-3400 Hz, and have a negative spectral tilt. The formants can be specified by their location in frequency together with their corresponding bandwidths; thus the number of degrees of freedom is similar to the estimates of intrinsic dimensionality shown in Table 1. However, the exact mechanism underlying this behavior is outside the scope of this paper.... In PAGE 119: ... Finally, the estimate of the lower bound on the prediction error Pe is obtained using (2) with the estimate of H(Y|X).
The procedure is summarized in Table 1. Note that the procedure does not affect the error-rate estimates negatively when the class-conditional feature spaces show no intrinsic dimensionality, since no noise would be added.... In PAGE 119: ... Finally, estimate the lower bound on the prediction error Pe using (2) and (3). Table 1: Estimation of error probability. 5 Experiments and results. In this section we present three experiments.... In PAGE 120: ... In the second and third experiments we estimate the classification error probability for an artificial and a real-world case, respectively. We use the estimation procedure outlined in Table 1 for this purpose. The experiments show the importance of constraining the resolution of the class-conditional feature spaces when the feature spaces have an intrinsic dimensionality that is different from the extrinsic dimensionality (i.... In PAGE 122: ... space, we need to constrain the resolution of the space before estimating the mutual information between the features and the class labels. As discussed in Section 3, it is desirable to constrain the resolution as little as possible, and we use the automatic estimation procedure outlined in Table 1 for this purpose. For the experiment we used the following configuration: number of observations Ly = 20000, number of subsets M = 5, logarithm of the minimum number of observations for a class set to log2(Ly) - 1, slope threshold = 0.1.... In PAGE 122: ... The level of noise was controlled by varying it (starting from 0 and then incrementing by 0.05 at step 5 in Table 1). The uniformly distributed noise type simplifies the calculation of the true (constrained-resolution) mutual information, useful for evaluating the accuracy of the estimated mutual information.... In PAGE 137: ... By selecting the quantization step-size in this manner, the entropy is equal to the differential entropy for the high-band represented by the LER.
The ratio between the mutual information and the entropy in the high band is presented in Table 1, showing in percent the decrease in uncertainty of the high-band when we observe the narrow-band spectral envelope. The results show that the ratio between the mutual information and the perceived entropy of the high-band is fairly low regardless of sound class.... ..."
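Among the details quoted above, the pre-emphasis filter F(z) = 1 - 0.97z^-1 is concrete enough to sketch. A minimal implementation of the corresponding difference equation y[n] = x[n] - 0.97·x[n-1] (function name my own; the papers apply this to downsampled speech before linear prediction analysis):

```python
def pre_emphasis(x, alpha=0.97):
    """FIR pre-emphasis filter with transfer function F(z) = 1 - alpha*z^-1,
    i.e. y[n] = x[n] - alpha * x[n-1] (x[-1] taken as 0)."""
    if not x:
        return []
    y = [x[0]]
    for n in range(1, len(x)):
        y.append(x[n] - alpha * x[n - 1])
    return y

# flattens the negative spectral tilt of speech by boosting high frequencies:
# a constant (DC) input is almost cancelled, while rapid changes pass through
print(pre_emphasis([1.0, 1.0, 1.0]))
```

The design choice is standard in speech processing: attenuating the low-frequency dominance whitens the spectrum and conditions the signal for linear prediction.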

### Table 1. Equal Distribution of Funds

2006

"... In PAGE 8: ... They should hence expect an equal share of the CPUs. We note, however, from the results summarized in Table 1 that users 3-5 received a much lower quality of service, here defined as the number of jobs that can be processed within a time unit, because the best-response algorithm found it too expensive to fund more than a very low number of hosts. One possible solution to this issue would be to let the user hold back on submitting if a minimum threshold of hosts to bid on is not met.... ..."

Cited by 4

### Table 7: Comparison of Entropy of a B-frame with Different Wavelet Filters for Algorithm C-I.

"... In PAGE 28: ... It is therefore necessary to analyze the entropy of the B-frames along with that of the P-frames. Table 7 shows the average entropy of DRWs for a typical B-frame. It can be seen that the same two filters, Haar and Spline, with almost equal-length filters, are the winners, closely followed by Daubechies' 4-tap filter.... ..."
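As a rough illustration of why filter choice affects entropy (this is my own minimal stand-in, not the paper's DRW/B-frame setup): a wavelet that matches the signal structure concentrates energy in few coefficients, leaving a low-entropy detail band.

```python
import math
from collections import Counter

def haar_1d(x):
    """One level of a simple Haar wavelet transform: pairwise averages
    (approximation) and pairwise half-differences (detail)."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return approx, detail

def entropy_bits(values):
    """First-order Shannon entropy (bits/value) of a discrete sequence."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# piecewise-constant data: Haar details are almost all zero -> near-zero entropy
signal = [10, 10, 10, 10, 50, 50, 10, 10]
approx, detail = haar_1d(signal)
print(entropy_bits(detail))  # 0.0 for this signal
```

Shorter, well-matched filters (Haar, Spline in the excerpt) tend to produce such sparse, low-entropy subbands on blocky video residue, which is consistent with the comparison reported in Table 7.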

### Table 2: Examples of Rules Generated by the Minimum Entropy Method.

1994

"... In PAGE 23: ... In accord with the model procedures, we have implemented segmentation, feature extraction (Table 1), and rule generation-weight estimation on both sets of training data. Examples of rule bounds are shown in Table 2, using the minimum entropy technique, just for the unary features (U.x) as outlined in Table 1.... ..."

Cited by 8