### Table 3. Comparison of E_n calculated by Rayleigh-Schrödinger perturbation theory and by the asymptotic formula


### Table 5.4 Asymptotic convergence factors, bound predicted by theory, and 'improvement' of observed over predicted for the plane-stress problem with h = 1/64.

### Table 1 Asymptotic properties.

1998

"... In PAGE 15: ... Such a simple example suffices to demonstrate the poorness of the asymptotic approximation. The results are given in Table 1, and it is seen that even for very large sample sizes the approximation is not good. In fact a rectangular weight function on [−1, 1] gave somewhat better results, the empirical sizes for L̂2(M1), L̂2(M′1) and L̂2(M″1) being 0.... In PAGE 15: ... As in Hjellvik and Tjøstheim (1995), a better finite-sample fit can be obtained by using a gamma distribution (or χ² distribution), but the problem of a very poor approximation to the location and scale parameters persists (cf. Table 1). Better approximation in a special case using a fixed experimental design has been reported by Poggi and Portier (1995).... In PAGE 17: ...1) with a = 0.5. By comparison to Table 1 it is seen that the results obtained represent a vast improvement over those which could be achieved using asymptotic theory, where the size would be drastically underestimated for L̂2(M1) and overestimated for L̂2(M′1) and L̂2(M″1). It is seen that L̂0(M1) collapses when h ≥ 1.0; the other statistics seem to be quite independent of h.... In PAGE 37: ... The bandwidth is cross-validated according to Table 3, and the upper limit of the estimated order of the autoregressive fit is p̂ = n/10. Table captions Table 1: The ratio between the asymptotic values given by Theorem 3.2 and simulated values for the mean and standard deviation of L̂2(M1), L̂2(M′1) and L̂2(M″1), and the empirical sizes for these statistics when they have been centered by the asymptotic mean and scaled by the asymptotic standard deviation of Theorem 3.... ..."
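
The excerpt above compares empirical test sizes, obtained by simulation, against a nominal size taken from asymptotic theory. A minimal sketch of that kind of check, with a toy statistic and illustrative parameters (not the statistics L̂2 from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def empirical_size(n, reps=20000, q95=1.6448536269514722):
    """Monte Carlo rejection rate, under H0, of a nominal 5% one-sided
    z-test based on the asymptotic N(0, 1) law of the standardized mean.
    Toy model: X ~ Exp(1), so the finite-sample law is skewed and the
    empirical size exceeds the nominal 0.05 for small n."""
    x = rng.exponential(1.0, size=(reps, n))
    t = np.sqrt(n) * (x.mean(axis=1) - 1.0)  # true mean 1, true sd 1
    return float(np.mean(t > q95))           # q95 = N(0,1) 95% quantile

for n in (10, 100, 500):
    print(n, empirical_size(n))
```

As in the excerpt, the asymptotic approximation improves with the sample size: the empirical size drifts toward the nominal 0.05 as n grows.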

Cited by 8

### Table 5.3: Accuracy of the R-C series network approximation versus the contrast real part, as predicted by the asymptotic theory in §3.2, Case A. We note the strong flow concentration around the saddle points of and C and the concentration at the minimum along the interface separating the resistive from the capacitive region. However, around the magnitude of the current increases only along the y direction. Across the interface (x direction), the potential gradient is negligible, so we only have weak flow concentration at . Thus, in the circuit approximation, the contribution of this region is equivalent to having a connecting wire of negligible impedance. The resistor-capacitor series network approximation was also tested for different locations and orientations of the saddle points in the resistive and capacitive regions. The results are not influenced much, as expected, because the location and orientation of the saddles do not affect the network.
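
The lumped element that this network approximation assigns to each flow channel is a resistor and capacitor in series, Z(ω) = R + 1/(jωC). A minimal sketch of that impedance (component values are illustrative only, not from the paper):

```python
import numpy as np

R = 100.0  # ohms
C = 1e-6   # farads

def z_series_rc(omega):
    """Complex impedance of R and C in series at angular frequency omega."""
    return R + 1.0 / (1j * omega * C)

for omega in (1e2, 1e4, 1e6):
    z = z_series_rc(omega)
    print(f"w = {omega:.0e} rad/s  |Z| = {abs(z):10.2f} ohm  "
          f"phase = {np.degrees(np.angle(z)):6.1f} deg")
```

At low frequency the capacitive term dominates (phase near −90°), at high frequency the resistive term does (phase near 0°), with the crossover at ω = 1/(RC).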

1998

Cited by 6

### Table 2: Predictions Computed from Decision Field Theory

2003

"... In PAGE 24: ... We assumed an equal probability of attending to each of the three dimensions, and the remaining parameters were the same as used to generate Figure 5. The asymptotic choice probability results, predicted by the theory, are summarized in Table 2, below. ... ..."
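
Asymptotic choice probabilities of the kind tabulated here arise from a sequential-sampling process run without a deadline. A minimal two-alternative stand-in for that accumulation process (not the paper's three-dimensional model; all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

def choice_prob(drift, theta=1.0, sigma=1.0, dt=0.01, reps=2000):
    """Simulated asymptotic (no-deadline) choice probability:
    preference p drifts and diffuses until it crosses +theta
    (choose A) or -theta (choose B)."""
    wins = 0
    for _ in range(reps):
        p = 0.0
        while abs(p) < theta:
            p += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        wins += p >= theta
    return wins / reps

print("drift 0.0:", choice_prob(0.0))  # indifference -> near 0.5
print("drift 1.0:", choice_prob(1.0))  # mean input favors A -> well above 0.5
```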

### Table 2. Comparison of simulated and asymptotic results for the differences between MSE and logarithmic loss for the Bayes and MLE procedures.

"... In PAGE 15: ... Similar good agreement between theoretical and simulated values of the CPB has been observed for other values of and for other simulations of this problem (Smith 1997). Table 2 shows the differences between the MLE and Bayesian procedures as assessed by MSE and logarithmic loss, computed both from the simulated values in Table 1 and from the asymptotic theory. In this case the numerical agreement is not good, but the main point of the example is that the theoretical results correctly suggest that prior 2 is superior to prior 1.... ..."
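
A minimal sketch of the kind of simulated Bayes-vs-MLE risk comparison described here, using MSE on a Binomial probability (the prior and sample size are illustrative choices, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)

def mse_mle_vs_bayes(p_true=0.2, n=10, reps=50000, a=1.0, b=1.0):
    """Monte Carlo MSE of the MLE k/n versus the Beta(a, b) posterior
    mean (k + a)/(n + a + b) for a Binomial(n, p) probability."""
    k = rng.binomial(n, p_true, size=reps)
    mle = k / n
    bayes = (k + a) / (n + a + b)
    return (float(np.mean((mle - p_true) ** 2)),
            float(np.mean((bayes - p_true) ** 2)))

mse_mle, mse_bayes = mse_mle_vs_bayes()
print("MSE(MLE)  =", mse_mle)    # theory: p(1-p)/n = 0.016
print("MSE(Bayes)=", mse_bayes)  # shrinkage toward the prior mean helps here
```

The simulated MSE of the MLE can be checked against the exact value p(1 − p)/n, which is the kind of theory-versus-simulation comparison the table reports.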

### Table 1: The above table summarizes the relation between the theory of inverse problems and the theory of learning from examples. When the projection of the regression function is not in the range of the operator A, the ideal solution f_H does not exist. Nonetheless, in learning theory, the asymptotic behavior can still be studied even if the ideal solution does not exist, since we are looking for the residual.

2005

"... In PAGE 9: ... 5. Regularization, Stochastic Noise and Consistency Table 1 compares the classical framework of inverse problems (see Section 3) with the formulation of learning proposed above. We note some differences.... ..."

Cited by 6
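
The classical inverse-problem template the table refers to is a noisy, ill-conditioned linear system A f = g, stabilized by Tikhonov regularization (the counterpart of regularized learning). A minimal sketch on a synthetic diagonal system (all numbers are illustrative, not from the paper):

```python
import numpy as np

k = np.arange(1, 11)
A = np.diag(1.0 / k**2)           # singular values decay like 1/k^2
f_true = 1.0 / k                  # a "smooth" target with fast decay
noise = 0.01 * (-1.0) ** (k + 1)  # deterministic +-0.01 data perturbation
g = A @ f_true + noise

naive = np.linalg.solve(A, g)     # unregularized: noise in mode k amplified by k^2

lam = 1e-3                        # Tikhonov weight: min ||A f - g||^2 + lam ||f||^2
reg = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ g)

print("naive error:   ", np.linalg.norm(naive - f_true))
print("Tikhonov error:", np.linalg.norm(reg - f_true))
```

The unregularized solve magnifies the perturbation in the small-singular-value modes, while the regularized normal equations trade a small bias for a much smaller total error.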
