### Table 5 Financial intermediation and growth: dynamic panel regressions, system estimator

2000

"... In PAGE 23: ... In Table 4, only the results on the financial indicators are given. Table 5 gives the full results from system dynamic-panel estimation. The analysis was conducted with two conditioning information sets.... In PAGE 23: ... The second uses the policy conditioning information set, and includes initial income, educational attainment, government size, openness to trade, inflation, and the black market exchange rate premium. Table 5 also presents (1) the Sargan test, where the null hypothesis is that the instrumental variables are uncorrelated with the residuals, and (2) the serial correlation test, where the null hypothesis is that the errors in the differenced equation exhibit no second-order serial correlation. The three financial intermediary development indicators (LIQUID LIABILITIES, COMMERCIAL-CENTRAL BANK, and PRIVATE CREDIT) are significant at the 0.... ..."

Cited by 60
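The Sargan overidentification test mentioned in the excerpt above checks whether the instruments are orthogonal to the estimated residuals. A minimal sketch of the statistic, not the paper's system-GMM implementation (the variable names and toy data are illustrative):

```python
import numpy as np

def sargan_statistic(u, Z):
    """Sargan test of overidentifying restrictions.

    Null hypothesis: the instruments Z are uncorrelated with the
    residuals u.  The statistic

        S = u'Z (Z'Z)^{-1} Z'u / (u'u / n)

    is asymptotically chi-squared with (number of instruments minus
    number of estimated parameters) degrees of freedom.
    """
    n = len(u)
    Zu = Z.T @ u                                   # instrument-residual moments
    return Zu @ np.linalg.solve(Z.T @ Z, Zu) / (u @ u / n)

# If the residuals are exactly orthogonal to the instruments, S = 0;
# correlation between u and Z pushes S up.
Z = np.ones((4, 1))
u = np.array([1.0, -1.0, 1.0, -1.0])
print(sargan_statistic(u, Z))  # 0.0
```

In practice `u` would be the residuals from the differenced GMM equation and `Z` the matrix of lagged instruments; a large `S` relative to the chi-squared critical value rejects instrument validity.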

### Table 2. Convergence and conditioning for three Gaussian points, implicit second order formulation.

### Table 2 Comparison of iterations to convergence for first- and second-order Robin transmission conditions with and without under-relaxation.

1997

"... In PAGE 6: ... It is also somewhat simpler to analyze. Is there a better choice for the initial solution? Table 2 appears to indicate that the number of iterations is roughly linear in the number of vertical strips. While this general trend is observed in larger tests, the correlation seems to not be as strong as the results presented in Table 2 might suggest. This implies that the current implementation, with one element per subdomain, is not likely to be optimal for large problems.... ..."

Cited by 4
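Under-relaxation, as compared in the table above, replaces a fixed-point update x ← G(x) with the damped update x ← (1−θ)x + θG(x). A generic toy sketch on the model problem x = cos x (not the paper's Robin-Schwarz solver; the relaxation factor 0.6 is illustrative):

```python
import math

def fixed_point(g, x0, theta=1.0, tol=1e-10, max_iter=1000):
    """Under-relaxed fixed-point iteration x <- (1-theta)*x + theta*g(x).

    theta = 1 recovers the plain iteration; 0 < theta < 1 damps it,
    which can sharply reduce the iteration count when |g'| is large
    at the fixed point.  Returns (solution, iteration count).
    """
    x = x0
    for k in range(1, max_iter + 1):
        x_new = (1.0 - theta) * x + theta * g(x)
        if abs(x_new - x) < tol:
            return x_new, k
        x = x_new
    raise RuntimeError("no convergence within max_iter iterations")

# Plain vs. under-relaxed iteration for x = cos(x):
x_plain, n_plain = fixed_point(math.cos, 1.0, theta=1.0)
x_relax, n_relax = fixed_point(math.cos, 1.0, theta=0.6)
print(n_plain, n_relax)  # the well-chosen theta needs far fewer iterations
```

The same damping idea applied at the subdomain interfaces is what "with under-relaxation" refers to in the caption.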

### Table 5 . Stability conditions for one-dimensional second-order schemes

### Table 1: First- and second-order Taylor approximation in (4.5), p = 1 + Δp

"... In PAGE 16: ... Table 1 presents some numerical results reflecting the error of the first- and second-order Taylor expansion in (4.5), where p = 1 + Δp, k = 1, 2: $e_k(\Delta p) := \max_{0 \le t \le 1} \big| x(t, p) - \sum_{i=0}^{k} \frac{1}{i!} \frac{\partial^i x}{\partial p^i}(t, p_0)\,(\Delta p)^i \big|$. 5 Conclusion The second-order sensitivity result derived in this paper states that the optimal solution of a nonlinear control problem is differentiable with respect to parameters provided that the second-order sufficient conditions (SSCs) hold for the unperturbed (nominal) problem.... ..."
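The error measure e_k(Δp) from the excerpt above can be reproduced on a toy problem. A sketch using x'(t) = p·x(t), x(0) = 1, so that x(t, p) = e^{pt} and ∂^i x/∂p^i = t^i e^{pt} are known in closed form (this ODE and the parameter values are illustrative, not those of the paper):

```python
import math
import numpy as np

p0 = 1.0
t = np.linspace(0.0, 1.0, 201)   # grid approximating the max over [0, 1]

def x(t, p):
    # Exact solution of x'(t) = p*x(t), x(0) = 1.
    return np.exp(p * t)

def taylor_error(dp, k):
    """e_k(dp) = max_{0<=t<=1} |x(t, p0+dp) - sum_{i=0}^k (1/i!) d^i x/dp^i (t, p0) dp^i|."""
    approx = sum(t**i * np.exp(p0 * t) * dp**i / math.factorial(i)
                 for i in range(k + 1))
    return float(np.max(np.abs(x(t, p0 + dp) - approx)))

# The second-order expansion is markedly more accurate than the first-order
# one, and the first-order error shrinks roughly like dp^2.
print(taylor_error(0.1, 1), taylor_error(0.1, 2))
```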

### TABLE 4 First and second order neighbors of each measurement location.

"... In PAGE 14: ... TABLE 3 ARIMA model formulation and coefficient estimates for the fifty time series of the study. TABLE 4 First and second order neighbors of each measurement location. TABLE 5 Parameter estimation (after diagnostic checks) and root mean square error for the two STARMA models.... ..."

### Table 1.2 Comparison of iterations to convergence for first- and second-order Robin transmission conditions with and without under-relaxation.

1997

Cited by 4

### Table 1. Lie's classification of invariant second-order ordinary differential equations (columns: No., Equation, Symmetry algebra)

"... In PAGE 13: ... Surprisingly, it is possible to implement this approach to classifying second-order PDEs (1) by their second-order conditional symmetries in full generality. In Table 1 we present the complete list of invariant real second-order ordinary differential equations together with their maximal invariance algebras, obtained by Lie ([21, 22]). Note that a, k are arbitrary real parameters and f is an arbitrary function. As the classification has been done to within an arbitrary reversible transformation of the variables x, y, the equations given in Table 1 are representatives of the conjugacy classes of invariant ordinary differential equations.... In PAGE 14: ... Table 1, since the corresponding ordinary differential equation is not integrable by quadratures. Next, since our final aim is to exploit conditional symmetries for the description and reduction of initial value problems, it makes no sense to consider case 4. This is because the symmetry group admitted by the corresponding ordinary differential equation within the class (18) is the same as that of the more general equation given in case 3 of Table 1. The same argument applies to case 8. Consequently, we will deal only with the remaining cases 2, 3, 5–7, 9. We take as the function in operator (3) the expressions y'' − f(x, y, y'), where f is one of the right-hand sides of the equations listed in the second column of Table 1, and make the replacements y → u, y' → u_x and y'' → u_xx. We classify PDEs of the form u_t = u_xx + F(t, x, u, u_x) (27) admitting the corresponding Lie–Bäcklund vector fields.... ..."

### Table 2.1 Mesh size h, condition number κ, CPU time T and average base-2 error logarithms L_j for the second-order smooth rule with N random points.

1995

Cited by 8

### Table 1. Pointwise bias (up to O(h²)) and variance of bivariate Nadaraya-Watson and locally linear kernel estimators using second-order kernels.

"... In PAGE 4: ... Given standard conditions regarding the kernel, bandwidth, and data generating process, these estimators are consistent, and one is referred to Härdle (1990, pg 29) and Fan (1992) for details. Table 1... In PAGE 5: ... Finally, this approach does not correct for boundary bias and is bias-reducing rather than bias-removing, since it ignores all but the leading terms in the bias expansion. Higher order kernels can be used for curvature-based bias-reduction of Nadaraya-Watson estimators in this context, since the leading terms in the bias expansion given in Table 1... ..."
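The Nadaraya-Watson estimator discussed in the excerpt above is a kernel-weighted local average of the responses. A univariate sketch with a Gaussian second-order kernel (the bivariate case in the table weights both coordinates; the data and bandwidth here are illustrative):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Nadaraya-Watson estimator with a Gaussian (second-order) kernel:
    m(x) = sum_j K((x - x_j)/h) y_j / sum_j K((x - x_j)/h)."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# On y = 2x with a uniform design, the estimate is exact at an interior
# point (the kernel weights are symmetric there) but biased at x = 0,
# where only data to the right contribute: the boundary bias noted in
# the excerpt.
x_train = np.linspace(0.0, 1.0, 201)
y_train = 2.0 * x_train
est = nadaraya_watson(x_train, y_train, np.array([0.5, 0.0]), h=0.05)
print(est)  # interior value close to 1.0; boundary value biased above 0.0
```

Locally linear estimators, the comparison in the table, remove this boundary bias by fitting a weighted line instead of a weighted constant.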