### Table 1. Norman OK site parameters during the period of 4-27 August 2000*

2004: 4DVAR assimilation of ground temperature for the estimation of soil moisture and temperature

"... In PAGE 5: ... e.g., surface dynamic roughness z0 = 0.004 m). The seasonal mean surface-deep layer soil temperature difference (Column 3 in Table 1) is estimated using the method described by Ren and Xue (2003). Soil and vegetation properties at the Norman site are also listed in ... ..."

Cited by 2

### Table 1. Estimates of the refractive indices (n1 is from the BRDF model, n2 is from the specular model), the surface roughness and the sum square error (SSE). Estimates have been made using the thermocouples and estimates for the blackbody radiance.

"... In PAGE 11: ... However, the combined model gives a large improvement over the specular model and the predictions are still usable. For the rest of the landmines and the sand background, the model parameters and the root mean square (RMS) error are given in Table 1. The RMS error of both the specular model and the combined model is lower for the calculated blackbody radiance (i.... ..."

### TABLE 1. Sea ice and related parameters of importance in different operational and research areas

1991

### Table 6 Comparison of input from nine dermatologists

2004

"... In PAGE 8: ... Those results are shown in Table 5. Table 6... In PAGE 10: ... These current results suggest that the UMLS Metathesaurus contains sufficient concepts to index images in a biomedical domain, dermatology, which is highly demanding of visual descriptors. The results (Table 6) also suggest that domain experts without prior training in indexing may be able to index multimedia objects for a database using a controlled vocabulary, and that the amount of professional time required to do so is probably within an acceptable range. Only about half of the dermatologists who volunteered to take part in this study completed both tasks.... In PAGE 10: ...t al. [9] and since all were volunteers, we did not attempt to determine the reasons for non-participation. Examination of the original term input data files suggests to us that some of our volunteers were unable to master use of the online forms. The Time-on-Tasks data in Table 6 suggests to us that non-participation was not due to that factor. In future efforts we will probably follow the example set by Humphreys and administer a pretest to qualify our volunteers.... ..."

### Table 1. MODIS spectral band number and central wavelength. Columns 3 and 4 indicate if the channel is used in the cloud masking and its primary application.

"... In PAGE 3: ... MODIS measures radiances in two visible bands at 250 m spatial resolution, five more visible bands at 500 m resolution, and the remaining 29 visible and infrared bands at 1000 m resolution. Radiances from 14 spectral bands (Table 1) are used in the MODIS cloud mask algorithm to estimate whether a given view of the earth surface is unobstructed by clouds or optically thick aerosol, and whether a clear scene is affected by cloud shadows. The operational processing of MODIS requires adequate CPU capability, large file sizes, and easy comprehension of the output cloud masks.... ..."
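The operational MODIS mask combines many such spectral tests; a single-test toy version conveys the basic idea. This is a sketch only, not the operational algorithm: the function name, the choice of channels, and the threshold values are illustrative assumptions.

```python
import numpy as np

def toy_cloud_mask(refl_066um, bt_11um, refl_thresh=0.3, bt_thresh=270.0):
    """Toy single-test cloud screen: flag a pixel cloudy when its 0.66-um
    reflectance is high or its 11-um brightness temperature (K) is cold.
    Returns a boolean array that is True where the scene looks clear."""
    cloudy = (refl_066um > refl_thresh) | (bt_11um < bt_thresh)
    return ~cloudy

refl = np.array([0.05, 0.45, 0.10])   # bright (reflective) pixel in the middle
bt = np.array([295.0, 250.0, 260.0])  # cold, likely cloudy pixels at the end
print(toy_cloud_mask(refl, bt))  # [ True False False]
```

The real algorithm layers dozens of such tests across the 14 bands and reports a confidence level rather than a binary flag.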

### Table 8: RMSEs of Deterministic Forecasts for Surface Temperature

2005

"... In PAGE 25: ... Evidently, sample climatology is less useful as a baseline for surface temperature than it is for sea-level pressure, given seasonal and topographic effects. Table 8 shows the RMSEs of the various deterministic forecasts. The ensemble mean is as good as the best single model forecast (again from AVN-MM5), and the BMA deterministic forecast has an RMSE that is 11% lower.... ..."

Cited by 17
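The RMSE comparison described in the excerpt is straightforward to reproduce on toy numbers. The forecasts and observations below are made up; only the RMSE formula itself is taken as given.

```python
import numpy as np

def rmse(forecast, obs):
    """Root-mean-square error between a forecast series and observations."""
    forecast = np.asarray(forecast, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean((forecast - obs) ** 2)))

# Hypothetical surface-temperature verification (K): two model members vs. obs
obs = np.array([288.0, 290.5, 287.2, 291.0])
member_a = np.array([289.0, 291.0, 286.0, 292.0])
member_b = np.array([287.0, 290.0, 288.0, 290.0])

# Averaging members cancels offsetting errors, so the ensemble mean
# often scores at least as well as the best single member.
ensemble_mean = (member_a + member_b) / 2.0
print(rmse(member_a, obs), rmse(member_b, obs), rmse(ensemble_mean, obs))
```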

### Table 4. Comparison of the Least Squared Errors

"... In PAGE 17: ...4 Improving the Approximations As enhancements to the RCEM approximation capability, the Artificial Neural Networks (ANN) and the combined RSM+ANN approach proposed in Section 3 are applied to the responses for which the RSM is not sufficient. A comparison of the accuracy of models created by different methods is provided in Table 4. The least squared error (LSE), an overall average variation of the estimated values from the actual values, is the metric for this comparison.... In PAGE 17: ... It is observed that the LSEs are much lower for the improved approximation models (either the ANN or the combined RSM+ANN model) compared to that of the RSM. Insert Table 4. Comparison of the Least Squared Errors... In PAGE 22: ... The first issue is to verify whether the approximation models are enhanced using the ANN and the combined RSM+ANN techniques. The results shown in Table 4 and the graphs in Figure 7 prove that there is indeed an improvement in the approximation. Further verifications of this approach are done in this section.... In PAGE 30: ... Noise Parameters for the Engine Design Table 3. R-coefficients of the Response Surface Models Table 4. Comparison of the Least Squared Errors Table 5.... ..."
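The excerpt describes the LSE as an overall average variation of estimates from actuals, so the sketch below takes it as a mean squared deviation; the response values standing in for the RSM and ANN predictions are hypothetical.

```python
import numpy as np

def lse(y_pred, y_true):
    """Least squared error, taken here as the mean squared deviation of
    estimated values from actual values (a per-point average)."""
    y_pred = np.asarray(y_pred, dtype=float)
    y_true = np.asarray(y_true, dtype=float)
    return float(np.mean((y_pred - y_true) ** 2))

y_true = np.array([1.0, 2.0, 3.0, 4.0])     # observed responses
y_rsm = np.array([1.2, 1.8, 3.3, 3.9])      # stand-in response-surface fit
y_ann = np.array([1.05, 2.02, 2.97, 4.01])  # stand-in neural-net fit

print(lse(y_rsm, y_true), lse(y_ann, y_true))  # the lower LSE fits better
```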

### Table 2.2: Productions for Multiplication by Iterative Addition.

1998

"... In PAGE 8: ...1 Difficulties 113 7.2 Contributions 114 TABLES 117 Table 1: Equations 117 Table 2: Symbols 118 Table 3: Parameters 120 APPENDIX 121 REFERENCES... In PAGE 128: ... Table 2: Symbols —
- Ai: activation of chunk i in Activation and Match Equation
- assoc: weighting of prior strength in Posterior Strength Equation
- Bi: base-level activation of chunk i in Activation, Base-Level Learning and Optimized Learning Equation
- c: scaling constant in Rehearsal Ratio and Retrieval Odds Equation
- C: estimated cost of achieving the goal using the production in Expected Gain Equation
- d: decay rate in Base-Level Learning and Optimized Learning Equation
- E: expected gain of production in Expected Gain Equation
- Eji: empirical ratio in Posterior Strength and Empirical Ratio Equation
- f: latency exponent (usually at its default value of 1) in Retrieval Time Equation
- F: total frequency of past opportunities (productions matched) in Empirical Ratio Equation; also latency scale factor in Retrieval Time Equation
- F(Cj): frequency of source j being in the context in Posterior Strength and Empirical Ratio Equation
- F(Ni): frequency of chunk i being needed (retrieved) in Empirical Ratio Equation
- F(Ni & Cj): frequency of chunk i being needed (retrieved) when source j is in the context in Empirical Ratio Equation
- G: value of the goal in Expected Gain Equation
- L: life of chunk i, i.e.... ..."

Cited by 7
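The symbol list above is ACT-R's; its Base-Level Learning Equation, Bi = ln(Σj tj^-d), takes only a few lines. This is a minimal illustration assuming the standard ACT-R form of the equation; the function name and sample lags are hypothetical.

```python
import math

def base_level_activation(lags, d=0.5):
    """Base-Level Learning Equation: Bi = ln(sum_j tj^-d), where each tj is
    the time since a past retrieval of chunk i and d is the decay rate."""
    return math.log(sum(t ** (-d) for t in lags))

# A chunk retrieved 1, 10, and 100 time units ago, with the common d = 0.5
print(base_level_activation([1.0, 10.0, 100.0]))
```

Recent retrievals dominate the sum, so a chunk's activation decays as its retrieval lags grow.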

### Table 1: Summary of RMS retrieval errors for both neural net and regression methods, for three channel sets. TIGR error is the error retrieving odd-numbered profiles. All retrievals are with AIRS instrument noise added.

1995

"... In PAGE 9: ...04K. Table 1 and Figures 2, 3, and 4 give a summary of RMS error for both neural nets and regression, for several channel sets. Training (or regression) is performed on even-numbered TIGR profiles.... In PAGE 13: ... If T̂b is Tb with added noise, then let T̂b′ = B1ᵀ T̂b, let C be the least-squares solution to C T̂b′ = T′, and D = B2 C B1ᵀ. Table 1 and Figures 2, 3 and 4 summarize RMS testing error for the regression method, and compare regression results with neural nets. As with the neural nets, the eigenvector bases are determined from, and the regression is performed on, even-numbered TIGR profiles, while the error shown is for retrievals of the odd-numbered TIGR profiles.... In PAGE 21: ... In addition, the adaptive learning rate variation of backprop that we used has several parameters: learning rate increment, decrement, and error threshold. Parameters for the adaptive learning algorithm, as used to train the 728-input net (run 410) described in Figure 2 and Table 1, are as follows. Parameter Run 410 Useful Range momentum 0.... ..."

Cited by 1
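The regression retrieval in the last excerpt — project onto eigenvector bases, solve a least-squares problem, then fold the bases back into a single retrieval matrix D = B2 C B1ᵀ — can be sketched in NumPy. The random stand-in data, dimensions, and variable names below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training data: brightness temperatures Tb (channels x samples)
# and the temperature profiles T (levels x samples) they should retrieve.
Tb = rng.normal(size=(8, 50))
T = rng.normal(size=(5, 50))

# Eigenvector bases of the training data (left singular vectors).
B1, _, _ = np.linalg.svd(Tb, full_matrices=False)  # channel-space basis
B2, _, _ = np.linalg.svd(T, full_matrices=False)   # profile-space basis

# Project onto the bases (Tb' = B1^T Tb, T' = B2^T T), solve the
# least-squares problem C Tb' = T', then fold the bases back in:
# D = B2 C B1^T maps a measured Tb directly to a retrieved profile.
Tbp = B1.T @ Tb
Tp = B2.T @ T
CT, *_ = np.linalg.lstsq(Tbp.T, Tp.T, rcond=None)  # CT is C transposed
D = B2 @ CT.T @ B1.T

T_retrieved = D @ Tb
print(D.shape)  # (5, 8): levels x channels
```

In practice the bases would be truncated to the leading eigenvectors, which is what makes the projection a noise filter rather than an identity transform.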