### Table 1: Parameter smoothing with Bayesian learning.

### Table 8. Common nonparametric statistics

"... In PAGE 6: ... This study demonstrated that the use of non-parametric techniques is indicated whenever there is doubt regarding the fulfillment of parametric assumptions, such as normality or sample size. Which non-parametric test should we use? The most common non-parametric tests can be found in Table 8. Please refer to the following statistical texts for the derivation and calculation of these statistics, as this is beyond the scope or intention of this paper: Nonparametric Statistics for the Behavioral Sciences (Siegel S and Castellan NJ, 1988) (6), Applied Nonparametric Statistical Methods (Sprent P and Smeeton NC, 2001) (9), Nonparametric Statistical Inference (Gibbons JD, 1985) (8), Nonparametrics: Statistical Methods Based On Ranks (Lehmann EL, 1975) (18), Practical Nonparametric Statistics (Conover WJ, 1980) (19), Fundamentals of Nonparametric Statistics (Pierce A, 1970) (15), and Essentials of Research Methods in Health, Physical Education, Exercise Science and Recreation (Berg KE and Latin RW, 2003) (10).... ..."
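As a minimal illustration of one entry in such a table, the Mann-Whitney U statistic (the rank-based analogue of the independent-samples t-test) can be computed directly from pooled ranks. The function below is an illustrative sketch only: it assumes no tied values and returns the statistic, not a p-value.

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic via pooled ranks (sketch: assumes no ties)."""
    pooled = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # 1-based ranks, no ties
    r_a = sum(rank[v] for v in a)                    # rank sum of sample a
    n_a, n_b = len(a), len(b)
    u_a = r_a - n_a * (n_a + 1) / 2                  # U for sample a
    u_b = n_a * n_b - u_a                            # complementary U
    return min(u_a, u_b)                             # conventional test statistic

# Completely separated samples give U = 0; interleaved samples give a larger U.
print(mann_whitney_u([1, 2, 3], [4, 5, 6]))  # -> 0.0
print(mann_whitney_u([1, 3, 5], [2, 4, 6]))  # -> 3.0
```

In practice one would use a library routine such as `scipy.stats.mannwhitneyu`, which also handles ties and reports a p-value.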

### Table 3. Consistent nonparametric smooth test of the benchmark normal copula function. This table reports the Chen et al. (2003) consistent nonparametric smooth test of the null hypothesis that the benchmark normal copula is correctly specified.

2005

### Table 1: Segmentation errors

"... In PAGE 3: ... In each of these figures the top leftmost figure is the original signal; the top middle figure represents Ī, the result of passing the image I through the low-pass filter; the top right figure is the image Î obtained after taking the derivative of I and then smoothing it; the bottom left figure is the image Ĩ resulting from thresholding of Î; the bottom middle figure shows I0, the final segments (after merging segments in Ĩ which are too close); the bottom right figure shows the segments obtained by manual segmentation. The segmentation errors of a few representative files are tabulated in Table 1, where N is the number of (manual) segments in the data, Ne and Nm are the number of extra and missing segments in the auto-... ..."
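The filter-derivative-threshold-merge pipeline described in this excerpt can be sketched in a few lines for a 1-D signal. The version below is a hedged illustration only; the window size, threshold, and merge distance are assumed values, not the paper's.

```python
def moving_average(x, k=3):
    """Crude low-pass filter: centered moving average, truncated at edges."""
    half = k // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half): i + half + 1]
        out.append(sum(window) / len(window))
    return out

def segment(signal, thresh=0.5, min_gap=2):
    """Sketch of the pipeline in the excerpt, for a 1-D signal:
    low-pass filter -> differentiate and smooth -> threshold ->
    merge boundaries that are too close together."""
    i_bar = moving_average(signal)                     # smoothed signal (I-bar)
    deriv = [abs(i_bar[i + 1] - i_bar[i]) for i in range(len(i_bar) - 1)]
    i_hat = moving_average(deriv)                      # smoothed derivative (I-hat)
    candidates = [i for i, v in enumerate(i_hat) if v > thresh]  # thresholded (I-tilde)
    merged = []                                        # final boundaries (I0)
    for b in candidates:
        if not merged or b - merged[-1] >= min_gap:
            merged.append(b)
    return merged

print(segment([0] * 5 + [10] * 5))  # boundaries cluster around the step at index 5
```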

### Table 2: Parallel Bayesian Inference in a Tree Network.

1998

"... In PAGE 5: ... variable, etc. The full pseudocode is given in Table 2. For any constant number of evidence variables, its running time is O(log n) with n processors.... ..."

Cited by 12

### TABLE III The inference results of fatigue Bayesian network model

2004

Cited by 5

### Table 1. Steps in Mixture Simulation and Conventional Bayesian Inference

"... In PAGE 3: ... Generate an observation, x2, from the posterior. Table 1 lists the steps in the mixture simulation approach alongside major steps... ..."
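The "generate an observation from the posterior" step can be illustrated with a conjugate Normal-Normal model, where the posterior over the unknown mean is available in closed form. The model choice and all hyperparameters below are assumptions for illustration; they are not the paper's setup.

```python
import random

def posterior_predictive_draw(data, prior_mean=0.0, prior_var=100.0, noise_var=1.0):
    """Conjugate Normal-Normal update for an unknown mean, then one draw
    from the posterior predictive (sketch; the model is an assumption)."""
    n = len(data)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)   # posterior variance of the mean
    post_mean = post_var * (prior_mean / prior_var + sum(data) / noise_var)
    mu = random.gauss(post_mean, post_var ** 0.5)        # sample the mean
    return random.gauss(mu, noise_var ** 0.5)            # sample an observation x2

random.seed(0)
draws = [posterior_predictive_draw([5.0] * 50) for _ in range(2000)]
print(sum(draws) / len(draws))  # close to the data mean of 5.0
```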

### TABLE I UTTERANCE LEVEL BAYESIAN ADAPTIVE INFERENCE PERFORMANCE

### TABLE III INCREMENTAL BAYESIAN ADAPTIVE INFERENCE PERFORMANCE ON THE COMPLETE DATA SET

### Table 1. Average denoising performance of various inference techniques and models on 10 test images

2006

"... In PAGE 10: ... We find that the model proposed here substantially outperforms the model from [4] using the suggested parameters, both visually and quantitatively. As detailed in Table 1, the PSNR of the learned model is better by more than 5 dB. Figure 4 shows one of the 10 test images, in which we can see that the denoising results from the learned model show characteristic piecewise constant patches, whereas the results from the hand-defined model are overly smooth in many places.... In PAGE 12: ... Since this approximation is possible for both max-product and sum-product BP, we report results for both algorithms. Table 1 compares both algorithms to a selection of pairwise MRFs (always with 50% update probability). We can see that the higher-order model outperforms the pairwise priors by about 0.... ..."
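PSNR, the figure of merit used in this comparison, is straightforward to compute. The helper below is a generic sketch (a peak value of 255 and flattened 1-D images are assumptions), not the paper's evaluation code. Since a 5 dB PSNR gap corresponds to roughly a 3.2x reduction in mean squared error, the reported improvement of more than 5 dB implies more than a threefold MSE reduction.

```python
import math

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length images,
    flattened to 1-D sequences for simplicity."""
    mse = sum((r - e) ** 2 for r, e in zip(reference, estimate)) / len(reference)
    if mse == 0:
        return float("inf")                 # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

print(psnr([0, 0, 0, 0], [10, 0, 0, 0]))    # MSE = 25 -> about 34.15 dB
```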

Cited by 4