### Table 2. Optimal views based on model-based reconstruction.

2004

"... In PAGE 7: ... Based on an average reconstruction time of 30 seconds, this search takes about 45 hours. The results are presented in Table 2, which shows the optimal views for K = {1,2,3,4,5} and the corresponding minimum average reconstruction errors (refer to Table 1 for exact coordinates). The standard deviation of the indi... In PAGE 7: ... Figure 7 shows the errors of all combinatorial view configurations for the case K = 4, ranked in ascending order of error. Each error bar represents the subjects' standard deviation for that configuration (the first error bar corresponds to the optimal configuration and is the subject standard deviation listed in Table 2). Other plots for K = 1,2,3 and 5 are quite similar in nature, all showing a well-defined minimum with the subject variation (error bars) being lowest for the best configuration (leftmost) and highest for the worst (rightmost).... In PAGE 7: ... Using the same search strategy, we now evaluate the visual hull constructions obtained from the given subset of silhouette images and compare them to the ground truth. Table 3 shows the optimal views for K = {2,3,4,5} and the corresponding error values (same format as in Table 2, except that the visual hull from a single silhouette (K = 1) has no finite volume and is omitted). Note that a visual hull reconstruction (especially one from few images) is not a very... In PAGE 8: ... Interestingly, the first plateau corresponding to the top group is all the subsets which include the profile view #10 (one of the most salient). We can see marked similarities in the optimal views in Table 2 and Table 3. For example, both methods indicate views #3 and #10 to be the most informative.... In PAGE 8: ... For example, the two most salient views (#3 and #10) correspond very closely with the established (biometric) standards of 3/4 view (INS photos) and profile view (mugshot photos).
We have not yet searched for K > 5, mainly due to the computational costs, but it appears that reconstructions do not improve significantly beyond 4-5 views (see the best errors listed in Table 2). One can easily incorporate additional physical and operational constraints into our framework.... ..."

Cited by 7
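The exhaustive search over view subsets described in the excerpt (every combination of up to K views scored by its average reconstruction error) can be sketched as below. The camera indices, subset sizes, and toy error function here are illustrative assumptions, not details from the paper; the salience of views #3 and #10 is hard-coded only to mimic the reported outcome.

```python
from itertools import combinations

def best_view_subsets(views, error_fn, max_k=5):
    """Exhaustively score every subset of up to max_k views and
    return, for each K, the lowest-error subset with its error."""
    best = {}
    for k in range(1, max_k + 1):
        scored = [(error_fn(subset), subset) for subset in combinations(views, k)]
        best[k] = min(scored)
    return best

# Hypothetical stand-in for the average reconstruction error of a subset:
# views 3 and 10 are made the most informative, echoing the excerpt.
def toy_error(subset):
    salient = {3: 0.5, 10: 0.6}
    return 10.0 - sum(salient.get(v, 0.1) for v in subset)

result = best_view_subsets(range(1, 12), toy_error, max_k=3)
# The K = 2 optimum picks the two salient views, #3 and #10.
```

At 30 seconds per reconstruction, scoring every subset in this brute-force fashion is exactly what makes the reported 45-hour search time plausible; the loop structure, not the toy error, is the point of the sketch.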

### Table 2. Optimal views based on model-based reconstruction.

2004

"... In PAGE 5: ... Based on an average reconstruction time of 30 seconds, this search takes about 45 hours. The results are presented in Table 2, which shows the optimal views for K = {1,2,3,4,5} and the corresponding minimum average reconstruction errors (refer to Table 1... Figure 7. Reconstruction errors for all view configurations with 4 cameras (K = 4), ranked by magnitude of ensemble error.... In PAGE 5: ... Figure 7 shows the errors of all combinatorial view configurations for the case K = 4, ranked in ascending order of error. Each error bar represents the subjects' standard deviation for that configuration (the first error bar corresponds to the optimal configuration and is the subject standard deviation listed in Table 2). Other plots for K = 1,2,3 and 5 are quite similar in nature, all showing a well-defined minimum with the subject variation (error bars) being lowest for the best configuration (leftmost) and highest for the worst (rightmost).... In PAGE 5: ... Using the same search strategy, we now evaluate the visual hull constructions obtained from the given subset of silhouette images and compare them to the ground truth. Table 3 shows the optimal views for K = {2,3,4,5} and the corresponding error values (same format as in Table 2, except that the visual hull from a single silhouette (K = 1) has no finite volume and... In PAGE 6: ... There are a few differences but these are somewhat misleading. The best view configurations in Table 2 are marked in Figure 8 with arrows. We note that our model-based optimal views have almost the same errors as the best views chosen with the visual hull method and are always in the first plateau or top quartile that includes the key profile view #10.... ..."

Cited by 7

### Table 4. Model-based reductions of the complete pathway (model statistics)

"... In PAGE 25: ... Similarly, the removal of the decomplexation of FGFR:FRS2 would only be noticeable over a very small time scale: as the rate of FRS2 and FGFR complexation is extremely fast, following the decomplexation of FRS2 and FGFR one would see the (re)complexation of FGFR and FRS2 almost immediately. Table 4 gives the model statistics both for the complete model and the model obtained after applying the reductions (1), (2) and (3), both in isolation and collectively. The results show that reduction (1), removal of Sos, yields the greatest decrease in state space of the three.... ..."

### Table 3 Summary of model-based time series clustering algorithms

2005

"... In PAGE 13: ... The extracted feature vectors are then converted into a symbol sequence by vector quantization, which in turn is used as input for training the hidden Markov model by the expectation maximization approach. Table 3 summarizes the major components used in each model-based clustering algorithm. Like feature-based methods, model-based methods are capable of handling series with unequal length as well through the modeling operation.... ..."
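The vector-quantization step this excerpt describes, turning extracted feature vectors into a discrete symbol sequence suitable for HMM training, can be sketched as a nearest-codeword assignment. The codebook, feature vectors, and squared-Euclidean distance below are illustrative assumptions, not taken from the surveyed paper.

```python
def quantize(features, codebook):
    """Map each feature vector to the index of its nearest codeword
    (squared Euclidean distance), yielding a symbol sequence."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: sqdist(v, codebook[i]))
            for v in features]

# Hypothetical 2-D features and a 3-symbol codebook.
codebook = [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)]
features = [(0.1, -0.2), (0.9, 1.2), (4.8, 5.1), (1.1, 0.8)]
symbols = quantize(features, codebook)  # -> [0, 1, 2, 1]
```

In practice the codebook itself would be learned (e.g. by k-means over the pooled feature vectors) before quantization; only the assignment step is shown here.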

### Table 3: Ranking distribution of the label obtained with the model-based approach on the validation dataset

2004

"... In PAGE 10: ... 4.3 Two-stage classification system As we can see in Table 3, after the first stage of classification the label of the data is not always in the first two classes, which justifies the choice of a dynamic number of classes in conflict. ... ..."

### Table 4. Model-based reductions of the complete pathway (model statistics)

"... In PAGE 25: ... Similarly, the removal of the decomplexation of FGFR:FRS2 would only be noticeable over a very small time scale: as the rate of FRS2 and FGFR complexation is extremely fast, following the decomplexation of FRS2 and FGFR one would see the (re)complexation of FGFR and FRS2 almost immediately. Table 4 gives the model statistics both for the complete model and the model obtained after applying the reductions (1), (2) and (3), both in isolation and collectively. The results show that reduction (1), removal of Sos, yields the greatest decrease in state space of the three.... In PAGE 25: ... In general, it is therefore advantageous to look into a number of different reduction approaches, although, as already stated, this does require some understanding of the model under study. Table 4 also presents the times required for model construction and model checking of a single property (property H of Section 6) using each of the different combinations of model reductions. It can be seen that the decreases in model size are also reflected in these timings.... ..."

### Table 1: Relative percentage bias and instability of variance estimators for the model-based estimator F̂m(t) at t = ξp.

"... In PAGE 11: ... RB% and INST were computed for t = ξp at p = 0.10, p = 0.25, p = 0.50, p = 0.75 and p = 0.90, where ξp is the pth population quantile. Table 1 reports the values of RB% and INST of the variance estimators vm, vJm1, vJm2 and vJ1 for the model-based estimator F̂m(t). We observe that: (a) For n = 50 (f = 0.025): (i) the jackknife variance estimators vJm1 and vJm2 perform well; (ii) vJ1, the leading term in both vJm1 and vJm2, also provides a valid estimated variance, but has negative bias in all cases; (iii) the analytical variance estimator vm has the smallest value of INST in all cases, but it has the largest negative bias among vm, vJm1 and vJm2; (b) For n = 200 (f = 0.1): (i) vm, vJm1 and vJm2 perform quantitatively similarly in terms of both RB% and INST; (ii) they all have larger and positive bias for F(t) close to 0.50 and smaller or negative bias when F(t) is close to 0 or 1; (iii) by ignoring the sampling fraction, vJ1 seriously underestimates the true variance; (iv) it is interesting to notice the good performance of vJm1 for n = 200, since it is not clear whether vJm1 is consistent or not when f is not negligible; and (v) we should note that vm took on the order of 30 times longer to calculate than the jackknife variance estimators because of the density estimation.... ..."
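The jackknife variance estimators compared in this excerpt all build on the standard delete-one recipe. The sketch below shows only that basic, unadjusted form for an arbitrary estimator; it omits the finite-population correction and the specific vJm1/vJm2 adjustments the paper studies, and the sample data are illustrative.

```python
def jackknife_variance(sample, estimator):
    """Delete-one jackknife: recompute the estimator with each
    observation left out, then scale the spread of the replicates
    by (n - 1) / n."""
    n = len(sample)
    replicates = [estimator(sample[:i] + sample[i + 1:]) for i in range(n)]
    mean_rep = sum(replicates) / n
    return (n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates)

data = [2.0, 4.0, 6.0, 8.0]
v = jackknife_variance(data, lambda xs: sum(xs) / len(xs))
# For the sample mean this reduces to the usual estimate s^2 / n,
# here 5/3.
```

The excerpt's point (v), that the analytical estimator vm is roughly 30 times slower because it needs density estimation, reflects exactly this contrast: the jackknife only re-evaluates the point estimator n times.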

### Table 4 ARMD trial

2007

"... In PAGE 17: ...ared. For the observed, partially incomplete data, GEE is supplemented with WGEE. Further, a random-intercepts GLMM is considered, based on numerical integration. The GEE analyses are reported in Table 4 and the random-effects models in Table 5. For GEE, a working exchangeable correlation matrix is considered.... In PAGE 19: ... The advantage of having separate treatment effects at each time is that particular attention can be given to the treatment effect assessment at the last planned measurement occasion, that is, after one year. From Table 4 it is clear that the model-based and empirically corrected standard errors agree extremely well. This is due to the unstructured nature of the full time by treatment mean structure.... In PAGE 20: ... The results for the random-effects models are given in Table 5. We observe the usual relationship between the marginal parameters of Table 4 and their random-effects counterparts. Note also that the random-intercepts variance is largest under LOCF, underscoring again that this method artificially increases the association between measurements on the same subject.... ..."
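The LOCF (last observation carried forward) imputation whose side effect the excerpt notes (an inflated random-intercepts variance, i.e. artificially strengthened within-subject association) is mechanically simple. A minimal sketch, assuming `None` marks a missed visit in one subject's measurement series:

```python
def locf(series):
    """Fill each missing value (None) with the last observed value;
    leading missing values stay missing."""
    filled, last = [], None
    for v in series:
        if v is not None:
            last = v
        filled.append(last)
    return filled

# Hypothetical visual-acuity measurements with two missed visits.
visits = [52, 55, None, None, 60]
completed = locf(visits)  # -> [52, 55, 55, 55, 60]
```

Because every imputed value repeats an earlier one exactly, LOCF makes repeated measurements on a subject look more alike than they are, which is precisely why the random-intercepts variance comes out largest under this method.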

### Table 1: Simulation model based on the AMD Athlon.

2007

"... In PAGE 3: ...0 x86 Tool Set [7] for simulating our x86 binaries. The configuration is given in Table 1 and is based loosely on an AMD Athlon processor, as this represents a widely deployed modern desktop system, and a pipeline that is more reasonable to emulate.... ..."

Cited by 1

### Table 5: Fraction of Correct Models based on the LGscore.

"... In PAGE 7: ...Table 5. Both the incremental window-based alignment methods, as well as the SW-PSSM alignment method, are able to pick the correct models with similar degrees of accuracy.... In PAGE 7: ... Our techniques also seem to identify a higher percentage of correct models when compared to the previously studied schemes, especially PSI and SSPSI, both of which also incorporate some profile information. As seen from Table 5, our methods are able to pick a larger fraction of higher quality models for the family and superfamily levels. 4.... ..."

Cited by 1