### Table 5. Dempster-Shafer calculations with one bit error.

"... In PAGE 31: ...we will reduce its confidence to 0.75. All other test confidences will be set to 0.99. The results of processing all eight test vectors through the Dempster-Shafer calculations with the diagnostic inference model are given in Table 5. The normalized values for evidential probability, though quite low, show the leading candidates for diagnosis are c1 and nf.... ..."
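The snippet above describes assigning per-test confidence masses and fusing them with Dempster-Shafer calculations. A minimal sketch of Dempster's rule of combination follows; the fault names and mass assignments are illustrative placeholders, not the paper's actual data or code.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2 map frozensets of hypotheses to mass; mass landing on the
    empty intersection is conflict K and is renormalized away.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict; combination undefined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Illustrative frame of candidate diagnoses (c1, a1, i0, no-fault).
theta = frozenset({"c1", "a1", "i0", "nf"})
# A 0.99-confidence test supporting "no fault"; a reduced 0.75-confidence
# test supporting the set {c1, nf}; residual mass goes to the whole frame.
m_test1 = {frozenset({"nf"}): 0.99, theta: 0.01}
m_test2 = {frozenset({"c1", "nf"}): 0.75, theta: 0.25}
fused = combine(m_test1, m_test2)
```

Repeatedly folding each test's mass function into the running result via `combine` is the standard way to process a full vector of test outcomes.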

### Table 6. Dempster-Shafer calculations with two bit errors.

"... In PAGE 33: ...to warrant assigning confidence values of 0.75 to these two tests and 0.99 for all other tests. The results of processing all eight test vectors through the Dempster-Shafer calculations with the diagnostic inference model are given in Table 6. The normalized values for evidential probability, this time, show the leading candidates for diagnosis are a1 and i0.... ..."

### Table 3. Classification rates of majority voting and the Dempster-Shafer fusion rule

"... In PAGE 7: ... Following the weighted k-NN classifier with k = 11, we continue the fusion procedure to update decisions. Table 3 shows the classification rates of multi-sensor fusion. We compared the fusion results of Dempster-Shafer theory with the majority voting fusion rule.... In PAGE 7: ... And then the unclassified feature is labeled by the majority vote of these decisions. As shown in Table 3, the Dempster-Shafer method generally achieves a higher classification rate than majority voting. The uncertainty management of Dempster-Shafer theory provides a feasible approach for information fusion in wireless sensor networks.... ..."
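The excerpt describes a two-stage scheme: a distance-weighted k-NN decision at each sensor, then fusion of the per-sensor decisions. A minimal sketch of both stages follows; the function names, the toy data, and the 1/d weighting scheme are assumptions for illustration, not the paper's implementation.

```python
import math
from collections import Counter

def weighted_knn(train, query, k=11):
    """Distance-weighted k-NN: the k nearest training points vote
    for their label with weight inversely proportional to distance."""
    neighbors = sorted(train, key=lambda xy: math.dist(xy[0], query))[:k]
    votes = Counter()
    for point, label in neighbors:
        votes[label] += 1.0 / (math.dist(point, query) + 1e-9)
    return votes.most_common(1)[0][0]

def majority_fuse(per_sensor_labels):
    """Fuse the individual sensor decisions by simple majority vote."""
    return Counter(per_sensor_labels).most_common(1)[0][0]

# Toy two-class data for one sensor; k reduced to fit the tiny set.
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b")]
local_decision = weighted_knn(train, (0.2, 0.2), k=3)
fused_decision = majority_fuse([local_decision, "b", "a"])
```

The Dempster-Shafer alternative discussed in the excerpt would replace `majority_fuse` with a combination of per-sensor mass functions, letting each sensor spread mass between its predicted label and the full frame according to its reliability.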

### Table 3.1: Masses as used for Dempster-Shafer for several cases of mines and possible false alarm targets.

1998

Cited by 6

### Table 1. The token "END-DEMPSTER-SHAFER-MOD" in Line seven terminates the current definition and enables COSMET to accept other statements, including multi-line statements. This line is required by COSMET.

2001

"... In PAGE 24: ... Problem Size The current limits of COSMET (Version 2.00) are defined in Table 1. The table is shown in three segments, one for each of the three supported data types.... In PAGE 25: ... LHS File Paths 16; COSMET Structure Limits: Contest Definitions 10, Contestants (Total) 20, Dempster Shafer Modified Models 20, Dempster Shafer Experts (Total) 100, Extreme Min/Max Definitions 10, Link Min/Max Definitions 10, Link Min/Max Responses (Total) 100; Markov Structure Limits: Modules 100, Module Inputs (All Modules) 1000, Module Experts (All Modules) 5000, Dependency Group Definitions 300, Early Alert Equations 15, Early Alert Equation Symbols 750. Table 1.... In PAGE 32: ... Lines six through eight specify the user-supplied response definitions that are to be combined to form the resultant response. The user can specify as many response definitions as necessary as long as the total response limit (for all Link Min/Max definitions) is not exceeded (refer to the second segment of Table 1 for this value). As shown, the name of the response follows the keyword "RESPONSE" in these definitions.... In PAGE 45: ... If equal weighting of all experts is desired, then all expert weight values can be omitted from the input. If more than one expert is specified, they are combined to form a [footnote 6: A module definition may specify any number of inputs as long as the total number of inputs in all defined modules does not exceed the limit defined in Table 1. In like fashion, a primary input definition may specify any number of experts as long as the limit defined in Table 1...]... ..."

### Table 7. Accuracy using Dempster-Shafer on a fault dictionary.

"... In PAGE 34: ...4 Dempster-Shafer and Nearest Neighbor Compared To further compare the differences between the Dempster-Shafer approach and nearest-neighbor classification, we computed the accuracy for all bit-error combinations using Dempster-Shafer as we did for nearest neighbor. These results are shown in Table 7. In interpreting this table and Table 4, we can consider the bit errors as corresponding to some amount of lost information.... ..."

### Table 3. Accuracies of nearest neighbor and Dempster-Shafer.

"... In PAGE 5: ... The results of these experiments are given in Table 3. In the top part of Table 3, we see some characteristics of introducing error into the fault signature to be matched. First, we see that the higher the number of bits in error, the lower the accuracy in matching, down to a limit of 0% accuracy.... In PAGE 6: ... Ties were broken at random. These results are shown in the bottom part of Table 3. In interpreting this table, we can consider the bit errors as corresponding to some amount of lost information.... In PAGE 6: ... When this loss exceeds 50%, both techniques fail to find the correct diagnosis, as expected. An interesting result with Dempster-Shafer involves examining the number of times the correct answer is either the first or second most likely conclusion identified (shown in Table 3b in the rows associated with Correct = 1st or 2nd). Here we find the correct fault a very high percentage of the time, indicating that an alternative answer is available in the event that repair based on the first choice is ineffective.... ..."
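The nearest-neighbor baseline in this excerpt matches an observed pass/fail signature, possibly corrupted by bit errors, against a fault dictionary by Hamming distance. A minimal sketch follows; the dictionary entries and signature encoding are made-up examples, not the paper's fault data.

```python
def hamming(a, b):
    """Number of differing bit positions between two equal-length signatures."""
    return sum(x != y for x, y in zip(a, b))

def nearest_fault(dictionary, observed):
    """Return the fault whose stored signature is Hamming-closest to the
    observed one. The paper breaks ties at random; min() here simply keeps
    the first entry encountered, which is enough for a sketch."""
    return min(dictionary, key=lambda fault: hamming(dictionary[fault], observed))

# Hypothetical fault dictionary: pass/fail signatures over eight tests.
fault_dict = {
    "c1": "10110100",
    "a1": "01101100",
    "i0": "11100010",
    "nf": "00000000",  # no-fault signature
}
# The c1 signature with its last bit flipped (one bit error).
diagnosis = nearest_fault(fault_dict, "10110101")
```

As the excerpt notes, accuracy degrades as more bits are flipped: once more than half the signature bits are wrong, the observed vector is closer to some other entry and matching necessarily fails.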

### Table 2. Logical connectives and Dempster-Shafer sets of hypotheses

2005

Cited by 1

### Table 1. B, G, N: Beta, Gamma, Normal distributions, respectively, with the corresponding parameters. MPM (Y1): error ratio obtained by the MPM method using only the probabilistic sensor Y1. Fusion MPM: error ratio obtained by the MPM method after the Dempster-Shafer fusion of the sensors Y1 and Y2. ICE: the classical ICE assuming all distributions normal.

"... In PAGE 4: ... Thus, concerning these densities, we consider a particular case, though a generalized ICE could easily be applied here. The forms and parameters of ga, gb, gc are given in Table 1, which also contains the Bayesian error ratio (performed from the Bayesian sensor Y1 only), and the "fusioned" error ratio (performed from the... ..."

### Table 5: Dempster-Shafer vs Bayesian fusion. From Table 5 it is observed that all success ratios are above 95% except for objects of type five. Hence, persons, windows, and open or closed doors are all observable objects using the six available sensing actions, while the more loosely defined object, O5, occasionally is falsely identified. By averaging the convergence rate, success ratio, and the computational complexity for the five objects, Table 6 emerges.

1998

Cited by 1