### Table 3 Classification rate of majority voting and Dempster-Shafer fusion rule

"... In PAGE 7: ... Following the weighted k-NN classifier with k = 11, we continue the fusion procedure to update decisions. Table3 shows the classification rates of multi-sensor fusion. We compared the fusion results of Dempster-Shafer theory with majority voting fusion rule.... In PAGE 7: ... And then the unclassified feature is labeled by the majority vote of these decisions. As shown in Table3 , Dempster-Shafer method has generally higher classification rate than ma- jority voting. The uncertainty management of Dempster-Shafer theory provides a feasible approach for information fusion in wireless sensor networks.... ..."

### Table 3. Accuracies of nearest neighbor and Dempster-Shafer.

"... In PAGE 5: ... The results of these experiments are given in Table 3. In the top part of Table3 , we see some characteristics of introducing error into the fault signature to be matched. First, we see that the higher the number of bits in error, the lower the accuracy in matching, down to a limit of 0% accuracy.... In PAGE 6: ... Ties were broken at random. These results are shown in the bottom part of Table3 . In interpreting this table, we can consider the bit errors as corresponding to some amount of lost information.... In PAGE 6: ... When this loss exceeds 50%, both techniques fail to find the correct diagnosis, as expected. An interesting result with Dempster-Shafer involves examining the number of times the correct answer is either the first or second most likely conclusion identified (shown Table3 b in the rows associated with Correct = 1st or 2nd ). Here we find the correct fault a very high percentage of the time, indicating that an alternative answer in the event repair based on the first choice is ineffective.... ..."

### TABLE 1 Comparative evaluation of probability and possibility theory and the Dempster-Shafer theory of evidence, based on the criteria of Walley (1996)

### Table 7. Accuracy using Dempster-Shafer on a fault dictionary.

"... In PAGE 34: ...4 Dempster-Shafer and Nearest Neighbor Compared To further compare the differences between the Dempster-Shafer approach and nearest-neighbor classification, we computed the accuracy for all bit-error combinations using Dempster-Shafer as we did for nearest neighbor. These results are shown in Table7 . In interpreting this table and Table 4, we can consider the bit errors as corresponding to some amount of lost information.... ..."

### Table 6. Dempster-Shafer calculations with two bit errors.

"... In PAGE 33: ...o warrant assigning confidence values of 0.75 to these two tests and 0.99 for all other tests. The results of processing all eight test vectors through the Dempster- Shafer calculations with the diagnostic inference model are given in Table6 . The normalized values for evidential probability, this time, show the leading candidates for diagnosis are a1 and i0.... ..."

### Table 1 B, G, N: Beta, Gamma, Normal distributions, respectively, with the corresponding parameters. MPM (Y1): Error ratio obtained by the MPM method using only the probabilistic sensor Y1. Fusion MPM: Error ratio obtained by the MPM method after the Dempster-Shafer fusion of the sensors Y1 and Y2. ICE: The classical ICE assuming all distributions normal.

"... In PAGE 4: ... Thus, concerning these densities, we consider a particular case, though a generalized ICE could easily be applied here. The forms and parameters of ga,gb,gc are given in Table1 , which also contains the Bayesian error ratio (performed from the Bayesian sensor Y1 only), and the quot;fusioned quot; error ratio (performed from the... ..."

### Table 1. Time-space tradeoffs for boolean BPs computing certain fundamental functions

2004

"... In PAGE 4: ... Specific results for certain target functions. We have applied the general method described in the foregoing paragraph to several well-studied target functions, and our results are summarized in Table1 be- low. In each case, we were able to find families of codes that encode the desired function on one hand and have a sufficiently large minimum distance on the other hand (see Section 4 for more details).... In PAGE 4: ... In each case, we were able to find families of codes that encode the desired function on one hand and have a sufficiently large minimum distance on the other hand (see Section 4 for more details). We note that the bounds1 in Table1 are based on Theorem 4 and Theorem 5, which are slightly stronger than (1). We also point out that the branching programs we consider are multi-output.... In PAGE 4: ...Table 1. Time-space tradeoffs for boolean BPs computing certain fundamental functions In all cases, except for the third row in Table1 , the underlying computation model is a deterministic boolean branching program that is not restricted in any way (not necessarily oblivious, or leveled, or read/write limited, and so forth). This makes it somewhat difficult to compare our results to the best previously known bounds, since these bounds usually apply to more restricted computation models.... In PAGE 4: ... One of these applies only to q-way BPs, where q grows as nO(1) and must be at least 2120. Since we are concerned with boolean (2-way) BPs, bounds of this kind are not directly comparable to those in Table1 . For boolean BPs, Sauerhoff and Woelfel [34] prove the following.... In PAGE 4: ... There exists a positive constant c such that for all r 6 c log n, the space of all the BPs in this set is bounded by n=r234r . There are several important differences between Theorem 1 and our bounds for IMUL in Table1 . 
First, Theorem 1 applies to nondeterministic BPs whereas our results do not; in this sense Theorem 1 is more... (Footnote: all the logarithms in Table 1, and throughout this paper, are to base 2.)... In PAGE 5: ... This difference does not seem to be significant, since it is known [12,38] that the middle bit is the hardest one to compute. A third difference is that the number of reads r in Theorem 1 is restricted to O(log n), whereas our bounds in the second and third rows of Table 1 hold without this restriction. Note that when the number of reads is limited to r, the computation time T is also limited, since T ≤ rn.... In PAGE 5: ...nondeterministic BPs and read-r vs. unrestricted BPs. However, ignoring these differences, we can try to make a comparison as follows. If r is constant and m = Θ(n), then Theorem 2 reduces to S = Ω(n), which is exactly the same result we get from the first row of Table 1 for the case T = O(n). On the other hand, if r is allowed to grow, say r = (1/4) log n, then the bound on S in Table 1 becomes stronger than Theorem 2. With regard to DFT, the best known (to us) lower bound on the time-space tradeoff of boolean BPs, due to [1, 40], establishes TS = Ω(n^2).... In PAGE 5: ... Here, if time is superlinear in n, then the resulting bound on the space is sublinear.
In contrast, the bound in the fourth row of Table 1 makes it possible to provide superlinear bounds on space when time is also superlinear. For example, for T = ω(n log^(1−ε_T) n) our results imply that S = ω(n log^(1−ε_S) n), where ε_T, ε_S are arbitrary positive constants.... In PAGE 5: ... We then explain how these results lead to lower bounds on the time-space tradeoff of branching programs. In Section 4, we deal with specific target functions and prove the bounds compiled in Table 1. In particular, in Section 4.... In PAGE 5: ... From this, we infer the lower bound for the DFT operation. Finally, in Section 5, we describe several typical functional forms for the general lower bounds in Table 1. We also compare these results with the complexity of known algorithms [19].... In PAGE 11: ...

| Operation | Time | Space | Model |
| --- | --- | --- | --- |
| n-bit FMUL, CONV, MVMUL | ω(n) | Ω(n) | General BP2 |
| n-bit FMUL, CONV, MVMUL | ω(n log n / log^(1+ε_T) log n), ∀ε_T > 0 | ω(n^(1−ε_S)), ∀ε_S > 0 | General BP2 |
| n-bit IMUL | ω(n log^(1+ε_T) log n), ∀ε_T > 0 | ω(n^(1−ε_S)), ∀ε_S > 0 | General BP2 |
| n-bit IMUL | t = ω(log n / log^(1+ε_T) log n), ∀ε_T > 0 | ω(n^(1−ε_S)), ∀ε_S > 0 | read-r/write-w BP2, t = max{r, w} |
| n-point DFT | ω(n log^(1−ε_T) n), ∀ε_T > 0 | ω(n log^(1−ε_S) n), ∀ε_S > 0 | General BP2 |
| n-point DFT | ω(n log^2 n / log^2 log n) | ω(n^(1−ε_S)), ∀ε_S > 0 | General BP2 |

Table 2. Typical functional forms for the lower bounds in Table 1. It can be easily shown (using Mathematica™, for instance) that if we substitute the lower bounds in Table 2 for T and S in the corresponding time-space tradeoff expressions given in Table 1, they vanish asymptotically as n → ∞, thereby verifying the results in Table 2. Upper bounds from known efficient algorithms.... ..."

### Table III. Results for the baseline and Dempster-Shafer combination of evidence with |B| = 20 for WT10g.

2005

Cited by 2

### Table IV. Results for the baseline and Dempster-Shafer combination of evidence with |B| = 50 for WT10g.

2005

Cited by 2

### Table V. Results for the baseline and Dempster-Shafer combination of evidence with |B| = 1000 for WT10g.

2005

Cited by 2