### Table 4: Design summary of EM0, EM1, and EM2.

"... In PAGE 4: ... The G_m means the average gate count necessary to implement each instruction. The G_m in the experiment is about 18064/40 ≈ 452, where 18064 is the total gate count necessary to implement the initial design specification (see Table 4), and 40 is the number of instructions in the initial design specification. The eq.... In PAGE 4: ... CF CMPY1(2) complex MAC 2096 220464 4; SDIS1(2) square distance 1808 271238 8; HDIS hamming distance 87 99876 17; CVENC convolutional encoding 306 48526 23; MIN,MAX min, max 101 100034 3. Table 3: The user-defined instructions for GSM application. Table 4 shows the summary of this experiment. The EM0 is the initial design containing all predefined instructions of MetaCore.... ..."
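The quoted average can be checked directly. A minimal sketch of the arithmetic, using only the two figures stated in the snippet (18064 total gates, 40 instructions):

```python
# Illustrative check of the average-gate-count figure quoted in the snippet:
# G_m = total gates for the initial design / number of instructions.
total_gates = 18064        # total gate count for the initial design specification (Table 4)
num_instructions = 40      # instructions in the initial design specification
g_m = total_gates / num_instructions
print(round(g_m))  # 452, matching the quoted G_m ≈ 452
```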

### Table 1: Reestimation formulas for EM

2003

"... In PAGE 4: ... [Figure 1: Paraphrase learning system; panels: Acquisition step, Filtering step] ... 1998)); they are given in Table 1, where N() de-... ..."

Cited by 11

### Table 1: EM code instructions 2.2 Abstract Syntax The abstract syntax of EM code is defined as follows, where Label, Register, Nat, and Symbol are syntactic representations of the corresponding primitive domains, as defined in section 4.1. P ∈ Program

### Table 3. Precision-Recall breakeven points showing performance of binary classifiers on Reuters with traditional naive Bayes, EM with one mixture component per class, and EM with varying multi-component models for the negative class. The best multi-component model is noted in bold, and the difference in performance between it and naive Bayes is noted in the rightmost column. Results are shown on the optimal vocabulary size, indicated in parentheses. Note that performance is poor with a single component per class for EM because the data-model fit is poor. When a more natural multi-component model is used for the negative class, EM improves upon naive Bayes.

1998

"... In PAGE 16: ... The left column of Table 3 shows average precision-recall breakeven points for 10 trials of each experiment, for naive Bayes. These numbers are presented at the best vocabulary size for each task, indicated in parentheses.... In PAGE 16: ... The categories with narrow definitions require small vocabularies for best classification, while those with a broader definition require a large vocabulary to capture the category. The second column of Table 3 shows the results of performing EM on the data with a single negative centroid, as in previous experiments. As expected, the fit between the assumed model and the Reuters data is poor, and the results using EM are dramatically worse than simple naive Bayes.... In PAGE 16: ... However, by choosing an appropriate multi-component model with which to run EM, we can get results that do improve upon naive Bayes. The remainder of Table 3 shows the effects of using different multi-component models in conjunction with EM. The negative class is modeled with five, 20 or 40 negative centroids.... ..."

Cited by 139
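The "multi-component model for the negative class" in the snippet above is EM over a mixture with several components assigned to one class. A minimal, self-contained sketch of EM for a mixture of multinomials over word-count vectors is below; it is not the paper's implementation, and the function name, smoothing, and initialisation are illustrative assumptions:

```python
import numpy as np

def em_multinomial_mixture(X, k, n_iter=50, rng=None):
    """EM for a k-component multinomial mixture (sketch, not the paper's code).

    X: (n_docs, vocab_size) word-count matrix; k: number of components,
    e.g. several components modeling a single 'negative' class.
    Returns component priors, per-component word probabilities, and the
    soft component responsibilities for each document.
    """
    rng = np.random.default_rng(rng)
    n, v = X.shape
    # Start from random soft responsibilities (rows sum to 1).
    resp = rng.dirichlet(np.ones(k), size=n)                      # (n, k)
    for _ in range(n_iter):
        # M-step: component priors and Laplace-smoothed word probabilities.
        priors = resp.sum(axis=0) / n                             # (k,)
        word_counts = resp.T @ X + 1.0                            # (k, v)
        word_probs = word_counts / word_counts.sum(axis=1, keepdims=True)
        # E-step: posterior component membership per document, in log domain
        # for numerical stability.
        log_lik = X @ np.log(word_probs).T + np.log(priors)       # (n, k)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
    return priors, word_probs, resp
```

Running EM with k > 1 on the documents of a single class, as sketched here, is what lets a broad "negative" class be covered by several centroids instead of one, which is the change the table above credits with restoring the data-model fit.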

### Table 1: Negative log-likelihood on test data for parametric EM, for different starting points.

1999

"... In PAGE 6: ... We ran two versions of this experiment: one with the approximate message passing algorithm (as above) but without the approximation of the ESS, and the other with both. As can be seen in Table 1, the approximation does not degrade the learning accuracy. On the contrary, the approximation even seems to be slightly beneficial, which could be explained as a regularization effect.... ..."

Cited by 20


### TABLE 3. TERMINAL INPUT REQUESTS TO SYSTEM SOFTWARE

Cited by 1