### Table 2. Computational models of language using probabilistic and statistical methodsa

"... In PAGE 2: ...earning; what is distinctive is the specific structures (e.g. trees, dependency diagrams) relevant for language. In computational linguistics, the practical challenge of parsing and interpreting corpora of real language (typically text, sometimes speech) has led to a strong focus on probabilistic methods (Table 2). However, computational linguistics often parts company from standard linguistic ... ..."
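The snippet's point about probabilistic methods over corpora can be made concrete with the simplest such model: an n-gram language model estimated from token counts. The sketch below is a hypothetical illustration (not from the cited paper) of a bigram model with add-one smoothing; the toy corpus and function names are invented for the example.

```python
from collections import Counter

def train_bigram(corpus):
    """Count unigrams and bigrams over tokenized sentences,
    with sentence-boundary markers added."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def prob(uni, bi, prev, word, vocab_size):
    """Add-one (Laplace) smoothed bigram probability P(word | prev),
    so unseen bigrams get small but nonzero probability."""
    return (bi[(prev, word)] + 1) / (uni[prev] + vocab_size)

corpus = [["the", "dog", "barks"], ["the", "cat", "sleeps"]]
uni, bi = train_bigram(corpus)
V = len(uni)
p = prob(uni, bi, "the", "dog", V)  # seen bigram: higher than any unseen one
```

Real corpus work replaces add-one with better smoothing (e.g. Kneser-Ney), but the estimation-from-counts pattern is the same.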

### Table 2. The inference rules for PC.

y0 are fixed and all different.

3.3 Power to Simulate 2-Counter Machines

The first (and weakest) form of universality that we consider is that a process calculus has the expressive power of 2-counter machines (or, equivalently, Turing machines) in the sense that, for each n, we can exhibit a term U2CMn whose process graph simulates in lock step a universal 2-counter machine on input n. Calculi like CCS, CSP, ACP, and Meije are all universally expressive in this sense. Actually, trying to code a 2-counter or Turing machine in each of these languages is a nice way to get familiar with them. Via a rather tricky encoding, we prove below that PC also has the power of 2-counter machines. Theorem 3.8 PC has the expressive power of 2-counter machines. Proof Suppose that a universal 2-counter machine has code of the form l1:

1993

"... In PAGE 11: ... It is also possible to view this operator as a special case of the action refinement operator as studied by Goltz and Van Glabbeek [17]: r refines an action a into the nondeterministic sum of the actions in {b | r(a, b)}. The inference rules of PC are presented in Table 2. In the table a and b range over A, ... ..."

Cited by 19
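The snippet asserts that a calculus is universal if it can simulate a 2-counter (Minsky) machine in lock step. A minimal interpreter for such a machine shows exactly what behavior a calculus term must reproduce; the instruction set below (increment, decrement-or-jump-if-zero, halt) is one standard formulation, not the cited paper's encoding.

```python
def run_2cm(program, c0=0, c1=0, max_steps=10000):
    """
    Interpret a 2-counter (Minsky) machine. Instructions:
      ("inc", r, l)      -- increment counter r, go to line l
      ("decjz", r, l, z) -- if counter r > 0, decrement and go to l;
                            otherwise go to z
      ("halt",)
    Two unbounded counters suffice for Turing-completeness.
    """
    c = [c0, c1]
    pc = 0
    for _ in range(max_steps):
        ins = program[pc]
        if ins[0] == "halt":
            return tuple(c)
        if ins[0] == "inc":
            _, r, l = ins
            c[r] += 1
            pc = l
        else:  # decjz
            _, r, l, z = ins
            if c[r] > 0:
                c[r] -= 1
                pc = l
            else:
                pc = z
    raise RuntimeError("step budget exhausted")

# Example program: transfer counter 0 into counter 1 (c1 += c0; c0 = 0).
transfer = [
    ("decjz", 0, 1, 2),  # 0: if c0 > 0, decrement and go to 1, else halt
    ("inc", 1, 0),       # 1: c1++, loop back to 0
    ("halt",),           # 2
]
```

Simulating this step relation in lock step, state by state, is the expressiveness criterion the theorem establishes for PC.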

### Table 2: Classification accuracy (%) on text comparing our method (LDM) vs. Euclidean (EDM), probabilistic global metric (PGDM) and support vector machine (SVM).

2006

"... In PAGE 5: ... We observe that both the classification and retrieval accuracy improve noticeably when unlabeled data is available. Experimental Results for Text Categorization Table 2 shows the classification accuracy for the three distance metrics. ... ..."

Cited by 3

### Table 1 Proving the Equivalence of STATICquery and D-WFS

2001

"... In PAGE 33: ... Specifically, the main result of this paper shows that, when restricted to a common query language, the two semantics become entirely equivalent: STATICquery ≡ D-WFS. This fundamental result uses an elegant and powerful characterization of STATICquery, the static semantics restricted to the query language Lquery AEB. Table 1 illustrates the overall idea of the proof of the equivalence between STATICquery and D-WFS. The first line contains information about the full STATIC semantics. ... ..."

### Table 2: Effect of Induction-based Learning on BMC

"... In PAGE 5: ...1. Table 2 shows the runtime for a few industrial instances. We can see that the induction-based learning can be very powerful, especially for hard UNSAT cases. ... ..."

### Table 1: Classification accuracy (%) on image data comparing our method (LDM) vs. Euclidean (EDM), probabilistic global metric (PGDM) and support vector machine (SVM).

2006

"... In PAGE 5: ... We refer to this algorithm as Probabilistic Global Distance Metric Learning, or PGDM for short. Experimental Results for Image Classification Classification Accuracy The classification accuracy using Euclidean distance, the probabilistic global distance metric (PGDM), and the local distance metric (LDM) is shown in Table 1. Clearly, LDM outperforms the other two algorithms in terms of the classification accuracy. ... In PAGE 5: ... We estimate the top eigenvectors based on the mixture of labeled and unlabeled images, and these eigenvectors are used to learn the local distance metric. The classification accuracy and the retrieval accuracy of the local distance metric learning with unlabeled data are presented in Table 1 and Figure 3. We observe that both the classification and retrieval accuracy improve noticeably when unlabeled data is available. ... ..."

Cited by 3
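The metrics compared in the snippet above all reduce to the choice of distance function inside a nearest-neighbor classifier: Euclidean uses the identity matrix, while learned global metrics such as PGDM use a Mahalanobis-style matrix M fitted to (dis)similarity constraints. The sketch below illustrates only that distance-swap idea with a hand-picked diagonal M; it is not the PGDM or LDM algorithm, and all names and data are invented for the example.

```python
import numpy as np

def euclidean(x, y):
    return np.sqrt(np.sum((x - y) ** 2))

def mahalanobis(x, y, M):
    """Generalized distance sqrt((x-y)^T M (x-y)); M must be
    positive semi-definite. Euclidean is the special case M = I;
    metric learning fits M to labeled constraints."""
    d = x - y
    return np.sqrt(d @ M @ d)

def knn_predict(x, X_train, y_train, dist, k=3):
    """Classify x by majority vote among its k nearest neighbors
    under the supplied distance function."""
    ds = [dist(x, xi) for xi in X_train]
    nearest = np.argsort(ds)[:k]
    return np.bincount(y_train[nearest]).argmax()

# Toy data: feature 0 separates the classes, feature 1 is noise.
X_train = np.array([[0.0, 0.0], [0.2, 9.0], [5.0, 0.5], [5.1, 8.0]])
y_train = np.array([0, 0, 1, 1])
M = np.diag([1.0, 0.01])  # down-weight the noisy second feature
query = np.array([4.8, 0.2])
pred = knn_predict(query, X_train, y_train,
                   lambda a, b: mahalanobis(a, b, M), k=1)
```

Down-weighting uninformative features is exactly the kind of adjustment a learned M makes automatically.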

### Table 1. The computer machines.

"... In PAGE 4: ... For the bio-inspired models, thirty independent runs were performed in every case to ensure statistical significance. Since results may depend on the computational power available, all simulations were tested in three different machines (Table 1). For the HW method, a grid search with a 0. ... ..."

### Table 1: Comparison of sequential power estimation methods

1995

"... In PAGE 20: ... Computing the present state line probabilities using the technique presented in the previous sections results in 1) accurate switching activity estimates for all internal nodes in the network implementing the sequential machine; 2) accurate, robust and computationally efficient power estimates for the sequential machine. In Table 1, results are presented for several circuits. In the table, combinational corresponds to the purely combinational estimation method of [4] and uniform-prob corresponds to the sequential estimation method of [4] that assumes uniform state probabilities. ... In PAGE 26: ...y Theorem 7.3. In Table 5, we present results that indicate the improvement in accuracy in power estimation when k-unrolled or m-expanded networks are used. Results are presented for the finite state machine circuits of Table 1 for 0 ≤ k ≤ 2 and 1 ≤ m ≤ 4. The percentage differences in power from the exact power estimate are given. ... In PAGE 27: ... The CPU times for power estimation are in seconds on a SUN SPARC-2. These times can be compared with those listed in Table 1 under the "Line Prob." column, as those times correspond to k = 0 and m = 1. ... ..."

Cited by 34
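The snippet describes turning per-node signal probabilities into a power estimate. The generic version of that pipeline uses the standard dynamic power formula, 0.5 · Vdd² · f · Σ Cᵢ·aᵢ, where the switching activity aᵢ of a node at logic 1 with probability p is 2p(1-p) under a temporal-independence assumption. The sketch below shows only this textbook formula with invented node data; the cited paper's contribution is precisely computing the state probabilities more accurately than such independence assumptions allow.

```python
def switching_activity(p_one):
    """Expected toggles per cycle for a node whose signal probability
    (fraction of cycles at logic 1) is p_one, assuming successive
    cycles are independent: a = 2 * p * (1 - p)."""
    return 2.0 * p_one * (1.0 - p_one)

def avg_dynamic_power(nodes, vdd=3.3, freq=20e6):
    """Average dynamic power 0.5 * Vdd^2 * f * sum(C_i * a_i).
    nodes: list of (load_capacitance_farads, signal_probability)."""
    return 0.5 * vdd**2 * freq * sum(
        c * switching_activity(p) for c, p in nodes
    )

# Hypothetical internal nodes: (capacitance, probability of being 1).
nodes = [(10e-15, 0.5), (15e-15, 0.25), (8e-15, 0.9)]
power = avg_dynamic_power(nodes)  # watts
```

Note that activity peaks at p = 0.5 and vanishes at p = 0 or 1, which is why biased state probabilities (vs. the uniform-prob assumption mentioned above) change the estimate.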