### Table 2. Performance of graph-based propagation approach.

2007

Cited by 1

### Table 2. Comparison over GraphBase directed graphs.

### Table 3. Comparison over GraphBase undirected graphs.

### Table 2: Summary of experiments with attributes based on the citation graph. The smallest error that can be achieved by each representation (by selecting suitable weights of the graph-based attributes) is shown.

2003

Cited by 2

### Table 2: Number of pivots, and accuracy of the CPU time estimators. Since margins only need to be computed for a constant initiation interval, and involve only the precedence constraints, reverting to a graph-based approach instead of using a simplex tableau is likely to yield lower scheduling times. Another direction for experimentation is the removal of the redundancies from the central problems, a task easy to achieve in theory by taking advantage of the simplex tableau representation.
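The caption's argument — that margins for a fixed initiation interval follow from the precedence constraints alone — amounts to a longest-path computation over the precedence graph. A minimal sketch under that reading (the graph encoding and function names are illustrative assumptions, not the paper's implementation):

```python
# Sketch: earliest start times from precedence constraints alone,
# computed as longest paths in an acyclic precedence graph, so no
# simplex tableau is needed. Encoding and names are assumptions.
from collections import defaultdict

def earliest_starts(n, edges):
    """edges: (u, v, delay) constraints meaning start[v] >= start[u] + delay.
    The precedence graph is assumed acyclic."""
    indeg = [0] * n
    adj = defaultdict(list)
    for u, v, d in edges:
        adj[u].append((v, d))
        indeg[v] += 1
    # Kahn-style topological sweep, relaxing longest-path distances.
    queue = [i for i in range(n) if indeg[i] == 0]
    start = [0] * n
    while queue:
        u = queue.pop()
        for v, d in adj[u]:
            start[v] = max(start[v], start[u] + d)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return start
```

With four operations and constraints 0→1 (2 cycles), 0→2 (3), 1→3 (1), 2→3 (1), the earliest starts come out as [0, 2, 3, 4].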

### Table 3: Graph-based iterative Group Analysis of gene expression during the yeast diauxic shift.

2004

"... In PAGE 4: ... This connectivity between genes and functional classes is provided by GiGA. Table 3 summarizes the results for the 20.5 hour time point, using two different networks, one for GeneOntology classes, and one for enzyme substrates, extracted from the SwissProt catalytic activity descriptors of yeast proteins.... In PAGE 9: ... Central biological processes detected by DeRisi et al. (1997) and by iGA (see Table 1 and 2). N, number of genes in each subgraph. Table 3: Graph-based iterative Group Analysis of gene expression during the yeast diauxic shift. (Continued)... ..."

### Table 1: Results for text summarization using TextRank sentence extraction. Graph-based ranking algorithms

"... In PAGE 3: ... We evaluate the summaries produced by TextRank using each of the three graph-based ranking algorithms described in Section 2. Table 1 shows the results obtained with each algorithm, when using graphs that are: (a) undirected, (b) directed forward, or (c) directed backward. For a comparative evaluation, Table 2 shows the results obtained on this data set by the top 5 (out of 15) performing systems participating in the single document summarization task at DUC 2002 (DUC, 2002).... In PAGE 4: ... 5 Related Work Sentence extraction is considered to be an important first step for automatic text summarization. As a consequence, there is a large body of work on algorithms. Notice that rows two and four in Table 1 are in fact redundant, since the hub (weakness) variations of the HITS (Positional) algorithms can be derived from their authority (power) counterparts by reversing the edge orientation in the graphs. Only seven edges are incident with vertex 15, less than e.... ..."
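The snippet describes ranking sentences over undirected or directed sentence graphs. A minimal undirected sketch in the spirit of TextRank's PageRank-style iteration — the similarity measure and all names here are illustrative, not the paper's exact formulation:

```python
# Sketch: graph-based sentence ranking. Sentences are vertices, edge
# weights are a word-overlap similarity, and a PageRank-style power
# iteration assigns each sentence a score. Names are illustrative.
import math

def similarity(s1, s2):
    """Word overlap normalized by the log sentence lengths (an assumption)."""
    w1, w2 = set(s1.lower().split()), set(s2.lower().split())
    overlap = len(w1 & w2)
    if overlap == 0:
        return 0.0
    return overlap / (math.log(len(w1) + 1) + math.log(len(w2) + 1))

def textrank(sentences, damping=0.85, iters=50):
    n = len(sentences)
    w = [[similarity(a, b) if i != j else 0.0
          for j, b in enumerate(sentences)]
         for i, a in enumerate(sentences)]
    out_sum = [sum(row) or 1.0 for row in w]  # guard isolated vertices
    scores = [1.0] * n
    for _ in range(iters):
        scores = [(1 - damping) + damping *
                  sum(w[j][i] / out_sum[j] * scores[j] for j in range(n))
                  for i in range(n)]
    return scores
```

A sentence with no overlap to any other receives the baseline score 1 − d (here 0.15), while mutually similar sentences reinforce each other.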

### Table 1: Syntactic domains for the source language. We do not concern ourselves with how variables are represented; we simply assume that there are enough of them. Categorical combinatory logic can be viewed as "classical" combinatory logic augmented with products. Categorical combinators have been proposed as an alternative to SK combinators by Lins [20], revealing once again the close interconnection between graph-based and environment-based approaches.

1993

"... In PAGE 3: ... Readers familiar with the topic may safely skip this section. The syntactic domains of the language are shown in Table 1. The domain var contains variables.... In PAGE 16: ... [Table 10: The C scheme revisited for multiple recursive definitions] The compilation schemes which we have introduced in the last section are conceptually very simple, but they are of course too simple-minded to be used in a real implementation. In the following we will introduce several techniques which aim at improving the quality of the generated code (classically called code optimizations).... In PAGE 18: ... [Table 11: The computation of r-free variables] r-closed[[e]] defines the property of r-closedness relative to the environment.... In PAGE 19: ... [Table 12: The definition of r-free revisited for truly recursive definitions] The modified definition reflects the fact that the defining expressions are truly recursive.... In PAGE 20: ... Table 14 shows the modified compilation schemes. [Table 13: Some more instructions] [Table 14: Compilation schemes for r-closed expressions]... In PAGE 29: ... [Table 15: The R compilation scheme for last call optimization] The mutually recursive definitions of even and odd serve as an example for the effects of last call optimization.... In PAGE 30: ... [Table 16: Optimization rules] In the remainder we name some of the advantages and disadvantages of peephole optimizations in contrast to source code transformations like partial evaluation. It is obvious that the compilation of a λ-expression to a sequence of CAM instructions is a structure-losing mapping.... ..."
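The r-free scheme mentioned in the snippet computes which variables of an expression are free relative to an environment. A minimal, environment-free analogue for a tiny lambda-calculus can be sketched as follows — the AST encoding (nested tuples) and constructor names are illustrative assumptions, not the paper's representation:

```python
# Sketch: free-variable computation for a small lambda-calculus,
# in the spirit of the r-free scheme. Encoding is an assumption.
def free_vars(expr):
    """expr is one of:
    ('var', x), ('app', e1, e2), ('lam', x, body), ('let', x, e1, body)."""
    tag = expr[0]
    if tag == 'var':
        return {expr[1]}
    if tag == 'app':
        return free_vars(expr[1]) | free_vars(expr[2])
    if tag == 'lam':
        # the bound variable is removed from the body's free set
        return free_vars(expr[2]) - {expr[1]}
    if tag == 'let':
        # let x = e1 in body: x is bound in body only
        return free_vars(expr[2]) | (free_vars(expr[3]) - {expr[1]})
    raise ValueError(f"unknown node {tag!r}")
```

For example, the free variables of λx. (x y) are just {'y'}; an expression with no free variables is closed, mirroring the paper's r-closedness test `r-free[[e]] = ∅`.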