### Table 2: Transformation Table for Functional Logic Programming.

"... In PAGE 11: ... fib(1,1). 2 PAGE can model this programming paradigm introducing a new transformation table ( Table 2 ) which is used in conjunction with the tables used for the LP paradigm. Now we consider that functional arguments have the same notational significance as the previously seen ordinary variables.... In PAGE 11: ... Functional arguments are prioritized in the unification procedure (the unification procedure becomes a matching procedure since we are dealing with interpreted functional terms), so that when we have to unify a variable argument which is in the argument list of a functional argument we prefer to unify the latter and discard the former. This can easily be seen in Table 3, where the equivalent AG is given after the use of transformation Table 1 in conjunction with the transformation Table 2 . Fig.... In PAGE 12: ....4.1. Multi-pass execution (simple case) The method described so far ( Table 2 is used) is operationally incomplete when the minimal elements in the partial ordering induced by the generated dependency graph are unbound (for instance some of the arguments in the argument list of a functional argument are unbound). In such cases, a delayed binding mechanism has to be used.... In PAGE 12: ... 6 we can see the dependency graph for the equivalent AG corresponding to Table 5 generated after the use of Table 1 in conjunction with Table 4. Here, we do not have functional arguments and so we do not apply the transformation Table 2 . Arrows corresponding to Table 1 are drawn with solid lines, while arrows corresponding to Table 4 are drawn with dashed lines.... In PAGE 15: ...he new inherited attribute). This is shown in Fig. 5 with the dashed lines. 2 It is noteworthy that the same behaviour is possible if we supply the FLP transformation table ( Table 2 ) with extra transformation actions, simulating this way the constraint solver. However, those actions are problem dependent and they do not fit in a declarative way of programming.... 
..."
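The delayed binding mechanism the excerpt describes can be sketched as a multi-pass evaluator over the attribute dependency graph: bindings whose dependencies are still unbound are delayed to a later pass. This is an illustrative sketch only, not the paper's algorithm; all names are invented.

```python
# Illustrative sketch (not from the paper): a multi-pass evaluator that
# delays attribute bindings whose dependencies are still unbound.

def evaluate(attrs, deps, rules, env):
    """attrs: attribute names; deps[a]: attributes a depends on;
    rules[a]: function of the dependency values; env: initially bound attributes."""
    pending = [a for a in attrs if a not in env]
    while pending:
        progress = False
        still_pending = []
        for a in pending:
            if all(d in env for d in deps[a]):      # all inputs bound?
                env[a] = rules[a](*(env[d] for d in deps[a]))
                progress = True
            else:
                still_pending.append(a)             # delay this binding
        if not progress:    # stuck: unbound minimal elements in the ordering
            raise ValueError(f"cannot bind: {still_pending}")
        pending = still_pending
    return env

# A small dependency chain: c needs a and b, b needs a.
env = evaluate(["a", "b", "c"],
               {"a": [], "b": ["a"], "c": ["a", "b"]},
               {"a": lambda: 1, "b": lambda a: a + 1, "c": lambda a, b: a + b},
               {})
```

When the graph has unbound minimal elements (an attribute whose rule is missing), the evaluator makes no progress in a pass and reports the stuck attributes, mirroring the operational incompleteness the excerpt notes.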

### Table 4: Transformation operations

"... In PAGE 4: ... Rules consist of an ordered set of operations that progressively compose the answer value string. Table 4 shows all possible operations and what they concatenate. Building a transformation rule for a particular hint tuple and answer value pair then involves three basic steps: 1.... In PAGE 5: ... Once an alignment has been chosen, the transformation rule can be built. This simply involves iterating through each token of the answer value and adding one of the operations in Table 4 to the rule, based on the alignment (or lack of) with the tokenized hint. When a value token cannot be aligned to a hint token, Append is chosen.... ..."
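The rule-building step described above can be sketched as a single walk over the answer-value tokens, emitting a copy operation for aligned tokens and Append otherwise. The operation name `CopyHintToken` is invented for illustration; only Append is named in the excerpt.

```python
# Hypothetical sketch of the rule-building step: one operation per
# answer-value token, chosen by its alignment with the tokenized hint.
# "CopyHintToken" is an invented stand-in for the copy-style operations.

def build_rule(answer_tokens, hint_tokens):
    rule = []
    for tok in answer_tokens:
        if tok in hint_tokens:                       # token aligns with the hint
            rule.append(("CopyHintToken", hint_tokens.index(tok)))
        else:                                        # no alignment: Append is chosen
            rule.append(("Append", tok))
    return rule

rule = build_rule(["New", "York", "City"], ["New", "York"])
# [("CopyHintToken", 0), ("CopyHintToken", 1), ("Append", "City")]
```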

### Table 1: Categories of parallelism in logic

"... In PAGE 2: ... It is also possible, however, to view the program to be evaluated as data, which are transformed by certain operations according to a particular inference mechanism, and apply some of these operations in parallel to the whole, or parts of the original program. Table 1 shows an overview of the categories of parallelism, arranged according to the granularity and the components of a logic program. It identifies the particular data structures and operations applied in a category.... In PAGE 2: ... It identifies the particular data structures and operations applied in a category. The notation used in Table 1 is based on viewing a logic program as a collection of clauses, possibly organized into modules (or objects). The clauses consist of literals, arranged as head and tail.... ..."

### Table 1: Rules for Transforming Boolean Operations to Probability Expressions

1994

"... In PAGE 4: ... The canonical SOP form must be used since it is necessary for one and only one product term to be at logic value "1" for a given input to preserve independence. The rules in Table 1 are used to determine the probability expression for each product in the canonical SOP form. Table 1: Rules for Transforming Boolean Operations to Probability Expressions... In PAGE 5: ...given in [29]. This method requires the function to be represented as a logic diagram. In this technique, each primary input, each internal interconnection, and the output is assigned a unique variable name. Using the rules in Table 1 , each internal node is expressed as a function of the primary inputs. This step is performed through subsequent substitutions until an expression is derived for the output variable in terms of the primary input variables, thus forming the OPE.... In PAGE 5: ... As an example, consider the logic diagram illustrated in Figure 2 that is a realization of equation (1). Using the variables assigned to each interconnection and the rules in Table 1 , the OPE can be derived as follows. First, apply the rule for the AND operator: D = AB (3) E = BC (4) Next, using the rule for the OR operator: G = AB + BC - AB^2C (5) Finally, the idempotence property rule is employed: G = AB + BC - ABC (6) Notice that the idempotence property is particularly useful since it allows all exponents to be dropped during the formation of the equations.... ..."

Cited by 2
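The derivation above can be checked numerically: for f = AB + BC with independent inputs, the OPE G = ab + bc - abc (after idempotence drops the squared term) must equal the exact probability obtained by enumerating all input assignments. A small sketch, with example probabilities a, b, c chosen arbitrarily:

```python
# Worked check of the OPE from the excerpt: enumerate all Boolean input
# assignments of f = AB + BC, weight each by its probability under
# independence, and compare against the closed-form OPE.

from itertools import product

def exact_prob(f, probs):
    """P(f = 1) for independent Boolean inputs with given marginals."""
    total = 0.0
    for bits in product([0, 1], repeat=len(probs)):
        if f(*bits):
            w = 1.0
            for bit, p in zip(bits, probs):
                w *= p if bit else (1 - p)
            total += w
    return total

a, b, c = 0.3, 0.5, 0.7
enumerated = exact_prob(lambda A, B, C: (A and B) or (B and C), (a, b, c))
ope = a*b + b*c - a*b*c          # G = AB + BC - ABC after idempotence
assert abs(enumerated - ope) < 1e-12
```

Without the idempotence step (i.e. using ab + bc - ab*b*c), the two values disagree, which is exactly why the exponent-dropping rule matters.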

### Table 3. Results for composite transformations.

2007

"... In PAGE 12: ...3.2 Results Full results for the compositing operations appear in Table 3 . Figure 21 illustrates the speedups, which range from 1.... In PAGE 12: ... As for the pixel transformations, the composite videos produced by the compressed processing technique would sometimes benefit from an additional re-compression stage. The last three columns in Table 3 quantify this benefit by comparing the compression factors achieved by compressed processing and normal processing (including a re-compression step). For screencasts and computer animations, compressed processing preserves a sizable compression... ..."

### Table 3: Cost of basic operations:

1994

"... In PAGE 7: ... Parallel Database Generation The ideas in Programs (2) and (3) are fine for loading small tables (less than a million records), but they use a single processor and so run one hundred times slower than an algorithm that divides the task into a hundred smaller ones, each running in parallel on a separate processor. Program (3) runs at 6,000 records/second given the performance assumptions of Table 3 (5,000 instructions per insert at 30 MIPS implies 6,000 inserts per second). At that rate, the billion-record load would take almost two days.... In PAGE 7: ...) At that rate, the billion-record load would take almost two days. The same load could run in twenty minutes if done in parallel on the hundred-processor cluster described by Figure 2 and Table 3 . Parallel algorithms require a way to create processes on specific CPUs.... In PAGE 9: ... Quickly Generating Billion-Record Synthetic Databases 8 Using the assumptions of Figure 2 and Table 3 , the algorithm should generate 600,000 records per ... In PAGE 12: ... Machines with 64-bit registers and arithmetic make this technique unnecessary. Program (9):

```c
#define P xxx           /* see Table 3 for good values */
#define G xxx           /* of prime and generator      */
#define A (P / G)       /* A = prime / generator       */
#define B (P % G)       /* B = prime mod generator     */
static long seed = G;   /* start the seed at G         */

long next_value(long N)         /* function to compute next value */
{
    do                          /* loop while next value is >= N  */
    {
        long seed_over_A = seed / A;    /* components of seed,    */
        long seed_mod_A  = seed % A;    /* relative to A = P/G    */
        /* Schrage's method: compute G*seed mod P without overflow */
        seed = (G * seed_mod_A) - (B * seed_over_A);
        if (seed < 0)
            seed = seed + P;    /* fold back into 0..P-1          */
    } while (seed >= N);        /* discard all values >= N        */
    return seed;                /* return new value               */
}
```

... ..."

Cited by 63
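The quoted generator can be exercised with a toy parameter pair to see the key property: because G generates the multiplicative group modulo the prime P, repeated calls visit every value in 1..P-1 exactly once. A small sketch in Python, using P = 7, G = 3 (production use needs a large prime; the table the paper references lists good values):

```python
# Toy check of the Schrage-style generator quoted above: with prime
# P = 7 and generator G = 3, successive values form a permutation of
# 1..P-1, which is why the paper uses it to generate unique keys.

P, G = 7, 3
A, B = P // G, P % G          # A = prime / generator, B = prime mod generator
seed = G                      # start the seed at G

def next_value(N):
    global seed
    while True:
        # Schrage's trick: G*seed mod P without intermediate overflow
        s = G * (seed % A) - B * (seed // A)
        seed = s + P if s < 0 else s
        if seed < N:          # rejection step: discard values >= N
            return seed

values = [next_value(P) for _ in range(P - 1)]
assert sorted(values) == list(range(1, P))   # permutation of 1..6
```

The rejection step makes the same code generate a permutation of 1..N-1 for any N <= P, at the cost of discarding values >= N.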


### Table 1. Labeled program syntax of spock.

"... In PAGE 6: ... Here, we restrict ourselves to labeled normal logic programs, although spock also accepts programs with a richer syntax, such as disjunctive logic programs. The basic input language of spock is depicted in Table 1... In PAGE 7: ... Fig. 1. Data flow of answer-set computation for labeled normal programs. Rule labeling is introduced as a device to explicitly refer to certain rules. As stated in Table 1 , a rule may have its label omitted. For a previously unlabeled rule, spock automatically assigns the label rn according to the line number n in which it appears in the program.... ..."
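The auto-labeling behaviour described above can be sketched in a few lines: rules without an explicit label receive r followed by their line number. This is a simplified illustration with naive parsing, not spock's actual input handling.

```python
# Hypothetical sketch of the labeling rule from the excerpt: a rule
# without an explicit label gets the label r<n>, where n is the line
# number on which it appears. Parsing here is deliberately naive.

def label_rules(program_lines):
    labeled = []
    for n, line in enumerate(program_lines, start=1):
        line = line.strip()
        if not line:
            continue
        head = line.split(":-")[0]         # text before the rule arrow
        if ":" in head:                    # already labeled, e.g. "l2: b."
            labeled.append(line)
        else:                              # unlabeled: assign rn by line number
            labeled.append(f"r{n}: {line}")
    return labeled

rules = label_rules(["a :- b.", "l2: b.", "c :- not d."])
# ["r1: a :- b.", "l2: b.", "r3: c :- not d."]
```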


### Table 3: Basic Computer Instructions.

"... In PAGE 5: ... The micro-operations required for this instruction are: [fetch-cycle register-transfer statements, garbled in extraction] Register Transfer Statements: A register transfer language is useful not only for describing the internal organization of the computer, but also for specifying the logic circuits needed for its design. The implemented computer has 35 instructions, as in Table 3 . Each instruction is represented by a single statement or a set of statements.... ..."
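The register-transfer style of specification the excerpt describes can be illustrated with a generic fetch cycle, where each statement moves a value between registers on a timing step. This is a conventional textbook fetch cycle, not necessarily the paper's design; the register names PC, MAR, MBR, IR are standard.

```python
# Illustrative sketch (not the paper's machine): register-transfer
# semantics of a generic fetch cycle, one assignment per micro-operation.

def fetch_cycle(memory, regs):
    """One instruction fetch: MAR <- PC; MBR <- M[MAR], PC <- PC + 1; IR <- MBR."""
    regs["MAR"] = regs["PC"]                 # T0: MAR <- PC
    regs["MBR"] = memory[regs["MAR"]]        # T1: MBR <- M[MAR]
    regs["PC"] = regs["PC"] + 1              #     PC  <- PC + 1
    regs["IR"] = regs["MBR"]                 # T2: IR  <- MBR
    return regs

regs = fetch_cycle({0: 0x1A2B, 1: 0x3C4D},
                   {"PC": 0, "MAR": 0, "MBR": 0, "IR": 0})
# regs["IR"] == 0x1A2B and regs["PC"] == 1 after the fetch
```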