### Table 1. Later in this paper, we will briefly discuss some initial results from the observations.

### Table 1: Notations used throughout the paper. Before describing the details of partial processing and selective processing, we first briefly review the basic concepts involved in processing windowed stream joins and establish the notation that will be used throughout the paper.

"... In PAGE 4: ... Other notations will be introduced in the rest of the paper as needed. Table 1 summarizes the notations used throughout the paper. A windowed stream join is performed by fetching tuples from the input streams and processing them against tuples in the opposite window.... ..."
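The probe-and-expire behaviour described in the snippet can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the event format, the `windowed_stream_join` name, and the single merged, timestamp-ordered input are assumptions made for the sketch.

```python
from collections import deque

def windowed_stream_join(events, window, key):
    """Sliding-window join over two streams.

    events: iterable of (ts, stream_id, record) in timestamp order,
            where stream_id is 'A' or 'B'.
    window: maximum timestamp distance for two tuples to join.
    key:    function mapping a record to its join key.
    Returns a list of matched (record_a, record_b) pairs.
    """
    wins = {'A': deque(), 'B': deque()}
    out = []
    for ts, sid, rec in events:
        other = 'B' if sid == 'A' else 'A'
        # Expire tuples that have fallen out of the opposite window.
        while wins[other] and ts - wins[other][0][0] > window:
            wins[other].popleft()
        # Probe the new tuple against the opposite window.
        for _, orec in wins[other]:
            if key(rec) == key(orec):
                out.append((rec, orec) if sid == 'A' else (orec, rec))
        wins[sid].append((ts, rec))
    return out
```

Each arriving tuple is first used to evict expired tuples from the opposite window, then probed against the survivors, then stored in its own window, which is exactly the fetch-and-process loop the snippet describes.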

### Table 1 briefly describes the selected benchmarks. The selection excludes the CPU-oriented benchmarks, whose performance is largely independent of the underlying operating-system architecture and implementation and is thus not relevant for this paper.

### Table 6. Research method applied in each paper.

2007

"... In PAGE 62: ... 3.4 Description of the studies and papers The methodology used in the six research papers is summarised in Table 6. Next, each paper is briefly described.... ..."

### Table 1: Details of the GAs compared in the paper. The bold entries represent the best of each set; note that for set W the differences were not statistically significant. The rank column gives the rank position of an algorithm relative to others in its set, as determined by our comparison method. Full descriptions of these algorithms can be found in Aickelin and Dowsland (2000, 2002). Briefly, (In)direct refers to the type of algorithm used, as described in section 2 of the paper; Bound refers to how intelligently the solutions are built (not applicable to the direct version); Crossover gives the type of crossover used (automatic meaning the algorithm tries to decide itself); Elitism gives the percentage of the best solutions carried over from one generation to the next; and Auto-weights indicates that the algorithm tries to optimise some further parameters itself.

"... In PAGE 8: ... 4 ANALYSIS AND DISCUSSION For the problem under investigation eight algorithms were initially compared (say V1, V2, V3, V4, V5, V6, V7, V8). Brief descriptions of these eight algorithms are given in Table 1. These particular eight algorithms were chosen as they represented milestones in our original research (Aickelin and Dowsland (2000) and (2002)).... In PAGE 10: ... Hence, a second set of algorithms U1, U2, U3, U4, U5, U6, U7 and U8 are compared. A description of these algorithms is given in Table 1. Each of these algorithms was used 20 times on each of the 52 weekly scheduling problems and the above process was applied.... In PAGE 10: ... To confirm that the comparison method is not too sensitive we decided to use a third set of algorithms (W1, W2, W3, W4, W5, W6, W7 and W8) on the scheduling problem in exactly the same way as before. A description of these algorithms is given in Table 1. The algorithms were chosen such that we were reasonably convinced that none was much better than any of the others.... ..."
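The caption's parameters (crossover type, percentage elitism, mutation) can be illustrated with a generic GA skeleton. This is a hedged sketch of the standard technique only, not the Aickelin and Dowsland algorithms: the `genetic_algorithm` name, the binary encoding, one-point crossover, and the selection scheme are assumptions chosen for brevity.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=20, generations=50,
                      elitism=0.1, mutation_rate=0.05, seed=0):
    """Minimal GA with one-point crossover and percentage elitism."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    n_elite = max(1, int(elitism * pop_size))  # Elitism: % carried over
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        next_pop = [ind[:] for ind in pop[:n_elite]]
        while len(next_pop) < pop_size:
            # Pick two parents from the fitter half of the population.
            p1, p2 = rng.sample(pop[:pop_size // 2], 2)
            # One-point crossover at a random cut position.
            cut = rng.randrange(1, n_genes)
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with a small per-gene probability.
            child = [g ^ 1 if rng.random() < mutation_rate else g
                     for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Example: maximise the number of 1-bits (OneMax).
best = genetic_algorithm(sum, n_genes=10)
```

Because elitism copies the top solutions unchanged into the next generation, the best fitness found never decreases, which is the property the caption's Elitism column controls.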


### Table 3. Meta-Analyzed Empirical Papers on Marketing Orientation (MO)-Performance in the Not-for-Profit Sector

2006

"... In PAGE 10: ... We now turn to the meta-analyzed papers. We identified 11 papers that assessed the MO-performance link in the VNPO context that included the information needed to summarize them quantitatively (see Table 3). As our purpose is not to review the literature but to meta-analyze it, we briefly describe each paper.... ..."

### Table 1 briefly summarizes a larger table of per-

1999

Cited by 19