### Table 12.2 illustrates several translations of natural language into two conceptual-graph representations [49]. Graphs involving advanced features like propositions, situations, coreference links, and beliefs would appear in similar form; the only additions are dashed lines and boxes around subgraphs.
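A conceptual graph links concept nodes (brackets) to relation nodes (parentheses). As a minimal sketch of that notation, not taken from the cited table [49], here is a toy structure for the classic sentence "A cat is on a mat"; the class and function names are illustrative:

```python
# A minimal sketch of a conceptual graph in linear notation:
# concept nodes in brackets, relation nodes in parentheses.
# The sample sentence and class names are illustrative only.

from dataclasses import dataclass

@dataclass
class Concept:
    type_: str              # concept type, e.g. "Cat"
    referent: str = "*"     # "*" marks an unspecified individual

@dataclass
class Relation:
    type_: str              # relation type, e.g. "On"
    args: tuple             # pair of concepts linked by this relation

def linear_form(rel: Relation) -> str:
    """Render a dyadic relation in the linear conceptual-graph notation."""
    a, b = rel.args
    return f"[{a.type_}: {a.referent}]->({rel.type_})->[{b.type_}: {b.referent}]"

# "A cat is on a mat"
g = Relation("On", (Concept("Cat"), Concept("Mat")))
print(linear_form(g))   # [Cat: *]->(On)->[Mat: *]
```

Propositions, situations, and beliefs would wrap such subgraphs in further context nodes, which the caption notes are drawn as dashed boxes.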


2001

### Table 2: Representation Use Summary

2005

"... In PAGE 5: ... (hypernym, entailment, etc.). Table 2 summarizes the number of terms required to represent the 23 thoughts for each of the different representation types. The data in Table 2 show that in the representation of these 23 thoughts there is a natural compaction that occurs during the transformations that produce the first three of the six types of representation terms: on average, the 12 tokens appearing in a thought are reduced to 8.5 concepts.... ..."

Cited by 1

### Table 2 Representation and Defaults for All Domains

2001

"... In PAGE 8: ... For each domain, we pick as default one representation that is either most natural or most commonly adopted. Table 2 tabulates the options offered and the default. 3.... ..."

Cited by 1

### Table I: History of the UNO natural language processing system. Abbreviations: KRM = Knowledge Representation Module; PP = prepositional phrase; UIUC = University of Illinois at Urbana-Champaign; WSJ = Wall Street Journal; WSU = Wayne State University.


### Table 1. Natural, Gray, and K-code

"... In PAGE 5: ....3. Representation of Binary Codes Some binary codes have copy properties and can therefore be represented efficiently by CDDs. Consider the realization of the natural code, Gray code, and K-codes given in Table 1. Figure 8 shows the CDD representing these three codes.... ..."
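The natural-to-Gray conversion behind the first two columns of such a table can be sketched briefly (the K-codes are specific to the cited paper and are omitted here). A minimal Python sketch; the function names are illustrative:

```python
# Natural binary <-> Gray code conversion, a minimal sketch.
# Only the standard natural and Gray codes from the table are shown;
# the paper-specific K-codes are omitted.

def binary_to_gray(n: int) -> int:
    """Convert a natural-binary integer to its Gray-code value."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the Gray encoding by cascading XORs of shifted copies."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# Key property: adjacent natural numbers differ in exactly one Gray bit.
for i in range(15):
    a, b = binary_to_gray(i), binary_to_gray(i + 1)
    assert bin(a ^ b).count("1") == 1

print(binary_to_gray(5))   # 7  (101 -> 111)
print(gray_to_binary(7))   # 5
```

The single-bit-change property is what gives the Gray code its regular, copy-like structure when drawn as a decision diagram.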

### TABLE 2. Categorized items from the first part of the model perception questionnaire.

### Table 2. Value of external representation in supporting human task performance and learning

2004

"... In PAGE 7: ...Contents (continued) Page Table 2. Value of external representation in supporting human task performance and learning .... In PAGE 40: ... The second point is that representations can improve domain learning characteristics (Cheng, 1999) by providing the learner with an external representation that encodes all relevant features of a problem space and by helping to promote the integration of those features. Table 2, below, is adapted from and extends Woods (1994), and presents a set of core performance and learning issues that can be supported by carefully constructed representations. Table 2.... In PAGE 42: ... These are significant challenges, particularly when we consider the task-dependent nature of representations. Table 2 (see page ) listed a number of ways that external representations impact task and learning performance. The implication of this list is that task-specific representations better mesh with and support schemas, and improve task and learning performance.... ..."

### Table 3: The geometries of three views: canonical representation. Listed are: the nature of the two equivalent invariant descriptions; the quantities above the horizontal line, which are the elements of the invariant description for two views; the quantities below that line, which are the additional parameters measurable from three views but not from two pairs of two-view invariant descriptions; and the two alternative expressions for P̃3, as a function of the descriptions 1-2, 1-3, or of the descriptions 1-2, 2-3.

"... In PAGE 21: ... On the other hand, there are the groups which conserve it: the group of displacements SE3, the group of direct affine unimodular transformations SA3. Because of the depth-speed ambiguity, the last two are not relevant in the context of analysis from two views, and therefore they are mentioned in Table 3 but not in Table 2. The projective indetermination of scale in the two-view analysis It should be noted that the invariants e0, H1, S are projective, thus defined only up to a scale factor, as are the matrices A and H.... In PAGE 23: ...and then N views is presented in Sec. 5.3, as a generalization of the canonical decomposition for two views. As previously, the basic insight is simple and natural, and leads automatically to the results which are summarized in Table 3. When recovering the global representation from the local representations, one needs to recover some scale factors from a set of representations which are each defined only up to a scale factor.... In PAGE 27: ... The work of [23], for instance, fits into our formalism in the projective case and illustrates its computational advantages. We have summarized in Table 3 the results specific to the canonical decomposition of a triple of projection matrices:... In PAGE 39: ... Applying the formulas (49), we obtain 1 and qN1. From there, Table 3 enables us to compute the invariant description P1, P2, P3, which is a particular triplet of projection matrices (the first one fixed by definition) yielding the epipolar geometry described by the three initial fundamental matrices. Thus we have started from fundamental matrices, which describe relations between cameras, and we have ended with projection matrices, which describe the cameras themselves.... ..."
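The step from a fundamental matrix to a canonical pair of projection matrices can be sketched using the standard projective canonical form P1 = [I | 0], P2 = [[e']ₓF | e'], where e' is the left epipole. This is a textbook construction, not necessarily the exact decomposition of the cited paper, and the variable names are illustrative:

```python
# Sketch: canonical projection matrices from a fundamental matrix F,
# using the standard projective canonical form (an assumption here,
# not necessarily the paper's exact decomposition).

import numpy as np

def skew(v):
    """3x3 cross-product matrix [v]_x."""
    x, y, z = v
    return np.array([[0, -z, y], [z, 0, -x], [-y, x, 0]], float)

def canonical_pair(F):
    """P1 = [I | 0], P2 = [[e']_x F | e'], where e' is the left
    epipole, i.e. the null vector of F^T."""
    _, _, Vt = np.linalg.svd(F.T)
    e2 = Vt[-1]
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([skew(e2) @ F, e2[:, None]])
    return P1, P2

rng = np.random.default_rng(0)
# build a generic rank-2 F: [a]_x B has a as its left epipole
a, B = rng.normal(size=3), rng.normal(size=(3, 3))
F = skew(a) @ B
P1, P2 = canonical_pair(F)

X = rng.normal(size=4)            # a random projective 3-D point
x1, x2 = P1 @ X, P2 @ X           # its two projections
assert abs(x2 @ F @ x1) < 1e-9    # epipolar constraint holds
```

As the excerpt notes, each such pair is defined only up to a scale factor (and a projective transformation), which is exactly the ambiguity the canonical three-view representation in the table has to resolve.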