### Table 3. First-Order Sensitivity Derivatives

2001

"... In PAGE 6: ...ard-mode (Eq. (3) and (4)) and the reverse-mode (Eq. (6) and (7)) approaches. The calculated FO SDs from a hand-differentiated incremental-iterative (HDII) im- plementation of these two approaches are presented in Table3 , where the results are seen to agree, as ex- pected. ... In PAGE 8: ... American Institute of Aeronautics and Astronautics The FO SDs presented in Table3 have been thoroughly verified for accuracy through a meticulous implementa- tion of the method of central finite-differences, where agreement to six significant digits or greater is noted in all comparisons. The SO Method 3 is implemented by application (in the forward-mode) of ADIFOR to appropriate pieces of the FORTRAN code used earlier for hand-differentiated forward-mode calculation of the FO SDs.... ..."

Cited by 3
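The forward-mode machinery the excerpt above relies on is language-agnostic. As a hedged illustration (the paper's HDII and ADIFOR implementations are FORTRAN and are not reproduced here; the function `f` below is a made-up example), a minimal forward-mode differentiation sketch using dual numbers:

```python
# Minimal sketch of forward-mode differentiation with dual numbers.
# Illustrative only: not the paper's code; f(x) is an arbitrary test function.

class Dual:
    """A value paired with its derivative; arithmetic propagates both."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

    __rmul__ = __mul__

def f(x):
    return 3.0 * x * x + 2.0 * x + 1.0   # f'(x) = 6x + 2

y = f(Dual(2.0, 1.0))        # seed dx/dx = 1
print(y.val, y.der)          # 17.0 14.0
```

Since `f` also accepts plain floats, a central finite-difference check such as `(f(2.0 + h) - f(2.0 - h)) / (2 * h)` can be compared against `y.der`, mirroring the verification strategy described in the excerpt.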

### Table 1: Constructors in First-Order Description Logics

"... In PAGE 2: ... The for- mer are interpreted as subsets of a given domain, and the latter as binary relations on the domain. Table1 lists constructors that allow one to build (complex) concepts and roles from (atomic) concept names and role names.... In PAGE 3: ...Table 1: Constructors in First-Order Description Logics Description logics di er in the constructions they admit. By combining constructors taken from Table1 , two well-known hierarchies of description logics may be obtained. The logics we consider here are extensions of FL?; this is the logic with gt;, ?, universal quanti cation, conjunction and un- quali ed existential quanti cation 9R: gt;.... In PAGE 3: ... For instance, FLEU? is FL? with (full) existential quanti cation and disjunction. Description logics are interpreted on interpretations I = ( I; I), where I is a non-empty domain, and I is an interpretation function assigning subsets of I to concept names and binary relations over I to role names; complex concepts and roles are interpreted using the recipes speci ed in Table1 . The semantic value of an expression E in an interpretation I is simply the set EI.... In PAGE 4: ... First, item 1 is next to trivial. The semantics given in Table1 induces translations ( ) and ( ) taking concepts and roles, respectively, to formulas in a rst-order language whose signature consists of unary predicate symbols corresponding to atomic concepts names, and binary predicate symbols corresponding to... In PAGE 7: ... Hence, ALC lt; ALCR, ALCN, ALCRN. a Now, what do we need to do to adapt the above result for other exten- sions of FL? de ned by Table1 ? For logics less expressive than ALC we can not just use bisimulations, as such logics lack negation or disjunction, and these are automatically preserved under bisimulations; moreover, the proof of Theorem 3.3 uses the presence of the booleans in an essential way.... 
In PAGE 8: ... Table 1 that are not in FL⁻, and examine which changes are needed to characterize the resulting logics. This is followed by a section in which we consider combinations of constructors. ... In PAGE 20: ... 7.6 Classifying an Arbitrary Description Logic To obtain a characterization of an arbitrary description logic (defined from Table 1), simply combine the observations listed in Sections 4. ... In PAGE 20: ... Several comments are in order. First, the diagram does not mention all possible combinations of the constructors listed in Table 1. The reason for... ..."
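The translation into first-order logic that the excerpt's "item 1" alludes to can be sketched in the standard textbook form (this rendering is not copied from the paper; the subscript x marks the free variable of the translated concept):

```latex
\begin{align*}
(A)_x &= A(x) && \text{atomic concept name } A\\
(\top)_x &= (x = x), \qquad (\bot)_x = \neg\,(x = x)\\
(C \sqcap D)_x &= (C)_x \wedge (D)_x\\
(\forall R.\,C)_x &= \forall y\,\bigl(R(x,y) \rightarrow (C)_y\bigr)\\
(\exists R.\,\top)_x &= \exists y\; R(x,y)
\end{align*}
```

Each constructor in the paper's Table 1 contributes one such clause, which is why the translation is "next to trivial": it reads the clauses directly off the semantics.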

### Table 1. First-order logic part of the ODL calculus.

2006

"... In PAGE 10: ... Yet, rule applications for rst- order reasoning and program reasoning are not separated but intertwined. For rst-order and propositional logic standard rule schemata are listed in Table1 , including an integer induction scheme. Within the rules for the program logic part (Table 2), state update rules R29{R30 constitute a peculiarity of ODL and will be discussed after de ning rule applications.... ..."

Cited by 7

### Table 1: Correspondence Between MEBN and First-Order Logic Syntactic Elements

2003

"... In PAGE 8: ... The value of RV X when applied to instance V is written X(V); the expression X(V)=O denotes that RV X has outcome O when applied to instance V. Table1 shows the correspondence between the above MEBN syntactic elements and syntactic elements of first-order logic. Table 1 also shows MEBN constructs corresponding to logical connectives, nested function application, and quantification.... In PAGE 8: ... Table 1 shows the correspondence between the above MEBN syntactic elements and syntactic elements of first-order logic. Table1 also shows MEBN constructs corresponding to logical connectives, nested function application, and quantification. In first-order logic, logical connectives are used to compose terms into sentences.... ..."

Cited by 2

### Table 2. First-order model-checking results for the EMF (columns EMF-1, EMF-2, EMF-3)

"... In PAGE 9: ... It turned out that the EMF (as well as the ELV mentioned above) satis es the applicability conditions of rst-order model checking. Table2 lists some results for the EMF case study. It shows that due to the automatic transfor- mation, the veri cation e ort needed signi cantly less time and space compared with the results of Table 1.... In PAGE 17: ... The main point about all this is that once we have transformed system and speci cation, we can use a standard (symbolic) model checker to verify a system, as all rst-order operations are coded in the discrete boolean domain. The trans- formation has been implemented and used to produce the results of Table2 . In the following, we will present the algorithm in more detail, illustrate it via an example, and sketch the arguments proving its soundness.... ..."

### Table 1: Categorization of different RNN models according to the four aspects proposed above

"... In PAGE 5: ...re also others, e.g. (Watrous amp; Kuhn, 1992; Zeng, Goodman, amp; Smyth, 1993). On the other hand, some methods only use an approximation to the real gradient by truncating the computation of the backward recurrence. With these, some representative RNN models are categorized as shown in Table1 . As far as the models listed are concerned, it seems very consistent that rst-order networks follow the prediction paradigm using only positive examples for training whereas second-order networks follow the classi cation paradigm using both positive and negative examples for training.... In PAGE 6: ... Moreover, using the RTRL algorithm, each update of the forward-propagated gradient involves O(n2m4) terms. On the other hand, the rst-order RNN models listed in Table1 have serious problems mainly caused by the prediction paradigm, as will be discussed in detail in Section 2. Our objective in this study is to avoid the high computational requirements of second-order RNN models and the problems caused by the prediction paradigm in existing rst-order RNN models.... In PAGE 21: ...5{0.8 Initial weight range 0:2{ 1:7 (a) Parameter settings Training # strings 500{700 set % illegal strings 88{93% Test # strings 500 set % illegal strings 94% (b) Data sets Table1 1: The setting that led to 100% accuracy and required the smallest number of training epochs in learning the grammar with embedded structures. # hidden units 9 Learning rate 0.... In PAGE 21: ...2 Momentum 0.5 Initial weight range 1:7 Training # strings 600 set % illegal strings 90% # epochs 194 Table1 2: Results of learning the grammar with embedded structures. # trials 20 % converged trials 55% Recognition rate 96.... In PAGE 21: ...4{100% # epochs 194{3030 Average # epochs 787 As in the rst experiment, let us analyze more closely the hidden layer patterns learned by the network. As shown in Table1 3, the hidden layer patterns for substrings BP and BT are quite... In PAGE 22: ... 
In other words, the relevant path information has not been remembered correctly for the correct identi cation of the last symbol. Table1 3: Hidden layer patterns found in the ASCOC trained to learn the grammar with em- bedded structures. H Substring 0.... In PAGE 23: ...Table1 4: Hidden layer patterns found in the SRN trained to learn the grammar with embedded structures. H Substring 0.... In PAGE 24: ...Table1 5: Summary of results for di erent variants of SRN, including SRN and ASCOC as the two extreme cases. The symbol c refers to the existence of direct context-to-output connections, a refers to the use of auto-associative learning, and n refers to the use of negative examples for training in addition to positive examples.... ..."
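The first-order, context-feedback update that the excerpt contrasts with second-order models can be sketched as one forward step of an Elman-style SRN. All dimensions, weight names, and initialization choices below are illustrative assumptions, not values from the paper:

```python
# Sketch of one forward step of a first-order (Elman-style) SRN:
# the hidden state is fed back as context input at the next step.
# Illustrative only; sizes and names are assumptions, not the paper's.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 7, 9, 7          # e.g. a 7-symbol grammar alphabet

W_xh = rng.normal(scale=0.5, size=(n_hid, n_in))    # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))   # context (recurrent)
W_hy = rng.normal(scale=0.5, size=(n_out, n_hid))   # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def step(x, h_prev):
    """First-order update: h_t is a function of [x_t; h_{t-1}] only."""
    h = sigmoid(W_xh @ x + W_hh @ h_prev)
    y = sigmoid(W_hy @ h)              # next-symbol prediction scores
    return h, y

h = np.zeros(n_hid)
x = np.eye(n_in)[0]                    # one-hot encoding of current symbol
h, y = step(x, h)
print(y.shape)                         # (7,)
```

In the prediction paradigm described above, `y` would be trained against the one-hot encoding of the next symbol in a (positive) training string; a second-order network would instead combine `x` and `h_prev` multiplicatively, at the O(n²m⁴) RTRL cost the excerpt mentions.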

### Table 1: Syntax of First-Order System

"... In PAGE 21: ...A A A A0 A0 A00 -trans A A00 A1 B1 : : : An Bn m n -record fl1:A1; : : : ; ln:Amg fl1:B1; : : : ; ln:Bng A0 A B B0 -! A!B A0!B0 A B -sig Sig (X)A Sig (X)B -class (Class I with meth; init) Sig I Table1 . Subtyping empty-ok ` ok ? ` ok ? ` A ok weaken-ok ?; x : A ` ok ?ok ? ` init : Rep ? ` m : Rep!I(Rep) class-ok ? ` (Class I with s; m) ok Table 2.... In PAGE 89: ... 2.1 Syntax The language, whose syntax appears in Table1 , derives largely from the object calculi of Abadi and Cardelli [1], Fisher, Honsell, and Mitchell [10], and Liquori [18]. The types of the language in- clude base types, function types, and object types.... ..."

### Table 1: First-order Markov probabilities

1990

"... In PAGE 21: ... Table1 summarizes the statistical characteristics of our text. The rst seven columns of each row of Table 1 contain the conditional probabilities of a character occurring, given that the character de ning the row has just been observed.... In PAGE 21: ...Table 1 summarizes the statistical characteristics of our text. The rst seven columns of each row of Table1 contain the conditional probabilities of a character occurring, given that the character de ning the row has just been observed. Row i of the column labelled P 0 gives the unconditional probability of character i occurring.... In PAGE 22: ...443 The conditional probabilities are much more skewed than the unconditional dis- tribution. This is evident from Table1 and also from the forms of the corresponding Hu man trees ( rst column of Table 2). We nd that one needs 2832 bits to encode the text as a rst-order Markov process, or 2.... ..."

Cited by 8
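The first-order Markov table the excerpt describes (conditional probability of each character given its predecessor, plus the unconditional column P0) is straightforward to estimate from bigram counts. A toy sketch in Python, where the corpus string is invented for illustration and is not the paper's text:

```python
# Sketch: estimating first-order Markov (bigram) conditional probabilities
# and the unconditional distribution P0 from a text. The corpus is made up.
from collections import Counter, defaultdict

text = "abacabadabacaba"

pair_counts = defaultdict(Counter)
for prev, cur in zip(text, text[1:]):
    pair_counts[prev][cur] += 1

# cond[prev][cur] = P(cur | prev): one row of the conditional table
cond = {
    prev: {c: n / sum(counts.values()) for c, n in counts.items()}
    for prev, counts in pair_counts.items()
}

# Unconditional (zeroth-order) distribution, the P0 column
p0 = {c: n / len(text) for c, n in Counter(text).items()}

print(cond["a"])   # 'a' is followed by b, c, or d, with skewed probabilities
```

The skew the excerpt notes shows up here too: each conditional row is more concentrated than `p0`, which is exactly what makes per-row Huffman codes shorter than a single unconditional code.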

### Table 2: Characteristics of First-Order Doppler-Factored STAP

1998

"... In PAGE 7: ... There are three processing stages in this appli- cation: Stage 1: Doppler processing; Stage 2: Weight computation by covariance matrix factorization; Stage 3: Weight application. The time complexity of each task running on one node of IBM SP-2 and the number of tasks in each stage are shown in Table2 . Library routines (IBM essl library and LAPACK) were used for implementing the computational kernels involved in each stage.... ..."

Cited by 7