### Table 1: Complexity of model checking for default logic

1999

"... In PAGE 4: ... The above property and Theorem 6 also imply p 2- completeness of model checking for prerequisite-free dis- junctive default theories. In Table1 we summarize the complexity results de- scribed in this section. Each column of the table corre- sponds to a di erent condition on the conclusion part of default rules.... In PAGE 6: ... From the computational viewpoint, it turns out that Liberatore and Schaerf apos;s notion of model checking is harder than the one presented in this paper. In fact, comparing Table1 with the results reported in [Liber- atore and Schaerf, 1998], it can be seen that our for- mulation of model checking is computationally easier in almost all the cases examined, with the exception of nor- mal and supernormal default theories, for which the com- plexity of the two versions of model checking is the same. 6 Conclusions In this paper we have studied the complexity of model checking in several nonmonotonic logics.... ..."

Cited by 1

### Table 2: Domain Complexity

in Contents

2006

"... In PAGE 10: ... As a third decision field we introduced the sources which could be eventually used as additional domain descriptions and thus as an aid for the domain analysis and the subsequent conceptualization. The global value for the DCLPX driver is a weighted sum of the aforementioned areas, which are depicted in Table2 , 3, 4.... In PAGE 20: ... Very Low Low Nominal High Very High OEXP 2 months 6 months 1 year 1.5 years 3 years DEEXP 6 months 1 year 3 years 5 years 7 years Table2 0: Ontologists and Domain Experts Experience Cost Drivers 5.1.... In PAGE 20: ... Due to the small size of the project teams we adapted the general ratings of the COCOMO model to a maximal team size of 10 (see Table 21). Very Low Low Nominal High Very High PCON 2 months 6 months 1 year 3 years 6 years Table2 1: Personnel Continuity Cost Driver 5.1.... In PAGE 21: ... The maximal time values for the tool experiences are adapted to the ontology management field and are thus lower than the corresponding language experience ratings (see Table 22). Very Low Low Nominal High Very High LEXP 2 months 6 months 1 year 3 years 6 years TEXP 2 months 6 months 1 year 1,5 years 3 years Table2 2: Language and Tool Experience Cost Drivers 6 ONTOCOM: Project Cost Drivers The project category states the dimensions of the engineering process which are relevant for the cost estimation. 6.... In PAGE 21: ... The ratings for tool support are defined at a general level, as shown in Table 23 below. Rating Rating Scale Very Low High quality tool support, no manual intervention needed Low Few manual processing required Nominal Basic manual intervention needed High Some tool support Very High Minimal tool support, mostly manual processing Table2 3: Tool Support Cost Driver... In PAGE 22: ... This measure involves the assessment of the communication support tools (see Table 24). 
Rating Rating Scale Very Low mail Low phone, fax Nominal email High teleconference, occasional meetings Very High frequent F2F meetings Table2 4: Multisite Ontology Development Cost Driver 6.1.... In PAGE 22: ....1.3 Required Development Schedule: SCED This cost driver takes into account the particularities of the engineering process given certain schedule constraints. Accelerated schedules (ratings below 100%, see Table2 5) tend to produce more efforts in the refinement and evolution steps due to the lack of time required by an elaborated domain analysis and conceptualization. Stretch-out schedules (over 100%) generate more effort in the earlier phases of the process while the evolution and refinement tasks are best case neglectable.... In PAGE 22: ... Stretch-out schedules (over 100%) generate more effort in the earlier phases of the process while the evolution and refinement tasks are best case neglectable. Very Low Low Nominal High Very High SCED 75% 85% 100% 130% 160% Table2 5: Required Development Schedule Cost Driver For example, a high SCED value of 130% (see Table 25) represents a stretch- out of the nominal schedule of 30% and thus more resources in the domain analysis and conceptualization. 7 Evaluation The parametric approach described in this report is currently being validated towards a reliable method for estimating the costs of ontology engineering.... In PAGE 22: ... Stretch-out schedules (over 100%) generate more effort in the earlier phases of the process while the evolution and refinement tasks are best case neglectable. Very Low Low Nominal High Very High SCED 75% 85% 100% 130% 160% Table 25: Required Development Schedule Cost Driver For example, a high SCED value of 130% (see Table2 5) represents a stretch- out of the nominal schedule of 30% and thus more resources in the domain analysis and conceptualization. 
7 Evaluation The parametric approach described in this report is currently being validated towards a reliable method for estimating the costs of ontology engineering.... In PAGE 26: ...Non-calibrated Values of the Cost Drivers in ONTOCOM The initial input values for the cost drivers in the ONTOCOM model are illus- trated in Tables 26, 27 and 28. Product Cost Drivers Building Rating Very Low Low Nominal High Very High DCPLX 0,70 0,85 1 1,30 1,60 CCPLX 0,70 0,85 1 1,30 1,60 ICPLX 0,85 1 1,30 DATA 0,80 0,90 1 1,30 1,60 REUSE 0,70 0,85 1 1,15 1,30 DOCU 0,70 0,85 1 1,15 1,30 Reuse and Very Low Low Nominal High Very High Maintenance OE 0,70 0,85 1 1,30 1,60 OM 0,80 0,90 1 1,20 1,40 OT 0,70 0,85 1 1,30 1,60 OU 1,80 1,40 1 0,90 0,80 Table2 6: Product Cost Drivers and their ratings Personnel Cost Drivers Rating Very Low Low Nominal High Very High OCAP 1,30 1,15 1 0,85 0,70 DECAP 1,30 1,15 1 0,85 0,70 OEXP 1,30 1,15 1 0,85 0,70 DEEXP 1,30 1,15 1 0,85 0,70 PCON 1,30 1,15 1 0,85 0,70 LEXP 1,60 1,30 1 0,90 0,80 TEXP 1,50 1,25 1 0,90 0,80 Table 27: Personnel Cost Drivers and their ratings... In PAGE 26: ...Non-calibrated Values of the Cost Drivers in ONTOCOM The initial input values for the cost drivers in the ONTOCOM model are illus- trated in Tables 26, 27 and 28. 
Product Cost Drivers Building Rating Very Low Low Nominal High Very High DCPLX 0,70 0,85 1 1,30 1,60 CCPLX 0,70 0,85 1 1,30 1,60 ICPLX 0,85 1 1,30 DATA 0,80 0,90 1 1,30 1,60 REUSE 0,70 0,85 1 1,15 1,30 DOCU 0,70 0,85 1 1,15 1,30 Reuse and Very Low Low Nominal High Very High Maintenance OE 0,70 0,85 1 1,30 1,60 OM 0,80 0,90 1 1,20 1,40 OT 0,70 0,85 1 1,30 1,60 OU 1,80 1,40 1 0,90 0,80 Table 26: Product Cost Drivers and their ratings Personnel Cost Drivers Rating Very Low Low Nominal High Very High OCAP 1,30 1,15 1 0,85 0,70 DECAP 1,30 1,15 1 0,85 0,70 OEXP 1,30 1,15 1 0,85 0,70 DEEXP 1,30 1,15 1 0,85 0,70 PCON 1,30 1,15 1 0,85 0,70 LEXP 1,60 1,30 1 0,90 0,80 TEXP 1,50 1,25 1 0,90 0,80 Table2 7: Personnel Cost Drivers and their ratings... In PAGE 27: ...Rating Very Low Low Nominal High Very High TOOL 1,60 1,30 1 0,90 0,80 SITE 1,30 1,15 1 0,85 0,70 SCED 1,30 1,15 1 0,85 0,70 Table2 8: Project Cost Drivers and their ratings B Using ONTOCOM: An Example In this section we give a brief example on the usage of the ontology cost model described in this report. Starting from a typical ontology building scenario, in which a domain ontology is created from scratch by the engineering team, we simulate the cost estimation process according to the parametric method under- lying ONTOCOM.... In PAGE 28: ...Rating (Value) DCPLX High (1,20) CCPLX Nominal (1) Product Factors ICPLX Low (0,85) DATA High (1,30) REUSE Nominal (1) DOCU Low (0,85) OCAP High (0,85) DECAP Low (1,15) Personnel Factors OEXP High (0,85) DEEXP Very Low (1,30) PCON Very High (0,70) LEXP Nominal (1) TEXP Nominal (1) Project Factors TOOL Very Low (1,60) SITE Nominal (1) SCED Nominal (1) Table2 9: Values of the Cost Drivers References [1] A.... ..."
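The parametric scheme the snippet describes, a nominal effort scaled by a product of cost-driver multipliers, can be sketched as follows. The equation form, the calibration constants `a` and `b`, and the size measure are assumptions in the style of COCOMO, not figures taken from the ONTOCOM report; only the driver multipliers come from the example ratings in Table 29.

```python
# Sketch of a COCOMO-style parametric estimate using the example cost-driver
# ratings from Table 29. The equation form, the calibration constants a and b,
# and the size measure (thousands of ontology entities) are assumptions,
# not values from the ONTOCOM report.

DRIVERS = {
    "DCPLX": 1.20, "CCPLX": 1.00, "ICPLX": 0.85, "DATA": 1.30,
    "REUSE": 1.00, "DOCU": 0.85, "OCAP": 0.85, "DECAP": 1.15,
    "OEXP": 0.85, "DEEXP": 1.30, "PCON": 0.70, "LEXP": 1.00,
    "TEXP": 1.00, "TOOL": 1.60, "SITE": 1.00, "SCED": 1.00,
}

def effort_multiplier(drivers=DRIVERS):
    """Product of all cost-driver multipliers."""
    m = 1.0
    for value in drivers.values():
        m *= value
    return m

def estimate_person_months(size, a=2.5, b=1.0, drivers=DRIVERS):
    """Person-months = a * size**b * product of cost drivers."""
    return a * size ** b * effort_multiplier(drivers)
```

For the Table 29 ratings the combined multiplier works out to roughly 1.36, i.e. the rated project would need about a third more effort than a nominal one; the high DCPLX, DATA and TOOL penalties outweigh the very favourable PCON rating.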

### Table 1: Universal domains.

1987

"... In PAGE 17: ... For example Mulmuley n5b14n5d requires a projection universal domain to prove some of his results on the existence of inclusive predicates n28for showing equivalence of semanticsn29. Table1 lists some of the known results on universal domains. Posets in the left column are assumed to be countablen3b their ideal completions are countably based.... In PAGE 27: ... It is welln2dknown that the convex powerdomain does not preserve the property of bounded completeness n28look in n5b17n5d for a counterexamplen29. It is not closed over any of the n0crst three classes listed in Table1 . In factn2c it is rather din0ecult to n0cnd a cartesian closed subcategory of PO which is closed under n28n01n29 n5c .... ..."

### Table 1: Abbreviations for predicates

"... In PAGE 4: ... For instance, the formula RestrictDomain(Creation; BuildFuns; Widgets) says that all functions that create instances of the class Widget are members of the set BuildFuns. Formal LePUS requires the predicate notation as de- ned above but, in practice, we use the more concise and readable notation shown in Table1 for the predi- cates that we have introduced in this section. The last line of the table de nes what we mean by saying that two relations commute over a given domain and range.... ..."

### Table 1: Constructors in First-Order Description Logics

"... In PAGE 2: ... The for- mer are interpreted as subsets of a given domain, and the latter as binary relations on the domain. Table1 lists constructors that allow one to build (complex) concepts and roles from (atomic) concept names and role names.... In PAGE 3: ...Table 1: Constructors in First-Order Description Logics Description logics di er in the constructions they admit. By combining constructors taken from Table1 , two well-known hierarchies of description logics may be obtained. The logics we consider here are extensions of FL?; this is the logic with gt;, ?, universal quanti cation, conjunction and un- quali ed existential quanti cation 9R: gt;.... In PAGE 3: ... For instance, FLEU? is FL? with (full) existential quanti cation and disjunction. Description logics are interpreted on interpretations I = ( I; I), where I is a non-empty domain, and I is an interpretation function assigning subsets of I to concept names and binary relations over I to role names; complex concepts and roles are interpreted using the recipes speci ed in Table1 . The semantic value of an expression E in an interpretation I is simply the set EI.... In PAGE 4: ... First, item 1 is next to trivial. The semantics given in Table1 induces translations ( ) and ( ) taking concepts and roles, respectively, to formulas in a rst-order language whose signature consists of unary predicate symbols corresponding to atomic concepts names, and binary predicate symbols corresponding to... In PAGE 7: ... Hence, ALC lt; ALCR, ALCN, ALCRN. a Now, what do we need to do to adapt the above result for other exten- sions of FL? de ned by Table1 ? For logics less expressive than ALC we can not just use bisimulations, as such logics lack negation or disjunction, and these are automatically preserved under bisimulations; moreover, the proof of Theorem 3.3 uses the presence of the booleans in an essential way.... 
In PAGE 8: ...Table1 that are not in FL?, and examine which changes are needed to characterize the resulting logics. This is followed by a section in which we consider combina- tions of constructors.... In PAGE 20: ...7.6 Classifying an Arbitrary Description Logic To obtain a characterization of an arbitrary description logic (de ned from Table1 ), somply combine the observations listed in Sections 4.... In PAGE 20: ... Several comments are in order. First, the diagram does not mention all possible combinations of the constructors listed in Table1 . The reason for... ..."
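The set-theoretic recipes the snippet refers to are easy to state concretely: concepts denote subsets of the domain and roles denote binary relations over it. The sketch below evaluates three of the constructors (conjunction, value restriction ∀R.C, and unqualified ∃R.⊤) over a small finite interpretation; the domain, concept and role names are illustrative, not from the paper.

```python
# Evaluating description-logic constructors over a finite interpretation:
# concepts denote subsets of the domain, roles denote sets of pairs.
# All names below are illustrative examples, not from the source paper.

def conj(c1, c2):
    """(C1 AND C2)^I = C1^I intersect C2^I"""
    return c1 & c2

def value_restriction(role, concept, domain):
    """(forall R.C)^I = {d | every R-successor of d lies in C^I}"""
    return {d for d in domain
            if all(e in concept for (x, e) in role if x == d)}

def unqualified_exists(role, domain):
    """(exists R.TOP)^I = {d | d has at least one R-successor}"""
    return {d for d in domain if any(x == d for (x, _) in role)}

domain = {1, 2, 3}
person = {1, 2}                  # Person^I
has_child = {(1, 2), (2, 3)}     # hasChild^I

parents = unqualified_exists(has_child, domain)
only_person_kids = value_restriction(has_child, person, domain)
# element 3 has no hasChild-successor, so it satisfies the value
# restriction vacuously, exactly as the standard semantics prescribes
```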

### Table 1. Predicate Logic and PST(a)

### Table 1. Naming convention for logical predicates

2003

"... In PAGE 82: ... For the description of the predicates we have used a named parameter notation based on the CLIPS [26, 9] syntax. Table1 portrays some signiflcant predicates. Each predicate argument has an associated value which is denoted with ?argument-value.... ..."

### Table 5: Complexity of deciding entailment given the default rankings

"... In PAGE 18: ...15 Table5 tells us that in all cases except DEB7, entailment does not become easier if the default ranking CA is known. Thus, from a worst case perspective, precomputing the default ranking CA does not pay off (but clearly saves time over repetitive computations).... ..."