Results 11–20 of 121
Encapsulation and Composition of Ontologies
 In Proceedings of AAAI Workshop on AI & Information Integration
, 1998
"... Ontology concerns itself with the representation of the objects in the universe and the web of their various connections. The traditional task of ontologists has been to extract from this tangle a single ordered structure, in the form of a tree or lattice. This structure consists of the terms that r ..."
Abstract

Cited by 30 (2 self)
Ontology concerns itself with the representation of the objects in the universe and the web of their various connections. The traditional task of ontologists has been to extract from this tangle a single ordered structure, in the form of a tree or lattice. This structure consists of the terms that represent the objects, and the relationships that represent connections between objects. Recent work in ontology goes so far as to consider several distinct, superimposed structures, each of which represents a classification of the universe according to a particular criterion. Our purpose is to defer the task of globally classifying terms and relationships. Instead, we focus on composing them for use as we need them. We define contexts to be our unit of encapsulation for ontologies, and use a rule-based algebra to compose novel ontological structures within them. We separate context from concept, the unit of ontological abstraction. Also, we distinguish composition from subsumption, or containment, the relationships that commonly provide structure to ontologies. Adding a formal notion of encapsulation and composition to ontologies leads to more dynamic and maintainable structures and, we believe, greater computational efficiency for knowledge bases.
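The abstract's idea of contexts as encapsulated, composable ontology fragments can be sketched roughly as follows. This is a minimal illustration, not the paper's rule-based algebra; the `Context` class, its operations, and the sample terms are all hypothetical.

```python
# Hypothetical sketch: a context encapsulates terms and relationships,
# and composition merges contexts without committing to one global
# classification. Not the paper's formalism.

class Context:
    def __init__(self, terms, relations):
        self.terms = set(terms)            # ontological terms
        self.relations = set(relations)    # (term, link, term) triples

    def compose(self, other):
        """Union-style composition of two encapsulated ontologies."""
        return Context(self.terms | other.terms,
                       self.relations | other.relations)

    def restrict(self, keep):
        """Keep only the given terms and the relations among them."""
        keep = self.terms & set(keep)
        rels = {(s, l, t) for (s, l, t) in self.relations
                if s in keep and t in keep}
        return Context(keep, rels)

vehicles = Context({"car", "wheel"}, {("wheel", "part-of", "car")})
animals = Context({"horse", "leg"}, {("leg", "part-of", "horse")})
merged = vehicles.compose(animals)   # terms from both contexts
```

Composition here is deliberately naive (set union); the point is only that structure is built on demand rather than fixed in a single global tree.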
Computational Comonads and Intensional Semantics
, 1991
"... We explore some foundational issues in the development of a theory of intensional semantics. A programming language may be given a variety of semantics, differing in the level of abstraction; one generally chooses the semantics at an abstraction level appropriate for reasoning about a particular kin ..."
Abstract

Cited by 27 (1 self)
We explore some foundational issues in the development of a theory of intensional semantics. A programming language may be given a variety of semantics, differing in the level of abstraction; one generally chooses the semantics at an abstraction level appropriate for reasoning about a particular kind of program property. Extensional semantics are typically appropriate for proving properties such as partial correctness, but an intensional semantics at a lower abstraction level is required in order to reason about computation strategy and thereby support reasoning about intensional aspects of behavior such as order of evaluation and efficiency. It is obviously desirable to be able to establish sensible relationships between two semantics for the same language, and we seek a general category-theoretic framework that permits this. Beginning with an "extensional" category, whose morphisms we can think of as functions of some kind, we model a notion of computation as a comonad with certain e...
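The comonad mentioned in the abstract can be illustrated concretely. This is a generic comonad sketch (nonempty lists read as "a value with its computation history"), chosen for familiarity; it is not the paper's construction.

```python
# A comonad supplies extract (counit) and extend (co-Kleisli extension).
# Sketch instance: nonempty lists as "current value plus history".

def extract(history):
    """Counit: the current (most recent) value."""
    return history[-1]

def extend(f, history):
    """Apply a history-consuming function f to every nonempty prefix."""
    return [f(history[:i + 1]) for i in range(len(history))]

# Comonad law: extending with extract changes nothing.
w = [1, 2, 3]
assert extend(extract, w) == w

# An "intensional" observer sees more than the final value:
running_sum = extend(sum, w)   # [1, 3, 6]
```

The intuition matching the abstract: `extract` gives the extensional answer, while `extend` lets a function observe how that answer was computed.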
ReactiveML, a Reactive Extension to ML
, 2005
"... We present ReactiveML, a programming language dedicated to the implementation of complex reactive systems as found in graphical user interfaces, video games or simulation problems. The language is based on the reactive model introduced by Boussinot. This model combines the socalled synchronous mode ..."
Abstract

Cited by 25 (11 self)
We present ReactiveML, a programming language dedicated to the implementation of complex reactive systems as found in graphical user interfaces, video games or simulation problems. The language is based on the reactive model introduced by Boussinot. This model combines the so-called synchronous model found in Esterel, which provides instantaneous communication and parallel composition, with classical features found in asynchronous models, like dynamic creation of processes. The language comes as a conservative extension of an existing call-by-value ML language and it provides additional constructs for describing the temporal part of a system. The language is given a behavioral semantics à la Esterel and a transition semantics describing precisely the interaction between ML values and reactive constructs. It is statically typed through a Milner type inference system and programs are compiled into regular ML programs. The language has been used for programming several complex simulation problems (e.g., routing protocols in mobile ad hoc networks).
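The synchronous model the abstract describes can be caricatured with generators. This is a rough Python sketch of lockstep logical instants, not ReactiveML syntax or semantics; a faithful model of instantaneous broadcast would need a fixpoint per instant, while here emissions are simply collected.

```python
# Hypothetical sketch: reactive processes advance in lockstep logical
# instants. Each `yield` ends the process's current instant and hands
# back the set of signals it emitted during that instant.

def blinker():
    yield {"on"}        # instant 0: emit the (hypothetical) signal "on"
    yield set()         # instant 1: silent
    yield {"on"}        # instant 2

def run(processes, n_instants):
    """Run all processes one step per instant; log emitted signals."""
    trace = []
    for _ in range(n_instants):
        emitted = set()
        for p in processes:
            try:
                emitted |= next(p)
            except StopIteration:
                pass                # terminated processes stay silent
        trace.append(emitted)
    return trace

trace = run([blinker()], 3)   # [{'on'}, set(), {'on'}]
```

The key property mirrored here is that all processes share a global notion of an instant, while processes themselves can be created and terminated dynamically.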
Datatype Laws without Signatures
, 1996
"... ing from syntax. Conventionally an equation for algebra ' is just a pair of terms built from variables, the constituent operations of ' , and some fixed standard operations. An equation is valid if the two terms are equal for all values of the variables. We are going to model a syntactic term as a m ..."
Abstract

Cited by 22 (6 self)
Abstracting from syntax. Conventionally an equation for algebra φ is just a pair of terms built from variables, the constituent operations of φ, and some fixed standard operations. An equation is valid if the two terms are equal for all values of the variables. We are going to model a syntactic term as a morphism that has the values of the variables as source. For example, the two terms `x' and `x join x' (with variable x of type tree) are modeled by morphisms id and (id Δ id); join of type tree → tree. So, an equation for φ is modeled by a pair of terms (T φ, T′ φ), T and T′ being mappings of morphisms which we call `transformers'. This faces us with the following problem: what properties must we require of an arbitrary mapping T in order that it model a classical syntactic term? Or, rather, what properties of classical syntactic terms are semantically essential, and how can we formalise these as properties of a transformer T? Of course...
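The move from terms to morphisms can be made concrete in a small functional sketch. This is an illustration under my own encoding (functions for morphisms, tupling for the pairing), not the paper's categorical machinery.

```python
# Sketch: a syntactic term over one variable x is modeled as a function
# (morphism) of x's value. The term `x` becomes the identity; the term
# `x join x` becomes join composed after the pairing <id, id>.

def identity(x):
    return x

def pair(f, g):
    """The pairing <f, g>: x -> (f(x), g(x))."""
    return lambda x: (f(x), g(x))

def term_x(join):
    return identity

def term_x_join_x(join):
    return lambda x: join(*pair(identity, identity)(x))

def equation_holds(join, values):
    """An equation is a pair of term-functions; check it on test values."""
    lhs, rhs = term_x(join), term_x_join_x(join)
    return all(lhs(v) == rhs(v) for v in values)

# x = x join x holds when join is idempotent (e.g. set union) ...
assert equation_holds(lambda a, b: a | b, [frozenset({1}), frozenset()])
# ... and fails when it is not (e.g. addition):
assert not equation_holds(lambda a, b: a + b, [1, 2])
```

Checking on sample values is of course weaker than the universal validity the paper formalises; the sketch only shows the change of viewpoint from terms to morphism pairs.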
Nondeterminism and Infinite Computations in Constraint Programming
, 1995
"... We investigate the semantics of concurrent constraint programming and of various sublanguages, with particular emphasis on nondeterminism and infinite behavior. The aim is to find out what is the minimal structure which a domain must have in order to capture these two aspects. We show that a notion ..."
Abstract

Cited by 16 (5 self)
We investigate the semantics of concurrent constraint programming and of various sublanguages, with particular emphasis on nondeterminism and infinite behavior. The aim is to find the minimal structure which a domain must have in order to capture these two aspects. We show that a notion of observables, obtained by the upward-closure of the results of computations, is relatively easy to model even in the presence of synchronization. On the contrary, modeling the exact set of results is problematic, even for the simple sublanguage of constraint logic programming. We show that most of the standard topological techniques fail in capturing this more precise notion of observables. The analysis of these failed attempts leads us to consider a categorical approach.
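The upward-closure of results, which the abstract singles out as the easy-to-model observable, is a simple order-theoretic operation. A finite sketch, with constraint stores encoded as sets of primitive constraints and entailment as set inclusion (my encoding, not the paper's domain):

```python
# Sketch: observables as the upward-closure of computation results in a
# finite poset of constraint stores. A store is a frozenset of told
# primitive constraints; r <= x iff r is a subset of x (x is stronger).

def upward_closure(results, universe, leq):
    """All stores in `universe` lying above some result under `leq`."""
    return {x for x in universe if any(leq(r, x) for r in results)}

leq = lambda r, x: r <= x
universe = {frozenset(), frozenset({"a"}), frozenset({"b"}),
            frozenset({"a", "b"})}

obs = upward_closure({frozenset({"a"})}, universe, leq)
# obs contains {a} and {a,b}, but not {} or {b}
```

The closure deliberately forgets which exact stores were produced; the abstract's point is that recovering the exact set of results needs far more structure.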
Using OCL to formalize object-oriented metrics definitions
 INESC, Software Engineering Group ES007/2001, May (version 0.9)
, 2001
"... We propose to standardize objectoriented metrics definitions using the Object Constraint Language (OCL), a part of the Unified Modeling Language (UML) standard, and a metamodel of the modeling formalism. OCL allows specifying invariants, preconditions, postconditions and other types of constraints ..."
Abstract

Cited by 14 (3 self)
We propose to standardize object-oriented metrics definitions using the Object Constraint Language (OCL), a part of the Unified Modeling Language (UML) standard, and a metamodel of the modeling formalism. OCL allows specifying invariants, preconditions, postconditions and other types of constraints. To illustrate this approach, we describe the MOOD2 metrics in OCL, based upon the metamodel of our object design modeling formalism – the GOODLY language. The outcome is, we believe, an elegant, precise and straightforward way to define metrics that may help to overcome several current problems. Moreover, it is a natural approach since we are using object technology to define metrics on object technology itself.
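A metric defined as a query over a metamodel, in the spirit the abstract describes, can be sketched outside OCL as well. The toy metamodel below is hypothetical, and the metric shown is one commonly cited MOOD-style ratio (a method-hiding factor: hidden methods over all methods), not necessarily a MOOD2 metric from the paper.

```python
# Hypothetical sketch: a metrics definition as a query over a toy
# metamodel (classes owning methods with a visibility flag).

class Method:
    def __init__(self, name, visible):
        self.name, self.visible = name, visible

class Cls:
    def __init__(self, methods):
        self.methods = methods

def method_hiding_factor(model):
    """Ratio of hidden (non-visible) methods to all methods."""
    methods = [m for c in model for m in c.methods]
    hidden = [m for m in methods if not m.visible]
    return len(hidden) / len(methods) if methods else 0.0

model = [Cls([Method("get", True), Method("cache", False)]),
         Cls([Method("run", True), Method("step", False)])]
assert method_hiding_factor(model) == 0.5
```

The benefit the abstract claims carries over even to this sketch: the metric's meaning is fixed by an unambiguous query over the metamodel rather than by informal prose.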
TIGUKAT: An Object Model for Query and View Support in Object Database Systems
, 1992
"... Objectoriented computing is influencing many areas of computer science including software engineering, user interfaces, operating systems, programming languages and database systems. The appeal of objectorientation is attributed to its higher levels of abstraction for modeling real world concepts, ..."
Abstract

Cited by 13 (6 self)
Object-oriented computing is influencing many areas of computer science including software engineering, user interfaces, operating systems, programming languages and database systems. The appeal of object-orientation is attributed to its higher levels of abstraction for modeling real world concepts, its support for incremental development and its potential for interoperability. Despite many advances, object-oriented computing is still in its infancy and a universally acceptable definition of an object-oriented data model is virtually nonexistent, although some standardization efforts are underway. This report presents the TIGUKAT object model definition that is the result of an investigation of object-oriented modeling features which are common among earlier proposals, along with some distinctive qualities that extend the power and expressibility of this model beyond others. The literature recognizes two perspectives of an object model: the structural view and the behavioral view. ...
FUNCTIONAL PEARLS: Polytypic Unification
 Journal of Functional Programming
, 1998
"... Unification, or twoway pattern matching, is the process of solving an equation involving two firstorder terms with variables. Unification is used in type inference in many programming languages and in the execution of logic programs. This means that unification algorithms have to be written over a ..."
Abstract

Cited by 13 (3 self)
Unification, or two-way pattern matching, is the process of solving an equation involving two first-order terms with variables. Unification is used in type inference in many programming languages and in the execution of logic programs. This means that unification algorithms have to be written over and over again for different term types. Many other functions also make sense for a large class of datatypes: examples are pretty printers, equality checks, maps, etc. They can be defined by induction on the structure of user-defined datatypes. Implementations of these functions for different datatypes are closely related to the structure of the datatypes. We call such functions polytypic. This paper describes a unification algorithm parametrised on the type of the terms and shows how to use polytypism to obtain a unification algorithm that works for all regular term types.
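A standard first-order unification algorithm over one concrete term type looks as follows; the paper's contribution is deriving such an algorithm once, polytypically, for every regular term type instead. In this sketch (my encoding, not the paper's), strings are variables and tuples `("f", arg1, ...)` are function applications.

```python
# First-order unification with occurs check over a concrete term type.
# Substitutions are dicts from variable names to terms.

def walk(t, s):
    """Chase variable bindings in substitution s."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Does variable v occur in term t under substitution s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(t1, t2, s=None):
    """Return a unifying substitution, or None on failure."""
    s = dict(s or {})
    t1, t2 = walk(t1, s), walk(t2, s)
    if t1 == t2:
        return s
    if isinstance(t1, str):
        return None if occurs(t1, t2, s) else {**s, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, s)
    if t1[0] != t2[0] or len(t1) != len(t2):
        return None                      # clashing function symbols
    for a, b in zip(t1[1:], t2[1:]):
        s = unify(a, b, s)
        if s is None:
            return None
    return s

s = unify(("f", "x", ("g", "y")), ("f", ("g", "z"), "x"))
# s maps x to ("g", "z") and y to "z": both sides become f(g(z), g(z))
```

Note how only `occurs` and the argument loop in `unify` depend on the shape of terms; those are exactly the parts a polytypic definition generates from the datatype's structure.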
Data vs. Decision Fusion in the Category Theory Framework
 In Proceedings of FUSION 2001, the 4th International Conference on Information Fusion
, 2001
"... In this paper we first formal ly define the notions of data fusion andde3W#3I fusion.Thenwe formulate a theorem that decision fusion is a special case of data fusion. We show the meaning of this theorem on a simple example of edge detection. Edge detection can be done in two ways: by first fusing th ..."
Abstract

Cited by 12 (3 self)
In this paper we first formally define the notions of data fusion and decision fusion. Then we formulate a theorem that decision fusion is a special case of data fusion. We show the meaning of this theorem on a simple example of edge detection. Edge detection can be done in two ways: by first fusing the original images and then detecting edges in the fused image (data fusion) or by first detecting edges in each image separately and then fusing the results (decision fusion) of edge detection in the decision fusion block. We show, first in general and then on the edge detection example, that decision fusion can be viewed as a special case of data fusion. To the designer of an information fusion system this means that the choice of the decision fusion approach over data fusion in any specific case needs to be supported by some additional consideration, for instance the computational complexity of the fusion algorithm.
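The edge-detection example can be made concrete with a toy 1-D version. Everything here is a hypothetical simplification of the paper's setup: "images" are lists, fusion is pixelwise max, and an edge is an adjacent difference above a threshold.

```python
# Toy 1-D illustration: data fusion = fuse images then detect edges;
# decision fusion = detect edges per image then combine edge maps (OR).

def edges(img, thr=1):
    """Edge map: True where adjacent pixels differ by more than thr."""
    return [abs(a - b) > thr for a, b in zip(img, img[1:])]

def fuse(a, b):
    """Pixelwise max as a simple image-fusion operator."""
    return [max(x, y) for x, y in zip(a, b)]

img1 = [0, 0, 5, 5]
img2 = [0, 0, 0, 5]

data_fusion = edges(fuse(img1, img2))               # fuse, then detect
decision_fusion = [e or f for e, f in
                   zip(edges(img1), edges(img2))]   # detect, then fuse
```

With these inputs the two pipelines disagree on the last position, which illustrates the paper's practical point: choosing decision fusion over data fusion is a real design decision that needs justification, not an interchangeable implementation detail.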
Progressive ontology alignment for meaning coordination: An informationtheoretic foundation
 In 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems
, 2005
"... We elaborate on the mathematical foundations of the meaning coordination problem that agents face in open environments. We investigate to which extend the BarwiseSeligman theory of information flow provides a faithful theoretical description of the partial semantic integration that two agents achie ..."
Abstract

Cited by 12 (8 self)
We elaborate on the mathematical foundations of the meaning coordination problem that agents face in open environments. We investigate to what extent the Barwise-Seligman theory of information flow provides a faithful theoretical description of the partial semantic integration that two agents achieve as they progressively align their underlying ontologies through the sharing of tokens, such as instances. We also discuss the insights and practical implications of the Barwise-Seligman theory with respect to the general meaning coordination problem.
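The Barwise-Seligman notions the abstract relies on are concrete enough to sketch. In channel theory, a classification relates tokens to types, and an infomorphism between classifications A and B is a pair of maps (on types, A to B; on tokens, B to A) satisfying: f_down(b) satisfies α in A iff b satisfies f_up(α) in B. The ontologies and maps below are invented for illustration.

```python
# Sketch: classifications as dicts from tokens to the set of types
# they satisfy, plus a check of the infomorphism condition.

A = {"t1": {"Car"}, "t2": {"Bike"}}                 # agent A's ontology
B = {"i1": {"Vehicle4W"}, "i2": {"Vehicle2W"}}      # agent B's ontology

f_up = {"Car": "Vehicle4W", "Bike": "Vehicle2W"}    # types: A -> B
f_down = {"i1": "t1", "i2": "t2"}                   # tokens: B -> A

def infomorphism(A, B, f_up, f_down):
    """f_down(b) |=_A alpha  iff  b |=_B f_up(alpha), for all b, alpha."""
    return all((f_up[alpha] in B[b]) == (alpha in A[f_down[b]])
               for b in B for alpha in f_up)

assert infomorphism(A, B, f_up, f_down)
```

In the paper's setting the shared tokens (instances) are what lets the agents test and refine candidate type maps until this biconditional holds on the part of the ontologies they have exchanged.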