Results 1–10 of 21
Web Ontology Segmentation: Analysis, Classification and Use
, 2006
"... Ontologies are at the heart of the semantic web. They define the concepts and relationships that make global interoperability possible. However, as these ontologies grow in size they become more and more difficult to create, use, understand, maintain, transform and classify. We present and evaluate ..."
Abstract

Cited by 74 (3 self)
Ontologies are at the heart of the semantic web. They define the concepts and relationships that make global interoperability possible. However, as these ontologies grow in size they become more and more difficult to create, use, understand, maintain, transform and classify. We present and evaluate several algorithms for extracting relevant segments out of large description logic ontologies for the purposes of increasing tractability for both humans and computers. The segments are not mere fragments, but stand alone as ontologies in their own right. This technique takes advantage of the detailed semantics captured within an OWL ontology to produce highly relevant segments. The research was evaluated using the GALEN ontology of medical terms and procedures.
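To make the segmentation idea concrete, here is a minimal, hypothetical sketch (not the paper's algorithm, which exploits the full OWL semantics and many relationship types): starting from seed classes, it collects the upward subClassOf closure, the simplest kind of segment that stands alone as a hierarchy in its own right. All class names are invented for illustration.

```python
from collections import deque

def extract_segment(sub_class_of, seeds):
    """Collect the upward closure of `seeds` along subClassOf links.

    sub_class_of: dict mapping a class to its list of direct superclasses.
    Returns the set of classes reachable upward from the seeds, which can
    serve as the signature of a self-contained segment.
    """
    segment, queue = set(seeds), deque(seeds)
    while queue:
        cls = queue.popleft()
        for sup in sub_class_of.get(cls, []):
            if sup not in segment:
                segment.add(sup)
                queue.append(sup)
    return segment

# Toy hierarchy in the spirit of GALEN's procedure taxonomy (names invented).
hierarchy = {
    "Appendectomy": ["SurgicalProcedure"],
    "SurgicalProcedure": ["MedicalProcedure"],
    "Appendix": ["BodyPart"],
}
segment = extract_segment(hierarchy, ["Appendectomy"])
# → {"Appendectomy", "SurgicalProcedure", "MedicalProcedure"}
```

A real segmenter must also follow property restrictions and other axiom links, which is where the "detailed semantics" the abstract mentions comes in.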
Debugging OWL ontologies
, 2005
"... Abstract. Modularity in ontologies is key both for large scale ontology development and for distributed ontology reuse on the Web. In this paper, we address the problem of determining and retrieving the subset of an ontology that captures the essential meaning of a given entity in the ontology. Howe ..."
Abstract

Cited by 67 (6 self)
Modularity in ontologies is key both for large-scale ontology development and for distributed ontology reuse on the Web. In this paper, we address the problem of determining and retrieving the subset of an ontology that captures the essential meaning of a given entity in the ontology. However, even defining what makes a certain set of axioms a relevant subset of an ontology for a certain task is a controversial issue. We provide such a definition by introducing the notion of semantic encapsulation of an entity within an ontology. Such a notion will motivate a formal definition of module. We then provide an algorithm for finding and retrieving the module that encapsulates the meaning of each entity in a given ontology, an optimized implementation and some promising empirical results.
Scalable Distributed Reasoning using MapReduce
"... We address the problem of scalable distributed reasoning, proposing a technique for materialising the closure of an RDF graph based on MapReduce. We have implemented our approach on top of Hadoop and deployed it on a compute cluster of up to 64 commodity machines. We show that a naive implementatio ..."
Abstract

Cited by 55 (11 self)
We address the problem of scalable distributed reasoning, proposing a technique for materialising the closure of an RDF graph based on MapReduce. We have implemented our approach on top of Hadoop and deployed it on a compute cluster of up to 64 commodity machines. We show that a naive implementation on top of MapReduce is straightforward but performs badly and we present several nontrivial optimisations. Our algorithm is scalable and allows us to compute the RDFS closure of 865M triples from the Web (producing 30B triples) in less than two hours, faster than any other published approach.
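The core of MapReduce-based RDFS materialisation is expressing each entailment rule as a map-side grouping and a reduce-side join. The sketch below (illustrative only, not the paper's Hadoop implementation) applies one rule, rdfs9 — if s is of type C and C is a subclass of D, then s is of type D — in a single simulated map/reduce pass:

```python
from collections import defaultdict

RDF_TYPE, SUBCLASS = "rdf:type", "rdfs:subClassOf"

def map_phase(triples):
    """Key each triple on the term that the rdfs9 join runs over."""
    for s, p, o in triples:
        if p == RDF_TYPE:
            yield o, ("type", s)      # keyed on the class
        elif p == SUBCLASS:
            yield s, ("sub", o)       # keyed on the subclass

def reduce_phase(key, values):
    """Join instances of a class with its superclasses (rule rdfs9)."""
    instances = [v for tag, v in values if tag == "type"]
    supers = [v for tag, v in values if tag == "sub"]
    for inst in instances:
        for sup in supers:
            yield (inst, RDF_TYPE, sup)

def one_pass(triples):
    """Simulate one MapReduce job in-process: shuffle by key, then reduce."""
    groups = defaultdict(list)
    for key, value in map_phase(triples):
        groups[key].append(value)
    derived = set()
    for key, values in groups.items():
        derived.update(reduce_phase(key, values))
    return derived

triples = [("alice", RDF_TYPE, "Student"),
           ("Student", SUBCLASS, "Person")]
print(one_pass(triples))  # {('alice', 'rdf:type', 'Person')}
```

A full materialisation iterates such passes to a fixpoint across all RDFS rules; the nontrivial optimisations the abstract alludes to are about ordering the rules and avoiding re-deriving the same triples in every pass.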
Modularity and Web Ontologies
 In Proc. KR2006
, 2006
"... Modularity in ontologies is key both for large scale ontology development and for distributed ontology reuse on the Web. However, the problems of formally characterizing a modular representation, on the one hand, and of automatically identifying modules within an OWL ontology, on the other, has not ..."
Abstract

Cited by 37 (9 self)
Modularity in ontologies is key both for large-scale ontology development and for distributed ontology reuse on the Web. However, the problems of formally characterizing a modular representation, on the one hand, and of automatically identifying modules within an OWL ontology, on the other, have not been satisfactorily addressed, although their relevance has been widely accepted by the Ontology Engineering and Semantic Web communities. In this paper, we provide a notion of modularity grounded in the semantics of OWL-DL. We present an algorithm for automatically identifying and extracting modules from OWL-DL ontologies, an implementation and some promising empirical results on real-world ontologies.
Automatic Partitioning of OWL Ontologies Using E-Connections
 In International Workshop on Description Logics
, 2005
"... On the Semantic Web, the ability to combine, integrate and reuse ontologies is crucial. The Web Ontology Language (OWL) defines the owl:imports construct, which allows to include by reference all the axioms contained in another knowledge base (KB) on the Web. This certainly provides some syntactic m ..."
Abstract

Cited by 20 (0 self)
On the Semantic Web, the ability to combine, integrate and reuse ontologies is crucial. The Web Ontology Language (OWL) defines the owl:imports construct, which allows one to include by reference all the axioms contained in another knowledge base (KB) on the Web. This certainly provides some syntactic modularity, but not logical modularity. We have proposed [3] E-Connections as a suitable formalism for combining KBs and for achieving modular ontology development on the Web. E-Connections are KR languages defined as a combination of other logical formalisms. They were originally introduced in [4] mostly as a way to go beyond the expressivity of each of the component logics, while preserving the decidability of the reasoning services in the combination. We have found that E-Connections can help process, evolve, reuse, and understand OWL ontologies. In this paper, we address the problem of automatically transforming an OWL KB O into an E-Connection Σ in such a way that each of the relevant subdomains modeled in O is represented in a different component of Σ. We present a formal definition and investigation of different variants of the problem, a polynomial
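The actual transformation is semantics-aware, but a crude syntactic approximation conveys the flavour: treat each axiom's signature as a hyperedge over entity names and split the KB into connected components with union–find, so that axioms sharing no symbols land in different components. This is an illustrative sketch with invented names, not the paper's algorithm:

```python
def partition_axioms(axioms):
    """Group axioms into components that share no signature symbols.

    axioms: list of sets, each the signature (entity names) of one axiom.
    Returns a list of components, each a sorted list of axiom indices.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Symbols co-occurring in one axiom belong to the same component.
    for sig in axioms:
        symbols = list(sig)
        for s in symbols[1:]:
            union(symbols[0], s)

    groups = {}
    for i, sig in enumerate(axioms):
        root = find(next(iter(sig)))
        groups.setdefault(root, []).append(i)
    return sorted(groups.values())

# Axioms 0 and 1 share "Student"; axiom 2 is about an unrelated subdomain.
axioms = [{"Person", "Student"}, {"Student", "Enrolled"}, {"Gene", "Protein"}]
print(partition_axioms(axioms))  # [[0, 1], [2]]
```

Pure connectivity over-merges in practice — real ontologies are highly connected — which is why E-Connections use link properties and the component logics' semantics to draw the boundaries instead.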
Modularization: a Key for the Dynamic Selection of Relevant Knowledge Components
 In Proc. of the ISWC 2006 Workshop on Modular Ontologies
, 2006
"... Abstract. Ontology selection is crucial to support knowledge reuse on the ever increasing Semantic Web. However, applications that rely on reusing existing knowledge often require only relevant parts of existing ontologies rather than entire ontologies. In this paper we investigate how modularizatio ..."
Abstract

Cited by 15 (5 self)
Ontology selection is crucial to support knowledge reuse on the ever-increasing Semantic Web. However, applications that rely on reusing existing knowledge often require only relevant parts of existing ontologies rather than entire ontologies. In this paper we investigate how modularization can be integrated with ontology selection techniques. Our contribution is twofold. On the one hand we extend a selection technique with a modularization component. On the other hand we design and implement a modularization algorithm which, unlike many existing approaches, is tightly integrated in a concrete tool.
First order LUB approximations: characterization and algorithms
 Artif. Intell
, 2005
"... One of the major approaches to approximation of logical theories is the upper and lower bounds approach introduced in (Selman and Kautz, 1991, 1996). In this paper, we address the problem of lowest upper bound (LUB) approximation in a general setting. We characterize LUB approximations for arbitrary ..."
Abstract

Cited by 14 (0 self)
One of the major approaches to approximation of logical theories is the upper and lower bounds approach introduced in (Selman and Kautz, 1991, 1996). In this paper, we address the problem of lowest upper bound (LUB) approximation in a general setting. We characterize LUB approximations for arbitrary target languages, both propositional and first order, and describe algorithms of varying generality and efficiency for all target languages, proving their correctness. We also examine some aspects of the computational complexity of the algorithms, both propositional and first order; show that they can be used to characterize properties of whole families of resolution procedures; discuss the quality of approximations; and relate LUB approximations to other approaches existing in the literature which are not typically seen in the approximation framework, and which go beyond the “knowledge compilation” perspective that led to the introduction of LUBs.
Mind the data skew: Distributed inferencing by speeddating in elastic regions
 In Proc. of the WWW
, 2010
"... Semantic Web data exhibits very skewed frequency distributions among terms. Efficient largescale distributed reasoning methods should maintain loadbalance in the face of such highly skewed distribution of input data. We show that termbased partitioning, used by most distributed reasoning approach ..."
Abstract

Cited by 12 (2 self)
Semantic Web data exhibits very skewed frequency distributions among terms. Efficient large-scale distributed reasoning methods should maintain load balance in the face of such highly skewed distribution of input data. We show that term-based partitioning, used by most distributed reasoning approaches, has limited scalability due to load-balancing problems. We address this problem with a method for data distribution based on clustering in elastic regions. Instead of assigning data to fixed peers, data flows semi-randomly in the network. Data items “speeddate” while being temporarily collocated in the same peer. We introduce a bias in the routing to allow semantically clustered neighborhoods to emerge. Our approach is self-organising, efficient and does not require any central coordination. We have implemented this method on the MaRVIN platform and have performed experiments on large real-world datasets, using a cluster of up to 64 nodes. We compute the RDFS closure over different datasets and show that our clustering algorithm drastically reduces computation time, calculating the RDFS closure of 200 million triples in 7.2 minutes.
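The load-balancing failure of term-based partitioning is easy to reproduce. The sketch below (illustrative only, not MaRVIN code) routes each triple to the peer that owns its predicate term; because rdf:type dominates real Web data, one peer absorbs almost all the work no matter how many peers exist:

```python
import zlib
from collections import Counter

def term_partition_load(triples, n_peers):
    """Assign each triple to the peer owning its predicate term and
    report per-peer triple counts, exposing skew for popular terms."""
    load = Counter()
    for s, p, o in triples:
        peer = zlib.crc32(p.encode()) % n_peers  # deterministic term hash
        load[peer] += 1
    return load

# Zipf-like input: one predicate accounts for 90% of the triples.
triples = [(f"s{i}", "rdf:type", "Thing") for i in range(90)]
triples += [(f"s{i}", "ex:knows", f"o{i}") for i in range(10)]
load = term_partition_load(triples, n_peers=4)
# The peer owning "rdf:type" receives at least 90 of the 100 triples.
```

Elastic regions sidestep this by never pinning a term to a peer: data keeps moving, and only a routing bias makes related items meet.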
Knowledge Representation and Classical Logic
"... Mathematical logicians had developed the art of formalizing declarative knowledge long before the advent of the computer age. But they were interested primarily in formalizing mathematics. Because of the important role of nonmathematical knowledge in AI, their emphasis was too narrow from the perspe ..."
Abstract

Cited by 10 (4 self)
Mathematical logicians had developed the art of formalizing declarative knowledge long before the advent of the computer age. But they were interested primarily in formalizing mathematics. Because of the important role of non-mathematical knowledge in AI, their emphasis was too narrow from the perspective of knowledge representation: their formal languages were not sufficiently expressive. On the other hand, most logicians were not concerned about the possibility of automated reasoning; from the perspective of knowledge representation, they were often too generous in the choice of syntactic constructs. In spite of these differences, classical mathematical logic has exerted significant influence on knowledge representation research, and it is appropriate to begin this handbook with a discussion of the relationship between these fields. The language of classical logic that is most widely used in the theory of knowledge representation is the language of first-order (predicate) formulas. These are the formulas that John McCarthy proposed to use for representing declarative knowledge in his advice taker paper [176], and Alan Robinson proposed to prove automatically using resolution [236]. Propositional logic is, of course, the most important subset of first-order logic; recent
Approximation algorithms for treewidth
, 2002
"... Abstract. This paper presents algorithms whose input is an undirected graph, and whose output is a tree decomposition of width that approximates the optimal, the treewidth of that graph. The algorithms differ in their computation time and their approximation guarantees. The first algorithm works in ..."
Abstract

Cited by 6 (0 self)
This paper presents algorithms whose input is an undirected graph, and whose output is a tree decomposition whose width approximates the optimum, the treewidth of that graph. The algorithms differ in their computation time and their approximation guarantees. The first algorithm works in polynomial time and finds a factor-O(log OPT) approximation, where OPT is the treewidth of the graph. This is the first polynomial-time algorithm that approximates the optimum by a factor that does not depend on n, the number of nodes in the input graph. As a result, we get an algorithm for finding pathwidth within a factor of O(log OPT · log n) from the optimal. We also present algorithms that approximate the treewidth of a graph by constant factors of 3.66, 4, and 4.5, and take time that is exponential in the treewidth. These are more efficient than previously known algorithms by an exponential factor, and are of practical interest. Finding triangulations of minimum treewidth for graphs is central to many problems in computer science. Real-world problems in artificial intelligence, VLSI design and databases are efficiently solvable if we have an efficient approximation algorithm for them. Many of those applications rely on weighted graphs. We extend our results to weighted graphs and weighted treewidth, showing similar approximation results for this more general notion. We report on experimental results confirming the effectiveness of our algorithms for large
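For reference, the quantities this abstract talks about are simple to check on a candidate decomposition: its width is the largest bag size minus one, and a minimal sketch of the edge-cover condition follows (full validity also requires the connectedness condition on bags containing a given vertex, omitted here):

```python
def width(bags):
    """Width of a tree decomposition: size of the largest bag, minus one."""
    return max(len(bag) for bag in bags) - 1

def is_valid_cover(edges, bags):
    """Edge-cover condition: every graph edge lies entirely in some bag."""
    return all(any(u in bag and v in bag for bag in bags)
               for u, v in edges)

# A 4-cycle a-b-c-d-a has treewidth 2; bags {a,b,c} and {a,c,d} achieve it.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
bags = [{"a", "b", "c"}, {"a", "c", "d"}]
print(width(bags), is_valid_cover(edges, bags))  # 2 True
```

Verifying a decomposition is trivial; the hard part, which the paper's algorithms address, is constructing one whose width is close to the (NP-hard to compute) optimum.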