Results 1 - 10 of 389
Benchmarking Ontologies: Bigger or Better?
"... A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the ..."
Towards Composing and Benchmarking Ontology Alignments
- In Proc. ISWC-2003 workshop on semantic information integration, Sanibel Island (FL, US)
"... ding query wrappers which rewrite queries for reaching a particular source. The ontology alignment problem can be described in one sentence: given two ontologies, each of which describes a set of discrete entities (which can be classes, properties, rules, predicates, etc.), find the relationships (e.g., ..."
Benchmarking Ontology-based Query Rewriting Systems
"... Query rewriting is a prominent reasoning technique in ontology-based data access applications. A wide variety of query rewriting algorithms have been proposed in recent years and implemented in highly optimised reasoning systems. Query rewriting systems are complex software programs; even if based o ..."
"... that most publicly available query rewriting systems are unsound and/or incomplete, even on commonly used benchmark ontologies; more importantly, our techniques revealed the precise causes of their correctness issues and the systems were then corrected based on our feedback. Finally, since our evaluation ..."
Benchmarking ontology-based annotation tools for the semantic web
- In UK e-Science Programme All Hands Meeting (AHM2005) Workshop Text Mining, e-Research and Grid-enabled Language Technology, 2005
"... This paper discusses and explores the main issues for evaluating ontology-based annotation tools, a key component in text mining applications for the Semantic Web. Semantic annotation and ontology-based information extraction technologies form the cornerstone of such applications. There has been a gr ..."
Cited by 6 (0 self)
Benchmarking ontology tools. A case study for the WebODE platform
"... As the Semantic Web grows the number of tools that support it increases, and a new need arises: the assessment of these tools in order to analyse whether they can deal with actual and future performance requirements. In order to evaluate ontology tools' performance, the development and use of bench ..."
"... of benchmark suites for these tools is needed. In this paper we describe the design and execution of a benchmark suite for assessing the performance of the WebODE ontology engineering workbench."
OntoDBench: Interactively Benchmarking Ontology Storage in a Database
"... Nowadays, all ingredients are available for developing domain ontologies. This is due to the presence of various types of methodologies for creating domain ontologies [3]. The adoption of ontologies by real life applications generates mountains of ontological data that need techniques and tools to f ..."
LUBM: A benchmark for OWL knowledge base systems
- Semantic Web Journal, 2005
"... We describe our method for benchmarking Semantic Web knowledge base systems with respect to use in large OWL applications. We present the Lehigh University Benchmark (LUBM) as an example of how to design such benchmarks. The LUBM features an ontology for the university domain, synthetic OWL data sca ..."
Cited by 378 (10 self)
Ontology matching benchmarks:
- 2013
"... The OAEI Benchmark test set has been used for many years as a main reference to evaluate and compare ontology matching systems. However, this test set has barely varied since 2004 and has become a relatively easy task for matchers. In this paper, we present the design of a flexible test generator ba ..."
Ontology matching benchmarks: generation and evaluation
"... The OAEI Benchmark data set has been used as a main reference to evaluate and compare matching systems. It requires matching an ontology with systematically modified versions of itself. However, it has two main drawbacks: it has not varied since 2004 and it has become a relatively easy tas ..."
Cited by 6 (0 self)
Benchmarking Reasoners for Multi-Ontology Applications
"... We describe an approach to create a synthetic workload for large scale extensional query answering experiments. The workload comprises multiple interrelated domain ontologies, data sources which commit to these ontologies, synthetic queries and map ontologies that specify a graph over the ..."
Cited by 3 (3 self)