Results 1-10 of 322
A survey of peer-to-peer content distribution technologies - ACM Computing Surveys, 2004
"... Distributed computer architectures labeled “peer-to-peer ” are designed for the sharing of computer resources (content, storage, CPU cycles) by direct exchange, rather than requiring the intermediation or support of a centralized server or authority. Peer-to-peer architectures are characterized by t ..."
Abstract
-
Cited by 378 (7 self)
- Add to MetaCart
Distributed computer architectures labeled “peer-to-peer” are designed for the sharing of computer resources (content, storage, CPU cycles) by direct exchange, rather than requiring the intermediation or support of a centralized server or authority. Peer-to-peer architectures are characterized by their ability to adapt to failures and accommodate transient populations of nodes while maintaining acceptable connectivity and performance. Content distribution is an important peer-to-peer application on the Internet that has received considerable research attention. Content distribution applications typically allow personal computers to function in a coordinated manner as a distributed storage medium by contributing, searching, and obtaining digital content. In this survey, we propose a framework for analyzing peer-to-peer content distribution technologies. Our approach focuses on nonfunctional characteristics such as security, scalability, performance, fairness, and resource management potential, and examines the way in which these characteristics are reflected in, and affected by, the architectural design decisions adopted by current peer-to-peer systems. We study current peer-to-peer systems and infrastructure technologies in terms of their distributed object location and routing mechanisms, their approach to content replication, caching and migration, their support for encryption, access control, authentication and identity, anonymity, deniability, accountability and reputation, and their use of resource trading and management schemes.
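Since the survey organizes systems around their distributed object location and routing mechanisms, a minimal consistent-hashing sketch (the location scheme underlying the structured systems such a survey covers) may help fix ideas. Everything below, from the 32-bit key space to the node names, is an illustrative assumption rather than anything from the paper:

```python
import hashlib
from bisect import bisect_right

def ring_hash(key: str, bits: int = 32) -> int:
    """Map a key onto a 2^bits identifier ring via SHA-1."""
    digest = hashlib.sha1(key.encode()).digest()
    return int.from_bytes(digest, "big") % (1 << bits)

class Ring:
    def __init__(self, node_names):
        # Each node is placed on the ring at the hash of its name.
        self.nodes = sorted((ring_hash(n), n) for n in node_names)

    def locate(self, object_key: str) -> str:
        """An object lives on the first node at or after its hash
        (wrapping around), so any peer can compute the owner locally,
        with no central index or authority."""
        h = ring_hash(object_key)
        points = [p for p, _ in self.nodes]
        i = bisect_right(points, h) % len(self.nodes)
        return self.nodes[i][1]

ring = Ring(["peer-a", "peer-b", "peer-c", "peer-d"])
print(ring.locate("some-shared-file.mp3"))  # deterministic owner
```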
RDFPeers: A Scalable Distributed RDF Repository Based on a Structured Peer-to-Peer Network, 2004
"... Centralized Resource Description Framework (RDF) repositories have limitations both in their failure tolerance and in their scalability. Existing Peer-to-Peer (P2P) RDF repositories either cannot guarantee to find query results, even if these results exist in the network, or require up-front definit ..."
Abstract
-
Cited by 169 (2 self)
- Add to MetaCart
Centralized Resource Description Framework (RDF) repositories have limitations both in their failure tolerance and in their scalability. Existing Peer-to-Peer (P2P) RDF repositories either cannot guarantee to find query results, even if these results exist in the network, or require up-front definition of RDF schemas and designation of super peers. We present a scalable distributed RDF repository ("RDFPeers") that stores each triple at three places in a multi-attribute addressable network by applying globally known hash functions to its subject, predicate, and object. Thus, all nodes know which node is responsible for storing the triple values they are looking for, and both exact-match and range queries can be efficiently routed to those nodes. RDFPeers has neither a single point of failure nor elevated peers, and does not require the prior definition of RDF schemas. Queries are guaranteed to find matching triples in the network if the triples exist. In RDFPeers, both the number of neighbors per node and the number of routing hops for inserting RDF triples and for resolving most queries are logarithmic in the number of nodes in the network. Our experiments further show that the triple-storing load in RDFPeers differs by less than an order of magnitude between the most and the least loaded nodes for real-world RDF data.
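To make the placement scheme concrete, here is a minimal sketch of RDFPeers-style triple storage under stated assumptions: a plain dictionary stands in for the multi-attribute addressable network, the key space is 32 bits, and all function names are invented:

```python
import hashlib

KEY_BITS = 32

def h(value: str) -> int:
    """A globally known hash function mapping a value into the key space."""
    return int.from_bytes(hashlib.sha1(value.encode()).digest(), "big") % (1 << KEY_BITS)

def store_triple(network, subject, predicate, obj):
    """Each triple is stored three times, keyed by its subject,
    predicate, and object, so a query that fixes any one position
    can be routed directly to the responsible node."""
    triple = (subject, predicate, obj)
    for key in (h(subject), h(predicate), h(obj)):
        network.setdefault(key, []).append(triple)

def query_by_predicate(network, predicate):
    # Exact-match lookup: any node can compute where the triples live.
    return [t for t in network.get(h(predicate), []) if t[1] == predicate]

net = {}
store_triple(net, "ex:alice", "foaf:knows", "ex:bob")
print(query_by_predicate(net, "foaf:knows"))
```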
Piazza: Data Management Infrastructure for Semantic Web Applications, 2003
"... The Semantic Web envisions a World Wide Web in which data is described with rich semantics and applications can pose complex queries. To this point, researchers have defined new languages for specifying meanings for concepts and developed techniques for reasoning about them, using RDF as the data mo ..."
Abstract
-
Cited by 166 (12 self)
- Add to MetaCart
(Show Context)
The Semantic Web envisions a World Wide Web in which data is described with rich semantics and applications can pose complex queries. To this point, researchers have defined new languages for specifying meanings for concepts and developed techniques for reasoning about them, using RDF as the data model. To flourish, the Semantic Web needs to be able to accommodate the huge amounts of existing data and the applications operating on them. To achieve this, we are faced with two problems. First, most of the world's data is available not in RDF but in XML; XML and the applications consuming it rely not only on the domain structure of the data, but also on its document structure. Hence, to provide interoperability between such sources, we must map between both their domain structures and their document structures. Second, data management practitioners often prefer to exchange data through local point-to-point data translations, rather than mapping to common mediated schemas or ontologies.
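The point-to-point translations Piazza favors can be pictured with a toy example: a direct mapping between two hypothetical XML schemas that restructures both the domain terms and the document structure. The element names and the mapping itself are invented for illustration:

```python
import xml.etree.ElementTree as ET

SOURCE_A = "<book><title>Piazza</title><writer>Halevy</writer></book>"

def translate_a_to_b(xml_text: str) -> str:
    """Map schema A (<book><title/><writer/>) onto schema B
    (<publication><name/><author/>), renaming the domain terms and
    rebuilding the document structure in one local translation."""
    src = ET.fromstring(xml_text)
    dst = ET.Element("publication")
    ET.SubElement(dst, "name").text = src.findtext("title")
    ET.SubElement(dst, "author").text = src.findtext("writer")
    return ET.tostring(dst, encoding="unicode")

print(translate_a_to_b(SOURCE_A))
# <publication><name>Piazza</name><author>Halevy</author></publication>
```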
From Adaptive Hypermedia to the Adaptive Web - Communications of the ACM, 2002
"... hypertext in early 1990, it now attracts many researchers from different communities such as hypertext, user modeling, machine learning, natural language generation, information retrieval, intelligent tutoring systems, cognitive science, and Web-based education. model, an adaptable system requir ..."
Abstract
-
Cited by 145 (4 self)
- Add to MetaCart
Since the first works on adaptive hypertext in the early 1990s, the field now attracts many researchers from different communities such as hypertext, user modeling, machine learning, natural language generation, information retrieval, intelligent tutoring systems, cognitive science, and Web-based education. Whereas an adaptive system builds a user model and adapts automatically, an adaptable system requires the user to specify exactly how the system should be different, for example, tailoring the sports section to provide information about a favorite team [9]. In different kinds of adaptive systems, adaptation effects can be realized in a variety of ways. Adaptive hypermedia and Web systems are essentially collections of connected information items that allow users to navigate from one item to another and search for relevant items. The adaptation effect in this reasonably rigid context is limited to three major adaptation technologies: adaptive content selection, adaptive navigation support, and adaptive presentation. When the user searches for relevant information, the system can adaptively select and prioritize the most relevant items.
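Of the three technologies, adaptive navigation support is perhaps the easiest to picture in code. The sketch below shows classic traffic-light link annotation driven by a per-user knowledge model; the concepts, thresholds, and data structures are invented, not taken from the article:

```python
# Per-user knowledge model: concept -> estimated mastery in [0, 1].
user_knowledge = {"html-basics": 0.9, "css": 0.4, "javascript": 0.0}

# Each page lists the concepts it presupposes.
pages = {
    "Intro to CSS": ["html-basics"],
    "CSS Layout": ["css"],
    "Dynamic Pages": ["css", "javascript"],
}

def annotate_link(page: str) -> str:
    """Traffic-light annotation: a link is 'ready' when every
    prerequisite concept is sufficiently known, else 'not ready'."""
    prereqs = pages[page]
    if all(user_knowledge.get(c, 0.0) >= 0.5 for c in prereqs):
        return f"{page} [ready]"
    return f"{page} [not ready yet]"

for p in pages:
    print(annotate_link(p))
```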
Super-Peer-Based Routing and Clustering Strategies for RDF-Based Peer-to-Peer Networks, 2002
"... RDF-based P2P networks have a number of advantages compared with simpler P2P networks such as Napster, Gnutella or with approaches based on distributed indices such as CAN and CHORD. RDF-based P2P networks allow complex and extendable descriptions of resources instead of fixed and limited ones, and ..."
Abstract
-
Cited by 139 (23 self)
- Add to MetaCart
(Show Context)
RDF-based P2P networks have a number of advantages over simpler P2P networks such as Napster and Gnutella, and over approaches based on distributed indices such as CAN and Chord. RDF-based P2P networks allow complex and extensible descriptions of resources instead of fixed and limited ones, and they provide complex query facilities over this metadata instead of simple keyword-based searches.
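A rough sketch of what super-peer routing over RDF metadata could look like: a super-peer keeps an index from RDF predicates to the peers able to answer them, and forwards a query only to peers indexed under all of its predicates. The index contents and names are illustrative only:

```python
# Super-peer's routing index: predicate -> peers holding matching triples.
peer_index = {
    "dc:creator": {"peer-1", "peer-3"},
    "dc:subject": {"peer-2"},
    "rdf:type":   {"peer-1", "peer-2", "peer-3"},
}

def route_query(predicates):
    """Forward a conjunctive query only to peers indexed under every
    predicate it uses, instead of flooding the whole network."""
    candidates = None
    for p in predicates:
        peers = peer_index.get(p, set())
        candidates = peers if candidates is None else candidates & peers
    return candidates or set()

# Which peers can answer a query over dc:creator and rdf:type?
print(route_query(["dc:creator", "rdf:type"]))  # peers 1 and 3
```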
The Chatty Web: Emergent Semantics Through Gossiping, 2003
"... This paper describes a novel approach for obtaining semantic interoperability among data sources in a bottom-up, semi-automatic manner without relying on pre-existing, global semantic models. We assume that large amounts of data exist that have been organized and annotated according to local schemas ..."
Abstract
-
Cited by 122 (12 self)
- Add to MetaCart
This paper describes a novel approach for obtaining semantic interoperability among data sources in a bottom-up, semi-automatic manner without relying on pre-existing, global semantic models. We assume that large amounts of data exist that have been organized and annotated according to local schemas. Seeing semantics as a form of agreement, our approach enables the participating data sources to incrementally develop global agreement in an evolutionary and completely decentralized process that solely relies on pair-wise, local interactions: Participants provide translations between schemas they are interested in and can learn about other translations by routing queries (gossiping). To support the participants in assessing the semantic quality of the achieved agreements we develop a formal framework that takes into account both syntactic and semantic criteria. The assessment process is incremental and the quality ratings are adjusted along with the operation of the system. Ultimately, this process results in global agreement, i.e., the semantics that all participants understand. We discuss strategies to efficiently find translations and provide results from a case study to justify our claims. Our approach applies to any system which provides a communication infrastructure (existing websites or databases, decentralized systems, P2P systems) and offers the opportunity to study semantic interoperability as a global phenomenon in a network of information sharing parties.
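The gossiping idea can be sketched as composing pairwise translations along a path and checking, syntactically, how much of a query survives a cycle back to its origin. The schemas, attribute mappings, and quality criterion below are simplified illustrations, not the paper's formal framework:

```python
# Pairwise schema translations published by peers (invented examples).
translations = {
    ("A", "B"): {"author": "creator", "title": "name"},
    ("B", "C"): {"creator": "writer"},          # 'name' has no mapping here
    ("C", "A"): {"writer": "author"},
}

def translate_along(path, attrs):
    """Compose pairwise translations along a path of schemas;
    attributes without a mapping are lost (syntactic degradation)."""
    for hop in zip(path, path[1:]):
        mapping = translations[hop]
        attrs = {mapping[a] for a in attrs if a in mapping}
    return attrs

start = {"author", "title"}
round_trip = translate_along(["A", "B", "C", "A"], start)
# Only attributes preserved around the cycle are trusted:
print(start & round_trip)  # {'author'} -- 'title' was lost en route
```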
KnowledgeTree: A Distributed Architecture for Adaptive E-Learning - WWW 2004
"... This paper presents KnowledgeTree, an architecture for adaptive E-Learning based on distributed reusable intelligent learning activities. The goal of KnowledgeTree is to bridge the gap between the currently popular approach to Web-based education, which is centered on learning management systems vs. ..."
Abstract
-
Cited by 104 (20 self)
- Add to MetaCart
This paper presents KnowledgeTree, an architecture for adaptive E-Learning based on distributed reusable intelligent learning activities. The goal of KnowledgeTree is to bridge the gap between the currently popular approach to Web-based education, which is centered on learning management systems, and the powerful but underused technologies of intelligent tutoring and adaptive hypermedia. This integrative architecture attempts to address both the component-based assembly of adaptive systems and teacher-level reusability.
Towards Semantically-Interlinked Online Communities, 2005
"... Abstract. Online community sites have replaced the traditional means of keeping a community informed via libraries and publishing. At present, online communities are islands that are not interlinked. We describe dif-ferent types of online communities and tools that are currently used to build and su ..."
Abstract
-
Cited by 101 (37 self)
- Add to MetaCart
(Show Context)
Online community sites have replaced the traditional means of keeping a community informed via libraries and publishing. At present, online communities are islands that are not interlinked. We describe different types of online communities and tools that are currently used to build and support such communities. Ontologies and Semantic Web technologies offer an upgrade path to providing more complex services. Fusing information and inferring links between the various applications and types of information provides relevant insights that make the available information on the Internet more valuable. We present the SIOC ontology, which combines terms from vocabularies that already exist with new terms needed to describe the relationships between concepts in the realm of online community sites.
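As a flavor of what SIOC descriptions look like in practice, here is a small rdflib sketch. The SIOC namespace URI is the published one, but the specific resources and this particular modeling are illustrative assumptions:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

SIOC = Namespace("http://rdfs.org/sioc/ns#")

g = Graph()
forum = URIRef("http://example.org/forum/semweb")
post = URIRef("http://example.org/forum/semweb/post/1")
user = URIRef("http://example.org/user/alice")

g.add((forum, RDF.type, SIOC.Forum))
g.add((post, RDF.type, SIOC.Post))
g.add((post, SIOC.has_container, forum))   # the post lives in the forum
g.add((post, SIOC.has_creator, user))      # links content to its author
g.add((post, SIOC.content, Literal("Interlinking communities with SIOC")))

print(g.serialize(format="turtle"))
```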
Authoring and Annotation of Web Pages in CREAM, 2002
"... Richly interlinked, machine-understandable data constitute the basis for the Semantic Web. We provide a framework, CREAM, that allows for creation of metadata. While the annotation mode of CREAM allows to create metadata for existing web pages, the authoring mode lets authors create metadata --- ..."
Abstract
-
Cited by 101 (14 self)
- Add to MetaCart
Richly interlinked, machine-understandable data constitute the basis for the Semantic Web. We provide a framework, CREAM, for the creation of metadata. While the annotation mode of CREAM lets users create metadata for existing web pages, the authoring mode lets authors create metadata (almost for free) while putting together the content of a page. As a particularity of our framework, CREAM supports relational metadata, i.e., metadata that instantiate interrelated definitions of classes in a domain ontology, rather than a comparatively rigid template-like schema such as Dublin Core. We discuss some of the requirements one has to meet when developing such an ontology-based framework, e.g., the integration of a metadata crawler, inference services, document management, and a meta-ontology, and describe its implementation, viz. Ont-O-Mat, a component-based, ontology-driven Web page authoring and annotation tool.
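The contrast between template-like and relational metadata can be made concrete with a short rdflib sketch: one flat Dublin Core slot versus interrelated instances of (invented) domain-ontology classes:

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, DC

EX = Namespace("http://example.org/onto#")
g = Graph()

page = URIRef("http://example.org/pages/project.html")
author = URIRef("http://example.org/people/ann")
project = URIRef("http://example.org/projects/p1")

# Flat, template-like annotation: one literal per slot.
g.add((page, DC.creator, Literal("Ann Author")))

# Relational annotation: typed instances connected to each other,
# so the author and the project become first-class, queryable resources.
g.add((author, RDF.type, EX.Researcher))
g.add((project, RDF.type, EX.Project))
g.add((author, EX.worksOn, project))
g.add((page, EX.describes, project))

print(g.serialize(format="turtle"))
```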
GridVine: Building Internet-Scale Semantic Overlay Networks - International Semantic Web Conference, 2004
"... This paper addresses the problem of building scalable semantic overlay networks. Our approach follows the principle of data independence by separating a logical layer, the semantic overlay for managing and mapping data and metadata schemas, from a physical layer consisting of a structured peer-t ..."
Abstract
-
Cited by 96 (8 self)
- Add to MetaCart
This paper addresses the problem of building scalable semantic overlay networks. Our approach follows the principle of data independence by separating a logical layer, the semantic overlay for managing and mapping data and metadata schemas, from a physical layer consisting of a structured peer-to-peer overlay network for efficient routing of messages. The physical layer is used to implement various functions at the logical layer, including attribute-based search, schema management and schema mapping management. The separation of a physical from a logical layer allows us to process logical operations in the semantic overlay using different physical execution strategies. In particular we identify iterative and recursive strategies for the traversal of semantic overlay networks as two important alternatives. At the logical layer we support semantic interoperability through schema inheritance and semantic gossiping. Thus our system provides a complete solution to the implementation of semantic overlay networks supporting both scalability and interoperability.
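The difference between the two traversal strategies can be sketched over a toy overlay in which each node only knows its successor; all structures below are invented for illustration:

```python
# Toy overlay: each node knows only its successor.
overlay = {"n0": "n1", "n1": "n2", "n2": "n3", "n3": "n0"}
data_at = {"n2": "result"}

def iterative_lookup(start, hops=4):
    """Iterative: the querying node contacts each hop itself and is
    told whom to ask next, keeping control (and load) at the origin."""
    node = start
    for _ in range(hops):
        if node in data_at:
            return data_at[node], node
        node = overlay[node]  # origin asks this node for the next hop
    return None, node

def recursive_lookup(node, hops=4):
    """Recursive: each node forwards the query onward and relays the
    answer back, with fewer round-trips at the origin but more state
    held along the path."""
    if node in data_at:
        return data_at[node], node
    if hops == 0:
        return None, node
    return recursive_lookup(overlay[node], hops - 1)

print(iterative_lookup("n0"))  # ('result', 'n2')
print(recursive_lookup("n0"))  # ('result', 'n2')
```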