Results 1 - 10 of 30
QoS-Assured Service Composition in Managed Service Overlay Networks
- ICDCS 2003
Cited by 76 (15 self)
Many value-added and content delivery services are being offered via service level agreements (SLAs). These services can be interconnected to form a service overlay network (SON) over the Internet. Service composition in SONs has emerged as a cost-effective approach to quickly creating new services. Previous research has addressed the reliability, adaptability, and compatibility issues for composed services. However, little has been done to manage generic quality-of-service (QoS) provisioning for composed services based on the SLA contracts of individual services. In this paper, we present QUEST, a QoS assUred composEable Service infrasTructure, to address the problem. The QUEST framework provides: (1) initial service composition, which can compose a qualified service path under multiple QoS constraints (e.g., response time, availability). If multiple qualified service paths exist, QUEST chooses the best one according to a load-balancing metric; and (2) dynamic service composition, which can dynamically recompose the service path to quickly recover from service outages and QoS violations. Unlike previous work, QUEST can simultaneously achieve QoS assurances and good load balancing in SONs.
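The selection step the abstract describes, checking candidate service paths against an additive constraint (response time) and a multiplicative one (availability), then breaking ties by a load-balancing metric, can be sketched as follows. This is an illustrative reconstruction, not QUEST's actual algorithm; the field names and the peak-load tiebreak are assumptions.

```python
# Hypothetical sketch of multi-constrained service path selection.
# Each service is a dict with "delay" (additive), "avail"
# (multiplicative), and "load" (for the load-balancing tiebreak).

def feasible(path, max_delay, min_avail):
    """Check a candidate path against both QoS constraints."""
    delay = sum(s["delay"] for s in path)       # response time adds up
    avail = 1.0
    for s in path:
        avail *= s["avail"]                     # availability multiplies
    return delay <= max_delay and avail >= min_avail

def choose_path(candidates, max_delay, min_avail):
    """Among qualified paths, pick the one with the lowest peak
    per-service load (one simple load-balancing metric)."""
    qualified = [p for p in candidates if feasible(p, max_delay, min_avail)]
    if not qualified:
        return None
    return min(qualified, key=lambda p: max(s["load"] for s in p))
```

A path qualifies only if its total delay and its product of availabilities both satisfy the constraints; among qualified paths, the one touching the least-loaded services wins.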
SpiderNet: An Integrated Peer-to-Peer Service Composition Framework, 2004
Cited by 64 (7 self)
Service composition is highly desirable in peer-to-peer (P2P) systems, where application services are naturally dispersed on distributed peers. However, it is challenging to provide high-quality and failure-resilient service composition in P2P systems due to the decentralization requirement and dynamic peer arrivals/departures. In this paper, we present an integrated P2P service composition framework called SpiderNet to address the challenges. At the service setup phase, SpiderNet performs a novel bounded composition probing protocol to provide scalable, quality-aware, and resource-efficient service composition in a fully distributed fashion. Moreover, SpiderNet supports directed acyclic graph composition topologies and explores exchangeable composition orders for enhanced service quality. During service runtime, SpiderNet provides proactive failure recovery to overcome dynamic changes (e.g., peer departures) in P2P systems. The proactive failure recovery scheme maintains a small number of dynamically selected backup compositions to achieve quick failure recovery for soft real-time streaming applications. We have implemented a prototype of SpiderNet and conducted extensive experiments using both large-scale simulations and a wide-area network testbed. Experimental results show the feasibility and efficiency of the SpiderNet service composition solution for P2P systems.
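The bounded composition probing idea, keeping only a limited number of candidate compositions alive at each service hop, can be approximated centrally as a beam search. This sketch is a stand-in for SpiderNet's distributed probe protocol, not the protocol itself; the stage/cost structures and the budget parameter are hypothetical.

```python
# Centralized approximation of bounded composition probing: at each
# service stage, carry forward at most `budget` partial compositions,
# ranked by accumulated cost (the real protocol bounds the number of
# probe messages forwarded hop by hop).

def bounded_probe(stages, budget):
    """stages: list of stages; each stage is a list of (peer, cost)
    options. Returns the cheapest (peers, cost) composition found
    within the probing budget, or None."""
    partials = [([], 0.0)]  # (chosen peers, accumulated cost)
    for options in stages:
        expanded = [(peers + [peer], cost + c)
                    for peers, cost in partials
                    for peer, c in options]
        expanded.sort(key=lambda pc: pc[1])
        partials = expanded[:budget]  # bound the number of live probes
    return partials[0] if partials else None
```

A smaller budget means fewer probes (less overhead) but a higher chance of missing the globally cheapest composition.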
Distributed multimedia service composition with statistical QoS assurances
- IEEE Transactions on Multimedia, 2005
Cited by 41 (7 self)
Service composition allows future multimedia services to be automatically composed from atomic service components based on the user's dynamic service requirements. Previous work falls short for distributed multimedia service composition in terms of scalability, flexibility, and quality-of-service (QoS) management. In this paper, we present a fully decentralized service composition framework, called SpiderNet, to address the challenges. SpiderNet provides statistical multi-constrained QoS assurances and load balancing for service composition. SpiderNet supports directed acyclic graph composition topologies and exchangeable composition orders. We have implemented a prototype of SpiderNet and conducted experiments on both wide-area networks and a simulation testbed. Our experimental results show the feasibility and efficiency of the SpiderNet service composition framework.
A Scalable QoS-Aware Service Aggregation Model for Peer-to-Peer Computing Grids, 2002
Cited by 37 (9 self)
Peer-to-peer (P2P) computing grids consist of peer nodes that communicate directly among themselves through wide-area networks and can act as both clients and servers. These systems have drawn much research attention since they promote Internet-scale resource and service sharing without any administration cost or centralized infrastructure support. However, aggregating different application services into high-performance distributed application delivery in such systems is challenging due to the presence of dynamic performance information, arbitrary peer arrivals/departures, and the systems' scalability requirements. In this paper, we propose a scalable QoS-aware service aggregation model to address the challenges. The model includes two tiers: (1) an on-demand service composition tier, which is responsible for choosing and composing different application services into a service path satisfying the user's quality requirements; and (2) a dynamic peer selection tier, which decides the specific peers where the chosen services are actually instantiated, based on dynamic, composite, and distributed performance information. The model is designed and implemented in a fully distributed and self-organizing fashion. Finally, we show through large-scale simulations that the proposed model and algorithms can achieve better performance than common heuristic algorithms.
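The two-tier split, first fixing a functional service path and then instantiating each service on a concrete peer chosen by current load, might look like this toy sketch. Peer names, service sets, and load values are invented for illustration and are not from the paper.

```python
# Toy second tier: given a functional service path chosen by the
# composition tier, instantiate each service on the least-loaded peer
# that offers it.

def select_peers(service_path, peers):
    """peers: dict peer -> {"services": set of offered services,
    "load": current load}. Returns one (service, peer) pair per
    service, or None if some service has no provider."""
    mapping = []
    for svc in service_path:
        candidates = [p for p, info in peers.items()
                      if svc in info["services"]]
        if not candidates:
            return None  # composition fails: no peer offers this service
        mapping.append((svc, min(candidates,
                                 key=lambda p: peers[p]["load"])))
    return mapping
```

Separating "which services" from "which peers" lets the peer choice react to dynamic load without recomputing the functional composition.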
Adaptive Offloading for Pervasive Computing
Cited by 28 (2 self)
Pervasive computing allows a user to access an application on heterogeneous devices continuously and consistently. However, it is challenging to deliver complex applications on resource-constrained mobile devices such as cell phones. Application-based or system-based adaptations have been proposed to address the problem, but they often require application fidelity to be significantly degraded. We believe that this problem can be overcome by dynamically partitioning the application and offloading part of the application execution, together with its data, to a powerful nearby surrogate. This allows the application to be delivered in a pervasive computing environment without significant fidelity degradation or expensive application rewriting. Runtime offloading needs to adapt to different application execution patterns and resource fluctuations in the pervasive computing environment. Hence, we have developed an offloading inference engine to adaptively solve two key decision-making problems in runtime offloading: (1) timely triggering of offloading, and (2) efficient partitioning of applications. Both trace-driven simulations and prototype experiments show the effectiveness of the adaptive offloading system.
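The two decision problems, when to trigger offloading and which part of the application to move, can be illustrated with a deliberately simple policy. The sustained-threshold rule and the freed-memory-per-migration-cost ratio below are assumptions made for this sketch, not the paper's inference engine.

```python
# Illustrative offloading decisions (hypothetical policy, not the
# paper's engine).

def should_offload(mem_samples, threshold=0.9):
    """Trigger only if every recent memory-utilization sample exceeds
    the threshold, so transient spikes do not cause offloading."""
    return len(mem_samples) > 0 and all(m > threshold for m in mem_samples)

def pick_partition(components):
    """components: list of (name, mem_freed, migration_cost) tuples.
    Greedily offload the component with the best freed-memory to
    migration-cost ratio."""
    return max(components, key=lambda c: c[1] / c[2])[0]
```

A real engine would weigh bandwidth to the surrogate and interaction frequency between components as well; this only shows the shape of the two decisions.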
Adaptive Offloading Inference for Delivering Applications in Pervasive Computing Environments
- Proc. of IEEE International Conference on Pervasive Computing and Communications (PerCom 2003), Dallas-Fort, 2003
Cited by 26 (2 self)
Pervasive computing allows a user to access an application on heterogeneous devices continuously and consistently. However, it is challenging to deliver complex applications on resource-constrained mobile devices, such as cell phones and PDAs. Different approaches, such as application-based or system-based adaptations, have been proposed to address the problem. However, existing solutions often require degrading application fidelity. We believe that this problem can be overcome by dynamically partitioning the application and offloading part of the application execution to a powerful nearby surrogate. This will enable pervasive application delivery to be realized without significant fidelity degradation or expensive application rewriting. Because pervasive computing environments are highly dynamic, the runtime offloading system needs to adapt to both application execution patterns and resource fluctuations. Using the Fuzzy Control model, we have developed an offloading inference engine to adaptively solve two key decision-making problems during runtime offloading: (1) timely triggering of adaptive offloading, and (2) intelligent selection of an application partitioning policy. Extensive trace-driven evaluations show the effectiveness of the offloading inference engine.
PAC: Pattern-driven Application Consolidation for Efficient Cloud Computing
- in Proc. of MASCOTS, 2010
Cited by 22 (5 self)
To reduce cloud system resource cost, application consolidation is a must. In this paper, we present a novel pattern-driven application consolidation (PAC) system to achieve efficient resource sharing in virtualized cloud computing infrastructures. PAC employs signal processing techniques to dynamically discover significant patterns, called signatures, of different applications and hosts. PAC then performs dynamic application consolidation based on the extracted signatures. We have implemented a prototype of the PAC system on top of the Xen virtual machine platform and tested it on the NCSU Virtual Computing Lab using RUBiS benchmarks, Hadoop data processing systems, and the IBM System S stream processing system. Our experiments show that 1) PAC can efficiently discover repeating resource usage patterns in the tested applications; 2) signatures can reduce resource prediction errors by 50-90% compared to traditional coarse-grained schemes; and 3) PAC can improve application performance by up to 50% when running a large number of applications on a shared cluster.
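Signature extraction via signal processing can be illustrated with a simple autocorrelation-based period detector: find the dominant repeating period in a resource-usage trace, then average one period's worth of samples into a signature. PAC's actual technique may differ, and the trace values here are made up.

```python
# Pure-Python sketch of pattern discovery in a resource-usage trace.

def dominant_period(trace, min_lag=2):
    """Return the lag (candidate period) with the highest
    autocorrelation of the mean-centered trace."""
    n = len(trace)
    mean = sum(trace) / n
    dev = [x - mean for x in trace]

    def autocorr(lag):
        return sum(dev[i] * dev[i + lag] for i in range(n - lag))

    return max(range(min_lag, n // 2), key=autocorr)

def signature(trace, period):
    """Average one period's worth of samples across full repetitions,
    yielding a compact per-period usage signature."""
    reps = len(trace) // period
    return [sum(trace[r * period + i] for r in range(reps)) / reps
            for i in range(period)]
```

Consolidation can then place applications whose signatures peak at different offsets onto the same host.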
A decision-theoretic planner with dynamic component reconfiguration for distributed real-time applications
- in Proceedings of the 21st National Conference on Artificial Intelligence, 2006
Cited by 17 (10 self)
Middleware is increasingly being used to develop and deploy components in large-scale distributed real-time and embedded (DRE) systems, such as the proposed NASA sensor web composed of networked remote-sensing satellites and atmospheric, oceanic, and terrestrial sensors. Such a system must perform sequences of autonomous coordination and heterogeneous data manipulation tasks to meet specified goals. For example, accurate weather prediction requires multiple satellites that fly coordinated missions to collect and analyze large quantities of atmospheric and earth surface data. The efficacy and utility of the task sequences are governed by dynamic factors, such as data analysis results, changing goals and priorities, and uncertainties due to changing environmental conditions. One way to implement task sequences in DRE systems is to use component middleware (Heineman & Councill 2001), which automates remoting, lifecycle management, system resource management, and deployment and configuration. In large DRE systems, the sheer number of available components often poses a combinatorial planning problem for identifying component sequences to achieve specified goals. Moreover, the dynamic nature of these systems requires runtime management and modification of deployed components. To support such DRE systems, we have developed a novel, computationally efficient algorithm called the Spreading Activation Partial Order Planner (SA-POP) for dynamic (re)planning under uncertainty. Prior research (Srivastava & Kambhampati 1999) identified scaling limitations in earlier AI approaches that combine planning and resource allocation/scheduling in one computational algorithm.
To address this problem, we combined SA-POP with a Resource Allocation and Control Engine (RACE), a reusable component middleware framework that separates resource allocation and control algorithms from the underlying middleware deployment, configuration, and control mechanisms to enforce quality of service (QoS) requirements.
On Construction of Service Multicast Trees
- in Proc. of IEEE International Conference on Communications (ICC 2003), 2003
Cited by 9 (4 self)
Internet heterogeneity has been a major problem in multimedia data delivery. To deal with the problem, overlay proxy networks, as well as distributed and composable services across these overlay networks, are being deployed. This solution, however, implies that the overlay networks must support not only data multicast for data delivery to a group of destinations, but also service multicast (incorporating services in the distribution tree) for semantic data transformations in order to deal with Internet heterogeneity. This paper presents challenges and solutions for building service multicast trees. We compare two groups of algorithms, the shortest-service-path-tree (SSPT) algorithm and the longest-match (LM) algorithm. Simulation results show trade-offs between complexity and overall tree performance, as well as cost differences when further refinements of the LM approach are considered.
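The longest-match idea, attaching a new receiver to the existing tree node whose already-applied service chain shares the longest usable prefix with the receiver's required chain so that shared transformations are computed once, can be sketched as follows. Node names and service chains are hypothetical, and this is a simplification of the LM algorithm, not its published form.

```python
# Sketch of longest-match (LM) attachment in a service multicast tree.

def longest_match(tree_nodes, required_chain):
    """tree_nodes: dict node -> tuple of services already applied on
    the path from the source to that node. Returns (attach node,
    services still to apply), or None if no node is usable."""
    def prefix_len(applied):
        k = 0
        while (k < len(applied) and k < len(required_chain)
               and applied[k] == required_chain[k]):
            k += 1
        # A node is reusable only if its WHOLE chain is a prefix of the
        # required chain; otherwise its data is transformed incompatibly.
        return k if k == len(applied) else -1

    best = max(tree_nodes, key=lambda n: prefix_len(tree_nodes[n]))
    k = prefix_len(tree_nodes[best])
    if k < 0:
        return None
    return best, list(required_chain[k:])
```

The source node (empty chain) is always a fallback, so a receiver with an unshared chain simply gets a fresh branch from the source.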