Results 1 - 10 of 102
Cloud Computing and Emerging IT Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility
, 2008
"... With the significant advances in Information and Communications Technology (ICT) over the last half century, there is an increasingly perceived vision that computing will one day be the 5th utility (after water, electricity, gas, and telephony). This computing utility, like all other four existing u ..."
Cited by 656 (63 self)
With the significant advances in Information and Communications Technology (ICT) over the last half century, there is an increasingly perceived vision that computing will one day be the 5th utility (after water, electricity, gas, and telephony). This computing utility, like the other four existing utilities, will provide the basic level of computing service that is considered essential to meet the everyday needs of the general community. To deliver this vision, a number of computing paradigms have been proposed, the latest of which is known as Cloud computing. Hence, in this paper, we define Cloud computing and provide the architecture for creating Clouds with market-oriented resource allocation by leveraging technologies such as Virtual Machines (VMs). We also provide insights on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain Service Level Agreement (SLA)-oriented resource allocation. In addition, we reveal our early thoughts on interconnecting Clouds for dynamically creating global Cloud exchanges and markets. Then, we present some representative Cloud platforms, especially those developed in industry, along with our current work towards realizing market-oriented resource allocation of Clouds as realized in the Aneka enterprise Cloud technology. Furthermore, we highlight the difference between High Performance Computing (HPC) workloads and Internet-based services workloads. We also describe a meta-negotiation infrastructure to establish global Cloud
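The SLA-oriented, market-based allocation sketched in this abstract can be illustrated with a minimal Python admission-control sketch. All names here (ServiceRequest, MarketOrientedBroker, the cost-plus-risk-margin test) are invented for illustration and are not the paper's Aneka implementation.

```python
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    vm_count: int          # VMs requested
    runtime_hours: float   # estimated runtime
    deadline_hours: float  # time until the SLA deadline
    budget: float          # what the customer is willing to pay

class MarketOrientedBroker:
    """Toy broker: admit a request only if spare capacity can meet its
    deadline and the offered budget covers cost plus a risk margin."""

    def __init__(self, free_vms: int, price_per_vm_hour: float, risk_margin: float = 0.2):
        self.free_vms = free_vms
        self.price = price_per_vm_hour
        self.risk_margin = risk_margin

    def admit(self, req: ServiceRequest) -> bool:
        if req.vm_count > self.free_vms:
            return False                       # not enough capacity to honour the SLA
        if req.runtime_hours > req.deadline_hours:
            return False                       # deadline cannot be met even if started now
        cost = req.vm_count * req.runtime_hours * self.price
        if req.budget < cost * (1 + self.risk_margin):
            return False                       # revenue does not cover cost plus risk buffer
        self.free_vms -= req.vm_count          # reserve capacity for the accepted request
        return True
```

A real broker would also rank competing requests by expected revenue and SLA penalty risk; this sketch only shows the accept/reject decision.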
Market-oriented cloud computing: Vision, hype, and reality for delivering IT services as computing utilities
- Department of Computer Science and Software Engineering (CSSE), The University of Melbourne, Australia
, 2008
"... This keynote paper: presents a 21 st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides ..."
Cited by 328 (21 self)
This keynote paper: presents a 21st century vision of computing; identifies various computing paradigms promising to deliver the vision of computing utilities; defines Cloud computing and provides the architecture for creating market-oriented Clouds by leveraging technologies such as VMs; provides thoughts on market-based resource management strategies that encompass both customer-driven service management and computational risk management to sustain SLA-oriented resource allocation; presents some representative Cloud platforms, especially those developed in industry, along with our current work towards realising market-oriented resource allocation of Clouds by leveraging the 3rd generation Aneka enterprise Grid technology; reveals our early thoughts on interconnecting Clouds for dynamically creating an atmospheric computing environment along with pointers to future community research; and concludes with the need for convergence of competing IT paradigms for delivering our 21st century vision.
Scientific Cloud Computing: Early Definition and Experience
- In 10th IEEE International Conference on High Performance Computing and Communications
, 2008
"... 2.2 Functional aspects of Cloud computing. 3 ..."
(Show Context)
Combining batch execution and leasing using virtual machines
- In HPDC ’08: Proceedings of the 17th international symposium on High performance distributed computing
, 2008
"... As cluster computers are used for a wider range of applications, we encounter the need to deliver resources at particular times, to meet particular deadlines, and/or at the same time as other resources are provided elsewhere. To address such requirements, we describe a scheduling approach in which u ..."
Cited by 83 (5 self)
As cluster computers are used for a wider range of applications, we encounter the need to deliver resources at particular times, to meet particular deadlines, and/or at the same time as other resources are provided elsewhere. To address such requirements, we describe a scheduling approach in which users request resource leases, where leases can specify either as-soon-as-possible (“best-effort”) or advance-reservation start times. We present the design of a lease management architecture, Haizea, that implements leases as virtual machines (VMs), leveraging their ability to suspend, migrate, and resume computations and to provide leased resources with customized application environments. We discuss methods to minimize the overhead introduced by having to deploy VM images before the start of a lease. We also present the results of simulation studies that compare alternative approaches. Using workloads with various mixes of best-effort and advance reservation requests, we compare the performance of our VM-based approach with that of non-VM-based schedulers. We find that a VM-based approach can provide better performance (measured in terms of both total execution time and average delay incurred by best-effort requests) than a scheduler that does not support task pre-emption, and only slightly worse performance than a scheduler that does support task pre-emption. We also compare the impact of different VM image popularity distributions and VM image caching strategies on performance. These results emphasize the importance of VM image caching for the workloads studied and quantify the sensitivity of scheduling performance to VM image popularity distribution.
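The lease semantics described above (best-effort versus advance-reservation starts, with preemption implemented by VM suspend/resume) can be illustrated with a toy Python sketch. This is not Haizea's code; the Lease and LeaseScheduler classes and their fields are assumptions made for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Lease:
    lease_id: int
    nodes: int
    kind: str    # "best-effort" or "advance-reservation"

class LeaseScheduler:
    """Toy lease manager: an advance reservation may preempt running
    best-effort leases by suspending their VMs to free up nodes."""

    def __init__(self, total_nodes: int):
        self.free = total_nodes
        self.running = []
        self.suspended = []

    def start(self, lease: Lease) -> bool:
        if lease.nodes <= self.free:
            self.free -= lease.nodes
            self.running.append(lease)
            return True
        if lease.kind == "advance-reservation":
            self._suspend_best_effort(lease.nodes - self.free)
            if lease.nodes <= self.free:
                self.free -= lease.nodes
                self.running.append(lease)
                return True
        return False   # best-effort leases simply wait in a queue (not shown)

    def _suspend_best_effort(self, needed: int) -> None:
        # Suspend running best-effort leases until 'needed' nodes are released.
        for lease in [l for l in self.running if l.kind == "best-effort"]:
            if needed <= 0:
                break
            self.running.remove(lease)
            self.suspended.append(lease)   # VM state is checkpointed to disk
            self.free += lease.nodes
            needed -= lease.nodes
```

The real system also models VM image deployment and suspend/resume overheads, which this sketch ignores.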
Virtual machine hosting for networked clusters: Building the foundations for ‘autonomic’ orchestration
- In Proc. VTDC ’06
, 2006
"... Virtualization technology offers powerful resource management mechanisms, including performance-isolating resource schedulers, live migration, and suspend/resume. But how should networked virtual computing systems use these mechanisms? A grand challenge is to devise practical policies to drive these ..."
Cited by 55 (8 self)
Virtualization technology offers powerful resource management mechanisms, including performance-isolating resource schedulers, live migration, and suspend/resume. But how should networked virtual computing systems use these mechanisms? A grand challenge is to devise practical policies to drive these mechanisms in a self-managing or “autonomic” system, without relying on human operators. This paper explores architectural and algorithmic issues for resource management policy and orchestration in Shirako, a system for on-demand leasing of shared networked resources in federated clusters. Shirako enables a flexible factoring of resource management functions across the participants in a federated system, to accommodate a range of models of distributed virtual computing. We present extensions to Shirako to provision fine-grained virtual machine “slivers” and drive virtual machine migration. We illustrate the interactions of provisioning and placement/migration policies, and their impact.
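One possible placement/migration policy of the kind discussed above can be sketched in a few lines of Python. The Sliver and Host types, the load threshold, and the smallest-sliver-first heuristic are assumptions for illustration, not Shirako's actual policy.

```python
from dataclasses import dataclass, field

@dataclass
class Sliver:
    name: str
    cpu_share: float    # fraction of one host's CPU allocated to this VM sliver

@dataclass
class Host:
    name: str
    vms: list = field(default_factory=list)

    def load(self) -> float:
        return sum(v.cpu_share for v in self.vms)

def rebalance(hosts, threshold=0.85):
    """If a host is over 'threshold', migrate its smallest sliver to the
    least-loaded host that still stays under the threshold."""
    moves = []
    for src in hosts:
        while src.load() > threshold and src.vms:
            vm = min(src.vms, key=lambda v: v.cpu_share)   # cheapest sliver to move
            candidates = [h for h in hosts
                          if h is not src and h.load() + vm.cpu_share <= threshold]
            if not candidates:
                break                                      # nowhere to put it
            dst = min(candidates, key=lambda h: h.load())
            src.vms.remove(vm)
            dst.vms.append(vm)
            moves.append((vm.name, src.name, dst.name))
    return moves
```

In an autonomic setting such a policy would run continuously and feed its decisions to the live-migration mechanism rather than return a list.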
Market-oriented Grids and Utility Computing: The state-of-the-art and future directions
, 2007
"... Traditional resource management techniques (resource allocation, admission control and scheduling) have been found to be inadequate for many shared Grid and distributed systems, that consist of autonomous and dynamic distributed resources contributed by multiple organisations. They provide no incen ..."
Cited by 50 (14 self)
Traditional resource management techniques (resource allocation, admission control and scheduling) have been found to be inadequate for many shared Grid and distributed systems that consist of autonomous and dynamic distributed resources contributed by multiple organisations. They provide no incentive for users to request resources judiciously and appropriately, and do not accurately capture the true value, importance and deadline (the utility) of a user’s job. Furthermore, they provide no compensation for resource providers to contribute their computing resources to shared Grids, as traditional approaches have a user-centric focus on maximising throughput and minimising waiting time rather than maximising a provider’s own benefit. Consequently, researchers and practitioners have been examining the appropriateness of ‘market-inspired’ resource management techniques to address these limitations. Such techniques aim to smooth out access patterns and reduce the chance of transient overload, by providing a framework for users to be truthful about their resource requirements and job deadlines, and offering incentives for service providers to prioritise urgent, high-utility jobs over low-utility jobs. We examine the recent innovations in these systems (from 2000-2007), looking at the state of the art in price setting and negotiation, grid economy management and utility-driven scheduling and resource allocation, and identify the advantages and limitations of these systems. We then look to the future of these systems, examining the emerging ‘Catallaxy’ market paradigm. Finally we consider the future directions that need to be pursued to address the limitations of the current generation of market-oriented Grids and Utility Computing systems.
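The utility-driven scheduling idea surveyed here can be illustrated with a small Python sketch: a job's declared value decays once its deadline is missed, and the provider orders jobs by value per unit of requested CPU time. The function names, the linear penalty, and the 'value'/'cpu_hours' fields are assumptions for illustration, not a specific system from the survey.

```python
def job_utility(base_value: float, deadline: float, finish_time: float,
                penalty_rate: float) -> float:
    """Toy utility: full value if the job finishes by its deadline, with value
    decaying linearly (possibly below zero) for each unit of lateness."""
    lateness = max(0.0, finish_time - deadline)
    return base_value - penalty_rate * lateness

def order_by_value_density(jobs):
    """Prioritise jobs by declared value per requested CPU-hour, so urgent,
    high-utility work is served before low-utility work."""
    return sorted(jobs, key=lambda j: j["value"] / j["cpu_hours"], reverse=True)
```

Market-inspired systems add pricing and negotiation on top of such a valuation, so that users have an incentive to declare values and deadlines truthfully.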
Automated Control for Elastic Storage
"... Elasticity—where systems acquire and release resources in response to dynamic workloads, while paying only for what they need—is a driving property of cloud computing. At the core of any elastic system is an automated controller. This paper addresses elastic control for multi-tier application servic ..."
Cited by 46 (2 self)
Elasticity—where systems acquire and release resources in response to dynamic workloads, while paying only for what they need—is a driving property of cloud computing. At the core of any elastic system is an automated controller. This paper addresses elastic control for multi-tier application services that allocate and release resources in discrete units, such as virtual server instances of predetermined sizes. It focuses on elastic control of the storage tier, in which adding or removing a storage node or “brick” requires rebalancing stored data across the nodes. The storage tier presents new challenges for elastic control: actuator delays (lag) due to rebalancing, interference with applications and sensor measurements, and the need to synchronize the multiple control elements, including rebalancing. We have designed and implemented a new controller for elastic storage systems to address these challenges. Using a popular distributed storage system—the Hadoop Distributed File System (HDFS)—under dynamic Web 2.0 workloads, we show how the controller adapts to workload changes to maintain performance objectives efficiently in a pay-as-you-go cloud computing environment.
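The control problem described above (discrete actuation, rebalancing lag, noisy sensors) can be illustrated with a minimal Python feedback loop. This is a sketch under invented assumptions (one node added or removed at a time, fixed latency thresholds, a simple cooldown to model rebalancing lag), not the paper's controller.

```python
class ElasticStorageController:
    """Toy controller: add or remove one storage node at a time to keep
    measured latency near a target, and hold off further actuation while
    data rebalancing is still in progress (actuator lag)."""

    def __init__(self, target_latency_ms: float, rebalance_seconds: float,
                 min_nodes: int = 3, max_nodes: int = 20):
        self.target = target_latency_ms
        self.rebalance_seconds = rebalance_seconds
        self.min_nodes, self.max_nodes = min_nodes, max_nodes
        self.nodes = min_nodes
        self.busy_until = 0.0    # time until which rebalancing blocks new actions

    def step(self, measured_latency_ms: float, now: float) -> int:
        if now < self.busy_until:
            return self.nodes    # still rebalancing; ignore noisy measurements
        error = measured_latency_ms / self.target
        if error > 1.2 and self.nodes < self.max_nodes:
            self.nodes += 1      # scale out: latency is well above target
            self.busy_until = now + self.rebalance_seconds
        elif error < 0.6 and self.nodes > self.min_nodes:
            self.nodes -= 1      # scale in: ample headroom, save cost
            self.busy_until = now + self.rebalance_seconds
        return self.nodes
```

The cooldown window is what distinguishes the storage tier from a stateless tier: adding a brick only helps after the data has been rebalanced onto it.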
InterGrid: A case for internetworking islands of Grids
- Concurrency and Computation: Practice and Experience (CCPE)
"... Abstract: Over the last few years, several nations around the world have set up Grids to share resources such as computers, data, and instruments to enable collaborative science, engineering, and business applications. These Grids follow a restricted organisational model wherein a Virtual Organisati ..."
Cited by 42 (12 self)
Over the last few years, several nations around the world have set up Grids to share resources such as computers, data, and instruments to enable collaborative science, engineering, and business applications. These Grids follow a restricted organisational model wherein a Virtual Organisation (VO) is created for a specific collaboration and all interactions such as resource sharing are limited to within the VO. Therefore, dispersed Grid initiatives have led to the creation of disparate Grids with little or no interaction between them. In this paper, we propose a model that: (a) promotes interlinking of islands of Grids through peering arrangements to enable inter-Grid resource sharing; (b) provides a scalable structure for Grids that allows them to interconnect with one another and grow in a sustainable way; (c) creates a global Cyberinfrastructure to support e-Science and e-Business applications. This work identifies and proposes the architecture, mechanisms and policies that allow the internetworking of Grids and allow them to grow in a manner similar to the Internet. We term the structure resulting from such internetworking between Grids the InterGrid. The proposed InterGrid architecture is composed of InterGrid Gateways responsible for managing peering arrangements between Grids. We discuss the main components of the architecture and present a research agenda to enable the InterGrid vision.
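The gateway-mediated peering described here can be illustrated with a toy Python sketch: a gateway serves a request from its own Grid when it can, and otherwise redirects it to a peered gateway. The class, its methods, and the simple first-fit peer search are invented for illustration and are not the InterGrid implementation.

```python
from typing import List, Optional

class InterGridGateway:
    """Toy gateway: serve a VM request from the local Grid if capacity allows,
    otherwise redirect it to a peered Grid under a peering arrangement."""

    def __init__(self, name: str, capacity: int):
        self.name = name
        self.capacity = capacity
        self.peers: List["InterGridGateway"] = []

    def peer_with(self, other: "InterGridGateway") -> None:
        self.peers.append(other)      # establish a (one-way) peering arrangement

    def request(self, vms: int) -> Optional[str]:
        if vms <= self.capacity:
            self.capacity -= vms
            return self.name          # served by the local Grid
        for peer in self.peers:
            if vms <= peer.capacity:
                peer.capacity -= vms
                return peer.name      # served by a peered Grid
        return None                   # no Grid in the federation can serve the request
```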
Capacity leasing in cloud systems using the OpenNebula engine
- in Workshop on Cloud Computing and its Applications
"... Clouds can be used to provide on-demand capacity as a utility. Although the realization of this idea can differ among various cloud providers (from Google App Engine 1 to Amazon EC2 2), the most flexible approach is the provisioning of virtualized resources as a service. These virtualization-based c ..."
Cited by 42 (5 self)
Clouds can be used to provide on-demand capacity as a utility. Although the realization of this idea can differ among various cloud providers (from Google App Engine to Amazon EC2), the most flexible approach is the provisioning of virtualized resources as a service. These virtualization-based clouds, like Amazon EC2 or the Science Clouds (which uses the Globus Virtual Workspace Service [4]), provide a way to build a
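The idea of provisioning virtualized resources as a service through a uniform interface can be sketched with a hypothetical driver abstraction in Python. This is explicitly not the OpenNebula or EC2 API; the interface and the toy local backend are assumptions for illustration only.

```python
from abc import ABC, abstractmethod
from typing import List

class VirtualizationDriver(ABC):
    """Hypothetical lease-style interface; real engines expose richer APIs."""

    @abstractmethod
    def boot(self, image: str, count: int) -> List[str]:
        """Start 'count' VMs from 'image' and return their identifiers."""

    @abstractmethod
    def shutdown(self, vm_ids: List[str]) -> None:
        """Release the VMs and their hypervisor slots."""

class LocalClusterDriver(VirtualizationDriver):
    """In-memory stand-in for a local virtualized cluster backend."""

    def __init__(self):
        self._counter = 0

    def boot(self, image: str, count: int) -> List[str]:
        ids = [f"{image}-vm-{self._counter + i}" for i in range(count)]
        self._counter += count
        return ids

    def shutdown(self, vm_ids: List[str]) -> None:
        pass  # nothing to free in this toy backend
```

The value of such an abstraction is that the same capacity-leasing logic can target a local cluster or a remote provider by swapping the driver.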
Evaluating the Cost-Benefit of Using Cloud Computing to Extend the Capacity of Clusters
- In Proceedings of the International Symposium on High Performance Distributed Computing (HPDC 2009)
, 2009
"... In this paper, we investigate the benefits that organisations can reap by using “Cloud Computing ” providers to augment the computing capacity of their local infrastructure. We evaluate the cost of six scheduling strategies used by an organisation that operates a cluster managed by virtual machine t ..."
Cited by 42 (7 self)
In this paper, we investigate the benefits that organisations can reap by using “Cloud Computing” providers to augment the computing capacity of their local infrastructure. We evaluate the cost of six scheduling strategies used by an organisation that operates a cluster managed by virtual machine technology and seeks to utilise resources from a remote Infrastructure as a Service (IaaS) provider to reduce the response time of its user requests. Requests for virtual machines are submitted to the organisation’s cluster, but additional virtual machines are instantiated in the remote provider and added to the local cluster when there are insufficient resources to serve the users’ requests. Naïve scheduling strategies can have a great impact on the amount paid by the organisation for using the remote resources, potentially increasing the overall cost with the use of IaaS. Therefore, in this work we investigate six scheduling strategies that consider the use of resources from the “Cloud”, to understand how these strategies achieve a balance between performance and usage cost, and how much they improve the requests’ response times.
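One naive cloud-bursting strategy of the kind this paper evaluates can be sketched in Python: serve each request locally while capacity lasts, otherwise rent the VMs from an IaaS provider and track the resulting cost. The function, its fields ('vms', 'hours'), and the flat per-VM-hour price are assumptions for illustration, not one of the paper's six strategies.

```python
def naive_bursting(requests, local_free_vms: int, price_per_vm_hour: float):
    """Place each request on the local cluster if it fits; otherwise burst it
    to an IaaS provider and accumulate the rental cost."""
    placements, cost = [], 0.0
    for req in requests:
        if req["vms"] <= local_free_vms:
            local_free_vms -= req["vms"]       # run on the local cluster
            placements.append((req, "local"))
        else:
            cost += req["vms"] * req["hours"] * price_per_vm_hour
            placements.append((req, "cloud"))  # burst to the remote provider
    return placements, cost

# Example: two requests against a 4-VM cluster at $0.10 per VM-hour.
requests = [{"vms": 3, "hours": 2}, {"vms": 4, "hours": 1}]
print(naive_bursting(requests, local_free_vms=4, price_per_vm_hour=0.10))
```

Smarter strategies trade response time against this rental cost, for example by delaying requests that will soon fit locally instead of bursting them immediately.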