A quantitative analysis of performance of shared service systems with multiple resource contention. (2010)

by S-H LIM, J-S HUH, Y KIM, G M SHIPMAN, C R DAS
Results 1 - 3 of 3

D-factor: a quantitative model of application slow-down in multi-resource shared systems

by Seung-Hwan Lim , Jae-Seok Huh , Youngjae Kim , Galen M Shipman , Chita R Das - in Proceedings of the 12th ACM SIGMETRICS/PERFORMANCE conference , 2012
"... ABSTRACT Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits from higher resource utilization include reduced cost to construct, operate, and maintain a system, which often include energy consumption. Maximizing these benefits comes at a price -re ..."
Abstract - Cited by 7 (0 self) - Add to MetaCart
ABSTRACT Scheduling multiple jobs onto a platform enhances system utilization by sharing resources. The benefits of higher resource utilization include reduced cost to construct, operate, and maintain a system, which often includes energy consumption. Maximizing these benefits comes at a price: resource contention among jobs increases job completion time. In this paper, we analyze the slow-downs of jobs due to contention for multiple resources in a system, referred to as the dilation factor. We observe that contention for multiple resources creates non-linear dilation factors of jobs. From this observation, we establish a general quantitative model for the dilation factors of jobs in multi-resource systems. A job is characterized by a vector of loading statistics, and the dilation factors of a job set are given by a quadratic function of their loading vectors. We demonstrate how to systematically characterize a job, maintain the data structure needed to calculate the dilation factor (the loading matrix), and calculate the dilation factor of each job. We validate the accuracy of the model with multiple processes running on a native Linux server, on virtualized servers, and with multiple MapReduce workloads co-scheduled in a cluster. Evaluation with measured data shows that the D-factor model has an error margin of less than 16%. We also show that the model can be integrated with an existing on-line scheduler to minimize the makespan of workloads.

Citation Context

...ne job. Then, the total completion time is the sum of the individual neutral completion times, independent of the starting and ending times of the jobs, that is, T = Σ_{j=1}^{n} τ_j. (7) Proof. Please refer to [19]. Note that Lemma 1 and Theorem 1 assume that jobs are fully overlapped for estimating completion times. Remark 4. In general cases, such a simple formula for total completion time does not exist. Con...
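The identity in the excerpt above (total completion time T equals the sum of the neutral completion times τ_j when jobs fully overlap) can be illustrated with a minimal sketch that is not taken from the paper: n jobs fully contending for a single resource under processor sharing, where k active jobs each progress at rate 1/k. Because the resource is never idle until all work is done, the makespan equals the sum of the neutral times regardless of how the jobs interleave. The function name and the processor-sharing policy are our illustrative assumptions.

```python
def makespan_processor_sharing(taus):
    # Event-driven simulation of fully overlapped jobs sharing one
    # resource: with k jobs still active, each progresses at rate 1/k.
    # taus: neutral (stand-alone) completion times of the jobs.
    remaining = sorted(taus)
    t = 0.0
    while remaining:
        k = len(remaining)
        # Time until the job with the least remaining work finishes
        # while all k jobs share the resource equally.
        t += remaining[0] * k
        done = remaining[0]
        remaining = [r - done for r in remaining[1:]]
    return t

# Makespan matches the sum of the neutral times, as in Eq. (7).
print(makespan_processor_sharing([1.0, 2.0, 3.0]))  # 6.0
```

Work conservation is the whole argument: since the shared resource is busy whenever any work remains, the finishing time of the last job is the total work, Σ τ_j.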

Migration, assignment, and scheduling of jobs in virtualized environment

by Seung-Hwan Lim, Jae-Seok Huh, Youngjae Kim, Chita R Das - In HotCloud, 2010
"... Abstract Migration is an interesting issue for managing resource utilization and performance in clusters. Recent advances in server virtualization have made migration a practical method to achieve these goals. Especially, the live migration of virtualized servers made their pausing times negligible ..."
Abstract - Cited by 5 (1 self) - Add to MetaCart
Abstract Migration is an important mechanism for managing resource utilization and performance in clusters. Recent advances in server virtualization have made migration a practical method to achieve these goals; in particular, live migration of virtualized servers has made pausing times negligible. However, migrating a virtual machine (VM) can slow down other collocated VMs in multi-resource shared systems, where all system resources are shared among collocated VMs. In parallel execution environments, such sudden slow-down phases are called system noise; they may slow down the overall system while increasing the variability of its performance. When the virtual machine assignment problem is treated purely as resource allocation, these performance issues are hard to address properly. In this work, we address how to account for performance when assigning VMs. To achieve this goal, we model the migration of a VM instance as a pair of jobs that run on the sending and receiving hosts. We propose a method to analyze the migration time and the performance impact on multi-resource shared systems of completing a given VM assignment plan. This study may contribute to more robust performance in virtualized environments.

Citation Context

... the assignment cost T and performance variation β, since we have to consider those parameters in the context of shared service systems with multiple resources. We proceed to explain how to determine the job size pj of job j, the assignment cost T, and the performance variation β. The performance model in shared service systems with multiple resources: since the time to complete a job is a primary performance metric, we propose a model to estimate the expanded execution time of jobs when they compete for multiple system resources. Due to page limits, we explain a two-resource, two-job model; refer to [5] for an m-resource, n-job model. Suppose that a system consists of only two resources, r1 and r2. Let us assume that we know the probability of accessing those resources by two jobs, j1 and j2, which can be represented by loading vectors p_1 = (p1, 1 − p1) and p_2 = (p2, 1 − p2), where pi is the probability that job i accesses r1. Execution times will be expanded if two workloads access the same resource at the same time. The probability that two independent jobs access the same resource is p1·p2 + (1 − p1)·(1 − p2). Thus, the expectations of the expanded execution times of the two competing jobs, Ti, are ...
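The collision probability stated in the excerpt can be written down directly. A minimal sketch, assuming only the two-resource, two-job model as described (the function name is ours; the expansion formula for the expected times Ti is truncated in the excerpt, so only the probability is computed here):

```python
def same_resource_prob(p1, p2):
    # Probability that two independent jobs access the same resource
    # at the same time in the two-resource model: job i accesses r1
    # with probability p_i and r2 with probability 1 - p_i.
    return p1 * p2 + (1 - p1) * (1 - p2)

# Two jobs with complementary loading vectors never collide;
# two identically loaded jobs on one resource always do.
print(same_resource_prob(1.0, 0.0))  # 0.0
print(same_resource_prob(1.0, 1.0))  # 1.0
```

Note the non-linearity the excerpt alludes to: the collision probability, and hence the dilation, is a product of the jobs' loading vectors, not a sum.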

Workload-aware VM Scheduling on Multicore Systems

by Insoon Jo, Im Y. Jung, Heon Y. Yeom - International Journal on Computer Science and Engineering (IJCSE)
"... Abstract—In virtualized environments, performance interference between virtual machines (VMs) is a key challenge. In order to mitigate resource contention, an efficient VM scheduling is positively necessary. In this paper, we propose a workload-aware VM scheduler on multi-core systems, which finds a ..."
Abstract - Add to MetaCart
Abstract—In virtualized environments, performance interference between virtual machines (VMs) is a key challenge. To mitigate resource contention, efficient VM scheduling is essential. In this paper, we propose a workload-aware VM scheduler for multi-core systems, which finds a system-wide mapping of VMs to physical cores. Our work aims not only to minimize the number of hosts used, but also to maximize system throughput. To achieve the first goal, our scheduler dynamically adjusts the set of hosts in use. To achieve the second goal, it maps each VM onto the physical core that, together with its host, best meets the resource requirements of the VM. Evaluation demonstrates that our scheduling ensures efficient use of data center resources. Keywords: server consolidation; virtualization; virtual machine scheduling; multi-core systems.

Citation Context

...ment or multi-core systems. Firstly, contention for physical resources impacts performance differently in different workload configurations, causing significant variance in observed system throughput [1, 16, 20, 19, 24]. To this end, characterizing the workloads that generate resource contention and reflecting such contention in VM placement are important for maximizing system throughput. Secondly, though more and more...


Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University