A General-purpose and Multi-level Scheduling Approach in Energy Efficient Computing

Mehdi Sheikhalishahi, Manoj Devare
Department of Electronics, Computer and System Sciences, University of Calabria, CS, Rende, Italy
{ alishahi, mdevare }

Lucio Grandinetti, Demetrio Lagan
Department of Electronics, Computer and System Sciences, University of Calabria, CS, Rende, Italy

Keywords: Green computing, Cloud computing, Scheduling, Energy, DVFS.

Abstract: Green computing denotes energy efficiency in all components of computing systems, i.e. hardware, software, local area networks, etc. In this work, we explore the software side of green computing across computing paradigms in general. Energy efficient computing has to achieve the manifold objectives of energy consumption optimization and utilization improvement for computing paradigms that are not pay-per-use, such as cluster and grid, with revenue maximization as an additional metric for the cloud computing model. We propose a multi-level and general-purpose scheduling approach for energy efficient computing. Some parts of this approach, such as consolidation, are well defined for the IaaS cloud paradigm; however, the approach is not limited to the IaaS cloud model. We discuss policies, models, algorithms and cloud pricing strategies in general. In particular, wherever applicable, we explain our solutions in the context of Haizea. Through experiments, we show a big improvement in utilization and energy consumption in a static setting, as workloads run at lower frequencies and energy optimization correlates with utilization improvement.

1 INTRODUCTION

The primary use of energy in ICT is in data centers.
Efficient power supply design and evaporative cooling rather than air conditioning are two common ways to reduce energy consumption. In addition, work on other areas such as resource management and scheduling to optimize energy consumption (Kim et al., 2007) (Dhiman et al., 2009) in computing paradigms, in order to be carbon neutral, more environmentally friendly and to reduce operational costs, is nowadays a hot research topic. In this paper, we present a perspective on energy efficient computing that achieves the manifold objectives of energy consumption optimization, utilization improvement and revenue maximization for the provider in the cloud paradigm. This is a multi-level and general-purpose scheduling approach which can be applied to all levels of the resource management stack in any distributed computing paradigm. There are many approaches, mechanisms and algorithms in the literature for energy efficiency, most of them special-purpose. The proposed approach is architected across all layers of computing paradigms and systems in order to be as general as possible. However, it can be extended or specialized for different environments.

We focus our attention on exploring pricing strategies, policies and models, and on exploiting the latest techniques and technologies in modern processors, to design scheduling algorithms that improve resource utilization by using free spaces in the scheduler's availability window and that optimize energy consumption. Throughout this paper we refer to Haizea (Sotomayor, 2010), an open source lease management system (leases are implemented as virtual machines) which supports pluggable scheduling policies, various scheduling algorithms, etc. Being open, flexible and modular for further extensions and enhancements, as well as a general-purpose VM-based scheduler, motivated us to select Haizea as the tool for explaining where our solutions could be applied.
In brief, our paper makes the following contributions in the field:

•  We propose an abstract, multi-level and general-purpose approach to energy efficient computing by exploring policies, models and algorithms. We highlight that energy efficient scheduling is paradigm based; the details of a policy, a model or an algorithm differ across paradigms. We enumerate the components of a distributed system scheduler as frontend policies, core scheduler, information service, and backend policies.

•  As part of the core scheduler, we develop an energy aware algorithm based on DVFS operations. Through experimentation we demonstrate how, using lower frequencies as the operating point of processors, utilization and energy consumption improve whereas total time increases. According to the simulation results, energy consumption optimization correlates with utilization improvement in a static setting.

The remaining parts of this paper are organized as follows. After reviewing related works in Section 2, Section 3 goes over contemporary technologies for energy efficient computing, especially in processors (Section 3.1). Next, Section 4 proposes our multi-level and general-purpose approach to energy efficient computing. Then, Section 5 discusses experimental results as early lessons on energy aware operations. Finally, Section 6 presents our conclusions and discusses future work.

2 RELATED WORKS

In this decade, many approaches have been proposed to address the problem of energy efficient computing, with special attention on energy consumption optimization and utilization improvement.
The premise in DVFS works is to shorten the idle period as much as possible using time scaling, since it is always beneficial to do so from an energy efficiency perspective. In (Hong et al., 1999), some assumptions simplify DVFS-related operations: task arrival times, workload and deadlines are known in advance; these techniques target real time systems. Techniques presented in (Azevedo et al., 2002) require either application or compiler support for performing DVFS. In (Varma et al., 2003), system level DVFS techniques are demonstrated. They monitor CPU utilization at regular intervals and then perform dynamic scaling based on their estimate of utilization for the next interval. In contrast to the aforementioned works, the approaches in (Knauerhase et al., 2008) characterize the running tasks and accordingly make voltage scaling decisions only for those phases of the tasks where it is beneficial. Further, the policies take DVFS decisions based on how beneficial they are from a CPU energy savings perspective. Nonetheless, in (Dhiman et al., 2008) it is shown that doing so does not necessarily result in higher system level energy savings; there, instead of using DVFS, simple power management policies are presented based on utilizing the low power modes commonly available in modern processors and memories. In (Kim et al., 2007), the authors propose power-aware scheduling algorithms for bag-of-tasks applications with deadline constraints on DVS-enabled cluster systems. Eucalyptus (Nurmi et al., 2008) and Nimbus (Nimbus, 2010), as Virtual Infrastructure Managers, do not support scheduling policies to dynamically consolidate or redistribute VMs. The scheduling components of OpenNebula (Sotomayor et al., 2009) (in Haizea mode) and of (Dhiman et al., 2009) are able to dynamically schedule VMs across a cluster based on their CPU, memory and network utilization.
Similarly, scheduling algorithms in (Bobroff et al., 2007) provide dynamic consolidation and redistribution of VMs for managing performance and SLA (service level agreement) violations. VMware's Distributed Resource Scheduler (VMWare, 2010) also performs automated load balancing in response to CPU and memory pressure. However, none of these scheduling algorithms take into account the impact of resource contention and policy decisions on energy consumption. The authors in (Laszewskiy et al., 2009) present an efficient scheduling algorithm to allocate virtual machines in a DVFS-enabled cluster by dynamically scaling the supplied voltages. vGreen (Dhiman et al., 2009) is also a system for energy efficient computing in virtualized environments, linking online workload characterization to dynamic VM scheduling decisions to achieve better performance, energy efficiency and power balance in the system.

3 TECHNOLOGIES FOR ENERGY EFFICIENT COMPUTING

Modern hardware components such as processor, memory, disk and network offer feature sets (Burd and Brodersen, 1995) to support energy aware operations. Exploiting these feature sets in order to be more energy efficient is a very important and challenging task, for example in modeling cost/performance, in designing algorithms, and in defining policies. In this section, we review some of these feature sets.

CLOSER 2011 - International Conference on Cloud Computing and Services Science

3.1 Technologies Embedded in Processors

Nowadays processors offer two important features for power saving: cpuidle and cpufreq (Dynamic Voltage and Frequency Scaling). With the cpuidle feature, there are a number of CPU power states (C-states) in which the CPU can reduce power when idle by closing some internal gates. The CPU C-states are C0, C1, ..., Cn.
C0 is the normal working state, in which the CPU executes instructions; C1, ..., Cn are sleeping states, in which the CPU stops executing instructions and powers down some internal components to save power. The cpufreq feature is another power-saving method, especially when CPUs are under load, allowing quick adjustment of frequency/voltage on demand within a small interval. The key idea behind DVFS techniques is to dynamically scale the supply voltage level of the CPU so as to provide just-enough circuit speed to process the system workload, thereby reducing the energy consumption.

3.2 Electricity Consumption Formulation

In this section, we first formulate electricity consumption and then discuss the effect of using two different frequencies on energy consumption. To formulate the energy consumption model, we consider processors that are homogeneous and DVFS-enabled. They have n operating points for each core in a multicore architecture, as follows:

  VF = {(f_0, v_0), ..., (f_n, v_n)}    (1)

If a core runs at frequency f_i, it consumes voltage v_i, respectively. According to (Kim et al., 2007), the energy consumption of a computation which uses voltage v to run at frequency f is the following (quadratically dependent on the supply voltage level):

  P_dynamic ≈ v^2 · f    (2)

  E ≈ E_dynamic = Σ_t P_dynamic · Δt = Σ_t v^2 · f · Δt    (3)

Thus, if we have J jobs, the energy consumption for scheduling them would be:

  E ≈ Σ_{j=1..J} v(j)^2 · f(j) · t(j)    (4)

in which v(j) and f(j) are the voltage used and the core's frequency while running job j, respectively, taken from the VF set defined before. Consider a job which requires t seconds of execution time to complete on a 1 GHz processor. If this job ran with two different operating points a(v_a, f_a) and b(v_b, f_b), its energy consumption in the two scenarios would be as follows:

  E_a ≈ v_a^2 · f_a · t_a = v_a^2 · f_a · (t / f_a) = v_a^2 · t    (5)

  E_b ≈ v_b^2 · f_b · t_b = v_b^2 · f_b · (t / f_b) = v_b^2 · t    (6)

so that:

  v_a > v_b  ⇒  E_a > E_b    (7)

If v_a > v_b, then E_a > E_b: the energy consumption at operating point a is greater than at operating point b, while the job finishes earlier (t_a < t_b).

4 A MULTI-LEVEL AND GENERAL-PURPOSE APPROACH IN ENERGY EFFICIENT COMPUTING

We believe that, in order to approach the conflicting goals of energy efficient computing, a multi-level approach which applies to all the levels of the workload's path in the resource management stack should be devised. Policies, cost/performance models and algorithms within the scheduling domain, together with pricing schemas in the cloud computing paradigm, are the components which should become or incorporate energy aware placements, operations, techniques, models, etc.

4.1 Policies

Policies have profound impacts on energy efficient computing. Policies can be categorized into three types: general-purpose policies (Dhiman et al., 2009), architecture-specific (or infrastructure-specific) policies (Hong et al., 1999), and application-specific (or workload-specific) policies (Kim et al., 2007). General-purpose policies are those that can be applied to most computing models. For instance, CPU/cache-intensive workloads should run at high frequencies, since by increasing frequency the performance scales linearly for a CPU/cache-intensive workload. Architecture-specific policies are defined based on the architecture or infrastructure in which the computation happens, while application-specific policies are defined around applications' characteristics. The workload consolidation policy (Hermenier et al., 2009) is a sort of policy at the intersection of the last two mentioned policies.
Mixing various types of workloads on top of a physical machine is called consolidation. Furthermore, consolidation-based policies should be designed in such a way as to achieve effective consolidation. In fact, effective consolidation is not packing the maximum workload onto the smallest number of servers, keeping each resource (CPU, disk, network and memory) on every server at 100% utilization; such an approach may increase the energy used per service unit.

4.2 Algorithms

Algorithms constitute the next part of energy consumption optimization. At this level, we deal mainly with energy aware operations over resources like processor, memory, disk and network. For instance, for a processor we have lightweight operations, i.e. decreasing/increasing frequency/voltage and moving to an idle state or to a performance state, and heavyweight operations such as suspend/resume/migrate and start/stop on virtual machines, or turn on/turn off on physical machines. Algorithms based on cost/performance models are the part of scheduling that models cost vs. performance of system states. A formal cost model quantifies the cost of a system state in terms of power and performance. These models are exploited by the scheduling algorithms to select the best state of a processor, memory, disk and network. Sleep states' power rates and their latency, i.e. the time needed to change to and from the running state, are examples of parameters in modeling cost vs. performance. In addition, models should specify how much energy is saved by a state transition and how long the transition takes. Models are also architecture and infrastructure dependent; e.g. the internals of multicore and NUMA systems have different features and characteristics to be considered.
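The break-even reasoning behind such a cost model can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the state names, power rates and transition latencies are invented for the example, not taken from any processor datasheet. A sleep state is only worth entering if the energy saved during the idle period exceeds the energy cost of the transition, and the transition must fit inside the idle period.

```python
from dataclasses import dataclass

@dataclass
class SleepState:
    name: str
    power_w: float        # power drawn while resting in this state (W)
    latency_s: float      # time to enter and leave the state (s)
    trans_power_w: float  # power drawn during the transition (W)

def best_state(states, idle_s, active_power_w):
    """Pick the state minimizing energy over an idle period of idle_s seconds."""
    best, best_energy = None, idle_s * active_power_w  # baseline: stay active
    for s in states:
        if s.latency_s >= idle_s:
            continue  # transition would not fit in the idle period
        energy = (s.latency_s * s.trans_power_w
                  + (idle_s - s.latency_s) * s.power_w)
        if energy < best_energy:
            best, best_energy = s, energy
    return best, best_energy

# Hypothetical states: deeper states save more power but cost more to enter.
states = [SleepState("C1", 30.0, 0.00001, 100.0),
          SleepState("C3", 15.0, 0.0001, 100.0),
          SleepState("C6", 5.0, 0.001, 100.0)]

state, energy = best_state(states, idle_s=0.5, active_power_w=80.0)
```

With these illustrative numbers, a half-second idle period is long enough to amortize even the deepest state's transition cost; for much shorter idle periods the shallower states win, which is exactly the latency/power trade-off the cost model is meant to capture.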
4.3 Pricing Strategy

At the highest level, in the cloud interface, we have pricing strategies such as Spot Pricing in Amazon (Amazon, 2010), recent pricing approaches in Haizea, or perhaps game theory mechanisms. These mechanisms apply cloud policies aimed at revenue maximization or utilization improvement. These policies have more or less the same goals, and they are also energy efficient, since they keep cloud resources busy by offering various prices to attract more cloud consumers. A dynamic pricing strategy, such as offering cheaper prices for applications that will lead to less energy consumption (or higher performance) than others given the current cloud status (workloads and resources), is an energy efficient pricing schema.

4.4 Autonomic Scheduling

We enumerate the components of a distributed system scheduler as frontend policies, core scheduler, information service, and backend policies. Summarizing our approach as a reference architecture, these components are the following:

•  Frontend policies: Admission control and pricing are placed in this component. Job requests pass through these policies before queueing.

•  Core scheduler: The queue mechanism, various scheduling algorithms and specific energy aware algorithms, such as those dealing with energy aware operations (next section), constitute the core of a scheduler.

•  Information service: The scheduler's slot table, availability window, etc. provide various information to be exploited in different components.
•  Backend policies: Host selection, mapping and preemption constitute the backend policies.

An autonomic scheduling approach could exploit this reference architecture to make various decisions in different components. For example, in the queue mechanism, based on a job's characteristics, the information provided by the information service, and the other jobs in the queue, a grouping or affinity mechanism could be implemented to reduce resource contention among jobs; we are studying this in another work. Nonetheless, more details of this approach are out of the scope of this paper.

5 EARLY LESSONS ON ENERGY AWARE OPERATIONS

In this section, we review two main energy aware operations from the scheduling point of view at the processor level, in a more detailed manner than the previous section. We have added the DVFS feature to Haizea's resource model to support a set of different frequencies and voltages. We also extended the duration class of Haizea to keep track of running leases with different frequencies and to update the remaining time of a lease accordingly. This section highlights the importance of classifying compute intensive workloads according to their demands and running them on the most appropriate processor (i.e. operating at the frequency required by the classified workloads).

5.1 Experimental Results

In the first experiment, we run Haizea in simulation mode to process 30 days of lease requests from the SDSC Blue Horizon cluster job submission trace (Feitelson, 2010). We have done two separate experiments with two different processors as the processing unit of the nodes, with the following DVFS specifications:

Table 1: Operating points for processor1.

  Perf. state | Frequency (MHz) | Voltage (V)
  P0          | 3600            | 1.4
  P1          | 3400            | 1.35
  P2          | 3200            | 1.3
  P3          | 3000            | 1.25
  P4          | 2800            | 1.2

Table 2: Operating points for processor2.

  Perf. state | Frequency (MHz) | Voltage (V)
  P0          | 1600            | 1.484
  P1          | 1400            | 1.420
  P2          | 1200            | 1.276
  P3          | 1000            | 1.164
  P4          | 800             | 1.036
  P5          | 600             | 0.956

In each run, we measured the whole experimentation time, the sum of all leases' slowdown (all-leases-slowdown) and the energy consumption according to equation (3), as well as resource utilization. Tables 3 and 4 show these metrics for processor1 and processor2, respectively.

Table 3: SDSC Blue metrics for processor1.

  Freq (MHz) | Time (sec.) | Slowdown (sec.) | EC (Joule)   | Util
  3600       | 2668402     | 9870            | 160560938160 | 0.68
  3400       | 2690148     | 12552           | 149281951131 | 0.71
  3200       | 2720196     | 14349           | 138431042048 | 0.75
  3000       | 2748824     | 14473           | 127989300000 | 0.79
  2800       | 2829515     | 20649           | 117954039168 | 0.82

This experiment reveals that as frequencies decrease, the running time increases slightly; resource utilization increases, and the same holds for the slowdown metric. On the other hand, there is a big decrease in energy consumption. In conclusion, comparing the highest frequency run with the lowest frequency run, we observe that while the running time increases, there is a big improvement in the energy consumption metric; in addition, resource utilization increases by around 14%, which is a very good gain for utilization. Increasing utilization is itself an energy efficient approach, since resources stay busy and less energy is wasted. Nonetheless, in case of high load it would be better to run applications at the highest frequency, since during peak times utilization is always high, and by this technique the computing system could service more jobs. In fact, there is a trade-off between load, utilization, cost, performance and sustainability. In all, in case of low load, where resource utilization is low, running applications at low frequency is a promising technique to fill out the scheduler's availability window and increase utilization; on the other hand, at high load times we could run applications at the highest performance of the computing system. Trade-off is a key in all these aspects.
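The trend in Table 3 can be reproduced qualitatively from equations (3), (5) and (6): for a fixed amount of work, energy scales with v^2 while running time scales with 1/f. The following minimal sketch assumes the Table 1 frequencies are in MHz and uses a hypothetical job that needs 1000 seconds on a 1 GHz processor; it is an illustration of the model, not of the Haizea simulation itself.

```python
# Operating points from Table 1: frequency (MHz) -> voltage (V).
operating_points = {3600: 1.4, 3400: 1.35, 3200: 1.3, 3000: 1.25, 2800: 1.2}

def run_job(job_t_at_1ghz, freq_mhz, volt):
    """Time and dynamic energy of a job at a given operating point (eq. 3)."""
    f_ghz = freq_mhz / 1000.0
    time = job_t_at_1ghz / f_ghz           # eq. (5)/(6): t scales as 1/f
    energy = volt ** 2 * f_ghz * time      # eq. (3); reduces to volt**2 * t
    return time, energy

t_hi, e_hi = run_job(1000.0, 3600, operating_points[3600])
t_lo, e_lo = run_job(1000.0, 2800, operating_points[2800])

# Lower frequency: longer running time but lower energy, matching Table 3.
assert t_lo > t_hi and e_lo < e_hi
```

Since the frequency cancels out of the energy term, the saving comes entirely from the lower supply voltage at the slower operating point, which is the statement of equation (7).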
Furthermore, we have the same observation for the experimentation with processor2.

Table 4: SDSC Blue metrics for processor2.

  Freq (MHz) | Time (sec.) | Slowdown (sec.) | EC (Joule)    | Util
  1600       | 2668402     | 9870            | 80180564496.3 | 0.68
  1400       | 2739271     | 15832           | 73407588444.4 | 0.76
  1200       | 2851453     | 22414           | 59276240343.1 | 0.85
  1000       | 3269150     | 51268           | 49327094388.4 | 0.89
  800        | 4017467     | 66126           | 39076964327.3 | 0.90
  600        | 5336254     | 85235           | 33274070266.6 | 0.91

In the second experiment, we studied the same metrics on the SDSC DataStar trace. Table 5 shows the aforementioned metrics for processor1.

Table 5: SDSC DataStar metrics for processor1.

  Freq (MHz) | Time (sec.) | Slowdown (sec.) | EC (Joule)   | Util
  3600       | 2654637     | 9017            | 399685094928 | 0.58
  3400       | 2658448     | 10538           | 371630049746 | 0.61
  3200       | 2662737     | 12051           | 344615195392 | 0.65
  3000       | 2667597     | 16264           | 318617648438 | 0.69
  2800       | 2673151     | 20650           | 293637262464 | 0.74

Interestingly, the experimental results are promising, with a big improvement in utilization and energy consumption as workloads run at low frequencies.

6 CONCLUSIONS AND FUTURE WORKS

We have proposed a multi-level and general-purpose approach for energy efficient computing. In particular, we have added support for DVFS to Haizea's resource model in order to run simulation experiments on workloads executing at different frequencies in a static setting. Through the experiments, we have shown a big improvement in utilization and energy consumption as workloads run at low frequencies, and the correlation of energy consumption optimization with utilization improvement in a static setting.