TEEC 2016 Abstracts

Full Papers
Paper Nr: 3

Integrated Energy Efficient Data Centre Management for Green Cloud Computing - The FP7 GENiC Project Experience


J. Ignacio Torrens, Deepak Mehta, Vojtech Zavrel, Diarmuid Grimes, Thomas Scherer, Robert Birke, Lydia Chen, Susan Rea, Lara Lopez, Enric Pages and Dirk Pesch

Abstract: Computation and cooling account for the greatest share of the energy consumed in an average data centre. As these two aspects are not always coordinated, energy consumption is not optimised. Data centres lack an integrated system that jointly optimises and controls all operations in order to reduce energy consumption and increase the usage of renewable sources. GENiC addresses this through a novel, scalable, integrated energy management and control platform for data-centre-wide optimisation. We have implemented a prototype of the platform together with workload and thermal management algorithms, and we evaluate these algorithms in a simulation-based model of a real data centre. Results show significant energy savings potential, in some cases up to 40%, from integrating workload and thermal management.
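The idea of jointly optimising computation and cooling can be illustrated with a minimal sketch. All names, parameters and the simple linear power model below are illustrative assumptions, not the GENiC platform's actual algorithms: each server carries a cooling coefficient modelling how costly its location is to cool, and a greedy step places each job where the marginal compute-plus-cooling power is lowest.

```python
# Illustrative sketch (not the GENiC algorithm): greedy joint
# workload/thermal placement. Total power = IT power * (1 + cooling
# coefficient); each job goes to the server with the lowest marginal
# total power, so a slightly less efficient but well-cooled server
# can still win the placement.

from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    idle_w: float         # idle power draw (W)
    per_job_w: float      # incremental IT power per job (W)
    cooling_coeff: float  # fraction of IT power spent on cooling
    jobs: list = field(default_factory=list)

    def total_power(self, extra_jobs: int = 0) -> float:
        it_power = self.idle_w + self.per_job_w * (len(self.jobs) + extra_jobs)
        return it_power * (1.0 + self.cooling_coeff)

def place(job: str, servers: list) -> Server:
    # Marginal cost of the job = power with it minus power without it.
    best = min(servers, key=lambda s: s.total_power(1) - s.total_power())
    best.jobs.append(job)
    return best

servers = [
    Server("rack-a", idle_w=100, per_job_w=20, cooling_coeff=0.6),  # hot spot
    Server("rack-b", idle_w=100, per_job_w=25, cooling_coeff=0.2),  # well cooled
]
for j in ["job1", "job2", "job3"]:
    place(j, servers)
```

Here rack-a is cheaper in raw IT power (20 W vs. 25 W per job) but its cooling overhead makes rack-b the better choice, which is the kind of trade-off an uncoordinated workload manager would miss.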

Paper Nr: 4

An Energy-aware Scheduling Algorithm in DVFS-enabled Networked Data Centers


Mohammad Shojafar, Claudia Canali, Riccardo Lancellotti and Saeid Abolfazli

Abstract: In this paper, we propose an adaptive online energy-aware scheduling algorithm that exploits the reconfiguration capability of Virtualized Networked Data Centers (VNetDCs) processing large amounts of data in parallel. To achieve energy efficiency in such intensive computing scenarios, a jointly balanced provisioning and scaling of the networking-plus-computing resources is required. We propose a scheduler that manages both the incoming workload and the VNetDC infrastructure to minimize the communication-plus-computing energy dissipated by processing incoming traffic, under hard real-time constraints on the per-job computing-plus-communication delays. Specifically, our scheduler can distribute the workload among multiple virtual machines (VMs) and can tune the processor frequencies and the network bandwidth. The energy model used by our scheduler is rather sophisticated and also accounts for internal/external frequency-switching energy costs. Our experiments demonstrate that the proposed scheduler guarantees high quality of service to users while respecting service level agreements. Furthermore, it attains minimum energy consumption under two real-world operating conditions: a discrete and finite number of CPU frequencies, and non-negligible VM reconfiguration costs. Our results confirm that the overall energy savings of the data center can be significantly higher than with existing solutions.
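The core DVFS intuition behind schedulers of this kind can be sketched in a few lines. This is a simplified illustration, not the authors' algorithm: since dynamic CPU power grows roughly cubically with frequency, energy per job grows roughly quadratically, so the slowest discrete frequency that still meets the hard per-job deadline minimizes energy (the constant k and the frequency set below are assumed values):

```python
# Illustrative DVFS sketch (not the paper's scheduler): with dynamic
# power P ~ k * f^3, the energy for a job of `cycles` cycles is
# E = P * (cycles / f) = k * cycles * f^2, so among the discrete
# frequencies that meet the deadline, the slowest one minimizes energy.

def pick_frequency(cycles: float, deadline_s: float, freqs_hz: list) -> float:
    # Feasible frequencies complete all cycles within the hard deadline.
    feasible = [f for f in sorted(freqs_hz) if cycles / f <= deadline_s]
    if not feasible:
        raise ValueError("deadline cannot be met at any available frequency")
    return feasible[0]  # slowest feasible frequency -> lowest energy

def job_energy_j(cycles: float, f_hz: float, k: float = 1e-27) -> float:
    # E = (k * f^3) * (cycles / f) = k * cycles * f^2
    return k * cycles * f_hz ** 2

freqs = [1.0e9, 1.5e9, 2.0e9, 2.5e9]  # assumed discrete CPU frequencies
f = pick_frequency(cycles=3.0e9, deadline_s=2.0, freqs_hz=freqs)
```

A real VNetDC scheduler must additionally weigh the frequency-switching and VM reconfiguration costs the abstract mentions, which can make it worthwhile to stay at a slightly suboptimal frequency rather than pay the transition energy.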

Paper Nr: 6

Towards Design-time Simulation Support for Energy-aware Cloud Application Development


Christophe Ponsard, Renaud De Landtsheer, Gustavo Ospina and Jean-Christophe Deprez

Abstract: Cloud application deployment is becoming increasingly popular thanks to the removal of upfront hardware costs, the pay-per-use cost model and the ability to scale. However, deploying software on the Cloud carries both opportunities and threats regarding energy efficiency. In order to help Cloud application developers learn and reason about the energy consumption of their applications on the server side, we have developed a framework centred on a UML profile for relating energy goals, requirements and associated KPI metrics to application design and deployment elements. Our previous work focused on using this framework to carry out run-time experiments in order to select the best approach. In this paper, we explore the feasibility of a complementary approach that provides support at design time, based on finer-grained deployment models, the specification of Cloud and energy adaptation policies, and the use of a discrete event simulator for reasoning about key performance indicators such as energy, as well as overall performance, delay and cost. The goal is to support the Cloud developer in pre-selecting the best trade-off, which can be further tuned at run-time.
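How a simulator can turn a deployment model into KPI estimates at design time can be shown with a minimal sketch. The single-server FIFO model and the two KPIs below are illustrative assumptions, not the authors' simulator:

```python
# Illustrative sketch (not the paper's simulator): a tiny simulation
# that estimates two KPIs for a candidate deployment from a job
# arrival trace: total energy spent serving jobs, and the mean
# response delay (queueing + service) on a single FIFO server.

def simulate(arrivals_s, service_s, power_w):
    """arrivals_s: sorted job arrival times (s); fixed service time and
    server power. Returns (energy in J, mean response delay in s)."""
    busy_until = 0.0
    delays = []
    busy_time = 0.0
    for t in arrivals_s:
        start = max(t, busy_until)   # wait if the server is busy
        busy_until = start + service_s
        delays.append(busy_until - t)  # response time seen by the job
        busy_time += service_s
    energy_j = power_w * busy_time     # energy while actively serving
    return energy_j, sum(delays) / len(delays)

energy_j, mean_delay = simulate([0.0, 1.0, 2.0], service_s=2.0, power_w=50.0)
```

Running the same trace against several candidate deployments (more servers, different adaptation policies) would let a developer compare the energy/delay trade-offs before committing to one, which is the kind of design-time pre-selection the paper targets.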