CLOSER 2017 Abstracts


Area 1 - Cloud Computing Fundamentals

Full Papers
Paper Nr: 33
Title:

Reconfigurable and Adaptive Spark Applications

Authors:

Mohamad Jaber, Mohamed Nassar, Wael Al Rahal Al Orabi, Bilal Abi Farraj, Mohamad Omar Kayali and Chadi Helwe

Abstract: The contribution of this paper is two-fold. First, we propose a Domain Specific Language (DSL) to easily reconfigure and compose Spark applications. For each Spark application we define its input and output interfaces. Then, given a set of connections that map outputs of some Spark applications to free inputs of other Spark applications, we automatically embed the Spark applications with the required synchronization and communication to properly run them according to the user-defined mapping. Second, we present an adaptive quality management/selection method for Spark applications. The method takes as input a pipeline of parameterized Spark applications, where the execution time of each Spark application is an unknown increasing function of quality level parameters. The method builds a controller that automatically computes an adequate quality for each Spark application to meet a user-defined deadline. Consequently, users can submit a pipeline of Spark applications and a deadline, and our method automatically runs all the Spark applications with the maximum quality while respecting the deadline specified by the user. We present experimental results showing the effectiveness of our method.
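The wiring idea described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' actual DSL: each application declares input/output interfaces, user-defined connections map outputs to free inputs, and a topological sort yields a valid execution order. All class and function names are assumptions.

```python
from collections import defaultdict, deque

class SparkApp:
    """An application with named input and output interfaces (illustrative)."""
    def __init__(self, name, inputs, outputs):
        self.name, self.inputs, self.outputs = name, set(inputs), set(outputs)

def execution_order(apps, connections):
    """connections: (producer_app, output, consumer_app, input) tuples.
    Returns app names in an order that respects the data-flow mapping."""
    deps = defaultdict(set)
    for prod, _out, cons, _inp in connections:
        deps[cons].add(prod)            # consumer depends on producer
    indeg = {a.name: len(deps[a.name]) for a in apps}
    ready = deque(n for n, d in indeg.items() if d == 0)
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in indeg:                 # release apps waiting on n
            if n in deps[m]:
                deps[m].discard(n)
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
    return order
```

For example, an ingest → clean → analyze pipeline would be scheduled in that order regardless of how the applications were declared.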

Paper Nr: 38
Title:

A Quantitative Methodology for Cloud Security Risk Assessment

Authors:

Srijita Basu, Anirban Sengupta and Chandan Mazumdar

Abstract: Assets of Cloud stakeholders (Service Providers, Consumers and Third Parties) are the essential elements required to carry out necessary functions/services of the cloud system. Assets usually contain vulnerabilities that may be exploited by threats to jeopardize the functioning of the cloud system. Therefore, a proper risk assessment methodology is required to determine the asset-specific and stakeholder-specific risks so as to be able to control them. Existing methodologies fail to comprehensively evaluate various risk elements like asset value, vulnerabilities and threats. This paper is an attempt to quantitatively model all risk elements and devise a methodology to assess risks to assets and stakeholders of a cloud system.

Paper Nr: 48
Title:

An Ontological Template for Context Expressions in Attribute-based Access Control Policies

Authors:

Simeon Veloudis, Iraklis Paraskakis, Chris Petsos, Yiannis Verginadis, Ioannis Patiniotakis and Gregoris Mentzas

Abstract: By taking up the cloud computing paradigm enterprises are able to realise significant cost savings whilst increasing their agility and productivity. However, due to security concerns, many enterprises are reluctant to migrate their critical data and operations to the cloud. One way to alleviate these concerns is to devise suitable policies that infuse adequate access controls into cloud services. However, the dynamicity inherent in cloud environments, coupled with the heterogeneous nature of cloud services, hinders the formulation of effective and interoperable access control policies that are suitable for the underlying domain of application. To this end, this work proposes an ontological template for the semantic representation of context expressions in access control policies. This template is underpinned by a suitable set of interrelated concepts that generically capture a wide range of contextual knowledge that must be considered during the evaluation of policies.

Paper Nr: 59
Title:

Avoiding Free Riders in the Cloud Federation Highways

Authors:

Marcio Roberto Miranda Assis and Luiz Fernando Bittencourt

Abstract: The maturity of the Cloud Computing paradigm has highlighted a set of obstacles which isolated cloud providers are not able to handle. To overcome these obstacles, isolated providers can organize themselves in entities called Inter-Clouds, mainly to share resources. However, this kind of resource-sharing environment may face the emergence of free riders, which only consume resources without caring for the whole, in a modern version of the Tragedy of the Commons. This work characterizes the free riders and proposes an Inter-Cloud architecture to avoid them based on the main features of Cloud Federations. This Inter-Cloud architecture, called Multi-Clouds Tournament, organizes multiple cloud providers in a tournament-based fashion, allowing those with better scores, determined by a function of offer and consumption, to take advantage of the system. On the other hand, those with low expected returns to the system, free riders for example, face disadvantages or are even eliminated from the tournament. We show preliminary tests of a score function within the tournament, illustrating how the system promotes or eliminates participants according to their behavior.
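A minimal sketch of such a tournament score, assuming (as the abstract states) that a provider's score is a function of what it offers versus what it consumes. The exact function, thresholds and labels below are illustrative assumptions, not the paper's actual formula.

```python
def score(offered, consumed):
    """Free riders consume far more than they offer, driving the score down."""
    return offered / (consumed + 1.0)

def classify(offered, consumed, promote_at=1.0, eliminate_at=0.2):
    """Map a provider's behavior to a tournament outcome (thresholds assumed)."""
    s = score(offered, consumed)
    if s >= promote_at:
        return "promote"
    if s < eliminate_at:
        return "eliminate"
    return "stay"
```

A provider offering 100 units while consuming 10 would be promoted, while one offering 1 unit against 50 consumed would be flagged as a free rider and eliminated.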

Paper Nr: 86
Title:

Recovery-Oriented Resource Management in Hybrid Cloud Environments

Authors:

Yasser Aldwyan and Richard O. Sinnott

Abstract: Cloud-based systems suffer from an increased risk of individual server failures due to their scale. When failures happen, resource utilization and system reliability can be negatively affected. Hybrid cloud models allow utilization of local resources in private clouds together with resources from public clouds as and when needed through cloudbursting. There is an urgent need to develop cloudbursting approaches that are cognisant of the reliability and fault tolerance of external cloud environments. Recovery-oriented computing (ROC) is a new approach for building reliable services that places emphasis on recovery from failures rather than avoiding them completely, since even the most dependable systems will eventually fail. All fault tolerant techniques aim to reduce time to recover (TTR). In this paper, we develop a ROC-based fault tolerant approach for managing resources in hybrid clouds by proposing failure models with associated feedback control supporting a local resource-aware resource provisioning algorithm. We present a recovery-oriented virtual infrastructure management system (RVIMS). Results show that RVIMS is more reliable than single-cloud environments, even though TTR in the single-cloud environments is about 10% less than that of RVIMS.

Short Papers
Paper Nr: 36
Title:

DECIDE: DevOps for Trusted, Portable and Interoperable Multi-Cloud Applications towards the Digital Single Market

Authors:

Juncal Alonso, Leire Orue-Echevarria, Marisa Escalante and Gorka Benguria

Abstract: The main objective of the DECIDE action is to provide a new-generation multi-cloud service-based software framework, enabling techniques and mechanisms to design, develop, and dynamically deploy multi-cloud aware applications in an ecosystem of reliable, interoperable, and legally compliant cloud services. Three use cases will be conducted to validate the proposed approach.

Paper Nr: 76
Title:

Enforcing Hidden Access Policy for Supporting Write Access in Cloud Storage Systems

Authors:

Somchart Fugkeaw and Hiroyuki Sato

Abstract: Ciphertext Policy Attribute-based Encryption (CP-ABE) is recognized as one of the most effective approaches for data access control in cloud computing. This is because it provides efficient key management based on user attributes of multiple users in accessing shared data. However, one of the major drawbacks of CP-ABE is the privacy of policy content. Furthermore, the communication and computation cost at the data owner would be very expensive if there are frequent updates of data, as the updated data need to be re-encrypted and uploaded back to the cloud. From the policy privacy perspective in CP-ABE based access control, the access policy is usually applied to encrypt the plain data and is carried with the ciphertext. In a real-world system, policies may contain sensitive information that must be hidden from untrusted parties or even the users of the system. This paper proposes a flexible and secure policy hiding scheme that is capable of supporting policy content privacy preservation and secure policy sharing in multi-authority cloud storage systems. To address the policy privacy issue, we introduce randomized hash-based public attribute key validation to cryptographically protect the content of the access policy and dynamically enforce hidden policies on collaborative users. In addition, we propose a write access enforcement mechanism based on the proxy re-encryption method to enable optimized and secure file re-encryption. Finally, we present the security analysis and compare the access control and policy hiding features of our scheme and related works. The analysis shows that our proposed scheme is secure and efficient in practice, and that it requires a less complex cryptographic formulation for policy hiding compared to related works.

Paper Nr: 78
Title:

An Immuno-based Autonomic Computing System for IaaS Security in Public Clouds

Authors:

Abdelwahhab Satta, Sihem Mostefai and Imane Boussebough

Abstract: Cloud computing is the new way of exploiting computing infrastructures. These infrastructures, offered as a service by the cloud IaaS service model, are very appealing to new industry and business. However, surveys reveal that security issues are still the major barrier facing the migration from on-premise infrastructure to public clouds. On the other hand, Autonomic Computing Systems have so far been used to enable the cloud, and in this work we investigate these systems' capabilities to enable security management for IaaS in public clouds.

Paper Nr: 87
Title:

Establishing Trust in Cloud Services via Integration of Cloud Trust Protocol with a Trust Label System

Authors:

Vincent C. Emeakaroha, Eoin O'Meara, Brian Lee, Theo Lynn and John P. Morrison

Abstract: Cloud computing has transformed the computing landscape by enabling flexible compute-resource provisioning. The rapid growth of cloud computing and its large-scale nature provide many advantages to business enterprises. Lack of trust in cloud services however remains a major barrier to the adoption of cloud computing. Trust issues typically relate to concerns about the location, protection and privacy of data. The concept of trust includes trust in both novel technologies and cloud service providers and is composed of both persistent and dynamic trust elements. We have previously developed a trust label system that is capable of facilitating both persistent and dynamic trust building in both cloud services and cloud service providers; however, it lacks reliable information delivery mechanisms. The Cloud Trust Protocol (CTP) aims to provide a reliable information communication system for cloud service consumers and thus offers a natural complement to the trust label system by providing a reliable and interoperable information exchange mechanism. In this paper, we propose a novel integration of the trust label system with CTP to provide end-to-end visibility of cloud service operational information to consumers.

Paper Nr: 100
Title:

Performance of Trusted Computing in Cloud Infrastructures with Intel SGX

Authors:

Anders T. Gjerdrum, Robert Pettersen, Håvard D. Johansen and Dag Johansen

Abstract: Sensitive personal data is to an increasing degree hosted on third-party cloud providers. This generates strong concerns about data security and privacy as the trusted computing base is expanded to include hardware components not under the direct supervision of the administrative entity responsible for the data. Fortunately, major hardware manufacturers now include mechanisms promoting secure remote execution. This paper studies Intel’s Software Guard eXtensions (SGX), and experimentally quantifies how basic usage of this instruction set extension will affect how cloud hosted services must be constructed. Our experiments show that correct partitioning of a service’s functional components will be critical for performance.

Paper Nr: 103
Title:

Managing Personalized Cross-cloud Storage Systems with Meta-code

Authors:

Magnus Stenhaug, Håvard D. Johansen and Dag Johansen

Abstract: Providing fine-grained and customized storage solutions for novice cloud users is challenging. At best, a limited set of customization options is provided, often related to the volume of data to be stored. We propose a radically different customization approach for cloud users, one where personalization of the services provided is transparently managed and supported. Our approach to building personalized cloud storage services is to allow the user to specify data management policies that execute in a user-space container transparently to the user. In this paper we describe Balva, a cloud management system that allows users to configure flexible management policies on their data. To support legacy applications, Balva is implemented at the file system level, intercepting system calls to effectuate dynamic and personalized management policies attached to files.
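The interception idea can be illustrated with a small sketch. This is not Balva's actual implementation (which hooks system calls at the file system level); it only shows the shape of applying a chain of per-file management policies on write. The class, policy and path names are hypothetical.

```python
class PolicyFile:
    """A file handle that runs user-defined policies on each write (illustrative)."""
    def __init__(self, path, policies):
        self.path, self.policies, self.store = path, policies, {}

    def write(self, data):
        for policy in self.policies:
            data = policy(data)          # each policy transforms the payload
        self.store[self.path] = data     # stand-in for the real backing store
        return len(data)

# A stand-in policy: collapse double spaces (a real policy might encrypt,
# compress, or replicate the data).
compress = lambda d: d.replace(b"  ", b" ")

f = PolicyFile("/tmp/report.txt", [compress])
```

A write through `f` would then transparently pass through every attached policy before reaching storage, which is the effect the paper achieves for unmodified legacy applications.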

Posters
Paper Nr: 6
Title:

Attribute based Encryption: Traitor Tracing, Revocation and Fully Security on Prime Order Groups

Authors:

Xiaoyi Li, Kaitai Liang, Zhen Liu and Duncan Wong

Abstract: A Ciphertext-Policy Attribute-Based Encryption (CP-ABE) scheme allows users to specify access policies without having to know the identities of users. In this paper, we contribute by proposing an ABE scheme which enables revoking corrupted users. Given a key-like blackbox, our system can identify at least one of the users whose key must have been used to construct the blackbox and can revoke the key from the system. This paper extends the work of Liu and Wong to achieve traitor revocability. We construct an Augmented Revocable CP-ABE (AugR-CP-ABE) scheme, and describe its security by message-hiding and index-hiding games. Then we prove that an AugR-CP-ABE scheme with message-hiding and index-hiding properties can be transformed into a secure Revocable CP-ABE with fully collusion-resistant blackbox traceability. In the proof of index-hiding, we divide the adversary's behaviors into two cases and build direct reductions that use the adversary to solve the D3DH problem. Our scheme achieves a sub-linear overhead of O(√N), where N is the number of users in the system. This scheme is highly expressive and can take any monotonic access structures as ciphertext policies.

Paper Nr: 70
Title:

The Impact of Cloud Forensic Readiness on Security

Authors:

Ahmed Alenezi, Nurul H N. Zulkipli, Hany F. Atlam, Robert J. Walters and Gary B. Wills

Abstract: The rapid increase in the use of cloud computing has led it to become a new arena for cybercrime. Since cloud environments are, to some extent, a new field for digital forensics, a number of technical, legal and organisational challenges have been raised. Although security and digital forensics share the same concerns, when an attack occurs, the fields of security and digital forensics are considered different disciplines. This paper argues that cloud security and digital forensics in cloud environments are converging fields. As a result, unifying security and forensics by being forensically ready and including digital forensics aspects in security mechanisms would enhance the security level in cloud computing, increase forensic capabilities and prepare organizations for any potential attack.

Paper Nr: 106
Title:

An Investigation into the Use of Hybrid Solar Power and Cloud Service Solutions for 24/7 Computing

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: As the human race demands more from computing, the national grids of nations around the world subsequently have to burn additional fossil fuels to meet increased power requirements. The aim of this paper is to investigate ways in which an organisation could reduce its operational costs, and therefore be greener, through the implementation of either a complete solar solution or a more hybrid mix with cloud computing included. Through the creation of a hypothetical UK-based SME, we compared solar technology currently in the market in order to understand not only the total investment required but also just how efficient solar technology is, or perhaps is not. We also investigated comparable technology from the three cloud providers (Microsoft, Amazon and Google) to discover whether replacing on-premise hardware with that available in data centres would be more cost-effective than a full solar solution, or would reduce the total amount of solar technology required. Having conducted the research, we found that solar technology is in no way an effective solution for the total replacement of power from the national grid: it can be very pricey to implement, especially at the scale of always-on computing, and is easily affected by the elements, which, given the UK as a location, is not ideal. It was also discovered that cloud computing is in no way as affordable as it is perhaps made out to be, but has the benefits of being considered a) an operational expenditure, b) fully maintained and c) fully flexible, all reasons which help a growing SME expand down the line without unnecessary hardware outlay. Our final recommendations provide a fair cost comparison, over the total expected payback period of the solar setup, between installing a solar solution to power the entire on-premise systems and simply having a hybrid of both solar and cloud.

Paper Nr: 107
Title:

An Investigation into the Use of Solar Power in Cloud Computing

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: Cisco predicts that by 2019, 86% of computing workloads will be carried out within a cloud computing environment. This is leading to a dramatically increasing need for data centre expansion, which in turn is consuming more and more of the world's natural resources to generate the electricity needed to power them. This paper uses a fictitious electronics recycling company called Compucycle to investigate the feasibility and cost of integrating solar power generation into Compucycle's IT infrastructure compared to completely outsourcing it to a cloud service provider. It was discovered that a complete solar power solution was not feasible due to the excessive costs it brought to the business. It was then decided that two out of the four solutions proposed in this paper were a good fit for the business. The first is a hybrid power solution where a small portion of power is derived from the grid alongside solar power generation. The second is the outsourced option. The third and fourth solutions were disregarded because one was completely unfeasible and the other went against what Compucycle wanted to achieve.

Area 2 - Cloud Operations and Analytics

Full Papers
Paper Nr: 52
Title:

Towards a Generic Autonomic Model to Manage Cloud Services

Authors:

Jonathan Lejeune, Frederico Alvares and Thomas Ledoux

Abstract: Autonomic Computing has recently contributed to the development of self-manageable Cloud services. It provides means to free Cloud administrators of the burden of manually managing varying-demand services while enforcing Service Level Agreements (SLAs). However, designing Autonomic Managers (AMs) that take into account services' runtime properties so as to provide SLA guarantees without the proper tooling support may quickly become a non-trivial, tedious and error-prone task as system size grows. In fact, in order to achieve well-tuned AMs, administrators need to take into consideration the specificities of each managed service as well as its dependencies on underlying services (e.g., a Software-as-a-Service that depends on a Platform/Infrastructure-as-a-Service). We advocate that Cloud services, regardless of the layer, may share the same consumer/provider-based abstract model. From that model we can derive a unique and generic AM that can be used to manage any XaaS service defined with that model. This paper proposes such an abstract (although extensible) model along with a generic constraint-based AM that reasons on abstract concepts, service dependencies as well as SLA constraints in order to find the optimal configuration for the modeled XaaS. The genericity of our approach is shown and discussed through two motivating examples, and a qualitative experiment has been carried out in order to show the approach's applicability as well as to point out and discuss its limitations.

Paper Nr: 58
Title:

Combining TOSCA and BPMN to Enable Automated Cloud Service Provisioning

Authors:

Domenico Calcaterra, Vincenzo Cartelli, Giuseppe Di Modica and Orazio Tomarchio

Abstract: The Cloud computing paradigm has kept its promise to transform computing resources into utilities ready to be consumed in a dynamic and flexible way, on an "as per need" basis. The next big challenge cloud providers face is the capability of automating the internal operational processes that need to be run in order to efficiently serve the increasing customer demand. When a new cloud service request has to be served, there is a set of operations the provider needs to carry out in order to get the requested cloud service up and ready for usage. This paper investigates the automation of the "provisioning" activities that must be put into place in order to build up a cloud service. Those activities range from the procurement of computing resources to the deployment of a web application, passing through the installation and configuration of the third-party software and libraries that the web application depends upon in order to work properly. Leveraging a well-known specification used for the representation of a cloud application's structure (TOSCA), we designed and implemented an orchestrator capable of automating and enforcing, with the correct timing, the sequence of tasks building up the cloud application in a step-by-step fashion. The novelty of the followed approach is represented by the definition of a converter which takes as input a TOSCA template and produces a workflow that is ready to be executed by a workflow engine. The BPMN notation was used to represent both the workflow and the data that enrich the workflow. To support the viability of the proposed idea, a use case was developed and discussed in the paper.
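A simplified sketch of the converter idea: walk a TOSCA-like topology (nodes plus depends-on relations) and emit provisioning tasks in dependency order, as a BPMN-like sequence would execute them. The data structures and task naming here are illustrative assumptions, not the paper's actual converter.

```python
def to_workflow(nodes, relations):
    """nodes: list of node names; relations: (source, target) pairs where
    source depends on target, so target must be provisioned first.
    Returns provisioning tasks in a dependency-respecting order."""
    remaining = set(nodes)
    tasks = []
    while remaining:
        # pick nodes whose dependencies have all been provisioned already
        ready = sorted(n for n in remaining
                       if all(t in tasks for s, t in relations if s == n))
        if not ready:
            raise ValueError("cyclic topology")
        tasks.extend(ready)
        remaining -= set(ready)
    return [f"provision:{t}" for t in tasks]
```

For a classic WebApp hosted on Tomcat hosted on a VM, the resulting task sequence provisions the VM first, then Tomcat, then the WebApp, mirroring the step-by-step orchestration described in the abstract.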

Paper Nr: 96
Title:

Component-wise Application Migration in Bidimensional Cross-cloud Environments

Authors:

Jose Carrasco, Francisco Durán and Ernesto Pimentel

Abstract: We propose an algorithm for the migration of cloud applications’ components between different providers, possibly changing their service level between IaaS and PaaS. Our solution relies on three of the key ingredients of the trans-cloud approach: a unified API, agnostic topology descriptions, and mechanisms for the independent specification of providers. We show how our approach allows us to overcome some of the current interoperability and portability issues of cloud environments to propose a solution for migration, present an implementation of our proposed solution, and illustrate it with a case study and experimental results.

Posters
Paper Nr: 45
Title:

Investigating Cloud Adoption Model using Analytics: A Case Study of Saudi Government Agencies

Authors:

Mohammed Mreea, Dharmendra Sharma, Kumudu Munasinghe and Chaminda Hewamaddumage

Abstract: Cloud computing is an innovation in world technology. It is used to provide organization services as a utility through the internet, and innovative uptake of the cloud can improve effectiveness and efficiency. There are adoption challenges in the uptake of the cloud by government agencies. This paper seeks to identify the cloud computing attributes which are relevant to public organizations, investigate the factors affecting adoption, and develop an effective model to address the challenges. Sample government agencies from Saudi Arabia are used to investigate the challenges and the characteristics for which an adoption model is motivated. The proposed model builds on case-study findings based on analysis of evidence from these organizations. Random samples from different professional categories in Saudi Arabia participated in the questionnaire to extract and confirm influence factors, from which insights are derived through classification. The results are then used to determine the tipping point for the uptake of the appropriate cloud model for services provided by government agencies in the Saudi Arabian context. This paper presents the context, motivations, data collection approach, analytics on survey results, tipping point parameters and the principal factors for cloud adoption. Some initial results are presented and future work is summarized.

Paper Nr: 83
Title:

HIPAA Compliant Cloud for Sensitive Health Data

Authors:

Valentina Salapura

Abstract: Cloud environments offer flexibility, elasticity, and low-cost compute infrastructure. Electronic health records (EHRs) require infrastructure that is regulated under several IT compliance regimes covering security, data persistence and restore. To enable customers to bring sensitive medical data into the cloud, we enabled the IBM Watson Health Cloud (WHC) for compliance with the U.S. federal electronic health record regulation. This paper briefly outlines how we create HIPAA- (Health Insurance Portability and Accountability Act) compliant cloud computing. We focus on the privacy and security rules for protecting Protected Health Information (PHI) and use data encryption for data-in-motion and data-at-rest. To meet HIPAA requirements for data persistence, we implement data back-ups, an archiving service and a disaster recovery plan. In this paper, we discuss various challenges and lessons learned from implementing the diverse set of compliance features required by HIPAA in the IBM WHC cloud.

Paper Nr: 102
Title:

Business Resiliency Framework for Enterprise Workloads in the Cloud

Authors:

Valentina Salapura, Ruchi Mahindru and Richard Harper

Abstract: Businesses with enterprise-level workloads - such as Systems Applications and Products (SAP) workloads - require business-level resiliency including high availability, clustering, or physical server appliances. To enable businesses to use enterprise workloads in a cloud, the IBM Cloud Managed Services (CMS) cloud offers many SAP enterprise-level workloads for both virtualized and non-virtualized cloud environments. Based on our experience with enabling resiliency for enterprise-level workloads like SAP and Oracle, we realize that the end-to-end process is quite cumbersome, complex and expensive. Therefore, it would be highly beneficial for customers and cloud providers to have a systematic business resiliency framework in place, one which fits the cloud model well, with an appropriate level of abstraction and automation, while delivering the desired cost benefits. In this paper, we introduce an end-to-end business resiliency framework and resiliency life cycle. We further introduce an algorithm to determine the optimal resiliency pattern for enterprise applications using a diverse set of platforms in the IBM CMS cloud offering.
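The kind of pattern-selection algorithm the abstract alludes to can be pictured as a mapping from workload requirements to a resiliency pattern. The pattern names, thresholds and rules below are purely hypothetical illustrations, not the CMS catalogue or the paper's actual algorithm.

```python
def select_pattern(availability, virtualized):
    """Pick a resiliency pattern from an availability target (e.g. 0.9999
    for 'four nines') and whether the workload runs virtualized.
    All patterns and cut-offs here are assumed for illustration."""
    if availability >= 0.9999:
        return "active-active cluster"        # highest targets need clustering
    if availability >= 0.999:
        # virtualized workloads can fail over; bare metal needs paired appliances
        return "active-passive failover" if virtualized else "HA appliance pair"
    return "backup-and-restore"               # modest targets: recovery suffices
```

The point of such a function is that the decision becomes systematic and automatable, rather than a per-workload manual exercise, which is the gap the framework in the paper aims to close.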

Area 3 - Data as a Service

Posters
Paper Nr: 35
Title:

User-based Load Balancer in HBase

Authors:

Ahmad Ghandour, Mariam Moukalled, Mohamad Jaber and Yliès Falcone

Abstract: Latency of read and write operations is an important measure in relational and non-relational databases. Load balancing is one of the features that manages the distribution of load between nodes across distributed servers to improve overall performance. In this paper, we introduce a new load balancer for HBase (a non-relational database), which monitors the most requested keys and dynamically acts to redistribute the regions by splitting and moving them. Our load balancer takes into account the average response time of clients' requests and the most requested keys. Our method is fully implemented and can be integrated into the HBase distribution. Experimental results show that we get on average an improvement in latency of 15%, and up to 35% in some scenarios.
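The hot-key tracking that drives such a balancer can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: per-region request counts identify the hottest key, and a region is split at that key once it accounts for a large enough share of the load. The class name and threshold are assumptions.

```python
from collections import Counter

class RegionStats:
    """Tracks request counts per key within one region (illustrative)."""
    def __init__(self):
        self.hits = Counter()

    def record(self, key):
        self.hits[key] += 1

    def split_point(self, threshold=0.5):
        """Return the key to split the region at, or None if the load
        is not concentrated enough to justify a split."""
        total = sum(self.hits.values())
        if not total:
            return None
        key, count = self.hits.most_common(1)[0]
        return key if count / total >= threshold else None
```

In a real balancer, the returned key would be handed to the region server's split-and-move machinery; here it simply signals where the load is concentrated.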

Area 4 - Edge Cloud and Fog Computing

Full Papers
Paper Nr: 90
Title:

Self-organizing Service Structures for Cyber-physical Control Models with Applications in Dynamic Factory Automation - A Fog/Edge-based Solution Pattern Towards Service-Oriented Process Automation

Authors:

Maximilian Engelsberger and Thomas Greiner

Abstract: The convergence of information technology and operational technology is a strong force in factory automation. Service-oriented architectures and cloud computing technologies expand into next-generation production systems. Those technologies enable a lot of new possibilities, such as high agility, global connectivity and high computing capacities. However, they also bring huge challenges regarding flexibility and reliability through increasing system dynamics, complexity and heterogeneity. New solution patterns are needed to overcome those challenges. This paper proposes a new fog-oriented approach, which shows how future production systems, often called cyber-physical production systems, can deal with dynamically changing services and infrastructure elements. The goal is to provide an adequate degree of flexibility and reliability across the whole production lifecycle. Therefore, an event property model ("bubble model"), a multi-criterial evaluation metric and extensions to the Kuhn-Munkres and Add algorithms are described. The overall concept is evaluated by an application example from the field of process engineering. With the help of practical case studies and dynamic system simulations, qualitative results are gained.

Short Papers
Paper Nr: 22
Title:

Internet of Things Out of the Box: Using TOSCA for Automating the Deployment of IoT Environments

Authors:

Ana C. Franco da Silva, Uwe Breitenbücher, Pascal Hirmer, Kálmán Képes, Oliver Kopp, Frank Leymann, Bernhard Mitschang and Ronald Steinke

Abstract: The automated setup of Internet of Things environments is a major challenge due to the heterogeneous nature of the involved physical components (i.e., devices, sensors, actuators). In general, IoT environments consist of (i) physical hardware components, (ii) IoT middlewares that bind the hardware to the digital world, and (iii) IoT applications that interact with the physical devices through the middlewares (e.g., for monitoring). Setting up each of these requires sophisticated means for software deployment. In this paper, we provide such a means by introducing an approach for the automated deployment of entire IoT environments using the Topology and Orchestration Specification for Cloud Applications standard. Based on topology models, all components involved in the IoT environment (devices, IoT middlewares, applications) can be set up automatically. Moreover, to enable interchangeability of IoT middlewares, we show how they can be used as a service to deploy them individually and on-demand for separate use cases. This enables provisioning whole IoT environments out-of-the-box. To evaluate the approach, we present three case studies giving insights into the technical details.

Posters
Paper Nr: 55
Title:

Towards Bandwidth Optimization in Fog Computing using FACE Framework

Authors:

Rosangela de Fátima Pereira Marquesone, Érico Augusto da Silva, Nelson Mimura Gonzalez, Karen Langona, Walter Akio Goya, Fernando Frota Redígolo, Tereza Cristina Melo de Brito Carvalho, Jan-Erik Mångs and Azimeh Sefidcon

Abstract: The continuous growth of data created by Internet-connected devices has been posing a challenge for mobile operators. The increase in network traffic has exceeded the network's capacity to efficiently provide services, especially for applications that require low latency. Edge computing is a concept that allows lowering the network traffic by using cloud-computing resources closer to the devices that either consume or generate data. Based on this concept, we designed an architecture that offers a mechanism to reduce bandwidth consumption. The proposed solution is capable of intercepting the data and redirecting it to a processing node that is allocated between the end device and the server, in order to apply features that reduce the amount of data on the network. The architecture has been validated through a prototype using video surveillance. This area of application was selected due to the high bandwidth required to transfer video data. Results show that in the best scenario it is possible to obtain about 97% bandwidth gain, which can improve the quality of services by offering better response times.
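The bandwidth-reduction mechanism can be pictured with a small sketch: a processing node between camera and server forwards only the data worth sending upstream (e.g., frames that changed), so upstream traffic shrinks. The change test and function names below are stand-ins, not the FACE framework's actual pipeline.

```python
def filter_frames(frames, changed):
    """frames: list of frame payloads; changed(prev, cur) -> bool.
    Forwards the first frame and any frame the change test flags."""
    kept, prev = [], None
    for cur in frames:
        if prev is None or changed(prev, cur):
            kept.append(cur)
        prev = cur
    return kept

def bandwidth_gain(frames, kept):
    """Fraction of upstream traffic saved by filtering at the edge node."""
    return 1 - len(kept) / len(frames)
```

With a mostly static surveillance feed, the fraction of dropped frames directly translates into the kind of bandwidth gain the prototype measured.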

Area 5 - Mobile Cloud Computing

Short Papers
Paper Nr: 74
Title:

Adaptive Computation Offloading in Mobile Cloud Computing

Authors:

Vibha Tripathi

Abstract: Mobile computing has been in use for a while now. A mobile device is a compact tool with limited computational resources such as battery, CPU and memory. Although these resources suffice for the immediate, traditional needs of its user, mobile devices are fast turning into personal computing devices. With the rapid development of cloud-based technologies such as machine learning in the cloud, Data as a Service, and Software as a Service, there is an emergent need to implement increasingly effective ways to offload mobile computation to the cloud in an on-demand, adaptable and opportunistic way. The major difficulty in meeting this requirement lies in the fact that mobile devices are location and context sensitive, limited in battery capacity, and need to constantly reconnect to their provider's base transceivers while still providing efficient response times to their users. In this paper, we survey this issue and several proposed solutions in this area and, finally, propose a model for adaptive computation offloading.
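The core offloading trade-off the abstract alludes to can be illustrated with a classic back-of-the-envelope model: offload a task only when the time to ship its input data to the cloud plus the (faster) remote execution beats running it locally. The sketch below is a hypothetical illustration of that decision rule, not the model proposed in the paper; all parameter names are our own.

```python
def should_offload(local_time_s: float, cloud_speedup: float,
                   data_bytes: int, bandwidth_bps: float,
                   rtt_s: float) -> bool:
    """Illustrative offloading decision (assumed simple cost model).

    Remote cost = time to transfer the input data + network round-trip
    latency + remote compute time (local time divided by the cloud's
    speedup factor). Offload only when that beats local execution.
    """
    transfer_s = (data_bytes * 8) / bandwidth_bps
    remote_s = transfer_s + rtt_s + local_time_s / cloud_speedup
    return remote_s < local_time_s
```

Under this toy model, a 10-second task with 1 MB of input over a 10 Mbit/s link is worth offloading, while a half-second task with 50 MB of input is not; an adaptive offloader would re-evaluate such a rule as bandwidth, latency, and battery conditions change.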

Posters
Paper Nr: 23
Title:

Temporal Isolation Among LTE/5G Network Functions by Real-time Scheduling

Authors:

Tommaso Cucinotta, Mauro Marinoni, Alessandra Melani, Andrea Parri and Carlo Vitucci

Abstract: Radio access networks for future LTE/5G scenarios need to be designed so as to satisfy increasingly stringent requirements in terms of overall capacity, individual user performance, flexibility and power efficiency. This is triggering a major shift in the telecom industry from statically sized, physically provisioned network appliances towards the use of virtualized network functions that can be elastically deployed within a flexible private cloud of network operators. However, a major issue in delivering strong QoS levels is keeping in check the temporal interference among co-located services, as they compete in accessing shared physical resources. In this paper, this problem is tackled by proposing a solution making use of a real-time scheduler with strong temporal isolation guarantees at the OS/kernel level. This allows for the development of a mathematical model linking major parameters of the system configuration and input traffic characterization with the achieved performance and response-time probabilistic distribution. The model is verified through extensive experiments made on Linux on a synthetic benchmark tuned according to data from a real LTE packet processing scenario.

Area 6 - Service Modelling and Analytics

Full Papers
Paper Nr: 64
Title:

A Framework for Certification of Large-scale Component-based Parallel Computing Systems in a Cloud Computing Platform for HPC Services

Authors:

Allberson Bruno de Oliveira Dantas, Francisco Heron de Carvalho Junior and Luís Soares Barbosa

Abstract: This paper addresses the verification of software components in the context of their orchestration to build cloud-based scientific applications with high performance computing requirements. In such a scenario, components are often supplied by different sources and their cooperation relies on assumptions of conformity with their published behavioral interfaces. Therefore, a faulty or ill-designed component, failing to obey the envisaged behavioral requirements, may have dramatic consequences in practice. Certifier components, introduced in this paper, implement a verification-as-a-service framework and are able to access the implementation of other components and verify their consistency with respect to a number of functional, safety and liveness requirements relevant to a specific application or a class of them. It is shown how certifier components can be smoothly integrated in HPC Shelf, a cloud-based platform for high performance computing in which different sorts of users can design, deploy and execute scientific applications.

Paper Nr: 95
Title:

Topology Splitting and Matching for Multi-Cloud Deployments

Authors:

Karoline Saatkamp, Uwe Breitenbücher, Oliver Kopp and Frank Leymann

Abstract: For automating the deployment of applications in cloud environments, a variety of deployment automation technologies have been developed in recent years. These technologies enable specifying the desired deployment in the form of deployment models, which can be automatically executed. However, changing internal or external conditions often lead to strategic decisions that must be reflected in all deployment models of a company's IT. Unfortunately, while creating such deployment models is difficult, adapting them is even harder as typically a variety of technologies must be replaced. In this paper, we present the Split and Match Method that enables splitting a deployment model following a manually specified distribution on the business layer. The method also enables automatically deploying the resulting model without the need for manual intervention and, thus, significantly eases reflecting strategic decisions on the technical deployment layer. We present a formalization and algorithms to automate the steps of the method. Moreover, we validate the practical feasibility of the presented concepts by a prototype based on the TOSCA standard and the OpenTOSCA ecosystem.

Short Papers
Paper Nr: 27
Title:

Towards a REST Cloud Computing Lexicon

Authors:

Fabio Petrillo, Philippe Merle, Naouel Moha and Yann-Gaël Guéhéneuc

Abstract: Cloud computing is a popular Internet-based computing paradigm that provides on-demand computational services and resources, generally offered by cloud providers' REpresentational State Transfer (REST) APIs. To the best of our knowledge, there has been no study on the analysis of the lexicon adopted by cloud providers, despite its importance for developers. In this paper, we studied three different and well-known REST APIs (Google Cloud Platform, OpenStack, and Open Cloud Computing Interface) to investigate and organise their lexicons. This study presents three main contributions: 1) a tooled approach, called CloudLex, for extracting and analysing REST cloud computing lexicons, 2) a dataset of services, resources, and terms used in the three studied REST APIs, 3) our analysis of this dataset, which represents a first attempt to provide a common REST cloud computing lexicon. After analysing our dataset, we observe that although the three studied REST APIs describe the same domain (cloud computing), they do not, contrary to what one might expect, share a large number of common terms: only 5% of terms (17/352) are shared by two providers. Thus, the three APIs are lexically heterogeneous, and there is no consensus on which terms to use in cloud computing systems. We discuss new avenues for cloud computing API designers and researchers.

Paper Nr: 49
Title:

Audio-visual Cues for Cloud Service Monitoring

Authors:

David Bermbach and Jacob Eberhardt

Abstract: When monitoring their systems’ states, DevOps engineers and operations teams alike, today, have to choose whether they want to dedicate their full attention to a visual dashboard showing monitoring results or whether they want to rely on threshold- or algorithm-based alarms which always come with false positive and false negative signals. In this work, we propose an alternative approach which translates a stream of cloud monitoring data into a continuous, normalized stream of score changes. Based on the score level, we propose to gradually change environment factors, e.g., music output or ambient lighting. We do this with the goal of enabling developers to subconsciously become aware of changes in monitoring data while dedicating their full attention to their primary task. We evaluate this approach through our proof-of-concept implementation AudioCues, which gradually adds dissonances to music output, and an empirical study with said prototype.

Paper Nr: 54
Title:

Towards Modeling Monitoring of Smart Traffic Services in a Large-scale Distributed System

Authors:

Andreea Buga and Sorana Tania Nemes

Abstract: Smart traffic solutions have become an important component of today's cities, due to their aim of improving the quality of life of inhabitants and reducing the time spent in transportation. They are deployed across large distributed systems and require a robust infrastructure. Their complex structure has been addressed numerous times in practice, but rarely in a formal manner. We propose in this paper a formal modeling approach for monitoring traffic systems and identifying possible failures of traffic sensors. Ensuring a safe and robust deployment and execution of services implies having a clear view of the system status, which is analysed by the monitoring framework. Our work focuses on availability aspects and makes use of the Abstract State Machines modeling technique for specifying the solution. The framework is defined as an Abstract State Machine agent and simulated in the ASMETA tool.

Paper Nr: 62
Title:

Anomaly Detection for Soft Security in Cloud based Auditing of Accounting Systems

Authors:

Mats Neovius and Bob Duncan

Abstract: Achieving information security in the cloud is not a trivial exercise. When the systems involved are accounting software systems, this becomes much more challenging in the cloud, due to the system architectures in use, the challenges of proper configuration, and the multiplicity of attacks that can be made against such systems. A particular issue for accounting systems concerns maintaining a proper audit trail in order that an adequate level of audit may be carried out on the accounting records contained in the system. In this paper we discuss the implications of the traditional approach to such systems and propose a complementary soft security solution relying on detecting behavioural anomalies by evidence theory. The contribution is in conceptualising the anomalies and providing a somewhat theoretical solution for a difficult and challenging problem. The proposed solution is applicable within any domain consisting of rigorous processes and a risk of tampering or data exfiltration, such as cloud based accounting systems.

Paper Nr: 66
Title:

Cost-aware Application Development and Management using CLOUD-METRIC

Authors:

Alieu Jallow, Andreas Hellander and Salman Toor

Abstract: Traditional application development tends to focus on two key objectives: the best possible performance and a scalable system architecture. This application development logic works well on private resources, but with the growing use of public IaaS it is essential to find a balance between the cost and the performance of an application. Here we propose CLOUD-METRIC: a lightweight framework for cost-aware development of applications to be deployed in public clouds. The key functionality of CLOUD-METRIC is to allow users to develop applications on private IaaS (a dedicated cluster or an in-house cloud) while estimating the cost of running them on public IaaS. We have demonstrated the strengths of CLOUD-METRIC using two challenging use-cases orchestrated on SNIC Science Cloud, a community OpenStack cloud in Sweden, providing recommendations for a deployment strategy and associated cost estimates in Amazon EC2 and Google Compute Platform. In addition to cost estimation, the framework can also be used for basic application monitoring and as a real-time programming support tool to find bottlenecks in the distributed architecture.

Paper Nr: 97
Title:

Model Driven Cloud Orchestration by Combining TOSCA and OCCI

Authors:

Fabian Glaser, Johannes Erbel and Jens Grabowski

Abstract: To tackle the problem of cloud-provider lock-in, several standards have emerged in recent years which aim to provide a unified interface to cloud resources. The Open Cloud Computing Interface (OCCI) focuses on the standardization of a common API for Infrastructure-as-a-Service (IaaS) providers, while the Topology and Orchestration Specification for Cloud Applications (TOSCA) focuses on the standardization of a template language to enable the proper definition of the topology of cloud applications and their orchestration on top of an IaaS cloud. TOSCA, however, does not define how the application topologies are created on the cloud. Therefore, it is worthwhile to analyse the conceptual similarities between the two approaches and the possibilities of integrating both. In this paper, we provide an overview of the similarities between the two standardization approaches. Furthermore, we define a concept of a fully model driven cloud orchestrator based on the two standards.

Paper Nr: 98
Title:

A Review of Cloud Computing Simulation Platforms and Related Environments

Authors:

James Byrne, Sergej Svorobej, Konstantinos M. Giannoutakis, Dimitrios Tzovaras, P. J. Byrne, Per-Olov Östberg, Anna Gourinovitch and Theo Lynn

Abstract: Recent years have seen an increasing trend towards the development of Discrete Event Simulation (DES) platforms to support cloud computing related decision making and research. The complexity of cloud environments is increasing with scale and heterogeneity, posing a challenge for the efficient management of cloud applications and data centre resources. The increasing ubiquity of social media, mobile and cloud computing combined with the Internet of Things and emerging paradigms such as Edge and Fog Computing is exacerbating this complexity. Given the scale, complexity and commercial sensitivity of hyperscale computing environments, the opportunity for experimentation is limited and requires substantial investment of resources both in terms of time and effort. DES provides a low risk technique for providing decision support for complex hyperscale computing scenarios. In recent years, there has been a significant increase in the development and extension of tools to support DES for cloud computing, resulting in a wide range of tools which vary in terms of their utility and features. Through a review and analysis of available literature, this paper provides an overview and multi-level feature analysis of 33 DES tools for cloud computing environments. This review updates and extends existing reviews to include not only autonomous simulation platforms, but also plugins and extensions for specific cloud computing use cases. This review identifies the emergence of CloudSim as a de facto base platform for simulation research and shows a lack of tool support for distributed execution (parallel execution on distributed memory systems).

Posters
Paper Nr: 46
Title:

Cloud Computing Financial and Cost Analysis: A Case Study of Saudi Government Agencies

Authors:

Mohammed Mreea, Kumudu Munasinghe and Dharmendra Sharma

Abstract: Cloud computing is a major innovation in information technology, used to provide organizational services as a utility over the Internet, and its uptake promises improved effectiveness and efficiency. However, there is an absence of academic studies on the financial feasibility and implementation cost of cloud uptake by government agencies. This paper seeks to identify the cloud computing financial indicators and implementation cost variables which are relevant to public organizations. The proposed model builds on case study findings based on analysis of evidence from Saudi government organizations. Random samples of professionals from different categories in Saudi Arabia participated in the questionnaire to extract and confirm financial indicators and implementation cost variables. The results indicate that return on investment (ROI) and total cost of ownership (TCO) are the main financial indicators for studying cloud adoption. Also, data centre variable and fixed cost parameters play the main role in calculating cloud implementation cost.
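The two indicators the abstract singles out, ROI and TCO, have standard textbook definitions that can be sketched in a few lines. The following is a generic illustration of those definitions, not the paper's cost model; the function names and parameters are ours.

```python
def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment as a fraction: (benefit - cost) / cost.

    An ROI of 0.5 means the investment returned 50% on top of its cost.
    """
    return (total_benefit - total_cost) / total_cost


def tco(capex: float, annual_opex: float, years: int) -> float:
    """Total cost of ownership over a period: up-front capital
    expenditure plus recurring operating expenditure.

    A cloud-adoption study would typically compare the TCO of an
    on-premises data centre (high capex, fixed costs) against a cloud
    deployment (low capex, usage-driven opex) over the same horizon.
    """
    return capex + annual_opex * years
```

For example, a deployment costing 100 that yields 150 in benefits has an ROI of 0.5, and a system with 1000 up-front cost and 200 per year of operations has a five-year TCO of 2000.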

Paper Nr: 50
Title:

Clustering Goal-Driven Security Factors for Protecting Data in Cloud Storage using Exploratory Factor Analysis (EFA): An Empirical Study

Authors:

Fara Yahya, Robert J Walters and Gary B Wills

Abstract: The purpose of this paper is to explore the important security factors for protecting data in cloud storage from the perspective of security practitioners. The study consists of 43 security variables (or indicator items) from a survey of security practitioners in Malaysia. Exploratory factor analysis (EFA) is conducted to understand the clusters of variables (or indicator items) and the inter-relationships constructing the security factors (or components). Most of the respondents are from public sector organisations (government and higher education organisations) in Malaysia. The clusters of variables resulting from this analysis can be used as a reference by security practitioners planning to produce security policies to protect data stored in cloud storage. The top security factors identified from this study are shown in terms of policy implementation and controls in confidentiality, integrity, availability, non-repudiation, authenticity, reliability, accountability and auditability of data services in cloud storage.

Paper Nr: 77
Title:

A Preliminary Systematic Review of Computer Science Literature on Cloud Computing Research using Open Source Simulation Platforms

Authors:

Theo Lynn, Anna Gourinovitch, James Byrne, P. J. Byrne, Sergej Svorobej, Konstaninos Giannoutakis, David Kenny and John Morrison

Abstract: Research and experimentation on live hyperscale clouds is limited by their scale, complexity, value, and issues of commercial sensitivity. As a result, there has been an increase in the development, adaptation and extension of cloud simulation platforms for cloud computing to enable enterprises, application developers and researchers to undertake both testing and experimentation. While there have been numerous surveys of cloud simulation platforms and their features, few surveys examine how these cloud simulation platforms are being used for research purposes. This paper provides a preliminary systematic review of literature on this topic covering 256 papers from 2009 to 2016. The paper aims to provide insights into the current status of cloud computing research using open source cloud simulation platforms. Our two-level analysis scheme includes a descriptive and synthetic analysis against a highly cited taxonomy of cloud computing. The analysis uncovers some imbalances in research and the need for a more granular and refined taxonomy against which to classify cloud computing research using simulators. The paper can be used to guide literature reviews in the area and identifies potential research opportunities for cloud computing and simulation researchers, complementing extant surveys on cloud simulation platforms.

Area 7 - Services Science

Full Papers
Paper Nr: 17
Title:

Cost Optimization on Public Cloud Provider for Big Geospatial Data

Authors:

Joao Bachiega Junior, Marco Antonio Sousa Reis, Aleteia P. F. de Araujo and Maristela Holanda

Abstract: Big geospatial data is the emerging paradigm for the enormous amount of information made available by the development and widespread use of Geographical Information System (GIS) software. However, this new paradigm presents challenges in data management, which requires tools for large-scale processing, due to the great volumes of data. Spatial Cloud Computing offers facilities to overcome the challenges of a big data environment, providing significant computer power and storage. SpatialHadoop, a fully-fledged MapReduce framework with native support for spatial data, serves as one such tool for large-scale processing.  However, in cloud environments, the high cost of processing and system storage in the providers is a central challenge. To address this challenge, this paper presents a cost-efficient method for processing geospatial data in public cloud providers. The data validation software used was Open Street Map (OSM). Test results show that it can optimize the use of computational resources by up to 263% for available SpatialHadoop datasets.

Paper Nr: 18
Title:

Towards Semantic KPI Measurement

Authors:

Kyriakos Kritikos, Dimitris Plexousakis and Robert Woitsch

Abstract: Linked Data (LD) represent a great mechanism towards integrating information across disparate sources. The respective technology can also be exploited to perform inferencing for deriving added-value knowledge. As such, LD technology can really assist in performing various analysis tasks over information related to business process execution. In the context of Business Process as a Service (BPaaS), the first real challenge is to collect and link information originating from different systems by following a certain structure. As such, this paper proposes two main ontologies that serve this purpose: a KPI ontology and a Dependency ontology. Based on these well-connected ontologies, an innovative Key Performance Indicator (KPI) analysis system is then built which exhibits two main analysis capabilities: KPI assessment and drill-down, where the second can be exploited to find root causes of KPI violations. Compared to other KPI analysis systems, LD usage enables the flexible construction and assessment of any kind of KPI, allowing experts to better explore the possible KPI space.

Paper Nr: 24
Title:

Thoth: Automatic Resource Management with Machine Learning for Container-based Cloud Platform

Authors:

Akkarit Sangpetch, Orathai Sangpetch, Nut Juangmarisakul and Supakorn Warodom

Abstract: Platform-as-a-Service (PaaS) providers often encounter fluctuation in computing resource usage due to workload changes, resulting in performance degradation. To maintain acceptable service quality, providers may need to manually adjust resource allocation according to workload dynamics. Unfortunately, this approach will not scale well as the number of applications grows. We thus propose Thoth, a dynamic resource management system for PaaS using Docker container technology. Thoth automatically monitors resource usage and dynamically adjusts the amount of resources allocated to each application. To implement the automatic-scaling algorithm, we study and evaluate three algorithms: a neural network, Q-learning, and our own rule-based algorithm. The experimental results suggest that Q-learning adapts best to load changes, followed by the rule-based algorithm and the neural network. With Q-learning, Thoth can save computing resources by 28.95% and 21.92% compared to the neural network and the rule-based algorithm, respectively, without compromising service quality.
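To make the Q-learning approach concrete, the sketch below shows a minimal tabular Q-learning auto-scaler of the general kind the abstract describes: states are coarse CPU-utilization buckets and actions add or remove a container replica. This is an illustrative toy, not Thoth's implementation; the class, state encoding, and hyperparameters are all our own assumptions.

```python
import random


class QScaler:
    """Minimal tabular Q-learning auto-scaler (illustrative sketch).

    States: coarse CPU-utilization buckets in [0, 1).
    Actions: change the replica count by -1, 0 or +1.
    A reward function (supplied by the caller) would typically penalize
    both SLO violations (utilization too high) and waste (too low).
    """
    ACTIONS = (-1, 0, +1)

    def __init__(self, buckets=5, alpha=0.5, gamma=0.9, eps=0.1):
        self.q = [[0.0] * len(self.ACTIONS) for _ in range(buckets)]
        self.buckets = buckets
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def state(self, cpu_util: float) -> int:
        """Map a utilization fraction to a discrete state index."""
        return min(int(cpu_util * self.buckets), self.buckets - 1)

    def act(self, s: int) -> int:
        """Epsilon-greedy action selection; returns an action index."""
        if random.random() < self.eps:
            return random.randrange(len(self.ACTIONS))
        row = self.q[s]
        return row.index(max(row))

    def learn(self, s: int, a: int, reward: float, s_next: int) -> None:
        """Standard Q-learning update toward the observed reward."""
        best_next = max(self.q[s_next])
        self.q[s][a] += self.alpha * (
            reward + self.gamma * best_next - self.q[s][a])
```

In a monitoring loop, each control period would observe utilization, pick an action via `act`, apply the replica change, then call `learn` with a reward reflecting the resulting service quality and resource cost.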

Paper Nr: 60
Title:

Anything to Topology - A Method and System Architecture to Topologize Technology-specific Application Deployment Artifacts

Authors:

Christian Endres, Uwe Breitenbücher, Frank Leymann and Johannes Wettinger

Abstract: In recent years, many application deployment technologies have emerged such as configuration management tools, e.g., Chef and Juju, infrastructure and platform technologies, e.g., Cloud Foundry and OpenStack, as well as container-based approaches, e.g., Docker. As a result, many repositories exist which contain executable and heavily used artifacts that can be used with these technologies, e.g., to deploy a WordPress application. However, to automate the deployment of more complex applications, typically, multiple of these technologies have to be used in combination. Thus, often, diverse artifacts stored in different repositories need to be integrated. This requires expertise about each technology and leads to a manual, complex, and error-prone integration step. In this paper, we tackle these issues: We present a method and system architecture that enables crawling repositories in order to transform the contained artifacts into technology-agnostic topology models, each describing the components that get installed as well as their dependencies. We show how these topologies can be combined to model the deployment of complex applications and how the resulting topology can be deployed automatically by one runtime. To prove the feasibility, we developed and evaluated a prototype based on the TOSCA standard and conducted a case study for Chef artifacts.

Paper Nr: 69
Title:

Predictive Failure Recovery in Constraint-aware Web Service Composition

Authors:

Touraj Laleh, Joey Paquet, Serguei Mokhov and Yuhong Yan

Abstract: A large number of web service composition methods have been proposed. Most of them are based on the matching of input/output and QoS parameters. However, most services in the real world have conditions or restrictions that are imposed by their providers. These conditions must be met to ensure the correct execution of the service. Therefore, constraint-aware service composition methods have been proposed to take care of constraints both at composition and execution time. Failure to meet constraints inside a composite plan results in the failure of execution of the whole composite service. Recovery from such failures implies service usage rollback while an alternate plan is found to continue the execution to completion. In this paper, a constraint-aware failure recovery approach is proposed to predict failures inside a composite service. A method is then proposed to perform failure recovery based on those predictions and minimize the number of service rollbacks. The proposed solution includes an AI-planning-based algorithm and a novel constraint processing method for service failure prediction and recovery. A publicly available test set generator is used to evaluate and analyze the proposed solution.

Short Papers
Paper Nr: 4
Title:

Double Auction for Resource Allocation in Cloud Computing

Authors:

Zhichao Zhao, Fei Chen, T-H. Hubert Chan and Chuan Wu

Abstract: Cloud computing has become more and more popular as more companies choose to deploy their services and applications to the cloud. In particular, trading unused cloud resources provides extra profits for companies with rapidly changing needs. A cloud market enables trading additional resources between buyers and sellers, where a buyer may have different valuations for different instances of the same resource due to factors such as geographical location, configuration, etc. In this paper, we study double auctions with non-identical items for cloud resource allocation, and develop a framework to decompose the design of truthful double auctions. We propose two auctions based on the framework that achieve: (i) truthfulness; (ii) individual rationality; and (iii) budget balance. We prove that the social welfare is constant-competitive with the (not necessarily truthful) optimal auction under certain distributions. We run simulations to investigate the social welfare achieved by our auctions. We use different probability distributions to capture various scenarios in the real world. Results show that our mechanisms generally achieve at least half of the optimal social welfare, while one auction achieves over a 0.9 fraction of the optimal in some circumstances.
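For readers unfamiliar with truthful double auctions, the classic trade-reduction idea (due to McAfee) for the simpler identical-items setting illustrates how truthfulness, individual rationality, and budget balance can be obtained together: sacrifice the least valuable efficient trade and use its bid and ask to price the rest. The sketch below is that textbook mechanism, not the paper's auctions, which extend the setting to non-identical items.

```python
def trade_reduction_auction(bids, asks):
    """McAfee-style trade-reduction double auction for identical items
    (illustrative sketch of the textbook mechanism, not the paper's).

    Each buyer submits one bid, each seller one ask. Returns
    (number_of_trades, price_paid_by_buyers, price_received_by_sellers).
    Dropping the marginal efficient pair and pricing by it makes the
    mechanism truthful, individually rational, and (weakly) budget
    balanced, at the cost of one lost trade.
    """
    bids = sorted(bids, reverse=True)   # buyers, highest valuation first
    asks = sorted(asks)                 # sellers, lowest cost first
    # k = number of efficient trades: pairs where bid >= ask
    k = 0
    while k < min(len(bids), len(asks)) and bids[k] >= asks[k]:
        k += 1
    if k < 2:
        return 0, None, None  # too few efficient pairs to reduce
    # The first k-1 pairs trade; the k-th (marginal) pair sets prices.
    return k - 1, bids[k - 1], asks[k - 1]
```

With bids [10, 8, 6, 2] and asks [1, 3, 5, 9] there are three efficient pairs; two of them trade, buyers pay 6 and sellers receive 5, and the 1-per-trade gap goes to the auctioneer, which is why the budget balance is weak.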

Paper Nr: 14
Title:

DISCO: A Dynamic Self-configuring Discovery Service for Semantic Web Services

Authors:

Islam Elgedawy

Abstract: The service discovery process involves many complex tasks such as service identification, composition, selection, and adaptation. Currently, there exist many discovery schemes that separately handle such discovery tasks. When a company needs to build a discovery service, it manually selects the suitable discovery schemes, encapsulates them as services, then invokes them as a composite web service. However, when different discovery tasks/schemes are needed, such composite discovery service needs to be manually reconfigured, and different versions of the discovery service are created and managed. To overcome such problems, we propose to build a dynamic self-configuring discovery service (i.e., DISCO), that takes the required discovery policy from users, then automatically finds the suitable discovery schemes in a context-sensitive manner, and finally arranges them as a collection of executable BPEL processes. This is done by adopting different types of knowledge regarding the services’ aspects, discovery schemes, and the adopted software ontologies. Such different knowledge types are captured and managed by the previously proposed JAMEJAM framework. Experimental results show that DISCO successfully managed to reconfigure itself for different discovery policies.

Paper Nr: 37
Title:

Privacy-aware Data Storage in Cloud Computing

Authors:

Rémy Pottier and Jean-Marc Menaud

Abstract: The increasing number of cloud storage services like Dropbox or Google Drive allows users to store more and more data on the Internet. However, these services do not give users enough guarantees in protecting the privacy of their data. In order to limit the risk that the storage service scans user documents, for example for commercial purposes, we propose a storage service that stores data on several cloud providers while preventing these providers from reading user documents. The proposed sky storage service (i.e., a service composed of several cloud services), named SkyStore, protects user privacy by breaking user documents into blocks and spreading these blocks over many cloud storage providers. The architecture of this service ensures that SkyStore cannot read user documents. It connects users directly to cloud providers in order to avoid trusting a third party. This paper consists of two parts. First, the sky service architecture is described, detailing the different protections provided to secure user documents. Second, the consequences of this architecture for performance are discussed.
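The block-dispersal idea at the heart of the abstract can be sketched in a few lines: cut a document into fixed-size blocks and spread them round-robin across providers, so that no single provider holds the whole document. This is a hypothetical illustration of the concept only; SkyStore's actual block layout, naming, and protections are not described here.

```python
def split_blocks(data: bytes, block_size: int, providers: list):
    """Split data into fixed-size blocks and spread them round-robin
    across providers (illustrative sketch of block dispersal).

    Returns {provider: [(block_index, block_bytes), ...]}. No single
    provider receives the complete document (for more than one block).
    """
    placement = {p: [] for p in providers}
    for i in range(0, len(data), block_size):
        idx = i // block_size
        placement[providers[idx % len(providers)]].append(
            (idx, data[i:i + block_size]))
    return placement


def reassemble(placement) -> bytes:
    """Recover the original document by ordering blocks by index."""
    blocks = sorted(b for blist in placement.values() for b in blist)
    return b"".join(block for _, block in blocks)
```

A real system would additionally encrypt each block client-side before upload, so that even a provider colluding with others learns nothing from the blocks it stores.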

Paper Nr: 40
Title:

A LRAAM-based Partial Order Function for Ontology Matching in the Context of Service Discovery

Authors:

Hendrik Ludolph, Peter Kropf and Gilbert Babin

Abstract: The demand for Software as a Service is increasing heavily in the cloud era. With this demand comes a proliferation of third-party service offerings to fulfill it. It thus becomes crucial for organizations to find and select the right services to be integrated into their existing tool landscapes. Ideally, this is done automatically and continuously. The objective is to always provide the best possible support to changing business needs. In this paper, we explore an artificial neural network implementation, an LRAAM, as the specific oracle to control the selection process. We implemented a proof of concept and conducted experiments to explore the validity of the approach. We show that our implementation of the LRAAM performs correctly under specific parameters. We also identify limitations of using the LRAAM in this context.

Paper Nr: 44
Title:

Checking Realizability of a Timed Business Processes Choreography

Authors:

Manuel I. Capel

Abstract: A business process (BP) can be understood as a set of related, structured, interacting services acting as peers, according to an intended choreography that is capable of giving complex functionality to customers. Several authors have made progress in solving the "choreography realization" problem. The research work carried out in this paper amounts to analyzing and automatically checking the realizability of the defined choreography for services that communicate through messages in a general, distributed, and highly parallel system.

Paper Nr: 63
Title:

A Cross-layer Monitoring Solution based on Quality Models

Authors:

Damianos Metallidis, Chrysostomos Zeginis, Kyriakos Kritikos and Dimitris Plexousakis

Abstract: In order to implement cross-organizational workflows and to realize collaborations between small and medium enterprises (SMEs), the use of Web service technology, Service-Oriented Architecture and Infrastructure-as-a-Service (IaaS) has become a necessity. Based on these technologies, the need for monitoring the quality of (a) the acquired resources, (b) the services offered to the final users and (c) the workflow-based procedures used by SMEs in order to use services, has come to the fore. To tackle this need, we propose four metric Quality Models that cover quality terms for the Workflow, Service and Infrastructure layers, and an additional one for expressing the equality and inter-dependency relations between the previous ones. To support these models we have implemented a cross-layer monitoring system, whose main advantages are the layer-specific metric aggregators and an event pattern discoverer for processing the monitoring log. Our evaluation is based on the performance and accuracy aspects of the proposed cross-layer monitoring system.

Paper Nr: 68
Title:

On the Implicit Cost Structure of Service Levels from the Perspective of the Service Consumer

Authors:

Maximilian Christ, Julius Neuffer and Andreas W. Kempa-Liehr

Abstract: As services are ubiquitous in the modern business landscape, there is the need to define them in a binding legal framework, the Service Level Agreement (SLA). The most important aspect of an SLA is the agreed service level, which specifies the availability of the service. In this work, we discuss a simple mathematical service model, where the availability of a service is based on a singular resource. In this model one can relate the parameters of a linear cost structure to the purchased service level. Based on this relation we formulate a rule of thumb enabling a service consumer to check whether an agreed service level fits their cost structure.

Paper Nr: 79
Title:

Designing Uniform Database Representations for Cloud Data Interchange Services

Authors:

Alina Andreica

Abstract: The paper proposes design principles for data representation in cloud data interchange services among various information systems. We apply equivalence algorithms and canonical representations in order to ensure uniform representation in the cloud database. The solution we describe, designed to be provided within cloud architectures, brings important advantages in organizational communication and cooperation, with significant societal benefits, and the generic design principles we apply likewise simplify the design of the cloud interchange services.

Paper Nr: 85
Title:

PrEstoCloud: Proactive Cloud Resources Management at the Edge for Efficient Real-Time Big Data Processing

Authors:

Yiannis Verginadis, Iyad Alshabani, Gregoris Mentzas and Nenad Stojanovic

Abstract: Among the greatest challenges of cloud computing is to automatically and efficiently exploit infrastructural resources in a way that minimises cloud fees without compromising the performance of resource-demanding cloud applications. In this respect, the consideration of using processing nodes at the edge of the network increases considerably the complexity of these challenges. The PrEstoCloud idea encapsulates a dynamic, distributed, self-adaptive and proactively configurable architecture for processing Big Data streams. In particular, PrEstoCloud aims to combine real-time Big Data, mobile processing and cloud computing research in a unique way that entails proactive use of cloud resources and extension of the fog computing paradigm to the extreme edge of the network. The envisioned PrEstoCloud solution is driven by the microservices paradigm and is structured across five conceptual layers: i) Meta-management; ii) Control; iii) Cloud infrastructure; iv) Cloud/Edge communication; and v) Devices.

Paper Nr: 108
Title:

A Cloud-driven View on Business Process as a Service

Authors:

Jörg Domaschka, Frank Griesinger, Daniel Seybold and Stefan Wesner

Abstract: Cloud computing promises to provide flexible IT solutions. This correlates with an increasing demand for flexibility of business processes in companies. However, there is still a huge gap between business and IT management. The evolution of cloud service models tries to bridge this by bringing up fine-grained and multi-dimensional service models. One of the new service models is Business Process as a Service (BPaaS), which promises to bridge the gap from business processes to cloud computing. Yet, the BPaaS paradigm is not thoroughly classified with respect to the cloud computing characteristics. In this paper we introduce a first classification of the BPaaS paradigm with a focus on the common cloud characteristics. To this end, we analyze the traditional path from a business process model to its execution via on-demand resources and derive a leveled model for BPaaS. For each level, we introduce its entities in terms of (i) correlation to cloud characteristics, (ii) concepts and (iii) tools, and evaluate its cloudification options, i.e. the ability to support the provision of a business process as a service. The presented work enables the categorisation of items in the BPaaS paradigm and outlines how traditional business processes can be enabled for cloud delivery. This classification and analysis will be extended once the BPaaS paradigm has reached wider acceptance in academia and industry, and more standards have evolved.

Posters
Paper Nr: 30
Title:

Federated Cloud Service Broker (FCSB): An Advanced Cloud Service Intermediator for Public Administrations

Authors:

Juncal Alonso Ibarra, Leire Orue-Echevarria, Marisa Escalante, Gorka Benguria and Gorka Echevarria

Abstract: A cloud services brokerage is third-party software that adds value to cloud services on behalf of cloud service consumers. Its goal is to make the services more specific to a company, to integrate or aggregate services, to enhance their security, or to do anything that adds a significant layer of value (i.e. capabilities) to the original cloud services being offered (Plummer, 2012). There exist several solutions focused on providing service brokerage of Cloud Service Providers (CSPs), mainly of VMs and virtualized resources, but not of other services offered (e.g. Data Processing as a Service) or of SaaS applications which are certified and legally compliant. This paper proposes a solution for a Federated Cloud Service Broker (FCSB) overcoming existing challenges in the public sector such as governance, interoperability and portability, SLA compliance and assessment, intelligent discovery of cloud services, cross-border interoperability and legislation awareness. The analysis of the existing solutions and the presentation of the approach made in (Alonso et al., 2016) are complemented with a technical design including the main functionalities and the modules that will implement them.

Paper Nr: 91
Title:

Semantic IoT Middleware-enabled Mobile Complex Event Processing for Integrated Pest Management

Authors:

Francesco Nocera, Tommaso di Noia, Marina Mongiello and Eugenio Di Sciascio

Abstract: The agricultural domain presents challenges typical of the Cyber-Physical Systems field and of the core, new-generation information technologies, such as Cloud computing and the Internet of Things. In fact, modern agricultural management strongly relies on many different sensing methodologies to provide accurate information about crop, climate, and environmental conditions. In this paper we propose an approach to model a mobile-driven and distributed Complex Event Processing solution which is enabled by an IoT middleware. The proposed framework is robust with respect to contextual and environmental changes, also thanks to the exploitation of an ontological model.

Paper Nr: 92
Title:

Together, Yet Apart - The Research Prototype Architecture Dilemma

Authors:

Falko Koetter, Monika Kochanowski, Florian Maier and Thomas Renner

Abstract: Distributed research projects combine the know-how of industry and research partners from different organizations and countries. In IT, joint software development of research prototypes is an integral part of such projects. However, project partners have individual interests in the developments, ranging from creating new projects to finishing a PhD thesis. This leads to a dilemma - components need to work together within the projects, but have an individual purpose apart from the projects. In this work, we investigate the impact of research project characteristics, in particular the aforementioned dilemma, on software architecture in research projects. From expert interviews, we identify unique architectural challenges inherent to research projects. In this position paper, we argue that these challenges must be considered when planning and executing research projects.

Paper Nr: 101
Title:

A Concept for Interoperable IoT Intercloud Architectures

Authors:

Philipp Grubitzsch, Thomas Springer, Tenshi Hara, Iris Braun and Alexander Schill

Abstract: Cloud platforms have evolved over the last years as means to provide value-added services for Internet of Things (IoT) infrastructures, particularly smart home applications. From different use cases the necessity arises to connect IoT cloud solutions of different vendors. While some established platforms support an integration of other vendors’ systems into their own infrastructure, solutions to federate IoT cloud platforms can hardly be found. In this paper, we analyze existing IoT cloud platforms with respect to their similarities and derive a concept of an Intercloud Broker (IB) that enables the establishment of an IoT Intercloud to support interoperability of cloud-based IoT platforms from different vendors. To demonstrate the feasibility of our approach we evaluated the overhead introduced by the Intercloud Broker. As the results show, the IB can be implemented with minimal overhead in terms of throughput and delay even on commodity hardware.

Area 8 - Cloud Computing Platforms and Applications

Full Papers
Paper Nr: 8
Title:

Smuggling Multi-cloud Support into Cloud-native Applications using Elastic Container Platforms

Authors:

Nane Kratzke

Abstract: Elastic container platforms (like Kubernetes, Docker Swarm, Apache Mesos) fit very well with existing cloud-native application architecture approaches. So it is more than astonishing that these already existing, open-source elastic platforms are not considered more consistently in multi-cloud research. Elastic container platforms provide inherent multi-cloud support that can be easily accessed. We present a proposal for a control process which is able to scale (and, as a side effect, migrate) elastic container platforms across different public and private cloud-service providers. This control loop can be used in the execution phase of self-adaptive auto-scaling MAPE loops (monitoring, analysis, planning, execution). Additionally, we present several lessons learned from our prototype implementation which might be of general interest to researchers and practitioners. For instance, describing only the intended state of an elastic platform and letting a single control process take care of reaching this intended state is far less complex than defining plenty of specific, multi-cloud-aware workflows to deploy, migrate, terminate, scale up and scale down elastic platforms or applications.

Paper Nr: 47
Title:

Managing and Unifying Heterogeneous Resources in Cloud Environments

Authors:

Dapeng Dong, Paul Stack, Huanhuan Xiong and John P. Morrison

Abstract: A mechanism for accessing heterogeneous resources through the integration of various cloud management platforms is presented. In this scheme, hardware resources are offered using virtualization, containerization and as bare metal. Traditional management frameworks for managing these offerings are employed and invoked using a novel resource coordinator. This coordinator also provides an interface for cloud consumers to deploy applications on the underlying heterogeneous resources. The realization of this scheme in the context of the CloudLightning project is presented and a demonstrative use case is given to illustrate the applicability of the proposed solution.

Short Papers
Paper Nr: 12
Title:

Uncertainty-aware Optimization of Resource Provisioning, a Cloud End-user Perspective

Authors:

Masoumeh Tajvidi, Michael J. Maher and Daryl Essam

Abstract: Cloud computing offers a customer the possibility of the availability of large computational resources, while paying only for the resources used. However, because of uncertainty in the customer’s future demand and the future market price for the computational resources, obtaining these resources in a cost-effective and robust way is a difficult problem. The variety of pricing plans is a further complication. In this paper we solve this problem using two-stage stochastic programming, for the first time considering all three available pricing plans, i.e. on-demand, reservation, and spot pricing. Through our experimental implementation, we find that our model can lower the total operational cost by up to 1.5 percent compared to other solutions.

Paper Nr: 19
Title:

An on Demand Virtual CPU Architecture based on Cloud Infrastructure

Authors:

Erhan Gokcay

Abstract: Cloud technology provides different computational models, including but not limited to infrastructure, platform and software as a service. The motivation of a cloud system is to share resources in an optimal and cost-effective way by creating virtualized resources that can be distributed easily, but the distribution is not necessarily parallel. Another disadvantage is that small computational units, like smart devices and less powerful computers, are excluded from resource sharing. Also, different systems may have interoperability problems, since operating system and CPU designs differ from each other. In this paper, an on-demand, dynamically created computational architecture, inspired by CPU design and called the Cloud CPU, is described that can use any type of resource, including all smart devices. The computational and data transfer requirements of each unit are minimized. Because of this, the service can be created on demand, each time with a different functionality. The distribution of the calculation over not-so-fast internet connections is compensated for by massively parallel operation. The minimized computational requirements also reduce interoperability problems and increase fault tolerance thanks to the increased number of units in the system.

Paper Nr: 39
Title:

High Availability and Performance of Database in the Cloud - Traditional Master-slave Replication versus Modern Cluster-based Solutions

Authors:

Raju Shrestha

Abstract: High availability (HA) of databases is critical for the high availability of cloud-based applications and services. Master-slave replication has traditionally been used as a solution for this. Since master-slave replication uses either asynchronous or semi-synchronous replication, the technique suffers from a severe data inconsistency problem when the master crashes during a transaction. Modern cluster-based solutions address this through multi-master synchronous replication. These two HA database solutions have been investigated and compared both qualitatively and quantitatively. They are evaluated based on availability and performance through an implementation using the most recent version of MariaDB server, which supports both traditional master-slave replication and cluster-based replication via Galera cluster. The evaluation framework and methodology used in this paper would be useful for comparing and analyzing the performance of different high-availability database systems and solutions, which in turn would be helpful in picking an appropriate HA database solution for a given application.

Paper Nr: 81
Title:

MyMinder: A User-centric Decision Making Framework for Intercloud Migration

Authors:

Esha Barlaskar, Peter Kilpatrick, Ivor Spence and Dimitrios S. Nikolopoulos

Abstract: Each cloud infrastructure-as-a-service (IaaS) provider offers its own set of virtual machine (VM) images and hypervisors. This creates a vendor lock-in problem when cloud users try to change cloud provider (CP). Although a few user-side inter-cloud migration techniques have recently been proposed (e.g. nested virtualisation), these techniques do not provide dynamic cloud management facilities which could help users to decide whether or not to proceed with migration, when and where to migrate, etc. Such decision-making support in the post-deployment phase is crucial when the current CP’s Quality of Service (QoS) degrades while other CPs offer better QoS or the same service at a lower price. To ensure that users’ required QoS constraints are achieved, dynamic monitoring and management of the acquired cloud services are very important and should be integrated with the inter-cloud migration techniques. In this paper, we present the problem formulation and the architecture of a Multi-objective dYnamic MIgratioN Decision makER (MyMinder) framework that enables users to monitor and appropriately manage their deployed applications by providing decisions on whether to continue with the currently selected CP or to migrate to a different CP. The paper also discusses experimental results obtained when running a Spark linear regression application in Amazon EC2 and Microsoft Azure as an initial investigation to understand the motivating factors for live migration of cloud applications across cloud providers in the post-deployment phase.

Paper Nr: 84
Title:

Predicting the Stability of Large-scale Distributed Stream Processing Systems on the Cloud

Authors:

Tri Minh Truong, Aaron Harwood and Richard O. Sinnott

Abstract: Large-scale topology-based stream processing systems are non-trivial to build and deploy. They require understanding of the performance, cost of deployment and considerations of potential downtime. Our work considers stability as a primary characteristic of these systems. By stability, we mean that unstable systems exhibit large spikes in latency and can drop throughput frequently or unpredictably. Such instabilities can be due to variations of workloads or underlying hardware platforms that are often difficult to predict. To understand and tackle this for large-scale stream processing systems, we apply queueing theory and validate the results through a series of experiments on the Cloud.

Paper Nr: 89
Title:

Towards a Fuzzy-oriented Framework for Service Selection in Cloud e-Marketplaces

Authors:

Azubuike Ezenwoke, Olawande Daramola and Matthew Adigun

Abstract: The growing popularity of cloud services requires service selection platforms that offer enhanced user experience in terms of handling complex user requirements, elicitation of quality of service (QoS) requirements, and presentation of search results to aid decision making. So far, none of the existing cloud service selection approaches has provided a framework that wholly possesses these attributes. In this paper, we propose a fuzzy-oriented framework that could facilitate enhanced user experience in cloud e-marketplaces through formal composition of atomic services to satisfy complex user requirements, elicitation and processing of subjective user QoS requirements, and presentation of search results in a visually intuitive way that aids users’ decision making. To do this, an integration of key concepts such as constraint-based reasoning on feature models, fuzzy pairwise comparison of QoS attributes, fuzzy decision making, and information visualization has been used. The applicability of the framework is illustrated with an example of Customer Relationship Management as a Service.

Paper Nr: 93
Title:

The SePaDe System: Packaging Entire XaaS Layers for Automatically Deploying and Managing Applications

Authors:

Kálmán Képes, Uwe Breitenbücher and Frank Leymann

Abstract: The multitude of cloud providers and technologies diminish the interoperability and portability of applications by offering diverse and heterogeneous functionalities, APIs, and data models. Although there are integration technologies that provide uniform interfaces that wrap proprietary APIs, the differences regarding the services offered by providers, their functionality, and their management features are still major issues that impede portability. In this paper, we tackle these issues by introducing the SePaDe System, which is a pluggable deployment framework that abstracts from proprietary services, APIs, and data models in a new way: The system builds upon reusable archive templates that contain (i) a deployment model for a certain kind of application and (ii) all deployment and management logic required to provide defined functionalities and management features. Thus, by selecting appropriate templates, an application can be deployed on any infrastructure providing the specified features. We validate the practical feasibility of the approach by a prototypical implementation that is based on the TOSCA standard and present several case studies to evaluate its relevance.

Paper Nr: 94
Title:

Applications Deployment in Multiple PaaS Environments: Requirements, Challenges and Solutions

Authors:

Rami Sellami, Mehdi Ahmed-Nacer and Stéphane Mouton

Abstract: Cloud computing has recently attracted the full attention of many organizations due to its economic, business and technical benefits. Indeed, we observe that the proliferation of offers by cloud providers raises several challenges. One of these innovative challenges is application deployment across multiple PaaS providers. In fact, developers need to provision components of the same application across multiple PaaS depending on their related requirements and PaaS capabilities. They will not only have to deploy their applications, but they will also have to consider migrating services from one PaaS to another, and to manage distributed applications spanning multiple environments. In this paper, we present and discuss the requirements of application deployment in multiple PaaS providers and we analyze the current state of the art.

Paper Nr: 109
Title:

Separation of Concerns in Heterogeneous Cloud Environments

Authors:

Dapeng Dong, Huanhuan Xiong and John Morrison

Abstract: The majority of existing cloud service management frameworks implement tools, APIs, and strategies for managing the lifecycle of cloud applications and/or resources. They are provided as a self-service interface to cloud consumers. This self-service approach implicitly allows cloud consumers to have full control over the management of applications as well as the underlying resources such as virtual machines and containers. This subsequently narrows down the opportunities for Cloud Service Providers to improve resource utilization, power efficiency and potentially the quality of services. This work introduces a service management framework centred around the notion of Separation of Concerns. The proposed service framework addresses the potential conflicts between cloud service management and cloud resource management while maximizing user experience and cloud efficiency on each side. This is particularly useful as the current homogeneous cloud is evolving to include heterogeneous resources.

Area 9 - Cloud Computing Enabling Technology

Full Papers
Paper Nr: 7
Title:

A CKAN Plugin for Data Harvesting to the Hadoop Distributed File System

Authors:

Robert Scholz, Nikolay Tcholtchev, Philipp Lämmel and Ina Schieferdecker

Abstract: Smart Cities will mainly emerge around the opening of large amounts of data, which are currently kept closed by various stakeholders within an urban ecosystem. This data needs to be cataloged and made available to the community, such that applications and services can be developed for citizens and companies and for optimizing processes within the city itself. In that scope, the current work seeks to develop concepts and prototypes in order to enable and demonstrate how data cataloging and data storage can be merged towards the provisioning of large amounts of data in urban environments. The developed concepts, prototype, case study and accompanying evaluations are based on the integration of common technologies from the domains of Open Data and large-scale data processing in data centers, namely CKAN and Hadoop.

Paper Nr: 10
Title:

A Computation- and Network-Aware Energy Optimization Model for Virtual Machines Allocation

Authors:

Claudia Canali, Riccardo Lancellotti and Mohammad Shojafar

Abstract: Reducing energy consumption in cloud data center is a complex task, where both computation and network related effects must be taken into account. While existing solutions aim to reduce energy consumption considering separately computational and communication contributions, limited attention has been devoted to models integrating both parts. We claim that this lack leads to a sub-optimal management in current cloud data centers, that will be even more evident in future architectures characterized by Software-Defined Network approaches. In this paper, we propose a joint computation-plus-communication model for Virtual Machines (VMs) allocation that minimizes energy consumption in a cloud data center. The contribution of the proposed model is threefold. First, we take into account data traffic exchanges between VMs capturing the heterogeneous connections within the data center network. Second, the energy consumption due to VMs migrations is modeled by considering both data transfer and computational overhead. Third, the proposed VMs allocation process does not rely on weight parameters to combine the two (often conflicting) goals of tightly packing VMs to minimize the number of powered-on servers and of avoiding an excessive number of VM migrations. An extensive set of experiments confirms that our proposal, which considers both computation and communication energy contributions even in the migration process, outperforms other approaches for VMs allocation in terms of energy reduction.

Paper Nr: 41
Title:

Provisioning of Component-based Applications Across Multiple Clouds

Authors:

Mehdi Ahmed-Nacer, Sami Yangui, Samir Tata and Roch H. Glitho

Abstract: The several existing Platform-as-a-Service (PaaS) solutions provide application developers with various offers in terms of functional properties (e.g. storage) as well as non-functional properties (e.g. cost, security). Consequently, developers may need to provision components of the same application across several PaaS depending on their related requirements and/or PaaS capabilities. This paper proposes generic mechanisms that allow seamless provisioning of component-based applications across several PaaS. These mechanisms are based on the COAPS API, an already defined OCCI-compliant API that allows provisioning of monolithic applications in PaaS using generic descriptors and operations. To illustrate the proposed mechanisms, the paper showcases a realistic use case of provisioning a JEE-based simulation application across the Elastic Beanstalk and Cloud Foundry platforms.

Paper Nr: 51
Title:

Highly Reconfigurable Computing Platform for High Performance Computing Infrastructure as a Service: Hi-IaaS

Authors:

Akihiro Misawa, Susumu Date, Keichi Takahashi, Takashi Yoshikawa, Masahiko Takahashi, Masaki Kan, Yasuhiro Watashiba, Yoshiyuki Kido, Chonho Lee and Shinji Shimojo

Abstract: It has become increasingly difficult for high performance computing (HPC) users to own an HPC platform for themselves. As user needs and requirements for HPC have diversified, HPC systems must have the capacity and ability to execute diverse applications. In this paper, we present a computer architecture for dynamically and promptly delivering high performance computing infrastructure as a cloud computing service in response to users’ requests for the underlying computational resources of the cloud. To obtain the flexibility to accommodate a variety of HPC jobs, each of which may require a unique computing platform, the proposed system reconfigures software and hardware platforms, taking advantage of the synergy of Open Grid Scheduler/Grid Engine and OpenStack. An experimental system developed in this research shows a high degree of flexibility in hardware reconfigurability as well as high performance for a Spark benchmark application. Our evaluation also shows that the experimental system can execute twice as many jobs that need a graphics processing unit (GPU), in addition to eliminating the worst case of resource congestion found in the real-world operational record of our university’s computer center over the previous half year.

Paper Nr: 61
Title:

ROP Defense in the Cloud through LIve Text Page-level Re-ordering - The LITPR System

Authors:

Angelo Sapello, C. Jason Chiang, Jesse Elwell, Abhrajit Ghosh, Ayumu Kubota and Takashi Matsunaka

Abstract: As cloud computing environments move towards securing against simplistic threats, adversaries are moving towards more sophisticated attacks such as ROP (Return Oriented Programming). In this paper we propose the LIve Text Page-level Re-ordering (LITPR) system for prevention of ROP style attacks and in particular the largely unaddressed Blind ROP attacks on applications running on cloud servers. ROP and BROP, respectively, bypass protections such as DEP (Data Execution Prevention) and ASLR (Address Space Layout Randomization) that are offered by the Linux operating system and can be used to perform arbitrary malicious actions against it. LITPR periodically randomizes the in-memory locations of application and kernel code, at run time, to ensure that both ROP and BROP style attacks are unable to succeed. This is a dramatic change relative to ASLR which is a load time randomization technique.

Short Papers
Paper Nr: 13
Title:

The Role of Experimental Exploration in Cloud Migration for SMEs

Authors:

Frank Fowley, Divyaa Manimaran Elango, Hany Magar and Claus Pahl

Abstract: The migration of IT systems to the cloud is still a problem, in particular for smaller companies without much cloud expertise. Generally, some expected benefits are defined and an awareness of potential problems does exist to some extent in the organisations. However, this is often not sufficient to confidently embark on a full migration project in the cloud. While discussions and conceptual analyses can help to some extent, we explore here the suitability of feasibility studies with experimental explorations at the core. These studies would typically cost 5% of the overall migration cost based on our use cases, but can help with a reliable risk assessment. They can clarify how much of the expectations and intentions can materialise in the cloud. The cost of the migration and, more importantly, the cost of operating an IT system in the cloud can be estimated. Using a feasibility study with an experimental core based on a partial prototype delivers much more reliable figures regarding configurations, quality of service and costing than a theoretical analysis could deliver.

Paper Nr: 16
Title:

A Study of Virtual Machine Placement Optimization in Data Centers

Authors:

Stephanie Challita, Fawaz Paraiso and Philippe Merle

Abstract: In recent years, cloud computing has proven a valuable way of accommodating and providing services over the Internet, such that data centers rely increasingly on this platform to host a large number of applications (web hosting, e-commerce, social networking, etc.). Thus, the utilization of servers in most data centers can be improved by adding virtualization and selecting the most suitable host for each Virtual Machine (VM). The problem of VM placement is an optimization problem aiming at multiple goals; it can be addressed through various approaches, each of which aims to simultaneously reduce power consumption, maximize resource utilization and avoid traffic congestion. The main goal of this literature survey is to provide a better understanding of existing approaches and algorithms that ensure better VM placement in the context of cloud computing and to identify future directions.

Paper Nr: 43
Title:

Semantic SLA for Clouds: Combining SLAC and OWL-Q

Authors:

Kyriakos Kritikos and Rafael Brundo Uriarte

Abstract: Several SLA languages have been proposed, some specifically for the cloud domain. However, after extensively analysing the domain’s requirements considering the SLA lifecycle, we conclude that none of them covers the necessary aspects for application in diverse real-world scenarios. In this paper, we propose SSLAC, in which we combine the capabilities of two prominent service specification and SLA languages: OWL-Q and SLAC. These languages have different scopes but complementary features. SLAC is domain specific with validation and verification capabilities. OWL-Q is a higher-level language based on ontologies and well-defined semantics. Their combination advances the state of the art in many respects. It enables the SLA’s semantic verification and inference and, at the same time, its constraint-based modelling and enforcement. It also provides a complete formal approach for defining non-functional terms and an enforcement framework covering real-world scenarios. The advantages of SSLAC, in terms of expressiveness and features, are then shown in a use case modelled with it.

Paper Nr: 67
Title:

Making Cloud-based Systems Elasticity Testing Reproducible

Authors:

Michel Albonico, Jean-Marie Mottu, Gerson Sunyé and Frederico Alvares

Abstract: Elastic cloud infrastructures are capable of dynamically varying computational resources at large scale, which is error-prone by nature. Elasticity-related errors are detected by tests that should run deterministically many times throughout development. However, reproducing elasticity tests requires several features not supported natively by the main cloud providers, such as Amazon EC2. We identify three requirements that we claim are indispensable to ensure elasticity testing reproducibility: controlling the elasticity behavior, selecting the specific resources to be deallocated, and coordinating events in parallel with elasticity. In this paper, we propose an approach that fulfills those requirements and makes elasticity testing reproducible. Experimental results show that our approach successfully reproduces elasticity-related bugs that depend on the requirements we identify in this paper.

Paper Nr: 104
Title:

An Automatic Tool for Benchmark Testing of Cloud Applications

Authors:

Valentina Casola, Alessandra De Benedictis, Massimiliano Rak and Umberto Villano

Abstract: The performance testing of cloud applications is a challenging research topic, due to the multiplicity of ways to allocate application services to Cloud Service Providers (CSPs). Currently available benchmarks mainly focus on evaluating specific services or infrastructural resources offered by different CSPs, but are not always useful for evaluating complete cloud applications and discovering performance bugs. This paper proposes a methodology for defining a performance evaluation process, particularly suited to cloud applications, and an automatic procedure to set up and execute benchmark tests. The methodology is based on the evaluation of two performance indexes, and is validated by presenting a complete case-study application, developed within the FP7-EU-SPECS project. The analysis of the automatically produced measurement results can help the developer discover possible bottlenecks and take actions to improve both the usability and the performance of a cloud application.

Posters
Paper Nr: 72
Title:

WFCF - A Workflow Cloud Framework

Authors:

Eric Kübler and Mirjam Minor

Abstract: Using cloud resources for the execution of workflows is common nowadays. However, there is a lack of concepts for the flexible integration of workflow management tools and clouds for resource usage optimization. While traditional methods, such as running a workflow management tool monolithically on cloud resources, lead to over- and under-provisioning problems, other concepts involve a very deep integration, where the options for changing the involved workflow management tools and clouds are very limited. In this work, we present the architecture of WFCF, a connector-based integration framework for workflow management tools and clouds that optimizes the utilization of cloud resources for workflows. Case-based reasoning is used to optimize resource provisioning based on solutions to past resource provisioning problems. The approach is illustrated by real sample workflows from the music mastering domain.

Paper Nr: 88
Title:

Cloud Suitability Assessment Method for Application Software

Authors:

Mesfin Workineh, Nuno M. Garcia and Dida Midekso

Abstract: The advantages and initial adoption success stories of cloud computing inspire enterprises to migrate their existing applications to cloud computing technology. As a result, the trend of migrating existing application software to the cloud grows steadily. However, not all applications are ideal candidates for migration. Moreover, client organizations very often lack appropriate methods to determine which of their IT services are suitable for migration. In this respect, a method is required to assess the suitability of existing applications before embarking on migration. This study designs a method to assess the cloud suitability of existing application software, following the design science approach. The method is a multi-step approach composed of seven activities, devised with the goal of reducing the risk of making wrong migration decisions. Future work will validate and refine the proposed method.

Paper Nr: 99
Title:

Generic Refactoring Methodology for Cloud Migration - Position Paper

Authors:

Manoj Kesavulu, Marija Bezbradica and Markus Helfert

Abstract: Cloud migration has attracted a lot of attention in both industry and academia due to the cloud's on-demand, highly available, and dynamically scalable nature. Organizations choose to move their on-premise applications to the virtualized environment of the cloud, where services are accessed remotely over the Internet. These applications need to be re-engineered to fully exploit the benefits of cloud infrastructure, such as performance and scalability improvements over on-premise infrastructure. This paper proposes a re-engineering approach called architectural refactoring for restructuring on-premise application components to adapt them to the cloud environment, with the aim of achieving a significant increase in non-functional quality attributes such as performance, scalability and maintainability. In the proposed approach, when an application needs to migrate to the cloud, it is divided into smaller components, converted into services and deployed to the cloud. The paper discusses existing issues faced by software developers and engineers during cloud migration, introduces architectural refactoring as a solution and explains the generic refactoring process at an architectural level.

Paper Nr: 105
Title:

EME: An Automated, Elastic and Efficient Prototype for Provisioning Hadoop Clusters On-demand

Authors:

Feras M. Awaysheh, Tomás F. Pena and José C. Cabaleiro

Abstract: Aiming to enhance the Quality of Service (QoS) of MapReduce-based applications, many frameworks adopt a scale-out approach, statically adding new nodes to the cluster. Such frameworks are expensive to acquire and do not consider the optimal usage of available resources in a dynamic manner. This paper introduces a prototype that addresses this issue by extending the MapReduce resource manager with dynamic provisioning and low-cost, on-demand capacity uplift. We propose an Enhanced MapReduce Environment (EME) that supports heterogeneous environments by extending Apache Hadoop to an opportunistically containerized environment, which enhances system throughput by adding underused resources to a local or cloud-based cluster. The main architectural elements of this framework are presented, as well as the requirements, challenges, and opportunities of a first prototype.