CLOSER 2018 Abstracts


Full Papers
Paper Nr: 98
Title:

Smart Connected Digital Factories - Unleashing the Power of Industry 4.0 and the Industrial Internet

Authors:

Michael P. Papazoglou

Abstract: Traditional manufacturing has been characterized by limited data exchange between systems, machines and processes throughout the product development life cycle. Recent initiatives such as the Industrial IoT, or Industry 4.0, as it has been dubbed, are fundamentally reshaping the industrial landscape by promoting connected manufacturing solutions that realize a “digital thread” connecting all aspects of manufacturing, including all data and operations involved in the production of goods and services. This paper focuses on Industry 4.0 technologies and how they support the emergence of highly connected, knowledge-enabled factories, referred to as Smart Manufacturing Networks. Smart Manufacturing Networks comprise an ecosystem of connected factory sites, plants, and self-regulating machines able to customize output, allocate resources optimally over manufacturing clouds, and offer a seamless transition between the physical and digital worlds of product design and production. These sophisticated capabilities are possible today because the technologies facilitating the IoT and data analytics engines are mature and can be implemented at scale, thanks to pervasive connectivity, sensors, cloud computing, and storage.

Area 1 - Cloud Computing Fundamentals

Full Papers
Paper Nr: 14
Title:

Framework for Searchable Encryption with SQL Databases

Authors:

Monir Azraoui, Melek Önen and Refik Molva

Abstract: In recent years, the increasing popularity of outsourcing data to third-party cloud servers has sparked major concerns about data breaches. A standard measure to thwart this problem and to ensure data confidentiality is data encryption. Nevertheless, organizations that use traditional encryption techniques face the challenge of enabling untrusted cloud servers to perform search operations while the outsourced data itself remains confidential. Searchable encryption is a powerful tool that attempts to solve the challenge of querying data outsourced at untrusted servers while preserving data confidentiality. Whereas the literature mainly considers searching over an unstructured collection of files, this paper explores methods to execute SQL queries over encrypted databases. We provide a complete framework that supports private search queries over encrypted SQL databases, in particular PostgreSQL and MySQL databases. We extend the searchable encryption scheme designed by Curtmola et al. to the case of SQL databases. We also provide features for evaluating range and boolean queries. We finally propose a framework for implementing our construction, validating its practicality.
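To illustrate the style of index the paper builds on, here is a minimal sketch of a symmetric searchable index in the spirit of Curtmola et al. — not the authors' actual construction. Search tokens are HMACs of keywords under a client-held key, so the server can match tokens against an inverted index without ever seeing the keywords themselves; all names and parameters are illustrative.

```python
import hmac
import hashlib


def token(key: bytes, keyword: str) -> bytes:
    # Deterministic search token: a PRF (here HMAC-SHA256) of the keyword.
    return hmac.new(key, keyword.encode(), hashlib.sha256).digest()


def build_index(key: bytes, docs: dict[str, list[str]]) -> dict[bytes, list[str]]:
    # Inverted index keyed by opaque tokens, so the server never sees keywords.
    index: dict[bytes, list[str]] = {}
    for doc_id, keywords in docs.items():
        for kw in keywords:
            index.setdefault(token(key, kw), []).append(doc_id)
    return index


def search(index: dict[bytes, list[str]], trapdoor: bytes) -> list[str]:
    # The (untrusted) server only matches tokens; no plaintext is involved.
    return index.get(trapdoor, [])


key = b"client-secret-key"
index = build_index(key, {"row1": ["alice", "paris"], "row2": ["bob", "paris"]})
print(search(index, token(key, "paris")))  # both rows match
```

A real SQL-oriented scheme, as in the paper, would additionally hide access patterns and support range and boolean predicates; this sketch only shows exact-match lookup.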

Paper Nr: 35
Title:

Model-driven Configuration Management of Cloud Applications with OCCI

Authors:

Fabian Korte, Stéphanie Challita, Faiez Zalila, Philippe Merle and Jens Grabowski

Abstract: To tackle cloud-provider lock-in, the Open Grid Forum (OGF) is developing the Open Cloud Computing Interface (OCCI), a standardized interface for managing any kind of cloud resource. Besides the OCCI Core model, which defines the basic modeling elements for cloud resources, the OGF also defines extensions that reflect the requirements of different cloud service levels, such as IaaS and PaaS. However, so far the OCCI PaaS extension is very coarse-grained and lacks supporting use cases and implementations. In particular, it does not define how the components of the application itself can be managed. In this paper, we present a model-driven framework that extends the OCCI PaaS extension and is able to use different configuration management tools to manage the whole lifecycle of cloud applications. We demonstrate the feasibility of the approach by presenting four different use cases and prototypical implementations for three different configuration management tools.

Paper Nr: 37
Title:

Power and Cost-aware Virtual Machine Placement in Geo-distributed Data Centers

Authors:

Soha Rawas, Ahmed Zekri and Ali El Zaart

Abstract: The proliferation of cloud computing, due to its attractive on-demand services, has led to the establishment of geo-distributed data centers (DCs) with thousands of computing and storage nodes. Consequently, cloud providers face many challenges in running such an environment. One important challenge is to minimize cloud users’ network latency while accessing services from the DCs. Another is to decrease the DCs’ energy consumption, which contributes to high operational costs, low profits for cloud providers, and high, environmentally harmful carbon emissions. In this paper, we study the problem of virtual machine placement with the goal of reducing energy consumption, CO2 emissions, and access latency, thereby minimizing the operational cost of large-scale cloud providers. The problem is formulated as a multi-objective function, and an intelligent machine-learning model is constructed to improve the performance of the proposed approach. To evaluate the proposed model, extensive simulations are conducted using the CloudSim simulator. The simulation results reveal the effectiveness of the PCVM model compared to other competing virtual machine placement methods in terms of network latency, energy consumption, CO2 emissions and operational cost minimization.

Short Papers
Paper Nr: 6
Title:

Leveraging Cloud Computing, Virtualisation and Solar Technologies to Increase Performance and Reduce Cost in Small to Medium-sized Businesses

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: Cloud computing has been available for some time and is used in large organisations across the globe. We know that cloud computing is an economically viable concept in these large global organisations; however, we do not understand sufficiently whether it can reduce costs in smaller organisations, given the traditionally large investment costs. This paper investigates cloud computing concepts to understand whether they offer a cost-effective solution for a medium-sized business. It draws on a business case which outlines the problems and requirements of an organisation, covers the technologies that are available, and then evaluates which, if any, are the most economically viable. Alongside cloud computing, it also analyses virtualisation technologies for system consolidation, as well as solar energy solutions to reduce energy consumption costs. It then concludes which solutions meet the business requirements.

Paper Nr: 13
Title:

Evaluation of Cloud Computing Offers through Security Risks - An Industrial Case Study

Authors:

Jean-Michel Remiche, Jocelyn Aubert, Nicolas Mayer and David Petrocelli

Abstract: Cloud provider selection is a difficult task, even more so when security is a critical aspect of the processes to be moved to the cloud. To support cloud offer selection by a cloud consumer, we have introduced an innovative risk-based approach that distributes risk assessment activities between the cloud provider and the cloud consumer. This paper proposes an evaluation of this approach by assessing and comparing the portfolio of offers of POST Telecom, a cloud provider in Luxembourg. The case study covers the evaluation of the offers with the help of standard security controls provided by three leading cloud organizations: the Cloud Security Alliance, ISO/IEC and the SANS Institute.

Paper Nr: 23
Title:

Performance and Energy-based Cost Prediction of Virtual Machines Live Migration in Clouds

Authors:

Mohammad Aldossary and Karim Djemame

Abstract: Virtual machine (VM) live migration is one of the important approaches to improving resource utilisation and supporting energy efficiency in Clouds. However, VM live migration leads to performance loss and additional costs due to increased migration time and energy overhead. This paper introduces a Performance and Energy-based Cost Prediction Framework to estimate the total cost of VM live migration by considering resource usage and power consumption, while maintaining the expected level of performance. A series of experiments conducted on a Cloud testbed show that this framework is capable of predicting the workload, power consumption and total cost for heterogeneous VMs before and after live migration, with the possibility of recovering the migration cost (e.g., a predicted cost recovery of 28.48% for one of the VMs).
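The kind of estimate such a framework produces can be illustrated with a deliberately simplified back-of-the-envelope model — not the authors' actual framework: in pre-copy live migration, transfer time grows with VM memory size and page-dirtying rate, and the power overhead of the migration can then be converted into a monetary cost. All figures and parameter names below are illustrative.

```python
def migration_time(memory_gb: float, dirty_rate_gb_per_s: float,
                   bandwidth_gb_per_s: float) -> float:
    # Pre-copy live migration keeps re-sending pages the VM dirties, so the
    # effective transfer rate is the bandwidth minus the dirtying rate.
    assert bandwidth_gb_per_s > dirty_rate_gb_per_s, "migration would never converge"
    return memory_gb / (bandwidth_gb_per_s - dirty_rate_gb_per_s)


def migration_energy_cost(duration_s: float, extra_power_w: float,
                          price_per_kwh: float) -> float:
    # Monetary cost of the extra power drawn for the duration of the migration.
    kwh = extra_power_w * duration_s / 3_600_000.0
    return kwh * price_per_kwh


t = migration_time(memory_gb=4.0, dirty_rate_gb_per_s=0.1, bandwidth_gb_per_s=1.0)
cost = migration_energy_cost(t, extra_power_w=30.0, price_per_kwh=0.20)
print(t, cost)
```

The paper's framework additionally predicts workload and power consumption before and after migration; this sketch only shows how time and energy overhead translate into a cost figure.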

Paper Nr: 24
Title:

Operations Security Evaluation of IaaS-Cloud Backend for Industry 4.0

Authors:

Oliver Schluga, Elisabeth Bauer, Ani Bicaku, Silia Maksuti, Markus Tauber and Alexander Wöhrer

Abstract: The fast-growing number of cloud-based Infrastructure-as-a-Service instances raises the question of how operations security, which depends on the underlying cloud computing infrastructure, can be sustained and guaranteed. Security standards provide guidelines for information security controls applicable to the provision and use of cloud services. The objectives of operations security are to support the planning and sustaining of day-to-day processes that are critical with respect to the security of information environments. In this work we provide a detailed analysis of the ISO 27017 standard regarding security controls and investigate how well popular cloud platforms can cater for them. The resulting gaps in support for individual security controls are furthermore compared with the outcomes of recent cloud security research projects. Hence the contribution is twofold: first, we identify a set of topics that still require research and development; second, as a practical output, we provide a comparison of popular industrial and open-source platforms, focusing on private cloud environments, which are important for Industry 4.0 use cases.

Paper Nr: 67
Title:

Comparison and Runtime Adaptation of Cloud Application Topologies based on OCCI

Authors:

Johannes Erbel, Fabian Korte and Jens Grabowski

Abstract: To tackle cloud provider lock-in, multiple standards have emerged to enable the uniform management of cloud resources across different providers. One of them is the Open Cloud Computing Interface (OCCI), which defines, in addition to a REST API, a metamodel that enables the modelling of cloud resources on different service layers. Even though the standard defines how to manage single cloud resources, no process exists that allows for the automated provisioning of full application topologies and their adaptation at runtime. Therefore, we propose a model-based approach to adapt running cloud application infrastructures, allowing management at a high level of abstraction. To this end, we check the differences between the runtime and target states of the topology using a model comparison that matches their resources. Based on this match, we mark each resource with the required management calls, which are then systematically executed by an adaptation engine. To show the feasibility of our approach, we evaluate the comparison as well as the adaptation process on a set of example infrastructures.

Paper Nr: 72
Title:

Anomaly Detection Approaches for Secure Cloud Reference Architectures in Legal Metrology

Authors:

Alexander Oppermann, Federico Grasso Toro, Florian Thiel and Jean-Pierre Seifert

Abstract: Securing computer systems against all kinds of threats is an impossible challenge to fulfill. Nevertheless, in the field of Legal Metrology, it must be assured that one can rely on the measurements carried out by a trusted computer system. In a distributed environment, a measurement instrument cannot simply be disconnected to guarantee its security. However, constantly monitoring computer systems in order to deduce normal system behaviour is a particularly promising approach to securing such systems. When anomalies are detected, the system evaluates them to measure the severity of the incident and places it into one of three categories: green, yellow and red. The presented Anomaly Detection Module can detect attacks against distributed applications in a cloud computing environment, using pattern recognition for clustering as well as statistical approaches. Both inexperienced and experienced attacks have been tested, and results are presented.

Paper Nr: 75
Title:

A Taxonomy Model for Single Sign-on Oriented towards Cloud Computing

Authors:

Glauber C. Batista, Maurício A. Pillon, Guilherme P. Koslovski, Charles C. Miers, Marcos A. Simplício Jr. and Nelson M. Gonzalez

Abstract: Clouds can be seen as a natural evolution of the Internet, allowing the utilization of computing capabilities maintained by third parties for optimizing resource usage. There are several elements that compose the cloud infrastructure and its services, and all of them must operate harmoniously. In particular, to allow the creation and deployment of services resilient to internal and external threats, the observance of security aspects is essential. This includes the deployment of authentication and authorization mechanisms to control the access to resources allocated on-demand, a strong requirement for any cloud-based solution. With this issue in mind, several providers have recently started using some form of Single Sign-On (SSO) mechanism to simplify the process of handling credentials inside the cloud. In this work, aiming to provide a structured overview of the wide variety of mechanisms that can be employed with this purpose, we propose a classification of SSO systems for cloud services, which can be used as a model for comparing current and future designs of such mechanisms. In addition, to validate the usefulness of the proposed taxonomy, we provide a classification of existing cloud-oriented SSO solutions.

Paper Nr: 84
Title:

Security Considerations for Microservice Architectures

Authors:

Daniel Richter, Tim Neumann and Andreas Polze

Abstract: Security is an important and difficult topic in today’s complex computer systems. Cloud-based systems adopting microservice architectures complicate that analysis by introducing additional layers. In the test system analyzed, the base layers are combined into three groups (compute provider, encapsulation technology, and deployment), and possible security risks introduced by the technologies used in these layers are analyzed. For the application layer, the analysis focuses on security concerns related to authorization and authentication. The analysis is based on a rewritten, microservice-based version of the seat reservation system of Deutsche Bahn using technologies such as Amazon Web Services, Docker, and Kubernetes. The analysis concludes that the security of communication in the test system could be significantly improved with little effort. If security is not considered an integral part of a project from the beginning, it can easily be neglected and be expensive to add later on.

Posters
Paper Nr: 5
Title:

Developing a Power Efficient Private Cloud Ready Infrastructure for Small-Medium Sized Enterprises

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: Digital technology is advancing, and so are the means of powering it. For small-to-medium enterprises (SMEs) to remain competitive in today’s economic climate, it is paramount that they can respond to business challenges with agility and efficiency. Despite knowing this, many of today’s SMEs retain legacy hardware and siloed infrastructures that are both expensive to maintain and incapable of being agile. These heterogeneous infrastructures offer no elasticity for their consumers and act as a barrier to the SME's own innovation. Acquiring the requisite budget to transform such a digital infrastructure with high operational energy costs has proven an uphill struggle, as there is a distinct lack of perceived benefit from undergoing such a transformation programme. However, amidst the various comparable options, claims, and features from the different technology vendors in the market, there are true benefits applicable to all SMEs. To demonstrate how a solution such as moving to the cloud and/or adopting solar power could benefit an SME’s infrastructure and operational costs, the requirements of a fictitious marketing agency have been analysed by a company specialised in cloud, virtualisation and solar power, in order to introduce a framework suitable for any SME curious about the benefits presented by basic cloud principles, virtualised resources and renewable energy.

Paper Nr: 17
Title:

Energy Efficiency Policies for Smart Digital Cloud Environment based on Heuristics Algorithms

Authors:

Awatif Ragmani, Amina El Omri, Noreddine Abghour, Khalid Moussaid and Mohamed Rida

Abstract: The Cloud computing model is based on the use of virtual resources and their placement on physical servers hosted in different data centers. These data centers are known to be big energy consumers. The allocation of virtual machines to servers plays a paramount role in optimizing the energy consumption of the underlying infrastructure in order to satisfy environmental and economic constraints. Consequently, various hardware and software solutions have emerged. Among these strategies, we highlight the optimization of virtual machine scheduling in order to improve both the quality of service and energy efficiency. In this paper, we propose, firstly, to study energy consumption in the Cloud environment based on the GreenCloud simulator. Secondly, we define a scheduling solution aimed at reducing energy consumption via a better resource allocation strategy that privileges data centers powered by clean energy. The main contributions of this paper are the use of the Taguchi concept to evaluate the Cloud model and the introduction of a scheduling policy based on the simulated annealing algorithm.

Area 2 - Cloud Operations

Full Papers
Paper Nr: 58
Title:

Challenges and Research Directions in Big Data-driven Cloud Adaptivity

Authors:

Andreas Tsagkaropoulos, Nikos Papageorgiou, Dimitris Apostolou, Yiannis Verginadis and Gregoris Mentzas

Abstract: Mainstream cloud technologies are challenged by the real-time, big data processing requirements of emerging applications. This paper surveys recent research efforts on advancing cloud computing virtual infrastructures and real-time big data technologies in order to provide dynamically scalable and distributed architectures over federated clouds. We examine new methods for developing cloud systems that operate in a real-time, big data environment, can sense the context of the application environment, and can adapt the infrastructure accordingly. We describe research topics linked to the challenge of adaptivity, such as situation awareness, context detection, service-level objectives, and the capability to predict extraordinary situations requiring remedying action. We also describe research directions for realising adaptivity in cloud computing and present a conceptual framework that represents these research directions and shows their interrelations.

Paper Nr: 89
Title:

Runtime Attestation for IaaS Clouds

Authors:

Jesse Elwell, Angelo Sapello, Alexander Poylisher, Giovanni Di Crescenzo, Abhrajit Ghosh, Ayumu Kubota and Takashi Matsunaka

Abstract: We present the RIC (Runtime Attestation for IaaS Clouds) system, which uses timing-based attestation to verify the integrity of a running Xen hypervisor as well as the guest virtual machines running on top of it. As part of the RIC system we present a novel attestation technique which includes not only the guest operating system's static code and read-only data sections but also the guest OS' dynamically loadable kernel modules. These attestations are conducted periodically at run-time to provide a stronger guarantee of correctness than that offered by load-time verification techniques. A system such as RIC can be used in cloud computing scenarios to verify the environment in which the cloud services ultimately run. Furthermore we offer a method to decrease the performance impact that this process has on the virtual machines that run the cloud services, since these services often have very strict performance and availability requirements. This scheme effectively extends the root of trust on the cloud machines from the Xen hypervisor upward to include the guest OS that runs within each virtual machine. This work represents an important step towards secure cloud computing platforms which can help cloud providers offer new services that require higher levels of security than are possible in cloud data centers today.

Short Papers
Paper Nr: 70
Title:

Exploiting BPMN Features to Design a Fault-aware TOSCA Orchestrator

Authors:

Domenico Calcaterra, Vincenzo Cartelli, Giuseppe Di Modica and Orazio Tomarchio

Abstract: Cloud computing is nowadays a consolidated paradigm that enables scalable access to computing resources and complex services. One of the greatest challenges Cloud providers have to deal with is to efficiently automate the service “provisioning” activities through Cloud orchestration techniques. By relying on TOSCA, a well-known standard specification for the interoperable description of Cloud services, we developed a fault-aware orchestrator capable of automating the workflow for service provisioning. The BPMN notation was used to define both the workflow and the data associated with workflow elements. To corroborate the proposal, a software prototype was developed and tested with a sample use case which is discussed in the paper.

Paper Nr: 95
Title:

A Hadoop Open Source Backup Solution

Authors:

Heitor Faria, Rodrigo Hagstrom, Marco Reis, Breno G. S. Costa, Edward Ribeiro, Maristela Holanda, Priscila Solis Barreto and Aletéia P. F. Araújo

Abstract: Backup is a traditional and critical business service with increasing challenges, such as the snowballing growth of data. Distributed data-intensive applications, such as Hadoop, can give the false impression that they do not need backup data replicas, but most researchers agree this is still necessary for the majority of their components. A brief survey reveals several disasters that can cause data loss in Hadoop HDFS clusters, and previous studies propose having an entire second Hadoop cluster to host a backup replica. However, this method is much more expensive than using traditional backup software and media, such as a tape library, a Network Attached Storage (NAS) or even a Cloud Object Storage. To address these problems, this paper introduces a cheaper and faster Hadoop backup and restore solution. It compares the traditional redundant-cluster replica technique with an alternative one that consists of using Hadoop client commands to create multiple streams of data from HDFS files to Bacula, the most popular open-source backup software, which can receive information from named pipes (FIFOs). The new mechanism is roughly 51% faster and consumed 75% less backup storage when compared with the previous solutions.

Posters
Paper Nr: 4
Title:

qvm: A Command Line Tool for the Provisioning of Virtual Machines

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: The purpose of this paper is to create and demonstrate a command line utility that uses freely available cloud images—typically intended for deployment within public and private cloud environments—to rapidly provision virtual machines on a local server, taking advantage of the ZFS file system. This utility, qvm, aims to provide syntactical consistency for potential contributors and users alike—it is written in Python and uses YAML for all user configuration, exactly like cloud-init, the post-deployment configuration system featured in the cloud images used by qvm to allow its rapid provisioning. qvm itself does not use the libvirt API to create virtual machines, instead parsing pre-defined templates containing options for the commonly used virt-install tool, installed alongside virt-manager, the de facto graphical libvirt client. The utility is capable of importing cloud images into zvols and creating clones for each virtual machine using the pyzfs Python wrapper for the libzfs_core C library, as well as a custom recreation of pyzfs based on the zfs command line utility. qvm aims to introduce some basic IaC constructs to the provisioning of local virtual machines using the aforementioned common tools, requiring no prior experience beyond the usage of these tools. Its use of cloud-init allows for portability into existing cloud infrastructure, with no requirements on common Linux distributions, such as Red Hat Enterprise Linux, Debian, or SUSE, and their derivatives, beyond their base installation with virtualisation server packages and the prerequisite Python libraries required by qvm.

Area 3 - Data as a Service

Short Papers
Paper Nr: 80
Title:

A Stream Clustering Algorithm using Information Theoretic Clustering Evaluation Function

Authors:

Erhan Gokcay

Abstract: There are many stream clustering algorithms, which can be divided roughly into density-based algorithms and hyperspherical distance-based algorithms. Only density-based algorithms can detect nonlinear clusters, and all algorithms assume that the data stream is an ordered sequence of points. Many algorithms need to receive data in buckets to start processing, with online and offline iterations and several passes over the data. In this paper we propose a stream clustering algorithm using a distance function that can separate highly nonlinear clusters in one pass. The distance function used is based on information-theoretic measures and is called the Clustering Evaluation Function. The algorithm can handle data one point at a time and find the correct number of clusters even with highly nonlinear clusters. The data points can arrive in any random order, and the number of clusters does not need to be specified. Each point is compared against already discovered clusters, and each time clusters are joined or divided using an iteratively updated threshold.
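A one-pass threshold scheme of the kind described can be sketched as follows — with plain Euclidean distance standing in for the paper's information-theoretic Clustering Evaluation Function (which is what enables nonlinear clusters); the threshold value and sample points are illustrative:

```python
import math


def one_pass_cluster(stream, threshold: float):
    # One-pass clustering sketch: each arriving point joins the nearest
    # existing cluster if its distance to that cluster's centroid is within
    # `threshold`; otherwise it starts a new cluster. The number of clusters
    # is never specified in advance.
    centroids, counts = [], []
    for x, y in stream:
        best, best_d = None, float("inf")
        for i, (cx, cy) in enumerate(centroids):
            d = math.hypot(x - cx, y - cy)
            if d < best_d:
                best, best_d = i, d
        if best is not None and best_d <= threshold:
            # Incrementally update the running centroid of the matched cluster.
            n = counts[best]
            cx, cy = centroids[best]
            centroids[best] = ((cx * n + x) / (n + 1), (cy * n + y) / (n + 1))
            counts[best] = n + 1
        else:
            centroids.append((x, y))
            counts.append(1)
    return centroids


points = [(0, 0), (0.1, 0.1), (5, 5), (5.1, 4.9), (0.05, -0.1)]
print(len(one_pass_cluster(points, threshold=1.0)))  # two clusters
```

Replacing the Euclidean test with the CEF-based distance, as the paper does, is what allows arbitrarily shaped clusters to be separated in the same single pass.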

Area 4 - Edge Cloud and Fog Computing

Full Papers
Paper Nr: 18
Title:

Deploying Fog Applications: How Much Does It Cost, By the Way?

Authors:

Antonio Brogi, Stefano Forti and Ahmad Ibrahim

Abstract: Deploying IoT applications through the Fog in a QoS-, context-, and cost-aware manner is challenging due to the heterogeneity, scale and dynamicity of Fog infrastructures. To decide how to allocate app functionalities over the continuum from the IoT to the Cloud, app administrators need to find a trade-off among QoS, resource consumption and cost. In this paper, we present a novel cost model for estimating the cost of deploying IoT applications to Fog infrastructures. We show how the inclusion of the cost model in the FogTorchP open-source prototype makes it possible to determine eligible deployments of multi-component applications to Fog infrastructures and to rank them according to their QoS-assurance, Fog resource consumption and cost. We run the extended prototype on a motivating scenario, showing how it can support IT experts in choosing the deployments that best suit their desiderata.

Paper Nr: 48
Title:

Scheduling Latency-Sensitive Applications in Edge Computing

Authors:

Vincenzo Scoca, Atakan Aral, Ivona Brandic, Rocco De Nicola and Rafael Brundo Uriarte

Abstract: Edge computing is an emerging technology that aims to include latency-sensitive and data-intensive applications, such as mobile or IoT services, in the cloud ecosystem by placing computational resources at the edge of the network. Close proximity to the producers and consumers of data brings significant benefits in latency and bandwidth. However, edge resources are, by definition, limited in comparison to their cloud counterparts; thus, a trade-off exists between deploying a service closest to its users and avoiding resource overload. We propose a score-based edge service scheduling algorithm that evaluates both the network and computational capabilities of edge nodes and outputs the maximum-scoring mapping between services and resources. Our extensive simulation, based on a live video streaming service, demonstrates significant improvements in both network delay and service time. Additionally, we compare edge computing technology with state-of-the-art cloud computing and content delivery network solutions within the context of latency-sensitive and data-intensive applications. Our results show that edge computing enhanced with the suggested scheduling algorithm is a viable solution for achieving high quality of service and responsiveness in deploying such applications.
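The core of a score-based scheduler of the kind described can be sketched in a few lines — the weights, scoring terms, and node data below are illustrative, not the authors' actual formulation:

```python
def score(node: dict, w_net: float = 0.5, w_cpu: float = 0.5) -> float:
    # Higher score = lower latency to users and more spare CPU capacity.
    net_term = 1.0 / (1.0 + node["latency_ms"])
    cpu_term = node["free_cpu"] / node["total_cpu"]
    return w_net * net_term + w_cpu * cpu_term


def schedule(nodes: list[dict]) -> str:
    # Greedy maximum-scoring placement for a single service instance.
    return max(nodes, key=score)["name"]


nodes = [
    {"name": "edge-a", "latency_ms": 5, "free_cpu": 1, "total_cpu": 8},    # close but loaded
    {"name": "edge-b", "latency_ms": 20, "free_cpu": 7, "total_cpu": 8},   # near and mostly free
    {"name": "cloud", "latency_ms": 80, "free_cpu": 16, "total_cpu": 64},  # far away
]
print(schedule(nodes))  # edge-b: balances proximity against resource overload
```

The example captures the trade-off the abstract names: the closest node loses because it is nearly saturated, and the distant cloud loses on latency, so a moderately loaded nearby edge node scores highest.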

Short Papers
Paper Nr: 53
Title:

BUDaMaF - Data Management in Cloud Federations

Authors:

Evangelos Psomakelis, Konstantinos Tserpes, Dimosthenis Anagnostopoulos and Theodora Varvarigou

Abstract: Data management involves quality of service, security, resource management, cost management, incident identification, disaster avoidance and/or recovery, as well as many other concerns. This situation becomes ever more complicated because of the divergent nature of cloud federations. The BASMATI Unified Data Management Framework (BUDaMaF) creates an automated, uniform way of managing all data transactions, as well as the data stores themselves, in a polyglot multi-cloud consisting of a plethora of different machines and data store systems. It provides a context-independent platform offering automated scaling and data migration, tackling disaster scenarios, such as sudden usage spikes, in real time.

Paper Nr: 69
Title:

A Private Gateway for Investigating IoT Data Management

Authors:

Tamas Pflanzner and Attila Kertesz

Abstract: Responding to the new trend represented by the appearance of the Internet of Things (IoT), several cloud providers have started to offer specific management services. In recent years, we have already seen that cloud computing has managed to serve IoT needs by making data generation, processing and visualization tasks transparent to users. In IoT Cloud systems, developers not only have to buy and configure sensor devices, but they also have to develop so-called gateway applications to manage the data coming from these devices. In this paper we show how to develop such a private gateway, and present a comprehensive simulation environment in which IoT Cloud applications can be investigated without initial investments. Finally, we evaluate the proposed gateway with real sensor data.

Area 5 - Mobile Cloud Computing

Full Papers
Paper Nr: 40
Title:

MOCCAA: A Delta-synchronized and Adaptable Mobile Cloud Computing Framework

Authors:

Harun Baraki, Corvin Schwarzbach, Malte Fax and Kurt Geihs

Abstract: Mobile Cloud Computing (MCC) requires an infrastructure that merges the capabilities of resource-constrained but mobile and context-aware devices with those of immovable but powerful resources in the cloud. Application execution shall be boosted and battery consumption reduced. However, a solution’s practicability is only ensured if the provided tools, environment and framework are themselves performant and if developers are able to adopt, extend and apply them easily. In this light, we introduce our comprehensive and extendable framework MOCCAA (MObile Cloud Computing AdaptAble) and demonstrate its effectiveness. Its performance gain is mainly achieved through minimized monitoring efforts for resource consumption prediction, scalable and location-aware resource discovery and management, and, in particular, through our graph-based delta synchronization of local and remote object states. This allows us to reduce synchronization costs significantly and improve quality dimensions such as latency and bandwidth consumption.

Short Papers
Paper Nr: 82
Title:

Citizen Empowerment by a Technical Approach for Privacy Enforcement

Authors:

Sascha Alpers, Stefanie Betz, Andreas Fritsch, Andreas Oberweis, Gunther Schiefer and Manuela Wagner

Abstract: It is a fundamental right of every natural person to control which personal information is collected, stored and processed by whom, for what purposes and for how long. In fact, many (cloud-based) services can only be used if the user allows them broad data collection and analysis. Often, users can only decide either to give up their data or not to participate in communities. The refusal to provide personal data results in significant drawbacks for social interaction. That is why we believe that there is a need for tools to control one's own data in an easy and effective way, as protection against the economic interests of global companies and their cloud computing systems (which collect data from apps, mobile devices and services). Especially as nowadays everybody is permanently online using different services and devices, users often lack the means to effectively control access to their private data. Therefore, we present an approach to manage and distribute privacy settings: PRIVACY-AVARE is intended to enable users to centrally determine their data protection preferences and to apply them on different devices. Thus, users gain control over their data when using cloud-based services. In this paper, we present the main idea of PRIVACY-AVARE.

Area 6 - Service Modelling and Analytics

Full Papers
Paper Nr: 93
Title:

Exploiting Load Imbalance Patterns for Heterogeneous Cloud Computing Platforms

Authors:

Eduardo Roloff, Matthias Diener, Luciano P. Gaspary and Philippe O. A. Navaux

Abstract: Cloud computing providers offer a variety of instance sizes, types, and configurations that have different prices but can interoperate. As many parallel applications have heterogeneous computational demands, these different instance types can be exploited to reduce the cost of executing a parallel application while maintaining an acceptable performance. In this paper, we perform an analysis of load imbalance patterns with an intentionally-imbalanced artificial benchmark to discover which patterns can benefit from a heterogeneous cloud system. Experiments with this artificial benchmark as well as applications from the NAS Parallel Benchmark suite show that the price of executing an imbalanced application can be reduced substantially on a heterogeneous cloud for a variety of imbalance patterns, while maintaining acceptable performance. By using a heterogeneous cloud, cost efficiency was improved by up to 63%, while performance was reduced by less than 7%.
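The cost/performance trade-off the abstract reports can be sketched with a simple greedy scheduler. This is not the authors' method, just an illustration under assumed inputs: each parallel rank carries a known load, each (hypothetical) instance type has a speed and an hourly price, and heavy ranks are placed where they finish earliest.

```python
def assign_ranks(loads, instances):
    """Greedy sketch: place ranks (heaviest first) on the instance that
    would finish them earliest. `instances` is a list of dicts with
    'speed' (work units/hour) and 'price' ($/hour). Returns the
    makespan (hours) and the total monetary cost."""
    finish = [0.0] * len(instances)   # running finish time per instance
    cost = 0.0
    for load in sorted(loads, reverse=True):
        best = min(range(len(instances)),
                   key=lambda i: finish[i] + load / instances[i]["speed"])
        t = load / instances[best]["speed"]
        finish[best] += t
        cost += t * instances[best]["price"]
    return max(finish), cost
```

With an imbalanced load vector and a mix of a fast/expensive and a slow/cheap instance, the slow instance absorbs the light ranks, which is the mechanism that lets a heterogeneous cloud cut cost without hurting the makespan much.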

Short Papers
Paper Nr: 19
Title:

Making the Cloud Work for Software Producers: Linking Architecture, Operating Cost and Revenue

Authors:

Pierangelo Rosati, Frank Fowley, Claus Pahl, Davide Taibi and Theo Lynn

Abstract: Cloud migration is concerned with moving an on-premise software system into the cloud. In this paper we focus on software producers adopting the cloud to provide their solutions to enterprise customers. Their challenge is to migrate a software product, developed in-house and traditionally delivered on-premise, to an Infrastructure-as-a-Service or Platform-as-a-Service solution, while also mapping an existing traditional licensing model onto a cloud monetization model. The analysis of relevant cost types and factors of cloud computing generates relevant information for software producers when deciding to adopt cloud computing and when defining software pricing. We present an integrated framework for informing cloud monetization based on operational cost factors for migrating to the cloud and test it in a real-life case study. Differences between basic virtualization of the software product and using fully cloud-native platform services for re-architecting the product in question are discussed.

Paper Nr: 26
Title:

Towards a Lightweight Multi-Cloud DSL for Elastic and Transferable Cloud-native Applications

Authors:

Peter-Christian Quint and Nane Kratzke

Abstract: Cloud-native applications are intentionally designed for the cloud in order to leverage cloud platform features like horizontal scaling and elasticity – benefits that come along with cloud platforms. In addition to classical (and very often static) multi-tier deployment scenarios, cloud-native applications are typically operated on much more complex but elastic infrastructures. Furthermore, there is a trend to use elastic container platforms like Kubernetes, Docker Swarm or Apache Mesos. However, multi-cloud use cases in particular are astonishingly complex to handle. In consequence, cloud-native applications are prone to vendor lock-in. Very often, TOSCA-based approaches are used to tackle this aspect, but these application-topology-defining approaches are limited in supporting multi-cloud adaptation of a cloud-native application at runtime. In this paper, we analyze several approaches to defining cloud-native applications that are multi-cloud transferable at runtime. We have not found an approach that fully satisfies all of our requirements. Therefore, we introduce a solution proposal that separates elastic platform definition from cloud application definition. We present first considerations for a domain-specific language for application definition and demonstrate evaluation results on the platform level, showing that a cloud-native application can be transferred between different cloud service providers like Azure and Google within minutes and without downtime. The evaluation covers public and private cloud service infrastructures provided by Amazon Web Services, Microsoft Azure, Google Compute Engine and OpenStack.

Paper Nr: 30
Title:

Classifying Malicious Thread Behavior in PaaS Web Services

Authors:

Cemile Diler Özdemir, Mehmet Tahir Sandıkkaya and Yusuf Yaslan

Abstract: The multitenant structure of the PaaS cloud delivery model allows customers to share platform resources in the cloud. However, this structure requires a strong security mechanism that isolates customer applications to prevent interference between them. In this paper, a malicious thread behavior detection framework using machine learning algorithms is proposed to classify whether user requests are malicious. The framework uses thread metrics of worker threads and N-gram frequencies of operations as its features. The approach is tested on a real-life scenario using the Random Forest, AdaBoost and Bagging ensemble learning algorithms and evaluated with different accuracy metrics. The malicious request detection accuracy of the proposed system is found to be 87.6%.
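The N-gram frequency features the abstract mentions can be illustrated with a short, stdlib-only sketch (the paper's actual feature pipeline and metrics are not public here; the operation names are made up). Each worker thread's operation trace is turned into relative frequencies of length-n subsequences, which can then be fed to an ensemble classifier such as Random Forest.

```python
from collections import Counter

def ngram_frequencies(ops, n=2):
    """Relative frequencies of length-n operation subsequences in a
    thread's trace, usable as one part of a classifier's feature vector."""
    grams = [tuple(ops[i:i + n]) for i in range(len(ops) - n + 1)]
    total = len(grams)
    return {g: c / total for g, c in Counter(grams).items()}
```

A trace that suddenly shows rare bigrams (e.g. many repeated file-delete operations) would then stand out from the frequency profile of benign requests.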

Paper Nr: 71
Title:

A Proposal for the Specification of Data Mining Services in Cloud Computing

Authors:

Manuel Parra-Royon and Jose M. Benitez

Abstract: For more than a decade, languages such as WSDL, SA-WSDL, OWL-S and others have been proposed to tackle the problem of service description. These service description languages do not take into account key aspects of cloud computing: inherent features such as interaction techniques between entities, service-level agreements or pricing are necessary when defining a cloud computing service. For cloud data mining services, specific issues of experimentation and the execution process should be included, among others. Following the Linked Data proposal, it is possible to design a specification for the exchange of data mining services and achieve the highest level of interoperability. In this paper we propose a schema for defining data mining services in cloud computing using Linked Data and validate its operation by defining a complete service. Our proposal is suitable for fully defining data mining services in a comprehensive approach, including all aspects associated with an on-demand cloud service.

Paper Nr: 92
Title:

Trading Network Performance for Cash in the Bitcoin Blockchain

Authors:

Enrico Tedeschi, Håvard D. Johansen and Dag Johansen

Abstract: Public blockchains have emerged as a plausible cloud-like substrate for applications that require resilient communication. However, sending messages over existing public blockchains can be cumbersome and costly, as miners require payment to establish consensus on the sequence of messages. In this paper we analyze the network performance of the Bitcoin public ledger when used as a messaging substrate. We present several real-world observations on its characteristics, transaction visibility, and fees paid to miners, and we propose two models for fee-cost estimation. We find that applications can, to some extent, improve messaging latency by paying transaction fees. We also suggest that spending should be kept below 300 Satoshi per byte.
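The fee arithmetic behind the 300 Satoshi-per-byte recommendation is straightforward; the sketch below is an illustration of that ceiling, not one of the paper's two estimation models. Bitcoin fees are paid per byte of serialized transaction, so the total fee and any spending beyond the suggested rate scale linearly with transaction size.

```python
SUGGESTED_MAX_RATE = 300  # Satoshi per byte, the ceiling suggested in the paper

def transaction_fee(tx_size_bytes, rate_sat_per_byte):
    """Total fee (in Satoshi) paid to miners for a transaction."""
    return tx_size_bytes * rate_sat_per_byte

def overspend(tx_size_bytes, rate_sat_per_byte):
    """Satoshi spent beyond the suggested ceiling (0 if within it),
    i.e. money the paper's findings suggest buys little extra latency."""
    excess = max(0, rate_sat_per_byte - SUGGESTED_MAX_RATE)
    return tx_size_bytes * excess
```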

Posters
Paper Nr: 97
Title:

Towards a Security-Aware Benchmarking Framework for Function-as-a-Service

Authors:

Roland Pellegrini, Igor Ivkic and Markus Tauber

Abstract: In a world where complexity increases on a daily basis, the Function-as-a-Service (FaaS) cloud model seems to offer countermeasures. In comparison to other cloud models, the fast-evolving FaaS increasingly abstracts the underlying infrastructure and refocuses on the application logic. This trend brings huge benefits for applications and performance, but comes with difficulties for benchmarking cloud applications. In this position paper, we present an initial investigation of benchmarking FaaS in close-to-reality production systems. Furthermore, we outline the architectural design, including the necessary benchmarking metrics. We also discuss the possibility of using the proposed framework for identifying security vulnerabilities.

Area 7 - Services Science

Full Papers
Paper Nr: 2
Title:

Utilising the Tor Network for IoT Addressing and Connectivity

Authors:

Felix W. Baumann, Ulrich Odefey, Sebastian Hudert, Michael Falkenthal and Uwe Breitenbücher

Abstract: Internet of Things (IoT) devices and cyber-physical systems (CPS) must be connected securely and reliably to some form of cloud environment or computing entity for control, management and utilisation. The Internet is a suitable, standardized, and proven means of connecting IoT devices in various scenarios: connection over the Internet uses existing protocols, standards and technologies and avoids investment in new, specialised concepts. Such a connection requires a transparent addressing schema, which is commonly TCP/IP, using domain names and IP addresses. However, in industrial, commercial and private networks, addressability and connectivity are often limited by firewalls, proxies and router configurations utilising NAT. Thus, present network configurations hinder the establishment of connections between IoT devices across different locations. The method proposed herein for connecting IoT devices in a client-server configuration therefore utilises the Tor (previously: The Onion Router) network for addressing of, and secured communication with, IoT and CPS devices. Tor is an overlay protocol designed to allow robust and anonymous communication. The benefit of this approach is to enable addressability and connectivity of IoT devices in firewalled and potentially unknown and changing network environments, thus allowing IoT devices to be used reliably behind firewalls as long as outgoing communication is not blocked.

Paper Nr: 25
Title:

Authorization-aware HATEOAS

Authors:

Marc Hüffmeyer, Florian Haupt, Frank Leymann and Ulf Schreier

Abstract: The architectural style named Representational State Transfer (REST) is nowadays widely established and still enjoys growing popularity. One of the core principles of REST is referred to as "Hypermedia as the Engine of Application State" (HATEOAS). HATEOAS is one of the foundations of the scalability that RESTful systems provide and enables the decoupling of client and server. But realizing HATEOAS is challenging, because there is no systematic approach for enforcing the constraint; the implementation is mostly left to the developer of a RESTful service. This work describes a new method of applying the HATEOAS constraint. We describe a method that systematically enables HATEOAS based on REST API models and the integration of access control mechanisms. In order to avoid unauthorized access attempts and unnecessary network traffic, resource representations are customized to the requesting subject: references leading to inaccessible resources are not included in the customized representation. To this end, an attribute-based access control mechanism is extended to distinguish between static and dynamic attributes. A 2-phase authorization procedure is introduced that relies on this distinction and determines the references that must be included in the resource representation. The result is a flexible realization of HATEOAS based on formal models.
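The idea of customizing hypermedia to the requesting subject can be sketched in a few lines. This is an illustration of the general pattern, not the paper's formal 2-phase procedure: a cheap check on static attributes (here, the subject's role) is tried first, and only if it is inconclusive is a dynamic attribute (here, resource ownership) consulted. Links the subject may not follow are simply dropped from the representation.

```python
def allowed(subject, link, owners):
    """Two-step authorization sketch for one hypermedia reference."""
    # phase 1: static attribute check (role only, no resource lookup)
    if subject["role"] == "admin":
        return True
    # phase 2: dynamic attribute check (needs resource state: ownership)
    return owners.get(link["href"]) == subject["id"]

def customize_representation(links, subject, owners):
    """Drop references the subject may not follow, so the client never
    sees links that would lead to an unauthorized access attempt."""
    return [l for l in links if allowed(subject, l, owners)]
```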

Paper Nr: 66
Title:

A Model-based Architecture for Autonomic and Heterogeneous Cloud Systems

Authors:

Hugo Bruneliere, Zakarea Al-Shara, Frederico Alvares, Jonathan Lejeune and Thomas Ledoux

Abstract: Over the last few years, Autonomic Computing has been a key enabler of Cloud systems' dynamic adaptation. However, autonomously managing complex systems (such as in the Cloud context) is not trivial and may quickly become fastidious and error-prone. We advocate that Cloud artifacts, regardless of the layer carrying them, share many common characteristics, making it possible to specify, (re)configure and monitor them in a homogeneous way. To this end, we propose a generic model-based architecture allowing the autonomic management of any Cloud system. From an "XaaS" model describing a given Cloud system, possibly over multiple layers of the Cloud stack, Cloud administrators can derive an autonomic manager for this system. This paper introduces the designed model-based architecture, and notably its core generic XaaS modeling language. It also describes the integration with a constraint solver to be used by the autonomic manager, as well as the interoperability with a Cloud standard (TOSCA). It presents an implementation (with its application on a multi-layer Cloud system) and compares the proposed approach with other existing solutions.

Short Papers
Paper Nr: 11
Title:

Reactive Through Services - Opinionated Framework for Developing Reactive Services

Authors:

Micael Pedrosa, Jorge Miguel and Carlos Costa

Abstract: Front-end development is inherently asynchronous, and for some time there was no established way to build it in a reactive manner. Reactive Programming is a paradigm of software development centered on asynchronous data streams. Lately, this paradigm has been getting a lot of visibility due to integrations with frameworks like Angular and React. However, there is an impedance mismatch (to borrow a term from electronics) between the reactive UI and the required server-side services. Since common REST services do not integrate well with this new paradigm, and JSON-RPC lacks some of the communication models described in this publication, we present a possible alternative: an opinionated framework which fills the gap between the reactive front-end and back-end services, maintaining the straightforward style of development that REST and JSON-RPC provide. This framework offers possibilities for new protocols and models, extending the basic REST and JSON-RPC models. It also delivers a reference implementation, certifying the viability of the proposal. Available at https://github.com/shumy/reactive-through-services.

Paper Nr: 27
Title:

Towards Multi-cloud SLO Evaluation

Authors:

Kyriakos Kritikos, Chrysostomos Zeginis, Andreas Paravoliasis and Dimitris Plexousakis

Abstract: A modern service-based application (SBA) operates in a cross-cloud, highly dynamic environment while comprising various components at different abstraction levels that might fail. To support cross-level SBA adaptation, a cross-cloud Service Level Objective (SLO) monitoring and evaluation system is required, able to produce the right events to trigger suitable adaptation actions. While most research focuses on SBA monitoring, SLO evaluation is usually restricted to a centralised, single-cloud form, not amenable to the heavy workloads that can occur in a complex SBA system. Thus, a fast and scalable event generation and processing system is needed, able to scale well to handle such a load. Such a system must address cross-level event composition, suitable for detecting complex problematic situations. This paper closes this gap by proposing a novel complex event processing framework, scalable and distributable across the whole SBA architecture. This framework can cover any kind of event combination, no matter how complex. It also supports event pattern management while exploiting a publish-subscribe mechanism to: (a) synchronise with the modification of adaptation rules directly involving these event patterns; (b) enable decoupling from an SBA management system.

Paper Nr: 56
Title:

Using Machine Learning for Recommending Service Demand Estimation Approaches - Position Paper

Authors:

Johannes Grohmann, Nikolas Herbst, Simon Spinner and Samuel Kounev

Abstract: Service demands are key parameters in service and performance modeling. Hence, a variety of different approaches to service demand estimation exist in the literature. However, given a specific scenario, it is not trivial to select the currently best approach, since deep expertise in statistical estimation techniques is required and the requirements and characteristics of the application scenario might change over time (e.g., through varying load patterns). To tackle this problem, we propose the use of machine learning techniques to automatically recommend the most suitable approach for the target scenario. The approach works in an online fashion and can incorporate new measurement data and changing characteristics on-the-fly. Preliminary results show that executing only the recommended estimation approach achieves 99.6% accuracy compared to executing all available approaches, while speeding up the estimation time by 57%.

Paper Nr: 73
Title:

Designing and Implementing Elastically Scalable Services - A State-of-the-art Technology Review

Authors:

Kiyana Bahadori and Tullio Vardanega

Abstract: The prospect of fast and affordable on-demand service delivery over the Internet proceeds from the very notion of Cloud Computing. For service providers, the ability to afford those benefits to the user is contingent on attaining rapid elasticity in service design and implementation, which remains a very open research goal. With a view to this challenge, this paper draws a trajectory that, starting from a better understanding of the principal service design features, relates them to the microservice architectural style and its implications for elastic scalability, most notably dynamic orchestration, and concludes by reviewing how well state-of-the-art technology fares for their implementation.

Paper Nr: 74
Title:

Building Cloud Data Interchange Services for E-Learning Systems: Applications on the Moodle System

Authors:

Alina Andreica and Fernando Paulo Belfo

Abstract: Cloud data interchange services among various information systems still have considerable room to evolve. We focus on applying data interchange principles that we have proposed for cloud environments in order to perform data exchange for e-learning systems. Our case study is based on the Moodle system. Developing and improving data interchange standards for learning objects helps improve the techniques for processing and exchanging digital content used for teaching, learning, or training among various e-learning systems. Equivalence algorithms and canonical representations are used in order to ensure uniform representation in the cloud database. The solution we describe, designed for cloud architectures, has important advantages for communication between e-learning systems and cooperation between educational institutions, since different institutions, using different e-learning systems, have no automatic means of exchanging learning information. Therefore, these techniques for representing and exchanging learning objects facilitate their sharing among different stakeholders.

Posters
Paper Nr: 59
Title:

The Ontologically based Model for the Integration of the IoT and Cloud ERP Services

Authors:

Darko Andročec, Ruben Picek and Marko Mijač

Abstract: On-premise and Cloud ERP systems have become a backbone of almost all businesses. Another recent trend, currently in the focus of both industry and academia, is the Internet of Things. The integration of Cloud ERP and the Internet of Things (IoT) should be seen as a new shift in business effectiveness and will gain great momentum in the future. In this work, we propose an ontologically based model for the integration of IoT and Cloud ERP systems by using Semantic Web services. To semantically annotate things as services, we plan to use the recently published W3C SSN and SOSA ontologies. Furthermore, we plan to extend these ontologies to include a classification and descriptions of Cloud ERP APIs. Our integration model proposes the use of Semantic Web services and AI planning techniques to semi-automatically compose IoT and Cloud ERP services.

Area 8 - Cloud Computing Platforms and Applications

Full Papers
Paper Nr: 10
Title:

AUTOGENIC: Automated Generation of Self-configuring Microservices

Authors:

Stefan Kehrer and Wolfgang Blochinger

Abstract: The state of the art proposes the microservices architectural style for building applications. Additionally, container virtualization and container management systems have evolved into the perfect fit for developing, deploying, and operating microservices in line with the DevOps paradigm. Container virtualization facilitates deployment by ensuring independence from the runtime environment. However, microservices store their configuration in the environment. Therefore, software developers have to wire their microservice implementation to technologies provided by the target runtime environment, such as configuration stores and service registries. These technological dependencies counteract the portability benefit of using container virtualization. In this paper, we present AUTOGENIC - a model-based approach to assist software developers in building microservices as self-configuring containers without being bound to operational technologies. We provide developers with a simple configuration model to specify the configuration operations of containers and automatically generate a self-configuring microservice tailored to the targeted runtime environment. Our approach is supported by a method which describes the steps to automate the generation of self-configuring microservices. Additionally, we present and evaluate a prototype which leverages the emerging TOSCA standard.

Paper Nr: 12
Title:

Evaluating the User Acceptance Testing for Multi-tenant Cloud Applications

Authors:

Victor Hugo Santiago C. Pinto, Ricardo R. Oliveira, Ricardo F. Vilela and Simone R. S. Souza

Abstract: SaaS (Software as a Service) is a service delivery model in which an application can be provided on demand via the Internet. Multi-tenant architecture is essential for SaaS because it enables multiple customers, so-called tenants, to share the system's resources in a transparent way to reduce costs and customize the software layer, resulting in variant applications. Despite the popularity of this model, there have been few evaluations of software testing in cloud computing. Many researchers argue that traditional software testing may not be a suitable way of validating cloud applications, owing to their high degree of customization, dynamic environment and multi-tenancy. User Acceptance Testing (UAT) evaluates the external quality of a product and complements previous testing activities. The main focus of this paper is on investigating the ability of parallel and automated UAT to detect faults with regard to the number of tenants. Thus, our aim is to evaluate to what extent the ability to detect faults varies when a different number of variant applications is executed. A case study was designed with a multi-tenant application called iCardapio and a testing framework created through Selenium and JUnit extensions. The results showed a significant difference in the number of detected faults between single-tenant and multi-tenant test scenarios.

Paper Nr: 29
Title:

HIOBS: A Block Storage Scheduling Approach to Reduce Performance Fragmentation in Heterogeneous Cloud Environments

Authors:

Denis M. Cavalcante, Flávio R. C. Sousa, Manoel Rui P. Paula, Eduardo Rodrigues, José S. Costa Filho, Javam C. Machado and Neuman Souza

Abstract: Cloud computing is a highly successful paradigm of service-oriented computing and has revolutionized the usage of computing infrastructure. In cloud storage, the service user has requirements regarding availability and performance. Since cloud resources are shared among multiple tenants, a service level agreement (SLA) is defined between the service provider and the user. The multi-tenant character of the cloud implies a heterogeneity of SLA requirements across users. At the same time, cloud providers upgrade their infrastructure with modern storage nodes to serve data-driven applications, resulting in a heterogeneous cloud environment. The heterogeneity of both SLA requirements and storage resources makes the volume scheduling problem complex with respect to guaranteeing SLAs. This paper presents HIOBS, an SLA-aware approach for block storage scheduling in heterogeneous cloud environments that reduces the performance fragmentation of the available storage resources, thus increasing the chances that new SLA requirements can be met. We demonstrate through experiments that our method reduces the rate of SLA violations by more than 40% while using fewer storage nodes.

Paper Nr: 45
Title:

Transparent Interoperability Middleware between Data and Service Cloud Layers

Authors:

Elivaldo Lozer Fracalossi Ribeiro, Marcelo Aires Vieira, Daniela Barreiro Claro and Nathale Silva

Abstract: Over the years, many organizations have been using cloud computing services to persist, consume and provide data. Models such as Software as a Service (SaaS), Data as a Service (DaaS), and Database as a Service (DBaaS) are consumed on demand to serve specific purposes. In summary, SaaS is a delivery model for applications, while DaaS and DBaaS are models for providing data and database management systems on demand, respectively. SaaS applications require additional effort to access those data due to their heterogeneity: non-structured (e.g. text), semi-structured (e.g. XML, JSON), and structured formats (e.g. relational databases). Consequently, the lack of standardization of DaaS and DBaaS leads to a lack of interoperability among cloud layers. In this paper, we propose the middleware MIDAS (Middleware for DaaS and SaaS) to provide transparent interoperability between the service (SaaS) and data layers (DaaS and DBaaS). Our current version of MIDAS addresses two important issues: (i) a formal description of our middleware and (ii) joining data from different DaaS and DBaaS sources. To evaluate our middleware, we provide a set of experiments covering functionality, execution time, overhead, and interoperability. Our results demonstrate the effectiveness of our approach in addressing interoperability concerns in cloud computing environments.
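The middleware's second issue, joining data coming from different data services, amounts to a join over record collections once each source's format has been normalized. The hash-join below is a generic sketch of that step, not MIDAS code; the record shapes and the key field are illustrative.

```python
def join_sources(left, right, key):
    """Hash-join sketch over two record collections (e.g. rows decoded
    from different DaaS/DBaaS responses), merged on a shared key field."""
    index = {}
    for row in right:                       # build side
        index.setdefault(row[key], []).append(row)
    joined = []
    for row in left:                        # probe side
        for match in index.get(row[key], []):
            merged = dict(row)
            merged.update({k: v for k, v in match.items() if k != key})
            joined.append(merged)
    return joined
```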

Paper Nr: 50
Title:

Performance and Cost Analysis Between On-Demand and Preemptive Virtual Machines

Authors:

Breno G. S. Costa, Marco Antonio Sousa Reis, Aletéia P. F. Araújo and Priscila Solis

Abstract: A few years ago, Amazon Web Services introduced spot instances: transient servers that can be contracted at a significant discount over the regular price, but whose availability depends on cloud provider criteria, so that an instance can be revoked at any time. Google Cloud Platform offers preemptible instances, transient servers with similar behavior and discount levels to spot instances. Both providers advertise that their transient servers have the same performance level as servers contracted on demand. Even with the possibility of revocation at the provider's discretion, some applications can benefit from the low prices charged for these servers. But the measured performance of both models, transient and on-demand, must be similar, and the applications must survive occasional or mass server revocations. This work compares the performance and costs of transient and on-demand servers from both providers. Results show no significant difference in measured performance, but a real cost advantage in using transient servers. On Amazon Web Services, a MapReduce cluster composed of transient servers achieved a 68% discount compared to the same cluster based on on-demand servers. On Google Cloud Platform, the discount achieved was 26%, but it can be larger for larger clusters.

Paper Nr: 57
Title:

Deadline-constrained Stochastic Optimization of Resource Provisioning, for Cloud Users

Authors:

Masoumeh Tajvidi, Daryl Essam and Michael J. Maher

Abstract: Acquiring computational resources dynamically, in response to demand, and paying only for the resources used is the main benefit cloud computing offers its customers. However, this benefit can only be realized when customers can determine the right size of the required resources and allocate them in a cost-effective way. While over-provisioning resources costs users more than necessary, under-provisioning hurts application performance. To leverage the potential of clouds, a major concern is hence optimizing the monetary cost of using cloud resources while ensuring quality of service (QoS) and meeting deadlines. Unfortunately, there is still a lack of good understanding of such cost optimization. Resource provisioning, from the cloud-user perspective, is a complicated optimization problem that involves much uncertainty, as well as heterogeneity in its parameters; the variety of pricing plans further complicates it. There has been little work on solving this problem as it occurs in the real world and from the end user's view: most works relax the problem by ignoring the dynamicity or heterogeneity of the environment. The aim of this paper is to optimize the operational cost whilst guaranteeing performance and meeting deadline constraints, by taking parameter uncertainty and heterogeneity into account and considering all three available pricing plans, i.e. on-demand, reservation, and spot pricing. An experimental implementation using a real cloud workload shows that, although the proposed model does not have perfect foresight of the future, its results are very close, and in many cases identical, to those of the full-knowledge model. We also analyse the results for various users with different workload patterns, based on k-means clustering.
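A deterministic toy version of the three-plan comparison makes the trade-off concrete. This is not the paper's stochastic model: all rates below are hypothetical, and spot revocation risk and demand uncertainty are deliberately ignored, so the sketch only shows why the plan choice depends on how many hours of compute are actually needed.

```python
def plan_costs(hours_needed, prices):
    """Cost of a workload under each pricing plan. `prices` holds
    hypothetical rates: on-demand $/h; reservation (upfront fee plus a
    discounted $/h); spot $/h (assumed available for the whole run)."""
    return {
        "on_demand": hours_needed * prices["on_demand"],
        "reservation": prices["upfront"] + hours_needed * prices["reserved_hourly"],
        "spot": hours_needed * prices["spot"],
    }

def cheapest_plan(hours_needed, prices):
    costs = plan_costs(hours_needed, prices)
    return min(costs, key=costs.get)
```

Under these assumptions spot is cheapest for short runs, while the reservation's upfront fee amortizes only over long usage; the paper's contribution is making this choice under uncertainty about demand, deadlines and spot availability rather than with known hours.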

Paper Nr: 87
Title:

Architectural Patterns for Microservices: A Systematic Mapping Study

Authors:

Davide Taibi, Valentina Lenarduzzi and Claus Pahl

Abstract: Microservices is an architectural style that is increasing in popularity. However, there is still a lack of understanding of how to adopt a microservice-based architectural style. We aim at characterizing different microservice architectural style patterns and the principles that guide their definition. We conducted a systematic mapping study to identify reported usages of microservices and, based on these use cases, extract common patterns and principles. We present two key contributions. Firstly, we identified several agreed microservice architecture patterns that seem widely adopted and are reported in the identified case studies. Secondly, we present these as a catalogue in a common template format, including a summary of the advantages, disadvantages, and lessons learned for each pattern from the case studies. We conclude that different architecture patterns emerge for different migration, orchestration, storage and deployment settings, for a set of agreed principles.

Short Papers
Paper Nr: 9
Title:

Coordinating Vertical Elasticity of both Containers and Virtual Machines

Authors:

Yahya Al-Dhuraibi, Faiez Zalila, Nabil Djarallah and Philippe Merle

Abstract: Elasticity is a key feature of cloud computing, as it enables the automatic and timely provisioning and deprovisioning of computing resources. To achieve elasticity, clouds rely on virtualization techniques including Virtual Machines (VMs) and containers. While many studies address the vertical elasticity of VMs, and a few others handle the vertical elasticity of containers, no work manages the coordination between these two vertical elasticities. In this paper, we present the first approach to coordinating the vertical elasticity of both VMs and containers. We propose an auto-scaling technique that allows containerized applications to adjust their resources at both the container and VM levels. This work has been evaluated and validated using the RUBiS benchmark application. The results show that our approach reacts quickly and improves application performance. Our coordinated elasticity controller outperforms a container vertical elasticity controller by 18.34% and a VM vertical elasticity controller by 70%. It also outperforms container horizontal elasticity by 39.6%.
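The coordination the abstract describes can be caricatured in a few lines: a container's resource limit can only grow within the capacity of the VM hosting it, so a coordinated controller must resize the VM before raising the container limit past it. The sketch below is an illustration of that control logic only (abstract CPU units, a fixed VM resize step), not the authors' controller.

```python
def scale(container_limit, vm_capacity, demand, vm_step=2.0):
    """Coordinated vertical-scaling sketch: grow the container's CPU
    limit while the hosting VM has headroom; resize the VM first when
    the needed limit would exceed what the VM can host."""
    if demand <= container_limit:
        return container_limit, vm_capacity    # enough resources already
    if demand <= vm_capacity:
        return demand, vm_capacity             # container-level scale-up only
    while vm_capacity < demand:                # VM-level scale-up, then
        vm_capacity += vm_step                 # raise the container limit
    return demand, vm_capacity
```

A container-only controller would get stuck at `vm_capacity`, and a VM-only controller would leave the container limit untouched; acting at both levels is what the coordinated approach adds.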

Paper Nr: 16
Title:

Sensor Network Modeling as a Service

Authors:

Anca Daniela Ionita, Florin Daniel Anton and Adriana Olteanu

Abstract: Cloud Computing opens new possibilities of service provisioning for sensor networks, which become necessary as these networks grow more pervasive and distributed. This paper introduces a Model-as-a-Service approach and a modeling language specifically defined for representing sensor network architectures, based on a four-phase method. It describes the metamodel and the corresponding environment for graphical modeling, with examples of sensor network models for road traffic monitoring. The sensor modeling environment was integrated on a private Cloud platform, within a virtual machine template, to provide sensor network modeling as a service, which is currently available to our university students. This serves as a foundation for delivering new services based on the interpretation of the resulting sensor network architecture models.

Paper Nr: 21
Title:

Comparison Between Bare-metal, Container and VM using Tensorflow Image Classification Benchmarks for Deep Learning Cloud Platform

Authors:

Chan-Yi Lin, Hsin-Yu Pai and Jerry Chou

Abstract: The recent success of AI is largely attributable to the adoption of deep learning in decision-making processes. To harness the power of deep learning, developers must rely not only on a computing framework, but also on a cloud platform to ensure resource utilization and computing performance while easing the burden on users. Hence, "how should cloud resources be orchestrated for deep learning?" becomes a fundamental question for cloud providers. In this work, we built an in-house OpenStack cloud platform to enable various resource orchestrations, including virtual machine, container, and bare-metal. We then systematically evaluate the performance of these orchestration choices using Tensorflow image classification benchmarks to quantify their performance impact and discuss the challenges of addressing these performance issues.

Paper Nr: 32
Title:

Debugging Remote Services Developed on the Cloud

Authors:

M. Subhi Sheikh Quroush and Tolga Ovatman

Abstract: Cloud-based development platforms are becoming more widely used as cloud services become more available and the performance of such platforms increases. One of the key issues in providing a cloud-based development platform is to enable developers to debug their code just as efficiently and effectively as they would in a desktop IDE session. However, especially when a remote service is being developed, the debugging client and the server running the actual code are separated, exposing many problems that are not present in a usual debugging session. This paper proposes a record/replay approach to deal with the problems of remote debugging. To keep the communication overhead of the proposed approach as small as possible, the debugger saves variable values only for external data accesses, such as retrieving data from a database query or a web service call. The proposed approach is integrated into a real-world cloud-based development platform, and the run-time overhead is measured on real-world case studies to demonstrate its usefulness.

Paper Nr: 44
Title:

PopRing: A Popularity-aware Replica Placement for Distributed Key-Value Store

Authors:

Denis M. Cavalcante, Victor A. Farias, Flavio R. C. Sousa, Manoel Rui P. Paula, Javam C. Machado and Neuman Souza

Abstract: Distributed key-value stores (KVS) are a well-established approach for cloud data-intensive applications, but they were not designed for workloads with data access skew, mainly caused by popular data. In this work, we analyze the problem of replica placement on KVS for workloads with data access skew. We formally define our problem as a multi-objective optimization and present the PopRing approach, based on a genetic algorithm, to find a new replica placement scheme. We also use OpenStack-Swift as the baseline to evaluate the performance improvements of PopRing under different configurations. A moderate PopRing configuration reduced load imbalance by 52% and replica placement maintenance by 32%, while requiring the reconfiguration (data movement) of only 6% of total system data.
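The genetic-algorithm search for a placement scheme mentioned in this abstract can be illustrated with a minimal sketch. This is not the PopRing implementation: the single popularity-imbalance objective, crossover/mutation rates, and population sizes are hypothetical simplifications of the paper's multi-objective formulation.

```python
import random

def imbalance(placement, popularity, n_nodes):
    """Per-node load = sum of popularity weights of items placed there;
    the objective is the total deviation from the mean load."""
    load = [0.0] * n_nodes
    for item, node in enumerate(placement):
        load[node] += popularity[item]
    mean = sum(load) / n_nodes
    return sum(abs(l - mean) for l in load)

def evolve(popularity, n_nodes, pop_size=30, generations=200, seed=1):
    rng = random.Random(seed)
    n_items = len(popularity)
    # Each individual maps item index -> node index.
    pop = [[rng.randrange(n_nodes) for _ in range(n_items)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: imbalance(p, popularity, n_nodes))
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_items)
            child = a[:cut] + b[cut:]             # one-point crossover
            if rng.random() < 0.2:                # mutation: move one item
                child[rng.randrange(n_items)] = rng.randrange(n_nodes)
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda p: imbalance(p, popularity, n_nodes))
```

With one very popular item (access skew), the search converges to a placement that balances popularity-weighted load rather than item count.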

Paper Nr: 54
Title:

About being the Tortoise or the Hare? - A Position Paper on Making Cloud Applications too Fast and Furious for Attackers

Authors:

Nane Kratzke

Abstract: Cloud applications expose, besides service endpoints, potential or actual vulnerabilities, and attackers have several advantages on their side: they can select the weapons, the point of time, and the point of attack. Very often, cloud application security engineering efforts focus on hardening the fortress walls but seldom assume that attacks may be successful. So, cloud applications rely on their defensive walls but seldom attack intruders actively. Biological systems are different: they accept that defensive "walls" can be breached at several layers and therefore make use of an active and adaptive defense system to attack potential intruders - an immune system. This position paper proposes such an immune-system-inspired approach to ensure that even undetected intruders can be purged out of cloud applications. This makes it much harder for intruders to maintain a presence on victim systems. Evaluation experiments with popular cloud service infrastructures (Amazon Web Services, Google Compute Engine, Azure, and OpenStack) showed that this can minimize the undetected acting period of intruders down to minutes.

Paper Nr: 60
Title:

Constructive Privacy for Shared Genetic Data

Authors:

Fatima-zahra Boujdad and Mario Sudholt

Abstract: The need for sharing genetic data, for instance in genome-wide association studies, is growing incessantly. In parallel, serious privacy concerns arise from multi-party access to genetic information. Several techniques, such as encryption, have been proposed as solutions for the privacy-preserving sharing of genomes. However, existing programming means support neither guarantees for privacy properties nor the performance optimization of genetic applications involving shared data. We propose two contributions in this context. First, we present new architectures for cloud-based genetic applications that are motivated by the needs of geneticists. Second, we propose a model and implementation for the composition of watermarking with encryption, fragmentation, and client-side computations for the secure and privacy-preserving sharing of genetic data in the cloud.

Paper Nr: 61
Title:

Scalable Data Placement of Data-intensive Services in Geo-distributed Clouds

Authors:

Ankita Atrey, Gregory Van Seghbroeck, Bruno Volckaert and Filip De Turck

Abstract: The advent of big data analytics and cloud computing technologies has resulted in wide-spread research on the data placement problem, which aims at properly placing data items into distributed datacenters. Although uniformly partitioning data across distributed nodes is the de facto standard for many popular distributed data stores like HDFS or Cassandra, such methods may cause network congestion for data-intensive services, thereby affecting system throughput. This is because, as opposed to MapReduce-style workloads, data-intensive services require access to multiple datasets within each transaction. In this paper, we propose a scalable method for placing the data of data-intensive services into geographically distributed clouds. The proposed algorithm partitions a set of data items into geo-distributed clouds using spectral clustering on hypergraphs. Additionally, our spectral clustering algorithm leverages randomized techniques for obtaining low-rank approximations of the hypergraph matrix, thereby facilitating superior scalability for computing the spectra of the hypergraph Laplacian. Experiments on a real-world trace-based online social network dataset show that the proposed algorithm is effective, efficient, and scalable. Empirically, it is comparable or even better (in certain scenarios) in efficacy on the evaluated metrics, while being up to 10 times faster in running time when compared to state-of-the-art techniques.
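The randomized low-rank approximation underpinning this kind of scalable spectral method can be sketched with a standard randomized range finder (in the style of Halko et al.). This is a generic illustration, not the paper's algorithm; it approximates an arbitrary matrix rather than a hypergraph Laplacian, and the oversampling value is an assumption.

```python
import numpy as np

def randomized_low_rank(A, k, oversample=5, seed=0):
    """Rank-k approximation of A via a randomized range finder:
    sample the range of A with a Gaussian test matrix, then compute
    an exact SVD of the resulting small matrix."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Y = A @ Omega                                     # sample the range of A
    Q, _ = np.linalg.qr(Y)                            # orthonormal range basis
    B = Q.T @ A                                       # small (k+p) x n matrix
    U_b, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U_b)[:, :k], s[:k], Vt[:k, :]
```

The expensive full decomposition is replaced by an SVD of a matrix with only k + oversample rows, which is what makes computing the leading spectrum scale to large inputs.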

Paper Nr: 65
Title:

A Cloud-aware Autonomous Workflow Engine and Its Application to Gene Regulatory Networks Inference

Authors:

Arnaud Bonnaffoux, Eddy Caron, Hadrien Croubois and Olivier Gandrillon

Abstract: With the recent development of commercial Cloud offers, Cloud solutions are today the obvious choice for many computing use-cases. However, high-performance scientific computing is still among the few domains where the Cloud raises more issues than it solves. Notably, combining the workflow representation of complex scientific applications with the dynamic allocation of resources in a Cloud environment is still a major challenge. In the meantime, users with monolithic applications face challenges when trying to move from classical HPC hardware to elastic platforms. In this paper, we present the structure of an autonomous workflow manager dedicated to IaaS-based Clouds (Infrastructure as a Service) with DaaS storage services (Data as a Service). The solution proposed in this paper fully handles the execution of multiple workflows on a dynamically allocated shared platform. As a proof of concept, we validate our solution through a biological application with the WASABI workflow.

Paper Nr: 83
Title:

Co-Transformation to Cloud-Native Applications - Development Experiences and Experimental Evaluation

Authors:

Josef Spillner, Yessica Bogado, Walter Benítez and Fabio López-Pires

Abstract: Modern software applications following cloud-native design principles and architecture guidelines have inherent advantages in fulfilling current user requirements when executed in complex scheduled environments. Engineers responsible for software applications therefore have an intrinsic interest in migrating to cloud-native architectures. Existing methodologies for transforming legacy applications do not yet consider migration from partly cloud-enabled and cloud-aware applications under continuous development. This work thus introduces a co-transformation methodology and validates it through the migration of a prototypical music identification and royalty collection application. Experimental results demonstrate that the proposed methodology is capable of effectively guiding a transformation process, resulting in elastic and resilient cloud-native applications. Findings include the necessity of maintaining application self-management even on modern cloud platforms.

Paper Nr: 85
Title:

OneDataShare - A Vision for Cloud-hosted Data Transfer Scheduling and Optimization as a Service

Authors:

Asif Imran, Md S. Q. Zulkar Nine, Kemal Guner and Tevfik Kosar

Abstract: Fast, reliable, and efficient data transfer across wide-area networks is a predominant bottleneck for data-intensive cloud applications. This paper introduces OneDataShare, which is designed to eliminate the issues plaguing effective cloud-based data transfers of varying file sizes and across incompatible transfer end-points. The vision of OneDataShare is to achieve high-speed data transfer, interoperability between multiple transfer protocols, and accurate estimation of delivery time for advance planning, thereby maximizing user profit through improved and faster data analysis for business intelligence. The paper elaborates on the desirable features of OneDataShare as a cloud-hosted data transfer scheduling and optimization service, and how it is aligned with the vision of harnessing the power of the cloud and distributed computing. Experimental evaluation and comparison with existing real-life file transfer services show that the transfer throughput achieved by OneDataShare is up to 6.5 times greater compared to other approaches.

Posters
Paper Nr: 42
Title:

A Data-Science-as-a-Service Model

Authors:

Matthias Pohl, Sascha Bosse and Klaus Turowski

Abstract: The keen interest in data analytics, as well as its highly complex and time-consuming implementation, leads to an increasing demand for services of this kind. Several approaches claim to provide data analytics functions as a service; however, they do not perform the data analysis itself and provide only an infrastructure, a platform, or a software service. This paper presents a Data-Science-as-a-Service model that covers all of the related tasks in data analytics and, in contrast to former technical considerations, takes a problem-centric and technology-independent approach. The described model enables customers to categorize terms in data analytics environments.

Area 9 - Cloud Computing Enabling Technology

Full Papers
Paper Nr: 38
Title:

Detection of Access Control Violations in the Secure Sharing of Cloud Storage

Authors:

Carlos André Batista de Carvalho, Rossana Maria de Castro Andrade, Nazim Agoulmine and Miguel Franklin de Castro

Abstract: A cloud storage service implements security mechanisms to protect users' data, including an access control mechanism to enable data sharing. Thus, it is possible to define user permissions, granting access only to authorized users. Existing solutions consider that the provider is honest but curious, so the designed mechanisms prevent the provider from accessing the files. However, the possibility of executing illegal transactions is not analyzed, and a malicious provider can perform transactions requested by unauthorized users, resulting in access control violations. In this paper, we propose monitoring and auditing mechanisms to detect these violations. As a result, new attacks are identified, especially those resulting from write actions requested by users whose permissions were revoked. Colored Petri Nets (CPNs) are used to model and validate our proposal.

Paper Nr: 68
Title:

Cost-efficient Datacentre Consolidation for Cloud Federations

Authors:

Gabor Kecskemeti, Andras Markus and Attila Kertesz

Abstract: Cloud Computing has become mature enough to enable the virtualized management of multiple datacentres. Datacentre consolidation is an important method for the efficient operation of such distributed infrastructures. Several approaches have been developed to improve efficiency, e.g. in terms of power consumption, but little attention has been paid to combining pricing methods with consolidation techniques. In this paper, we discuss how we introduced cost models into the DISSECT-CF simulator to foster the development of cost-efficient datacentre consolidation solutions. We also exemplify the usage of this extended simulator by performing cost-aware datacentre consolidation. We apply real-world traces to simulate cloud load and propose seven strategies to address the problem.

Short Papers
Paper Nr: 47
Title:

PacificClouds: A Flexible MicroServices based Architecture for Interoperability in Multi-Cloud Environments

Authors:

Juliana Oliveira de Carvalho, Fernando Trinta and Dario Vieira

Abstract: Cloud Computing has become a popular IT service delivery model in recent years. While the cloud brings several benefits, some challenges still need to be overcome to apply the cloud model in certain scenarios. One such problem is so-called vendor lock-in: different cloud providers offer peculiar and often incompatible services, which makes the automatic migration of applications between cloud providers impossible. This issue becomes even more problematic for future applications composed of services or components hosted by different cloud providers in a multi-cloud environment. Dealing with vendor lock-in in multiple clouds requires addressing two important challenges: interoperability and portability. Some solutions have been proposed to deal with both problems, but most of them fail to provide flexibility. Therefore, we propose PacificClouds, a novel microservices-based architecture for addressing interoperability in a multi-cloud environment. PacificClouds differs from previous works by providing greater flexibility due to the microservices architectural pattern. In this article, we also propose a definition of microservices and a comparative analysis of the works related to PacificClouds. Finally, we show the main challenges of PacificClouds and point out future directions.

Paper Nr: 86
Title:

The Importance of Being OS-aware - In Performance Aspects of Cloud Computing Research

Authors:

Tommaso Cucinotta, Luca Abeni, Mauro Marinoni and Carlo Vitucci

Abstract: This paper highlights inefficiencies in modern cloud infrastructures due to the distance between research on high-level cloud management and orchestration and research on low-level kernel and hypervisor mechanisms. Our position on this issue is that more research is needed to make these two worlds talk to each other, providing richer abstractions to describe the low-level mechanisms and automatically mapping higher-level descriptions and abstractions to the configuration and performance tuning options available within operating systems and kernels (both host and guest), as well as hypervisors.

Paper Nr: 96
Title:

A Novel Green Service Level Agreement for Cloud Computing using Fuzzy Logic

Authors:

Awatif Ragmani, Amina El Omri, Noreddine Abghour, Khalid Moussaid and Mohamed Rida

Abstract: Cloud computing has several features, including elasticity and economies of scale, that have allowed it to find numerous uses in scientific, economic, and industrial fields. However, the Cloud model relies heavily on the architecture of the data centers that host Cloud services. To date, these data centers consume huge quantities of electrical energy to ensure the operation of both computing equipment and auxiliary equipment such as cooling systems. In order to reduce the environmental and economic impact of this energy consumption, several initiatives have emerged, especially in the context of Green computing. In this article, we propose a Cloud architecture that includes a service level agreement negotiation module based on fuzzy logic. The proposed solution introduces a virtual machine consolidation policy in order to complete a global three-tier architecture. The final solution includes different load balancing and scheduling modules based on fuzzy logic, metaheuristics, and MapReduce algorithms in order to optimize the energy efficiency, response time, and cost of Cloud services.
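The fuzzy-logic negotiation idea in this abstract can be illustrated with a minimal sketch: crisp metrics are fuzzified through membership functions, combined by Mamdani-style rules, and defuzzified into a single score. All linguistic variables, thresholds, and output singletons below are hypothetical examples, not values from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def sla_priority(response_ms, energy_kw):
    """Score in [0, 1] for how urgently an SLA should be renegotiated.
    Fuzzification with illustrative thresholds:"""
    slow = triangular(response_ms, 200, 500, 800)
    fast = triangular(response_ms, 0, 100, 300)
    high_energy = triangular(energy_kw, 50, 100, 150)
    # Mamdani-style inference: AND = min.
    urgent = min(slow, high_energy)   # slow AND energy-hungry -> renegotiate
    relaxed = fast                    # fast -> leave the SLA alone
    # Weighted-average defuzzification over singleton outputs 0.9 and 0.2.
    if urgent + relaxed == 0:
        return 0.5                    # no rule fires: neutral score
    return (0.9 * urgent + 0.2 * relaxed) / (urgent + relaxed)
```

A slow, energy-hungry service thus gets a high renegotiation priority, while a fast one scores low; the smooth membership functions avoid the hard cut-offs of threshold-based policies.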

Posters
Paper Nr: 3
Title:

Viability of Small-Scale HPC Cloud Infrastructures

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: RDMA networking has historically been linked closely and almost exclusively with HPC infrastructures. However, as demand for RDMA networking increases in fields outside of HPC, such as with Hadoop in the Big Data space, an increasing number of organisations are exploring methods of introducing merged HPC and cloud platforms into their daily operations. This paper explores the benefits of RDMA over traditional TCP/IP networking, and considers the challenges faced in the areas of storage and networking from the perspectives of integration, management and performance. It also explores the overall viability of building such a platform, providing a suitable hardware infrastructure for a fictional case study business.

Paper Nr: 88
Title:

Risk Perception of Migrating Legacy Systems to the Cloud

Authors:

Breno G. S. Costa and Priscila Solis

Abstract: With the adoption of Cloud Computing as a way to provide Information Technology services, organizations can migrate legacy systems to the cloud in order to reap several benefits. The literature contains several proposals to model the critical elements of migration, and many have been validated by specific case studies. Several reference models have also been defined on top of these proposals, created with the intention of consolidating different research efforts and expanding their applicability. Based on the above, this paper selects a reference model for migrating legacy systems to the cloud and proposes a method for calculating a score of perceived risk associated with each legacy system migration. The paper presents a proof of concept in the government domain to show the method's applicability.