CLOSER 2020 Abstracts


Area 1 - Cloud Computing Fundamentals

Short Papers
Paper Nr: 15
Title:

Security for Distributed Smart Meter: Blockchain-based Approach, Ensuring Privacy by Functional Encryption

Authors:

Artem Yurchenko, Mahbuba Moni, Daniel Peters, Jan Nordholz and Florian Thiel

Abstract: Today, the trend towards completely distributed measuring devices is progressing, and an increasing number of measuring instruments already have a cloud connection. This development requires new solutions to cover the requirements laid down by legal metrology. These new challenges could be tackled by designing innovative solutions which extend and merge novel technologies. The aim of this publication is to use blockchain technology and functional encryption to develop a model of a secure smart metering system, demonstrating the capabilities and limitations of these technologies in a practical scenario in the framework of legal metrology.
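To illustrate the tamper-evidence property the abstract builds on, the following is a minimal hash-chained ledger sketch. All record fields are hypothetical, and the paper's actual scheme (including functional encryption) is not reproduced here.

```python
import hashlib, json

# Illustrative only: hash-chained metering records. Field names ("kwh") and
# the genesis hash are made-up assumptions, not taken from the paper.

def make_block(reading, prev_hash):
    """Create a block whose hash covers the reading and the previous hash."""
    block = {"reading": reading, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute each block's hash and check the links between blocks."""
    for prev, cur in zip(chain, chain[1:]):
        body = {"reading": cur["reading"], "prev": cur["prev"]}
        if cur["prev"] != prev["hash"] or cur["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
    return True

chain = [make_block({"kwh": 0.0}, "0" * 64)]
for kwh in (1.2, 2.9):
    chain.append(make_block({"kwh": kwh}, chain[-1]["hash"]))
print(verify(chain))  # True: untampered chain
chain[1]["reading"]["kwh"] = 999  # tamper with a stored reading
print(verify(chain))  # False: the stored hash no longer matches
```

Any modification of a stored reading invalidates that block's hash and, transitively, every later link, which is the property a blockchain-backed meter log exploits.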

Paper Nr: 23
Title:

Network Traffic Characterization in the Control Network of OpenStack based on Virtual Machines State Changes

Authors:

Adnei W. Donatti, Guilherme P. Koslovski, Maurício A. Pillon and Charles C. Miers

Abstract: The adoption of private clouds is an option for optimizing the use and organization of computing resources. Although the benefits of the cloud have been known for some time, there are still questions on how to plan the cloud infrastructure in order to avoid basic network performance issues. OpenStack is one of the most widely used open source solutions for building private Infrastructure as a Service (IaaS) clouds. OpenStack distributes network traffic across multiple interfaces and virtualized networks, which connect hosts to its cloud services, divided into three domains: control, public, and guest. The present work aims to characterize network traffic in the OpenStack control domain produced by changing the state of virtual machines (VMs). There are few works related to the network infrastructure scenario on private clouds, as research in this area generally focuses on public domain or guest domain operations. In this sense, we performed a characterization of the OpenStack administrative network. To do so, experimentation methods were used to identify operating services as well as to measure network traffic in the OpenStack control domain during a set of common operations on VMs.

Paper Nr: 31
Title:

Self-contained Service Deployment Packages

Authors:

Michael Zimmermann, Uwe Breitenbücher, Lukas Harzenetter, Frank Leymann and Vladimir Yussupov

Abstract: Complex applications are typically composed of multiple components. In order to install these components, all their dependencies need to be satisfied. Typically, these dependencies are resolved, downloaded, and installed at deployment time in the target environment, e.g., using the package manager of the operating system. However, under some circumstances this approach is not applicable, e.g., if access to the Internet is limited or entirely absent. For instance, Industry 4.0 environments often have no Internet access for security reasons. Thus, in these cases, deployment packages without external dependencies are required that already contain everything needed to deploy the software. In this paper, we present an approach enabling the transformation of non-self-contained deployment packages into self-contained deployment packages. Furthermore, we present a method for systematically developing self-contained deployment packages. The practical feasibility is validated by a prototypical implementation following our proposed system architecture. Moreover, our prototype is evaluated by provisioning a LAMP stack using the open-source ecosystem OpenTOSCA.

Paper Nr: 43
Title:

Software-Defined Network Security over OpenStack Clouds: A Systematic Analysis

Authors:

Nicolas P. Lane, Guilherme P. Koslovski, Maurício A. Pillon, Charles C. Miers and Nelson M. Gonzalez

Abstract: Cloud computing infrastructure is an enticing target for malicious activity due to its network and compute capacity. Several studies focus on different aspects of cloud security from the client (tenant) side, leaving a gap regarding the cloud provider’s infrastructure perspective. To address this gap, this study conducts a systematic review of the literature on OpenStack, the most widely adopted open source cloud operating system. We present a qualitative assessment of security vulnerabilities related to OpenFlow usage in OpenStack network management. Based on this analysis, we identify a critical vulnerability which affects the cloud infrastructure via Software-Defined Networks. This reveals the urgent need for more studies focusing on the provider’s infrastructure side and the associated tools and technologies.

Paper Nr: 68
Title:

Quality of Service in Cloud Computing Environments with Multitenant DBMS

Authors:

Manuel I. Capel, Oscar I. Aporta and María C. Pegalajar-Jiménez

Abstract: This article proposes a new study of Quality of Service (QoS) in multitenant Database Management Systems, in which it is experimentally verified that tenants interfere with one another when they concurrently access the DBMS. The degree of interference depends on characteristics of the database used by each tenant. A testing architecture with virtual machines (VMs), managed with OpenNebula, has been designed. In each VM, one DBMS is loaded, managing one database per tenant. Five experiments were designed and numerous measurements performed using reference benchmarks, such as TPC-C, in a Cloud computing-based system. The results of the experiments are presented here, for which latency and performance were measured with respect to different workloads and tenant configurations. To carry out the experiments, a multitenant environment model known as shared database/separate schema (or shared instance) was deployed, which is widely used at the moment and presents the best trade-off between resource use, performance, and response time.

Area 2 - Cloud Operations

Full Papers
Paper Nr: 28
Title:

Enabling Container Cluster Interoperability using a TOSCA Orchestration Framework

Authors:

Domenico Calcaterra, Giuseppe Di Modica, Pietro Mazzaglia and Orazio Tomarchio

Abstract: Cloud orchestration frameworks are recognised as a useful tool to tackle the complexity of managing the life-cycle of Cloud resources. In scenarios where resources happen to be supplied by multiple providers, such complexity is further exacerbated by portability and interoperability issues due to the incompatibility of providers’ proprietary interfaces. Container-based technologies provide a solution to improve portability in the Cloud deployment landscape. Though the majority of Cloud orchestration tools support containerisation, they usually provide integration with a limited set of container-based cluster technologies, without focusing on standards-based approaches for the description of containerised applications. In this work, we discuss how we embedded the containerisation feature into a TOSCA-based Cloud orchestrator in a way that, in principle, enables it to interoperate with any container run-time software. Tests were run on a software prototype to prove the viability of the approach.

Short Papers
Paper Nr: 42
Title:

A Self-healing Platform for the Control and Management Planes Communication in Softwarized and Virtualized Networks

Authors:

Natal S. Neto, Daniel C. Oliveira, Maurício A. Gonçalves, Flávio O. Silva and Pedro F. Rosa

Abstract: Future computer networks will possibly use infrastructures with SDN and NFV approaches to satisfy the requirements of future applications. In these approaches, components need to be monitored and controlled in a highly diverse environment, making it virtually impossible to solve complex management problems manually. Several solutions exist to deal with data plane resilience, but the control and management planes still have fault tolerance issues. This work presents a new solution focused on maintaining the health of networks by considering the connectivity between the control, management, and data planes. Self-healing concepts are used to build the model and the architecture of the solution. This position paper situates the solution as a system operating in the management plane, focused on the maintenance of the control and management layers. The solution is designed using technologies widely accepted in industry and aims to be deployed in companies that require carrier-grade capabilities in their production environments.

Paper Nr: 41
Title:

Network Self-configuration for Edge Elements using Self-Organizing Networks Architecture (SONAr)

Authors:

Daniel C. Oliveira, Natal S. Neto, Maurício A. Gonçalves, Flávio O. Silva and Pedro F. Rosa

Abstract: 5G network deployment is already a commercial reality, while the specification groups move towards 5G standalone standardization. With the need for network slicing and edge computing capabilities – and therefore the inherent adoption of SDN and NFV in a cloud environment – these networks’ complexity will require some degree of automation. The concept of Self-Organizing Networks (SON) is a trend for next-generation systems, as it is already used in the mobile radio access network (RAN) context. In this work, we present a basic design proposal for a self-configuration system (self-configuration being one of the four basic self-* properties of SON), within the context of the Self-Organizing Networks Architecture (SONAr) framework. We discuss the generic system architecture and present one practical application example, using the self-configuration platform to automatically configure the network to receive an eNodeB (4G base station) edge component. The entire system design is conceived to work in a carrier-grade production environment, with application versatility, reliability, and scalability.

Area 3 - Edge Cloud and Fog Computing

Full Papers
Paper Nr: 17
Title:

A Fuzzy Controller for Self-adaptive Lightweight Edge Container Orchestration

Authors:

Fabian Gand, Ilenia Fronza, Nabil El Ioini, Hamid R. Barzegar, Shelernaz Azimi and Claus Pahl

Abstract: Edge clusters consisting of small and affordable single-board devices are used in a range of different applications, such as microcontrollers regulating an industrial process or controllers monitoring and managing traffic at the roadside. We call this wider context of computational infrastructure between the sensor and Internet-of-Things world and centralised cloud data centres the edge, or edge computing. Despite the growing hardware capabilities of edge devices, resources are often still limited and need to be used intelligently. This can be achieved by providing a self-adaptive scaling component in these clusters that is capable of scaling individual parts of the application running in the cluster. We propose an auto-scalable container-based cluster architecture for lightweight edge devices. A serverless architecture is at the core of the management solution. Our auto-scaler, the key component of this architecture, is based on fuzzy logic in order to address challenges arising from an uncertain environment. In this context, it is crucial to evaluate the capabilities and limitations of the application in a real-world context. Our results show that the proposed platform architecture, the implemented application and the scaling functionality meet the set requirements and offer a basis for lightweight edge computing.
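A fuzzy-logic auto-scaler of the kind the abstract describes can be sketched minimally as below. The membership functions, rule set, and thresholds are invented for illustration and are not the paper's actual controller.

```python
# Illustrative sketch only: a Mamdani-style fuzzy scaling decision.
# All membership shapes and rule outputs are hypothetical assumptions.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def scale_decision(cpu_load):
    """Map normalised CPU load (0..1) to a replica delta via fuzzy rules."""
    low = tri(cpu_load, -0.4, 0.0, 0.5)   # rule: low load  -> scale in  (-1)
    ok = tri(cpu_load, 0.2, 0.5, 0.8)     # rule: normal    -> hold      ( 0)
    high = tri(cpu_load, 0.5, 1.0, 1.4)   # rule: high load -> scale out (+1)
    total = low + ok + high
    if total == 0:
        return 0
    # Defuzzify with a weighted average of the rule consequents.
    return round((low * -1 + ok * 0 + high * 1) / total)

print(scale_decision(0.9))  # 1: high load triggers adding a replica
```

Overlapping membership functions are what let such a controller react smoothly to noisy load signals instead of oscillating around a hard threshold.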

Paper Nr: 46
Title:

Adaptive Fog Service Placement for Real-time Topology Changes in Kubernetes Clusters

Authors:

Tom Goethals, Bruno Volckaert and Filip de Turck

Abstract: Recent trends have caused a shift from services deployed solely in monolithic data centers in the cloud to services deployed in the fog (e.g. roadside units for smart highways, support services for IoT devices). Simultaneously, the variety and number of IoT devices have grown rapidly, along with their reliance on cloud services. Additionally, many of these devices are now themselves capable of running containers, allowing them to execute some services previously deployed in the fog. The combination of IoT devices and fog computing has many advantages in terms of efficiency and user experience, but the scale, volatile topology and heterogeneous network conditions of the fog and the edge also present problems for service deployment scheduling. Cloud service scheduling often takes a wide array of parameters into account to calculate optimal solutions. However, the algorithms used are not generally capable of handling the scale and volatility of the fog. This paper presents a scheduling algorithm, named “Swirly”, for large scale fog and edge networks, which is capable of adapting to changes in network conditions and connected devices. The algorithm details are presented and implemented as a service using the Kubernetes API. This implementation is validated and benchmarked, showing that a single-threaded Swirly service is easily capable of managing service meshes for at least 300,000 devices in soft real-time.

Paper Nr: 61
Title:

Develop or Dissipate Fogs? Evaluating an IoT Application in Fog and Cloud Simulations

Authors:

Andras Markus, Peter Gacsi and Attila Kertesz

Abstract: The recent advances in Information and Communication Technology have had a significant impact on distributed systems by giving birth to novel paradigms like Cloud Computing, Fog Computing and the Internet of Things (IoT). Clouds and fogs have promising properties to serve IoT needs, which require the enormous amounts of data generated by their sensors and devices to be stored, processed and analysed. Since such IoT-Fog-Cloud systems can be very complex, simulators are indispensable for investigating them. Cloud simulation is highly studied by now, and solutions offering fog modelling capabilities have also started to appear. In this paper we briefly compare the fog modelling approaches of simulators, and present detailed evaluations in two of them to show the effects of utilizing fog resources over cloud ones to execute IoT applications. We also share our experiences in working with these simulators to help researchers and practitioners who aim to perform future research in this field.

Short Papers
Paper Nr: 9
Title:

A Location-allocation Model for Fog Computing Infrastructures

Authors:

Thiago Alves de Queiroz, Claudia Canali, Manuel Iori and Riccardo Lancellotti

Abstract: The trend of an ever-increasing number of geographically distributed sensors producing data for a plethora of applications, from environmental monitoring to smart cities and autonomous driving, is shifting the computing paradigm from cloud to fog. The increase in the volume of produced data makes the processing and the aggregation of information at a single remote data center unfeasible or too expensive, while latency-critical applications cannot cope with the high network delays of a remote data center. Fog computing is a preferred solution, as latency-sensitive tasks can be moved closer to the sensors. Furthermore, the same fog nodes can perform data aggregation and filtering to reduce the volume of data that is forwarded to the cloud data centers, reducing the risk of network overload. In this paper, we focus on the problem of designing a fog infrastructure, considering how many fog nodes are required, where they should be located (chosen from a list of potential candidates), and how to allocate data flows from sensors to fog nodes and from there to cloud data centers. To this aim, we propose and evaluate a formal model based on a multi-objective optimization problem. We thoroughly test our proposal for a wide range of parameters, exploiting a reference scenario setup taken from a realistic smart city application. We compare the performance of our proposal with other approaches to the problem available in the literature, taking into account two objective functions. Our experiments demonstrate that the proposed model is viable for the design of fog infrastructures and can outperform the alternative models, with results that in several cases are close to an ideal solution.
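The location-allocation problem described above can be illustrated on a toy instance: choose which candidate fog nodes to open, assign each sensor to its nearest open node, and trade node cost against total latency. All coordinates, costs, and weights below are made-up demonstration data, not the paper's model or objective functions.

```python
from itertools import combinations

# Hypothetical toy instance of a fog location-allocation problem.
sensors = {"s1": (0, 0), "s2": (4, 0), "s3": (0, 3)}
candidates = {"f1": (1, 0), "f2": (4, 1)}
open_cost = 5.0        # assumed fixed cost per opened fog node
latency_weight = 2.0   # assumed relative weight of latency vs. cost

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def objective(opened):
    """Scalarised cost: opening cost plus weighted sum of sensor latencies,
    with each sensor assigned to its nearest opened node."""
    lat = sum(min(dist(p, candidates[f]) for f in opened)
              for p in sensors.values())
    return open_cost * len(opened) + latency_weight * lat

# Exhaustive search over all non-empty subsets of candidate locations.
best = min((frozenset(c) for r in range(1, len(candidates) + 1)
            for c in combinations(candidates, r)), key=objective)
print(sorted(best))  # the cheapest set of nodes to open
```

Real instances are far too large for enumeration, which is why the paper resorts to a formal multi-objective optimization model; the toy above only shows the shape of the decision being made.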

Paper Nr: 16
Title:

Serverless Container Cluster Management for Lightweight Edge Clouds

Authors:

Fabian Gand, Ilenia Fronza, Nabil El Ioini, Hamid R. Barzegar and Claus Pahl

Abstract: Clusters consisting of lightweight single-board devices are used in a variety of use cases: from microcontrollers regulating the production process of an assembly line to roadside controllers monitoring and managing traffic. Often, data that is accumulated on the devices has to be sent to remote cloud data centers for processing. However, with the hardware capabilities of controllers continuously increasing and the need for better performance and security through local processing, directly processing data on a local cluster, known as Edge Computing, is a favourable solution. Recent trends such as microservices, containerization and serverless technology provide solutions for interconnecting the nodes and deploying software across the created cluster. This paper proposes a serverless architecture for clustered container applications. The architecture relies on the MQTT protocol for communication, Prometheus for monitoring, and Docker Swarm in conjunction with OpenFaaS for deploying services across a cluster. Using the proposed architecture as a foundation, a concrete traffic management application is implemented as a proof of concept. Results show that the proposed architecture meets the performance requirements. However, the network set-up as well as the network capabilities of the used devices were identified as potential bottlenecks.

Paper Nr: 20
Title:

Particle Swarm Optimization for Performance Management in Multi-cluster IoT Edge Architectures

Authors:

Shelernaz Azimi, Claus Pahl and Mirsaeid H. Shirvani

Abstract: Edge computing extends cloud computing capabilities to the edge of the network, allowing for instance Internet-of-Things (IoT) applications to process computation more locally and thus more efficiently. We aim to minimize latency and delay in edge architectures. We focus on an advanced architectural setting that takes communication and processing delays into account in addition to the actual request execution time in a performance engineering scenario. Our architecture is based on a multi-cluster edge layer with local independent edge node clusters. We argue that particle swarm optimisation, as a bio-inspired optimisation approach, is an ideal candidate for distributed load processing in semi-autonomous edge clusters for IoT management. By designing a controller and using a particle swarm optimization algorithm, we demonstrate that the processing and propagation delay and the end-to-end latency (i.e., total response time) can be optimized.
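A minimal particle swarm optimisation sketch in the spirit of the abstract is shown below: particles search for a load split across two edge clusters that minimises a toy latency model. The latency function and all PSO constants are illustrative assumptions, not the paper's controller.

```python
import random
random.seed(0)

def latency(x):
    # Toy end-to-end latency model: quadratic bowl with its optimum at a
    # hypothetical ideal load split of (0.3, 0.7) between two clusters.
    a, b = x
    return (a - 0.3) ** 2 + (b - 0.7) ** 2

def pso(n=20, steps=100, w=0.5, c1=1.5, c2=1.5):
    """Classic PSO: inertia w, cognitive pull c1, social pull c2."""
    pos = [[random.random(), random.random()] for _ in range(n)]
    vel = [[0.0, 0.0] for _ in range(n)]
    pbest = [p[:] for p in pos]                  # per-particle best
    gbest = min(pbest, key=latency)[:]           # swarm-wide best
    for _ in range(steps):
        for i in range(n):
            for d in range(2):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if latency(pos[i]) < latency(pbest[i]):
                pbest[i] = pos[i][:]
                if latency(pos[i]) < latency(gbest):
                    gbest = pos[i][:]
    return gbest

best = pso()
print([round(v, 2) for v in best])  # converges near the optimum (0.3, 0.7)
```

Because each particle only needs its own state plus the shared best, PSO maps naturally onto the semi-autonomous edge clusters the abstract targets.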

Paper Nr: 22
Title:

Enabling the Management and Orchestration of Virtual Networking Functions on the Edge

Authors:

Vincent M. Richards, Rodrigo Moreira and Flávio O. Silva

Abstract: The Internet of Things (IoT) is the pillar of the next generation of applications that will create a fusion between the real and digital worlds. Several use cases rely on Edge computing capabilities, and virtualization capacity at the Edge is a crucial requirement. The management and orchestration of computing, storage, and networking resources in the Edge cloud pose several challenges. In this work, we propose the Management and Orchestration (MANO) of Virtual Networking Functions (VNFs) at the Edge using standard solutions such as Open Source MANO (OSM) and OpenStack. To showcase our approach, we conducted an experimental evaluation using an RPi3 infrastructure. OpenStack manages this infrastructure, and OSM provides the VNF MANO capabilities, offering a significant improvement to support cloud computing at the Edge. The evaluation showed the feasibility of using low-cost devices such as the RPi with the standard management solutions used in the core cloud.

Paper Nr: 32
Title:

Leveraging Social Behaviour of Users Mobility and Interests for Improving QoS and Energy Efficiency of Content Services in Mobile Community Edge-clouds

Authors:

Vu H. Huynh and Milena Radenkovic

Abstract: Community network edge-clouds have been attracting significant interest over recent years with the emergence of ubiquitous networked devices embedded in our daily activities and increasingly widespread fully-distributed heterogeneous networks of smart edges offering various applications and services in real time. This paper proposes EdgeCNC, a novel joint multilayer adaptive opportunistic network-coding algorithm integrated with an adaptive opportunistic content caching service. EdgeCNC exploits the multilayer spatial-temporal locality of users’ mobility and interests in community network edge-clouds in order to select a highly suitable set of contents to forward, cache and network-code to a highly suitable set of nodes, so as to enhance QoS, reduce data transmissions and improve energy efficiency. We perform a multi-criteria evaluation of EdgeCNC performance in a realistic Foursquare New York scenario of mobile community edge-clouds against benchmark and competitive protocols in the face of dynamically changing users’ publish-subscribe and mobility patterns. We show that EdgeCNC achieves a higher success ratio and data transmission efficiency while keeping lower delays, packet loss and energy consumption compared to the competitive and benchmark protocols.

Paper Nr: 44
Title:

PF-BVM: A Privacy-aware Fog-enhanced Blockchain Validation Mechanism

Authors:

H. Baniata and A. Kertesz

Abstract: Blockchain technology has been successfully implemented in cryptocurrency industries, yet it is still in the research phase for other applications. Enhanced security, decentralization and reliability are some of the advantages of blockchain technology that represent beneficial integration possibilities for computing and storage infrastructures. Fog computing is one of the recently emerged paradigms that needs to be improved to serve the Internet of Things (IoT) environments of the future. In this paper we propose PF-BVM, a Privacy-aware Fog-enhanced Blockchain Validation Mechanism, that aims to support the integration of IoT, Fog Computing, and blockchain technology. In this model, the more trusted a fog node is, the higher the authority it is granted to validate a block on behalf of the blockchain nodes. To guarantee privacy-awareness in PF-BVM, we use a blockchain-based PKI architecture that is able to provide higher anonymity levels while maintaining the decentralization property of a blockchain system. We also propose a concept for measuring the reliability levels of blockchain systems. We validated our proposed approach in terms of execution time and energy consumption in a simulated environment. We compared PF-BVM to the validation mechanism currently used in the Proof-of-Work (PoW) consensus algorithm, and found that PF-BVM can effectively reduce the total validation time and total energy consumption of an IoT-Fog-Blockchain system.

Paper Nr: 62
Title:

Community-governed Services on the Edge

Authors:

João Mafra, Francisco Brasileiro and Raquel Lopes

Abstract: The popularization of resource-rich smartphones has enabled a wide range of new applications to emerge. Typically, these applications use a remote cloud to process data. In many cases, the processed data (or part of it) is collected by the users’ devices and sent to the cloud. In this architecture, the external cloud provider is solely responsible for defining the governance of the application and all its data. This is not satisfactory from the privacy viewpoint, and may not be feasible in the long run. We propose an architecture in which the service is governed by the users of a community who have a common problem to solve. To make this possible, we use the concepts of Participatory Sensing, Mobile Social Networks (MSN) and Edge Computing, which enable data processing closer to the data sources (i.e. the users’ devices). We describe the proposed architecture and a case study to assess the feasibility and quality of our solution compared with other solutions already in place. Our case study uses simulation experiments fed with real data from the public transport system of Curitiba, a city in the south of Brazil with a population of approximately 2 million people. The results show that our approach is feasible and can potentially deliver quality of service (QoS) similar or close to the QoS delivered by current approaches that require the existence of a central server.

Paper Nr: 6
Title:

QoS-aware Autonomic Adaptation of Microservices Placement on Edge Devices

Authors:

Bruno Stévant, Jean-Louis Pazat and Alberto Blanc

Abstract: Given the widespread availability of cheap computing and storage devices, as well as the increasing popularity of high-speed network connections (e.g., Fiber To The Home (FTTH)), it is feasible for groups of users to share their own resources to build a service hosting platform. In such a use case, the response time of the service is critical for the quality of experience. We describe a solution to optimize the response time in the case of an application based on microservices. This solution leverages the flexibility of microservices to dynamically adapt the placement of the application workloads on edge devices. We validate this solution on a production edge infrastructure and discuss possible strategies for the decision rules.

Paper Nr: 48
Title:

Towards Securing LoRaWAN ABP Communication System

Authors:

Hassan N. Noura, Ola Salman, Tarif Hatoum, Mohammad Malli and Ali Chehab

Abstract: A large number of power-constrained devices will be connected to the Internet of Things (IoT). Deployed over large areas, battery-powered IoT devices call for power-efficient and long-range communication technologies. Consequently, Low Power Wide Area Networks (LPWAN) have emerged as key IoT enablers. In this context, LoRaWAN, an LPWAN technology, is one of the main candidate IoT communication protocols. However, LoRaWAN suffers from different security and privacy threats. These threats lead to several availability, authentication, and privacy attacks. In this paper, we present efficient countermeasures against two well-known ABP attacks (eavesdropping and replay). The proposed solution aims at making LoRa ABP end-devices safer, more secure and more reliable. The proposed solution is based on a dynamic key derivation scheme, of which we present two variants: counter-based and channel-information-based. A set of security and performance tests shows that the proposed countermeasures impose low overhead in terms of computation and communication resources while providing a high level of security.
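The counter-based variant of dynamic key derivation can be sketched as follows. The derivation labels, key sizes, and use of HMAC-SHA-256 are illustrative assumptions; the paper's exact scheme may differ.

```python
import hmac, hashlib

# Illustrative counter-based dynamic session-key derivation. The root key,
# label, and truncation to 128 bits are hypothetical choices for this sketch.
ROOT_KEY = bytes(16)  # placeholder 128-bit root key shared at provisioning

def session_key(counter: int) -> bytes:
    """Derive a fresh session key from the root key and a frame counter, so
    that a replayed frame protected under an old key no longer verifies."""
    msg = b"lorawan-abp-session" + counter.to_bytes(4, "big")
    return hmac.new(ROOT_KEY, msg, hashlib.sha256).digest()[:16]

k1, k2 = session_key(1), session_key(2)
print(k1 != k2)  # True: each counter value yields a distinct key
```

Because the key changes with the counter, an eavesdropper who records frame N gains nothing for frame N+1, and replaying frame N after the counter has advanced fails verification, which is the countermeasure's core idea.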

Area 4 - Mobile Cloud Computing

Full Papers
Paper Nr: 24
Title:

Bootstrapping and Plug-and-Play Operations on Software Defined Networks: A Case Study on Self-configuration using the SONAr Architecture

Authors:

Maurício A. Gonçalves, Natal S. Neto, Daniel C. Oliveira, Flávio O. Silva and Pedro F. Rosa

Abstract: The autonomous network concept has gained strength with the growing complexity of current networks, especially after the definition of 5G requirements and their key performance indicators. This concept is challenging to implement in legacy networks, and it has become feasible with the emergence of network softwarization, which enables the deployment of functionalities through a logically centralized control plane abstraction. Network softwarization simplifies the management process, reduces operational costs (OPEX), enhances protection against failures, and enables complex requirements such as high performance indicators and IoT support. The Self-Organizing Networks Architecture (SONAr) project uses cutting-edge technologies and concepts such as SDN, NFV, and Machine Learning. It proposes a new architecture aimed at the design of self-management in computer networks, guided by declarative intents. In this work, we introduce the SONAr project by describing its components and specifications. We also present a case study that shows the self-configuration property, including bootstrapping and plug-and-play operations using SONAr components, focusing on strategies applicable to OpenFlow-based networks. We explain the decisions made in the implementation and present comparative results between them.

Short Papers
Paper Nr: 38
Title:

A Study about the Impact of Encryption Support on a Mobile Cloud Computing Framework

Authors:

Francisco A. Gomes, Paulo L. Rego, Fernando M. Trinta, Windson Viana, Francisco A. Silva, José A. F. de Macêdo and José N. de Souza

Abstract: Mobile Cloud Computing joins two complementary paradigms by allowing the migration of tasks and data from resource-constrained devices to remote servers with higher processing capabilities, in an approach known as offloading. An essential aspect of any offloading solution is the privacy of the information transferred between mobile devices and remote servers. A common solution to address privacy issues in data transmission is the use of encryption. Nevertheless, encryption algorithms impose additional processing tasks that impact both the offloading performance and the power consumption of mobile devices. This paper presents a study on the impact of using cryptographic algorithms in the CAOS offloading platform. Results from experiments show that the encryption time represents 2.17% to 5.35% of the total offloading time, depending on the amount of offloaded data, the encryption key size, and the place where the offloaded task runs. Similar behavior was observed regarding energy consumption.

Paper Nr: 60
Title:

IRENE: Interference and High Availability Aware Microservice-based Applications Placement for Edge Computing

Authors:

Paulo Souza, João Nascimento, Conrado Boeira, Ângelo Vieira, Felipe Rubin, Rômulo Reis, Fábio Rossi and Tiago Ferreto

Abstract: The adoption of microservice-based applications in Edge Computing is increasing, as a result of the improved maintainability and scalability delivered by these highly-distributed and decoupled applications. On the other hand, Edge Computing operators must be aware of availability and resource contention issues when placing microservices on the edge infrastructure, to avoid applications’ performance degradation. In this paper, we present IRENE, a genetic algorithm designed to improve the performance of microservice-based applications in Edge Computing by preventing performance degradation caused by resource contention and increasing the applications’ availability as a means of avoiding SLA violations. Experiments were carried out, and the results showed IRENE’s effectiveness over existing strategies in different scenarios. In future investigations, we intend to extend IRENE’s interference awareness to the network level.
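A genetic algorithm for placement, as named in the abstract, can be sketched on a toy problem: place 6 microservices onto 3 edge nodes while penalising co-located service pairs as a stand-in for interference. The fitness function, GA parameters, and problem size are all invented for illustration and bear no relation to IRENE's actual model.

```python
import random
random.seed(1)

SERVICES, NODES = 6, 3  # hypothetical toy problem size

def fitness(placement):
    """Lower is better: count service pairs sharing a node (a crude
    stand-in for interference-induced contention)."""
    score = 0
    for node in range(NODES):
        k = placement.count(node)
        score += k * (k - 1) // 2  # pairs co-located on this node
    return score

def evolve(pop_size=30, gens=50, mut=0.1):
    pop = [[random.randrange(NODES) for _ in range(SERVICES)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]            # selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, SERVICES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mut:            # random mutation
                child[random.randrange(SERVICES)] = random.randrange(NODES)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = evolve()
# The best achievable score here is 3: two services per node gives one
# co-located pair on each of the three nodes.
print(fitness(best))
```

A real placement fitness would combine interference, availability, and SLA terms, which is exactly the multi-criteria territory where GA-style search pays off over greedy heuristics.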

Area 5 - Service Modelling and Analytics

Short Papers
Paper Nr: 7
Title:

Online Automatic Characteristics Discovery of Faulty Application Transactions in the Cloud

Authors:

Shay Horovitz, Yair Arian and Noam Peretz

Abstract: Performance debugging and fault isolation in distributed cloud applications is difficult and complex. Existing Application Performance Management (APM) solutions allow manual investigation across a huge space of metrics, topology, functions, service calls, attributes and values - a frustrating, resource- and time-demanding task. It would be beneficial if one could gain explainable insights about a faulty transaction, whether due to an error or performance degradation, such as specific attributes and/or URL patterns that are correlated with the problem and can characterize it. Yet, this is a challenging task, as demanding storage, memory and processing resources are required, and insights are expected to be discovered as soon as the problem occurs. As cloud resources are limited and expensive, supporting a large number of applications having many transaction types is impractical. We present Perceptor, a system for online automatic characteristics discovery of faulty application transactions in the cloud. Perceptor discerns attributes and/or values correlated with transaction error or performance degradation events. It does so with minimal resource consumption by using sketch structures, and performs in streaming mode. Over an extensive set of experiments in the cloud, with various applications and transactions, Perceptor discovered non-trivial relevant fault properties, validated by an expert.
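The "sketch structures" the abstract refers to are sublinear-memory stream summaries; a count-min sketch is a standard example. The sketch below counts attribute/value occurrences in bounded memory; the width, depth, and hashing scheme are arbitrary demo choices, and whether Perceptor uses this particular sketch is not stated in the abstract.

```python
import hashlib

class CountMinSketch:
    """Minimal count-min sketch: depth independent hash rows over a fixed
    width, so memory stays constant regardless of stream length."""

    def __init__(self, width=64, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _cells(self, item: str):
        # One deterministic hash per row, salted with the row index.
        for row in range(self.depth):
            h = hashlib.sha256(f"{row}:{item}".encode()).digest()
            yield row, int.from_bytes(h[:8], "big") % self.width

    def add(self, item: str, count: int = 1):
        for row, col in self._cells(item):
            self.table[row][col] += count

    def estimate(self, item: str) -> int:
        # Never underestimates; may overestimate on hash collisions.
        return min(self.table[row][col] for row, col in self._cells(item))

cms = CountMinSketch()
for _ in range(100):
    cms.add("status=500")   # e.g. an attribute/value pair from a trace
cms.add("status=200", 3)
print(cms.estimate("status=500"))  # at least 100
```

Because estimates only ever err upward, a structure like this can cheaply flag which attribute values spike alongside faulty transactions, which candidates can then be verified more precisely.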
Download

Area 6 - Services Science

Short Papers
Paper Nr: 13
Title:

Composite Web Services as Dataflow Graphs for Constraint Verification

Authors:

Jyotsana Gupta, Joey Paquet and Serguei A. Mokhov

Abstract: Owing to advantages such as re-usability of components, broader options for composition requesters and liberty to specialize for component providers, composite services have been extensively researched and significantly enhanced in several respects. Yet, most of the studies undertaken fail to acknowledge that every web service has a limited context in which it can successfully perform its tasks. When used as part of a composition, the restricted context-spaces of all such component services together define the contextual boundaries of the composite service. However, from a thorough review of the existing literature on the subject, we have discovered that no systems have yet been proposed to cater to the specific verification of internal constraints imposed on components of a composite service. In an attempt to address this gap in service composition research, we propose a multi-faceted solution capable, firstly, of automatically constructing context-aware composite web services with their internal constraints positioned for optimum resource-utilization and, secondly, of validating the generated compositions using the General Intensional Programming SYstem (GIPSY) as a time- and cost-efficient simulation/execution environment.
Download

Paper Nr: 27
Title:

Reconfiguration Penalty Calculation for Cross-cloud Application Adaptations

Authors:

Vasilis-Angelos Stefanidis, Yiannis Verginadis, Daniel Bauer, Tomasz Przezdziek and Grigoris Mentzas

Abstract: The cloud’s indisputable value for SMEs and enterprises has led to its wide adoption, driven by its cost-effective and on-demand service provisioning. Furthermore, novel systems are emerging to aid cross-cloud application deployments, which can further reduce costs and increase agility in everyday business operations. In such dynamic environments, adequate reconfiguration support is always needed to cope with fluctuating and diverse workloads. This paper focuses on one of the critical aspects of optimal decision making when adapting cross-cloud applications: considering time-related penalties. It also contributes a set of recent measurements that highlight virtualized resource startup times across different public and private clouds.
Download

Paper Nr: 18
Title:

Remote Procedure Call Approach using the Node2FaaS Framework with Terraform for Function as a Service

Authors:

Leonardo Rebouças de Carvalho and Aleteia P. F. de Araujo

Abstract: Cloud computing has evolved into a scenario where multiple providers make up the list of services that process client workloads, resulting in Functions as a Service (FaaS). In this context, this work proposes an RPC-based approach to FaaS, using the Node2FaaS framework as a NodeJS application converter integrated with Terraform as a cloud orchestrator. CPU, memory and I/O overhead tests were performed on a local environment and on the three main FaaS services: AWS Lambda, Google Functions and Azure Functions. The results showed significant runtime gains between the local environment and FaaS services, reaching up to a 99% reduction in runtime when the tests were run on cloud providers.
Download

Paper Nr: 30
Title:

Performance Evaluation of Software Defined Network Controllers

Authors:

Edna D. Canedo, Fábio L. Lopes de Mendonça, Georges A. Nze, Bruno G. Praciano, Gabriel M. Pinheiro and Rafael D. Sousa Jr.

Abstract: The increasing digitization of various industrial sectors, such as automotive, transportation, urban mobility and telecommunications, reveals the need for point-to-multipoint communication services, such as constantly updating software and reliably delivering messages to users, as well as for optimizing the use of hardware and software resources. The implementation of Software Defined Networks (SDN) provides the network with the flexibility and programming capabilities needed to accurately and reliably support point-to-multipoint distribution services. This paper presents an account of the activities carried out to evaluate the performance of SDN controllers. We identified the metrics used in the literature for the performance evaluation of SDN controllers. The mechanisms adopted were used to describe the performance evaluation environment and the respective operating processes of the controllers. In addition, a methodology is proposed to perform the performance evaluation of the controllers.
Download

Paper Nr: 57
Title:

Research Challenges of Open Data as a Service for Smart Cities

Authors:

Leonard Walletzký, Františka Romanovská, Angeliki M. Toli and Mouzhi Ge

Abstract: Open data are considered to be an important building block of Smart City services. Based on the data available in cities, companies are building innovative smart services to facilitate city development. However, most practitioners focus only on open data usage while paying little attention to the processes related to data publication. Thus, there is a lack of understanding of the whole open data life cycle: before the data can be opened, when they are published, and when they need to be archived. These phases in the open data life cycle are critical for assuring data accuracy, availability and relevance, because the data are analysed, changed, anonymized or otherwise processed throughout the whole life cycle. This also creates a set of research challenges for open data. Therefore, in the context of smart cities, this paper proposes to consider Open Data as a Service and identifies the research challenges along the open data life cycle.
Download

Area 7 - Cloud Computing Platforms and Applications

Full Papers
Paper Nr: 3
Title:

Secure Cloud Storage with Client-side Encryption using a Trusted Execution Environment

Authors:

Marciano da Rocha, Dalton G. Valadares, Angelo Perkusich, Kyller C. Gorgonio, Rodrigo T. Pagno and Newton C. Will

Abstract: With the evolution of computer systems, the amount of sensitive data to be stored, as well as the number of threats against these data, grows, making data confidentiality increasingly important to computer users. Currently, with devices always connected to the Internet, the use of cloud data storage services has become practical and common, allowing quick access to such data wherever the user is. Such practicality brings with it a concern, namely the confidentiality of the data delivered to third parties for storage. In the home environment, disk encryption tools have gained special attention from users, being used on personal computers and also having native options in some smartphone operating systems. The present work uses the data sealing feature provided by the Intel Software Guard Extensions (Intel SGX) technology for file encryption. A virtual file system is created in which applications can store their data, keeping the security guarantees provided by the Intel SGX technology before sending the data to a storage provider. This way, even if the storage provider is compromised, the data are safe. To validate the proposal, the Cryptomator software, a free client-side encryption tool for cloud files, was integrated with an Intel SGX application (enclave) for data sealing. The results demonstrate that the solution is feasible, in terms of performance and security, and can be expanded and refined for practical use and integration with cloud synchronization services.
Download

Paper Nr: 29
Title:

Performance Analysis of Continuous Binary Data Processing using Distributed Databases within Stream Processing Environments

Authors:

Manuel Weißbach, Hannes Hilbert and Thomas Springer

Abstract: Big data applications must process increasingly large amounts of data within ever shorter time. Often a stream processing engine (SPE) is used to process incoming data with minimal latency. While these engines are designed to process data quickly, they are not made to persist and manage it. Thus, databases are still integrated into streaming architectures, which often becomes a performance bottleneck. To overcome this issue and achieve maximum performance, all system components used must be examined in terms of their throughput and latency, and how well they interact with each other. Several authors have already analyzed the performance of popular distributed database systems. In contrast, we focus on the interaction between SPEs and databases, as we assume that stream processing leads to changes in the access patterns to the databases. Moreover, our main focus is on the efficient storing and loading of binary data objects rather than typed data, since in our use cases the actual data analysis is not to be performed by the database but by the SPE. We benchmarked common databases within streaming environments to determine which software combination is best suited for these requirements. Our results show that database performance differs significantly depending on the access pattern used and that different software combinations lead to substantial performance differences. Depending on the access pattern, Cassandra, MongoDB and PostgreSQL achieved the best throughputs, which were mostly the highest when Apache Flink was used.
Download

Paper Nr: 51
Title:

Cloud-native Deploy-ability: An Analysis of Required Features of Deployment Technologies to Deploy Arbitrary Cloud-native Applications

Authors:

Michael Wurster, Uwe Breitenbücher, Antonio Brogi, Frank Leymann and Jacopo Soldani

Abstract: The adoption of cloud computing combined with DevOps enables companies to react to new market requirements more rapidly and fosters the use of automation technologies. This influences the way software solutions are built, which is why the concept of cloud-native applications has emerged over the last few years to build highly scalable applications, and to automatically deploy and run them in modern cloud environments. However, there is currently no reference work clearly stating the features that a deployment technology must offer to support the deployment of arbitrary cloud-native applications. In this paper, we derive three essential features for deployment technologies based on the current cloud-native research and characteristics discussed therein. The presented features can be used to compare and categorize existing deployment technologies, and they are intended to constitute a first step towards a comprehensive framework to assess deployment technologies.
Download

Paper Nr: 56
Title:

Patterns for Serverless Functions (Function-as-a-Service): A Multivocal Literature Review

Authors:

Davide Taibi, Nabil El Ioini, Claus Pahl and Jan S. Niederkofler

Abstract: [Context] Serverless is a recent technology that enables companies to reduce the overhead for provisioning, scaling and, in general, managing the infrastructure. Companies are increasingly adopting serverless by migrating existing applications to this new paradigm. Different practitioners have proposed patterns for composing and managing serverless functions. However, some of these patterns offer different solutions to the same problem, which makes it hard to select the most suitable solution for each problem. [Goal] In this work, we aim at supporting practitioners in understanding the different patterns by classifying them and reporting possible benefits and issues. [Method] We adopted a multivocal literature review process, surveying peer-reviewed and grey literature and classifying patterns (common solutions to common problems), together with their benefits and issues. [Results] Among 24 selected works, we identified 32 patterns that we classified as orchestration, aggregation, event-management, availability, communication, and authorization patterns. [Conclusion] Practitioners proposed a fairly consistent list of patterns, even if a small number of patterns propose different solutions to similar problems. Some patterns emerged to circumvent serverless limitations, while others address classical technical problems (e.g. publisher/subscriber).
Download

Short Papers
Paper Nr: 10
Title:

Auto-scaling Walkability Analytics through Kubernetes and Docker SWARM on the Cloud

Authors:

Lu Chen, Yiru Pan and Richard O. Sinnott

Abstract: The Australian Urban Research Infrastructure Network (AURIN – www.aurin.org.au) provides a data-driven, Cloud-based research environment for researchers across Australasia. The platform offers seamless and secure access to over 5000 definitive data sets from over 100 major government agencies across Australia with over 100 targeted tools that can be used for data analysis. One such tool is the walkability tool environment. This offers a set of Cloud-based components that generate walkability indices at user-specified scales. The walkability tools utilize geospatial data to create walkability indices that can be used to establish the walkability of given locations. The walkability workflow tools are built on a range of specialised spatial and statistical functions delivered as part of the AURIN environment. However, the existing AURIN web-based tools are currently deployed on a single (large) virtual machine, which is a performance and scalability bottleneck. Container technologies such as Docker and associated container orchestration environments such as Docker Swarm and Kubernetes support Cloud-based scaling. This paper introduces the background to the walkability environment and describes how it was extended to support Docker in Swarm mode and Kubernetes to make the walkability environment more robust and scalable, especially under heavy workloads. Performance benchmarking and a case study are presented, looking at the creation of large-scale walkability indices for areas around Melbourne.
Download

Paper Nr: 19
Title:

Performance of Cluster-based High Availability Database in Cloud Containers

Authors:

Raju Shrestha

Abstract: A database is an important component in any software application, enabling efficient data management. High availability of databases is critical for the uninterrupted service offered by an application. Virtualization has been the dominant technology behind highly available solutions in the cloud, including databases, where database servers are provisioned and dynamically scaled based on demand. However, containerization technology has gained popularity in recent years because of its light weight and portability, and an increasing number of enterprises are embracing containers as an alternative to heavier and resource-consuming virtual machines for deploying applications and services. A relatively new cluster-based synchronous multi-master database solution has gained popularity recently and has seen increased adoption over traditional master-slave replication for better data consistency and high availability. This article evaluates the performance of a cluster-based high availability database deployed in containers and compares it to one deployed in virtual machines. A popular cloud software platform, OpenStack, is used for virtual machines. Docker is used for containers, as it is the most popular container technology at the moment. Results show better performance by the HA Galera cluster database setup using Docker containers in most of the Sysbench benchmark tests compared to a similar setup using OpenStack virtual machines.
Download

Paper Nr: 58
Title:

An Elasticity Description Language for Task-parallel Cloud Applications

Authors:

Jens Haussmann, Wolfgang Blochinger and Wolfgang Kuechlin

Abstract: In recent years, the cloud has become an attractive execution environment for parallel applications, which introduces novel opportunities for versatile optimizations. Particularly promising in this context is the elasticity characteristic of cloud environments. While elasticity is well established for client-server applications, it is a fundamentally new concept for parallel applications. However, existing elasticity mechanisms for client-server applications can be applied to parallel applications only to a limited extent. Efficient exploitation of elasticity for parallel applications requires novel mechanisms that take into account the particular runtime characteristics and resource requirements of this application type. To tackle this issue, we propose an elasticity description language. This language enables users to define elasticity policies, which specify the elasticity behavior at both the cloud infrastructure level and the application level. Elasticity at the application level is supported by an adequate programming and execution model, as well as abstractions that comply with the dynamic availability of resources. We present the underlying concepts and mechanisms, as well as the architecture and a prototypical implementation. Furthermore, we illustrate the capabilities of our approach through real-world scenarios.
Download

Paper Nr: 1
Title:

A Conceptual Framework Supporting Pattern Design Selection for Scientific Workflow Applications in Cloud Computing

Authors:

Ehab N. Alkhanak, Saif R. Khan, Alexander Verbraeck and Hans van Lint

Abstract: Scientific Workflow Applications (SWFAs) play a vital role for both service consumers and service providers in designing and implementing large and complex scientific processes. Previously, researchers used parallel and distributed computing technologies, such as utility and grid computing, to execute SWFAs, but these technologies provide limited utilization of shared resources. In contrast, the scalability and flexibility challenges are better handled by using cloud computing technologies for SWFAs, since cloud computing offers the amounts of storage space and computing resources necessary for processing large and complex SWFAs. Workflow pattern design provides the facility of re-using previously developed workflow solutions, enabling developers to adopt them for the SWFA under consideration. Inspired by this, researchers have adopted several design patterns to better design SWFAs. Effective pattern design can account for challenges that may not become visible until the implementation stage of a SWFA. However, the selection of the most effective pattern design in accordance with the execution method, data size, and problem complexity of a SWFA remains a challenging task. Motivated by this, we propose a conceptual framework that facilitates recommending a suitable pattern design based on the quality requirements and capabilities given and advertised by cloud consumers and providers, respectively. Finally, we provide guidelines to assist in smoothly migrating SWFAs from other computation paradigms to cloud computing.
Download

Paper Nr: 59
Title:

ISABEL: Infrastructure-Agnostic Benchmark Framework for Cloud-Native Platforms

Authors:

Paulo Souza, Felipe Rubin, João Nascimento, Conrado Boeira, Ângelo Vieira, Rômulo Reis and Tiago Ferreto

Abstract: The popularity of Cloud Computing has contributed to the growth of new business models, which have changed how companies develop and distribute their software. The recurring use of cloud resources called for a more modern, Cloud-Native approach in contrast to the traditional monolithic architecture. While this approach has introduced better portability and use of resources, the abstraction of the infrastructure beneath it raised doubts about its reliability. Through the use of benchmarks, cloud providers began to provide quality assurances for their users in the form of service-level agreements (SLAs). Although benchmarking allows such assurances to be defined, the variety of offered services, each requiring different tools, along with the lack of a standard for input and output formats, has turned this into an arduous task. In order to promote better interoperability of benchmarking, we propose ISABEL, a benchmark suite for Cloud-Native platforms that standardizes the process of benchmarking using any existing benchmark tool.
Download

Area 8 - Cloud Computing Enabling Technology

Full Papers
Paper Nr: 8
Title:

Comparative Evaluation of Kernel Bypass Mechanisms for High-performance Inter-container Communications

Authors:

Gabriele Ara, Tommaso Cucinotta, Luca Abeni and Carlo Vitucci

Abstract: This work presents a framework for evaluating the performance of various virtual switching solutions, each widely adopted on Linux to provide virtual network connectivity to containers in high-performance scenarios, like in Network Function Virtualization (NFV). We present results from the use of this framework for the quantitative comparison of the performance of software-based and hardware-accelerated virtual switches on a real platform with respect to a number of key metrics, namely network throughput, latency and scalability.
Download

Paper Nr: 11
Title:

UM2Q: Multi-cloud Selection Model based on Multi-criteria to Deploy a Distributed Microservice-based Application

Authors:

Juliana Carvalho, Dario Vieira and Fernando Trinta

Abstract: Choosing the best configuration to deploy a distributed application in a multi-cloud environment is a complex task, since many cloud providers offer several services with different capabilities. In this paper, we present a multi-cloud selection process to deploy a distributed microservice-based application with low communication cost among microservices. The proposed selection process is part of PacificClouds, an approach that intends to manage the deployment and execution of distributed applications across multiple providers from the software architect’s perspective. The proposed approach selects various providers, where each provider must host an entire microservice with multiple tasks consuming several cloud services. For each microservice of a multi-cloud application, UM2Q selects the provider that best meets the software architect’s requirements. Hence, the proposed process uses multi-criteria decision-making methods to rank the cloud services and selects cloud providers and services by individually observing each microservice’s requirements, such as cloud availability, response time, and cost. Further, we propose a formal description of UM2Q and briefly describe the strategy’s implementation. We also present a set of experiments to evaluate UM2Q’s performance; the outcomes show its feasibility for a variable number of requirements, microservices and providers, even for extreme values.
Download

Paper Nr: 14
Title:

Performance Modeling in Predictable Cloud Computing

Authors:

Riccardo Mancini, Tommaso Cucinotta and Luca Abeni

Abstract: This paper deals with the problem of performance stability of software running in shared virtualized infrastructures. The focus is on the ability to build an abstract performance model of containerized application components, where real-time scheduling at the CPU level, along with traffic shaping at the networking level, are used to limit the temporal interferences among co-located workloads, so as to obtain a predictable distributed computing platform. A model for a simple client-server application running in containers is used as a case-study, where an extensive experimental validation of the model is conducted over a testbed running a modified OpenStack on top of a custom real-time CPU scheduler in the Linux kernel.
Download

Paper Nr: 21
Title:

Live Migration Timing Optimization for VMware Environments using Machine Learning Techniques

Authors:

Mohamed E. Elsaid, Hazem M. Abbas and Christoph Meinel

Abstract: Live migration of Virtual Machines (VMs) is a vital feature in virtual datacenters and cloud computing platforms. Pre-copy live migration is the technique commonly used in virtual datacenter hypervisors, including VMware, Xen, Hyper-V and KVM, due to its robustness compared to post-copy or hybrid-copy techniques. The disadvantage of pre-copy live migration is the challenge of predicting the live migration cost and performance. Thus, virtual datacenter admins run live migration without any idea of the expected cost or the optimal timing for running it, especially for large VMs or for multiple VMs running concurrently. This leads to longer live migration duration, network bottlenecks and, in some cases, live migration failure. In this paper, we use machine learning techniques to predict the optimal timing for running a live migration request. This optimal timing approach is based on using machine learning for live migration cost prediction and datacenter network utilization prediction. Datacenter admins can be alerted with this optimal timing recommendation when a live migration request is issued.
Download

Paper Nr: 25
Title:

Fast Analysis and Prediction in Large Scale Virtual Machines Resource Utilisation

Authors:

Abdullahi Abubakar, Sakil Barbhuiya, Peter Kilpatrick, Ngo A. Vien and Dimitrios S. Nikolopoulos

Abstract: Most cloud providers running Virtual Machines (VMs) have the constant goals of preventing downtime, increasing performance and improving power management, among others. The most effective way to achieve these goals is to be proactive by predicting the behaviour of the VMs. Analysing VMs is important, as it can help cloud providers gain insights to understand the needs of their customers, predict their demands, and optimise the use of resources. To manage the resources in the cloud efficiently, and to ensure the performance of cloud services, it is crucial to predict the behaviour of VMs accurately. This will also help the cloud provider improve VM placement, scheduling, consolidation, power management, etc. In this paper, we propose a framework for fast analysis and prediction of large-scale VM CPU utilisation. We use a novel approach, both in terms of the algorithms employed for prediction and in terms of the tools used to run these algorithms with a large dataset, to deliver a solid VM CPU utilisation predictor. We processed over two million VMs from Microsoft Azure VM traces and kept only the VMs with a complete month of data, amounting to 28,858 VMs. The filtered VMs were subsequently used for prediction. Our statistical analysis reveals that 94% of these VMs are predictable. Furthermore, we investigated the patterns and behaviours of those VMs and found that most VMs have one or several spikes, the majority of which are not seasonal. Of all the 28,858 VMs analysed and forecasted, we accurately predicted 17,523 (61%) based on their CPU utilisation. We use Apache Spark for parallel and distributed processing to achieve fast processing. In terms of execution time, on average, each VM is analysed and predicted within three seconds.
Download

Paper Nr: 39
Title:

Behavioral Analysis for Virtualized Network Functions: A SOM-based Approach

Authors:

Tommaso Cucinotta, Giacomo Lanciano, Antonio Ritacco, Marco Vannucci, Antonino Artale, Joao Barata, Enrica Sposato and Luca Basili

Abstract: In this paper, we tackle the problem of detecting anomalous behaviors in a virtualized infrastructure for network function virtualization, proposing to use self-organizing maps for analyzing the historical data available in a data center. We propose a joint analysis of system-level metrics, mostly related to resource consumption patterns of the hosted virtual machines, as available through the virtualized infrastructure monitoring system, and of the application-level metrics published by individual virtualized network functions through their own monitoring subsystems. Experimental results, obtained by processing real data from one of the NFV data centers of the Vodafone network operator, show that our technique is able to identify specific points in space and time in the recent evolution of the monitored infrastructure that are worth investigating by a human operator in order to keep the system running under expected conditions.
Download

Paper Nr: 66
Title:

Microservices vs Serverless: A Performance Comparison on a Cloud-native Web Application

Authors:

Chen-Fu Fan, Anshul Jindal and Michael Gerndt

Abstract: The microservices architecture has gained popularity among enterprises due to its agility, scalability, and resiliency. However, serverless computing has become a new trending topic in the design of cloud-native applications. Compared to monolithic and microservices architectures, a serverless architecture offloads management and server configuration from the user to the cloud provider and lets the user focus only on product development. Hence, there are debates regarding which deployment strategy to use. This research provides a performance comparison of a cloud-native web application in terms of scalability, reliability, cost, and latency when deployed using the microservices and serverless deployment strategies. It shows that neither the microservices nor the serverless deployment strategy fits all scenarios. The experimental results demonstrate that each type of deployment strategy has its advantages under different scenarios. The microservices deployment strategy has a cost advantage for long-lasting services over serverless. On the other hand, a request accompanied by a large response is more suitably handled by serverless because of its scaling agility.
Download

Paper Nr: 67
Title:

TOSCA Light: Bridging the Gap between the TOSCA Specification and Production-ready Deployment Technologies

Authors:

Michael Wurster, Uwe Breitenbücher, Lukas Harzenetter, Frank Leymann, Jacopo Soldani and Vladimir Yussupov

Abstract: The automation of application deployment is critical because manually deploying applications is time-consuming, tedious, and error-prone. Several deployment automation technologies have been developed in recent years employing tool-specific deployment modeling languages. At the same time, the OASIS standard Topology Orchestration Specification for Cloud Applications (TOSCA) emerged as a means for describing cloud applications, i.e., their components and relationships, in a vendor-agnostic fashion. Although TOSCA is widely used in research, it is not supported by the production-ready deployment automation technologies used daily by practitioners working with cloud-native applications, resulting in a gap between the state of the art in research and the state of practice in industry. To help bridge this gap, we leverage the recently introduced Essential Deployment Metamodel (EDMM) and identify TOSCA Light, an EDMM-compliant subset of TOSCA, to enact the transformation from TOSCA to the vast majority of deployment automation technology-specific models used by today’s software industry. Further, we present an end-to-end TOSCA Light modeling and transformation workflow and show a prototypical implementation to validate our approach.
Download

Short Papers
Paper Nr: 12
Title:

Performance Evaluation of Container Runtimes

Authors:

Lennart Espe, Anshul Jindal, Vladimir Podolskiy and Michael Gerndt

Abstract: The co-location of containers on the same host leads to significant performance concerns in the multi-tenant environment such as the cloud. These concerns are raised due to an attempt to maximize host resource utilization by increasing the number of containers running on the same host. The selection of a container runtime becomes critical in the case of strict performance requirements. In the scope of the study, two commonly used runtimes were evaluated: containerd (industry standard) and CRI-O (reference implementation of the CRI) on two different Open Container Initiative (OCI) runtimes: runc (default for most container runtimes) and gVisor (highly secure alternative to runc). Evaluation aspects of container runtimes include the performance of running containers, the performance of container runtime operations, and scalability. A tool called TouchStone was developed to address these evaluation aspects. The tool uses the CRI standard and is pluggable into any Kubernetes-compatible container runtime. Performance results demonstrate the better performance of containerd in terms of CPU usage, memory latency and scalability aspects, whereas file system operations (in particular, write operations) are performed more efficiently by CRI-O.
Download
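The timing-based evaluation this abstract describes can be illustrated with a minimal benchmarking harness (a hedged sketch only, not the TouchStone tool itself; `benchmark` and the stand-in `operation` are hypothetical names, and a real harness would invoke CRI operations such as container creation in place of the stand-in):

```python
# Minimal sketch of a latency benchmark in the spirit of the paper's
# evaluation: repeatedly time an operation and report mean and p95 latency.
import statistics
import time

def benchmark(operation, iterations=50):
    """Time `operation` repeatedly and return summary latency statistics."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        operation()
        samples.append(time.perf_counter() - start)
    samples.sort()
    return {
        "mean_s": statistics.mean(samples),
        "p95_s": samples[int(0.95 * (len(samples) - 1))],  # 95th percentile
    }

# Stand-in operation; a real harness would start/stop a container here.
result = benchmark(lambda: sum(range(10_000)))
print(sorted(result))  # ['mean_s', 'p95_s']
```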

Paper Nr: 35
Title:

Accounting and Billing Challenges in Large Scale Emerging Cloud Technologies

Authors:

Piyush Harsh and Oleksii Serhiienko

Abstract: A billing model that can easily adapt to emerging market opportunities is essential to the long-term survival of any business. Accounting and billing is also one of the few processes that has a wide impact on legal and regulatory compliance, revenue lines, and customer retention models of all businesses. In an era of rapid technology shifts, with the emergence of Fog and Edge deployment models and the marriage of IoT and cloud promising smart-everything everywhere, it is paramount to understand what new challenges must be addressed by any billing framework. In this paper, we list several emerging challenges that must be overcome when architecting a future-ready billing platform. We also briefly analyze a few technologies that could be used to prototype such a solution. We present our proof-of-concept experiment along with initial results highlighting the feasibility of our proposed architecture towards a scalable billing framework for massively distributed IoT applications at the edge.
Download

Paper Nr: 55
Title:

SEAPORT: Assessing the Portability of Serverless Applications

Authors:

Vladimir Yussupov, Uwe Breitenbücher, Ayhan Kaplan and Frank Leymann

Abstract: The term serverless is often used to describe cloud applications that comprise components managed by third parties. Like any other cloud application, serverless applications are often tightly coupled with providers, their features, models, and APIs. As a result, when their portability to another provider has to be assessed, application owners must identify heterogeneous lock-in issues and provider-specific technical details. Unfortunately, this process is tedious, error-prone, and requires significant technical expertise in the domains of serverless and cloud computing. In this work, we introduce SEAPORT, a method for automatically assessing the portability of serverless applications with respect to a chosen target provider or platform. The method introduces (i) a canonical serverless application model and (ii) concepts for portability assessment involving classification and component similarity calculation together with static code analysis. The method aims to be compatible with existing migration concepts so that it can serve as a complementary part for serverless use cases. We present the architecture of a decision support system for the automated assessment of a given application model with respect to the target provider. To validate the technical feasibility of the method, we implement the system prototypically.
Download
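The similarity-based portability assessment mentioned in the abstract can be illustrated with a toy set-overlap score (a hedged sketch under assumed semantics, not SEAPORT's actual algorithm; the function names and the feature sets below are hypothetical):

```python
# Toy sketch: score a serverless component's portability as the fraction of
# the provider features it uses that exist on the target platform, with a
# Jaccard similarity shown as an alternative symmetric measure.

def jaccard(a, b):
    """Symmetric set similarity: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 1.0

def portability_score(component_features, target_features):
    """1.0 means every feature the component uses has a target counterpart."""
    used = set(component_features)
    matched = used & set(target_features)
    return len(matched) / len(used) if used else 1.0

# Hypothetical feature sets for one function component and a target platform.
used   = {"http-trigger", "object-storage-event", "env-secrets"}
target = {"http-trigger", "env-secrets", "queue-trigger"}

print(round(portability_score(used, target), 2))  # 0.67 (2 of 3 features matched)
```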

Paper Nr: 69
Title:

A Distributed Checkpoint Mechanism for Replicated State Machines

Authors:

Niyazi Ö. Çelikel and Tolga Ovatman

Abstract: This study presents preliminary results from a distributed checkpointing approach developed to be used with replicated state machines. Our approach takes advantage of splitting the partial execution history of the master state machine and storing it in a distributed way. Our initial results show that such an approach reduces memory consumption for both the running replicas and the restoring replica in case of a failure. On the other hand, for larger histories and larger numbers of replicas, it also increases the restore duration, which is a major drawback.
Download
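The history-splitting idea in the abstract can be illustrated as follows (a minimal sketch under assumed semantics, not the authors' implementation; chunk size, round-robin placement, and all function names are hypothetical choices for illustration):

```python
# Sketch: split the master state machine's execution history into chunks,
# distribute the chunks round-robin across replicas so no single node stores
# the full log, and reassemble the history when a replica must be restored.

def split_history(history, chunk_size):
    """Split the execution history into consecutive fixed-size chunks."""
    return [history[i:i + chunk_size] for i in range(0, len(history), chunk_size)]

def distribute(chunks, num_replicas):
    """Assign indexed chunks to replicas round-robin; each keeps only its share."""
    stores = {r: [] for r in range(num_replicas)}
    for idx, chunk in enumerate(chunks):
        stores[idx % num_replicas].append((idx, chunk))
    return stores

def restore(stores):
    """Collect every replica's chunks, reorder by index, and flatten."""
    indexed = [pair for share in stores.values() for pair in share]
    indexed.sort(key=lambda pair: pair[0])
    return [event for _, chunk in indexed for event in chunk]

history = [f"event-{i}" for i in range(10)]
stores = distribute(split_history(history, 3), num_replicas=4)
assert restore(stores) == history  # reassembly yields the original history
```

The trade-off the abstract reports is visible even in this sketch: each replica holds only part of the history, but restoring requires contacting every replica, which grows costlier with more replicas and longer histories.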

Paper Nr: 54
Title:

First Experience and Practice of Cloud Bursting Extension to OCTOPUS

Authors:

Susumu Date, Hiroaki Kataoka, Shuichi Gojuki, Yuki Katsuura, Yuki Teramae and Shinichiro Kigoshi

Abstract: In the Cybermedia Center at Osaka University, OCTOPUS (Osaka university Cybermedia cenTer Over-Petascale Universal Supercomputer) has provided a high-performance computing environment for scientists and researchers in Japan since April 2017. OCTOPUS has operated stably with only a few technical problems. However, due to ever-increasing computing demands, the users’ waiting time from job submission to job completion has been growing. In this research, we explore the feasibility of solving this problem by offloading computational workload to IaaS cloud services, in the hope of alleviating the high utilization of OCTOPUS. Technically, we build a cloud bursting environment that integrates OCTOPUS, as the on-premise computing environment, with Microsoft Azure, as an IaaS cloud service. We then investigate whether the cloud bursting environment is useful and practical for avoiding high utilization of OCTOPUS. In this paper, we summarize our first experience and practice with this cloud-bursting solution.
Download