CLOSER 2021 Abstracts


Area 1 - Cloud Computing Fundamentals

Short Papers
Paper Nr: 23
Title:

Characterization of Network Management Traffic in OpenStack based on Virtual Machine State Changes

Authors:

Adnei W. Donatti, Charles C. Miers, Guilherme P. Koslovski, Maurício A. Pillon and Tereza B. Carvalho

Abstract: OpenStack is a popular and versatile solution for creating IaaS clouds. OpenStack private cloud implementations have several issues regarding their network infrastructure with which organizations are not always familiar. In this context, this work characterizes the administrative network traffic of OpenStack clouds. Administrative traffic runs on a dedicated network and can affect the performance of the cloud as a whole. We set up an induced lifecycle for virtual machines (VMs) and measured the network traffic and Application Programming Interface (API) calls for each task with different operating system (OS) images. Moreover, we provide an analysis and characterization of the measured network traffic in the management security domain of OpenStack, and verify the feasibility of using a linear regression model to predict the traffic volume produced by each task.
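
To illustrate the last point, the sketch below fits a linear regression that maps VM lifecycle features to management traffic volume. It is not the authors' model: the features (image size, task code) and all sample values are hypothetical, and scikit-learn is assumed to be available.

    # Illustrative sketch only: predicting management-network traffic volume per
    # VM lifecycle task with a linear regression. Features and values are
    # hypothetical, not taken from the paper.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # [image_size_mb, task_code]; task_code: 0=create, 1=suspend, 2=resume, 3=delete
    X = np.array([[250, 0], [250, 1], [250, 2], [250, 3],
                  [700, 0], [700, 1], [700, 2], [700, 3]])
    y = np.array([5200, 310, 420, 280, 13900, 330, 450, 300])   # traffic in KB

    model = LinearRegression().fit(X, y)
    print("Predicted traffic (KB) for creating a VM from a 500 MB image:",
          round(model.predict(np.array([[500, 0]]))[0]))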

Paper Nr: 27
Title:

Multi-objective Optimization for Virtual Machine Allocation in Computational Scientific Workflow under Uncertainty

Authors:

Arun Ramamurthy, Priyanka Pantula, Mangesh Gharote, Kishalay Mitra and Sachin Lodha

Abstract: Provisioning resources and services from various cloud providers is an increasingly promising paradigm. Workflow applications are becoming increasingly computation-intensive or data-intensive, with resources allocated on a pay-per-use basis. In this paper, a multi-objective optimization study for scientific workflows in a cloud environment is proposed. The aim is to minimize execution time and purchasing cost simultaneously while satisfying the demand requirements of customers. The uncertainties present in the model are identified and handled using the well-known Chance Constrained Programming (CCP) technique to support real-world implementation. The model is solved using the Non-dominated Sorting Genetic Algorithm II (NSGA-II). This comprehensive study shows that the solutions obtained when uncertainties are considered differ from those of the deterministic case. Depending on the required probability of constraint satisfaction, the objective functions improve, but at the cost of the reliability of the solution.
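
At the heart of NSGA-II is the notion of Pareto dominance. The fragment below is a minimal, generic sketch of non-dominated filtering applied to hypothetical VM allocation candidates scored on (execution time, cost); it is not the authors' implementation and omits CCP, crowding distance and the genetic operators.

    # Minimal Pareto-dominance filter (both objectives minimised); illustrative only.
    def dominates(a, b):
        """a dominates b if it is no worse in all objectives and better in at least one."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(solutions):
        return [s for s in solutions
                if not any(dominates(o, s) for o in solutions if o is not s)]

    # Hypothetical candidates: (execution time in hours, cost in $)
    candidates = [(10.0, 4.0), (8.0, 6.0), (12.0, 3.5), (8.5, 5.5), (9.0, 7.0)]
    print(pareto_front(candidates))    # (9.0, 7.0) is dominated and dropped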

Area 2 - Cloud Operations

Full Papers
Paper Nr: 9
Title:

Kubernetes Autoscaling: YoYo Attack Vulnerability and Mitigation

Authors:

Ronen Ben David and Anat Bremler-Barr

Abstract: In recent years, we have witnessed a new kind of DDoS attack, the burst attack (Chai, 2013; Dahan, 2018), where the attacker launches periodic bursts of traffic overload on online targets. Recent work presents a new kind of burst attack, the YoYo attack (Bremler-Barr et al., 2017), that operates against the auto-scaling mechanism of VMs in the cloud. The periodic bursts of traffic load cause the auto-scaling mechanism to oscillate between scale-up and scale-down phases. The auto-scaling mechanism translates the flat DDoS attack into an Economic Denial of Sustainability attack (EDoS), where the victim suffers economic damage accrued by paying for the extra resources required to process the traffic generated by the attacker. However, it was shown that the YoYo attack also causes significant performance degradation, since it takes time to scale up VMs. In this research, we analyze the resilience of Kubernetes auto-scaling against YoYo attacks, as containerized cloud applications using Kubernetes have gained popularity and replaced VM-based architectures in recent years. We present experimental results on Google Cloud Platform showing that, even though the scale-up time of containers is much lower than that of VMs, Kubernetes is still vulnerable to the YoYo attack since VMs are still involved. Finally, we evaluate ML models that can accurately detect a YoYo attack on a Kubernetes cluster.
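
The toy simulation below (not the paper's experiment) shows how a periodic burst load drives a simplified HPA-style autoscaler into the scale-up/scale-down oscillation that the YoYo attack exploits; the target utilisation, burst period and load values are all hypothetical.

    # Toy YoYo-style oscillation against a simplified HPA rule (illustrative only).
    import math

    TARGET_UTIL = 0.5          # desired per-replica utilisation
    MIN_R, MAX_R = 1, 20       # replica bounds

    def hpa_desired(load, replicas):
        """Kubernetes-style rule: desired = ceil(current * observed / target)."""
        observed = load / replicas
        return max(MIN_R, min(MAX_R, math.ceil(replicas * observed / TARGET_UTIL)))

    replicas = 1
    for minute in range(12):
        burst = minute % 4 < 2               # attacker: 2 minutes on, 2 minutes off
        load = 8.0 if burst else 0.5         # aggregate utilisation units
        replicas = hpa_desired(load, replicas)
        print(f"t={minute:2d} min  burst={str(burst):5s}  replicas={replicas}")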

Paper Nr: 36
Title:

Automated Generation of Management Workflows for Running Applications by Deriving and Enriching Instance Models

Authors:

Lukas Harzenetter, Tobias Binz, Uwe Breitenbücher, Frank Leymann and Michael Wurster

Abstract: As automation is a key driver to achieve efficiency in the ever-growing IT landscape, many different deployment automation technologies have emerged. These technologies to deploy and manage applications have been widely adopted in industry and research. In larger organizations, usually even multiple deployment technologies are used in parallel. However, as most of these technologies offer limited or no management capabilities, managing application systems deployed using different deployment technologies is cumbersome. Thus, holistic management functionalities affecting multiple components of an application, e.g., updating or backing up all components, are impossible. In this paper, we present an approach that enables the automated execution of holistic management functionalities for running applications. To achieve this, we first retrieve instance information of a running application and derive a standardized instance model of the application. Afterwards, the instance model is enriched with additional management functionality. We hereby extend the existing Management Feature Enrichment and Workflow Generation approach to support running applications. To execute the enriched management functionalities on the running application, standards-based workflows are generated.

Short Papers
Paper Nr: 22
Title:

Solutions for Monitoring and Anomaly Detection in Dynamic IT Infrastructure: Literature Review

Authors:

Jānis Grabis, Jānis Kampars, Krišjānis Pinka, Guntis Mosāns, Ralfs Matisons and Artjoms Vindbergs

Abstract: Modern information technology infrastructure is highly complex, and its monitoring requires the integration of different monitoring tools and management systems. That is especially important if monitoring data is to be used for predictive maintenance purposes. This paper identifies methods and technologies suitable for analysis of the information technology infrastructure. They are identified by means of a literature review. The research questions considered are: 1) What methods are applicable for analysing virtualized IT infrastructure-related data from a technological point of view? 2) What architectural patterns and groups of tools are appropriate for infrastructure data processing and analysis? and 3) What tools according to the categories identified in RQ2 can be used for storing and analysing topology graphs and metrics describing virtualized infrastructure? The research findings will serve as an input for further research activities on the architectural design of the integrated monitoring solution and the development of a machine learning model for predictive maintenance.

Area 3 - Data as a Service

Short Papers
Paper Nr: 50
Title:

Continuous Data Quality Management for Machine Learning based Data-as-a-Service Architectures

Authors:

Shelernaz Azimi and Claus Pahl

Abstract: Data-as-a-Service (DaaS) solutions make raw source data accessible in the form of processable information. Machine learning (ML) makes it possible to produce meaningful information and knowledge based on raw data. Thus, quality is a major concern that applies to raw data as well as to information provided by ML-generated models. At the core of our solution is a conceptual framework that links input data quality and the machine-learned data service quality, specifically inferring raw data problems as root causes from observed data service deficiency symptoms. This makes it possible to deduce the hidden origins of quality problems observable by users of DaaS offerings. We analyse the quality framework through an extensive case study from an edge cloud and Internet-of-Things-based traffic application. We determine quality assessment mechanisms for symptom and cause analysis in different quality dimensions.

Area 4 - Edge Cloud and Fog Computing

Full Papers
Paper Nr: 37
Title:

Trusted Execution Environments for Cloud/Fog-based Internet of Things Applications

Authors:

Dalton G. Valadares, Newton C. Will, Marco A. Spohn, Danilo S. Santos, Angelo Perkusich and Kyller C. Gorgonio

Abstract: Cloud services and fog-based solutions can improve the communication and processing efficiency of the Internet of Things (IoT). Cloud and fog servers offer more processing power to IoT solutions, enabling more complex tasks within reduced time frames, which would not be possible when relying solely on IoT devices. The benefits of cloud and fog computing are even greater when sensitive data processing is considered, since IoT devices can hardly perform complex security tasks. To improve data security in cloud/fog-based IoT solutions, Trusted Execution Environments (TEEs) allow the processing of sensitive data and code inside protected and isolated regions of memory. This paper presents a brief survey regarding the adoption of TEEs to protect data in cloud/fog-based IoT applications. We focus on solutions based on the two leading TEE technologies currently available in the market (Intel SGX and ARM TrustZone), pointing out some research challenges and directions.

Short Papers
Paper Nr: 7
Title:

A Machine Learning based Context-aware Prediction Framework for Edge Computing Environments

Authors:

Abdullah F. Aljulayfi and Karim Djemame

Abstract: A Context-aware Prediction Framework (CAPF) can be provided through a Self-adaptive System (SAS) resource manager to support autoscaling decisions in Edge Computing (EC) environments. However, EC dynamicity and workload fluctuation are the main challenges in designing a robust prediction framework. Machine Learning (ML) algorithms show promising accuracy in workload forecasting problems, which may vary according to the workload pattern. Therefore, the accuracy of such algorithms needs to be evaluated and compared in order to select the most suitable algorithm for EC workload prediction. In this paper, a thorough comparison is conducted focusing on the most popular ML algorithms, namely Linear Regression (LR), Support Vector Regression (SVR), and Neural Networks (NN), using a real EC dataset. The experimental results show that a robust prediction framework can be supported by more than one algorithm, considering the EC contextual behavior. The results also reveal that the NN outperforms LR and SVR in most cases.
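
The following self-contained sketch illustrates the kind of comparison the abstract describes, fitting LR, SVR and a small NN to a synthetic periodic workload built from lag features; the data is generated, not the paper's EC dataset, and the hyperparameters are arbitrary.

    # Illustrative model comparison on a synthetic workload (not the paper's data).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.svm import SVR
    from sklearn.neural_network import MLPRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(0)
    t = np.arange(200)
    load = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

    # Predict the load at time t from the three previous observations.
    X = np.column_stack([load[0:-3], load[1:-2], load[2:-1]])
    y = load[3:]
    X_train, X_test, y_train, y_test = X[:150], X[150:], y[:150], y[150:]

    models = {
        "LR": LinearRegression(),
        "SVR": SVR(kernel="rbf", C=10.0),
        "NN": MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    }
    for name, m in models.items():
        m.fit(X_train, y_train)
        print(name, "MAE:", round(mean_absolute_error(y_test, m.predict(X_test)), 2))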

Paper Nr: 33
Title:

Runlet: A Cross-platform IoT Tool for Interactive Job Execution Over Heterogeneous Devices with Reliable Message Delivery

Authors:

Vandré L. Cândido and Flávio O. Silva

Abstract: IoT uses different hardware and software components in a mixed environment, and interoperability and reliability are key issues for the management of the devices. Interactive job execution is another important concept for management in different scenarios. The literature lacks a tool with such characteristics. This work fills the gap in the state of the art by introducing a tool that achieves interactive job execution over a network of heterogeneous devices with reliable message delivery. The tool leverages the Advanced Message Queuing Protocol (AMQP) and the RabbitMQ message broker. AMQP is an open-standard Machine-to-Machine (M2M) publish/subscribe messaging protocol optimized for high-latency and unreliable networks that enables client applications to communicate with conforming messaging middleware brokers. RabbitMQ is an open-source message broker that supports various messaging protocols. The architecture of Runlet is discussed in detail, including the reasoning behind architectural decisions. The evaluation is conducted through an experimental approach that assesses interactivity and reliability on a testbed composed of single-board ARM computers and laptop devices. The experimental results show that the application offers interactivity under different scenarios and provides reliable message delivery after node and server failures.
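
A minimal sketch (not Runlet's code) of the reliability building blocks AMQP/RabbitMQ provide: a durable queue, persistent messages and publisher confirms, using the pika client. The queue name, broker host and job payload are assumptions.

    # Reliable job publishing to RabbitMQ (illustrative only; assumes a local broker).
    import pika

    connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
    channel = connection.channel()
    channel.confirm_delivery()                                 # publisher confirms
    channel.queue_declare(queue="runlet.jobs", durable=True)   # survives broker restart

    channel.basic_publish(
        exchange="",
        routing_key="runlet.jobs",
        body=b'{"job": "uptime", "target": "arm-node-1"}',
        properties=pika.BasicProperties(delivery_mode=2),      # persist message to disk
    )
    print("job enqueued")
    connection.close()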

Paper Nr: 48
Title:

Modelling Energy Consumption of IoT Devices in DISSECT-CF-Fog

Authors:

Andras Markus and Attila Kertesz

Abstract: The continuous evolution of information technology brings requirements to foster cost-, resource- and energy-aware systems. The Internet of Things is considered one of the most trending technologies, and it is often coupled with Cloud or Fog Computing resources that manage the possibly big data generated by smart devices in an effective way. To reduce the carbon footprint of such IoT-Fog-Cloud infrastructures, planning and optimisation of their energy consumption is necessary to realise sustainable solutions. It is also inevitable to use simulation in the design phase of such complex systems, since it would be hardly feasible and rather costly to evaluate the numerous settings affecting energy use on real systems. In this paper, we design an IoT energy model based on real-world measurements, and propose an extension of the energy model of the DISSECT-CF-Fog simulator to enable energy usage monitoring of complex IoT-Fog-Cloud infrastructures. We also present a validation of the extension with a weather forecasting use case to exemplify its configuration possibilities that meet the design requirements of the energy sector.
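
As a back-of-the-envelope illustration of an IoT device energy model (not the DISSECT-CF-Fog implementation), the sketch below sums idle, sensing and transmission energy over a reporting period; the power figures and timings are hypothetical placeholders, not the paper's measurements.

    # Simple additive energy model for a periodically reporting sensor (illustrative).
    IDLE_W, SENSE_W, TX_W = 0.08, 0.25, 0.70      # assumed power draw in watts

    def energy_joules(period_s, sense_s, tx_s):
        """Energy for one reporting period: idle + sensing + radio transmission."""
        idle_s = period_s - sense_s - tx_s
        return IDLE_W * idle_s + SENSE_W * sense_s + TX_W * tx_s

    # A weather sensor reporting every 5 minutes (hypothetical timings).
    per_period = energy_joules(period_s=300, sense_s=2.0, tx_s=0.5)
    print(f"per period: {per_period:.2f} J, per day: {per_period * 288 / 3600:.2f} Wh")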

Area 5 - Mobile Cloud Computing

Short Papers
Paper Nr: 19
Title:

An Empirical Study about the Adoption of Multi-language Technique in Computation Offloading in a Mobile Cloud Computing Scenario

Authors:

Filipe S. B. de Matos, Paulo L. Rego and Fernando M. Trinta

Abstract: Low processing capabilities and limited energy autonomy are common restrictions faced by most mobile devices. In order to address these issues, the computation offloading technique has been proposed to transfer tasks from low-processing devices to other machines with higher computing capability. This paper presents an empirical study on the performance of multi-language techniques in offloading procedures. Our experiments evaluate the processing time and the energy consumed by a mobile device when executing methods of two applications locally (on a mobile phone) and remotely (via offloading) on server processes developed using distinct programming languages (Go, C++, Java, and Python). Google’s gRPC and Protocol Buffers were used as the data serialization mechanism to allow offloading between client and server processes. The results show that using a multi-language approach for offloading can reduce the processing time by up to 39 times and the mobile device’s energy consumption by up to approximately 96%.

Area 6 - Services Science

Full Papers
Paper Nr: 20
Title:

Data Flow Testing of Serverless Functions

Authors:

Stefan Winzinger and Guido Wirtz

Abstract: Serverless functions are a popular trend in the cloud computing market offered by many cloud platform providers. The statelessness of serverless functions enables dynamic scalability by providing additional instances running these functions. However, statelessness means that the state of a container running a serverless function is not guaranteed to persist for the next call. Therefore, serverless functions must interact with other services to save their state. This results in systems whose interaction with other services is complex and hard to test. Considering the data flow resulting from the integration of different components is an adequate approach in an integration testing process. Therefore, we investigated the external factors influencing the execution of serverless functions and used this insight to create a testing framework. The framework helps measure important data flow coverage aspects, supporting developers in their evaluation of test cases for the integration process of a serverless application. We showed that data flow criteria between serverless functions can be measured with a small run-time overhead, making it attractive for developers to use.

Paper Nr: 29
Title:

Design and Development of a Technique for the Automation of the Risk Analysis Process in IT Security

Authors:

Daniele Granata and Massimiliano Rak

Abstract: Cloud service architectures are very heterogeneous and commonly rely on components managed by third parties. As a consequence, the security verification of these architectures is a complex and costly process. Moreover, the development of applications that run in the cloud must take into account agile software design and development methodologies and a very short time-to-market, which are often incompatible with deep security testing. This article aims at addressing such issues by proposing a technique, compatible with Security-by-Design methodologies, that automates the threat modeling and risk evaluation of a system, reducing the costs and requiring a limited set of security skills. Through the proposed approach, the software system is analysed by identifying the threats that affect the system’s technical assets, ranking the level of risk associated with each threat, and suggesting a set of countermeasures in standard terms; the process requires minimal user interaction. The proposed technique was implemented through a dedicated tool and, correctly integrated in development processes, can significantly reduce the need for costly security experts and shorten the time needed to execute a full system security assessment. In order to validate the technique, we compared our results with approaches available in the literature and existing tools.

Short Papers
Paper Nr: 11
Title:

GoAT: A Sensor Ranking Approach for IoT Environments

Authors:

Felipe S. Costa, Silvia M. Nassar and Mario R. Dantas

Abstract: The data collected and transmitted by sensors in Internet of Things environments must be stored and processed in order to enable Smart Cities and Industry 4.0. However, due to the growing number of devices, it becomes necessary to implement techniques to select the most suitable sensors for each task. This is important to make it possible to execute applications with low-latency requirements. Thus, several works have been dedicated to studying how to search, index, and rank sensors to overcome these challenges. A method, called GoAT, is presented in this paper to rank sensors based on the theory of active perception. The solution was evaluated using four real datasets. Our results demonstrate that the proposed solution can provide a good level of reliability in the utilization of sensor data. Furthermore, GoAT requires few computational resources and, at the same time, reduces latency in the sensor selection process.

Area 7 - Cloud Computing Platforms and Applications

Full Papers
Paper Nr: 24
Title:

A Platform for Interactive Data Science with Apache Spark for On-premises Infrastructure

Authors:

Rafal Lokuciejewski, Dominik Schüssele, Florian Wilhelm and Sven Groppe

Abstract: Various cloud providers offer integrated platforms for interactive development in notebooks for the processing and analysis of Big Data on large compute clusters. Such platforms enable users to easily leverage frameworks like Apache Spark as well as to manage cluster resources. However, Data Scientists and Engineers face the lack of a similar holistic solution when working with on-premises infrastructure. In particular, a central point of administration to access notebook UIs, manage notebook kernels, allocate resources for frameworks like Apache Spark, or monitor cluster workloads is currently missing for on-premises infrastructure. To overcome these issues and provide on-premises users with a platform for interactive development, we propose a cross-cluster architecture resulting from an extensive requirements engineering process. Based on open-source components, the designed platform provides an intuitive Web UI that enables users to easily access notebooks, manage custom kernel environments, and monitor cluster resources and current workloads. Besides an admin panel for user restrictions, the platform provides isolation of user workloads and scalability by design. The designed platform is evaluated against prior on-premises solutions as well as from a user perspective by utilizing the User Experience Questionnaire, an independent benchmark tool for interactive products.
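
A rough sketch of what such a platform automates for its users: starting a PySpark session from a notebook with explicit resource settings. The master URL and resource figures are placeholders for an on-premises cluster, not values from the paper.

    # Illustrative only: an interactive Spark session with explicit resources.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("interactive-notebook-session")
        .master("spark://spark-master.internal:7077")   # assumed cluster endpoint;
        # use .master("local[*]") to try the snippet without a cluster
        .config("spark.executor.instances", "4")
        .config("spark.executor.memory", "4g")
        .config("spark.executor.cores", "2")
        .getOrCreate()
    )

    df = spark.range(1_000_000)
    print("row count:", df.count())
    spark.stop()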

Paper Nr: 41
Title:

DynamoML: Dynamic Resource Management Operators for Machine Learning Workloads

Authors:

Min-Chi Chiang and Jerry Chou

Abstract: The recent success of deep learning applications is driven by the computing power of GPUs. However, as the workflow of deep learning becomes increasingly complicated and resource-intensive, how to manage the expensive GPU resources for Machine Learning (ML) workloads becomes a critical problem. Existing resource managers mostly focus on a single specific type of workload, like batch processing or web services, and lack runtime optimization and application performance awareness. Therefore, this paper proposes a set of runtime dynamic management techniques (including auto-scaling, job preemption, workload-aware scheduling, and elastic GPU sharing) to handle a mixture of ML workloads consisting of modeling, training, and inference jobs. Our proposed system is implemented as a set of extended operators on Kubernetes and has the strength of complete transparency and compatibility with the application code as well as the deep learning frameworks. Our experiments conducted on AWS GPU clusters show that our approach can outperform native Kubernetes with a 60% improvement in system throughput and a 70% reduction in training time, without causing any SLA violations on inference services.
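
As a hedged illustration of one action such operators perform, the sketch below scales a Deployment with the official Kubernetes Python client; the deployment name, namespace and replica count are assumptions, not values from the paper.

    # Illustrative only: scaling an inference Deployment via the scale subresource.
    from kubernetes import client, config

    def scale_deployment(name: str, namespace: str, replicas: int) -> None:
        """Patch the Deployment's scale subresource to the desired replica count."""
        apps = client.AppsV1Api()
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    if __name__ == "__main__":
        config.load_kube_config()          # or load_incluster_config() inside a pod
        scale_deployment("inference-server", "ml-workloads", replicas=3)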

Short Papers
Paper Nr: 4
Title:

Deployment Service for Scalable Distributed Deep Learning Training on Multiple Clouds

Authors:

Javier Jorge, Germán Moltó, Damian Segrelles, João P. Fontes and Miguel A. Guevara

Abstract: This paper introduces a platform based on open-source tools to automatically deploy and provision a distributed set of nodes that conduct the training of a deep learning model. To this end, the TensorFlow deep learning framework is used, as well as the Infrastructure Manager service to deploy complex infrastructures programmatically. The provisioned infrastructure addresses: data handling, model training using these data, and the persistence of the trained model. For this purpose, public cloud platforms such as Amazon Web Services (AWS), together with General-Purpose Computing on Graphics Processing Units (GPGPU), are employed to dynamically and efficiently perform the workflow of tasks related to training deep learning models. This approach has been applied to real-world use cases to compare local training versus distributed training on the Cloud. The results indicate that the dynamic provisioning of GPU-enabled distributed virtual clusters in the Cloud introduces great flexibility to cost-effectively train deep learning models.
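
As a generic illustration of the kind of distributed training such a platform provisions, the sketch below uses TensorFlow's MultiWorkerMirroredStrategy with a TF_CONFIG cluster definition; the worker addresses are placeholders, the model is a toy, and every listed node would run the same script with its own task index (in practice the deployment service, not the script, would set TF_CONFIG).

    # Minimal multi-worker data-parallel training sketch (illustrative only).
    import json, os
    import tensorflow as tf

    # Placeholder cluster definition; normally injected by the provisioning service.
    os.environ.setdefault("TF_CONFIG", json.dumps({
        "cluster": {"worker": ["10.0.0.10:12345", "10.0.0.11:12345"]},
        "task": {"type": "worker", "index": 0},
    }))

    strategy = tf.distribute.MultiWorkerMirroredStrategy()
    with strategy.scope():                  # variables replicated across workers
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(32,)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    x = tf.random.normal((1024, 32))
    y = tf.random.normal((1024, 1))
    model.fit(x, y, epochs=2, batch_size=64)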

Paper Nr: 14
Title:

A Holistic Machine Learning-based Autoscaling Approach for Microservice Applications

Authors:

Alireza Goli, Nima Mahmoudi, Hamzeh Khazaei and Omid Ardakanian

Abstract: Microservice architecture is the mainstream pattern for developing large-scale cloud applications as it allows for scaling application components on demand and independently. By designing and utilizing autoscalers for microservice applications, it is possible to improve their availability and reduce the cost when the traffic load is low. In this paper, we propose a novel predictive autoscaling approach for microservice applications which leverages machine learning models to predict the number of required replicas for each microservice and the effect of scaling a microservice on other microservices under a given workload. Our experimental results show that the proposed approach in this work offers better performance in terms of response time and throughput than HPA, the state-of-the-art autoscaler in the industry, and it takes fewer actions to maintain a desirable performance and quality of service level for the target application.

Paper Nr: 38
Title:

From Serverful to Serverless: A Spectrum of Patterns for Hosting Application Components

Authors:

Vladimir Yussupov, Jacopo Soldani, Uwe Breitenbücher, Antonio Brogi and Frank Leymann

Abstract: The diversity of available cloud service models yields multiple hosting variants for application components. Moreover, the overall trend of reducing control over the infrastructure and scaling configuration makes it non-trivial to decide which hosting variant suits a certain software component best. In this work, we introduce a spectrum of component hosting patterns that covers various combinations of management responsibilities related to (i) the deployment stack required by a given component as well as (ii) the required infrastructure resources and the component’s scaling rules. We validate the presented patterns by identifying and showing at least three real-world occurrences of each pattern, following the well-known Rule of Three.

Paper Nr: 43
Title:

An Approach to Reduce Network Effects in an Industrial Control and Edge Computing Scenario

Authors:

Rômulo L. V. de Omena, Danilo S. Santos and Angelo Perkusich

Abstract: The cloud-based nature of Industry 4.0 enhances its flexibility and scalability features. To support time-sensitive and mission-critical applications, for which low latency and fast response are essential requirements, cloud computing resources should usually be placed closer to the industrial site. The Edge Computing concept combined with next-generation networks, such as 5G, may fulfill those requirements. This paper presents an experimental system setup that combines a Model Predictive Control approach with a compensation strategy to mitigate network delay and packet loss. The experimental system has two sides, namely the edge and the local side. The former executes the controller and connects to the local side through a network. The latter is attached to the application and has lower computing capabilities. In our setup, the application under control is a two-wheeled mobile robot, which could act as an Automated Guided Vehicle. We defined two control objectives, Point Stabilization and Trajectory Tracking, which were run under distinct network conditions, including delays and packet losses. These control objectives are only validation scenarios of the proposed approach and could be replaced by a path planner for a real use case. The obtained results suggest that the approach is valid.

Paper Nr: 44
Title:

Cross-Component Issue Metamodel and Modelling Language

Authors:

Sandro Speth, Steffen Becker and Uwe Breitenbücher

Abstract: Software systems are often built out of distributed components developed by independent teams. As a result, issues of these components, such as bugs or feature requests, are typically managed in separate, isolated issue management systems. Consequently, it is hard to keep an overview of issues that affect issues of other components. Managing issues in a component-specific scope comes with significant problems in the development process, since managing such cross-component issues is error-prone and time-consuming. Therefore, the cross-component issue management system Gropius was developed in previous work: a tool for integrated cross-component issue management that acts as a wrapper across the independent components’ issue management systems. This paper introduces the underlying metamodel of Gropius in detail and presents the graphical modelling language implemented by Gropius.

Paper Nr: 10
Title:

Specialized Network Self-configuration: An Approach using Self-Organizing Networks Architecture (SONAr)

Authors:

Daniel C. Oliveira, Maurício A. Gonçalves, Natal S. Neto, Flávio O. Silva and Pedro F. Rosa

Abstract: The increasing complexity of computer networks constantly calls for new network automation models. These are mostly based on Software-Defined Networks (SDN) and Network Functions Virtualization (NFV), implementing the so-called self-* features within the concept of Self-Organizing Networks (SON), which include self-configuration. Regarding this matter, some models have already been proposed at a higher level of definition, but there is still a lot of practical work to be done. In this paper, we present the modeling, implementation and performance of a network self-configuration system triggered when a new edge element, which demands a specialized type of configuration, is inserted. This is a typical case in a telecom network, in which edge network elements (NEs) usually require a very specific set of network parameters to be configured. The system recognizes the element and applies this specific network segment parameterization, acting as a specialized “plug-and-play” feature. The approach is based on the Self-Organizing Networks Architecture (SONAr) framework, which proved very suitable for this functionality.

Area 8 - Cloud Computing Enabling Technology

Full Papers
Paper Nr: 6
Title:

SimFaaS: A Performance Simulator for Serverless Computing Platforms

Authors:

Nima Mahmoudi and Hamzeh Khazaei

Abstract: Developing accurate and extendable performance models for serverless platforms, aka Function-as-a-Service (FaaS) platforms, is a very challenging task. Also, implementation and experimentation on real serverless platforms is both costly and time-consuming. However, at the moment, there is no comprehensive simulation tool or framework to be used instead of the real platform. As a result, in this paper, we fill this gap by proposing a simulation platform, called SimFaaS, which helps serverless application developers develop Function-as-a-Service applications optimized in terms of cost and performance. On the other hand, SimFaaS can be leveraged by FaaS providers to tailor their platforms to be workload-aware so that they can increase profit and quality of service at the same time. Also, serverless platform providers can evaluate new designs, implementations, and deployments on SimFaaS in a timely and cost-efficient manner. SimFaaS is open-source, well-documented, and publicly available, making it easily usable and extendable to incorporate more use case scenarios in the future. Besides, it provides performance engineers with a set of tools that can calculate several characteristics of serverless platform internal states, which are otherwise hard (mostly impossible) to extract from real platforms. In previous studies, temporal and steady-state performance models for serverless computing platforms have been developed. However, those models are limited to Markovian processes. We designed SimFaaS as a tool that can help overcome such limitations for performance and cost prediction in serverless computing. We show how SimFaaS facilitates the prediction of essential performance metrics such as average response time, probability of cold start, and the average number of instances reflecting the infrastructure cost incurred by the serverless computing provider. We evaluate the accuracy and applicability of SimFaaS by comparing the prediction results with real-world traces from AWS Lambda.
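
The toy Monte Carlo sketch below (not SimFaaS itself) estimates one of the metrics mentioned, the probability of a cold start, for a single function instance with negligible service time, Poisson arrivals and a fixed keep-warm expiration window; all parameters are hypothetical.

    # Cold-start probability under Poisson arrivals and a keep-warm window (toy model).
    import random

    def cold_start_probability(arrival_rate, expiration_s, n_requests=100_000, seed=1):
        random.seed(seed)
        warm_until, t, cold = -1.0, 0.0, 0
        for _ in range(n_requests):
            t += random.expovariate(arrival_rate)   # exponential inter-arrival time
            if t > warm_until:
                cold += 1                           # no warm instance left
            warm_until = t + expiration_s           # instance kept warm after serving
        return cold / n_requests

    for rate in (0.001, 0.01, 0.1):                 # requests per second
        print(f"arrival rate {rate}/s -> cold start prob "
              f"{cold_start_probability(rate, expiration_s=600):.3f}")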

Paper Nr: 18
Title:

AppArmor Profile Generator as a Cloud Service

Authors:

Hui Zhu and Christian Gehrmann

Abstract: Along with the rapid development of containerization technology, remarkable benefits have been created for developers, operation teams, and the overall software infrastructure. Although much effort has been devoted to enhancing containerization security, containerized environments still have a huge attack surface. This paper proposes a secure cloud service for generating profiles for AppArmor, a Linux security module, for containerized services. The profile generator service implements container runtime profiling to apply customized AppArmor policies that protect containerized services without the need for difficult and potentially error-prone manual policy configuration. To evaluate the effectiveness of the profile generator service, we enable it on a widely used containerized web service to generate profiles and test them with real-world attacks. We build an exploit database with 11 exploits harmful to the tested web service. These exploits are sifted from the 56 exploits in Exploit-DB targeting the tested web service’s software. We launch these exploits on the web service protected by the profile. The results show that the proposed profile generator service considerably improves the tested web service’s overall security compared to using the default Docker security profile.
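
A simplified sketch of the underlying idea (not the paper's generator): rendering an AppArmor profile from file-access observations collected during container runtime profiling. The profile name, paths and permissions are hypothetical examples, and a real generated profile would also cover capabilities and finer-grained rules.

    # Rendering a minimal AppArmor profile from observed accesses (illustrative only).
    OBSERVED_ACCESSES = [
        ("/usr/sbin/nginx", "rix"),        # executable mapped and executed
        ("/etc/nginx/**", "r"),            # configuration read
        ("/var/log/nginx/**", "w"),        # logs written
        ("/run/nginx.pid", "rw"),
    ]

    def render_profile(name, accesses):
        lines = [
            "#include <tunables/global>",
            f"profile {name} flags=(attach_disconnected,mediate_deleted) {{",
            "  #include <abstractions/base>",
            "  network inet tcp,",
        ]
        lines += [f"  {path} {perms}," for path, perms in accesses]
        lines.append("  deny /etc/shadow rwklx,")
        lines.append("}")
        return "\n".join(lines)

    print(render_profile("docker-nginx-generated", OBSERVED_ACCESSES))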

Paper Nr: 26
Title:

RT-MongoDB: A NoSQL Database with Differentiated Performance

Authors:

Remo Andreoli, Tommaso Cucinotta and Dino Pedreschi

Abstract: The advent of Cloud Computing and Big Data brought several changes and innovations to the landscape of database management systems. Nowadays, a cloud-friendly storage system is required to reliably support data that is in continuous motion and of previously unthinkable magnitude, while guaranteeing high availability and optimal performance to thousands of clients. In particular, NoSQL database services are gaining momentum as a key technology thanks to their relaxed requirements with respect to their relational counterparts, which are not designed to scale massively on distributed systems. Most research papers on the performance of cloud storage systems propose solutions that aim to achieve the highest possible throughput, while neglecting the problem of controlling the response latency for specific users or queries. The latter research topic is particularly important for distributed real-time applications, where task completion is bounded by precise timing constraints. In this paper, the popular MongoDB NoSQL database software is modified by introducing a per-client/request prioritization mechanism within the request processing engine, allowing for better control of the temporal interference among competing requests with different priorities. Extensive experimentation with synthetic stress workloads demonstrates that the proposed solution is able to ensure differentiated per-client/request performance in a shared MongoDB instance. Namely, requests with higher priorities achieve reduced and significantly more stable response times with respect to lower-priority ones. This constitutes a basic but fundamental brick in providing assured performance to distributed real-time applications making use of NoSQL database services.
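
As a conceptual sketch of per-client/request prioritisation (not RT-MongoDB's actual changes to the MongoDB request processing engine), the snippet below serves queued requests by priority instead of arrival order, with FIFO ordering among equal priorities; client names and queries are made up.

    # Priority-ordered request queue (illustrative only).
    import heapq, itertools

    class PriorityRequestQueue:
        """Lower priority number = served first; FIFO among equal priorities."""
        def __init__(self):
            self._heap, self._counter = [], itertools.count()

        def submit(self, priority, client, query):
            heapq.heappush(self._heap, (priority, next(self._counter), client, query))

        def next_request(self):
            priority, _, client, query = heapq.heappop(self._heap)
            return client, query, priority

    q = PriorityRequestQueue()
    q.submit(5, "batch-analytics", "aggregate(...)")
    q.submit(1, "realtime-dashboard", "find({'sensor': 42})")
    q.submit(5, "batch-analytics", "find({})")
    for _ in range(3):
        print(q.next_request())    # the high-priority dashboard request is served first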

Short Papers
Paper Nr: 8
Title:

Find the Way in the Jungle of Quality of Service in Industrial Cloud: A Systematic Mapping Study

Authors:

Malvina Latifaj, Federico Ciccozzi and Séverine Sentilles

Abstract: The rapid development of Industry 4.0 and Industrial Cyber-Physical Systems is leading to the exponential growth of unprocessed volumes of data. Industrial cloud computing has great potential for providing the resources for processing this data. To be widely adopted, the cloud must ensure satisfactory levels of Quality of Service (QoS). However, the lack of a standardized model of quality attributes hinders the assessment of QoS levels. This paper provides a comprehensive, systematically derived map of current research trends, results, and gaps in quality attributes and QoS in industrial cloud computing. An extract of the main insights is as follows: (i) the adoption of cloud technologies is closely related to performance indicators; however, other quality attributes, such as security, are not considered as much as they should be; (ii) solutions are most often not tailored to specific industrial application domains; (iii) research largely focuses on providing solutions without solid validation, making them unsuitable for effective and fruitful technology transfer.

Paper Nr: 12
Title:

Automating the Deployment of Distributed Applications by Combining Multiple Deployment Technologies

Authors:

Michael Wurster, Uwe Breitenbücher, Antonio Brogi, Felix Diez, Frank Leymann, Jacopo Soldani and Karoline Wild

Abstract: Various deployment technologies have been released to support automating the deployment of distributed applications. Although many of these technologies provide general-purpose functionality to deploy applications as well as infrastructure components, different technologies provide specific capabilities making them suited for different environments and application types. As a result, the deployment of complex distributed applications often requires combining several deployment technologies expressed by different deployment models. Thus, multiple deployment models are processed by different technologies and must either be orchestrated manually or the automated orchestration must be developed individually. To address these challenges, we present an approach (i) to annotate parts of a holistic deployment model that should be deployed with different deployment technologies, (ii) to automatically transform an annotated model into multiple technology-specific models for different technologies, and (iii) to automatically coordinate the deployment execution with different technologies by employing a centralized orchestrator component. To prove the practical feasibility of the approach, we describe a case study based on a third-party application.

Paper Nr: 21
Title:

Comparative Performance Study of Lightweight Hypervisors Used in Container Environment

Authors:

Guoqing Li, Keichi Takahashi, Kohei Ichikawa, Hajimu Iida, Pree Thiengburanathum and Passakorn Phannachitta

Abstract: Virtual Machines (VMs) are used extensively in cloud computing. The underlying hypervisor allows hardware resources to be split into multiple virtual units, which enhances resource utilization. However, VMs with a traditional architecture introduce heavy overhead and reduce application performance. Containers have been introduced to overcome this drawback, yet such a solution raises security concerns due to poor isolation. Lightweight hypervisors have been leveraged to strike a balance between performance and isolation. However, there has been no comprehensive performance comparison among them. To identify the best-fit use case, we investigate the performance characteristics of Docker containers, Kata Containers, gVisor, Firecracker and QEMU/KVM by measuring their performance on disk storage, main memory, CPU, network, system calls and startup time. In addition, we evaluate their performance when running the Nginx web server and the MySQL database management system. We use QEMU/KVM as an example of a traditional VM, Docker as the standard container, and the rest as representatives of lightweight hypervisors. We compare and analyze the benchmarking results, discuss the possible implications, explain the trade-offs each organization made, and elaborate on the pros and cons of each architecture.

Paper Nr: 31
Title:

Sit Here: Placing Virtual Machines Securely in Cloud Environments

Authors:

Mansour Aldawood, Arshad Jhumka and Suhaib A. Fahmy

Abstract: A Cloud Computing Environment (CCE) leverages the advantages offered by virtualisation to enable virtual machines (VMs) within the same physical machine (PM) to share physical resources. Cloud service providers (CSPs) accommodate the fluctuating resource demands of cloud users dynamically through elastic resource provisioning. CSPs use VM allocation techniques such as VM placement and VM migration to optimise the use of shared physical resources in the CCE. However, these techniques are exposed to potential security threats that can lead to the problem of malicious co-residency between VMs. This threat arises when a malicious VM is co-located with a critical (or target) VM on the same PM. Hence, the VM allocation techniques need to be made secure. While earlier works propose specific solutions to address this malicious co-residency problem, our work investigates the allocation patterns that are more likely to lead to a secure allocation. Furthermore, we introduce a security-aware VM allocation algorithm (SRS) that aims to allocate VMs securely, reducing the potential for co-residency between malicious and target VMs. Our study shows: (i) our SRS algorithm outperforms all state-of-the-art allocation algorithms, and (ii) algorithms that adopt stacking-based behaviours are more likely to return secure allocations than those with spreading or random behaviours.
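
The sketch below is an illustrative stacking-based placement heuristic, not the paper's SRS algorithm: a new VM is packed onto the fullest physical machine that can still host it, the behaviour the study associates with more secure allocations; capacities and VM sizes are hypothetical.

    # Stacking-based VM placement heuristic (illustrative only).
    def place_stacking(pms, vm_cores):
        """pms: list of dicts with 'capacity' and 'used' cores; returns the chosen PM or None."""
        candidates = [p for p in pms if p["capacity"] - p["used"] >= vm_cores]
        if not candidates:
            return None                                    # would require opening a new PM
        target = max(candidates, key=lambda p: p["used"])  # fullest feasible PM
        target["used"] += vm_cores
        return target

    pms = [{"name": "pm1", "capacity": 16, "used": 10},
           {"name": "pm2", "capacity": 16, "used": 4},
           {"name": "pm3", "capacity": 16, "used": 0}]
    for vm in (4, 2, 8):
        chosen = place_stacking(pms, vm)
        print(f"VM({vm} cores) -> {chosen['name'] if chosen else 'new PM needed'}")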

Paper Nr: 42
Title:

Container Allocation and Deallocation Traceability using Docker Swarm with Consortium Hyperledger Blockchain

Authors:

Marco A. Marques, Charles C. Miers and Marcos A. Simplicio Jr.

Abstract: Container-based virtualization enables the dynamic allocation of computational resources, thus addressing needs like scalability and fault tolerance. However, this added flexibility brought by containerization comes with a drawback: it makes system monitoring more challenging due to the large flow of calls and (de)allocations. In this article, we discuss how recording these operations in a blockchain-based data structure can facilitate auditing of employed resources, as well as analyses involving the chronology of performed operations. In addition, the use of a blockchain distributes the credibility of record integrity among providers, end-users, and developers of the container-based solution.
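
As a conceptual sketch of the core property (a tamper-evident, hash-chained record of container allocation and deallocation events), not the consortium Hyperledger network used in the paper, the snippet below links each event to the previous one by hash; the event fields are illustrative.

    # Hash-chained log of container (de)allocation events (illustrative only).
    import hashlib, json, time

    def append_block(chain, event):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        payload = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
        payload["hash"] = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        chain.append(payload)

    chain = []
    append_block(chain, {"action": "allocate", "service": "web", "task": "web.1", "node": "swarm-2"})
    append_block(chain, {"action": "deallocate", "service": "web", "task": "web.1", "node": "swarm-2"})

    # Modifying an earlier event would break every subsequent prev_hash link.
    for i, block in enumerate(chain):
        print(i, block["event"]["action"], block["hash"][:12])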

Paper Nr: 15
Title:

Functionalities, Challenges and Enablers for a Generalized FaaS based Architecture as the Realizer of Cloud/Edge Continuum Interplay

Authors:

George Kousiouris and Dimosthenis Kyriazis

Abstract: The availability of decentralized edge computing locations, as well as their combination with more centralized cloud solutions, enables the investigation of various trade-offs for application component placement in order to optimize application behaviour and resource usage. In this paper, the goal is to investigate key functionalities and operations needed by a middleware layer so that it can serve as a generalized architectural and computing framework in the implementation of a Cloud/Edge computing continuum. As a primary candidate, FaaS frameworks are taken into consideration, given their significant benefits such as flexibility in execution, maturity of the underlying tools, event-driven nature and the ability to incorporate arbitrary and legacy application components triggered by diverse actions and rules. Related work, gaps and enablers for three different layers (application design and implementation, semantically enriched runtime adaptation/configuration, and deployment optimization) are highlighted. These aid in detecting the necessary building blocks of a proposed generalized architecture in order to enclose the needed functionalities, covering aspects such as diverse service environments and links with the underlying platforms for orchestration, dynamic configuration, deployment and operation.

Paper Nr: 39
Title:

Structural Coupling for Microservices

Authors:

Sebastiano Panichella, Mohammad I. Rahman and Davide Taibi

Abstract: Cloud-native applications are “distributed, elastic and horizontal-scalable systems composed of (micro)services which isolates states in a minimum of stateful components”. Hence, an important property is to ensure low coupling and high cohesion among the (micro)services composing the cloud-native application. Loosely coupled and highly cohesive services allow development teams to work in parallel, reducing the communication overhead between teams. However, despite both practitioners’ and researchers’ agreement on the importance of this general property, there are no validated metrics to effectively measure or test the actual coupling level between services. In this work, we propose ways to compute and visualize the coupling between microservices by extending and adapting the concepts behind the computation of traditional structural coupling. We validate these measures with a case study involving 17 open-source projects and provide an automatic approach to measure them. The results of this study highlight how these metrics provide practitioners with a quantitative and visual view of service compositions, which can be useful to conceive advanced systems to monitor the evolution of services.
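
The toy computation below only illustrates the general idea of deriving a coupling indicator from a service-to-service dependency graph; it is not the paper's structural coupling metric, and the service names and edges are hypothetical.

    # Naive per-service coupling score from a dependency graph (illustrative only).
    CALLS = {
        "frontend": {"orders", "catalog", "users"},
        "orders":   {"users", "payments"},
        "catalog":  set(),
        "users":    set(),
        "payments": {"users"},
    }

    def coupling(service):
        """Outgoing dependencies divided by the number of other services."""
        others = len(CALLS) - 1
        return len(CALLS[service]) / others if others else 0.0

    for svc in sorted(CALLS):
        print(f"{svc:9s} coupling = {coupling(svc):.2f}")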

Paper Nr: 47
Title:

Development of Wireless Sensor Networks Applications with State-based Orchestration

Authors:

Alexandre R. Ordakowski, Marco A. Carrero and Carmem S. Hara

Abstract: The growing demand for sensor devices, key elements of cyber-physical systems and the Internet of Things, requires the fast development of new applications. However, the specification and implementation of such systems are complicated tasks, especially because of the lack of support for code reuse and for defining the program execution flow. Service orchestration is a technique that has been widely adopted for developing applications for the cloud. In this paper, we propose a similar technique for developing applications for Wireless Sensor Networks (WSN). To this end, we propose a development model based on reusable software components for WSN applications. For the orchestration of components, which defines the application execution flow, we propose a domain-specific language called SLEDS-SD (State Machine-based Language for Sensor Devices). In its current implementation, SLEDS-SD generates nesC code, which can be installed on TinyOS-based devices. The evaluation involved the development of three cluster-based WSN models. The efficiency of the proposal was evaluated by determining the amount of code reuse, while its efficacy was evaluated by the correctness of the generated code. For that, we compared the behavior of the generated programs with the behavior reported in previous studies.