CLOSER 2024 Abstracts


Area 1 - Cloud Computing Fundamentals

Full Papers
Paper Nr: 16
Title:

Feather: Lightweight Container Alternatives for Deploying Workloads in the Edge

Authors:

Tom Goethals, Maxim De Clercq, Merlijn Sebrechts, Filip De Turck and Bruno Volckaert

Abstract: Recent years have seen the adoption of workload orchestration into the network edge. Cloud orchestrators such as Kubernetes have been extended to edge computing, providing the virtual infrastructure to efficiently manage containerized workloads across the edge-cloud continuum. However, cloud-based orchestrators are resource intensive, sometimes occupying the bulk of resources of an edge device even when idle. While various Kubernetes-based solutions, such as K3s and KubeEdge, have been developed with a specific focus on edge computing, they remain limited to container runtimes. This paper proposes a Kubernetes-compatible solution for edge workload packaging, distribution, and execution, named Feather, which extends edge workloads beyond containers. Feather is based on Virtual Kubelets, superseding previous work from FLEDGE. It is capable of operating in existing Kubernetes clusters, with minimal, optional additions to the Kubernetes PodSpec to enable multi-runtime images and execution. Both Containerd and OSv unikernel backends are implemented, and evaluations show that unikernel workloads can be executed highly efficiently, with a memory reduction of up to 20% for Java applications at the cost of up to 25% CPU power. Evaluations also show that Feather itself is suitable for most modern edge devices, with the x86 version only requiring 58-62 MiB of memory for the agent itself.

Paper Nr: 24
Title:

Tail-Latency Aware and Resource-Efficient Bin Pack Autoscaling for Distributed Event Queues

Authors:

Mazen Ezzeddine, Françoise Baude and Fabrice Huet

Abstract: Distributed event queues are currently the backbone of many large-scale real-time cloud applications, including smart grids, intelligent transportation, and health care monitoring. Applications (event consumers) that process events from a distributed event queue are latency-sensitive: they require that a high percentile of events be served in less than a desired latency. Meeting such a latency target must be accomplished at low cost in terms of the resources used. In this research, we first express the problem of resource-efficient and latency-aware event consumption from distributed event queues as a bin-packing problem. This bin packing depends on the arrival rate of events, the number of events in the queue backlog, and the maximum consumption rate of the event consumers. We show that the proposed bin-packing solution outperforms a linear autoscaling solution by 3.5% up to 10% in terms of latency SLA. Furthermore, we discuss how dynamic provisioning of event consumers in distributed event queues necessitates a blocking synchronization protocol, and we show that this protocol conflicts with meeting a desired latency for a high percentile of events. Hence, we propose an extension to the bin-packing autoscaler logic to reduce the tail latency caused by the events accumulated during the blocking synchronization protocol.
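The bin-pack view described in this abstract can be illustrated with a first-fit-decreasing heuristic. The `required_rate` formula, the capacity model, and all names below are our own illustrative assumptions, not the authors' autoscaler:

```python
def required_rate(arrival_rate, backlog, latency_target):
    # Events/s a partition needs served: sustain arrivals and drain
    # the current backlog within the latency target (assumed model).
    return arrival_rate + backlog / latency_target

def pack_consumers(partition_rates, max_consumption_rate):
    """First-fit-decreasing bin packing: each consumer is a bin whose
    capacity is its maximum consumption rate; partitions are items."""
    remaining = []   # spare capacity of each provisioned consumer
    assignment = []  # partition indices handled by each consumer
    for i in sorted(range(len(partition_rates)),
                    key=lambda i: partition_rates[i], reverse=True):
        rate = partition_rates[i]
        for c, free in enumerate(remaining):
            if rate <= free:
                remaining[c] -= rate
                assignment[c].append(i)
                break
        else:  # no existing consumer fits: scale out by one
            remaining.append(max_consumption_rate - rate)
            assignment.append([i])
    return assignment

rates = [required_rate(30, 120, 2.0) for _ in range(4)]  # 90 events/s each
print(len(pack_consumers(rates, max_consumption_rate=100)))  # 4 consumers
```

The number of bins opened is the consumer count the autoscaler would provision; a linear autoscaler, by contrast, ignores per-partition packing.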

Paper Nr: 41
Title:

Optimizing Service Placement in Edge-to-Cloud AR/VR Systems Using a Multi-Objective Genetic Algorithm

Authors:

Mohammadsadeq G. Herabad, Javid Taheri, Bestoun S. Ahmed and Calin Curescu

Abstract: Augmented Reality (AR) and Virtual Reality (VR) systems involve computationally intensive image processing algorithms that can burden end-devices with limited resources, leading to poor performance in providing low-latency services. Edge-to-cloud computing overcomes the limitations of end-devices by offloading their computations to nearby edge devices or remote cloud servers. Although this proves sufficient for many applications, the optimal placement of latency-sensitive AR/VR services in edge-to-cloud infrastructures (to provide desirable service response times and reliability) remains a formidable challenge. To address this challenge, this paper develops a Multi-Objective Genetic Algorithm (MOGA) to optimize the placement of AR/VR-based services in multi-tier edge-to-cloud environments. The primary objective of the proposed MOGA is to minimize the response time of all running services while maximizing the reliability of the underlying system from both software and hardware perspectives. To evaluate its performance, we mathematically modeled all components and developed a tailor-made simulator to assess its effectiveness at various scales. MOGA was compared with several heuristics to show that intuitive solutions, which are usually assumed sufficient, are not efficient enough for the stated problem. The experimental results indicate that MOGA can significantly reduce the response time of deployed services, by an average of 67% across different scales, compared to other heuristic methods. MOGA also ensures 97% reliability of the infrastructure (hardware) and 95% reliability of services (software).
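For intuition, the selection primitive at the heart of any multi-objective GA is Pareto dominance over the stated objectives (response time down, reliability up). This is a generic sketch with invented objective values, not the paper's MOGA:

```python
def dominates(a, b):
    """True if placement a Pareto-dominates b, where each is a
    (response_time, reliability) pair: lower response time and
    higher reliability are better."""
    no_worse = a[0] <= b[0] and a[1] >= b[1]
    better = a[0] < b[0] or a[1] > b[1]
    return no_worse and better

def pareto_front(population):
    # Placements not dominated by any other candidate.
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Invented (response_time_ms, reliability) pairs for four placements:
candidates = [(120, 0.95), (100, 0.97), (100, 0.90), (150, 0.99)]
print(pareto_front(candidates))  # [(100, 0.97), (150, 0.99)]
```

A MOGA repeatedly breeds candidates and keeps non-dominated ones, so the front above is the kind of trade-off set the optimizer returns.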

Paper Nr: 57
Title:

Load-Aware Container Orchestration on Kubernetes Clusters

Authors:

Angelo Marchese and Orazio Tomarchio

Abstract: Microservice Architecture is quickly becoming popular for building extensive applications designed for deployment in dispersed and resource-constrained cloud-to-edge computing settings. As a cloud-native technology, the real strength of microservices lies in their loosely coupled, autonomously deployable, and scalable nature, facilitating distributed deployment and flexible integration from powerful cloud data centers to heterogeneous and often constrained edge nodes. Hence, there is a need to devise innovative placement algorithms that leverage these microservice features to enhance application performance. To address these issues, we propose extending Kubernetes with a load-aware orchestration strategy, enhancing its capability to deploy microservice applications within shared clusters characterized by dynamic resource usage patterns. Our approach dynamically orchestrates applications based on runtime resource usage, continuously adjusting their placement. The results, obtained by evaluating a prototype of our system in a testbed environment, show significant advantages over the vanilla Kubernetes scheduler.

Paper Nr: 61
Title:

Prediction of Resource Utilisation in Cloud Computing Using Machine Learning

Authors:

Ruksar Shaikh, Cristina H. Muntean and Shaguna Gupta

Abstract: In today’s computing infrastructure, cloud computing has emerged as a pivotal paradigm that offers the scalability and flexibility needed to satisfy the demands of a wide variety of applications. Maintaining optimal performance and cost-effectiveness inside cloud settings remains a significant problem, and one of the most important challenges is efficient resource utilisation. A resource utilisation prediction system is required to aid the resource allocator in providing optimal resource allocation, yet accurate prediction is difficult given how dynamically resource utilisation varies. This research project focuses on applying machine learning techniques to predict resource utilisation in cloud computing systems. The GWA-T-12 Bitbrains dataset provides timestamp, CPU usage, and network transmitted throughput data, while Microsoft Azure traces provide the CPU usage of a cloud server. To predict VM workloads based on CPU utilisation, machine learning models such as Linear Regression, Decision Tree Regression, Gradient Boosting Regression, and Support Vector Regression are used. In addition, deep learning models such as Long Short-Term Memory and Bi-directional Long Short-Term Memory have also been evaluated in our approach. The Bi-directional Long Short-Term Memory approach proved more effective than the other models for CPU utilisation and network transmitted throughput, as its R2 score is close to 1 and it can hence produce more accurate results.

Short Papers
Paper Nr: 34
Title:

Balancing Performance and Aging in Cloud Environments

Authors:

Thiago Gonçalves, Antonio S. Beck and Arthur F. Lorenzon

Abstract: As the number of cores per chip increases, cloud servers become more capable of effectively handling multiple requests simultaneously. However, they may present unexpected temperature-related challenges that accelerate aging, causing errors or malfunctions. Moreover, because of process variability, temperature will vary even for identical cores running at the same operating frequency in the processor. In this scenario, we propose EquiLifeCM, a framework designed to maximize the lifespan of cloud machines. Given the system’s current status and applications’ behavior, EquiLifeCM automatically allocates workloads across cores from different cloud machines and applies frequency scaling considering core variability.

Paper Nr: 37
Title:

Towards a General Metric for Energy Efficiency in Cloud Computing Data Centres: A Proposal for Extending of the ISO/IEC 30134-4

Authors:

Carlos Juiz, Belen Bermejo, Alejandro Fernández-Montes and Damián Fernández-Cerero

Abstract: For some years now, energy efficiency has been one of the main concerns of cloud system administrators. To improve it, standards such as ISO/IEC 30134-4 and ISO/IEC 21836 have emerged in recent years. Both standards focus on the evaluation of physical servers, taking into account the power consumed and the maximum performance peak while running a SPEC benchmark. Server consolidation through virtualization, one of the most widely applied techniques to improve energy efficiency in cloud data centres, is thus not considered in these standards. This work proposes an extension of the methodology defined in these standards to measure energy efficiency in consolidated servers. Real experimentation demonstrates that the proposed general methodology accounts for server consolidation in any type of virtualization environment. This methodology helps system administrators manage cloud data centres and servers more efficiently.

Paper Nr: 46
Title:

On Maintainability and Microservice Dependencies: How Do Changes Propagate?

Authors:

Tomas Cerny, Md H. Chy, Amr S. Abdelfattah, Jacopo Soldani and Justus Bogner

Abstract: Modern software systems evolve rapidly, especially when boosted by continuous integration and delivery. While many tools exist to help manage the maintainability of monolithic systems, gaps remain in assessing changes in decentralized systems, such as those based on microservices. Microservices fuel cloud-native systems, the mainstream direction for most enterprise solutions, which drives motivation for a broader understanding of how changes propagate through such systems. This position paper elaborates on the role of dependencies when dealing with evolution challenges in microservices, aiming to support maintainability. It highlights the importance of dependency management in the context of maintainability deterioration. Our proposed perspective refines the approach to maintainability assurance by focusing on the systematic management of dependencies as a more direct method for addressing and understanding change propagation pathways, compared to traditional methods that often only address symptoms like anti-patterns, smells, metrics, or high-level concepts.

Paper Nr: 51
Title:

A Logic Programming Approach to VM Placement

Authors:

Remo Andreoli, Stefano Forti, Luigi Pannocchi, Tommaso Cucinotta and Antonio Brogi

Abstract: Placing virtual machines so as to minimize the number of used physical hosts is a critically important problem in cloud computing and next-generation virtualized networks. This article proposes a declarative reasoning methodology, and its open-source prototype, including four heuristic strategies to tackle this problem. Our proposal is extensively assessed over real data from an industrial case study and compared to state-of-the-art approaches, both in terms of execution times and solution optimality. As a result, our declarative approach determines placements that are only 6% away from optimal, outperforming a state-of-the-art genetic algorithm in terms of execution times and a first-fit search in terms of the optimality of found placements. Last, pipelining it with a mathematical programming solution improves the execution times of the latter by one order of magnitude on average, compared to using a genetic algorithm as a primer.
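As a toy illustration of stating this placement problem declaratively, the sketch below spells out the constraints (every VM on exactly one host, no host over capacity) and searches assignments exhaustively. It is our own single-resource simplification, unrelated to the authors' prototype:

```python
from itertools import product

def min_hosts_exact(vm_demands, host_capacity):
    """Smallest k such that the VMs fit on k hosts, found by checking
    every assignment against the capacity constraint (exponential, so
    only viable for tiny instances)."""
    n = len(vm_demands)
    for k in range(1, n + 1):
        for assign in product(range(k), repeat=n):
            loads = [0] * k
            for demand, host in zip(vm_demands, assign):
                loads[host] += demand
            if all(load <= host_capacity for load in loads):
                return k, assign
    return None

print(min_hosts_exact([4, 3, 5, 2, 6], host_capacity=8)[0])  # 3 hosts
```

Heuristics such as first-fit trade this optimality guarantee for speed, which is exactly the gap the abstract quantifies.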

Area 2 - Cloud Security and Privacy

Full Papers
Paper Nr: 15
Title:

On Detecting Malicious Code Injection by Monitoring Multi-Level Container Activities

Authors:

Md. H. Bhuiyan, Souvik Das, Shafayat H. Majumder, Suryadipta Majumdar and Md. S. Hossain

Abstract: In recent years, cloud-native applications have been widely hosted and managed in containerized environments due to their unique benefits, such as being lightweight, portable, and cost-efficient. Their growing popularity makes them a common subject of cyberthreats, as evidenced by recent attacks. Many of those attacks take place due to malicious code injection to breach systems and steal sensitive data from a containerized environment. However, existing solutions fail to classify malicious code injection attacks that impact multiple levels (e.g., application and orchestrator). In this paper, we fill in this gap and propose a multi-level monitoring-based approach where we monitor container activities at both the system call level and the container orchestrator (e.g., Kubernetes) level. Specifically, our approach can distinguish between the expected and unexpected behavior of a container from various system call characteristics (e.g., sequence, frequency, etc.) along with the activities through event logs at the orchestrator level to detect malicious code injection attacks. We implement our approach for Kubernetes, a major container orchestrator, and evaluate it against various attack paths outlined by the Cloud Native Computing Foundation (CNCF), an open-source foundation for cloud native computing.

Short Papers
Paper Nr: 14
Title:

ALASCA: Function-Driven Advanced Access Control for Big Cold Data

Authors:

Karl Wolf, Frank Pallas and Sebastian Werner

Abstract: Large datasets collected over a long time and only accessed on an infrequent basis – called Big Cold Data herein – play an important role in a broad variety of data-driven applications. In managing such data and the access to it, implementing advanced access control schemes beyond mere role-based yes/no-decisions becomes decisive, given the often sensitive or personal nature of the data as well as the multitude of regulatory requirements and other constraints applying to it. Current, mostly cloud-based technologies for storing and managing Big Cold Data, however, lack such advanced access control functionalities, such as consent-based transformations, while existing approaches for implementing such functionalities do not pay sufficient regard to the particularities of Big Cold Data to offer efficient access on an infrequent basis. We therefore propose an architecture and framework (ALASCA) following the function-as-a-service (FaaS) paradigm for implementing versatile access services on cloud-managed Big Cold Data. Towards that end, we offer a first characterization of Big Cold Data and raise challenges in access control, specifically in performing custom and infrequent transformations on large heterogeneous datasets. We demonstrate the applicability of ALASCA by implementing and evaluating it for AWS and Google Cloud. Our preliminary evaluation shows the promise and practical applicability of FaaS-based access control, especially for advanced access control schemes to be applied to Big Cold Data.

Paper Nr: 26
Title:

Enhancing SPIFFE/SPIRE Environment with a Nested Security Token Model

Authors:

Henrique Z. Cochak, Milton P. Neto, Charles C. Miers, Marco A. Marques and Marcos A. Simplicio Jr.

Abstract: Within the domains of authentication, authorization, and accounting, vulnerabilities often arise, posing significant challenges due to the interconnectivity and communication among various system components. Addressing these threats, the SPIFFE framework emerges as a robust solution tailored for workload identity management. This work explores solutions for use cases not originally foreseen in the SPIFFE scope, focusing on enhancing security measures, and in particular investigates a novel token model that introduces a nesting concept. This extended token model operates within a SPIRE environment, enabling token nesting with new features such as token tracing with both ephemeral and non-ephemeral keys and the possibility of delegated assertions.

Paper Nr: 27
Title:

Visualizing the Information Security Maturity Level of Public Cloud Services Used by Public Administrations

Authors:

Michael Diener and Thomas Bolz

Abstract: The digitization of public administrations in Germany is making slow progress. At the same time, more and more innovative IT solutions are available on the market for solving practical business problems, e.g. web-based file sharing applications that are offered by external cloud service providers. Due to data protection regulations and uncertainties regarding information security issues, the adoption and operation of public cloud services within public administrations is a challenging task. As part of our research, we constructed a three-phase process model with a web-based tool that supports chief information officers in managing security audits of various public cloud services used by different organizational units. To ensure the efficient, transparent, and comprehensive conduct of cloud security audits, we developed graphical visualization components that illustrate the information security maturity level in relation to multiple security requirements of the analyzed public cloud services. We have successfully evaluated our proposed tool visualization under real conditions within a public administration. Furthermore, we discussed several use cases and the user experience with experts in this application domain.

Paper Nr: 48
Title:

Machine Learning Models with Fault Tree Analysis for Explainable Failure Detection in Cloud Computing

Authors:

Rudolf Hoffmann and Christoph Reich

Abstract: The availability of cloud computing infrastructures relies on many components: software, hardware, the cloud management system (CMS), security, the environment, human operation, and more. When something goes wrong, root cause analysis (RCA) is often complex. This paper explores the integration of Machine Learning (ML) with Fault Tree Analysis (FTA) to enhance explainable failure detection in cloud computing systems. We introduce a framework employing ML for fault tree selection and generation, and for predicting Basic Events (BEs), to enhance the explainability of failure analysis. Our experimental validation focuses on predicting BEs and using these predictions to calculate the Top Event (TE) probability. The results demonstrate improved diagnostic accuracy and reliability, highlighting the potential of combining ML predictions with traditional FTA to identify root causes of failures in cloud computing environments and make failure diagnostics more explainable.
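To make the BE-to-TE step concrete: once per-BE failure probabilities have been predicted, the TE probability follows from the tree's gate structure. The two-gate tree and all probabilities below are hypothetical, standing in for the ML model's predictions:

```python
def and_gate(probs):
    # Failure requires all child events (independence assumed).
    p = 1.0
    for q in probs:
        p *= q
    return p

def or_gate(probs):
    # Failure requires at least one child event: the complement of
    # "all children survive".
    p = 1.0
    for q in probs:
        p *= 1.0 - q
    return 1.0 - p

# Hypothetical tree: TE = OR(hardware, AND(software, cms_misconfig))
be = {"hardware": 0.01, "software": 0.05, "cms_misconfig": 0.10}
top_event = or_gate([be["hardware"],
                     and_gate([be["software"], be["cms_misconfig"]])])
print(round(top_event, 5))  # 0.01495
```

Tracing which gate inputs drive the TE probability is what makes the resulting diagnosis explainable.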

Paper Nr: 52
Title:

Systematic Threat Modelling of High-Performance Computing Systems: The V:HPCCRI Case Study

Authors:

Raffaele Elia, Daniele Granata and Massimiliano Rak

Abstract: High-Performance Computing (HPC) systems play a crucial role in various research and industry tasks, boasting high-intensity computing capacity, high-bandwidth network connections, and extensive storage at each HPC centre. The system’s objectives, coupled with the presence of valuable resources and sensitive data, make it an attractive target for malicious users. Traditionally, HPC systems are considered "trusted", with users having significant rights and limited protective measures in place. Additionally, their heterogeneous nature complicates security efforts. Applying traditional security measures to individual cluster nodes proves insufficient, as it neglects the system’s holistic perspective. To address these challenges, this paper presents a methodology for collecting threats affecting HPC environments from the literature by means of a systematic search. Key contributions of this work include the application of the presented methodology to the HPC domain through the definition of an HPC-specific threat catalogue and, starting from it, the generation of a threat model for a real-world case study: the V:HPCCRI supercomputer.

Paper Nr: 36
Title:

A Methodology for Web Cache Deception Vulnerability Discovery

Authors:

Filippo Berto, Francesco Minetti, Claudio A. Ardagna and Marco Anisetti

Abstract: In recent years, the use of caching techniques in web applications has increased significantly, in line with their expanding user base. The logic of web caches is closely tied to the application logic, and misconfigurations can lead to security risks, including unauthorized access to private information and session hijacking. In this study, we examine Web Cache Deception as a technique for attacking web applications. We develop a solution for discovering vulnerabilities that expands upon and encompasses prior research in the field. We conduct an experimental evaluation of the attack’s efficacy against real-world targets and present a new attack vector via web-client-based email services.

Paper Nr: 39
Title:

Responsible Information Sharing in the Era of Big Data Analytics Facilitating Digital Economy Through the Use of Blockchain Technology and Observing GDPR

Authors:

Vijon Baraku, Iraklis Paraskakis, Simeon Veloudis and Poonam Yadav

Abstract: In the contemporary digital landscape, the intersection of big data analytics, data ownership, and GDPR compliance emerges as a critical arena. This paper proposes a transformative framework to redefine data control, shifting ownership from entities such as Data Controllers to individuals. Central to this innovation is the proposed novel and disruptive concept of the Data Capsule, empowering individuals to effectively own their data and thus dictate the terms and conditions of its usage. The Data Capsule framework draws on ontologies, semantic technologies, and blockchain to homogenise heterogeneous data, automate annotation, enforce governance rules, and ensure transparency. By making individuals the primary custodians of their data, this paper aims to provide privacy, security, and ethical data handling, counteracting the potential pitfalls of profit-driven practices. This paper outlines a comprehensive research plan, provides a state-of-the-art review, and shows how its objectives align with the system workflow.

Paper Nr: 47
Title:

Towards Image-Based Network Traffic Pattern Detection for DDoS Attacks in Cloud Computing Environments: A Comparative Study

Authors:

Azizol Abdullah and Mohamed Aly Bouke

Abstract: With the increasing adoption of cloud computing and the emergence of Industry 4.0, the need for robust intrusion detection mechanisms to safeguard cloud-based systems against Distributed Denial of Service (DDoS) attacks has become more critical than ever. This study presents a comprehensive comparative analysis of traditional Machine Learning (ML) techniques and Deep Learning (DL) for DDoS attack detection in cloud environments. Utilizing the CIC-IDS 2017 dataset, transformed from tabular data into image-based formats for DL model compatibility, we evaluate 27,001 instances of normal traffic and 21,844 DDoS attack instances. We preprocess network traffic data and explore various DL architectures, including Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) networks, Bidirectional LSTM (BLSTM) networks, and Gated Recurrent Unit (GRU) networks. Additionally, we evaluate the performance of DL models against traditional ML algorithms, such as Random Forest (RF), Support Vector Machines (SVM), and Logistic Regression (LR), using standard evaluation metrics. Our results highlight the superior performance of DL models, particularly the BLSTM model, with an accuracy of 96.35%, precision of 97.42%, recall of 94.39%, F1 score of 95.88%, and ROC AUC score of 96.17%. Through in-depth analysis and discussion, we provide insights into the strengths and weaknesses of different intrusion detection mechanisms, such as the higher interpretability of ML models and the superior discriminatory power of DL models. These findings offer valuable guidance for practitioners and researchers in enhancing the security of cloud-based systems against DDoS attacks.
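The tabular-to-image step mentioned in this abstract can be as simple as min-max scaling each flow's features to pixel intensities and reshaping them into a square grid. This is one common encoding, sketched with made-up values; the study's exact transformation may differ:

```python
import math

def row_to_image(features, lo, hi):
    """Min-max scale one flow's feature vector to [0, 255] pixel
    intensities and reshape it into the smallest square grid,
    zero-padding the tail."""
    scaled = [int(255 * (f - l) / (h - l)) if h > l else 0
              for f, l, h in zip(features, lo, hi)]
    side = math.ceil(math.sqrt(len(scaled)))
    scaled += [0] * (side * side - len(scaled))
    return [scaled[i * side:(i + 1) * side] for i in range(side)]

# Three made-up flow features with their min/max ranges -> a 2x2 "image"
print(row_to_image([3.0, 10.0, 0.5], lo=[0, 0, 0], hi=[10, 10, 1]))
```

The resulting grids can then be fed to CNN-style models that expect image inputs.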

Area 3 - Edge Cloud and Fog Computing

Short Papers
Paper Nr: 11
Title:

State-Aware Application Placement in Mobile Edge Clouds

Authors:

Chanh Nguyen, Cristian Klein and Erik Elmroth

Abstract: Placing applications within Mobile Edge Clouds (MEC) poses challenges due to dynamic user mobility. Maintaining optimal Quality of Service may require frequent application migration in response to changing user locations, potentially leading to bandwidth wastage. This paper addresses application placement challenges in MEC environments by developing a comprehensive model covering workloads, applications, and MEC infrastructures. Following this, various costs associated with application operation, including resource utilization, migration overhead, and potential service quality degradation, are systematically formulated. An online application placement algorithm, App EDC Match, inspired by the Gale-Shapley matching algorithm, is introduced to optimize application placement considering these cost factors. Through experiments that employ real mobility traces to simulate workload dynamics, the results demonstrate that the proposed algorithm efficiently determines near-optimal application placements within Edge Data Centers. It achieves total operating costs within 8% of the approximate global optimum attained by an offline precognition algorithm that assumes access to future user locations. Additionally, the proposed placement algorithm effectively mitigates resource scarcity in MEC.
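For readers unfamiliar with Gale-Shapley, the deferred-acceptance skeleton such an algorithm builds on looks like the one-to-one sketch below. In the paper's setting the preference lists would be derived from the formulated cost factors; the names and data here are invented:

```python
def deferred_acceptance(app_prefs, edc_prefs):
    """One-to-one deferred acceptance: applications propose to edge
    data centers (EDCs) in preference order; each EDC tentatively
    keeps its best proposer and rejects the rest."""
    free = list(app_prefs)            # apps still unplaced
    next_choice = {a: 0 for a in app_prefs}
    match = {}                        # edc -> app
    while free:
        app = free.pop(0)
        edc = app_prefs[app][next_choice[app]]
        next_choice[app] += 1
        if edc not in match:
            match[edc] = app
        elif edc_prefs[edc].index(app) < edc_prefs[edc].index(match[edc]):
            free.append(match[edc])   # bump the weaker tenant
            match[edc] = app
        else:
            free.append(app)          # rejected, try the next EDC
    return match

apps = {"a1": ["e1", "e2"], "a2": ["e1", "e2"]}
edcs = {"e1": ["a2", "a1"], "e2": ["a1", "a2"]}
print(deferred_acceptance(apps, edcs))  # {'e1': 'a2', 'e2': 'a1'}
```

The appeal of this family of algorithms for online placement is that it converges to a stable assignment without global search.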

Paper Nr: 20
Title:

Task Offloading in Edge-Cloud Computing Using a Q-Learning Algorithm

Authors:

Somayeh Abdi, Mohammad Ashjaei and Saad Mubeen

Abstract: Task offloading is a prominent problem in edge-cloud computing, as it aims to utilize the limited capacity of fog servers and cloud resources to satisfy the QoS requirements of tasks, such as meeting their deadlines. This paper formulates the task offloading problem as a nonlinear mathematical programming model to maximize the number of independent IoT tasks that meet their deadlines and to minimize the deadline violation time of tasks that cannot meet their deadlines. This paper proposes two Q-learning algorithms to solve the formulated problem. The performance of the proposed algorithms is experimentally evaluated with respect to several algorithms. The evaluation results demonstrate that the proposed Q-learning algorithms perform well in meeting task deadlines and reducing the total deadline violation time.
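The tabular Q-learning core that such an offloader builds on is compact. The state names, action set, and reward shaping below are our illustrative assumptions, not the paper's formulation:

```python
import random

ACTIONS = ["local", "fog", "cloud"]  # assumed offload targets

def choose(Q, state, epsilon=0.2):
    # Epsilon-greedy: explore occasionally, otherwise pick the
    # highest-valued offload target for this state.
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update rule.
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

Q = {}
# Toy transition: the reward could be +1 for a met deadline and minus
# the violation time otherwise, mirroring the stated objectives.
q_update(Q, "high_load", "fog", reward=1.0, next_state="low_load")
print(round(Q[("high_load", "fog")], 3))  # 0.1
```

Repeating choose/update over simulated task arrivals is what lets the agent learn which target meets deadlines for each load state.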

Paper Nr: 50
Title:

Uncertainty Estimation in Multi-Agent Distributed Learning for AI-Enabled Edge Devices

Authors:

Gleb Radchenko and Victoria A. Fill

Abstract: Edge IoT devices, once seen as low-power units with limited processing, have evolved with the introduction of FPGAs and AI accelerators, significantly boosting their computational power for edge AI. This leads to new challenges in optimizing AI for the energy and network resource constraints of edge computing. Our study examines methods for distributed data processing with AI-enabled edge devices to improve collaborative learning. We focus on the challenge of assessing confidence in learning outcomes amid the data variability faced by agents. To address this issue, we investigate the application of Bayesian neural networks, proposing a novel approach to manage uncertainty in distributed learning environments.
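One lightweight way to expose the kind of uncertainty estimate this abstract targets is to read predictive spread off a set of models. Here a plain ensemble stands in for samples from a Bayesian neural network's posterior; everything below is an invented toy, not the authors' method:

```python
from statistics import mean, pstdev

def predictive_uncertainty(models, x):
    """Mean prediction plus spread across model samples; the spread
    serves as the uncertainty attached to the learning outcome."""
    preds = [m(x) for m in models]
    return mean(preds), pstdev(preds)

# Toy stand-in for posterior samples: three linear models whose
# disagreement grows with |x|, mimicking low confidence far from data.
models = [lambda x, w=w: w * x for w in (0.5, 1.0, 1.5)]
mu, sigma = predictive_uncertainty(models, 10.0)
print(mu, round(sigma, 3))  # 10.0 4.082
```

An agent could report (mu, sigma) instead of a point estimate, letting peers weight its contribution by confidence.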

Paper Nr: 56
Title:

Lapse: Latency & Power-Aware Placement of Data Stream Applications on Edge Computing

Authors:

Carlos H. Kayser, Marcos Dias de Assunção and Tiago Ferreto

Abstract: Data Stream Processing (DSP) systems have gained considerable attention in edge computing environments to handle data streams from diverse sources, notably IoT devices, in real-time at the network’s edge. However, their effective utilization concerning end-to-end processing latency, SLA violations, and infrastructure power consumption in heterogeneous environments depends on the adopted placement strategy, posing a significant challenge. This paper introduces Lapse, an innovative cost-based heuristic algorithm specifically crafted to optimize the placement of DSP applications within edge computing environments. Lapse aims to concurrently minimize latency SLA violations and curtail the overall power consumption of the underlying infrastructure. Simulation-driven experiments indicate that Lapse outperforms baseline strategies, substantially reducing the power consumption of the infrastructure by up to 24.42% and SLA violations by up to 75%.

Paper Nr: 25
Title:

IoT Devices Overhead: A Simulation Study of eHealth Solutions over a Hospitals’ Network

Authors:

Gabriel K. Costa, Edvard Martins de Oliveira and Mário Henrique Souza Pardo

Abstract: As Internet of Things (IoT) applications for eHealth gain momentum, one aspect becomes critical: the underlying network infrastructure. The efficient operation of interconnected devices hinges upon a robust and reliable network architecture, and weaknesses there can present formidable barriers to IoT’s potential. In this paper, we analyze the capacity of a typical hospital network infrastructure to support new eHealth solutions, given the number of devices required in an average setup. We use the iFogSim simulator to model a hospital network, selecting device parameters from state-of-the-art solutions. This work is an important investigation of the overload a collection of IoT devices may cause in a common network setup, and it can be used to plan the installation of eHealth solutions ahead of time. Our results show that a realistic arrangement of IoT solutions may significantly impact network flow, energy consumption, and processing time. Moreover, the number of gateways affects the working capacity of the overall system, turning gateways from distribution points into bottlenecks. The simulated environment provides a reliable representation, allowing the extrapolation of the scenarios.

Paper Nr: 42
Title:

Benefits of Dynamic Computational Offloading for Mobile Devices

Authors:

Vinay Yadhav, Andrew Williams, Ondrej Smid, Jimmy Kjällman, Raihan U. Islam, Joacim Halén and Wolfgang John

Abstract: The proliferation of applications across mobile devices coupled with fast mobile broadband have led to expectations of better application performance, user experiences, and extended device battery life. To address this, we propose a dynamic computational offloading solution that migrates critical application tasks to remote compute sites within mobile networks. Offloading is particularly advantageous for lightweight devices, as it enables access to more capable processing hardware. Application developers can also leverage the offloading service to customize features, address privacy concerns, and optimize performance based on user requirements. Moreover, the solution facilitates local synchronization among collaborating users. Our solution focuses on ad-hoc deployment and dynamic scheduling of fine-grained application tasks triggered by changes in device metrics, without extensive development efforts. It extends application functionality from mobile devices to remote compute environments, complementing the cloud-to-edge paradigm. We introduce a distributed execution framework based on portable, lightweight, and secure WebAssembly runtimes. Additionally, we present a programming model to simplify ad-hoc deployment and dynamic invocation of task modules during runtime. We demonstrate the benefits of our solution, showing significant performance improvements of the application, and reduced energy consumption and heat generation on the mobile device.
Download
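
The metric-triggered offloading decision described in the abstract can be illustrated with a small policy function. This is a minimal sketch, not the paper's actual scheduler; the metric names and thresholds (`battery_min`, `load_max`, `latency_max`) are hypothetical.

```python
def should_offload(battery_pct, cpu_load, net_latency_ms,
                   battery_min=30, load_max=0.8, latency_max=100):
    """Hypothetical offloading policy: migrate a task when the device is
    under pressure (low battery or high CPU load), but only if the network
    to the remote compute site is fast enough to make offloading worthwhile."""
    device_pressured = battery_pct < battery_min or cpu_load > load_max
    network_usable = net_latency_ms <= latency_max
    return device_pressured and network_usable
```

A real system would re-evaluate such a predicate whenever device metrics change, triggering dynamic scheduling of task modules as the abstract describes.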

Paper Nr: 49
Title:

Improving Edge-AI Image Classification Through the Use of Better Building Blocks

Authors:

Lucas Mohimont, Lilian Hollard and Luiz A. Steffenel

Abstract: Traditional CNN architectures for classification, while successful, suffer from limitations due to diminishing spatial resolution and vanishing gradients. The emergence of modular “building blocks” offered a new approach, allowing complex feature extraction through stacked layers. Despite the popularity of models like VGG, their high parameter count restricts their use in resource-constrained environments like Edge AI. This work investigates efficient building blocks as alternatives to VGG blocks, comparing the performance of diverse blocks from well-known models alongside our proposed block. Extensive experiments across various datasets demonstrate that our proposed block surpasses established blocks like Inception v1 in terms of accuracy while requiring significantly fewer resources in computational cost (GFLOPs) and memory footprint (number of parameters). This showcases its potential for real-world applications in Edge AI.
Download
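
The parameter-count advantage of efficient building blocks over plain VGG-style blocks can be seen with simple arithmetic. The sketch below compares a single 3x3 convolution with a bottlenecked variant that first reduces channels through a 1x1 convolution (the idea behind Inception-style blocks); the channel counts are illustrative, not taken from the paper.

```python
def conv_params(k, c_in, c_out, bias=True):
    """Number of learnable parameters in a k x k convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

# Illustrative channel counts (not from the paper):
c_in, c_out, c_mid = 256, 256, 64

# VGG-style: one full 3x3 convolution at full channel width.
vgg_block = conv_params(3, c_in, c_out)

# Bottleneck: a 1x1 reduction followed by a 3x3 conv on fewer channels.
bottleneck = conv_params(1, c_in, c_mid) + conv_params(3, c_mid, c_out)

print(vgg_block, bottleneck)
```

Even this toy comparison shows the bottlenecked block using roughly a quarter of the parameters, which is the kind of saving that matters for Edge AI deployments.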

Area 4 - Cloud Computing Platforms and Applications

Full Papers
Paper Nr: 21
Title:

Creek: Leveraging Serverless for Online Machine Learning on Streaming Data

Authors:

Nazmul Takbir, Tahmeed Tarek and Muhammad A. Adnan

Abstract: Recently, researchers have seen promising results in using serverless computing for real-time machine learning inference tasks. Several researchers have also used serverless for machine learning training and compared it against VM-based (virtual machine) training. However, most of these approaches, which assumed traditional offline machine learning, did not find serverless to be particularly useful for model training. In our work, we take a different approach; we explore online machine learning. The incremental nature of training online machine learning models allows better utilization of the elastic scaling and consumption-based pricing offered by serverless. Hence, we introduce Creek, a proof-of-concept system for training online machine learning models on streaming data using serverless. We explore architectural variants of Creek on AWS and compare them in terms of monetary cost and training latency. We also compare Creek against VM-based training and identify the factors influencing the choice between a serverless and VM-based solution. We explore model parallelism and introduce a usage-based dynamic memory allocation of serverless functions to reduce costs. Our results indicate that serverless training is cheaper than VM-based training when the streaming rate is sporadic and unpredictable. Furthermore, parallel training using serverless can significantly reduce training latency for models with low communication overhead.
Download
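
The incremental nature of online learning that makes Creek a good fit for serverless can be sketched with a minimal per-event SGD learner. This is an illustrative model, not Creek's implementation; each `partial_fit` call corresponds to processing one streamed event, which maps naturally onto a short serverless function invocation.

```python
class OnlineLinearModel:
    """Minimal online learner: one SGD step per incoming event, so state
    updates are small and training cost tracks the (possibly sporadic)
    streaming rate instead of requiring an always-on VM."""

    def __init__(self, n_features, lr=0.01):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def partial_fit(self, x, y):
        """Update the model on a single (features, label) event."""
        pred = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        err = pred - y
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err
        return err  # signed prediction error before the update
```

Because each update touches only the current event, a serverless function can load the model state, apply a batch of `partial_fit` steps, and persist it, paying only for events actually processed.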

Short Papers
Paper Nr: 54
Title:

Integrating Secure Multiparty Computation into Data Spaces

Authors:

Veronika Siska, Thomas Lorünser, Stephan Krenn and Christoph Fabianek

Abstract: Integrating secure multiparty computation (MPC) into data spaces is a promising approach for enabling secure and trustworthy data-sharing in the future Data Economy. This paper systematically analyzes the integration challenges of MPC in data spaces and proposes a comprehensive approach to address these challenges. The authors evaluate various use cases to identify key challenges and gaps in existing research. They propose concrete methods and technologies to solve these challenges, focusing on areas such as authentication and identity management, policy description, node selection, global system parameters, and access control. The paper emphasizes the importance of standardization efforts to ensure interoperability among MPC-enabled data spaces. Overall, this work provides valuable insights and directions for further research in integrating MPC into dynamic data-sharing environments.
Download
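
The core MPC primitive underlying such data-space integrations can be illustrated with additive secret sharing, where no single node ever sees a raw value. This is a textbook sketch for intuition only; production MPC frameworks add authentication, malicious-security checks, and multiplication protocols far beyond this.

```python
import random

MOD = 2**61 - 1  # a Mersenne prime as the working modulus

def share(secret, n, modulus=MOD):
    """Split a secret into n additive shares: any n-1 shares reveal nothing."""
    shares = [random.randrange(modulus) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % modulus)
    return shares

def reconstruct(shares, modulus=MOD):
    """Recombine all shares to recover the secret."""
    return sum(shares) % modulus

def add_shares(a, b, modulus=MOD):
    """Secure addition: each party adds its own shares locally,
    so the sum is computed without revealing either input."""
    return [(x + y) % modulus for x, y in zip(a, b)]
```

In a data space, each participating node would hold one share per value, and aggregate statistics could be computed across organizations without any node learning another's data.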

Paper Nr: 59
Title:

Harnessing the Computing Continuum Across Personalized Healthcare, Maintenance and Inspection, and Farming 4.0

Authors:

Fatemeh Baghdadi, Davide Cirillo, Daniele Lezzi, Francesc Lordan, Fernando Vazquez, Eugenio Lomurno, Alberto Archetti, Danilo Ardagna and Matteo Matteucci

Abstract: The AI-SPRINT project, launched in 2021 and funded by the European Commission, focuses on the development and implementation of AI applications across the computing continuum. This continuum ensures the coherent integration of computational resources and services from centralized data centers to edge devices, facilitating efficient and adaptive computation and application delivery. AI-SPRINT has achieved significant scientific advances, including streamlined processes, improved efficiency, and the ability to operate in real time, as evidenced by three practical use cases. This paper provides an in-depth examination of these applications – Personalized Healthcare, Maintenance and Inspection, and Farming 4.0 – highlighting their practical implementation and the objectives achieved with the integration of AI-SPRINT technologies. We analyze how the proposed toolchain effectively addresses a range of challenges and refines processes, discussing its relevance and impact in multiple domains. After a comprehensive overview of the main AI-SPRINT tools used in these scenarios, the paper summarizes the findings and key lessons learned.
Download

Paper Nr: 18
Title:

Value for Money: An Experimental Comparison of Cloud Pricing and Performance

Authors:

Michiel Willocx, Ilse Bohé and Vincent Naessens

Abstract: Organizations increasingly rely on cloud providers for computation-intensive tasks. This study executes computationally expensive experiments in five cloud environments with a substantial market share. More specifically, we selected the big three and two representative European counterparts. By means of these experiments, we aim to compare and assess their value for money with respect to computation-intensive tasks. The paper focuses on three aspects of high interest to industrial stakeholders, namely (a) the impact of server location and time of day on performance, (b) computational efficiency in relation to costs, and (c) a comparison between European service providers and the big three in the cloud space.
Download
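
A value-for-money comparison of the kind the abstract describes reduces to a simple performance-per-dollar metric. The sketch below is illustrative only; the provider names, runtimes, and hourly prices are made up and are not the paper's measurements.

```python
def value_for_money(runtime_s, hourly_price_usd):
    """Benchmark runs per dollar for a fixed workload (higher is better)."""
    cost_usd = runtime_s / 3600.0 * hourly_price_usd
    return 1.0 / cost_usd

# Hypothetical benchmark results: (runtime in seconds, price per hour in USD).
providers = {
    "provider_a": (1200, 0.40),
    "provider_b": (1500, 0.25),
    "provider_c": (900, 0.70),
}

# Rank providers by value for money, best first.
ranking = sorted(providers, key=lambda p: value_for_money(*providers[p]),
                 reverse=True)
print(ranking)
```

Note how the fastest (hypothetical) provider is not the best value: raw performance and price must be combined, which is exactly the point of a value-for-money study.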

Paper Nr: 53
Title:

Towards a Cost-Benefit Analysis of Additive Manufacturing as a Service

Authors:

Igor Ivkić, Tobias Buhmann, Burkhard List and Clemens Gnauer

Abstract: The landscape of traditional industrial manufacturing is undergoing a pivotal shift from resource-intensive production and long supply chains to more sustainable and regionally focused economies. In this evolving scenario, the move towards local, on-demand manufacturing is emerging as a remedy for the environmentally damaging practice of mass-producing products in distant countries and then transporting them over long distances to customers. This paradigm shift significantly empowers customers, giving them greater control over the manufacturing process by enabling on-demand production and favouring local production sites over traditional mass production and extensive shipping practices. In this position paper we propose a cloud-native Manufacturing as a Service (MaaS) platform that integrates advances in three-dimensional (3D) printing technology into a responsive and eco-conscious manufacturing ecosystem. In this context, we propose a high-level architectural design for a cloud-based MaaS platform that connects web shops of local stores with small and medium-sized enterprises (SMEs) operating 3D printers. Furthermore, we outline an experimental design, including a cost-benefit analysis, to empirically evaluate the operational effectiveness and economic feasibility of a cloud-based additive manufacturing ecosystem. The proposed cloud-based MaaS platform enables on-demand additive manufacturing and opens up a profit-sharing opportunity between different stakeholders.
Download

Paper Nr: 58
Title:

Towards a Cloud-Based Smart Office Solution for Shared Workplace Individualization

Authors:

Dominik Hasiwar, Andreas Gruber, Christian Dragschitz and Igor Ivkić

Abstract: In the evolving landscape of workplace dynamics, the shift towards hybrid working models has highlighted inefficiencies in the use of traditional office space and the need for an improved employee experience. In this position paper we propose a Smart Office solution that addresses these challenges by integrating a microservice architecture with Internet of Things (IoT) technologies to provide a flexible, personalized workspace environment. The position paper focuses on the technical implementation of this solution, including the design of a Workplace Environment Index (WEI) to monitor and improve office conditions. By using cloud technology, IoT devices with sensors, and following a user-centred design, the proposed solution shows how Shared Open Workspaces can be transformed into adaptive, efficient environments that support the diverse needs of the modern workforce. This position paper paves the way for future experimentation in real-world office environments to validate the effectiveness of the Smart Office solution and provide insights into its potential to redefine the workplace for improved productivity and employee satisfaction.
Download
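
A Workplace Environment Index of the kind the abstract proposes is naturally expressed as a weighted aggregate of normalized sensor readings. The sketch below is a hypothetical formulation; the sensor names, weights, and the specific weighted-mean form are illustrative assumptions, not the paper's actual WEI definition.

```python
def workplace_environment_index(readings, weights):
    """Hypothetical WEI: weighted mean of per-sensor scores in [0, 1],
    where each score reflects how close a reading is to its comfort range."""
    total_weight = sum(weights.values())
    return sum(weights[k] * readings[k] for k in weights) / total_weight

# Illustrative normalized scores from IoT sensors at one workplace.
scores = {"temperature": 0.8, "co2": 0.6, "light": 1.0}
weights = {"temperature": 2.0, "co2": 1.0, "light": 1.0}
wei = workplace_environment_index(scores, weights)
```

Such an index gives the microservices a single comparable number per workplace, which can drive recommendations when employees pick a desk in a shared office.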

Area 5 - Services

Short Papers
Paper Nr: 30
Title:

Model-Driven End-to-End Resolution of Security Smells in Microservice Architectures

Authors:

Philip Wizenty, Francisco Ponce, Florian Rademacher, Jacopo Soldani, Hernán Astudillo, Antonio Brogi and Sabine Sachweh

Abstract: Microservice Architecture (MSA) is a popular approach to designing, implementing, and deploying complex software systems. However, MSA introduces inherent challenges associated with distributed systems—one of them is the detection and mitigation of security smells. This paper draws on recent works that identified and categorized security smells in MSAs to propose a novel end-to-end approach for resolving security smells in existing MSAs. To this end, the presented approach extends a modeling ecosystem for MSAs with (i) reconstruction capabilities that automatically map MSA source code to viewpoint-specific architecture models; (ii) validations that detect security smells from reconstructed models; and (iii) model refactorings that support the interactive resolution of security smells and solutions’ reflection back to source code. Our approach allows for (i) uncovering security smells, which originate from the combination of different places in source code with possibly heterogeneous purposes, technologies, and software languages; as well as (ii) clustering, reifying, and fixing smells using a level of abstraction that is directed towards MSA stakeholders. The applicability and effectiveness of our approach are evaluated utilizing a standard case study from MSA research.
Download

Area 6 - Cloud Computing Enabling Technology

Full Papers
Paper Nr: 31
Title:

Hosting-Aware Pruning of Components in Deployment Models

Authors:

Miles Stötzner, Sandro Speth and Steffen Becker

Abstract: The deployment of modern composite applications, which are distributed across heterogeneous environments, typically requires a combination of different deployment technologies. Besides, applications must be deployed in different variants due to varying customer requirements. Variable Deployment Models manage such deployment variabilities based on conditional elements. To simplify modeling, elements, such as incomplete relations or hosting stacks without hosted components, are pruned, i.e., automatically removed from the model and, therefore, from the deployment. However, components whose hosting stack is absent are not automatically removed. Manually ensuring the absence of these components is repetitive, complex, and error-prone. In this work, we address this shortcoming and introduce the pruning of components without a hosting stack. This hosting-aware pruning must be correctly integrated into the already existing pruning concepts since, otherwise, a major part of the deployment is pruned unexpectedly. We evaluate our concepts by implementing a prototype and by conducting a case study using this prototype.
Download
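
The hosting-aware pruning idea can be sketched as a fixed-point computation over a component graph: components whose host has been removed are themselves removed, until the model stabilizes. This is a simplified illustration of the concept, not the paper's prototype; the dictionary-based model is an assumption.

```python
def prune(present, hosted_on):
    """Iteratively remove components whose hosting stack is absent.

    present:   set of component names currently in the deployment model.
    hosted_on: mapping from a component to the component it is hosted on
               (components with no entry are self-sufficient, e.g. a VM).
    """
    alive = set(present)
    changed = True
    while changed:
        changed = False
        for component in list(alive):
            host = hosted_on.get(component)
            if host is not None and host not in alive:
                alive.remove(component)  # host is gone: prune the component
                changed = True
    return alive

# Example stack: app hosted on a runtime, runtime hosted on a VM.
hosting = {"app": "runtime", "runtime": "vm"}
```

If the VM is absent from a deployment variant, the runtime and then the app are pruned transitively, which is exactly the manual cleanup the paper seeks to automate.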

Short Papers
Paper Nr: 38
Title:

Don't Train, Just Prompt: Towards a Prompt Engineering Approach for a More Generative Container Orchestration Management

Authors:

Nane Kratzke and André Drews

Abstract: Background: The intricate architecture of container orchestration systems like Kubernetes relies on the critical role of declarative manifest files that serve as the blueprints for orchestration. However, managing these manifest files often presents complex challenges requiring significant DevOps expertise. Methodology: This position paper explores using Large Language Models (LLMs) to automate the generation of Kubernetes manifest files through natural language specifications and prompt engineering, aiming to simplify Kubernetes management. The study evaluates these LLMs using Zero-Shot, Few-Shot, and Prompt-Chaining techniques against DevOps requirements and the ability to support fully automated deployment pipelines. Results show that LLMs can produce Kubernetes manifests with varying degrees of manual intervention, with GPT-4 and GPT-3.5 showing potential for fully automated deployments. Interestingly, smaller models sometimes outperform larger ones, questioning the assumption that bigger is always better. Conclusion: The study emphasizes that prompt engineering is critical to optimizing LLM outputs for Kubernetes. It suggests further research into prompt strategies and LLM comparisons and highlights a promising research direction for integrating LLMs into automatic deployment pipelines.
Download
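
The Few-Shot prompting technique the study evaluates amounts to assembling a prompt that pairs natural-language specifications with example manifests before the target specification. The sketch below only builds the prompt string; the wording, the example manifest, and the function name are illustrative assumptions, not the paper's actual prompts.

```python
def build_prompt(spec, examples=()):
    """Assemble a Few-Shot prompt for Kubernetes manifest generation:
    each example pairs a natural-language spec with its manifest, and the
    target spec comes last with an open 'Manifest:' slot for the LLM."""
    parts = ["Generate a Kubernetes manifest for the following specification."]
    for example_spec, example_manifest in examples:
        parts.append(f"Specification: {example_spec}\nManifest:\n{example_manifest}")
    parts.append(f"Specification: {spec}\nManifest:")
    return "\n\n".join(parts)

# Illustrative one-shot example (a minimal, hypothetical manifest snippet).
redis_example = ("a Pod running redis",
                 "apiVersion: v1\nkind: Pod\nmetadata:\n  name: redis")
prompt = build_prompt("an nginx Deployment with 3 replicas",
                      examples=[redis_example])
```

Zero-Shot corresponds to calling `build_prompt` with no examples, and Prompt-Chaining would feed the model's output back into a follow-up prompt, e.g. asking it to validate or refine the generated manifest.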

Paper Nr: 19
Title:

Pruning Modes for Deployment Models: From Manual Modeling to Automated Removal of Elements and Their Implications

Authors:

Miles Stötzner, Sandro Speth and Steffen Becker

Abstract: The deployment of modern applications, which consist of multiple components distributed across multiple environments, typically requires a combination of multiple deployment technologies. Besides, applications are deployed in different variants due to differing requirements, such as costs and elasticity. Managing deployment variability across multiple heterogeneous deployment technologies is complex and error-prone. Therefore, Variable Deployment Models provide a deployment variability modeling layer independent of the underlying deployment technologies. To ease modeling, elements are pruned, i.e., automatically removed from the deployment due to consistency issues and semantic aspects. However, this might lead to unexpected removal of elements and might mask modeling errors. In this work, we investigate the implications of giving up control when pruning elements and analyze different degrees of pruning. To this end, we introduce different Pruning Modes, which define which consistency issues and semantic aspects should be considered while pruning elements. We evaluate the proposed pruning modes by implementing a prototype, conducting a case study, and running experiments with this prototype.
Download

Paper Nr: 23
Title:

Service Weaver: A Promising Direction for Cloud-Native Systems?

Authors:

Jacoby Johnson, Subash Kharel, Alan Mannamplackal, Amr S. Abdelfattah and Tomas Cerny

Abstract: Cloud-Native and microservice architectures have taken the development world by storm. While being incredibly scalable and resilient, microservice architectures also come at the cost of increased overhead to build and maintain. Google’s Service Weaver aims to simplify the complexities associated with implementing cloud-native systems by introducing the concept of a single modular binary composed of agent-like components, thereby abstracting away the microservice architecture notion of individual services. While Service Weaver presents a promising approach to streamline the development of cloud-native applications and addresses nearly all significant aspects of conventional cloud-native systems, there are existing tradeoffs affecting the overall functionality of the system. Notably, Service Weaver’s straightforward implementation and deployment of components alleviate the overhead of constructing a complex microservice architecture. However, it is important to acknowledge that certain features, including separate code bases, routing mechanisms, resiliency, and security, are presently lacking in the framework.
Download

Paper Nr: 35
Title:

CacheFlow: Enhancing Data Flow Efficiency in Serverless Computing by Local Caching

Authors:

Yi-Syuan Ke, Zhan-Wei Wu, Chih-Tai Tsai, Sao-Hsuan Lin and Jerry Chou

Abstract: In serverless workflows, it is common for various functions to share duplicate files or for the output of one function to be the input of the following function. Currently, these files are typically stored in remote storage, and transferring large amounts of data over the network can be inefficient and time-consuming. The state of the art has not yet optimized this aspect, resulting in wasted time. In this paper, we present an improved data transfer solution that reduced data transfer time by up to 82.55% in our experiments. This improvement is achieved by replacing the time spent on remote network access with local disk access time. To shorten the data transfer path, we implement per-node caching, utilizing disk storage as local cache space.
Download
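
The per-node caching pattern behind CacheFlow, checking the local disk before falling back to remote storage, can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation; real systems would add eviction, concurrency control, and integrity checks.

```python
import hashlib
import os
import tempfile

class LocalCache:
    """Per-node disk cache: serve a file from local disk on a hit,
    otherwise fetch it from remote storage and keep a local copy so
    subsequent functions on this node avoid the network transfer."""

    def __init__(self, cache_dir):
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, key):
        # Hash the key so arbitrary object names map to safe filenames.
        return os.path.join(self.cache_dir,
                            hashlib.sha256(key.encode()).hexdigest())

    def get(self, key, fetch_remote):
        path = self._path(key)
        if os.path.exists(path):       # cache hit: local disk access only
            with open(path, "rb") as f:
                return f.read()
        data = fetch_remote(key)       # cache miss: go to remote storage
        with open(path, "wb") as f:    # populate the cache for next time
            f.write(data)
        return data
```

When consecutive functions in a workflow run on the same node, the second function's input is already on local disk, which is the substitution of disk access for network access that drives the reported savings.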