CLOSER 2016 Abstracts


Area 1 - Cloud Computing Fundamentals

Full Papers
Paper Nr: 6
Title:

SemNaaS: Semantic Web for Network as a Service

Authors:

Mohamed Morsey, Hao Zhu, Isart Canyameres, Samuel Norbury, Paola Grosso and Miroslav Zivkovic

Abstract: Cloud Computing has several provisioning models, namely Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). However, cloud users (tenants) have limited or no control over the underlying network resources and services. Network as a Service (NaaS) is emerging as a novel model to bridge this gap. However, NaaS requires an approach capable of modeling the underlying network resources and capabilities in an abstracted and vendor-independent form. In this paper we elaborate on SemNaaS, a Semantic Web based approach for supporting network management in NaaS systems. Our contribution is three-fold. First, we adopt and improve the Network Markup Language (NML) ontology for describing NaaS infrastructures. Second, based on that ontology, we develop a network modeling system that is integrated with the existing OpenNaaS framework. Third, we demonstrate the benefits that Semantic Web adds to the Network as a Service paradigm by applying SemNaaS operations to a specific NaaS use case.

Paper Nr: 21
Title:

DSaaS - A Cloud Service for Persistent Data Structures

Authors:

Pierre Bernard le Roux, Steve Kroon and Willem Bester

Abstract: In an attempt to tackle shortcomings of current approaches to collaborating on the development of structured data sets, we present a prototype platform that allows users to share and collaborate on the development of data structures via a web application, or by using language bindings or an API. Using techniques from the theory of persistent linked data structures, the resulting platform delivers automatically version-controlled map and graph abstract data types as a web service. The core of the system is provided by a Hash Array Mapped Trie (HAMT) which is made confluently persistent by path-copying. The system aims to make efficient use of storage, and to have consistent access and update times regardless of the version being accessed or modified.
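
To make the path-copying idea behind such confluently persistent structures concrete, the following minimal Python sketch shows a toy HAMT-like map in which every update copies only the nodes on the path from the root to the affected leaf, so earlier versions stay readable and share unchanged subtrees. It illustrates the general technique only, not the DSaaS implementation; the class names, the 4-bit branching factor and the fixed 32-bit hash depth are assumptions made for brevity.

    # Toy illustration of path-copying persistence: updates never mutate
    # existing nodes, they return a new root that shares untouched subtrees.
    class Node:
        __slots__ = ("children", "entries")

        def __init__(self, children=None, entries=None):
            self.children = children if children is not None else {}  # slot -> Node
            self.entries = entries if entries is not None else {}      # key -> value

    BITS = 4                     # 16-way branching, HAMT-style
    MASK = (1 << BITS) - 1
    EMPTY = Node()

    def _insert(node, key, value, h, shift):
        if shift >= 32:                          # hash bits exhausted: store the entry here
            entries = dict(node.entries)
            entries[key] = value
            return Node(dict(node.children), entries)
        slot = (h >> shift) & MASK
        child = node.children.get(slot, EMPTY)
        children = dict(node.children)           # path copying: clone only this level
        children[slot] = _insert(child, key, value, h, shift + BITS)
        return Node(children, dict(node.entries))

    def assoc(root, key, value):
        """Return a new map version; the old version stays valid and unchanged."""
        return _insert(root, key, value, hash(key) & 0xFFFFFFFF, 0)

    def lookup(node, key):
        h, shift = hash(key) & 0xFFFFFFFF, 0
        while shift < 32:
            node = node.children.get((h >> shift) & MASK)
            if node is None:
                return None
            shift += BITS
        return node.entries.get(key)

    v1 = assoc(EMPTY, "a", 1)
    v2 = assoc(v1, "a", 2)                       # new version; v1 still maps "a" to 1
    assert lookup(v1, "a") == 1 and lookup(v2, "a") == 2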

Paper Nr: 29
Title:

Design Time Validation for the Correct Execution of BPMN Collaborations

Authors:

Jonas Anseeuw, Gregory Van Seghbroeck, Bruno Volckaert and Filip De Turck

Abstract: Cloud-based Software-as-a-Service (SaaS) providers want to grow into the space of business process outsourcing (BPO). BPO refers to the systematic and controlled delegation of many steps of a company's business process. BPO is a new and important extension to SaaS, as it allows the provider to add more value to the online application services and enables the outsourcer to obtain more cost efficiency. BPO results in decentralized federated workflows. To describe these workflows, companies often use business process modeling languages. Currently, Business Process Modeling Notation (BPMN) is one of the best-known standards. It is crucial to ascertain that the modeled workflow is executed as intended, since errors that happen during execution of a federated workflow can come with huge costs. Validation of the model is currently limited to syntactical checks, and there is little support for validating the execution at design time. In this paper, a method is presented to validate the correct execution of BPMN 2.0 Collaborations. The method uses concepts from virtual time previously described for the Web Services Choreography Description Language (WS-CDL). To validate the results of this research, the Eclipse BPMN modeler was extended with an implementation of the validation method.

Paper Nr: 32
Title:

Evidence Collection in Cloud Provider Chains

Authors:

Thomas Rübsamen, Christoph Reich, Nathan Clarke and Martin Knahl

Abstract: With the increasing importance of cloud computing, compliance concerns are coming into the focus of businesses more often. Furthermore, businesses still consider security and privacy related issues to be the most prominent inhibitors for an even more widespread adoption of cloud computing services. Several frameworks try to address these concerns by building comprehensive guidelines for security controls for the use of cloud services. However, assurance of the correct and effective implementation of such controls is required by businesses to attenuate the loss of control that is inherently associated with using cloud services. Providing this kind of assurance is traditionally the task of audits and certification. Cloud auditing becomes increasingly challenging for the auditor the more complex the cloud service provision chain becomes. There are many examples of Software as a Service (SaaS) providers that no longer own dedicated hardware for operating their services, but rely solely on other cloud providers at the lower layers, such as Platform as a Service (PaaS) or Infrastructure as a Service (IaaS) providers. The collection of data (evidence) for the assessment of policy compliance during a technical audit becomes more difficult the more complex the combination of cloud providers is. Nevertheless, collection at all participating providers is required to assess policy compliance in the whole chain. The main contribution of this paper is an analysis of potential ways of collecting evidence in an automated way across cloud provider boundaries to facilitate cloud audits. Furthermore, a way of integrating the most suitable approaches into the system for automated evidence collection and auditing is proposed.

Area 2 - Services Science

Full Papers
Paper Nr: 37
Title:

Individual Service Clearing as a Business Service: A Capable Reference Solution for B2B Mobility Marketplaces

Authors:

Michael Strasser, Nico Weiner and Sahin Albayrak

Abstract: The paper presents an approach for individual and outsourced service clearing within Business-to-Business marketplaces for mobility services in a Software as a Service fashion. The lack of service clearing possibilities within today's marketplace solutions for mobility services has been confirmed by interviews with experts, who highlight the need for clearing. To enable service clearing, appropriate interfaces which enable access to data are required. Current solutions lack those interfaces, and thus service clearing to charge service transactions is not possible. The paper discusses interfaces for outsourced service clearing according to their design, data and role dependencies. A reference solution is implemented to demonstrate the feasibility of the interfaces and the overall clearing approach. A clearing algorithm has been developed to validate the interfaces' reliability and correctness. Our presented clearing approach enables marketplace participants to outsource transaction clearing to a provider which offers clearing capabilities. The work at hand contributes to interface and protocol standardization with respect to service clearing and marketplace interconnectivity.

Area 3 - Cloud Computing Fundamentals

Full Papers
Paper Nr: 40
Title:

Decision Support System for Adoption of Cloud-based Services

Authors:

Radhika Garg, Marc Heimgartner and Burkhard Stiller

Abstract: Adoption of any new technology in an organization is a crucial decision, as it can have an impact at the technical, economical, and organizational level. One such decision is the adoption of Cloud-based services in an organization. Cloud Computing provides elastic resources on demand and offers the facility to pay per use. Thus, it is changing the way IT infrastructure is used today, with the huge benefit of cost savings. However, if the solution adopted by an organization does not fulfill the requirements, it can have tremendous negative consequences at the technical, economical, and organizational level. Therefore, the decision to adopt Cloud-based services should be based on a methodology that supports a wide array of criteria for evaluating the available alternatives. Also, as these criteria or factors can be mutually interdependent and conflicting, a trade-offs-based methodology is needed to make such decisions. This paper, therefore, discusses the design, implementation, and evaluation of the prototype developed for automating the theoretical methodology of the Trade-offs based Methodology for Adoption of Cloud-based Services (TrAdeCIS) developed in (Garg and Stiller, 2014). This system is based on Multi-attribute Decision Algorithms (MADA), which select the best alternative based on the decision maker's priorities for the criteria. In addition, the applicability of this methodology to the adoption of cloud-based services in an organization is validated with several use cases towards the end of the paper. Furthermore, the extendibility of this system to other domains is evaluated with respect to Train Operating Companies, which wish to find the best alternative for providing Internet connectivity and voice calls on-board trains.

Paper Nr: 71
Title:

Towards Auditing of Cloud Provider Chains using CloudTrust Protocol

Authors:

Thomas Rübsamen, Dirk Hölscher and Christoph Reich

Abstract: Although cloud computing can be considered mainstream today, there is still a lack of trust in cloud providers, when it comes to the processing of private or sensitive data. This lack of trust is rooted in the lack of transparency of the provider's data handling practices, security controls and their technical infrastructures. This problem worsens when cloud services are not only provisioned by a single cloud provider, but a combination of several independent providers. The main contributions of this paper are: we propose an approach to automated auditing of cloud provider chains with the goal of providing evidence-based assurance about the correct handling of data according to pre-defined policies. We also introduce the concepts of individual and delegated audits, discuss policy distribution and applicability aspects and propose a lifecycle model. Our previous work on automated cloud auditing and Cloud Security Alliance's (CSA) CloudTrust Protocol form the basis for the proposed system for provider chain auditing.

Paper Nr: 75
Title:

Process Mining Monitoring for Map Reduce Applications in the Cloud

Authors:

Federico Chesani, Anna Ciampolini, Daniela Loreti and Paola Mello

Abstract: The adoption of mobile devices and sensors, and the Internet of Things trend, are making available a huge quantity of information that needs to be analyzed. Distributed architectures, such as Map Reduce, are indeed providing technical answers to the challenge of processing these big data. Due to the distributed nature of these solutions, it can be difficult to guarantee the Quality of Service: e.g., it might not be possible to ensure that processing tasks are performed within a temporal deadline, due to specificities of the infrastructure or of the processed data itself. However, relying on cloud infrastructures, distributed applications for data processing can easily be provided with additional resources, such as the dynamic provisioning of computational nodes. In this paper, we focus on the step of monitoring Map Reduce applications, to detect situations where additional resources are needed to meet the deadlines. To this end, we exploit techniques and tools developed in the research field of Business Process Management: in particular, we focus on declarative languages and tools for monitoring the execution of business processes. We introduce a distributed architecture where a logic-based monitor is able to detect possible delays and trigger recovery actions such as the dynamic provisioning of further resources.
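
As a rough illustration of the monitoring step only (not of the declarative, logic-based monitor used in the paper), the sketch below extrapolates a running job's completion time from its task progress and flags jobs whose deadline is at risk, which is the point at which a recovery action such as provisioning further nodes would be triggered. The function name and the linear extrapolation are illustrative assumptions.

    import time

    def deadline_at_risk(started_at, deadline_at, completed_tasks, total_tasks, now=None):
        """Return True if, at the current completion rate, the job will miss its deadline."""
        now = time.time() if now is None else now
        if completed_tasks == 0:
            return False                              # nothing to extrapolate from yet
        rate = completed_tasks / (now - started_at)   # tasks completed per second so far
        eta = now + (total_tasks - completed_tasks) / rate
        return eta > deadline_at

    # 20 of 100 tasks done after 50 s with a 100 s deadline -> projected finish at 250 s
    if deadline_at_risk(started_at=0.0, deadline_at=100.0,
                        completed_tasks=20, total_tasks=100, now=50.0):
        print("deadline at risk: trigger recovery, e.g. provision additional nodes")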

Paper Nr: 112
Title:

A Wavelet-inspired Anomaly Detection Framework for Cloud Platforms

Authors:

David O'Shea, Vincent C. Emeakaroha, John Pendlebury, Neil Cafferkey, John P. Morrison and Theo Lynn

Abstract: Anomaly detection in Cloud service provisioning platforms is of significant importance, as the presence of anomalies indicates a deviation from normal behaviour, and in turn places the reliability of the distributed Cloud network into question. Existing solutions lack a multi-level approach to anomaly detection in Clouds. This paper presents a wavelet-inspired anomaly detection framework for detecting anomalous behaviours across Cloud layers. It records the evolution of multiple metrics and extracts a two-dimensional spectrogram representing a monitored system's behaviour. Over two weeks of historical monitoring data were used to train the system to identify healthy behaviour. Anomalies are then characterised as deviations from this expected behaviour. The training technique as well as the pre-processing techniques are highly configurable. Based on a Cloud service deployment use case scenario, the effectiveness of the framework was evaluated by randomly injecting anomalies into the recorded metric data and performing comparisons using the resulting spectrograms.
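
The abstract does not detail the transform or the training procedure, so the sketch below only illustrates the overall pipeline, with a short-time Fourier spectrogram standing in for the wavelet-inspired transform: a baseline is learned from healthy metric traces, and windows that deviate strongly from it are flagged. The window size, threshold and synthetic traces are assumptions made purely for illustration.

    import numpy as np

    def spectrogram(signal, window=64, step=32):
        """Short-time magnitude spectra of a one-dimensional metric trace."""
        frames = [signal[i:i + window] for i in range(0, len(signal) - window, step)]
        return np.array([np.abs(np.fft.rfft(f * np.hanning(window))) for f in frames])

    def train_baseline(healthy_trace):
        spec = spectrogram(healthy_trace)
        return spec.mean(axis=0), spec.std(axis=0) + 1e-9

    def detect_anomalies(trace, baseline, threshold=8.0):
        mean, std = baseline
        scores = np.abs((spectrogram(trace) - mean) / std).max(axis=1)  # per-window deviation
        return np.where(scores > threshold)[0]                          # anomalous window indices

    rng = np.random.default_rng(0)
    healthy = rng.normal(50, 2, 4096)            # e.g. two weeks of a CPU utilisation metric
    baseline = train_baseline(healthy)
    live = rng.normal(50, 2, 1024)
    live[500:600] += 40                          # injected anomaly
    print(detect_anomalies(live, baseline))      # windows overlapping samples 500-600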

Area 4 - Services Science

Full Papers
Paper Nr: 114
Title:

Evaluating the Effect of Utility-based Decision Making in Collective Adaptive Systems

Authors:

Vasilios Andrikopoulos, Marina Bitsaki, Santiago Goméz Sáez, Michael Hahn, Dimka Karastoyanova, Giorgos Koutras and Alina Psycharaki

Abstract: Utility, defined as the perceived satisfaction with a service, provides the ideal means for decision making at the level of individual entities and collectives participating in a large-scale dynamic system. Previous works have already introduced the concept into the area of collective adaptive systems, and have discussed the infrastructure necessary to turn the involved theoretical concepts into actual decision making. In this work we focus on two aspects. First, we provide a concrete utility model for a case study that is part of a larger research project. Second, we incorporate this model into our implementation of the proposed architecture. More importantly, we design and execute an experiment that aims to empirically evaluate the use of utility for decision making by comparing it against simpler decision making mechanisms.

Area 5 - Cloud Computing Fundamentals

Short Papers
Paper Nr: 15
Title:

A Task Orientated Requirements Ontology for Cloud Computing Services

Authors:

Richard Greenwell, Xiaodong Liu, Kevin Chalmers and Claus Pahl

Abstract: A requirements ontology offers a mechanism to map requirements for cloud computing services to cloud computing resources. Multiple stakeholders can capture and map knowledge in a flexible and efficient manner. The major contribution of the paper is the definition and development of an ontology for cloud computing requirements. The approach views each user requirement as a semantic intelligence task that is mapped to and delivered as cloud services. Requirements are modelled as tasks designed to meet specific requirements, problem domains in which the requirements exist, and problem-solving methods, which are generic mechanisms to solve problems. A meta-ontology for cloud computing is developed and populated with ontology fragments onto which cloud computing requirements can be mapped. A critical analysis of the usage of ontologies in the requirements process is made, and a case study is described that demonstrates the approach in a real-world application. The conclusion is that problem-solving ontologies provide a useful mechanism for the specification and reuse of requirements in the cloud computing environment.

Paper Nr: 25
Title:

A FLOSS License-selection Methodology for Cloud Computing Projects

Authors:

Robert Viseur

Abstract: Cloud computing and open source are two disruptive innovations. Both deeply modify the way computer resources are made available and monetized. They evolve between competition (e.g. open source software for the desktop versus SaaS applications) and complementarity (e.g. cloud solutions based on open source components or cloud applications published under an open source license). PaaSage is an open source integrated platform to support both the design and deployment of cloud applications. The PaaSage consortium decided to publish the source code as open source. It needed a process for open source license selection. The open source licensing scheme was born before the development of cloud computing and has evolved with the creation of new open source licenses suitable for SaaS applications. The license is a part of project governance and strongly influences the life of the project. In the context of the PaaSage European project, the issue of open source license selection for cloud computing software has been addressed. The first section of the paper describes the state of the art on open source licenses, including the known issues, a generic license-selection scheme and automated source code analysis practices. The second section studies the common choices of licenses in cloud computing projects. The third section proposes a FLOSS license-selection process for cloud computing projects following five steps: (1) inventorying software components, (2) selecting an open source license, (3) approving the license selection (vote), (4) spreading practical details and (5) monitoring source code. The fourth section describes the PaaSage use case. The last section consists of a discussion of the results.

Paper Nr: 30
Title:

Microservices: A Systematic Mapping Study

Authors:

Claus Pahl and Pooyan Jamshidi

Abstract: Microservices have recently emerged as an architectural style, addressing how to build, manage, and evolve architectures out of small, self-contained units. Particularly in the cloud, the microservices architecture approach seems to be an ideal complement to container technology at the PaaS level. However, there is currently no secondary study to consolidate this research. We aim here to identify, taxonomically classify and systematically compare the existing research body on microservices and their application in the cloud. We have conducted a systematic mapping study of 21 selected studies, published in the last two years up to the end of 2015, since the emergence of the microservices pattern. We classified and compared the selected studies based on a characterization framework. This results in a discussion of the agreed and emerging concerns within the microservices architectural style, positioning it within a continuous development context, but also moving it closer to cloud and container technology.

Paper Nr: 31
Title:

Challenges and New Avenues in Existing Replication Techniques

Authors:

Furat F. Altukhaim and Almetwally M. Mostafa

Abstract: Over recent years, the importance of data replication has risen steeply, owing to the fact that databases are increasingly deployed over clusters of different workstations. A variety of replication techniques have been introduced to the distributed systems field, which, in this paper, are classified based on whether or not they have an unbalanced load between servers (classic and modern). Replication techniques from both categories can be enhanced by avoiding some of the challenges that are illustrated in detail in this paper. Moreover, this paper analyses replication techniques in each category by exploring their strengths and weaknesses, provides possible novel solutions that can diminish or eliminate these challenges, and introduces a brief description of the Dynamic Object Ownership Distribution Protocol, which aims at increasing throughput by increasing the rate of transactions performed locally, in addition to presenting promising preliminary results of its performance.

Paper Nr: 39
Title:

An Optimized Model for Files Storage Service in Cloud Environments

Authors:

Haythem Yahyaoui and Samir Moalla

Abstract: Cloud computing nowadays represents a revolution in the distributed systems domain because of its several services. One of the most used services in Cloud environments is file storage, which consists of uploading files to the Cloud's data center and using them at any time from anywhere. Due to the large number of Cloud customers, we risk poor management of the Cloud's data center, such as space lost to file redundancy, which arises when one customer uploads the same file several times or when the same file is uploaded by many customers. Such a problem is very frequent when we have a large number of customers. Many studies have addressed this issue; researchers have proposed several solutions, each with its advantages and disadvantages. In order to save space and minimize costs, we propose an optimized model which consists of eliminating file redundancy before the duplication step in the data centers. Experimentally, during the evaluation phase, our model is compared with some existing methods.
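
A minimal sketch of the underlying idea, eliminating redundant copies by content hashing before files are stored (and later replicated) in the data center. The class, the in-memory dictionaries and the SHA-256 choice are illustrative assumptions rather than the paper's actual model.

    import hashlib

    class DedupStore:
        """Toy store keeping each distinct file content exactly once."""

        def __init__(self):
            self.blobs = {}      # content hash -> file bytes, stored once
            self.catalog = {}    # (customer, filename) -> content hash

        def upload(self, customer, filename, data):
            digest = hashlib.sha256(data).hexdigest()
            if digest not in self.blobs:             # only unseen content consumes space
                self.blobs[digest] = data
            self.catalog[(customer, filename)] = digest
            return digest

        def download(self, customer, filename):
            return self.blobs[self.catalog[(customer, filename)]]

    store = DedupStore()
    store.upload("alice", "report.pdf", b"identical bytes")
    store.upload("bob", "copy.pdf", b"identical bytes")      # same content, no extra space
    assert len(store.blobs) == 1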

Paper Nr: 41
Title:

Consolidation of Performance and Workload Models in Evolving Cloud Application Topologies

Authors:

Santiago Goméz Sáez, Vasilios Andrikopoulos and Frank Leymann

Abstract: The increase in available Cloud services and providers has contributed to accelerating development and has broadened the possibilities for building and provisioning Cloud applications in heterogeneous Cloud environments. The necessity of satisfying business and operational requirements in an agile and rapid manner has created the need to adapt traditional methods and tooling support for building and provisioning Cloud applications. Focusing on the application's performance and its evolution, we observe a lack of support for specifying, capturing, analyzing, and reasoning on the impact of using different Cloud services and configurations. This paper bridges this gap by proposing the conceptual and tooling support to enhance Cloud application topology models in order to capture and analyze the evolution of the application's performance. The tooling support is built upon an existing modeling environment, which is subsequently evaluated using the MediaWiki (Wikipedia) application and its realistic workload.

Paper Nr: 43
Title:

Deployment over Heterogeneous Clouds with TOSCA and CAMP

Authors:

Jose Carrasco, Javier Cubo, Ernesto Pimentel and Francisco Durán

Abstract: Cloud Computing providers offer diverse services and capabilities, which can be used by end-users to compose heterogeneous contexts of multiple cloud platforms to deploy their applications, in accordance with the best offered capabilities. However, this is an ideal scenario, since cloud platforms operate in an isolated way and present interoperability and portability restrictions. Each provider defines its own API, non-functional requirements, QoS, add-ons, etc., and developers are often locked into a concrete cloud environment, hampering the integration of heterogeneous provider services to achieve cross-deployment. This work presents an approach to deploying cross-cloud applications by using standardisation efforts for the design, management and deployment of cloud applications. Specifically, using mechanisms specified by the TOSCA and CAMP standards, we propose a methodology to describe the topology and distribution of the modules of a cloud application and to deploy the inter-connected modules over heterogeneous clouds. We present our prototype TOMAT, which supports the automatic distribution of cloud applications over multiple providers.

Paper Nr: 44
Title:

A Hadoop based Framework to Process Geo-distributed Big Data

Authors:

Marco Cavallo, Lorenzo Cusma', Giuseppe Di Modica, Carmelo Polito and Orazio Tomarchio

Abstract: In many application fields, such as social networks, e-commerce and content delivery networks, big amounts of data are constantly produced in geographically distributed sites and need to be processed in a timely manner. Distributed computing frameworks such as Hadoop (based on the MapReduce paradigm) have been used to process big data by exploiting the computing power of many cluster nodes interconnected through high-speed links. Unfortunately, Hadoop has been shown to perform very poorly in the scenario just mentioned. We designed and developed a Hadoop framework that is capable of scheduling and distributing Hadoop tasks among geographically distant sites in a way that optimizes the overall job performance. We propose a hierarchical approach where a top-level entity, by exploiting information about data location, is capable of producing a smart schedule of low-level, independent MapReduce sub-jobs. A software prototype of the framework was developed. Tests run on the prototype showed that the job scheduler makes good forecasts of a job's expected execution time.
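
The sketch below illustrates the flavour of such a locality-aware, hierarchical split of a job into per-site sub-jobs: for each block of data, a top-level scheduler compares processing in place with shipping the data to a faster site, and the job's estimated completion time is that of the slowest sub-job. The cost model (per-site processing rates, inter-site bandwidths) is an illustrative assumption, not the scheduler developed in the paper.

    def plan_subjobs(data_at_site, rate_at_site, bandwidth):
        """data_at_site: site -> GB stored there; rate_at_site: site -> GB/s processing rate;
        bandwidth: (src, dst) -> GB/s available for moving data between sites."""
        plan = {}
        for src, size in data_at_site.items():
            best_site, best_time = None, float("inf")
            for dst, rate in rate_at_site.items():
                transfer = 0.0 if dst == src else size / bandwidth[(src, dst)]
                total = transfer + size / rate
                if total < best_time:
                    best_site, best_time = dst, total
            plan[src] = (best_site, best_time)
        makespan = max(t for _, t in plan.values())   # the job ends with its slowest sub-job
        return plan, makespan

    plan, eta = plan_subjobs(
        data_at_site={"eu": 200, "us": 50},
        rate_at_site={"eu": 2.0, "us": 0.5},
        bandwidth={("eu", "us"): 0.1, ("us", "eu"): 0.1},
    )
    print(plan, eta)   # both blocks stay local; estimated completion time is 100 s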

Paper Nr: 47
Title:

Proactive Learning from SLA Violation in Cloud Service based Application

Authors:

Ameni Meskini, Yehia Taher, Amal El gammal, Béatrice Finance and Yahya Slimani

Abstract: In recent years, business process management and service-based applications have been an active area of research in both the academic and industrial communities. The emergence of revolutionary ICT technologies such as the Internet-of-Things (IoT) and cloud computing has led to a paradigm shift that opens new opportunities for consumers, businesses, cities and governments; however, this significantly increases the complexity of such systems and in particular the engineering of Cloud Service-Based Applications (CSBAs). A crucial dimension in industrial practice is the non-functional service aspects, which are related to Quality-of-Service (QoS). Service Level Agreements (SLAs) define quantitative QoS objectives and are part of a contract between the service provider and the service consumer. Although significant work exists on how SLAs may be specified, monitored and enforced, few efforts have considered the problem of SLA monitoring in the context of CSBAs, which cater for the tailoring of services using a mixture of Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS) solutions. With a preventive focus, the main contribution of this paper is a novel learning and prediction approach for SLA violations, which generates models that are capable of proactively predicting upcoming SLA violations and suggesting recovery actions to react to such violations before their occurrence. A prototype has been developed as a Proof-of-Concept (PoC) to ascertain the feasibility and applicability of the proposed approach.

Area 6 - Services Science

Short Papers
Paper Nr: 51
Title:

Using a Predator-Prey Model to Explain Variations of Cloud Spot Price

Authors:

Zheng Li, William Tärneberg, Maria Kihl and Anders Robertsson

Abstract: The spot pricing scheme has been considered to be resource-efficient for providers and cost-effective for consumers in the Cloud market. Nevertheless, unlike the static and straightforward strategies of trading on-demand and reserved Cloud services, the market-driven mechanism for trading spot services is complicated both to implement and to understand. The largely invisible market activities and their complex interactions could especially make Cloud consumers hesitate to enter the spot market. To reduce the complexity in understanding the Cloud spot market, we decided to reveal the backend information behind spot price variations. Inspired by the methodology of reverse engineering, we developed a Predator-Prey model that can simulate the interactions between demand and resources based on the visible spot price traces. The simulation results have shown some basic regular patterns of market activities with respect to Amazon's spot instance type m3.large. Although the findings of this study need further validation using practical data, our work essentially suggests a promising approach (i.e. using a Predator-Prey model) to investigate spot market activities.
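
For readers unfamiliar with the model class, the sketch below integrates classical Lotka-Volterra (Predator-Prey) dynamics, with demand acting as the predator and spare resources as the prey, and derives a crude price proxy from their ratio. The coefficients, initial values and the price proxy are illustrative assumptions only; they are not the parameters the authors fit to Amazon's m3.large spot price traces.

    import numpy as np

    def predator_prey(resource0, demand0, alpha, beta, delta, gamma, dt=0.01, steps=2000):
        r, d = resource0, demand0
        trace = []
        for _ in range(steps):
            dr = alpha * r - beta * r * d     # resources replenish, get consumed by demand
            dd = delta * r * d - gamma * d    # demand grows on resources, decays otherwise
            r, d = r + dt * dr, d + dt * dd   # simple Euler step
            trace.append((r, d))
        return np.array(trace)

    trace = predator_prey(resource0=10.0, demand0=5.0,
                          alpha=1.0, beta=0.1, delta=0.075, gamma=1.5)
    price_proxy = trace[:, 1] / trace[:, 0]   # scarcity: demand relative to resources
    print(price_proxy.min(), price_proxy.max())   # oscillating "price" pattern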

Area 7 - Cloud Computing Fundamentals

Short Papers
Paper Nr: 59
Title:

Design and Evaluation of Automatic Workflow Scaling Algorithms for Multi-tenant SaaS

Authors:

Ankita Atrey, Hendrik Moens, Gregory Van Seghbroeck, Bruno Volckaert and Filip De Turck

Abstract: In current Cloud software development, novel Software-as-a-Service (SaaS) applications are, just like traditional software, usually no longer built from scratch. Instead, more and more Cloud developers are opting to use multiple existing components and integrate them in their application workflow. Scaling the resulting application up or down, depending on user/tenant load, in order to meet the SLA is no longer an issue of scaling resources for a single service, but rather a complex problem of scaling all individual service endpoints in the workflow, depending on their monitored runtime behavior. In this paper, we propose and evaluate algorithms, through CloudSim, for automatic and runtime scaling of such multi-tenant SaaS workflows. Our results on time-varying workloads show that the proposed algorithms are effective and produce the best cost-quality trade-off while keeping Service Level Agreements (SLAs) in line. Empirically, the proactive algorithm with careful parameter tuning always meets the SLAs while only suffering a marginal increase in average cost per service component of 5-8% over our baseline passive algorithm, which, although it provides the lowest cost, suffers from prolonged violation of service component SLAs.
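
To give a flavour of the difference between the two families of algorithms compared above, the sketch below contrasts a passive scaler, which only reacts once an endpoint is already overloaded, with a proactive one that scales on a simple load forecast before the SLA is violated. The thresholds, the linear forecast and the headroom factor are illustrative assumptions and not the algorithms evaluated in the paper.

    def passive_scale(instances, load, capacity_per_instance):
        # add capacity only once the endpoint is already overloaded
        while load > instances * capacity_per_instance:
            instances += 1
        return instances

    def proactive_scale(instances, load_history, capacity_per_instance, headroom=0.8):
        # forecast the next interval with a simple linear trend and keep
        # utilisation below the headroom factor
        trend = load_history[-1] - load_history[-2] if len(load_history) > 1 else 0
        predicted = load_history[-1] + trend
        while predicted > headroom * instances * capacity_per_instance:
            instances += 1
        while instances > 1 and predicted < headroom * (instances - 1) * capacity_per_instance:
            instances -= 1                  # scale down when safely over-provisioned
        return instances

    history = [40, 55, 75]                  # requests/s observed at one service endpoint
    print(passive_scale(2, history[-1], capacity_per_instance=50))   # 2: SLA not yet violated
    print(proactive_scale(2, history, capacity_per_instance=50))     # 3: predicted 95 > 80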

Paper Nr: 64
Title:

Toward an Understanding of Government Cloud Acceptance - A Quantitative Study of G-Cloud Acceptance by Saudi Government Agencies using Extended UTAUT

Authors:

Maha A. Al Rashed and Mutlaq B. Alotaibi

Abstract: With today's rapid advances in Information and Communication Technologies (ICT), an increasing number of governments worldwide are seeking solutions to enhance their IT infrastructures and services, and to reshape their e-government systems to meet the public's need for easily accessible, cost-effective, high-quality, and reliable e-services. In recent years, government cloud (G-Cloud) has emerged as a new and innovative computing paradigm offering a promising opportunity for many governments to rationalize the way they manage their services and resources. Government cloud's potential benefits have been recognized by many governments around the world. This paper studies the acceptance of cloud computing technologies and services in Saudi government agencies by investigating the significant and influential factors that affect the behavioral intention to use G-Cloud. Moreover, in light of the rising concerns over trust issues in cloud computing, which have been reported to be one of the major barriers to the adoption of the cloud, the study proposes an extended Unified Theory of Acceptance and Use of Technology (UTAUT) model by incorporating trust as a key factor in the acceptance of G-Cloud.

Paper Nr: 72
Title:

Towards a Goal-oriented Approach to Adaptable Re-deployment of Cloud-based Applications

Authors:

Patrizia Scandurra, Marina Mongiello, Simona Colucci and Luigi Alfredo Grieco

Abstract: Due to the on-demand and dynamic nature of the Cloud, there is increasing interest in automated management of the adaptation and (possibly) re-deployment of cloud applications, in order to realize quality requirements and evolution needs autonomously at run-time. This paper proposes a fast and automated approach for adapting and redeploying a cloud application at run-time as dictated by evolution needs and sudden changes in the operating environment conditions. The proposed approach exploits a graph-based model and an algorithm that extracts a sub-graph identifying the adaptation processes to be executed according to evolution changes. The approach is general enough to be implemented by any cloud application management framework. A TOSCA-based description of the structure and management aspects of the cloud application may be updated according to the above-mentioned sub-graph. Then, this description may be processed by a TOSCA-compliant runtime environment to effectively adapt and possibly re-deploy the cloud application in an automated manner. The paper also illustrates the instantiation of this generic approach for adapting an e-commerce cloud application.

Paper Nr: 77
Title:

Interactions in Service Provisioning Systems for Smart City Mobility Services

Authors:

Michael Strasser, Nico Weiner and Sahin Albayrak

Abstract: In times of smart cities and the Internet of Things and Services, a lot of data is produced. However, there is no benefit in collecting the data without processing it. Smart services are one possibility to enable data access for data processing. Smart services have attracted research on their domain and requirements, and benefits for the public as well as possible business models have been developed. This work addresses the way service consumers and service operators conduct business in an open B2B service marketplace. The paper presents and discusses the phases of a business relationship in a digital service environment as well as the sequence of business actions. A role-action framework for service provisioning systems is developed. It contributes to a better understanding of service provisioning systems and demonstrates what processes they consist of. It furthermore presents what needs to be done by the systems' participants to offer or consume services.

Paper Nr: 82
Title:

An Evolutionary Cultural Algorithm based Risk-aware Virtual Machine Scheduling Optimisation in Infrastructure as a Service (IaaS) Cloud

Authors:

Ming Jiang, Tom Kirkham and Craig Sheridan

Abstract: Cloud service reliability is one of the key common performance concerns of both the Cloud Service Provider (CSP) and the Cloud Service User (CSU). As the capability and scale of a Cloud infrastructure increase, the requirement of maintaining and improving the reliability of services becomes increasingly crucial for the CSP and CSU. Risk management is the process of analysing the potential risk factors associated with the reliability deterioration of a service provided by a CSP, assessing the uncertainties and consequences associated with this kind of deterioration, and finally identifying the system-wide appropriate mitigation strategies for risk treatment. In this paper, an evolutionary Cultural Algorithm based risk management method is proposed to facilitate the identification (i.e., probability and consequences) and treatment (i.e., mitigation) of Cloud infrastructure reliability related risks for Virtual Machine scheduling optimisation.

Paper Nr: 85
Title:

Energy-efficient Task Scheduling in Data Centers

Authors:

Yousri Mhedheb and Achim Streit

Abstract: A data center is often also a Cloud center, which delivers its computational and storage capacity as services. To enable on-demand resource provisioning with elasticity and high reliability, the host machines in data centers are usually virtualized, which brings a challenging research topic, i.e., how to schedule the virtual machines (VMs) on the hosts for energy efficiency. The goal of this work is to improve, through scheduling, the energy efficiency of data centers. To support this work, the design and implementation of a novel VM scheduling mechanism is proposed. This mechanism addresses both load balancing and temperature awareness, with the final goal of reducing the energy consumption of a data center. Our scheduling scheme selects a physical machine to host a virtual machine based on the user requirements, the load on the hosts and the temperature of the hosts, while maintaining the quality of the service. The proposed scheduling mechanism is finally validated on CloudSim, a well-known simulator that models data centers provisioning Infrastructure as a Service. For a comparative study, we also implemented other scheduling algorithms, i.e., non-power-control, DVFS and power-aware ThrMu. The experimental results show that the proposed scheduling scheme, combining power-aware with thermal-aware scheduling strategies, significantly reduces the energy consumption of a given data center thanks to its thermal-aware strategy and the support of VM migration mechanisms.
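
A minimal sketch of a combined load- and temperature-aware placement decision in the spirit of the mechanism described above: hosts that are too hot or too full are excluded, and the remaining candidates are ranked by a weighted score. The weights, the temperature limit and the host attributes are illustrative assumptions, not the policy implemented on CloudSim.

    def place_vm(vm_mips, hosts, temp_limit=75.0, w_load=0.6, w_temp=0.4):
        """hosts: list of dicts with 'free_mips', 'total_mips' and 'temp' (Celsius)."""
        best, best_score = None, float("inf")
        for host in hosts:
            if host["free_mips"] < vm_mips or host["temp"] >= temp_limit:
                continue                               # cannot host the VM, or too hot
            load_after = 1.0 - (host["free_mips"] - vm_mips) / host["total_mips"]
            score = w_load * load_after + w_temp * host["temp"] / temp_limit
            if score < best_score:
                best, best_score = host, score
        return best                                    # None: no suitable host, e.g. migrate elsewhere

    hosts = [
        {"name": "h1", "free_mips": 4000, "total_mips": 8000, "temp": 70.0},
        {"name": "h2", "free_mips": 3000, "total_mips": 8000, "temp": 45.0},
    ]
    print(place_vm(2000, hosts)["name"])               # "h2": cooler host wins despite higher load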

Area 8 - Data as a Service

Short Papers
Paper Nr: 90
Title:

A Big Data Analysis System for Use in Vehicular Outdoor Advertising

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: Outdoor advertising is an old industry and the only reliably growing advertising sector other than online advertising. However, for it to sustain this growth, media providers must supply a means of tracking an advertisement's effectiveness comparable to that of online advertising. The problem is a continual and emerging area of research for large outdoor advertising corporations, and as a result, smaller companies looking to join the market miss out on providing clients with valuable metrics due to a lack of resources. In this paper, we discuss the processes undertaken to develop software to be used as a means of better understanding the potential effectiveness of a fleet of private car, taxi or bus advertisements. Each of the steps presents unique challenges, including big data visualisation, performance data aggregation and the inherent inconsistencies and unreliability produced by tracking fleets using GPS. We cover how we increased the performance of the metric aggregation algorithm by roughly 20x, how we built an algorithm and process to render data heat maps on the server side, and how we built an algorithm to clean unwanted GPS 'jitter'.

Area 9 - Services Science

Short Papers
Paper Nr: 94
Title:

A Container-centric Methodology for Benchmarking Workflow Management Systems

Authors:

Vincenzo Ferme, Ana Ivanchikj, Cesare Pautasso, Marigianna Skouradaki and Frank Leymann

Abstract: Trusted benchmarks should provide reproducible results obtained following a transparent and well-defined process. In this paper, we show how Containers, originally developed to ease the automated deployment of Cloud application components, can be used in the context of a benchmarking methodology. The proposed methodology focuses on Workflow Management Systems (WfMSs), a critical service orchestration middleware, which can be characterized by its architectural complexity, for which Docker Containers offer a highly suitable approach. The contributions of our work are: 1) a new benchmarking approach taking full advantage of containerization technologies; and 2) the formalization of the interaction process with the WfMS vendors described clearly in a written agreement. Thus, we take advantage of emerging Cloud technologies to address technical challenges, ensuring the performance measurements can be trusted. We also make the benchmarking process transparent, automated, and repeatable so that WfMS vendors can join the benchmarking effort.

Area 10 - Cloud Computing Fundamentals

Short Papers
Paper Nr: 97
Title:

Towards Resilience Metrics for Future Cloud Applications

Authors:

Marko Novak, Syed Noorulhassan Shirazi, Aleksandar Hudic, Thomas Hecht, Markus Tauber, David Hutchison, Silia Maksuti and Ani Bicaku

Abstract: An analysis of new technologies can yield insight into the way these technologies will be used. Inevitably, new technologies and their uses are likely to result in new security issues regarding threats, vulnerabilities and attack vectors. In this paper, we investigate and analyse technological and security trends and their potential to become future threats by systematically examining industry reports on existing technologies. Using a cloud computing use case we identify potential resilience metrics that can shed light on the security properties of the system.

Area 11 - Services Science

Short Papers
Paper Nr: 100
Title:

Testing of Web Services using Behavior-Driven Development

Authors:

Ahmet Furkan Oruç and Tolga Ovatman

Abstract: Web services are commonly used for the communication of software over the web. To fully trust a web service, it should be tested and certified, but testing of web services poses new challenges. Behavior-Driven Development (BDD) can be applied to the testing of web services. The Gherkin language is used to define scenarios in BDD. We used the Gherkin language to define test cases for web services and developed a tool to convert these test cases into JMeter test scripts.
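
A minimal sketch of the first half of such a conversion, turning Gherkin-style steps into an intermediate list of HTTP test cases from which a JMeter test plan could then be generated. The step phrasings, the regular expressions and the example URL are illustrative assumptions, and the actual .jmx generation is omitted.

    import re

    SCENARIO = """
    Scenario: Fetch an existing user
      Given the base url "https://api.example.com"
      When I GET "/users/42"
      Then the response status should be 200
    """

    def parse_scenario(text):
        base_url, requests = None, []
        for line in text.strip().splitlines():
            line = line.strip()
            if m := re.match(r'Given the base url "(.+)"', line):
                base_url = m.group(1)
            elif m := re.match(r'When I (GET|POST|PUT|DELETE) "(.+)"', line):
                requests.append({"method": m.group(1), "path": m.group(2)})
            elif m := re.match(r"Then the response status should be (\d+)", line):
                requests[-1]["expected_status"] = int(m.group(1))
        return base_url, requests

    print(parse_scenario(SCENARIO))
    # ('https://api.example.com', [{'method': 'GET', 'path': '/users/42', 'expected_status': 200}])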

Area 12 - Cloud Computing Fundamentals

Short Papers
Paper Nr: 105
Title:

On the Next Generations of Infrastructure-as-a-Services

Authors:

Dana Petcu, Maria Fazio, Radu Prodan, Zhiming Zhao and Massimiliano Rak

Abstract: Following the wide adoption of cloud computing technologies by industry, we can talk about a second generation of cloud services and products that are currently in the design phase. However, it is not yet clear what the third generation of cloud products and services of the next decade will look like, especially at the delivery level of Infrastructure-as-a-Service. In order to answer such a challenging question at least partially, we initiated a literature overview and two surveys involving the members of a cluster of European research and innovation actions. The results are interpreted in this paper, and a set of topics of interest for the third generation is identified.

Paper Nr: 106
Title:

Methodology to Obtain the Security Controls in Multi-cloud Applications

Authors:

Samuel Olaiya Afolaranmi, Luis E. Gonzalez Moctezuma, Massimiliano Rak, Valentina Casola, Erkuden Rios and Jose L. Martinez Lastra

Abstract: Which controls should be used to ensure an adequate security level during operation is a non-trivial question in complex software systems and applications. The problem becomes even more challenging when the application uses multiple cloud services whose security measures are beyond the control of the application provider. In this paper, a methodology is presented that enables the identification of the best security controls for multi-cloud applications whose components are deployed in heterogeneous clouds. The methodology is based on application decomposition and the modelling of threats over the components, followed by the analysis of the risks together with the capture of cloud business and security requirements. The methodology has been applied in the MUSA EU H2020 project use cases as the first step in building up the multi-cloud applications' security-aware Service Level Agreements (SLAs). The identified security controls will be included in the applications' SLAs for their monitoring and fulfilment assurance during operation.

Area 13 - Data as a Service

Short Papers
Paper Nr: 107
Title:

Revisiting Arguments for a Three Layered Data Warehousing Architecture in the Context of the Hadoop Platform

Authors:

Qishan Yang and Markus Helfert

Abstract: Data warehousing has been accepted in many enterprises to arrange historical data, regularly provide reports, assist decision making, analyze data and mine potentially valuable information. Its architecture can be divided into several layers, from operational databases to presentation interfaces. Data all around the world is being created and growing explosively, and storing data or building a data warehouse via conventional tools or platforms may be time-consuming and exorbitantly expensive. This paper discusses a three-layered data warehousing architecture on a big data platform, in which HDFS (Hadoop Distributed File System) and the MapReduce mechanism are leveraged to store and manipulate data, respectively.

Area 14 - Cloud Computing Fundamentals

Short Papers
Paper Nr: 116
Title:

CLOUDLIGHTNING: A Framework for a Self-organising and Self-managing Heterogeneous Cloud

Authors:

Theo Lynn, Huanhuan Xiong, Dapeng Dong, Bilal Momani, George Gravvanis, Christos Filelis-Papadopoulos, Anne Elster, Malik Muhammad Zaki Murtaza Khan, Dimitrios Tzovaras, Konstantinos Giannoutakis, Dana Petcu, Marian Neagul, Ioan Dragon, Perumal Kuppudayar, Suryanarayanan Natarajan, Michael McGrath, Georgi Gaydadjiev, Tobias Becker, Anna Gourinovitch, David Kenny and John Morrison

Abstract: As clouds increase in size and machines of different types are added to the infrastructure in order to maximize performance and power efficiency, heterogeneous clouds are being created. However, exploiting different architectures poses significant challenges. Efficiently accessing heterogeneous resources and, at the same time, exploiting these resources to reduce application development effort, make optimisations easier and simplify service deployment requires a re-evaluation of our approach to service delivery. We propose a novel cloud management and delivery architecture based on the principles of self-organisation and self-management that shifts the deployment and optimisation effort from the consumer to the software stack running on the cloud infrastructure. Our goal is to address the inefficient use of resources and consequently to deliver savings to the cloud provider and consumer in terms of reduced power consumption and improved service delivery, with hyperscale systems particularly in mind. The framework is general but also endeavours to enable cloud services for high performance computing. Infrastructure-as-a-Service provision is the primary use case; however, we posit that genomics, oil and gas exploration, and ray tracing are three downstream use cases that will benefit from the proposed architecture.

Posters
Paper Nr: 50
Title:

A Repeatable Framework for Best Fit Cloud Solution

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: To respond to business challenges with agility, modern businesses have to evolve quickly to stay competitive. Unfortunately, in many situations, the proliferation of heterogeneous Information Technology shifts acts as a barrier to innovation instead of as a driving force. Crucially, this is due to the confusion they sometimes cause while Small and Medium Enterprises (SMEs) are trying to select the technology solution appropriate for a given business challenge, i.e. amidst various comparable options, claims, features and benefits from different technology vendors available in the market. To help small SMEs quickly make timely decisions on which technology solutions are appropriate for a given business challenge, given the vast array of solutions available in today's market, this paper proposes a guideline for an implementable solution for any SME with requirements similar to those of our chosen fictitious customer, called EPM. The paper covers main areas such as introducing a generic SME business case, and analysing hardware solutions and methods typically employed in cloud networks to reduce costs. The paper then introduces the solutions as a repeatable framework, which is critically analysed to find a suitable solution for the customer; this is then considered alongside any other cloud principles that could create a better-fitting solution for the customer.

Paper Nr: 54
Title:

Generic Cloud Computing Framework Understanding and Implementation

Authors:

Emmanuel Kayode Akinshola Ogunshile

Abstract: The rate of adoption of cloud services is increasing year on year as organisations realise the many benefits that moving operations onto a cloud-based platform provides. However, there is an argument to be made that, with the multitude of services that offer cloud solutions in various forms, choosing the right technology for a business is not straightforward, and cloud services may not always provide all the benefits suggested by the service providers. Using the case study of OneDrum Ltd as a comparison of compatibility, an analysis of current cloud providers as well as of the hardware that is used to provide cloud solutions is considered. With these examples, potential solutions for the OneDrum Ltd scenarios have been devised with the aim of creating transferable solutions for other small manufacturing businesses.

Paper Nr: 58
Title:

Disruption-resilient Publish and Subscribe

Authors:

Noor Ahmed and Bharat Bhargava

Abstract: The Publish and Subscribe (pub/sub) dissemination paradigm has emerged as a popular means of disseminating selective time-sensitive information. Through the use of an event service or broker, published information is filtered so that it is disseminated only to the subscribers interested in that information. Once a broker is compromised, information can be delivered unfiltered, dropped or delayed, perhaps with collusion among the brokers in virtualized cloud platforms. Such disruptive behavior is known as Byzantine faults. We present a Disruption-Resilient Publish and Subscribe (DRPaS) system designed to withstand such faults by continuously refreshing the virtual instances of the broker. DRPaS combines advances in the cloud management software stack (i.e., OpenStack nova and neutron) to control the broker's window of susceptibility to disruption. Preliminary experimental results show that the defensive security solutions enabled by the underlying cloud computing fabric are simpler and more effective than the ones implemented at the application/protocol level to withstand disruptions.

Paper Nr: 62
Title:

Survey of the Cloud Computing Standards Landscape 2015

Authors:

Bernd Becker, Emmanuel Darmois, Anders Kingstedt, Olivier Le Grand, Peter Schmitting and Wolfgang Ziegler

Abstract: Cloud Computing is increasingly used as the platform for IT infrastructure provisioning, application/systems development and end user support of a wide range of core services and applications for businesses and organisations. Cloud Computing is drastically changing the way IT is delivered and used. However, many challenges remain to be tackled. Concerns such as security, vendor lock-in, interoperability and accessibility are examples of issues that need to be addressed. Standards and certification programs play an important role in terms of increasing the market confidence in Cloud Computing. The availability of Cloud Computing standards and certification schemes that address current concerns will ensure that both customers/users and providers are likely to regard Cloud Computing with the same level of reliability, trust and maturity as traditional IT. In February 2015, the Cloud Standards Coordination Phase 2 (CSC-2) was launched by ETSI to address issues left open after the initial Cloud Standards Coordination work was completed at the end of 2013. CSC-2 is investigating some specific aspects of the Cloud Computing Standardization landscape, in particular from the point of view of Cloud Computing users (e.g., SMEs, Administrations). In this paper, we present the final results of this work.

Paper Nr: 69
Title:

Application Splitting in the Cloud: A Performance Study

Authors:

Franz Faul, Rafael Arizcorreta, Florian Dudouet and Thomas Michael Bohnert

Abstract: Cloud-based deployments have become more and more mainstream in recent years, with many companies evaluating moving their infrastructure to the cloud, whether a public cloud, a private cloud, or a mix of the two through the hybrid cloud concept. One service offered by many cloud providers is Database-as-a-Service, where a user is offered a direct endpoint and access credentials to a chosen type of database. In this paper, we evaluate the performance impact of application splitting in a hybrid Cloud environment. In this context, the database may be located in a cloud setting and the application servers on another cloud or on-premises, or the other way around. We found that for applications with low database latency and throughput requirements, moving to a public cloud environment can be a cost-saving solution. None of the cloud providers evaluated was able to provide comparable performance for database-heavy applications when compared to an optimized enterprise environment. Evaluating application splitting, we conclude that bursting to the cloud is a viable option in most cases, provided that the data is moved to the cloud before performing the requests.

Area 15 - Services Science

Posters
Paper Nr: 80
Title:

Towards Automatic Service Level Agreements Information Extraction

Authors:

Lucia De Marco, Filomena Ferrucci, M-Tahar Kechadi, Gennaro Napoli and Pasquale Salza

Abstract: Service Level Agreements (SLAs) are contracts co-signed by an Application Service Provider (ASP) and the end user(s) to regulate the services delivered through the Internet. They contain several clauses establishing, for example, the level of the services to guarantee, also known as quality of service (QoS) parameters, and the penalties to apply in case the requirements are not met during the SLA validity time. SLAs use legal jargon; indeed, they have legal validity in case of court litigation between the parties. A dedicated contract management facility should be part of the service provisioning because of the contractual importance and contents. Some work in the literature about these facilities relies on a structured language representation of SLAs in order to make them machine-readable. The majority of these languages are the result of private stipulations and are not available for public services, where SLAs are instead expressed in common natural language. In order to automate SLA management, in this paper we present an investigation towards SLA text recognition. We devised an approach to identify the definitions and the constraints included in SLAs using different machine learning techniques, and provide a preliminary assessment of the approach on a set of 36 publicly available SLA documents.
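
As an illustration of the kind of sentence-level classification involved (not the authors' actual models or features), the sketch below trains a bag-of-words classifier that labels SLA sentences as definitions, constraints or other. The tiny training set, the labels and the pipeline choices are invented for illustration and are not drawn from the 36 SLA documents used in the paper; it assumes scikit-learn is available.

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Invented example sentences and labels, purely for illustration.
    train_sentences = [
        '"Downtime" means any period during which the Service is unavailable.',
        '"Monthly Uptime Percentage" is calculated by subtracting downtime minutes.',
        "The Service will be available 99.9% of the time in any monthly billing cycle.",
        "Support requests must be answered within 24 hours.",
        "This agreement is governed by the laws of the applicable jurisdiction.",
        "Capitalized terms have the meaning given in the Master Agreement.",
    ]
    train_labels = ["definition", "definition", "constraint", "constraint", "other", "definition"]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(train_sentences, train_labels)

    print(model.predict(['"Service Credit" means a credit applied to future invoices.',
                         "The provider guarantees a response time below 200 ms."]))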

Area 16 - Cloud Computing Fundamentals

Posters
Paper Nr: 87
Title:

Empowering Services based Software in the Digital Single Market to Foster an Ecosystem of Trusted, Interoperable and Legally Compliant Cloud-Services

Authors:

Juncal Alonso Ibarra, Leire Orue-Echevarria, Marisa Escalante and Gorka Benguria

Abstract: The software industry has evolved from off-the-shelf applications deployed on dedicated servers, to Software as a Service based components running on public or private Clouds, and now to Cloud Service Brokers. Cloud service brokerages have emerged as digital intermediaries in the information technology (IT) services market (Shang, 2013), creating value for cloud computing clients and vendors alike. This paper presents an approach to foster next-generation cloud service brokers through an ecosystem of trusted, interoperable and legally compliant cloud services, built around an added-value Cloud Services intermediator. This ecosystem will offer, create, consume and assess trusted, interoperable, and standard Cloud Services, on which next-generation service-based software applications can be (semi-)automatically deployed.

Area 17 - Services Science

Posters
Paper Nr: 93
Title:

An Enhanced Workflow Scheduling Algorithm in Cloud Computing

Authors:

Nora Almezeini and Alaaeldin Hafez

Abstract: Cloud Computing has gained high attention by provisioning resources and software as a service. Over the years, the number of users of clouds has been increasing, and this will increase the number of tasks and the load in the cloud. Therefore, scheduling tasks efficiently and dynamically is a critical problem to be solved. There are many scheduling algorithms used in cloud computing, but most of them concentrate on minimizing time and cost, and some of them concentrate on increasing fault tolerance. However, very few scheduling algorithms consider time, cost, and fault tolerance at the same time. Moreover, considering pricing models when developing scheduling algorithms to provide cost-effective fault-tolerant techniques is still in its infancy. Therefore, analysing the impact of the different pricing models on a scheduling algorithm will lead to choosing the right pricing model that will not affect the cost. This paper proposes developing a scheduling algorithm that combines these features to provide an efficient mapping of tasks and improve Quality of Service (QoS).

Area 18 - Cloud Computing Fundamentals

Posters
Paper Nr: 96
Title:

Secure Cloud Reference Architectures for Measuring Instruments under Legal Control

Authors:

Alexander Oppermann, Jean-Pierre Seifert and Florian Thiel

Abstract: Cloud Computing has been a trending topic for years now and it seems it has finally become mature enough for widespread commercial application. In this paper, the authors describe their approach to establish a secure cloud architecture which conforms to the Measuring Instruments Directive of the European Union while keeping the flexibility and benefits that cloud computing promises for companies and customers alike. The authors introduce a modular concept of a secure cloud system architecture which will ensure cross-virtual machine collaboration and a legitimate, secure and protected flow of measurement data.

Paper Nr: 99
Title:

How Cloud Will Transform the Retail Banking Industry

Authors:

Stella Gatziu Grivas, Ruven Schürch and Claudio Giovanoli

Abstract: This paper focusses on current trends in the banking industry and illustrates how these trends can be supported by cloud computing. The main characteristics of cloud computing that could support transformation are facilitated data accessibility, the enabled processing of data from various sources and the opportunity for an easier integration of functions or data. Trends in the banking industry are increasing customer centricity, the redesigning of branches and the deployment of new communication and distribution channels. For each trend we report quotes and general information to provide an overview of the transformation caused by this trend. We identify which business processes are influenced and how they are affected, and we explain how cloud computing could support the identified changes.

Paper Nr: 104
Title:

Towards Modelling a Cloud Application's Life Cycle

Authors:

Reginald Butterfield, Silia Maksuti, Markus Tauber, Christian Wagner and Ani Bicaku

Abstract: The success of any cloud-based application depends on appropriate decisions being taken at each phase of the life cycle of that application coupled with the stage of the organisation’s business strategy at any given time. Throughout the life cycle of a cloud-based project, various stakeholders are involved. This requires a consistent definition of organizational, legal and governance issues regardless of the role of the stakeholder. We proffer that currently the models and frameworks that offer to support these stakeholders are predominantly IT focused and as such lack a sufficient focus on the business and its operating environment for the decision-makers to make strategic cloud related decisions that benefit their individual business model. We propose an emerging framework that provides a stronger platform on which to base cloud business decisions. We also illustrate the importance of this approach through extrapolating the subject of security from the initial Business Case definition phase, through the Decision Making phase and into the Application Development phase to strengthen the case for a comprehensive Business-based framework for cloud-based application decision-making. We envisage that this emerging framework will then be further developed around all phases of the Application Life Cycle as a means of ensuring consistency.

Area 19 - Cloud Computing Platforms and Applications

Full Papers
Paper Nr: 45
Title:

From Architecture Modeling to Application Provisioning for the Cloud by Combining UML and TOSCA

Authors:

Alexander Bergmayr, Uwe Breitenbücher, Oliver Kopp, Manuel Wimmer, Gerti Kappel and Frank Leymann

Abstract: Recent efforts to standardize a deployment modeling language for cloud applications resulted in TOSCA. At the same time, the software modeling standard UML supports architecture modeling from different viewpoints. Combining these standards from cloud computing and software engineering would allow engineers to refine UML architectural models into TOSCA deployment models that enable automatic provisioning of cloud applications. However, this refinement task is currently carried out manually by recreating TOSCA models from UML models because a conceptual mapping between the two languages as basis for an automated translation is missing. In this paper, we exploit cloud modeling extensions to UML called CAML as the basis for our approach CAML2TOSCA, which aims at bridging UML and TOSCA. The validation of our approach shows that UML models can directly be injected into a TOSCA-based provisioning process. As current UML modeling tools lack cloud-based refinement support for deployment models, the added value of CAML2TOSCA is emphasized because it provides the glue between architecture modeling and application provisioning.
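
As a hedged illustration of the kind of refinement the paper automates, the sketch below maps a UML-style deployment description onto TOSCA-like node templates via a stereotype-to-type table; the stereotypes, type names, output structure and mapping rules are simplified inventions and do not reproduce CAML or CAML2TOSCA.

    # Illustrative mapping sketch (not CAML2TOSCA itself): turn a UML-style
    # deployment description into TOSCA-like node templates.

    UML_DEPLOYMENT = [
        {"artifact": "PetShopApp", "stereotype": "WebApplication",
         "deployed_on": {"node": "AppServer", "stereotype": "TomcatServer"}},
    ]

    STEREOTYPE_TO_TOSCA = {          # hypothetical stereotype-to-node-type table
        "WebApplication": "tosca.nodes.WebApplication",
        "TomcatServer": "tosca.nodes.WebServer",
    }

    def to_node_templates(deployment):
        """Build a minimal TOSCA-like topology from the UML deployment elements."""
        templates = {}
        for element in deployment:
            host = element["deployed_on"]
            templates[host["node"]] = {"type": STEREOTYPE_TO_TOSCA[host["stereotype"]]}
            templates[element["artifact"]] = {
                "type": STEREOTYPE_TO_TOSCA[element["stereotype"]],
                "requirements": [{"host": host["node"]}],
            }
        return {"topology_template": {"node_templates": templates}}

    if __name__ == "__main__":
        import json
        print(json.dumps(to_node_templates(UML_DEPLOYMENT), indent=2))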

Paper Nr: 55
Title:

A Scalable Architecture for Distributed OSGi in the Cloud

Authors:

Hendrik Kuijs, Christoph Reich, Martin Knahl and Nathan Clarke

Abstract: Elasticity is one of the essential characteristics for cloud computing. The presented use case is a Software as a Service for Ambient Assisted Living that is configurable and extensible by the user. By adding or deleting functionality to the application, the environment has to support the increase or decrease of computational demand by scaling. This is achieved by customizing the auto scaling components of a PaaS management platform and introducing new components to scale a distributed OSGi environment across virtual machines. We present different scaling and load balancing scenarios to show the mechanics of the involved components.
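
A minimal sketch of the threshold-based scaling decision such a platform might make, written in plain Python rather than the paper's OSGi/PaaS components; the metric, thresholds and instance bounds are hypothetical.

    # Hypothetical threshold-based auto-scaling decision (not the paper's components).

    def scaling_decision(cpu_load, instances, upper=0.75, lower=0.25,
                         min_instances=1, max_instances=10):
        """Return the new instance count for a service given its average CPU load."""
        if cpu_load > upper and instances < max_instances:
            return instances + 1          # scale out: add another container/VM
        if cpu_load < lower and instances > min_instances:
            return instances - 1          # scale in: remove an under-utilised instance
        return instances                  # load within bounds, keep current size

    if __name__ == "__main__":
        print(scaling_decision(cpu_load=0.9, instances=2))   # -> 3
        print(scaling_decision(cpu_load=0.1, instances=2))   # -> 1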

Paper Nr: 108
Title:

LADY: Dynamic Resolution of Assemblies for Extensible and Distributed .NET Applications

Authors:

Steffen Viken Valvåg, Robert Pettersen, Håvard D. Johansen and Dag Johansen

Abstract: Distributed applications that span mobile devices, computing clusters, and the cloud, require robust and flexible mechanisms for dynamically loading code. This paper describes LADY, a system that augments the .NET platform with a highly reliable mechanism for resolving and loading assemblies and arranges for safe execution of partially trusted code. Key benefits of LADY are the low latency and high availability achieved through its novel integration with DNS.

Short Papers
Paper Nr: 42
Title:

Providing Security SLA in Next Generation Data Centers with SPECS: The EMC Case Study

Authors:

Valentina Casola, Massimiliano Rak, Silvio La Porta and Andrew Byrne

Abstract: Next generation Data Centers (ngDC) are cloud-based architectures devoted to offering infrastructure services in a flexible way, managing compute, network and storage services in an integrated manner. This solution is very attractive from an organisation's perspective, but one of the main challenges to adoption is the perceived loss of security and control over resources that are dynamically acquired in the cloud and reside with remote providers. For full adoption, datacenter customers need more guarantees about the security levels provided, creating the need for tools to dynamically negotiate and monitor security requirements. The SPECS project proposes a platform that offers security features with an as-a-service approach; furthermore, it uses Security Service Level Agreements (Security SLAs) as a means of establishing a clear, mutual agreement between customers and providers. This paper presents an industrial experience from EMC that integrates the SPECS Platform with its innovative solutions for ngDC. In particular, the paper illustrates how it is possible to negotiate, enforce and monitor a Security SLA in a cloud infrastructure offering.

Paper Nr: 102
Title:

IoT-A and FIWARE: Bridging the Barriers between the Cloud and IoT Systems Design and Implementation

Authors:

Alexandros Preventis, Kostas Stravoskoufos, Stelios Sotiriadis and Euripides G. M. Petrakis

Abstract: Today, IoT systems are designed and implemented to address specific challenges based on domain-specific requirements, without taking into consideration issues of openness, scalability, interoperability and use-case independence. As a result, they are less principled, lack standards, are vendor-oriented and are hardly replicable, since the same IoT architecture cannot be used in more than one use case. To address the fragmentation of existing IoT solutions, the IoT-A project proposes an architecture reference model that defines the principles and standards for generating IoT architectures and promoting the interoperation of IoT solutions. However, IoT-A addresses the architecture design problem and does not focus on whether existing cloud platforms can offer the tools and services to support the implementation of IoT-A compliant IoT systems. In this work we propose an architecture based on IoT-A that builds on the FIWARE open cloud platform, which in turn provides the building blocks of future Internet applications and services. We further correlate the FIWARE and IoT-A projects to identify the key features FIWARE needs in order to support IoT-A compliant system implementations.

Posters
Paper Nr: 10
Title:

Toward Cloud-based Classification and Annotation Support

Authors:

Tobias Swoboda, Michael Kaufmann and Matthias L. Hemmje

Abstract: Manually annotating existing documents with content-based categories is a time-consuming task for human domain experts. Automated text categorization is used to ease this effort. This paper evaluates the state of the art in cloud-based text categorization and proposes an architecture for flexible cloud-based classification and annotation support, leveraging the advantages provided by cloud-based architectures.

Rejecteds
Paper Nr: 2
Title:

RAID-5 Strategy in Virtual Hybrid Cloud (R5 VHC)

Authors:

Prem Kumar

Abstract: Cloud computing environments face many issues in storage management. Storage is the process of keeping data on a cloud server. This paper proposes using the VMware Cloud Hybrid Service, which is available in two service options, giving the user the flexibility and scalability needed to meet an organization's requirements. A dedicated cloud provides the organization with a physically isolated infrastructure with a RAID-5 facility, giving it its own private cloud instance run physically with the RAID-5 strategy and the most control over its resources. A virtual private cloud provides the organization with a logically isolated infrastructure, with fully private networking and resource pools.
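
The abstract refers to a RAID-5 strategy without detailing it; as background only, the sketch below illustrates the core RAID-5 mechanism (striping with XOR parity and single-block reconstruction). It is not the paper's cloud-specific strategy.

    # Background sketch of the RAID-5 mechanism: data blocks are striped across
    # devices and an XOR parity block lets any single lost block be rebuilt.

    def parity(blocks):
        """XOR parity over equally sized byte blocks."""
        result = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                result[i] ^= byte
        return bytes(result)

    if __name__ == "__main__":
        stripe = [b"AAAA", b"BBBB", b"CCCC"]         # data blocks on three devices
        p = parity(stripe)                           # parity stored on a fourth device
        lost = stripe[1]                             # simulate losing one device
        recovered = parity([stripe[0], stripe[2], p])
        assert recovered == lost
        print("recovered block:", recovered)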

Area 20 - Cloud Computing Enabling Technology

Full Papers
Paper Nr: 17
Title:

Unified Compliance Modeling and Management using Compliance Descriptors

Authors:

Falko Koetter, Maximilien Kintz, Monika Kochanowski, Christoph Fehling, Philipp Gildein, Frank Leymann and Anette Weisbecker

Abstract: Due to innovations in the field of cloud computing, business processes become distributed, encompassing a combination of services spanning multiple IT systems. Due to a growing number of regulations, managing business process compliance in this cloud environment is a necessary task for companies, leading to a growth in compliance management and compliance checking approaches. Compliance stems from laws and is implemented in all parts of enterprise IT. Thus, both a connection between business and IT and a broad coverage of compliance scenarios are necessary. To address both challenges, we use an integrating compliance descriptor for conceptual compliance modeling. This descriptor is used to configure a compliance management architecture, integrating different types of compliance checking. For creating compliance descriptors, it proved necessary to introduce a formalism and a graphical notation, which are introduced and evaluated in a prototype and in expert interviews.
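
To make the descriptor idea concrete, here is a hedged, hypothetical example of a declarative compliance descriptor that configures different types of checks; the field names and the rule are invented for illustration and do not reproduce the paper's formalism or notation.

    # Hypothetical compliance descriptor: a declarative record linking a regulation
    # to the checks a compliance-management architecture should configure.

    RETENTION_DESCRIPTOR = {
        "id": "C-Retention-01",
        "source": "National commercial law, art. 257 (example)",
        "rule": "Invoices must be stored for at least 10 years within the EU",
        "checks": [
            {"type": "design-time", "target": "process-model",
             "constraint": "an archive task follows every 'send invoice' task"},
            {"type": "runtime", "target": "event-log",
             "constraint": "archive event observed within 24h of invoice event"},
            {"type": "infrastructure", "target": "storage-service",
             "constraint": "data-centre region in EU"},
        ],
    }

    def configure_checkers(descriptor):
        """Dispatch each declared check to the matching (stubbed) checking component."""
        for check in descriptor["checks"]:
            print(f"registering {check['type']} check on {check['target']}: "
                  f"{check['constraint']}")

    if __name__ == "__main__":
        configure_checkers(RETENTION_DESCRIPTOR)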

Paper Nr: 19
Title:

Privacy-preserving Data Retrieval using Anonymous Query Authentication in Data Cloud Services

Authors:

Mohanad Dawoud and D. Turgay Altilar

Abstract: Recently, cloud computing became an essential part of most IT strategies. However, security and privacy issues are still the two main concerns that limit the widespread use of cloud services since the data is stored in unknown locations and retrieval of data (or part of it) may involve disclosure of sensitive data to unauthorized parties. Many techniques have been proposed to handle this problem, which is known as Privacy-Preserving Data Retrieval (PPDR). These techniques attempt to minimize the sensitive data that needs to be revealed. However, revealing any data to an unauthorized party breaks the security and privacy concepts and also may decrease the efficiency of the data retrieval. In this paper, different requirements are defined to satisfy a high level of security and privacy in a PPDR system. Moreover, a technique that uses anonymous query authentication and multi-server settings is proposed. The technique provides an efficient ranking-based data retrieval by using weighted Term Frequency-Inverse Document Frequency (TF-IDF) vectors. It also satisfies all of the defined security requirements that were completely unsatisfied by the techniques reported in the literature.
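
As an illustration of the ranking step only, the sketch below scores documents against a query using weighted TF-IDF vectors and cosine similarity with the Python standard library; the anonymous query authentication and multi-server protocol that make the scheme privacy-preserving are deliberately omitted.

    # Plain-text sketch of weighted TF-IDF ranking (no cryptography shown).
    import math
    from collections import Counter

    def tfidf(tokens, df, n):
        """Weighted TF-IDF vector (term -> weight) for one token list."""
        tf = Counter(tokens)
        return {t: (1 + math.log(c)) * math.log(n / df[t])
                for t, c in tf.items() if t in df}

    def cosine(u, v):
        dot = sum(u[t] * v.get(t, 0.0) for t in u)
        nu = math.sqrt(sum(x * x for x in u.values()))
        nv = math.sqrt(sum(x * x for x in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    if __name__ == "__main__":
        docs = ["cloud storage encryption",
                "query ranking in the cloud",
                "secure data retrieval from cloud storage"]
        tokenised = [d.split() for d in docs]
        n = len(docs)
        df = Counter(t for toks in tokenised for t in set(toks))
        doc_vecs = [tfidf(toks, df, n) for toks in tokenised]
        query_vec = tfidf("secure cloud storage".split(), df, n)
        ranking = sorted(range(n), key=lambda i: cosine(query_vec, doc_vecs[i]),
                         reverse=True)
        print(ranking)          # document indices ordered by relevance to the query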

Paper Nr: 49
Title:

A Method for Reusing TOSCA-based Applications and Management Plans

Authors:

Sebastian Wagner, Uwe Breitenbücher and Frank Leymann

Abstract: The automated provisioning and management of Cloud applications is supported by various general-purpose technologies that provide generic management functionalities such as scaling components or automatically redeploying parts of a Cloud application. However, if complex applications have to be managed, these technologies reach their limits and individual, application-specific processes must be created to automate the execution of holistic management tasks that cannot be implemented in a generic manner. Unfortunately, creating such processes from scratch is time-consuming, error-prone, and knowledge-intensive, thus, leading to inefficient developments of new applications. In this paper, we present an approach that tackles these issues by enabling the usage of choreographies to systematically combine available management workflows of existing application building blocks. Moreover, we show how these choreographies can be merged into single, executable workflows in order to enable their automated execution. To validate the approach, we apply the concept to the choreography language BPEL4CHOR and the Cloud standard TOSCA. In addition, we extend the Cloud application management ecosystem OpenTOSCA to support executing management choreographies.

Paper Nr: 73
Title:

Benchmarking Hadoop Performance in the Cloud - An in Depth Study of Resource Management and Energy Consumption

Authors:

Aymen Jlassi and Patrick Martineau

Abstract: Virtual technologies have proven their capability to ensure good performance in the context of high performance computing (HPC). During the last decade, big data tools have been emerging, and they have their own performance and infrastructure needs. With their breadth of experience in the HPC domain, experts can easily evaluate the infrastructures used to run big data tools. The outcome of this paper is an evaluation of two virtualization technologies in the context of big data tools. We compare the performance and energy consumption of Docker containers and VMware, and benchmark the Hadoop software (JoshBaer, 2015) in these environments. Firstly, the aim is to reduce the cost of deploying Hadoop in the cloud. Secondly, we discuss and analyze the assumptions learned from the HPC experiments and their applicability in the big data context. Thirdly, the Hadoop community gains an in-depth study of resource consumption depending on the deployment environment. We find that the use of Docker containers gives better performance in most experiments. Besides, energy consumption varies according to the executed workload.

Paper Nr: 115
Title:

Context-aware Security Models for PaaS-enabled Access Control

Authors:

Simeon Veloudis, Yiannis Verginadis, Ioannis Patiniotakis, Iraklis Paraskakis and Gregoris Mentzas

Abstract: Enterprises are embracing cloud computing in order to reduce costs and increase agility in their everyday business operations. Nevertheless, due mainly to confidentiality, privacy and integrity concerns, many are still reluctant to migrate their sensitive data to the cloud. In this paper, firstly, we outline the construction of a suitable Context-aware Security Model, for enhancing security in cloud applications. Secondly, we outline the construction of an extensible and declarative formalism for representing policy-related knowledge, one which disentangles the definition of a policy from the code employed for enforcing it. Both of them will be employed for supporting innovative PaaS-enabled access control mechanisms.

Short Papers
Paper Nr: 5
Title:

Monitoring Energy Consumption on the Service Level - A Procedure Model for Multitenant ERP Systems

Authors:

Hendrik Müller, Carsten Görling, Johannes Hintsch, Matthias Splieth, Sebastian Starke and Klaus Turowski

Abstract: In this paper, we describe a procedure model for monitoring the energy consumption of IT services. The model comprises the steps for identifying and extracting the required data, as well as a mathematical model to predict the energy consumption at both the infrastructure and the service level. Using the example of a distributed and shared ERP system, in which services are represented by ERP transactions, we evaluate the procedure model within a controlled experiment. The model was trained on monitoring data gathered by performing a benchmark that triggered more than 1,116,000 dialog steps initiated by 6000 simulated SAP ERP users. During the benchmark, we monitored the dedicated resource usage of each transaction in terms of CPU time, database request time and database calls, as well as the energy consumption of all servers involved in completing the transactions. Our procedure model enables IT service providers and business process outsourcers to assign their monitored hardware energy consumption, in watt-hours, to the ERP transactions that actually consume it, such as creating sales orders, changing outbound deliveries or creating billing documents. The resulting dedicated energy costs can be transferred directly to overlying IT products or to individual organizations that share a multitenant ERP system. The research is mainly relevant for practitioners, especially for internal and external IT service providers. Our results serve as an early contribution to a paradigm shift in the granularity of energy monitoring, which needs to be carried forward to comply with an integrated and product-oriented information management and the ongoing extensive use of cloud and IT service offerings in business departments.
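
The abstract does not give the mathematical model itself; as a hedged simplification, the sketch below fits server energy as a linear function of per-transaction resource metrics with a least-squares fit (NumPy assumed) and then attributes watt-hours to a single transaction's footprint. All numbers are invented.

    # Simplified sketch of service-level energy attribution (not the paper's model).
    import numpy as np

    # Per-interval monitoring samples: columns = [CPU seconds, DB request seconds, DB calls]
    X = np.array([[120.0,  40.0,  800.0],
                  [200.0,  70.0, 1300.0],
                  [ 90.0,  25.0,  500.0],
                  [310.0, 110.0, 2100.0]])
    y = np.array([55.0, 88.0, 40.0, 135.0])   # measured server energy per interval (Wh)

    # Least-squares fit: energy ~ a*cpu + b*db_time + c*db_calls (no intercept; idle
    # consumption would normally be apportioned separately).
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)

    def energy_wh(cpu_s, db_req_s, db_calls):
        """Predicted energy share (Wh) for one transaction's resource footprint."""
        return float(np.dot(coef, [cpu_s, db_req_s, db_calls]))

    print(energy_wh(cpu_s=0.8, db_req_s=0.3, db_calls=5))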

Paper Nr: 7
Title:

ppbench - A Visualizing Network Benchmark for Microservices

Authors:

Nane Kratzke and Peter-Christian Quint

Abstract: Companies like Netflix, Google, Amazon and Twitter have successfully exemplified elastic and scalable microservice architectures for very large systems. Microservice architectures are often realized by deploying services as containers on container clusters. Containerized microservices often use lightweight, REST-based communication mechanisms. However, this lightweight communication is often routed by container clusters through heavyweight software-defined networks (SDN). Services are often implemented in different programming languages, adding complexity to a system, which may result in decreased performance. Astonishingly, it is quite complex to figure out these impacts up front in a microservice design process, due to a lack of specialized benchmarks. This contribution proposes a benchmark intentionally designed for this microservice setting. We advocate that it is more useful to reflect fundamental design decisions and their performance impacts up front in the development of a microservice architecture rather than in the aftermath. We present some findings regarding the performance impacts of some TIOBE TOP 50 programming languages (Go, Java, Ruby, Dart), containers (Docker as type representative) and SDN solutions (Weave as type representative).
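
ppbench itself is not reproduced here; as a hedged stand-in, the sketch below samples request round-trip latency for several message sizes against a placeholder echo endpoint, using only the Python standard library.

    # Minimal latency sampler in the spirit of a ping-pong style REST benchmark
    # (not the ppbench tool). The endpoint URL and message sizes are placeholders.
    import time
    import statistics
    import urllib.request

    def sample_latency(url, payload_size, rounds=50):
        """Send `rounds` requests of `payload_size` bytes and return latencies in ms."""
        data = b"x" * payload_size
        latencies = []
        for _ in range(rounds):
            start = time.perf_counter()
            with urllib.request.urlopen(url, data=data) as resp:
                resp.read()
            latencies.append((time.perf_counter() - start) * 1000.0)
        return latencies

    if __name__ == "__main__":
        for size in (64, 1024, 65536):                      # message sizes in bytes
            lat = sample_latency("http://localhost:8080/echo", size)
            print(f"{size:>6} B: median {statistics.median(lat):.2f} ms, "
                  f"p95 {statistics.quantiles(lat, n=20)[-1]:.2f} ms")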

Paper Nr: 13
Title:

An Off-line Analytical Approach to Identify Suitable Management Policies for Autonomic Cloud Architecture

Authors:

Marwah Alansari and Behzad Bordbar

Abstract: Delivering cloud services with better quality of service demands infrastructures that are autonomic and self-manageable. In particular, there is a clear scope for developing automated methods for enforcing suitable management policies to run such infrastructures. An example of a management policy is one that governs the triggering of virtual machine migration to manage energy consumption. Although there is extensive research on developing novel methods of implementing such policies in an autonomic manner, the identification of suitable policies in terms of cost reduction has received less attention. This requires analysing two given sets of policies to identify which one is more suitable. This paper presents a method involving Coloured Petri Nets for the offline modelling and analysis of an autonomic cloud platform that executes sets of policies. We use traces of execution in Petri Nets to calculate the minimum cost associated with each set of policies. Petri Net models can generate infinite traces because of the presence of loops. However, as the migration of virtual machines entails cost, many of the infinite traces will not lead to the identification of the minimal cost. This paper presents an analytical method using Integer Programming to find the minimum cost of energy consumption for a given policy. We evaluated our approach with the help of an energy management case study.
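
Neither the Coloured Petri Net model nor the Integer Programming formulation is given in the abstract; as a toy stand-in, the brute-force sketch below minimises migration cost plus residual energy cost under a power cap, which is the kind of cost-minimisation an ILP would solve at scale. All figures are invented.

    # Toy stand-in for the cost-minimisation step: choose which VMs to migrate so
    # that an overloaded host falls below a power cap, minimising migration cost
    # plus residual energy cost. Brute force replaces the Integer Programming model.
    from itertools import product

    vms = [  # (name, power draw on current host in W, migration cost in cost units)
        ("vm1", 120, 5.0),
        ("vm2", 200, 9.0),
        ("vm3", 80,  3.0),
        ("vm4", 150, 6.0),
    ]
    POWER_CAP = 400          # target residual power draw on the host (W)
    ENERGY_PRICE = 0.02      # cost units per residual watt

    best = None
    for decision in product([0, 1], repeat=len(vms)):       # 1 = migrate this VM
        residual = sum(p for (_, p, _), d in zip(vms, decision) if d == 0)
        if residual > POWER_CAP:
            continue                                         # violates the cap
        cost = sum(c for (_, _, c), d in zip(vms, decision) if d == 1) \
               + ENERGY_PRICE * residual
        if best is None or cost < best[0]:
            best = (cost, decision)

    print("minimum cost:", best[0],
          "migrate:", [v[0] for v, d in zip(vms, best[1]) if d])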

Paper Nr: 28
Title:

Enabling GPU Virtualization in Cloud Environments

Authors:

Sergio Iserte, Francisco J. Clemente-Castelló, Adrián Castelló, Rafael Mayo and Enrique S. Quintana-Ortí

Abstract: The use of accelerators, such as graphics processing units (GPUs), to reduce the execution time of compute-intensive applications has become popular during the past few years. These devices increase the computational power of a node thanks to their parallel architecture. This trend has led cloud service providers such as Amazon, and middleware such as OpenStack, to add virtual machines (VMs) with GPUs to their offered instances. To fulfill these needs, the host machines must be equipped with GPUs which, unfortunately, will be barely utilized if a non-GPU-enabled VM is running on the host. The solution presented in this work is based on GPU virtualization and shareability in order to reach an equilibrium between service supply and the applications' demand for accelerators. Concretely, we propose to decouple real GPUs from the nodes by using the virtualization technology rCUDA. With this software configuration, GPUs can be accessed from any VM, avoiding the need to place physical GPUs in each host. Moreover, we study the viability of this approach using a public cloud service configuration, and we develop a module for OpenStack in order to add support for the virtualized devices and the logic to manage them. The results demonstrate that this is a viable configuration which adds flexibility to current and well-known cloud solutions.

Paper Nr: 63
Title:

Leveraging Use of Software-license-protected Applications in Clouds

Authors:

Wolfgang Ziegler, Hassan Rasheed and Karl Catewicz

Abstract: Running software-license-protected commercial applications in IaaS or PaaS cloud environments is still an issue that has not been resolved in a way that satisfies both the independent software vendor (ISV) and its customers. Due to the mandatory centralised control of license usage at application run-time, e.g. heartbeat control by the license server running at the home site of a user, traditional software licensing practices are not suitable, especially when the distributed environment stretches across administrative domains. Although there have been a few bilateral agreements between ISVs and cloud providers in the past, allowing customers of these ISVs to run some of the ISVs' license-protected applications in the clouds of certain providers, a general solution is still lacking. In this paper we present an approach to software licensing that allows location-independent use of software licenses, both by delegating already purchased on-site licenses to the cloud and through authorisations for individual application executions in the cloud.

Paper Nr: 74
Title:

Privacy-preserving Data Sharing in Portable Clouds

Authors:

Clemens Zeidler and Muhammad Rizwan Asghar

Abstract: Cloud storage is a cheap and reliable solution for users to share data with their contacts. However, the lack of standardisation and migration tools makes it difficult for users to migrate to another Cloud Service Provider (CSP) without losing contacts, resulting in a vendor lock-in problem. In this work, we aim at providing a generic framework, named PortableCloud, that is flexible enough to enable users to migrate seamlessly to a different CSP while keeping all their data and contacts. To preserve users' privacy, the data in the portable cloud is concealed from the CSP by employing encryption techniques. Moreover, we introduce a migration agent that assists users in automatically finding a suitable CSP that can satisfy their needs.
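
The abstract does not name the encryption scheme; as a hedged, generic illustration of concealing data from the CSP on the client side, the sketch below uses symmetric encryption via the third-party cryptography package (Fernet). Key management is reduced to a single in-memory key for brevity and does not reflect the paper's design.

    # Generic client-side encryption sketch (not PortableCloud's actual scheme).
    from cryptography.fernet import Fernet

    def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
        """Return a ciphertext blob that can be stored at any CSP."""
        return Fernet(key).encrypt(plaintext)

    def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
        """Recover the plaintext on the client (or after migration to a new CSP)."""
        return Fernet(key).decrypt(blob)

    if __name__ == "__main__":
        key = Fernet.generate_key()            # stays with the user, never with the CSP
        blob = encrypt_for_upload(b"address book entry", key)
        assert decrypt_after_download(blob, key) == b"address book entry"
        print("ciphertext length:", len(blob))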

Paper Nr: 89
Title:

Performance Analysis of an OpenStack Private Cloud

Authors:

Tamas Pflanzner, Roland Tornyai, Balazs Gibizer, Anita Schmidt and Attila Kertesz

Abstract: Cloud computing is a novel technology offering flexible resource provisioning that lets business stakeholders manage IT applications and data in response to new customer demands. It is not an easy task to determine the performance of ported applications in advance. The virtualized nature of these environments always introduces a certain level of performance degradation, which also depends on the types of resources and the application scenarios. In this paper we have set up a performance evaluation environment within a private OpenStack deployment, and defined general use cases to be executed and evaluated in this cloud. These test cases are used to investigate the internal behavior of OpenStack in terms of the computing and networking capabilities of its provisioned virtual machines. The results of our investigation reveal the performance of general usage scenarios in a local cloud, give an insight to businesses planning to move to the cloud and provide hints as to where further development or fine-tuning is needed in order to improve OpenStack systems.

Paper Nr: 103
Title:

Embedding Cloud Computing inside Supercomputer Architectures

Authors:

Patrick Dreher and Mladen Vouk

Abstract: Recently there has been a surge of interest in several prototype software systems that can embed a cloud computing image with user applications into a supercomputer’s hardware architecture. This position paper will summarize these efforts and comment on the advantages of each design and will also discuss some of the challenges that one faces with such software systems. This paper takes the position that specific types of user applications may favor one type of design over another. Different designs may have potential advantages for specific user applications and each design also brings a considerable cost to assure operability and overall computer security. A “one size fits all design” for a cost effective and portable solution for Supercomputer/cloud delivery is far from being a solved problem. Additional research and development should continue exploring various design approaches. In the end several different types of supercomputer/cloud implementations may be needed to optimally satisfy the complexity and diversity of user needs, requirements and security concerns. The authors also recommend that the community recognize a distinction when discussing cluster-type HPC/Cloud versus Supercomputer/Cloud implementations because of the substantive differences between these systems.

Paper Nr: 110
Title:

Availability Considerations for Mission Critical Applications in the Cloud

Authors:

Valentina Salapura and Ruchi Mahindru

Abstract: Cloud environments offer flexibility, elasticity, and a low-cost compute infrastructure. Enterprise-level workloads, such as SAP and Oracle workloads, require infrastructure with high availability, clustering, or physical server appliances. These features are often not part of a typical cloud offering, and as a result, businesses are forced to run enterprise workloads in their legacy environments. To enable enterprise customers to use these workloads in a cloud, we enabled a large number of SAP and Oracle workloads in IBM Cloud Managed Services (CMS) for both virtualized and non-virtualized cloud environments. In this paper, we discuss the challenges in enabling enterprise-class applications in the cloud, based on our experience of providing a diverse set of platforms implemented in the IBM CMS offering.

Paper Nr: 111
Title:

Sublimated Configuration of Infrastructure as a Service Deployments - MING: A Model- and View-Based Approach for Cloud Datacenters

Authors:

Ta’id Holmes

Abstract: To establish the basic cloud service model, a datacenter (DC) needs to deploy an infrastructure as a service (IaaS) solution. The planning, setup, implementation, and operation of DCs, involving both hardware and software, comprise multiple activities. At least the software-related aspects, such as IaaS deployment, can be automated. Yet, ahead of an automated installation, extensive configuration needs to take place. These configurations often relate to the design and characteristics of the respective DC. Using existing deployment technologies, however, information from various aspects is scattered and tangled. To avoid the respective drawbacks and resulting adverse effects, MING, a model-based approach, is presented. It decouples configuration from automated deployment technologies. This way, various further benefits of model-based engineering are leveraged, such as separation of concerns through view-based models, platform-independent representation of information, and the utilization of existing deployment technologies through code generation.
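
A hedged illustration of the general idea (not MING itself): hold DC-specific information in one technology-neutral model and generate a tool-specific artefact from it by code generation. The model fields, host naming scheme and the Ansible-style inventory format are inventions for this sketch.

    # Illustrative model-to-configuration generation (not MING's models or views).
    DC_MODEL = {
        "name": "dc-example-01",
        "controller_nodes": 3,
        "compute_nodes": 40,
        "management_network": "10.10.0.0/24",
        "storage_backend": "ceph",
    }

    INVENTORY_TEMPLATE = """\
    # generated from model '{name}' -- do not edit by hand
    [controllers]
    {controllers}

    [computes]
    {computes}

    [all:vars]
    management_network={management_network}
    storage_backend={storage_backend}
    """

    def generate_inventory(model):
        """Render a deployment-tool-specific artefact from the neutral model."""
        controllers = "\n".join(f"ctl{i:02d}" for i in range(1, model["controller_nodes"] + 1))
        computes = "\n".join(f"cmp{i:03d}" for i in range(1, model["compute_nodes"] + 1))
        return INVENTORY_TEMPLATE.format(
            name=model["name"], controllers=controllers, computes=computes,
            management_network=model["management_network"],
            storage_backend=model["storage_backend"])

    if __name__ == "__main__":
        print(generate_inventory(DC_MODEL))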

Posters
Paper Nr: 24
Title:

Towards a Proof-based SLA Management Framework - The SPECS Approach

Authors:

Miha Stopar, Jolanda Modic, Dana Petcu and Massimiliano Rak

Abstract: We present a framework that allows monitoring of cloud-based applications and environments to verify the fulfilment of Service Level Agreements (SLAs), and to analyse and remediate detectable security breaches that compromise the validity of SLAs related to storage services. In particular, we describe a system that facilitates identification of the root cause of each violation of the integrity, write-serializability and read-freshness properties. Such a system enables the execution of remediation actions specifically planned for detectable security incidents. The system is activated in an automated way on top of storage services, according to an SLA, which can be negotiated with customers.

Paper Nr: 38
Title:

A Pattern for Enabling Multitenancy in Legacy Application

Authors:

Flavio Corradini, Francesco De Angelis, Andrea Polini and Samuele Sabbatini

Abstract: Multitenancy is one of the new properties of the cloud computing paradigm that change the way software is developed. The concept consists in aggregating different tenants in one single instance, in contrast with the classic single-tenant approach. The aim of multitenancy is to reduce costs: less hardware is needed than for single-tenant applications, and maintenance of the system is also less expensive. On the other hand, applications need a high level of configurability in order to satisfy the requirements of each tenant. This paper presents a pattern that enables legacy applications to handle a multitenant database. After presenting the different approaches to implementing multitenancy at the database level, we propose a pattern that interacts with this kind of database, managing the customizations of the different tenants at the database level.
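
To make the idea tangible, the sketch below shows one well-known database-level multitenancy approach, a shared schema with a tenant discriminator column, wrapped so that application code never builds the tenant filter itself. The table, column and class names are invented and do not reproduce the paper's pattern.

    # Shared-schema multitenancy sketch (illustrative, not the paper's pattern).
    import sqlite3

    class TenantAwareDB:
        def __init__(self, conn, tenant_id):
            self.conn = conn
            self.tenant_id = tenant_id

        def insert_order(self, amount):
            self.conn.execute(
                "INSERT INTO orders (tenant_id, amount) VALUES (?, ?)",
                (self.tenant_id, amount))

        def list_orders(self):
            # Every read is automatically restricted to the caller's tenant.
            return self.conn.execute(
                "SELECT amount FROM orders WHERE tenant_id = ?",
                (self.tenant_id,)).fetchall()

    if __name__ == "__main__":
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (tenant_id TEXT, amount REAL)")
        a, b = TenantAwareDB(conn, "tenant-a"), TenantAwareDB(conn, "tenant-b")
        a.insert_order(10.0)
        b.insert_order(99.0)
        print(a.list_orders())   # only tenant-a's rows: [(10.0,)]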

Paper Nr: 101
Title:

Towards a Case-based Reasoning Approach for Cloud Provisioning

Authors:

Eric Kübler and Mirjam Minor

Abstract: Resource provisioning is an important issue in cloud computing. Most recent cloud solutions implement a simple approach with static thresholds to provision resources. Some more sophisticated approaches treat cloud provisioning as a multi-dimensional optimization problem. However, the calculation effort for solving optimization problems is significant. Intelligent resource provisioning with a reduced calculation effort requires smart cloud management methods. In this position paper, we propose a case-based reasoning approach for cloud management. A case records a problem situation in cloud management and its solution. We introduce a case model and a retrieval method for previously solved problem cases, with the aim of reusing their re-configuration actions for a recent problem situation. The case model uses the container notion correlated with QoS problems. We present a novel, composite similarity function that allows a recent problem situation to be compared with cases from the past. During retrieval, the similarity function creates a ranking of the cases according to their relevance to the current problem situation. Further, we describe the prototypical implementation of the core elements of our case-based reasoning concept. The plausibility of the retrieval approach has been tested by means of sample cases with simulated data.
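
The paper's case model and similarity function are not reproduced here; as a hedged toy version, the sketch below ranks stored cases against a current problem situation with a weighted composite of simple local similarity measures. Attributes, weights and solutions are invented.

    # Toy case retrieval with a weighted composite similarity function.
    CASES = [
        {"problem": {"cpu": 0.95, "mem": 0.60, "latency_ms": 40, "qos_violation": "response_time"},
         "solution": "add one container to the web tier"},
        {"problem": {"cpu": 0.40, "mem": 0.92, "latency_ms": 25, "qos_violation": "memory"},
         "solution": "migrate container to a larger node"},
    ]

    WEIGHTS = {"cpu": 0.35, "mem": 0.35, "latency_ms": 0.2, "qos_violation": 0.1}

    def local_sim(attr, a, b):
        if attr == "qos_violation":                 # symbolic attribute: exact match
            return 1.0 if a == b else 0.0
        scale = 100.0 if attr == "latency_ms" else 1.0
        return max(0.0, 1.0 - abs(a - b) / scale)   # numeric attribute: scaled distance

    def composite_sim(query, case):
        return sum(w * local_sim(attr, query[attr], case["problem"][attr])
                   for attr, w in WEIGHTS.items())

    def retrieve(query):
        """Rank stored cases by similarity to the current problem situation."""
        return sorted(CASES, key=lambda c: composite_sim(query, c), reverse=True)

    if __name__ == "__main__":
        now = {"cpu": 0.90, "mem": 0.55, "latency_ms": 55, "qos_violation": "response_time"}
        print(retrieve(now)[0]["solution"])   # most similar case's re-configuration action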