CLOSER 2014 Abstracts


Area 1 - Cloud Computing Fundamentals

Full Papers
Paper Nr: 22
Title:

Transparent Access to Relational Databases in the Cloud using a Multi-tenant ESB

Authors:

Steve Strauch, Vasilios Andrikopoulos, Santiago Goméz Sáez and Frank Leymann

Abstract: In recent years, Cloud computing has become popular among IT organizations aiming to reduce their operational costs. Applications can be designed to run in the Cloud, or can be partially or completely migrated to it. Migrating the data layer of an application to the Cloud, however, implies that existing applications might need to be adapted in order to access their databases after these have been migrated. In this work we examine how an existing ESB can be used to enable transparent access to a relational data store running either in the Cloud or on-premise. The goal of our approach is to minimize the effort required to adapt the application. In particular, we discuss the requirements and prototype realization of a Cloud-aware data access layer for transparent data access, using an existing open source, multi-tenant aware ESB as the basis. We then evaluate the performance of our proposed solution by considering different Cloud providers and using example data from an existing benchmark as application workload.

Paper Nr: 36
Title:

Shadows on the Cloud: An Energy-aware, Profit Maximizing Resilience Framework for Cloud Computing

Authors:

Xiaolong Cui, Bryan Mills, Taieb Znati and Rami Melhem

Abstract: As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting negotiated SLAs, in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is further compounded by the growing likelihood of failure in large-scale clouds and the rising cost of energy consumption. This paper proposes Shadow Replication, a novel profit-maximization resiliency model which seamlessly addresses failure at scale while minimizing energy consumption. The basic tenet of the model is to associate with the main process a suite of shadow processes that execute concurrently, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve shadow replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction and is better suited for compute-intensive execution models, where profit increases of up to 30% can be achieved.

Paper Nr: 47
Title:

A SWRL Bridge to XACML for Clouds Privacy Compliant Policies

Authors:

Hanene Boussi Rahmouni, Marco Casassa Mont, Kamran Munir and Tony Solomonides

Abstract: The management of privacy and personal information within multi-cultural domains such as clouds and other universal collaborative systems requires intrinsic compliance-checking and assurance modules in order to increase social trust and acceptance. In medical domains in particular, this issue is important due to the sensitivity of health-related data in international data protection law. The use of ontologies and semantic technologies can provide relatively easy interpretation of legislation at run time, and can allow the logging of data access events to serve future audits. However, the enforcement of semantic web rules (SWRL rules) on complex and heterogeneous architectures is expensive and might introduce runtime overheads. We believe a mapping of our semantic web privacy policies to a standard access control language such as XACML would be a useful alternative. A translation to XACML would allow the integration of these policies with existing security and privacy policies adopted in cloud environments. This paper describes a mathematical formalism for mapping SWRL (Semantic Web Rule Language) privacy rules to XACML policies and also explains the underlying implementation requirements of this formalism.

Paper Nr: 52
Title:

Procurement Auctions to Maximize Players’ Utility in Cloud Markets

Authors:

Paolo Bonacquisto, Giuseppe Di Modica, Giuseppe Petralia and Orazio Tomarchio

Abstract: Cloud computing technology has reached a high level of maturity. This is witnessed not just by the interest of the academic community around it, but also by its wide adoption in commercial scenarios. Today many big IT players make huge profits from leasing their computing resources “on-demand”, i.e., letting customers ask for the amount of resources they need and charging them a fixed price for each hour of consumption. Recently, studies authored by economists have criticized the fixed pricing applied to cloud resources, claiming that a better pricing model can be devised which may increase profit for both the vendor and the consumer. In this paper we investigate how to apply the mechanism of procurement auctions to the problem, faced by providers, of allocating unused resources. In particular, we focus on the strategies providers may adopt in the context of procurement auctions to maximize the utilization of their data centers. We devise a strategy which dynamically adapts to changes in the auction context and which providers may tune to accommodate their business needs. Further, overbooking of resources is also considered as an optional strategy providers may adopt to increase their revenue. Simulations conducted on a testbed show that the proposed approach is viable.
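The abstract describes a procurement (reverse) auction in which providers compete to lease out unused capacity. The paper's actual bidding strategy is not given here; the following is only a minimal sketch of one auction round under simple assumptions (lowest asking price wins, greedy allocation), with all names and numbers invented for illustration.

```python
# Hypothetical sketch of one procurement (reverse) auction round: providers
# bid asking prices for their unused capacity, and the cheapest bids win.
from dataclasses import dataclass

@dataclass
class Bid:
    provider: str
    price_per_hour: float  # asking price for one resource unit
    capacity: int          # unused units the provider offers

def run_auction(bids, demand):
    """Allocate `demand` units to the cheapest bids first (greedy)."""
    allocation = {}
    for bid in sorted(bids, key=lambda b: b.price_per_hour):
        if demand <= 0:
            break
        granted = min(bid.capacity, demand)
        allocation[bid.provider] = granted
        demand -= granted
    return allocation

bids = [Bid("A", 0.10, 5), Bid("B", 0.07, 3), Bid("C", 0.12, 10)]
print(run_auction(bids, 6))  # B's cheaper capacity is exhausted first
```

A real strategy, as the abstract notes, would additionally adapt the asking prices to the auction context and possibly overbook capacity.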

Short Papers
Paper Nr: 8
Title:

A Template Description Framework for Services as a Utility for Cloud Brokerage

Authors:

Li Zhang, Frank Fowley and Claus Pahl

Abstract: Integration and mediation are two core functions that a cloud service broker needs to perform. The description of services involved plays a central role in this endeavour to enable services to be considered as commoditised utilities. We propose a conceptual framework for a cloud service broker based on two parts: a reference architecture for cloud brokers and a service description template that describes the mediated and integrated cloud services. Structural aspects of that template will be identified, formalised in an ontology and mapped onto a set of sublanguages that can be aligned to the cloud development and deployment process.

Paper Nr: 29
Title:

Performance Prediction for Unseen Virtual Machines

Authors:

John O’Loughlin and Lee Gillam

Abstract: Various papers have reported on the differential performance of virtual machine instances of the same type, and same supposed performance rating, in Public Infrastructure Clouds. It has been established that instance performance is determined in large part by the underlying hardware, and that performance variation is due to the heterogeneous nature of large and growing Clouds. Currently, customers have limited ability to request performance levels; they can only identify the physical CPU backing an instance, and so associate CPU models with expected performance levels, once resources have been obtained. Little progress has been made on predicting likely performance for instances on such Public Clouds. In this paper, we demonstrate how such performance predictions could be provided, predicated on knowledge derived empirically from one common Public Infrastructure Cloud.

Paper Nr: 30
Title:

Cloud Computing Adoption Factors and Processes for Enterprises - A Systematic Literature Review

Authors:

Rania Fahim El-Gazzar

Abstract: Cloud computing (CC) has received increasing interest from enterprises since its inception. With its innovative Information Technology (IT) service delivery model, CC can add technical and strategic business value to enterprises. However, it raises serious internal and external concerns. This paper presents a systematic literature review that explores cloud computing adoption processes in the context of enterprise users and the factors that affect these processes. This is achieved by reviewing 37 articles published about CC adoption. Using the grounded theory approach, the articles are classified into eight main categories: internal, external, evaluation, proof of concept, adoption decision, implementation and integration, IT governance, and confirmation. These are condensed into two abstract categories, CC adoption factors and CC adoption processes, where the former affect the latter. The results of this review indicate that there are serious issues that need to be tackled before enterprises decide to adopt CC. Based on its findings, the paper provides future Information Systems (IS) research directions toward previously under-investigated areas of the phenomenon, including a call for further theoretical and in-depth empirical contributions to the area of CC adoption by enterprises.

Paper Nr: 35
Title:

Automatic Software Development as a Service (ASDaaS)

Authors:

Hind Benfenatki, Catarina Ferreira da Silva, Nabila Benharkat and Parisa Ghodous

Abstract: Cloud-based services have become a norm for business application development. With Cloud Computing and the convergence toward Everything as a Service (XaaS), we no longer consider the classical context of application development, where IT teams or integrators are solicited. Current approaches in Cloud environments are usually designed for a specific Cloud platform; moreover, they are designed only for technical users. To overcome the lack of a generic and complete methodology for business application development, we propose a methodology for Automatic Software Development as a Service (ASDaaS), which is designed for non-technical users and promotes service reuse. In this paper, we focus on the business software requirement-gathering phase of our methodology. We define the requirement vocabulary based on linked data principles, and extend the Linked-USDL language to describe business stakeholder requirements as service functions, business constraints, user preferences and QoS parameters. Our approach is illustrated with an e-commerce example.

Paper Nr: 41
Title:

Iris: An Inter-cloud Resource Integration System for Elastic Cloud Data Centers

Authors:

Ryousei Takano, Atsuko Takefusa, Hidemoto Nakada, Seiya Yanagita and Tomohiro Kudoh

Abstract: This paper proposes a new cloud computing service model, Hardware as a Service (HaaS), based on the idea of “elastic data centers” that provide a data center administrator with resources located at different data centers as demand requires. To demonstrate the feasibility of the proposed model, we have developed what we call an Inter-cloud Resource Integration System (Iris) using nested virtualization and OpenFlow technologies. Iris dynamically configures and provides a virtual infrastructure over inter-cloud resources, on which an IaaS cloud can run. Using Iris, we have confirmed that an IaaS cloud can seamlessly extend and manage resources over multiple data centers. The experimental results on an emulated inter-cloud environment show that the overheads of the HaaS layer are acceptable when the network latency is less than 10 ms. In addition, we reveal the large overhead introduced by nested virtualization and show a positive prospect for addressing this problem. We believe these results provide new insight to help establish inter-cloud computing.

Paper Nr: 59
Title:

A Survey on Approaches for Interoperability and Portability of Cloud Computing Services

Authors:

Kostas Stravoskoufos, Alexandros Preventis, Stelios Sotiriadis and Euripides G. M. Petrakis

Abstract: Over recent years, the rapid development of Cloud Computing has led to a large market of cloud services that offer infrastructure, platforms and software to everyday users. Yet, due to the lack of commonly accepted standards, cloud service providers use different technologies and offer their clients services operated through a variety of proprietary APIs. This lack of standardization results in numerous heterogeneities (e.g., heterogeneous service descriptions, message-level naming conflicts, data representation conflicts, etc.), making the interoperation, collaboration and portability of services a very complex task. In this work we focus on the problems of interoperability and portability in Cloud Computing, we clarify their differences, and we discuss some of the latest research in this area. Finally, we evaluate and point out relationships between the identified solutions.

Paper Nr: 71
Title:

An Open-Source Framework for Integrating Heterogeneous Resources in Private Clouds

Authors:

Julio Proaño, Carmen Carrión and Blanca Caminero

Abstract: Heterogeneous hardware acceleration architectures are becoming an important tool for achieving high performance in cloud computing environments. Both FPGAs and GPUs provide huge computing capabilities over large amounts of data. At the same time, energy efficiency is a big concern in today's computing systems. In this paper we propose a novel architecture aimed at integrating hardware accelerators into a Cloud Computing infrastructure and making them available as a service. The proposal harnesses advances in FPGA dynamic reconfiguration and efficient virtualization technology to accelerate the execution of certain types of tasks. In particular, applications that can be described as Big Data would greatly benefit from the proposed architecture.

Paper Nr: 77
Title:

A Simple and Generic Interface for a Cloud Monitoring Service

Authors:

Augusto Ciuffoletti

Abstract: The paper addresses the definition of an ontology for cloud monitoring activities, with the aim of defining a standard interface for their configuration. To be widely adopted, such an ontology must be extremely flexible, coping with a wide range of use cases: from the minimalist plug-and-play user to the one governing a complex infrastructure. Our work is based on the Open Cloud Computing Interface (OCCI), an open, community-driven OGF standard allowing boundary-level interfaces to be built using RESTful patterns over HTTP. Among others, OpenStack and OpenNebula adopt OCCI. Using the OCCI ontology we define two kinds that are associated with the basic components of a monitoring infrastructure: the collector link, which performs measurements, and the sensor resource, which aggregates data and undertakes actions. This paper is a compact and self-contained revision of a document currently under discussion inside the OCCI community.

Paper Nr: 80
Title:

Model-based Approach to Automatic Software Deployment in Cloud

Authors:

Franklin Magalhães Ribeiro Junior and Tarcísio da Rocha

Abstract: Cloud computing provides resources that reduce software processing costs in IT companies. Automatic mechanisms for software deployment to cloud providers exist; however, they demand manual coding. In this paper we present a model-based approach to automatic software deployment in cloud environments. We give a brief literature review of existing proposals for automatic software deployment in the cloud. Of the proposals we analyzed, five use deployment mechanisms based on scripts or a programming language, two are based on manual mechanisms, and two use a model-based approach to software deployment in the cloud; of the latter, however, one is still strongly tied to manual steps and the other is complex to model with. This paper presents a new detailed architecture, a use case and the conceptual view of our model-based approach to automatic software deployment in the cloud. The approach aims to reduce the human effort and time needed to deploy services in the cloud, using UML deployment diagrams as input, in order to perform deployment at the highest possible abstraction layer.

Paper Nr: 87
Title:

Cost-efficient Capacitation of Cloud Data Centers for QoS-aware Multimedia Service Provision

Authors:

Ronny Hans, Ulrich Lampe, Michael Pauly and Ralf Steinmetz

Abstract: Cloud infrastructure is increasingly used for the provision of sophisticated multimedia services, such as cloud gaming or Desktop as a Service, with stringent Quality of Service demands. Serving these demands results in the need to cost-efficiently select and capacitate data centers. In the work at hand, we introduce the corresponding Cloud Data Center Capacitation Problem and propose two optimization approaches. Through a quantitative evaluation, we demonstrate that an exact solution approach is only practically applicable to small problem instances, whereas a heuristic based on Linear Program relaxation achieves significant reductions in computation time of about 80% while retaining a favorable solution quality, with cost increases of approximately 5% or less.

Paper Nr: 102
Title:

Storage CloudSim - A Simulation Environment for Cloud Object Storage Infrastructures

Authors:

Tobias Sturm, Foued Jrad and Achim Streit

Abstract: Since Cloud services are billed on the pay-as-you-go principle, organizations can save huge investment costs. Hence, they want to know what costs will arise from the usage of those services. On the other hand, Cloud providers want to provide the best-matching hardware configurations for different use cases. To this end, CloudSim, a popular event-based framework by Calheiros et al., was developed to model and simulate the usage of IaaS (Infrastructure-as-a-Service) Clouds. Metrics like usage costs, resource utilization and energy consumption can also be investigated using CloudSim. However, this popular simulation framework does not provide any mechanisms to simulate today's object-storage-based Cloud services (STaaS, Storage-as-a-Service). In this paper, we propose a storage extension for CloudSim to enable the simulation of STaaS components. Interactions between users and the modeled STaaS Clouds are inspired by the CDMI (Cloud Data Management Interface). In order to validate our extension, we evaluated the resource utilization and costs that arise from the usage of STaaS Clouds in different simulation scenarios.

Paper Nr: 107
Title:

Robust Cloud Monitor Placement for Availability Verification

Authors:

Melanie Siebenhaar, Ulrich Lampe, Dieter Schuller and Ralf Steinmetz

Abstract: While cloud computing provides a high level of flexibility, it also implies a shift of responsibility to the cloud provider and thus a loss of control for cloud consumers. Although existing means such as service level agreements or the monitoring solutions offered by cloud providers aim to address this issue, there is still a low degree of trust on the consumer side that cloud providers properly measure compliance against SLAs. A solution lies in designing reliable means for monitoring cloud-based services from a consumer's perspective. We proposed such a monitoring approach in our former work. However, our experiments revealed that the approach is sensitive to network impairments. Hence, in the work at hand, we introduce the Robust Cloud Monitor Placement Problem and present a formal optimization model. Based on the model, we propose an initial optimization approach that allows an exact solution to be obtained using off-the-shelf algorithms.

Paper Nr: 121
Title:

Migrating Relational Databases to the Cloud - Rethinking the Necessity of Rapid Elasticity

Authors:

Kevin Williams

Abstract: Rapid Elasticity is often described as an essential characteristic of cloud computing, but there are good reasons to rethink how it is described and implemented, especially as it relates to transaction processing relational databases, which are broadly used in many organizations. Relational databases that support transaction processing strictly adhere to what has been called the ACID compliance model, where the Atomicity, Consistency, Isolation, and Durability of transactions are guaranteed to ensure a reliable transaction system. Databases in the cloud often sacrifice one or more of these essential ACID properties to achieve the desired Rapid Elasticity. This conflict between Rapid Elasticity and ACID compliance explains why relatively few existing transaction processing relational databases have been deployed to the cloud without undergoing significant revision. This paper argues for an expanded definition of the essential characteristic of cloud computing on which the underlying goal of Rapid Elasticity is based, but where ACID compliance remains intact and many of the advantages of cloud computing can be utilized.
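The Atomicity guarantee at the heart of the ACID model discussed above can be illustrated with a few lines of standard-library SQLite (not from the paper; account names and amounts are invented): if any statement in a transaction fails, the whole transaction is rolled back.

```python
# Minimal illustration of Atomicity in ACID: either every statement in the
# transaction commits, or none does.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # opens a transaction; rolls back if the block raises
        conn.execute("UPDATE accounts SET balance = balance - 150 "
                     "WHERE name = 'alice'")
        cur = conn.execute("SELECT balance FROM accounts WHERE name = 'alice'")
        if cur.fetchone()[0] < 0:
            raise ValueError("overdraft")  # abort: the debit is undone
        conn.execute("UPDATE accounts SET balance = balance + 150 "
                     "WHERE name = 'bob'")
except ValueError:
    pass

# The failed transfer left both balances untouched.
print(dict(conn.execute("SELECT name, balance FROM accounts")))
```

The tension the paper describes is that distributed cloud databases often relax exactly this all-or-nothing behavior to scale elastically.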

Paper Nr: 7
Title:

Portable Green Cloud Services

Authors:

Stephan Ulbricht, Wolfram Amme, Thomas Heinze, Simon Moser and Hans-Dieter Wehle

Abstract: The areas of cloud computing and green IT are amongst the fastest growing markets in the IT industry, yet until now there have been very few opportunities to combine the potential of both. In this paper, we present a method that combines the advantages of both to create standardized and energy-efficient cloud services. For their description, we use the cloud computing standard TOSCA (OASIS, 2013). Thereby, it is possible to create standardized, model-based cloud applications which can be deployed in many cloud environments. We further show how policies can be combined with TOSCA to realize energy-efficient management of cloud services. To accomplish this, we provide ideas on how to extend the TOSCA language as well as the cloud operating environment to achieve the goal of portable, energy-efficient cloud services. The core of this work is the identification and integration of the underlying system architecture for a common solution concept. To this end, the architectures and necessary adjustments are explained.

Paper Nr: 34
Title:

Essential Elements of an SME-specific Search of Trusted Cloud Services

Authors:

Andrea Horch, Constantin Christmann, Holger Kett, Jürgen Falkner and Anette Weisbecker

Abstract: Cloud computing holds tremendous potential for small and medium-sized enterprises (SMEs), since it offers technologies which can improve their strategic, technical and economic situation. However, along with the benefits there are also risks concerning legal, technical and economic issues associated with cloud computing, which cause concerns for SMEs. These concerns can be reduced by improving the transparency of cloud services, including the understanding of the technology and knowledge about the service providers, already during the search for appropriate services. To overcome these concerns and increase transparency, cloud service search systems must assist the user during the search process and provide more details about the cloud services, in particular about the attributes which influence the trust of an SME in a service and its provider. The paper introduces a solution for searching for appropriate cloud services that focuses on these essential elements. A unique element of this approach is the automated monitoring and evaluation of the provided attributes of the cloud services.

Paper Nr: 66
Title:

VM Live Migration Algorithm Based on Stable Matching Model to Improve Energy Consumption and Quality of Service

Authors:

Abdelaziz Kella and Ghalem Belalem

Abstract: Cloud infrastructure is a newly emerged computing platform that builds on the latest achievements of diverse research areas, such as Grid computing, service-oriented computing, business process management and virtualization. The use of server virtualization techniques for such platforms provides great flexibility: the ability to allocate users and applications per virtual machine, to consolidate several virtual machines (VMs) on the same physical machine (PM), to resize a virtual machine as needed, and to support migration of virtual machines across physical machines (PMs) to meet user and system requirements. However, datacenters hosting Cloud applications consume huge amounts of electrical energy, contributing to high operational costs for providers as well as for users. This paper presents a live migration strategy based on the economic model known as the “stable marriage problem” to find a fair, stable matching between VMs' requirements and PMs' requirements. The proposed approach aims to improve the quality of service while reducing energy consumption. We refine our approach by integrating the Coase Theorem to reduce the number of migrating VMs and maximize the system's performance. Preliminary experiments show that, with a suitable number of PMs and VMs, the strategy can reduce the energy consumption of physical machines and the datacenter while improving the quality of service.
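The stable marriage model named in the abstract is classically solved with the Gale-Shapley deferred-acceptance algorithm. The paper's own matching criteria are not given here; the sketch below is a textbook Gale-Shapley run in which VMs propose to PMs, with all preference lists invented for illustration.

```python
def stable_match(vm_prefs, pm_prefs):
    """Gale-Shapley: VMs propose to PMs; returns a stable VM -> PM matching."""
    # Precompute each PM's ranking of VMs for O(1) preference comparisons.
    rank = {pm: {vm: i for i, vm in enumerate(prefs)}
            for pm, prefs in pm_prefs.items()}
    free = list(vm_prefs)              # VMs not yet placed
    next_choice = {vm: 0 for vm in vm_prefs}
    engaged = {}                       # pm -> vm
    while free:
        vm = free.pop()
        pm = vm_prefs[vm][next_choice[vm]]  # best PM not yet proposed to
        next_choice[vm] += 1
        current = engaged.get(pm)
        if current is None:
            engaged[pm] = vm
        elif rank[pm][vm] < rank[pm][current]:
            engaged[pm] = vm           # PM prefers the new VM
            free.append(current)       # displaced VM becomes free again
        else:
            free.append(vm)            # proposal rejected, try next PM
    return {vm: pm for pm, vm in engaged.items()}

vm_prefs = {"vm1": ["pm1", "pm2"], "vm2": ["pm1", "pm2"]}
pm_prefs = {"pm1": ["vm2", "vm1"], "pm2": ["vm1", "vm2"]}
print(stable_match(vm_prefs, pm_prefs))  # {'vm2': 'pm1', 'vm1': 'pm2'}
```

The result is stable: no VM and PM would both prefer each other over their assigned partners, which is the fairness property the abstract appeals to.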

Paper Nr: 75
Title:

A Green Model of Cloud Resources Provisioning

Authors:

Meriem Azaiez, Walid Chainbi and Hanen Chihi

Abstract: The evolution of network technologies and their reliability on the one hand, and the spread of virtualization techniques on the other, have motivated the use of execution and storage resources allocated by distant providers. These resources may scale on demand; Cloud computing deals with such aspects. However, these resources are energy-hungry: they consume huge amounts of electrical energy, which affects the invoicing of Cloud services, as it depends on run time and resources used. The environment is affected too, due to the emission of greenhouse gases. Therefore, we need Green Cloud computing solutions that reduce the environmental impact. To overcome this challenge, we study in this paper the relationship between Cloud infrastructure and energy consumption. Then, we present a genetic-algorithm-based solution that schedules Cloud resources and optimizes the energy consumption and CO2 emissions of a Cloud computing infrastructure based on the geographical features of its data centers. Unlike previous work, we propose to optimize the use of Cloud resources by dynamically scheduling customers' applications and thereby reduce energy consumption as well as CO2 emissions. The optimal schedule is found using a multi-objective genetic algorithm. In order to test our model, we extended the CloudSim simulator with a module implementing the dynamic scheduling of customers' applications. The experiments show promising results for the adoption of our model.
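To make the genetic-algorithm idea concrete, here is a toy sketch, not the authors' algorithm: tasks are assigned to data centers whose (invented) energy cost and CO2 factor differ by geography, and the two objectives are collapsed into a weighted sum for brevity rather than handled with a true multi-objective GA.

```python
# Toy GA for geography-aware task placement. All numbers are illustrative.
import random
random.seed(1)

ENERGY = [1.0, 1.6, 0.7]     # kWh per task unit in each data center
CO2    = [1.0, 0.2, 1.5]     # kg CO2 per kWh (depends on local energy mix)
TASKS  = [3, 1, 4, 2, 5, 2]  # workload size of each customer application

def fitness(assign):
    """Weighted sum of energy use and CO2 emissions (lower is better)."""
    energy = sum(TASKS[i] * ENERGY[dc] for i, dc in enumerate(assign))
    co2 = sum(TASKS[i] * ENERGY[dc] * CO2[dc] for i, dc in enumerate(assign))
    return 0.5 * energy + 0.5 * co2

def evolve(pop_size=30, generations=60, mutation=0.1):
    n, k = len(TASKS), len(ENERGY)
    pop = [[random.randrange(k) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, n)         # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < mutation:
                child[random.randrange(n)] = random.randrange(k)
            children.append(child)
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
print(best, round(fitness(best), 2))
```

A production scheduler would keep the objectives separate (e.g. Pareto ranking as in NSGA-II style algorithms) and draw the energy and emission factors from real data-center telemetry.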

Paper Nr: 89
Title:

Internet of Things in the Cloud - Theory and Practice

Authors:

Philip Wright and Andrea Manieri

Abstract: The digital convergence of IT, networks and content during the last years has created a new landscape for the definition and delivery of ICT services to citizens. The massive introduction of hypervisors and virtualisation techniques has allowed the (self-service) provision of any resource (software, network, data) in a seamless way to users and applications. The term Future Internet was coined in the EU to synthesize these concepts and refer to the novel architectures enabling the next generation of Internet applications. The digital convergence also includes the so-called embedded systems that, through gateways connected to the Internet, provide a technical bridge to interact with sensors and actuators. This paper introduces the initial findings from experimental work done in the context of ClouT, an EU-funded project aiming at defining and developing a common virtualisation layer that allows the access and management of IoT devices, as well as Cloud services, in the same way as any other data. The authors demonstrate and provide a simple way to design and implement a real infrastructure that satisfies the following requirements: cheap, easy to maintain, open-source based, and compatible and interoperable with different platforms and services. It is an example of how to make a public or a private Cloud capable of hosting data that comes from different kinds of sensors. Moreover, this architecture allows the interconnection of all devices (Internet of Things, IoT), and its implementation is illustrated in detail.

Paper Nr: 95
Title:

A Framework for Ambient Computing

Authors:

Mohammed Fethi Khalfi and Sidi Mohamed Benslimane

Abstract: The proliferation of mobile computing and wireless communications is producing a revolutionary change in our information society. Ubiquitous Computing is a recent paradigm whose objective is to support users in accomplishing their tasks, accessing information, or communicating with other users anytime, anywhere. In other terms, Pervasive Information Systems (PIS) constitute an emerging class of Information Systems where Information Technology is gradually embedded in the physical environment, capable of accommodating user needs and wants when desired. PIS differ from Desktop Information Systems (DIS) in that they encompass a complex, dynamic environment composed of multiple artefacts instead of Personal Computers only, capable of perceiving contextual information instead of simple user input, and supporting mobility instead of stationary services. In this paper, as an initial step, we present the novel characteristics of PIS compared to traditional desktop information systems; we explore this domain by offering a list of challenges and concepts of ubiquitous computing that form the core elements of a pervasive environment. As a result of this work, a generic framework for intelligent environments has been created, based on various related works concerning models and designs. This framework can be used to design any PIS instance.

Paper Nr: 98
Title:

Construction and Verification of the Trusted Cloud Service

Authors:

Dexian Chang, Hongqi Zhang and Yingjie Yang

Abstract: Trust assurance of cloud services is a central focus in the application of cloud computing. We present a novel approach for constructing trusted cloud services based on trusted computing technology, which is able to provide reliable data and services for cloud users. Utilizing the extended LS², we model the programs loaded in the trusted cloud and reason about their trustworthiness using invariants. Results show the validity and high efficiency of our approach.

Paper Nr: 117
Title:

Optimizing the Stretch for Virtual Screening Application on Pilot-agent Platforms on Grid/Cloud by using Multi-level Queue-based Scheduling

Authors:

T. Q. Bui, E. Medernach, V. Breton and H. Q. Nguyen

Abstract: Virtual screening has proven very effective on grid infrastructures. We focus on finding a platform-level scheduling policy for pilot-agent platforms shared by many virtual screening users, who need a suitable scheduling algorithm to ensure a certain fairness between users. The optimality criterion used in our research is the stretch, a measure of user experience on the platform. Our latest research (Quang et al., 2013), through simulation and experimentation on a real pilot-agent platform, showed that SPT is the best of four existing scheduling policies (FIFO, SPT, LPT and Round Robin) for optimizing the stretch. However, research on real grid workloads (Medernach, 2005) showed that there are two types of grid users: normal users, who frequently submit small jobs to the grid, and data challenge users, who occasionally submit large numbers of jobs. SPT, in particular, is not appropriate for data challenge users because they always have to wait for normal users. In this paper, we propose a new policy named SPT-SPT, which uses multi-level queue scheduling in a pilot-agent platform. In SPT-SPT, the administrator creates two separate user groups on the platform: a Normal group and a Data Challenge group. Each group has its own task queue, to which SPT is applied. A parameter p (p ∈ [0,1]), the probability that a given task queue is chosen to send a task to a pilot agent, is assigned to one queue, and 1-p to the other. This policy improves the user experience of the Data Challenge group without much impact on the Normal group.
Download
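The queue-selection mechanism described in the abstract can be sketched in a few lines. The following Python sketch is illustrative only: the class name, the task model (a processing-time estimate per task), and the fallback-to-the-other-queue behavior when one queue is empty are assumptions, not details taken from the paper.

```python
import random

class SPTSPTScheduler:
    """Sketch of the two-level SPT-SPT policy: two task queues (Normal and
    Data Challenge), each served shortest-processing-time first; when a
    pilot agent asks for work, one queue is drawn with probability p and
    the other with probability 1-p."""

    def __init__(self, p=0.5, seed=None):
        self.p = p
        self.queues = {"normal": [], "challenge": []}
        self.rng = random.Random(seed)

    def submit(self, group, task_id, processing_time):
        # processing_time is an estimate used to order the queue (SPT)
        self.queues[group].append((processing_time, task_id))

    def next_task(self):
        """Return the id of the next task to hand to a pilot agent."""
        group = "normal" if self.rng.random() < self.p else "challenge"
        if not self.queues[group]:
            # assumed fallback: serve the other queue if the chosen one is empty
            group = "challenge" if group == "normal" else "normal"
        if not self.queues[group]:
            return None
        self.queues[group].sort()          # SPT within the chosen queue
        return self.queues[group].pop(0)[1]
```

With p close to 1 the platform behaves almost like plain SPT for normal users, while a smaller p reserves a share of pilot agents for the Data Challenge queue.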

Paper Nr: 122
Title:

Beyond Grid Portals and Towards SaaS - A New Access Model for Computational Grids, in the dMRI Brain Context

Authors:

Tarik Zakaria Benmerar, Mouloud Kachouane and Fatima Oulebsir-Boumghar

Abstract: Acigna-G is an ongoing research project to develop a new hybrid Grid SaaS architecture. CloudMRI is our proof-of-concept Acigna-G based SaaS service for the brain dMRI field. The main objective of this architecture is to provide local (browser) and remote (grid) intensive computational capabilities that are completely abstracted from the SaaS user. The result is a combination of an in-browser rendering and computation engine, interoperable REST-SOAP grid services, and interoperable web-grid authentication mechanisms. Such an architecture enables new types of SaaS services, specifically for the brain dMRI field.
Download

Area 2 - Services Science Foundation for Cloud Computing

Full Papers
Paper Nr: 40
Title:

Cloud Asset Pricing Tree (CAPT) - Elastic Economic Model for Cloud Service Providers

Authors:

Soheil Qanbari, Fei Li, Schahram Dustdar and Tian-Shyr Dai

Abstract: Cloud providers are incorporating novel techniques to cope with prospective aspects of trading, such as resource allocation over future demands and its pricing elasticity, which were not foreseen before. To leverage the pricing elasticity of upcoming demand and supply, we employ financial option theory (futures contracts) as a mechanism to alleviate the risk in resource allocation over future demands. This study introduces a novel Cloud Asset Pricing Tree (CAPT) model that efficiently finds the optimal premium price of Cloud federation options. Providers benefit from this model when deciding when to buy options in advance and when to exercise them to achieve greater economies of scale. The CAPT model adapts its structure to address price elasticity concerns, making demand price-inelastic and supply price-elastic. Our empirical evidence suggests that the CAPT model exploits the Cloud market's potential for greater resource utilization and future capacity planning.
Download
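The abstract does not specify how the CAPT tree is constructed, but it builds on the general idea of tree-based option pricing. As background only, the following sketch shows a standard Cox-Ross-Rubinstein binomial tree computing a European call premium by backward induction; all parameters and the payoff form are illustrative, not the paper's model.

```python
import math

def binomial_call_premium(spot, strike, rate, sigma, maturity, steps):
    """Standard CRR binomial tree for a European call premium.
    Illustrative background only: CAPT's actual tree construction and
    payoff for Cloud federation options are not given in the abstract."""
    dt = maturity / steps
    u = math.exp(sigma * math.sqrt(dt))      # up-move factor
    d = 1.0 / u                              # down-move factor
    q = (math.exp(rate * dt) - d) / (u - d)  # risk-neutral up probability
    disc = math.exp(-rate * dt)
    # option payoffs at the leaves of the tree
    values = [max(spot * u**j * d**(steps - j) - strike, 0.0)
              for j in range(steps + 1)]
    # backward induction to the root gives the premium today
    for _ in range(steps):
        values = [disc * (q * values[j + 1] + (1 - q) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

In the CAPT setting the "asset" would be future cloud capacity and the premium the price a provider pays to reserve the right to federated resources.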

Paper Nr: 54
Title:

Creo: Reduced Complexity Service Development

Authors:

Per-Olov Östberg and Niclas Lockner

Abstract: In this work we address service-oriented software development in distributed computing environments, and investigate an approach to software development and integration based on code generation. The approach is illustrated in a toolkit for multi-language software generation built on three building blocks: a service description language, a serialization and transport protocol, and a set of code generation techniques. The approach is intended for use in the eScience domain and aims to reduce the complexity of developing and integrating distributed software systems through a low-knowledge-requirements model for the construction of network-accessible services. The toolkit is presented along with a discussion of use cases and a performance evaluation quantifying its performance against selected alternative techniques for code generation and service communication. In tests of communication overhead and response time, the toolkit's performance is found to be comparable to or better than the evaluated techniques.
Download

Paper Nr: 56
Title:

Discovering Secure Service Compositions

Authors:

Luca Pino, George Spanoudakis, Andreas Fuchs and Sigrid Gürgens

Abstract: Security is an important concern for service based systems, i.e., systems that are composed of autonomous and distributed software services. This is because the overall security of such systems depends on the security of the individual services they deploy and, hence, it is difficult to assess especially in cases where the latter services must be discovered and composed dynamically. This paper presents a novel approach for discovering secure compositions of software services. This approach is based on secure service orchestration patterns, which have been proven to provide certain security properties and can, therefore, be used to generate service compositions that are guaranteed to satisfy these properties by construction. The paper lays the foundations of the secure service orchestration patterns, and presents an algorithm that uses the patterns to generate secure service compositions and a tool realising our entire approach.
Download

Short Papers
Paper Nr: 46
Title:

Collaborative, Dynamic & Complex Systems - Modeling, Provision & Execution

Authors:

Vasilios Andrikopoulos, Santiago Goméz Sáez, Dimka Karastoyanova and Andreas Weiß

Abstract: Service orientation has significantly facilitated the development of complex distributed systems spanning multiple organizations. However, different application areas approach such systems in domain-specific ways, focusing only on particular aspects relevant for their application types. As a result, we observe a very fragmented landscape of service-oriented systems, which does not enable collaboration across organizations. To address this concern, in this work we introduce the notion of Collaborative, Dynamic and Complex (CDC) systems and position them with respect to existing technologies. In addition, we present how CDC systems are modeled and the steps to provision and execute them. Furthermore, we contribute an architecture and prototypical implementation, which we evaluate by means of a case study in a Cloud-enabled context-aware pervasive application.
Download

Paper Nr: 72
Title:

Utility-based Decision Making in Collective Adaptive Systems

Authors:

Vasilios Andrikopoulos, Marina Bitsaki, Santiago Goméz Sáez, Dimka Karastoyanova, Christos Nikolaou and Alina Psycharaki

Abstract: Large-scale systems comprising multiple heterogeneous entities are directly influenced by the interactions of their participating entities. Such entities, both physical and virtual, attempt to satisfy their objectives by dynamically collaborating with each other, and thus form collective adaptive systems. These systems are subject to the dynamicity of the entities’ objectives, and to changes in the environment. In this work we focus on the latter, i.e. on providing the means for entities in such systems to model, monitor and evaluate the utility they perceive by participating in the system. This allows them to make informed decisions about their interactions with other entities in the system. For this purpose we propose a utility-based approach for decision making, as well as an architecture that supports this approach.
Download

Paper Nr: 81
Title:

Task Placement in a Cloud with Case-based Reasoning

Authors:

Eric Schulte-Zurhausen and Mirjam Minor

Abstract: Moving workflow management to the cloud raises novel, exciting opportunities for rapid scalability of workflow execution. Instead of running a fixed number of workflow engines on an invariant cluster of physical machines, both physical and virtual resources can be scaled rapidly. Furthermore, the actual state of the resources gained from cloud monitoring tools can be used to schedule workload, migrate workload or conduct split and join operations for workload at run time. However, having so many options for distributing workload forms a computationally complex configuration problem which we call the task placement problem. In this paper, we present a case-based framework addressing the task placement problem by interleaving workflow management and cloud management. In addition to traditional workflow and cloud management operations it provides a set of task internal operations for workload distribution.
Download

Paper Nr: 21
Title:

Proactive Adaptation in Service Composition using a Fuzzy Logic Based Optimization Mechanism

Authors:

Silvana de Gyvés Avila and Karim Djemame

Abstract: The importance of Quality of Service management in service oriented environments has brought the need of QoS aware solutions. Proactive adaptation approaches enable composite services to detect in advance, according to their QoS values, the need for a change in order to prevent upcoming problems, and maintain the functional and quality levels of the composition. This paper presents a proactive adaptation mechanism that implements self-optimization based on fuzzy logic. The optimization model uses two fuzzy inference systems that evaluate the QoS values of composite services, based on historical and freshly collected data, and decide if adaptation is needed or not. Experimental results show significant improvements in the global QoS of the use case scenarios, providing reductions of up to 8.9% in response time and 14.7% in energy consumption, and an improvement of 41% in availability; this is achieved with an average increment in cost of 11.75 %.
Download

Paper Nr: 43
Title:

Preference-driven Refinement of Service Compositions

Authors:

Cheikh Ba, Umberto Costa, Mirian Halfeld Ferrari, Rémy Ferré, Martin A. Musicante, Veronika Peralta and Sophie Robert

Abstract: The Service-Oriented Computing paradigm proposes the construction of applications by integrating pre-existing services. Since a large number of services may be available in the Cloud, the selection of services is a crucial task in the definition of a composition. The selected services should meet the requirements of the compound application, considering both functional and non-functional requirements (including quality and preference constraints). As the number of available services increases, the automation of the selection task becomes desirable. We propose a method for the refinement of service compositions that takes as input the abstract specification of a composition, the definition of concrete services, and user preferences. Our algorithm produces a list of refinements in preference order. Experiments show that our method can be used in practice.
Download

Paper Nr: 63
Title:

Choreography-based Consolidation of Multi-instance BPEL Processes

Authors:

Sebastian Wagner, Oliver Kopp and Frank Leymann

Abstract: Interaction behavior between processes of different organizational units such as an enterprise and its suppliers can be modeled by choreographies. When organizations decide, for instance, to gain more control about their suppliers to minimize transaction costs, they may decide to insource these companies. This especially includes the integration of the partner processes into the organization’s processes. Existing works are able to merge single-instance BPEL process interactions where each process model is only instantiated once during choreography execution. However, there exist different interaction scenarios where one process interacts with several instances of another process and where the number of instances involved is not known at design time but determined during runtime of the choreography. In this work we investigate these interaction scenarios and extend the process consolidation approach in a way that we can emulate the multi-instance interaction scenarios in the merged process model.
Download

Paper Nr: 64
Title:

A Cloud Application for Security Service Level Agreement Evaluation

Authors:

Valentina Casola, Massimiliano Rak and Giuseppe Alfieri

Abstract: Cloud security is today considered one of the main limits to the adoption of Cloud Computing. Academic works and the Cloud community (e.g., work-groups at the European Network and Information Security Agency, ENISA) have stated that specifying security parameters in Service Level Agreements enables the establishment of a common semantics for modeling security among users and Cloud Service Providers (CSPs). However, despite state-of-the-art efforts aiming at building and representing Cloud SecLAs, there is still a gap in the techniques to reason about them. Moreover, many activities are being carried out to clearly state which parameters are to be shared, their meanings, and how they affect service provisioning. In this paper we propose to build a cloud application that offers security level evaluation based on SLAs expressed in many different ways. Such an application can be offered as a service by third parties to help customers evaluate the offerings of providers. Furthermore, it can be used to help customers negotiate security parameters in a multi-Cloud system and perform Cloud brokering on the basis of a quantitative evaluation of security parameters.
Download

Paper Nr: 76
Title:

A Web Service Discovery Approach Based on Hybrid Negotiation

Authors:

Raja Bellakhal, Walid Chainbi and Khaled Ghédira

Abstract: An effective discovery system must be able to retrieve services responding to users’ specific preferences in a changing and dynamic environment. Existing discovery systems have three problems. First, some of them fail to find Web services providing the same QoS as defined in their description files, since the QoS data are considered static. Second, discovery systems based on negotiation lack accuracy in simulating real human interactions. Third, the negotiation-based approaches implemented to discover services are static and do not consider the contexts and characteristics of each provider. These shortcomings negatively affect system performance and usability; consequently, the quality of the returned services, as well as the systems’ reputation, deteriorates. In this paper, we propose a hybrid discovery approach based on negotiation that solves these drawbacks. We argue that our approach enhances discovery system performance and usability by implementing a negotiation process that is closer to human interactions. Moreover, by considering the existing dependencies between concurrent negotiations, the discovery process becomes more efficient. Unlike previous work, the negotiation process is dynamic, taking into account the provider’s context and reputation.
Download

Paper Nr: 96
Title:

The Management System of INTEGRIS - Extending the Smart Grid to the Web of Energy

Authors:

Joan Navarro, Andreu Sancho-Asensio, Agustín Zaballos, Virginia Jiménez-Ruano, David Vernet and José Enrique Armendáriz-Iñigo

Abstract: The recent growth experienced by the Internet has fostered the interaction of many heterogeneous technologies under a common environment (i.e., the Internet of Things). Smart Grids are a sound example of this situation, where several devices from different vendors, running different protocols and policies, are integrated in order to reach a common goal: bringing together energy delivery and smart services. The latest advances in this domain have led to effective architectures that support this idea from a technical perspective, but fail at providing powerful tools to assist this new business model. Hence, the purpose of this paper is to present a novel unified and ubiquitous management interface, driven by an intelligent system, that uses the advantages offered by the Web of Things to manage the Smart Grid. This work thus opens a new path between the Internet of Things and the Web of Things, resulting in a new concept coined as the Web of Energy.
Download

Area 3 - Cloud Computing Platforms and Applications

Full Papers
Paper Nr: 11
Title:

Monitoring Large Cloud-Based Systems

Authors:

Mauro Andreolini, Marcello Pietri, Stefania Tosi and Andrea Balboni

Abstract: Large-scale cloud-based services are built upon a multitude of hardware and software resources, disseminated in one or multiple data centers. Controlling and managing these resources requires the integration of several pieces of software that together yield a representative view of the data center status. Today’s monitoring solutions, both closed- and open-source, fail in different ways, including lack of scalability, poor representativeness of global state conditions, inability to guarantee persistence in service delivery, and the impossibility of monitoring multi-tenant applications. In this paper, we present a novel monitoring architecture that addresses the aforementioned issues. It integrates a hierarchical scheme to monitor the resources in a cluster with a distributed hash table (DHT) to broadcast system state information among different monitors. This architecture strives to achieve high scalability, effectiveness and resilience, as well as the possibility of monitoring services spanning different clusters or even different data centers of the cloud provider. We evaluate the scalability of the proposed architecture through a bottleneck analysis based on experimental results.
Download

Paper Nr: 37
Title:

Robust Performance Control for Web Applications in the Cloud

Authors:

Hector Fernandez, Corina Stratan and Guillaume Pierre

Abstract: With the success of Cloud computing, more and more websites have been moved to cloud platforms. The elasticity and high availability of cloud solutions are attractive features for hosting web applications. In particular, elasticity is supported through trigger-based provisioning systems that dynamically add/release resources when certain conditions are met. However, when dealing with websites, this operation becomes more problematic, as the workload demand fluctuates following an irregular pattern. Excessive reactiveness makes these systems imprecise and wasteful in terms of SLA fulfillment and resource consumption. In this paper, we propose three different provisioning techniques that expose the limitations of traditional systems and overcome their drawbacks without overly increasing complexity. Our experiments conducted on both public and private infrastructures show significant reductions in SLA violations while offering performance stability.
Download

Paper Nr: 94
Title:

PAEAN4CLOUD - A Framework for Monitoring and Managing the SLA Violation of Cloud Service-based Applications

Authors:

Yehia Taher, Rafiqul Haque, Dinh-Khoa Nguyen and Béatrice Finance

Abstract: Recently, Cloud Computing has become an emerging research topic in response to the shift from a product-oriented to a service-oriented economy and the move from focusing on software/system development to addressing business-IT alignment. Technically speaking, Cloud Computing enables the construction of Cloud Service-Based Applications (CSBAs), which cater for the tailoring of services to specific business needs using a mixture of SaaS, PaaS and IaaS solutions, possibly from various providers. In other words, in the context of CSBAs, cloud services are rented by clients from providers instead of being owned. Due to this specific nature, the Service Level Agreement (SLA) has become a very important issue in CSBAs. SLAs are critical for both cloud service clients and providers and need constant monitoring, mostly to detect violations when they happen but also to prevent them efficiently. As CSBAs involve a number of providers, it is a challenge to detect and resist violations of multiple SLAs that engage different providers from different locations. To deal with this problem, this paper introduces a framework called PAEAN4CLOUD. The framework comprises components for monitoring, detecting, and configuring SLAs. An algorithm is proposed for the automatic detection of SLA violations. The configuration component underpins assembling CSBAs automatically at runtime. Together, the components help prevent SLA violations and optimize application performance.
Download

Short Papers
Paper Nr: 2
Title:

Methodology to Determine Relationships between Performance Factors in Hadoop Cloud Computing Applications

Authors:

Luis Eduardo Bautista Villalpando, Alain April and Alain Abran

Abstract: Cloud Computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources. Cloud Computing users prefer not to own physical infrastructure, but instead rent Cloud infrastructure, a Cloud platform or software, from a third-party provider. Sometimes, anomalies and defects affect a part of the Cloud platform, resulting in degradation of the Cloud performance. One of the challenges in identifying the source of such degradation is how to determine the type of relationship that exists between the various performance metrics which affect the quality of the Cloud and more specifically Cloud applications. This work uses the Taguchi method for the design of experiments to propose a methodology for identifying the relationships between the various configuration parameters that affect the quality of Cloud Computing performance in Hadoop environments. This paper is based on a proposed performance measurement framework for Cloud Computing systems, which integrates software quality concepts from ISO 25010 and other international standards.
Download

Paper Nr: 79
Title:

Everest: A Cloud Platform for Computational Web Services

Authors:

Oleg Sukhoroslov and Alexander Afanasiev

Abstract: The ability to effortlessly reuse and combine existing computational tools is an important factor influencing research productivity in many scientific domains. While the service-oriented approach proved to be essential in order to enable wide-scale sharing of applications, we argue that its full potential in scientific computing is still not realized. In this paper, we present Everest, a cloud platform that supports publication, sharing and reuse of scientific applications as web services. The underlying approach is based on a uniform representation of computational web services and its implementation using REST architectural style. In comparison with existing work, Everest has a number of novel features such as the use of PaaS model, flexible binding of services with externally provisioned computing resources and remotely accessible API.
Download

Paper Nr: 83
Title:

Economical Aspects of Database Sharding

Authors:

Uwe Hohenstein and Michael C. Jaeger

Abstract: Database sharding is a technique to handle large data volumes efficiently by spreading data over a large number of machines. Sharding techniques are not only integral parts of NoSQL products, but are also relevant for relational database servers when applications prefer standard relational database technology and have to scale out with massive data. Sharding of relational databases is especially useful in a public cloud because of the pay-per-use model, which already includes licenses, and the fast provisioning of virtually unlimited servers. In this paper, we investigate relational database sharding, focusing in detail on one of the important aspects of cloud computing: the economic aspects. We discuss the difficulties of cost savings for database sharding and present some surprising findings on how to reduce costs.
Download

Paper Nr: 90
Title:

Comparison of Request Admission Based Performance Isolation Approaches in Multi-tenant SaaS Applications

Authors:

Rouven Kreb and Manuel Loesch

Abstract: In the Software-as-a-Service model, a single application instance is usually shared between different tenants to decrease operational costs. However, sharing at this level may lead to undesired influence from one tenant on the performance observed by the others. By design, the application does not manage hardware resources, and the responsible OS is not aware of application-level entities like tenants. Consequently, it is difficult to control the performance of individual tenants to make sure they are isolated. In this paper we present an overview and classification of methods to ensure performance isolation based on request admission control. Furthermore, the informational requirements of these methods are discussed.
Download

Paper Nr: 112
Title:

Distributed File System Based on Erasure Coding for I/O-Intensive Applications

Authors:

Dimitri Pertin, Sylvain David, Pierre Evenou, Benoît Parrein and Nicolas Normand

Abstract: Distributed storage systems take advantage of network, storage and computational resources to provide a scalable infrastructure. In such large systems, however, failures are frequent and expected. Data replication is the common technique for providing fault tolerance, but it suffers from high storage consumption. Erasure coding is an alternative that offers the same data protection while significantly reducing storage consumption. As it entails additional workload, current storage providers limit its use to long-term storage. We present the Mojette Transform (MT), an erasure code whose computations rely on fast XOR operations. The MT is part of RozoFS, a distributed file system that provides a global namespace relying on a cluster of storage nodes. This work is part of our ongoing effort to prove that erasure coding is not necessarily a bottleneck for I/O-intensive applications. To validate our approach, we consider a case study in which a RozoFS storage cluster supports video editing as an I/O-intensive application.
Download
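To illustrate why XOR-based erasure codes are computationally cheap, here is a minimal single-parity sketch in Python. This is not the Mojette Transform, which is a more general code tolerating multiple losses; the scheme below only shows the basic XOR encode/repair pattern, and all function names are assumptions for illustration.

```python
def encode_with_parity(chunks):
    """Minimal XOR parity code: store k equal-sized data chunks plus one
    parity chunk, tolerating the loss of any single chunk. Illustrates the
    fast-XOR idea only; the Mojette Transform used by RozoFS is richer."""
    parity = bytes(len(chunks[0]))
    for c in chunks:
        parity = bytes(a ^ b for a, b in zip(parity, c))
    return list(chunks) + [parity]

def recover_missing(stored, missing_index):
    """Rebuild the chunk at missing_index by XOR-ing all surviving chunks
    (each byte appears an even number of times except the missing one's)."""
    survivors = [c for i, c in enumerate(stored) if i != missing_index]
    out = bytes(len(survivors[0]))
    for c in survivors:
        out = bytes(a ^ b for a, b in zip(out, c))
    return out
```

Compared with full replication, this stores k+1 chunks instead of 2k for the same single-failure protection, which is the storage saving the abstract refers to.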

Paper Nr: 17
Title:

A Cloud-based GWAS Analysis Pipeline for Clinical Researchers

Authors:

Paul Heinzlreiter, James Richard Perkins, Óscar Torreño, Johan Karlsson, Juan Antonio Ranea, Andreas Mitterecker, Miguel Blanca and Oswaldo Trelles

Abstract: The cost of obtaining genome-scale biomedical data continues to drop rapidly, with many hospitals and universities being able to produce large amounts of data. Managing and analysing such ever-growing datasets is becoming a crucial issue. Cloud computing presents a good solution to this problem due to its flexibility in obtaining computational resources. However, it is essential to allow end-users with no experience to take advantage of the cloud computing model of elastic resource provisioning. This paper presents a workflow that allows the end-user to perform the core steps of a genome-wide association analysis in which raw gene-expression data is quality assessed. A number of steps in this process are computationally intensive and vary greatly depending on the size of the study, from a few samples to a few thousand. Cloud computing therefore provides an ideal solution to this problem by enabling scalability through elastic resource provisioning. The key contributions of this paper are a real-world application of cloud computing addressing a critical problem in biomedicine through parallelization of the appropriate parts of the workflow, as well as enabling the end-user to concentrate on data analysis and biological interpretation of results by taking care of the computational aspects.
Download

Paper Nr: 19
Title:

Architectural Design of a Deployment Platform to Provision Mixed-tenancy SaaS-Applications

Authors:

Matthias Reinhardt, Stefan T. Ruehl, Stephan A. W. Verclas, Urs Andelfinger and Alois Schütte

Abstract: Software-as-a-Service (SaaS) is a delivery model whose basic idea is to provide applications to the customer on demand over the Internet. In contrast to similar but older approaches, SaaS promotes multi-tenancy as a tool to exploit economies of scale. This means that a single application instance serves multiple customers. However, a major drawback of SaaS is customers’ hesitation to share infrastructure, application code, or data with other tenants. This is due to the fact that one of the major threats of multi-tenancy is information disclosure through a system malfunction, system error, or aggressive actions by individual users. So far, the only approach in research to counteract this hesitation has been to enhance the isolation between tenants using the same instance. Our approach (presented in earlier work) tackles this hesitation differently: it allows customers to choose whether, or even with whom, they want to share the application. Customers can make that choice not just for the entire application but for individual application components and the underlying infrastructure. This paper contributes to the mixed-tenancy approach by introducing a software pattern that allows communication to be established between application components deployed following the mixed-tenancy paradigm. The contributions of this paper are evaluated on a representative example that employs all possible kinds of communication.
Download

Paper Nr: 20
Title:

DDSF: A Data Deduplication System Framework for Cloud Environments

Authors:

Jianhua Gu, Chuang Zhang and Wenwei Zhang

Abstract: Cloud storage has been widely adopted because it provides seemingly unlimited storage space and flexible access, while the rising cost of storage and communication remains an issue. In this paper, we propose a Data Deduplication System Framework (DDSF) for cloud storage environments. The DDSF consists of three major components: the client, the fingerprint server, and the storage component. The client divides files into chunks and calculates the hash value of each chunk; it sends only chunks whose hash values do not already exist in the fingerprint server to the storage component, thereby reducing communication and storage consumption. We developed an Index File System (IFS) to manage the metadata of user files in the fingerprint server. The fingerprint server maintains a hash table containing the hash values of chunks already stored in the storage component. This paper presents a two-level indexing mechanism to reduce both the spatial and temporal overhead of accessing the hash table: the first level employs a Bloom filter and the second level uses a hash table partitioning mechanism. To reduce the possibility of accidental collisions of chunk hash values, we use two hash algorithms to calculate the hash value of each chunk. We further conduct experiments in our framework; the results demonstrate that the proposed framework improves storage space utilization and reduces communication consumption.
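The two-level lookup described in the abstract, a Bloom filter in front of a partitioned hash table, with two hash algorithms per chunk, can be sketched as follows. The concrete choices here (sha256 and md5 as the two algorithms, the Bloom filter size, the partition count) are assumptions for illustration; the abstract does not name them.

```python
import hashlib

class FingerprintIndex:
    """Sketch of DDSF's two-level duplicate check: a Bloom filter screens
    out chunks that are certainly new, and only Bloom hits consult the
    partitioned hash table for an exact answer. Hash choices (sha256 +
    md5) and sizes are illustrative assumptions."""

    def __init__(self, bloom_bits=1 << 16, partitions=16):
        self.bloom = 0                     # bit array held as a big integer
        self.bloom_bits = bloom_bits
        self.partitions = [set() for _ in range(partitions)]

    @staticmethod
    def fingerprint(chunk):
        # two independent hashes reduce accidental collisions per chunk
        return (hashlib.sha256(chunk).hexdigest(),
                hashlib.md5(chunk).hexdigest())

    def _bloom_positions(self, fp):
        h = int(fp[0], 16)
        return (h % self.bloom_bits, (h >> 64) % self.bloom_bits)

    def is_duplicate(self, chunk):
        fp = self.fingerprint(chunk)
        if any(not (self.bloom >> b) & 1 for b in self._bloom_positions(fp)):
            return False                   # Bloom miss: chunk is certainly new
        part = self.partitions[int(fp[1], 16) % len(self.partitions)]
        return fp in part                  # second level: exact membership

    def add(self, chunk):
        fp = self.fingerprint(chunk)
        for b in self._bloom_positions(fp):
            self.bloom |= 1 << b
        self.partitions[int(fp[1], 16) % len(self.partitions)].add(fp)
```

A client would call `is_duplicate` per chunk and upload only the chunks that come back False, which is the communication saving the abstract claims.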

Paper Nr: 88
Title:

Towards a Composition-based APIaaS Layer

Authors:

Claudio Guidi, Saverio Giallorenzo and Maurizio Gabbrielli

Abstract: Application Programming Interfaces (APIs) are a standard feature of any application that exposes its functionality to external invokers. APIs can be composed, thus obtaining new programs with new functionality. However, API composition easily becomes a frustrating and time-costly task that hinders API reuse. The issue derives from technology-dependent features of API composition, such as the need for extensive documentation, protocol integration, and security issues. In this paper we introduce the perspective of an API-as-a-Service (APIaaS) layer as a tool to ease the development and deployment of applications based on API composition, abstracting away communication protocols and message formats. We elicit the desirable features of such a layer and provide a proof-of-concept prototype implemented using a service-oriented language.
Download

Paper Nr: 99
Title:

An Open Market of Cloud Data Services

Authors:

Verena Kantere

Abstract: Cloud data services are a very attractive solution for the management of business and large-scale data. The inability to create fully functional data services and, moreover, to provide them with clear guarantees on technical and business aspects creates a problematic cloud service-provisioning situation. We propose the development of a framework that enables the systematic and efficient creation and management of cloud data services. Such a framework is necessary to enable the exchange of cloud data services in an open market, where cloud providers and their customers can freely advertise offered and requested data services and make contracts for service provisioning. We discuss the skeleton of such a framework, which comprises three parts: first, profiling data services; second, designing service offers and demands; and third, searching for and matching data services.
Download

Paper Nr: 108
Title:

Mjolnirr: A Hybrid Approach to Distributed Computing - Architecture and Implementation

Authors:

Dmitry Savchenko and Gleb Radchenko

Abstract: A private PaaS enables enterprise developers to leverage all the benefits of a public PaaS to deploy, manage, and monitor applications, while meeting the security and privacy requirements their enterprise demands. In this paper we propose the design and implementation of Mjolnirr, a private cloud platform for the development of private PaaS cloud infrastructure. It provides means for the development of custom applications that use the resources of a distributed computing environment.
Download

Paper Nr: 114
Title:

Data Integrity in Cloud Transactions

Authors:

Sachi Nishida and Yoshiyuki Shinkawa

Abstract: BASE is a new principle for transaction processing in cloud computing environments, which stands for Basically Available, Soft state, and Eventually consistent. Unlike the traditional ACID principle for transactions, BASE pays greater attention to the scalability and availability of systems, and less to data integrity. Therefore, when migrating mission-critical transaction systems into cloud environments, we have to carefully evaluate whether the provided data integrity level is acceptable. This paper presents a formal, simulation-based, and model-driven approach to evaluating data integrity in BASE transaction systems. We define data integrity in cloud environments using a new concept for system states, referred to as superposed states. As formalization tools, VDM++ and Colored Petri Nets (CPN) are used to express and simulate the BASE transaction systems accurately.
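The ACID/BASE contrast the abstract draws can be illustrated with a minimal sketch (not taken from the paper; class and method names are invented): a toy eventually consistent store in which a read served by a lagging replica observes a stale state until replication converges.

```python
# Minimal sketch of BASE-style eventual consistency (illustrative only):
# writes land on one replica and propagate asynchronously, so a read from
# another replica may observe a stale ("superposed") state.

class EventuallyConsistentStore:
    def __init__(self, replicas=2):
        self.replicas = [{} for _ in range(replicas)]
        self.pending = []  # replication log not yet applied everywhere

    def write(self, key, value, replica=0):
        self.replicas[replica][key] = value
        self.pending.append((key, value))

    def read(self, key, replica):
        return self.replicas[replica].get(key)

    def replicate(self):
        # apply the pending log to every replica (convergence step)
        for key, value in self.pending:
            for r in self.replicas:
                r[key] = value
        self.pending.clear()

store = EventuallyConsistentStore()
store.write("balance", 100, replica=0)
stale = store.read("balance", replica=1)   # not yet propagated
store.replicate()
fresh = store.read("balance", replica=1)   # consistent after convergence
```

Under ACID, the intermediate stale read would never be visible; evaluating whether such windows of inconsistency are acceptable is exactly the question the paper formalizes.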

Area 4 - Cloud Computing Enabling Technology

Full Papers
Paper Nr: 45
Title:

ALPS: The Accountable Cloud Protocol Stack

Authors:

Wassim Itani, Ayman Kayssi and Ali Chehab

Abstract: The cloud model inherently invites concerns of privacy beyond the traditional worries. Technologies and architectures, physical boundaries, policies and legal constraints all contribute to elevating privacy concerns in the cloud. Existing privacy solutions lack the comprehensiveness required to assure privacy for data in transit and at rest in the cloud. Data privacy has to be protected not just from external entities but also from cloud constituents including cloud providers themselves. We propose in this research an Accountable cLoud Protocol Stack (ALPS) that consists of a set of inter-dependent protocol building blocks, designed in a layer-based architecture, for supporting the development of a wide variety of privacy-aware cloud services and applications. The significance of the stacked architecture of ALPS is that it allows an incremental identification of the different building blocks across the layers, a clean separation of services, ease of development of manageable blocks, and flexibility of evolution. We propose through this research to build this layered cloud computing privacy solution and to implement and test it in real cloud environments relying mostly on the variety of services available from the Amazon, Google, and Microsoft cloud infrastructures.

Paper Nr: 55
Title:

A User Data Location Control Model for Cloud Services

Authors:

Kaniz Fatema, Philip Healy, Vincent C. Emeakaroha, John P. Morrison and Theo Lynn

Abstract: A data location control model for Cloud services is presented that uses an authorization system as its core control element. The model is intended for use by enterprises that collect personal data from end users that can potentially be stored and processed at multiple geographic locations. By adhering to the model’s authorization decisions, the enterprise can address end users’ concerns about the location of their data by incorporating their preferences about the location of their personal data into an authorization policy. The model also ensures that the end users have visibility into the location of their data and are informed when the location of their data changes. A prototype of the model has been implemented that provides the data owner with an interface that allows their location preferences to be expressed. These preferences are stored internally as XACML policy documents. Thereafter, movements or remote duplications of the data must be authorized by submitting requests to an ISO/IEC 10181-3:1996 compliant policy enforcement point. End users can, at any time, view up-to-date information on the locations where their data is stored via a web interface. Furthermore, XACML obligations are used to ensure that end users are informed whenever the location of their data changes.
Download
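As a rough illustration of the model's core idea (the XACML machinery itself is not reproduced; all names here are hypothetical), a location-aware authorization check might look like this: a data movement is permitted only if the target region is in the owner's allowed set, with notification of the owner as an obligation.

```python
# Simplified stand-in for a location-control authorization decision:
# the policy maps data owners to their allowed storage regions, and an
# approved move triggers a notification obligation to the end user.

def authorize_move(policy, owner, target_region, notify):
    allowed = policy.get(owner, set())
    if target_region not in allowed:
        return False              # enforcement point denies the move
    notify(owner, target_region)  # obligation: inform the end user
    return True

notifications = []
policy = {"alice": {"eu-west", "eu-central"}}
ok = authorize_move(policy, "alice", "eu-west",
                    lambda o, r: notifications.append((o, r)))
denied = authorize_move(policy, "alice", "us-east",
                        lambda o, r: notifications.append((o, r)))
```

In the paper's prototype, the policy set is expressed in XACML and the notification is realized via XACML obligations rather than a callback.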

Paper Nr: 92
Title:

A Cloud Accountability Policy Representation Framework

Authors:

Walid Benghabrit, Hervé Grall, Jean-Claude Royer, Mohamed Sellami, Monir Azraoui, Kaoutar Elkhiyaoui, Melek Önen, Anderson Santana De Oliveira and Karin Bernsmed

Abstract: Nowadays we are witnessing the democratization of cloud services. As a result, more and more end-users (individuals and businesses) are using these services for their electronic transactions (shopping, administrative procedures, B2B transactions, etc.). In such scenarios, personal data generally flows between several entities, and end-users need (i) to be aware of the management, processing, storage and retention of personal data, and (ii) to have the necessary means to hold service providers accountable for the usage of their data. In fact, dealing with personal data raises several privacy and accountability issues that must be considered before promoting the use of cloud services. In this paper, we propose a framework for the representation of cloud accountability policies. Such policies offer end-users a clear view of the privacy and accountability obligations asserted by the entities they interact with, as well as means to represent their preferences. This framework comes with two novel accountability policy languages: an abstract one, devoted to the representation of preferences/obligations in a human-readable fashion, and a concrete one for the mapping to concrete enforceable policies. We motivate our solution with concrete use case scenarios.
Download

Paper Nr: 93
Title:

Context-aware Cloud Application Management

Authors:

Uwe Breitenbücher, Tobias Binz, Oliver Kopp, Frank Leymann and Matthias Wieland

Abstract: The automation of application management is one of the most important issues in Cloud Computing. However, the steadily increasing number of different services and software components employed in composite Cloud applications leads to a higher risk of unexpected side effects when different technologies work together that bring their own proprietary management APIs. Due to unknown dependencies and the increasing diversity and heterogeneity of employed technologies, even small management tasks on single components may compromise the whole application functionality for reasons that are neither expected nor obvious to non-experts. In this paper, we tackle these issues by introducing a method that enables detecting and correcting unintended effects of management tasks in advance by analyzing the context in which tasks are executed. We validate the method practically and show how context-aware expert management knowledge can be applied fully automatically to running Cloud applications.
Download

Short Papers
Paper Nr: 24
Title:

Evaluating the Impact of Virtualization on Performance and Power Dissipation

Authors:

Francisco J. Clemente-Castelló, Sonia Cervera, Rafael Mayo and Enrique S. Quintana-Ortí

Abstract: In this paper we assess the impact of virtualization in both performance-oriented environments, like high performance computing facilities, and throughput-oriented systems, like data processing centers, e.g., for web search and data serving. In particular, our work-in-progress analyzes the power consumption required to dynamically migrate virtual machines at runtime, a technique that is crucial to consolidate underutilized servers, reducing energy costs while maintaining service level agreements. Preliminary experimental results are reported for two different applications, using the KVM virtualization solution for Linux, on an Intel Xeon-based cluster.
Download

Paper Nr: 31
Title:

Energy-aware VM Scheduling in IaaS Clouds using Pliant Logic

Authors:

Attila Benyi, Jozsef Daniel Dombi and Attila Kertesz

Abstract: Cloud Computing is attracting increasing attention nowadays, as it is present in many consumer appliances and advertises the illusion of infinite resources to its customers. Nevertheless, it raises severe issues with energy consumption: the higher levels of quality and availability require excessive energy expenditure. This paper proposes a Pliant-system-based virtual machine scheduling approach for reducing the energy consumption of IaaS datacenters. In order to evaluate our proposed solution, we have designed a CloudSim-based simulation environment and applied real-world traces for the experiments. We show that significant savings can be achieved in energy consumption with our proposed Pliant-based algorithms, and that by fine-tuning the parameters of the proposed Pliant strategy, a beneficial trade-off can be set between energy consumption and execution time.
Download

Paper Nr: 44
Title:

Challenges of Capacity Modelling in Complex IT Architectures

Authors:

Andrea Kő, Péter Fehér and Zoltán Szabó

Abstract: As internal clouds and cloud technologies spread among companies, the responsibility of providing reliable IT infrastructure and adequate capacity has become a top priority. While internal clouds and related technologies create flexibility for customers, limited IT resources raise problems for capacity provisioning, which has an impact on IT service quality. The presented research addresses this problem and seeks to create models describing the relationship between IT service quality and background infrastructure capacity usage, using two distinct methodologies, in the complex cloud-like environment of a financial institution. The research analysed a pilot area of a widely used electronic banking service. As multivariate statistical modelling and hypothesis testing produced limited results in phase 1, further modelling opportunities were explored in phase 2, and a model based on neural networks was developed. The research analyses the limitations of pure statistical analysis in cloud-like environments, but concludes that alternative methods are usable.
Download

Paper Nr: 48
Title:

Optimizing Access Control Performance for the Cloud

Authors:

Slim Trabelsi, Adrien Ecuyer, Paul Cervera Y Alvarez and Francesco Di Cerbo

Abstract: Cloud computing is synonymous with high performance computing. It offers a very scalable infrastructure for deploying an arbitrarily high number of systems and services and managing them without impact on their performance. As with traditional systems, such a widely distributed infrastructure needs to fulfil basic security requirements, such as restricting access to its resources, thus requiring authorization and access control mechanisms. Cloud providers still rely on traditional authorization and access control systems; however, in some critical cases such solutions can lead to performance issues. The more complex the access control structure (many authorization levels, many users and resources to protect), the slower the enforcement of access control policies. In this paper we present a performance study on traditional access control mechanisms like XACML, which quantifies the overhead generated by the authorization checking process under extreme usage conditions. We then propose a new approach to make access control systems more scalable and suitable for the high performance requirements of cloud computing. This approach is based on a high-speed caching access control tree (ACT) that accelerates the decision-making process without impacting the consistency of the rules. Finally, by comparing the performance test results obtained by our solution to those of a traditional XACML access control system, we demonstrate that the ACT in-memory approach is more suitable for Cloud infrastructures, offering a scalable and high-speed access control solution.
Download
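The general idea behind caching access-control decisions can be sketched as follows (this is an illustrative memoization sketch, not the paper's ACT structure): repeated (subject, resource, action) requests are served from a cache instead of re-evaluating the full policy.

```python
# Hedged sketch of decision caching in front of a slow policy decision
# point (PDP): identical requests skip policy evaluation entirely, which
# is the performance effect the paper's in-memory access control tree
# targets at much larger scale.

class CachedPDP:
    def __init__(self, evaluate):
        self.evaluate = evaluate  # slow policy decision function
        self.cache = {}
        self.misses = 0

    def decide(self, subject, resource, action):
        key = (subject, resource, action)
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.evaluate(subject, resource, action)
        return self.cache[key]

pdp = CachedPDP(lambda s, r, a: s == "admin" or a == "read")
first = pdp.decide("bob", "doc1", "read")   # miss: evaluates the policy
second = pdp.decide("bob", "doc1", "read")  # hit: served from the cache
```

Keeping the cache consistent with the rules under policy updates is the hard part, which is why the paper stresses that its approach does not impact rule consistency.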

Paper Nr: 65
Title:

Unified Invocation of Scripts and Services for Provisioning, Deployment, and Management of Cloud Applications Based on TOSCA

Authors:

Johannes Wettinger, Tobias Binz, Uwe Breitenbücher, Oliver Kopp, Frank Leymann and Michael Zimmermann

Abstract: There are several script-centric approaches, APIs, and tools available to implement automated provisioning, deployment, and management of applications in the Cloud. The automation of all these aspects is key for reducing costs. However, most of these approaches are script-centric and provide proprietary solutions employing different invocation mechanisms, interfaces, and state models. Moreover, most Cloud providers offer proprietary Web services or APIs to be used for provisioning and management purposes. Consequently, it is hard to create deployment and management plans integrating several of these approaches. The goal of our work is to come up with an approach for unified invocation of scripts and services without handling each proprietary interface separately. A prototype realizes the presented approach in a standards-based manner using the Topology and Orchestration Specification for Cloud Applications (TOSCA).
Download

Paper Nr: 85
Title:

A Requirements Analysis for IaaS Cloud Federation

Authors:

Alfonso Panarello, Antonio Celesti, Maria Fazio, Massimo Villari and Antonio Puliafito

Abstract: The advent of Cloud computing offers different ways both to sell and to buy resources and services according to a pay-per-use model. Thanks to virtualization technology, different Cloud providers supplying cost-effective services in the form of Infrastructure as a Service (IaaS) have been emerging. Currently, there is another perspective which represents a further business opportunity for small/medium providers, known as Cloud Federation. In fact, the Cloud ecosystem includes hundreds of independent and heterogeneous cloud providers, and a possible future alternative scenario is represented by the promotion of cooperation among them, thus enabling the sharing of computational and storage resources. In this paper, we specifically discuss an analysis of the requirements for the establishment of an IaaS Cloud Federation.
Download

Paper Nr: 97
Title:

Analysing the Migration Time of Live Migration of Multiple Virtual Machines

Authors:

Kateryna Rybina, Abhinandan Patni and Alexander Schill

Abstract: Workload consolidation is a technique applied to achieve energy efficiency in data centres. It can be realized via live migration of virtual machines (VMs) between physical servers, with the aim of powering off idle servers and thus saving energy. Despite its innumerable benefits, the VM migration process introduces additional costs in terms of migration time and energy overhead. This paper investigates the influence of workload as well as interference effects on the migration time of multiple VMs. We experimentally show that the migration time is proportional to the volume of memory copied between the source and the destination machines. Our experiments show that VMs running simultaneously on a physical machine compete for the available resources, and that the resulting interference effects influence the migration time. We migrate multiple VMs in all possible permutations and investigate the resulting migration times. When the goal is to power off the source machine, it is better to migrate memory-intensive VMs first. Kernel-based Virtual Machine (KVM) is used as the hypervisor, and benchmarks from the SPEC CPU2006 benchmark suite are utilized as the workload.
Download
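The abstract's central observation, that migration time is proportional to the volume of memory copied, suggests a first-order pre-copy model along these lines (function and parameter names are invented for illustration, not taken from the paper):

```python
# First-order model of pre-copy live migration: each round transfers the
# remaining memory over the link, while the running VM dirties pages that
# must be re-sent, ending with a short stop-and-copy of the residue.

def migration_time(memory_mb, bandwidth_mb_s, dirty_rate_mb_s=0.0, rounds=1):
    time = 0.0
    remaining = memory_mb
    for _ in range(rounds):
        t = remaining / bandwidth_mb_s      # seconds for this copy round
        time += t
        remaining = dirty_rate_mb_s * t     # pages dirtied meanwhile
    return time + remaining / bandwidth_mb_s  # final stop-and-copy phase

idle = migration_time(4096, 512)                              # idle VM
busy = migration_time(4096, 512, dirty_rate_mb_s=128, rounds=3)
```

The model reproduces the qualitative finding: a memory-intensive (page-dirtying) workload inflates the effective volume copied, and hence the migration time, relative to an idle VM of the same size.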

Paper Nr: 106
Title:

Extending Hypervisor Architecture to Allow One Way Data Transfers from VMs to Hypervisors

Authors:

Mustafa Aydin and Jeremy Jacob

Abstract: We propose an alternative architecture to existing hypervisors, which allows for more data to be moved whilst requiring less work for hardware and networks. Our suggestion is to develop an extension to hypervisors for an interface which can allow data transfer one way from virtual machines to hypervisors. We argue that the ability to transfer data directly in this way can provide a number of benefits to cloud users and providers, namely in the areas of security (confidentiality, integrity, and through decreased overhead), reduced energy consumption, and better use of hardware resources.
Download

Paper Nr: 110
Title:

Brokering SLAs for End-to-End QoS in Cloud Computing

Authors:

Tommaso Cucinotta, Diego Lugones, Davide Cherubini and Karsten Oberle

Abstract: In this paper, we present a brokering logic for providing precise end-to-end QoS levels to cloud applications distributed across a number of different business actors, such as network service providers (NSPs) and cloud providers (CSPs). The broker composes a number of available offerings from each provider in a way that respects the QoS application constraints while minimizing the costs incurred by cloud consumers.
Download
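A toy version of the brokering logic (all offer data is invented) can be sketched as an exhaustive composition that picks one offer per provider segment, keeps end-to-end latency within bound, and minimizes total price:

```python
# Illustrative broker: enumerate every composition of one offer per
# segment (e.g. one NSP offer plus one CSP offer), keep those meeting
# the end-to-end latency constraint, and return the cheapest.

from itertools import product

def broker(segments, max_latency_ms):
    best = None
    for combo in product(*segments):
        latency = sum(o["latency_ms"] for o in combo)
        price = sum(o["price"] for o in combo)
        if latency <= max_latency_ms and (best is None or price < best[0]):
            best = (price, combo)
    return best

nsp_offers = [{"latency_ms": 20, "price": 5}, {"latency_ms": 5, "price": 12}]
csp_offers = [{"latency_ms": 30, "price": 8}, {"latency_ms": 10, "price": 20}]
best = broker([nsp_offers, csp_offers], max_latency_ms=40)
```

Exhaustive enumeration only works for a handful of segments and offers; a production broker would need a smarter search, but the composition constraint is the same.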

Paper Nr: 116
Title:

Confidential Execution of Cloud Services

Authors:

Tommaso Cucinotta, Davide Cherubini and Eric Jul

Abstract: In this paper, we present Confidential Domain of Execution (CDE), a mechanism for achieving confidential execution of software in an otherwise untrusted environment, e.g., at a Cloud Service Provider. This is achieved by using an isolated execution environment in which any communication with the outside untrusted world is forcibly encrypted by trusted hardware. The mechanism can be useful to overcome the challenging issues in guaranteeing confidential execution in virtualized infrastructures, including cloud computing and virtualized network functions, among other scenarios. Moreover, the proposed mechanism does not suffer from the performance drawbacks typical of other solutions proposed for secure computing, as highlighted by the presented novel validation results.
Download

Paper Nr: 118
Title:

Key Completion Indicators - Minimizing the Effect of DoS Attacks on Elastic Cloud-based Applications Based on Application-level Markov Chain Checkpoints

Authors:

George Kousiouris

Abstract: The problem of DoS attacks has significant effects for any computing system available through the public domain. In the case of Clouds, it becomes even more critical, since elasticity policies tied to metrics like Key Performance Indicators (KPIs) can lead a Cloud adopter to significant monetary losses. DoS attacks increase the KPIs, which in turn trigger the elastic increase of resources, but without financial benefit for the owner of the cloud-enabled application (Economic Denial of Sustainability). Given the numerous scenarios of DoS attacks and the nature of services computing (in many cases involving legitimate automated traffic requests and bursts), network-level mitigation approaches may not be sufficient. In this paper, the concept of Key Completion Indicators (KCIs) is introduced, and an analysis framework based on a probabilistic approach is proposed that can be applied at the application layer in cloud-deployed applications and elasticity policies, in order to avoid the aforementioned situation. KCIs indicate the level of completeness, and the revenue provided, of the requests made towards a publicly available service, and together with the KPIs they can lead to a safer result with regard to elasticity. An initial architecture of this DoS Identification as a Service is portrayed.
Download
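The Markov-chain checkpoint idea can be sketched as follows (checkpoint names and transition probabilities are invented for illustration): given per-checkpoint transition probabilities, compute the probability that a request eventually reaches completion and yields revenue, which is low for traffic that systematically abandons early.

```python
# Probability of eventually reaching the revenue-generating "complete"
# state from each application-level checkpoint, obtained by iterating
# the transition equations to a fixed point.

def completion_probability(transitions, complete="complete", iters=200):
    states = set(transitions) | {complete}
    p = {s: (1.0 if s == complete else 0.0) for s in states}
    for _ in range(iters):
        for s, nexts in transitions.items():
            p[s] = sum(prob * p[t] for t, prob in nexts.items())
    return p

transitions = {
    "browse": {"cart": 0.4, "drop": 0.6},
    "cart":   {"pay": 0.5, "drop": 0.5},
    "pay":    {"complete": 0.9, "drop": 0.1},
    "drop":   {},  # absorbing state: request abandoned, no revenue
}
p = completion_probability(transitions)
```

A flood of requests that rarely advance past the first checkpoint drags the observed completion probability far below the legitimate baseline, even while KPIs such as request rate look like organic load.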

Paper Nr: 38
Title:

Improving Resource Utilization in Cloud Environments using Application Placement Heuristics

Authors:

Atakan Aral and Tolga Ovatman

Abstract: Application placement is an important concept when providing software as a service in cloud environments. Because of the potential downtime cost of application migration, additional resource acquisition is most of the time preferred over migrating the applications residing in the virtual machines (VMs). This situation results in under-utilized resources. To overcome this problem, static/dynamic estimations of the resource requirements of VMs and/or applications can be performed. A simpler strategy is to use heuristics during the application placement process instead of naively applying greedy strategies like round-robin. In this paper, we propose a number of novel heuristics and compare them with the round-robin placement strategy and a few placement heuristics proposed in the literature, to explore the performance of heuristics in the application placement problem. Our focus is to better utilize the resources offered by the cloud environment and at the same time minimize the number of application migrations. Our results indicate that an application placement heuristic that relies on the difference between the maximum and minimum utilization rates of the resources not only outperforms the other application placement approaches but also significantly improves on the conventional approaches present in the literature.
Download
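One possible reading of the winning heuristic, keeping the spread between the most and least utilized resource dimensions small, can be sketched like this (a simplified two-dimensional illustration, not the paper's exact formulation):

```python
# Illustrative placement heuristic: among machines with enough free
# capacity, choose the one whose max-min spread across resource
# dimensions (here CPU and memory, as utilization fractions) would be
# smallest after placing the application.

def place(machines, app):
    best, best_spread = None, None
    for name, (cpu, mem) in machines.items():
        new_cpu, new_mem = cpu + app[0], mem + app[1]
        if new_cpu > 1.0 or new_mem > 1.0:
            continue  # not enough capacity on this machine
        spread = abs(new_cpu - new_mem)  # max-min over the two dimensions
        if best is None or spread < best_spread:
            best, best_spread = name, spread
    return best

machines = {"m1": (0.5, 0.2), "m2": (0.3, 0.4)}
chosen = place(machines, (0.2, 0.2))  # m2 keeps utilization balanced
```

Balancing the dimensions avoids stranding capacity in one resource (e.g. free memory on a CPU-saturated machine), which is how such a heuristic can raise overall utilization without extra migrations.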

Paper Nr: 39
Title:

On Optimizing Resource Allocation and Application Placement Costs in Cloud Systems

Authors:

Cihan Seçinti and Tolga Ovatman

Abstract: The resource utilization problem has been widely studied for cloud systems, where a number of virtualized resources are shared among applications hosted as services. Resource utilization optimization can be confronted at two levels: allocating resources to virtual machines (VMs) across physical machines, or assigning applications to the virtual machines present in the host. With improving virtualization technologies, realizing resource allocation at both levels is becoming more viable using VM reconfiguration and live migration of applications. In this paper we investigate applying resource allocation optimization at these two levels and the emerging trade-off in deciding the appropriate technique to be used. We first analyze the effect of gradually increasing the amount of resources assigned to a virtual machine using VM reconfiguration, and compare our results with fully assigning the host's resources without reconfiguration. Later, we investigate the utilization revenue when application live migration is used for applications with smaller/larger performance needs. Finally, we compare the host utilization for different cost rates between live migration and reconfiguration. Consequently, our analysis results identify the cost rate and application granularity levels at which it is optimal to apply live migration or VM reconfiguration.
Download

Paper Nr: 69
Title:

Analysing the Lifecycle of Future Autonomous Cloud Applications

Authors:

Geir Horn, Keith Jeffery, Jörg Domaschka and Lutz Schubert

Abstract: Though Cloud Computing has found considerable uptake and usage, the body of expertise, methodologies, and tools for the efficient development of distributed Cloud applications in particular is still comparatively small. This is mostly due to the fact that all our methodologies and approaches focus on single users, even single processors, without any active sharing of information. Within this paper we elaborate on which kinds of information are missing from the current methodologies and how such information could in principle be exploited to improve resource utilisation and quality of service, and to reduce development time and effort.
Download

Paper Nr: 74
Title:

QoS- and Security-aware Composition of Cloud Collaborations

Authors:

Olga Wenge, Ulrich Lampe and Ralf Steinmetz

Abstract: While cloud computing promises virtually unlimited resource supplies, smaller providers may not be able to offer sufficient physical IT capacity to serve large customers. A solution is cloud collaborations, in which multiple providers unite forces in order to conjointly offer capacities in the market. Unfortunately, both the QoS and security properties of such a collaboration will be determined by the “weakest link in the chain”, resulting in a trade-off between the cumulative capacity and the non-functional characteristics of a cloud collaboration. In this position paper, we examine how cloud collaborations can be optimally composed in a QoS- and security-aware fashion within a market scenario involving multiple cloud providers and users. We propose a Mixed Integer Programming-based exact optimization approach named CCCP-EXA.KOM. Based on a quantitative evaluation, we find that the practical applicability of CCCP-EXA.KOM is limited to small-scale problem instances and conclude that the development of tailored heuristic approaches is required.
Download
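The “weakest link in the chain” property from the abstract can be made concrete with a small sketch (all provider figures are invented): capacity adds up across a collaboration, while its QoS and security levels are the minimum over its members.

```python
# The capacity/quality trade-off in a cloud collaboration: adding a
# provider always increases cumulative capacity, but can only lower
# (never raise) the collaboration's availability and security level.

def collaboration_profile(providers):
    return {
        "capacity": sum(p["capacity"] for p in providers),
        "availability": min(p["availability"] for p in providers),
        "security_level": min(p["security_level"] for p in providers),
    }

providers = [
    {"capacity": 100, "availability": 0.999, "security_level": 3},
    {"capacity": 300, "availability": 0.990, "security_level": 2},
]
profile = collaboration_profile(providers)
```

Choosing which providers to group, so that cumulative capacity meets demand without dragging the minimum below customers' requirements, is the combinatorial problem the paper formulates as a Mixed Integer Program.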

Paper Nr: 104
Title:

Towards a Security SLA-based Cloud Monitoring Service

Authors:

Dana Petcu and Ciprian Crăciun

Abstract: Following the community's concerns related to security and trust in cloud services, service level agreements (SLAs) are nowadays being revised to include security requirements. In order to speed up their take-up by service providers and consumers, run-time monitoring of security SLAs should be ensured. Several tools for SLA management are available, but most of them deal with performance parameters and do not refer to security. Other tools are available for cloud security monitoring, but they are not currently related or mapped to security SLAs. Aiming to design and develop a security SLA-based cloud monitoring service, which can be deployed or hosted, we identify in this paper the concepts, mechanisms and available tools that can lead to a proper design of such a service, as well as the main barriers to overcome.
Download

Area 5 - Mobile Cloud Computing and Services

Short Papers
Paper Nr: 84
Title:

Utilizing ICN/CCN for Service and VM Migration Support in Virtualized LTE Systems

Authors:

Morteza Karimzadeh, Triadimas Satria and Georgios Karagiannis

Abstract: One of the most important concepts used in mobile networks like LTE (Long Term Evolution) is service continuity. A mobile user moving from one network to another should not lose an on-going service. In cloud-based (virtualized) LTE systems, services are hosted on Virtual Machines (VMs) that can be moved and migrated across multiple networks to locations where these services can be well delivered to mobile users. The migration of (1) the VMs and (2) the services running on such VMs should happen in such a way that the disruption of an on-going service is minimized. In this paper we argue that a technology that can efficiently be used for supporting service and VM migration is ICN/CCN (Information Centric Networking / Content Centric Networking) technology.
Download

Paper Nr: 86
Title:

Applying SDN/OpenFlow in Virtualized LTE to Support Distributed Mobility Management (DMM)

Authors:

Morteza Karimzadeh, Luca Valtulina and Georgios Karagiannis

Abstract: Distributed Mobility Management (DMM) is a mobility management solution in which the mobility anchors are distributed instead of being centralized. DMM can be applied in cloud-based (virtualized) Long Term Evolution (LTE) mobile network environments to (1) provide session continuity to users across personal, local, and wide area networks without interruption, and (2) support traffic redirection when a virtualized LTE entity, such as a virtualized Packet Data Network Gateway (P-GW) running on a virtualization platform, is migrated to another virtualization platform and the on-going sessions supported by this P-GW need to be maintained. In this paper we argue that the enabling technology that can efficiently be used for supporting DMM in virtualized LTE systems is the Software Defined Networking (SDN)/OpenFlow technology.
Download

Paper Nr: 91
Title:

Implementation of Asynchronous Mobile Web Services - Implementation and First Usage

Authors:

Marc Jansen, Javier Miranda and Juan Manuel Murillo

Abstract: Mobile devices are nowadays used almost ubiquitously by a large number of users. 2013 was the first year in which the number of mobile devices sold (tablet computers and mobile phones) exceeded the number of PCs sold, and this trend seems set to continue in the coming years. Additionally, the scenarios in which these kinds of devices are used grow almost day by day. Another trend in modern IT landscapes is the idea of Cloud Computing, which basically allows for a very flexible provision of computational services to customers. Yet, these two trends are not well connected. Of course, there already exists quite a large number of mobile applications (apps) that utilize Cloud Computing based services. The reverse, in which mobile devices provide one of the building blocks for the provision of Cloud Computing based services, is not yet well established. Therefore, this paper concentrates on an extension of a technology that allows standardized Web Services, one of the building blocks of Cloud Computing, to be provided on mobile devices. The extension consists of a new approach that now also allows asynchronous Web Services, in contrast to synchronous ones, to be provided on mobile devices. Additionally, this paper illustrates how the described technology has already been used in an app provided by a business partner.
Download

Paper Nr: 115
Title:

Semantic Data Integration for Ubiquitous Logistics - An Approach Supporting Autonomous Logistics in Urban Environments

Authors:

Stefan Wellsandt, Konstantin Klein, Marco Franke, Karl Hribernik and Klaus-Dieter Thoben

Abstract: This paper introduces an approach for seamless plug & play data integration in a novel urban logistics concept. The logistics concept is called ubiquitous logistics and contains an agent-based perspective on shared logistics resources. Each private and public vehicle participating in the concept, as well as each parcel, features an intelligent agent that may request (or offer) information services from other agents or legacy systems. The technical approach of this paper suggests that each of the heterogeneous data sources delivers additional information that is used to virtually integrate the data in an automated way. This additional information concerns the authentication, data structure and sequence, information that nowadays has to be provided manually. The technical approach is explained using a typical situation from the future urban logistics concept. This situation represents an intelligent agent trying to deliver small goods along a stream of urban commuters.
Download