
Keynote Lectures

Cloud Computing and Big Data Can Improve the Quality of Our Life
Victor Chang, Xi'an Jiaotong-Liverpool University, China

Change Alone is Unchanging - Continuous Context-aware Adaptation of Service-based Systems for Smart Cities and Communities
Paolo Traverso, Center for Information Technology - IRST (FBK-ICT), Italy

In-transit Analytics on Distributed Clouds - Applications and Architecture
Omer Rana, Cardiff University, United Kingdom

At Scale Enterprise Computing
Chung-Sheng Li, Accenture, United States

Software- and Systems Architecture for Smart Vehicles
Cornel Klein, Siemens AG, Germany

 

Cloud Computing and Big Data Can Improve the Quality of Our Life

Victor Chang
Xi'an Jiaotong-Liverpool University
China
 

Brief Bio
Victor Chang is an Associate Professor (Reader) in Information Management and Information Systems at the International Business School Suzhou, Xi'an Jiaotong-Liverpool University, China, where he is also Director of the PhD Programme. He was previously a Senior Lecturer in the School of Computing, Creative Technologies at Leeds Beckett University, UK. He is a Visiting Researcher at the University of Southampton, UK, and an Honorary Associate Professor at the University of Liverpool, UK. He is an expert on Cloud Computing and Big Data in both academia and industry, with extensive experience in related areas since 1998. He completed a PGCert (Higher Education) and a PhD (Computer Science) within four years while working full-time, and he has over 100 peer-reviewed published papers. He won £20,000 in funding in 2001 and £81,000 in funding in 2009, and was involved in part of a £6.5 million project in 2004, part of a £5.6 million project in 2006 and part of a £300,000 project in 2013. He won a 2011 European Identity Award in Cloud Migration for the contributions of his work, and the 2016 European Identity and Cloud Award for the best research project, involving more than 20 collaborators and valued at more than $10 million. He was selected to present his research in the House of Commons in 2011 and won best paper awards in 2012 and 2015. In both his practitioner and academic work he has demonstrated Storage as a Service, Health Informatics as a Service, Financial Software as a Service, Education as a Service, Big Data Processing as a Service, Integration as a Service, Security as a Service, Social Network as a Service, Data Visualization as a Service (Weather Science) and Consulting as a Service in Cloud Computing and Big Data, and his proposed frameworks have been adopted by several organizations. He is the founding chair of international workshops on Emerging Software as a Service and Analytics and on Enterprise Security, and the founding chair of IoTBDS and COMPLEXIS, which have become popular in the research community. He is Editor-in-Chief of the International Journal of Organizational and Collective Intelligence, founding Editor-in-Chief of the Open Journal of Big Data, and an Editor of the highly regarded journal Future Generation Computer Systems (FGCS). He is a reviewer for numerous well-known journals and has published three books on Cloud Computing, available on Amazon. He was a keynote speaker at CLOSER 2015, WEBIST 2015 and ICTforAgeingWell 2015, which were well received, and has given or will give ten international keynotes since the end of 2016. He won the 2017 Outstanding Young Scientist Award this February.


Abstract
The rise of Cloud Computing and Big Data has played an influential role in the evolution of IT services and has made significant contributions to different disciplines. For example, there are ten services that could not be achieved without the combined effort of Cloud Computing and Big Data techniques: Storage as a Service, Health Informatics as a Service, Financial Software as a Service, Business Intelligence as a Service, Education as a Service, Big Data Processing as a Service, Integration as a Service, Security as a Service, Social Network as a Service and Data Visualization as a Service (Weather Science). For each of these services, the keynote speaker will summarize the motivation, methods, results and contributions. He will explain how these services can improve the quality of our life by helping us understand complex biological and physiological science and by ensuring that the best treatments and actions can be adopted. Examples include development projects and successful deliveries in brain segmentation and learning, proteins and body defense mechanisms, tumor studies and DNA sequencing. Research and enterprise contributions to other disciplines include Business Intelligence as a Service, which provides accurate and up-to-date tracking of investment risk and prices, as well as weather data visualization and forecasting to inform the general public about the consequences of extreme weather.



 

 

Change Alone is Unchanging - Continuous Context-aware Adaptation of Service-based Systems for Smart Cities and Communities

Paolo Traverso
Center for Information Technology - IRST (FBK-ICT)
Italy
 

Brief Bio
Paolo Traverso has been the Director of FBK ICT irst, the Centre for Information Technology at FBK (Fondazione Bruno Kessler), since 2007. The Centre has about 200 people working on software and services, cloud computing, embedded systems, content and semantics, and perception and interaction.
He was also CEO of Trento RISE (the Trento Research, Innovation, and Education System) from 2011 until June 2014, the association between FBK and the University of Trento which is part of the European Institute of Innovation and Technology (EIT) in ICT, the EIT ICT Labs.
Paolo joined IRST after working in the advanced technology groups of management information consulting companies in Chicago, London, and Milan, where he led projects for the development of safety-critical systems, data and knowledge management, and service-oriented applications. He has contributed to research in automated planning and service-oriented computing.
He was Program Chair of the International Conference on Automated Planning and Scheduling (ICAPS), General and Program Chair of the International Conference on Service-Oriented Computing (ICSOC), and Program Chair of the Extended Semantic Web Conference (ESWC). His recent research interests are in the monitoring, adaptation and evolution of service-oriented applications, and in the development of new-generation service delivery platforms for improving individual and societal quality of life.


Abstract
Service-based systems must often operate in highly dynamic environments. Consider, for instance, the case of smart cities and communities, i.e., communities of people who actively participate in the creation and use of ICT-based solutions to improve their quality of life within their own city or region. Within a smart city and community, the context in which applications must operate changes continuously, as do the situation, the accessibility of (ICT-based) services, the people, their interactions, requirements, and preferences. Moreover, most often, the only way applications can react to such a changing environment is at run-time, since we cannot predict a priori the different situations, requirements, interactions, and availability of (ICT) services. Continuous, context-aware and incremental adaptation therefore becomes the key enabling property for delivering ICT-based value-added services that cope with the dynamics of a continuously changing environment.
In my talk, I will present some of the compelling needs for context-aware incremental adaptation in service-based applications for smart cities and communities. I will discuss some alternative approaches, some lessons learned from applications we have been working on in this field, and the many related open research challenges that remain.
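As a rough, purely illustrative sketch of the kind of run-time, context-aware adaptation loop the abstract refers to (not the speaker's own framework), the Python fragment below re-binds an application to whichever service is currently available. The names ContextMonitor, select_service and the preference table are hypothetical stand-ins for the monitoring and service-selection facilities a real smart-city platform would provide.

# Minimal sketch of a continuous context-aware adaptation loop (illustration only).
import time
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Context:
    location: str
    available_services: List[str]   # service names reachable right now

class ContextMonitor:
    """Polls the environment for the current context (stubbed here)."""
    def read(self) -> Context:
        return Context(location="city-centre",
                       available_services=["bus-info", "bike-share"])

def select_service(goal: str, ctx: Context) -> Optional[str]:
    """Pick the most preferred service that satisfies the goal in this context."""
    preferences = {"reach-destination": ["metro", "bus-info", "bike-share"]}
    for candidate in preferences.get(goal, []):
        if candidate in ctx.available_services:
            return candidate
    return None   # nothing suitable: a real system would re-plan or degrade gracefully

def adaptation_loop(goal: str, rounds: int = 3, poll_interval: float = 1.0) -> None:
    monitor = ContextMonitor()
    bound_service: Optional[str] = None
    for _ in range(rounds):
        ctx = monitor.read()                    # observe the current context
        chosen = select_service(goal, ctx)      # decide which service fits now
        if chosen != bound_service:             # re-bind only when the context change matters
            print(f"adapting: switching to {chosen!r}")
            bound_service = chosen
        time.sleep(poll_interval)

adaptation_loop("reach-destination")

The point of the sketch is simply that adaptation happens incrementally at run-time, in response to observed context, rather than being planned ahead of deployment.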



 

 

In-transit Analytics on Distributed Clouds - Applications and Architecture

Omer Rana
Cardiff University
United Kingdom
 

Brief Bio

Omer Rana is Professor of Performance Engineering at the Cardiff School of Computer Science & Informatics. He was formerly deputy director of the Welsh eScience Centre at Cardiff University, where he collaborated with a number of scientists working in computational science and engineering. He holds a PhD in "Neural Computing and Parallel Architectures" from Imperial College (University of London, UK). His research interests are in high performance distributed computing, data mining and analysis, and multi-agent systems. Prior to joining Cardiff University he worked as a software developer with Marshall BioTechnology Limited in London on projects with a number of international biotech companies, such as Merck, Hybaid and Amersham International. He has been involved in the Distributed Programming Abstractions and 3DPAS themes at the UK National eScience Institute. He is an associate editor of ACM Transactions on Autonomous and Adaptive Systems and IEEE Transactions on Cloud Computing, series co-editor of the book series "Autonomic Systems" (Birkhauser), and on the editorial boards of Concurrency and Computation: Practice and Experience (John Wiley), the Journal of Computational Science (Elsevier) and the recently launched IEEE Cloud Computing magazine. Along with his co-researchers, he received the best paper award at CLOSER 2013 (Aachen, Germany).


Abstract
The increasing deployment of sensor network infrastructures (in applications ranging from environmental monitoring, "Smart Cities", energy demand forecasting and social media analysis to emergency response) has made large volumes of data available, leading to new challenges in storing, processing, analysing and transferring such data. This is especially true when data from multiple sensors are pre-processed prior to delivery to users. Where such data are processed in-transit (i.e., between data capture and delivery to a user) over a shared distributed computing infrastructure, and given the increasing availability of software defined networks, it is necessary to provide Quality of Service (QoS) guarantees to each user. This talk provides: (i) scenarios of applications with these characteristics; (ii) a computational architecture for supporting QoS for multiple concurrent scientific workflow data streams processed (prior to delivery to a user) over a shared infrastructure. The architecture is used to demonstrate how a streaming pipeline with intermediate data size variation (inflation/deflation) can be supported and managed using a dynamic control strategy at each node. Such a strategy supports end-to-end QoS despite variations in data size between the nodes involved in the workflow enactment process.
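As a purely illustrative sketch (not the architecture presented in the talk), the Python fragment below shows one way a per-node dynamic control strategy could react to data-size inflation/deflation and latency: the node tracks the output/input size ratio per batch and scales its local workers against a latency budget. The class name, thresholds and the worker model are assumptions.

# Illustrative per-node controller for an in-transit streaming pipeline stage.
from collections import deque

class NodeController:
    """Scales local workers from observed batch latency and tracks how much
    each batch inflates or deflates the data passing through this node."""
    def __init__(self, latency_budget_ms: float, max_workers: int = 8):
        self.latency_budget_ms = latency_budget_ms
        self.max_workers = max_workers
        self.workers = 1
        self.recent_ratios = deque(maxlen=20)   # output_size / input_size per batch

    def record_batch(self, input_bytes: int, output_bytes: int, latency_ms: float) -> None:
        self.recent_ratios.append(output_bytes / max(input_bytes, 1))
        self._adapt(latency_ms)

    def _adapt(self, latency_ms: float) -> None:
        inflation = sum(self.recent_ratios) / len(self.recent_ratios)
        if latency_ms > self.latency_budget_ms and self.workers < self.max_workers:
            self.workers += 1          # falling behind the budget: add local capacity
        elif latency_ms < 0.5 * self.latency_budget_ms and self.workers > 1:
            self.workers -= 1          # comfortably within budget: release capacity
        # A fuller controller would also notify downstream nodes when data
        # inflates (ratio > 1) so they can provision before the burst arrives.
        print(f"workers={self.workers}, mean inflation ratio={inflation:.2f}")

# Example: one node records a batch that doubled in size and missed its budget.
node = NodeController(latency_budget_ms=200.0)
node.record_batch(input_bytes=1_000_000, output_bytes=2_000_000, latency_ms=250.0)

End-to-end QoS would then come from coordinating such per-node decisions along the whole workflow, which is where the shared-infrastructure and software-defined-network aspects mentioned in the abstract come in.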



 

 

At Scale Enterprise Computing

Chung-Sheng Li
Accenture
United States
 

Brief Bio
Chung-Sheng Li is currently the director of the Commercial Systems Department, PI for the IBM Research Cloud Initiatives, and the executive sponsor of the Security 2.0 strategic initiative. He has been with the IBM T.J. Watson Research Center since May 1990. His research interests include cloud computing, security and compliance, digital libraries and multimedia databases, knowledge discovery and data mining, and data center networking. He has authored or coauthored more than 130 journal and conference papers and received the best paper award from IEEE Transactions on Multimedia in 2003. He is both a member of the IBM Academy of Technology and a Fellow of the IEEE. He received a BSEE from National Taiwan University, Taiwan, R.O.C., in 1984, and MS and PhD degrees in electrical engineering and computer science from the University of California, Berkeley, in 1989 and 1991, respectively.


Abstract
“At scale computing” is becoming one of the most disruptive trends in recent technology history. It is the central theme of the post-distributed-computing era and is likely to be the primary driver of innovation and IT transformation during the coming decade.
At scale computing describes a computing environment that may involve a very large amount of computation or transactions, large amounts of data, a large number of users, or any combination of the above. Recent examples of at scale computing include AWS/Google datacenters, Amazon/Alibaba e-commerce activities, Facebook, Google search, Netflix, etc. These stand in direct contrast with more traditional at scale enterprises such as Walmart, UPS and VISA/Mastercard/Amex.
At scale computing is a paradigm shift from traditional distributed computing. It is both a blessing and a curse: very large scale computation offers new opportunities to rethink how to achieve resiliency without having to pay for redundancy. It promotes the possibility of “fail in place”, as opposed to having to perform field service on a failed component as soon as the failure happens. It also offers a great opportunity to amortize the investment involved in optimization and customization. At the same time, operating at scale imposes severe challenges on just about every aspect of a large scale system, including the system architecture, hardware, software and continuous operations. It stresses the importance of continuous availability, extreme scalability, and a maniacal focus on cost efficiency (capex and opex), as every penny counts in an at scale environment.
In this keynote, we will discuss this new “at scale” era, which we believe has started and is perhaps well under way. Nearly all exciting innovations in enterprise computing during the past decade originated from this environment, and these innovative technologies were often motivated by at scale business models. All of these at scale business models started small. They invariably found the recipe for creating a virtuous cycle between the “addictive” services or content they provided and the community and ecosystem created on top of it. In such a virtuous cycle, addictive content or services attract new members into the community (either as direct consumers or as developers), who in turn generate more content or develop more services. This virtuous cycle leads to a winner-takes-all outcome in nearly all case studies.



 

 

Software- and Systems Architecture for Smart Vehicles

Cornel Klein
Siemens AG
Germany
 

Brief Bio

Cornel Klein is Software Architect and Project Manager for the Technology & Innovation Project “eCar” at Siemens Corporate Technologies in Munich. He is project manager and coordinator of RACE (Robust and reliable Automotive Computing Environment for future eCars), which aims at the development of an advanced automotive E/E architecture. In various positions at Siemens he has been responsible for software technologies and software-based innovations. Starting his career in 1998 at Siemens Public Networks, he has gained extensive knowledge of communication networks, embedded systems, IT platforms and software architecture, as well as of application domains such as eCars and smart environments. He holds a master's degree and a PhD in Computer Science from the Technical University of Munich.


Abstract
Both fully automated driving and electromobility are promising approaches to addressing the challenges of mobility with regard to sustainability, urbanization and demographic change. Moreover, they change the usage patterns and concepts for future passenger vehicles and enable new kinds of applications for special-purpose vehicles, for instance in logistics. Recently, many projects and demonstrators have shown the feasibility and tremendous potential of driving automation for building such “Smart Vehicles”. However, we are convinced that current vehicle architectures are insufficient for the cost-effective development, validation and deployment of automation functions. We therefore present results and research directions in software and systems architectures, and discuss their relevance for the efficient implementation of new vehicle functions and innovative applications.


