Keynote Lectures

Agile Infrastructure at CERN - Moving 9'000 Servers into a Private Cloud
Helge Meinhard, Independent Researcher, Switzerland

Probabilistic Resource Allocation based on a Large Deviations Principle
Paulo Gonçalves, ENS Lyon, LIP, INRIA-DANTE, France

OpenNebula - Latest Innovations in Private Cloud Computing
Ignacio Martín Llorente, Independent Researcher, Spain


Agile Infrastructure at CERN - Moving 9'000 Servers into a Private Cloud

Helge Meinhard
Independent Researcher
Switzerland

Brief Bio
Helge Meinhard studied physics and obtained his Ph.D. in experimental particle physics in 1991. After two years of research at CERN, he provided computing support to the CHORUS and ATLAS experiments. In 2001 he joined CERN's IT Department, where until 2009 he was responsible for server and storage procurements. Since 2010 he has been the head of the "Platform and Engineering Services" group, which provides large-scale services to the physics and engineering communities, most notably a batch computing service running on about 4'500 servers.


Abstract

CERN, the large particle physics laboratory near Geneva (Switzerland), is home to the Large Hadron Collider (LHC) and its four data-intensive experiments. In 2012, a new particle was detected that was identified in 2013 as the long-sought Higgs boson. The experiments deliver some 30 Petabytes of data per year to the CERN computer centre, which runs almost 10'000 servers to process them. In the coming years, data rates will grow, and so will the number of servers. In order to cope with this increase without a corresponding increase in personnel, CERN reviewed the way it runs its computer centre and launched the "Agile Infrastructure" project with the aim of organising services into horizontal layers; moving to a private cloud is an integral part of the strategy. The major work areas are configuration, cloud computing, and monitoring. Wherever possible, existing free, open-source solutions have been selected as components with well-confined functionality, with CERN adding only a thin layer of glue to make the components work together, so that any component can easily be replaced should a better solution become available.
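
To make the glue-and-replace idea concrete, here is a minimal, purely illustrative Python sketch (not CERN's actual code; all names are hypothetical): each well-confined function hides behind a small interface, and the glue depends only on that interface, so a component can be swapped without touching the rest of the stack.

    from abc import ABC, abstractmethod

    class ConfigBackend(ABC):
        """One well-confined piece of functionality: configure a node."""
        @abstractmethod
        def apply(self, node: str, role: str) -> None: ...

    class PuppetBackend(ConfigBackend):
        """Adapter around the chosen open-source component (here: Puppet)."""
        def apply(self, node: str, role: str) -> None:
            print(f"[puppet] applying role {role!r} on {node}")

    # The "glue" knows only the interface, not the component behind it,
    # so replacing Puppet later means writing one new ConfigBackend adapter.
    def provision(node: str, backend: ConfigBackend) -> None:
        backend.apply(node, "role::batch_worker")  # illustrative role name

    provision("node001.example.org", PuppetBackend())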

Cornerstones of the new system include Puppet and PuppetDB, Foreman, OpenStack, Elasticsearch and Kibana, and Flume, among others. As of January 2014, some 7'000 servers (physical or virtual) are managed with Puppet, and some 2'100 physical servers have been moved into the private cloud. The presentation will discuss the motivation, describe the current choice of tools, report on the experience gained so far, and give an outlook on what is expected to come.
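
As one concrete example of how a monitoring cornerstone such as Elasticsearch is typically used, the sketch below queries a _search endpoint with Python's standard library. The host, index and field names are assumptions for illustration; only the generic Elasticsearch search API is relied upon.

    import json
    import urllib.request

    # Hypothetical monitoring index and field; the _search endpoint and
    # the "match" query are standard Elasticsearch, the rest is assumed.
    query = {"query": {"match": {"host": "node001"}}, "size": 10}
    req = urllib.request.Request(
        "http://localhost:9200/monitoring/_search",
        data=json.dumps(query).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        for hit in json.load(resp)["hits"]["hits"]:
            print(hit["_source"])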



Probabilistic Resource Allocation based on a Large Deviations Principle

Paulo Gonçalves
ENS Lyon, LIP, INRIA-DANTE
France
http://perso.ens-lyon.fr/paulo.goncalves/

Brief Bio

Paulo Gonçalves (PhD, HDR) is an associate researcher at the Institut National de Recherche en Informatique et en Automatique (INRIA). His PhD work was on the analysis of non-stationary processes, with emphasis on time-frequency and time-scale representations of signals. After a post-doc at Rice University (1994-96) on closely related topics, he focused his research on fractal analysis, and more specifically on the estimation of scaling laws from wavelet decompositions. He then broadened the scope of his investigations to more general wavelet-based statistical inference and to graph-based semi-supervised classification. In the process, he favoured network metrology as the main application domain for his methodological research, dealing with the statistical characterisation and modelling of traffic for protocol quality assessment and control. Recently, he returned to wavelet theory and, more generally, to the multi-scale analysis of dynamic graph signals.


Abstract
Many online services undergo a highly volatile workload due to a naturally elastic demand. This is notably the case for video-on-demand systems, where buzz (or flash-crowd) periods produce time-localised overloads that the currently allocated resources may fail to absorb. The problem is then to find a good trade-off between over-provisioning resources (unnecessarily expensive) and being unable to serve the current demand (causing user dissatisfaction). To tackle this dynamic issue, we present an original approach that integrates the key notion of time scale into the probabilistic characterisation of the load level. The results may then allow refining the clauses of a Service Level Agreement between operators and users of cloud environments.

(This is joint work with S. Shubhabrata, T. Begin and P. Loiseau.)
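
For background, the classical large deviations estimate below shows how the time scale enters such a probabilistic characterisation; this is the standard form of the principle, not necessarily the authors' exact formulation. For a stationary load process \(X_t\) and a provisioned capacity \(c\) above the mean load, the probability that the average load over a window of length \(T\) exceeds \(c\) decays exponentially in \(T\):

    \[
    \Pr\!\Big(\frac{1}{T}\int_0^T X_t\,\mathrm{d}t \;\ge\; c\Big)
    \;\approx\; e^{-T\,I(c)},
    \qquad
    I(c) \;=\; \sup_{\theta}\big(\theta c - \Lambda(\theta)\big),
    \]
    where \(\Lambda(\theta) = \lim_{T\to\infty}\tfrac{1}{T}\log
    \mathbb{E}\,e^{\theta\int_0^T X_t\,\mathrm{d}t}\)
    is the scaled cumulant generating function.

Fixing a tolerable overflow probability \(\varepsilon\) in an SLA then couples the provisioning level and the time scale through \(T\,I(c)\approx\log(1/\varepsilon)\): the longer the window over which overloads must be rare, the less extra capacity is needed for the same guarantee.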



OpenNebula - Latest Innovations in Private Cloud Computing

Ignacio Martín Llorente
Independent Researcher
Spain

Brief Bio
Dr. Llorente is Director of the OpenNebula Project and CEO & co-founder of C12G Labs. He is an entrepreneur and researcher in the field of cloud and distributed computing, has managed several international projects and initiatives on Cloud Computing, and has authored many articles in leading journals and conference proceedings. Dr. Llorente is one of the pioneers and world's leading authorities on Cloud Computing. He has held several appointments as an independent expert and consultant for the European Commission, several companies and national governments. He has given many keynotes and invited talks at the main international events in cloud computing, has served on several groups of experts on Cloud Computing convened by international organizations such as the European Commission and the World Economic Forum, has contributed to several Cloud Computing panels and roadmaps, and currently serves on the Editorial Boards of IEEE Transactions on Cloud Computing and the Journal of Grid Computing - From Grids to Cloud Federations. He founded and co-chaired the Open Grid Forum Working Group on the Open Cloud Computing Interface, and has participated in the main European projects in Cloud Computing. Llorente holds a Ph.D. in Computer Science (UCM) and an Executive MBA (IE Business School), and is a Full Professor (Catedrático) and Head of the Distributed Systems Architecture Group at UCM.


Abstract

The OpenNebula Project is an open-source project delivering a simple but feature-rich solution for building enterprise clouds and virtualized data centers. OpenNebula includes advanced features for integration, management, scalability, reliability and accounting that many enterprise IT shops need for cloud adoption. With thousands of deployments worldwide, OpenNebula has a very wide user base that includes leading companies in banking, technology, telecom and hosting, as well as research and supercomputing centers. The keynote will describe the latest developments incorporated into OpenNebula to address the high-performance and scalability challenges of large-scale production environments.
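
For orientation, the sketch below contacts an OpenNebula front-end through its XML-RPC interface, the project's core API, using only Python's standard library. The endpoint, credentials and parameter values are illustrative assumptions; consult the XML-RPC reference for the deployed OpenNebula version.

    import xmlrpc.client

    # Assumed front-end address; 2633 is OpenNebula's usual XML-RPC port.
    ENDPOINT = "http://frontend.example.org:2633/RPC2"
    SESSION = "oneadmin:password"  # "user:password" session string (assumed)

    server = xmlrpc.client.ServerProxy(ENDPOINT)
    # one.vmpool.info(session, filter, start, end, state); the values below
    # (-2 = all VMs, -1/-1 = no paging, -1 = any state) follow OpenNebula's
    # XML-RPC conventions -- verify against your version's reference.
    success, body, *_ = server.one.vmpool.info(SESSION, -2, -1, -1, -1)
    print(body if success else f"error: {body}")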


