Beyond BiG Grid. Dr.ir. Anwar Osseyran, SARA National High Performance Computing Services
BiG Grid was a Must • ATLAS computing requirements over time • 1995: 100 TB, 10^7 MIPS • 2001: 1,900 TB, 7×10^7 MIPS • 2007: 70,000 TB, 55×10^7 MIPS • 2010: LHC start • 2011: 83,000 TB, 61×10^7 MIPS • From the start it was clear that no single center could provide ALL the computing for even one LHC experiment • Similar needs in life science, astronomy, …
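A quick back-of-the-envelope check (not on the slide itself) shows how steep this growth is; the sketch below uses only the figures listed above.

```python
# Growth of ATLAS computing requirements, using the figures from the slide above.
# A minimal sketch to illustrate the scale of the problem.
requirements = {          # year: (storage in TB, compute in MIPS)
    1995: (100, 1e7),
    2001: (1_900, 7e7),
    2007: (70_000, 55e7),
    2011: (83_000, 61e7),
}

first_year, last_year = 1995, 2011
tb0, mips0 = requirements[first_year]
tb1, mips1 = requirements[last_year]

span = last_year - first_year
print(f"Storage grew {tb1 / tb0:.0f}x in {span} years")    # ~830x
print(f"Compute grew {mips1 / mips0:.0f}x in {span} years")  # ~61x
```

An 830-fold growth in storage over sixteen years is the arithmetic behind "no single center could provide all the computing".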
BiG Grid: a National Effort • Partner agreement between BiG Grid and SARA already in 2008 • National Grid infrastructure (Nikhef, RUG, SARA, Philips, …) • To support: LHC (the Dutch Tier-1), LOFAR (Long-Term Archive), life science (the Dutch LSG) • But also the alpha and gamma sciences (humanities and social sciences) • Co-development and support
BiG Grid: Big Effort • 104 resource-request projects (35 cloud), and counting! • More than 30 scientific communities served • From high-energy physics (thousands of scientists) to the Dutch Biobanking Collaboration • SARA effort 06/2007 – 12/2012: Operations: 93k hours; Support and development: 31k hours • BiG Grid investment in the HPC ecosystem: contribution to the Huygens supercomputer; Grid compute & storage, cluster computing, clouds, Big Data
BiG Grid @ SARA: User-Centric Innovation • Rapid prototyping and proof-of-concept in pilots with concrete use cases (small number of users) • Pre-production with a limited number of users • Production for the larger user community • Three examples: HPC Cloud, BeeHub storage for science, Hadoop Big Data services • Started by SARA, adopted early and co-funded by BiG Grid
HPC Cloud services in NL • Dynamic and flexible self-service • Define your own working environment • Define your own HPC cluster • Workloads: scale up workstation loads; scale up complex workflows • High user productivity
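To make "define your own HPC cluster" concrete, here is a minimal sketch of what a self-service request could look like. The endpoint, token, and field names are hypothetical illustrations, not the actual HPC Cloud API.

```python
import requests

# Hypothetical self-service call: describe the cluster you want and submit it.
# Endpoint, token, and field names are illustrative, not the real HPC Cloud API.
cluster_spec = {
    "name": "my-analysis-cluster",
    "nodes": 4,                    # number of virtual machines
    "cores_per_node": 8,
    "memory_gb_per_node": 64,
    "image": "ubuntu-hpc-base",    # user-chosen working environment
}

response = requests.post(
    "https://cloud.example.org/api/clusters",   # placeholder URL
    json=cluster_spec,
    headers={"Authorization": "Bearer <your-token>"},
    timeout=30,
)
response.raise_for_status()
print("Cluster request accepted:", response.json())
```

The point of the self-service model is exactly this: the user declares the environment, rather than filing a ticket and waiting for an operator.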
BeeHub: Storage for Scientists • Integral, scalable cloud storage service • Easy to use, store, and share data; based on WebDAV • For all desktop OSs and distributed computing infrastructures • Integrated with SURFconext
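Because BeeHub is plain WebDAV, any HTTP client can talk to it. Below is a minimal sketch using Python's requests library; the URL and credentials are placeholders, not real BeeHub paths.

```python
import requests
from requests.auth import HTTPBasicAuth

# BeeHub speaks standard WebDAV (plain HTTP verbs), so an HTTP client suffices.
# URL and credentials below are placeholders.
base = "https://beehub.example.org/home/alice"   # placeholder WebDAV root
auth = HTTPBasicAuth("alice", "secret")

# Create a collection (WebDAV's MKCOL = "make directory").
requests.request("MKCOL", f"{base}/results", auth=auth).raise_for_status()

# Upload a file with a plain HTTP PUT ...
with open("analysis.csv", "rb") as f:
    requests.put(f"{base}/results/analysis.csv", data=f, auth=auth).raise_for_status()

# ... and download it again with GET.
r = requests.get(f"{base}/results/analysis.csv", auth=auth)
r.raise_for_status()
print(len(r.content), "bytes retrieved")
```

This is also why it works from "all desktop OSs": Windows, macOS, and Linux can all mount a WebDAV share natively.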
Hadoop Big Data Services • Open-source Apache Hadoop, along with a framework for MapReduce jobs over the data • Move the processing to the data • Seamless scalability • 2009: pilot Hadoop on the cloud • 2010: test cluster available for scientists • 2011: funding for production services; proof of concept with 20 users • 2012: production started with 72 machines (8 cores, 8 TB storage, and 64 GB RAM each), 3 devops engineers, and a team of consultants • Lots of interest from science and business
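The canonical MapReduce example is a word count; with Hadoop Streaming the mapper and reducer can be small Python scripts like the sketch below (file names and HDFS paths are illustrative).

```python
#!/usr/bin/env python3
# Classic MapReduce word count for Hadoop Streaming.
# Invoked as mapper and reducer (file names and HDFS paths are illustrative):
#   hadoop jar hadoop-streaming.jar \
#       -mapper "wordcount.py map" -reducer "wordcount.py reduce" \
#       -input /data/books -output /results/wordcount
import sys

def mapper():
    # Emit one "word<TAB>1" line per word; Hadoop sorts these by key
    # before they reach the reducer.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so all counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```

This also illustrates "move processing to the data": the same small script runs on every node that holds a block of the input, instead of the data moving to one machine.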
Life after BiG Grid… • The BiG Grid project ends this year • (SURF)SARA is committed to continuing its effort: with the current infrastructure partners, with suppliers and support partners, and with YOU… • We will strive for a seamless transition to the new situation • Beyond BiG Grid: sector clouds and Big Data…
Beyond the Grid @ CERN: Helix Nebula, the Science Cloud • Establish a sustainable European cloud computing infrastructure that provides stable computing capacity and services that elastically meet demand • Pilot phase: proof-of-concept deployments with three commercial cloud providers (Atos, CloudSigma, T-Systems)
Dutch Federative Clouds: A Prosumer Approach • Sustainability: federation of existing data centers • Scalability & economy of scale: maximize the utilization factor • Security and performance: sector-specific security in place • Community support: customer intimacy & sector expertise; community collaboration and sharing • Also a basis for better partnerships with public clouds
To End With: Big Data is Changing Science • Big Data is changing science, medicine, business, and technology • A whole new way of doing science: correlation supersedes causation and coherent models or unified theories • The biggest challenge for science & business is not storing the data but making sense of it