Grid Developers’ use of FermiCloud (to be integrated with master slides)
Grid Developers' Use of Clouds
• Storage Investigation
• OSG Storage Test Bed
• MCAS Production System
• Development VM
• OSG User Support
• FermiCloud Development
• MCAS integration system
Storage Investigation: Lustre Test Bed
[Diagram: the Lustre Server VM (3 OST & 1 MDT) runs on an FCL Dom0 host with 8 CPUs and 24 GB RAM, backed by 2 TB on 6 disks; FG ITB clients (7 nodes, 21 VMs) and 7 Lustre client VMs mount the file system over Ethernet]
Test configurations:
• ITB clients vs. Lustre Virtual Server
• FCL clients vs. Lustre Virtual Server
• FCL + ITB clients vs. Lustre Virtual Server
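To make the client side of this setup concrete, here is a minimal sketch of what each Lustre client VM does: mount the file system and time a bulk write. The MGS address fcl-mds@tcp0, the file system name fcl, and the mount point are hypothetical placeholders, not the actual FermiCloud configuration.

```python
import os
import subprocess
import time

MGS = "fcl-mds@tcp0"      # hypothetical MGS node, not the real FermiCloud host
FSNAME = "fcl"            # hypothetical Lustre file system name
MOUNT_POINT = "/mnt/lustre"
BLOCK = 1024 * 1024       # 1 MiB per write
N_BLOCKS = 1024           # 1 GiB total

def mount_lustre():
    """Mount the Lustre file system (standard 'mount -t lustre' syntax; needs root)."""
    os.makedirs(MOUNT_POINT, exist_ok=True)
    subprocess.check_call(
        ["mount", "-t", "lustre", f"{MGS}:/{FSNAME}", MOUNT_POINT])

def time_write(path):
    """Write N_BLOCKS blocks of 1 MiB and return the rate in MB/s."""
    buf = os.urandom(BLOCK)
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(N_BLOCKS):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())   # make sure data actually reached the servers
    return N_BLOCKS / (time.time() - start)

if __name__ == "__main__":
    mount_lustre()
    rate = time_write(os.path.join(MOUNT_POINT, "testfile"))
    print(f"write rate: {rate:.1f} MB/s")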
ITB clients vs. FCL Virtual Server (Lustre)
Changing the disk and network drivers on the Lustre server VM shows that the network driver dominates: with Virt I/O drivers, the server delivers 350 MB/s read and 70 MB/s write (vs. 250 MB/s write on bare metal).
[Charts: read and write I/O rates for four configurations: Bare Metal; Virt I/O for Disk and Net; Virt I/O for Disk, default for Net; Default drivers for Disk and Net]
Takeaway: use Virt I/O drivers for the network.
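The four driver configurations in the charts correspond to virtio settings in the guest definition. A minimal sketch, assuming a libvirt-managed hypervisor and a hypothetical domain name lustre-server, of how one could inspect which disk and network drivers a VM is actually using (the test bed itself may have been configured by other means):

```python
import libvirt                       # libvirt-python bindings
import xml.etree.ElementTree as ET

DOMAIN = "lustre-server"             # hypothetical VM name

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName(DOMAIN)
tree = ET.fromstring(dom.XMLDesc(0))

# Disk driver: bus='virtio' on <target> selects the Virt I/O block driver.
for disk in tree.findall("./devices/disk"):
    target = disk.find("target")
    print("disk", target.get("dev"), "bus:", target.get("bus"))

# Network driver: <model type='virtio'/> selects the Virt I/O NIC;
# a missing <model> element means the hypervisor default is in use.
for iface in tree.findall("./devices/interface"):
    model = iface.find("model")
    print("net model:", model.get("type") if model is not None else "default")
```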
21 Nova clients vs. bare metal & virtual server
• Read, ITB clients vs. bare metal: BW = 12.55 ± 0.06 MB/s (1 client vs. bare metal: 15.6 ± 0.2 MB/s)
• Read, ITB clients vs. virtual server: BW = 12.27 ± 0.08 MB/s (1 ITB client: 15.3 ± 0.1 MB/s)
• Read, FCL clients vs. virtual server: BW = 13.02 ± 0.05 MB/s (1 FCL client: 14.4 ± 0.1 MB/s)
Virtual clients on-board (on the same machine as the virtual server) are as fast as bare metal for read; the virtual server is almost as fast as bare metal for read.
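The quoted numbers are a mean and error over the concurrent clients. A minimal sketch of that aggregation, assuming the error bar is the standard error of the mean; the per-client values below are made up for illustration, since the raw measurements are not on the slide:

```python
import statistics

# Hypothetical per-client read bandwidths in MB/s (21 concurrent clients).
bw = [12.4, 12.6, 12.5, 12.7, 12.5, 12.6, 12.4, 12.6, 12.5, 12.7,
      12.5, 12.6, 12.4, 12.5, 12.6, 12.5, 12.7, 12.4, 12.6, 12.5, 12.6]

mean = statistics.mean(bw)
# Standard error of the mean: sample standard deviation / sqrt(N).
sem = statistics.stdev(bw) / len(bw) ** 0.5
print(f"BW = {mean:.2f} ± {sem:.2f} MB/s")
```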
OSG Storage Test Bed: official test bed resources
• 5 nodes purchased ~2 years ago
• 4 VMs on each node (2 SL5 VMs, 2 SL4 VMs)
Test systems:
• BeStMan-gateway/xrootd
  • BeStMan-gateway, GridFTP-xrootd, xrootdfs
  • Xrootd redirector
  • 5 data server nodes
• BeStMan-gateway/HDFS
  • BeStMan-gateway/GridFTP-hdfs, HDFS name nodes
  • 8 data server nodes
• Client nodes (4 VMs):
  • Client installation tests
  • Certification tests
  • Apache/Tomcat to monitor/display test results, etc.
OSG Storage Test Bed: additional test bed resources
• 6 VMs on nodes outside of the official test bed
Test systems:
• BeStMan-gateway with disk
• BeStMan-fullmode
• Xrootd (ATLAS Tier-3, WLCG demonstrator project)
• Various test installations
• In addition, 6 "old" physical nodes are used as a dCache test bed; these will be migrated to FermiCloud
MCAS Production System
FermiCloud hosts the production server (mcas.fnal.gov)
• VM config: 2 CPUs, 4 GB RAM, 2 GB swap
• Disk config:
  • 10 GB root partition for the OS and system files
  • 250 GB disk image as a data partition for MCAS software and data
  • The independent disk image makes it easier to upgrade the VM
• On VM boot-up: the data partition is staged and auto-mounted in the VM
• On VM shutdown: the data partition is saved
• Work in progress: restart the VM without having to save and stage the data partition to/from central image storage (see the sketch below)
• MCAS services hosted on the server:
  • Mule ESB
  • JBoss
  • Berkeley DB XML
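A minimal sketch of the boot/shutdown lifecycle described above. The image paths and the loop-mount approach are assumptions for illustration; the actual staging is done by the FermiCloud infrastructure:

```python
import os
import shutil
import subprocess

# Hypothetical locations; the real FermiCloud image repository and
# device names are not given on the slide.
CENTRAL_IMAGE = "/repo/mcas-data.img"       # copy kept in central image storage
LOCAL_IMAGE = "/var/lib/vm/mcas-data.img"   # copy staged next to the VM
MOUNT_POINT = "/data"

def stage_and_mount():
    """On VM boot-up: stage the data image and auto-mount it in the VM."""
    shutil.copyfile(CENTRAL_IMAGE, LOCAL_IMAGE)
    os.makedirs(MOUNT_POINT, exist_ok=True)
    # Attach the image file as a file system through a loop device.
    subprocess.check_call(["mount", "-o", "loop", LOCAL_IMAGE, MOUNT_POINT])

def save_on_shutdown():
    """On VM shutdown: unmount and save the data image back to central storage."""
    subprocess.check_call(["umount", MOUNT_POINT])
    shutil.copyfile(LOCAL_IMAGE, CENTRAL_IMAGE)
```

Under this model, the work-in-progress item amounts to skipping both copy steps and keeping the 250 GB data image attached across restarts, which avoids moving 250 GB to and from central image storage on every cycle.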