Armenian ATLAS Tier 3 Site • ATLAS team of the A.I. Alikhanyan National Science Laboratory • H. Oganezov • XXIV International Symposium on Nuclear Electronics & Computing
Outline • E-Infrastructure hierarchy • Networking and Computational facilities in Armenia • ASNET AM Network • Armenian National Grid Initiative • Armenian ATLAS site (AM-04-YERPHI) • History (Background) • Site information • Monitoring and job statistics • Conclusion and Issues
E-Infrastructure hierarchy • [Diagram: layered hierarchy from the research network, which provides fast interconnection and advanced services, to the Grid middleware layer, which provides a distributed environment for sharing computing power, storage, instruments and databases, up to the experiment VOs (ATLAS, ALICE) and their sites (INFN-BOLOGNA-T3, INFN-GENOVA, UTD-HEP, AM-04-YERPHI)]
Outline • E-Infrastructure hierarchy • Networking and Computational facilities in Armenia • ASNET AM Network • Armenian National Grid Initiative • Armenian ATLAS site (AM-04-YERPHI) • History (Background) • Site information • Monitoring and job statistics • Conclusion and Issues
ASNET AM Network • Developed and operated since 1994 by the Institute for Informatics and Automation Problems (IIAP NAS RA). • ASNET-AM serves as the foundation for advanced computing applications in Armenia. • Links academic, scientific, research and educational organizations. • Provides advanced network services to 60 organizations in the major cities of Armenia, such as Yerevan, Ashtarak, Byurakan, Abovian and Gyumri. • External connectivity for ASNET-AM is provided by GÉANT and by channels rented from local telecom companies (Arminco, ADC).
Armenian National Grid Initiative • The Agreement of Establishment of the Armenian Grid Joint Research Unit was signed in September 2007. • Main goals: • To establish an Armenian presence in international Grid infrastructures • To provide operations and security services • To promote the uptake of Grid technologies in Armenia, the interconnection of existing and future resources, and the deployment of new applications • To support research in Grid and Global Computing
Armenian National Grid Initiative • Computational Resources Topology [diagram]
Armenian National Grid Initiative • Core Services [diagram, including the Grid access point]
Outline • E-Infrastructure hierarchy • Networking and Computational facilities in Armenia • ASNET AM Network • Armenian National Grid Initiative • Armenian ATLAS site (AM-04-YERPHI) • History (Background) • Site information • Monitoring and job statistics • Conclusion and Issues
History (Background) WLCG ATLAS Site Deployment in AANL • 2007 • The AANL site was certified as a “production site” of WLCG • Due to the low quality of the network connection (small bandwidth and frequent outages), the site was put in suspended mode • 2008 • Development of a national Grid infrastructure, ArmGrid. The ArmGrid project is funded by the Armenian government and international funding organizations (ISTC, FP7) • 2009 • The “Black Sea Interconnection” was activated to link the academic and research networks of the South Caucasus countries (Armenia, Georgia and Azerbaijan) to the European GÉANT2 network. This opened up new possibilities for ATLAS collaborators at AANL • 2010 • First ATLAS-South Caucasus Software/Computing Workshop & Tutorial. It helped establish contacts between ATLAS collaborators and computing people in the South Caucasus countries, and led to a better understanding of ADC requirements and configuration principles • 2011 • September: ATLAS Computing visit to AANL; discussions between representatives of ADC and AANL were very useful for making progress on the establishment of AM-04-YERPHI as an ATLAS Grid site • October 20: Site status as an ATLAS Grid site was approved by the ICB
Site information • Computational resources • Model: Dell PE1950 III (Quad-Core Xeon) • CPU: 6 nodes × 2 CPUs per node × 4 cores per CPU = 48 cores • HDD: 160 GB • RAM: 8 GB • For local analysis • CPU: 6 nodes × 2 CPUs per node × 2 cores per CPU = 24 cores • Storage capacity • 50 TB • Site core services • MAUI/Torque PBS • SRM v1, v2 • Supported VOs: • ATLAS • ALICE • ArmGrid
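As a rough illustration of how a local analysis job might be handed to the MAUI/Torque batch system listed above, here is a minimal sketch; the queue name, resource limits, and analysis script are assumptions for illustration, not the site's actual configuration:

```python
# Minimal sketch: submit a job to a Torque/PBS batch system with qsub.
# The queue name ("atlas"), resource requests, and analysis script are
# illustrative assumptions, not the actual AM-04-YERPHI configuration.
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/bash
#PBS -N atlas_local_analysis
#PBS -q atlas                 # assumed queue name
#PBS -l nodes=1:ppn=1         # one core on one worker node
#PBS -l walltime=04:00:00     # assumed time limit
cd "$PBS_O_WORKDIR"
./run_analysis.sh             # hypothetical user analysis script
"""

def submit_job() -> str:
    """Write the job script to a file and submit it; return the job ID."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(JOB_SCRIPT)
        path = f.name
    # On success, qsub prints the new job's ID on stdout.
    result = subprocess.run(["qsub", path], capture_output=True,
                            text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("Submitted:", submit_job())
```

Once queued, MAUI decides when and where the job runs according to the scheduler's priority and fair-share policies.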
Site Information • Name • AM-04-YERPHI • Functionality • Grid analysis (broker-off), low-priority production and local analysis • Tier3gs • Cloud association • NL cloud • Regional support • JINR • VOMS group • atlas/am • Technical support • 2 sysadmins (shared: 0.3 FTE)
Site information • ATLAS VO support • DPM 10 TB (NFS) • ATLASSCRATCHDISK 2 TB • ATLASLOCALGROUPDISK 7 TB • ATLASPRODDISK 1 TB • Frontier/Squid cluster • xrootd cluster
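Since the slide mentions a Frontier/Squid cluster, a simple way to verify that such a cache is answering requests is an HTTP probe through the proxy. In this sketch the proxy host name is a hypothetical placeholder (Squid commonly listens on port 3128), and the target URL is just any cacheable HTTP address:

```python
# Minimal health probe for a Frontier/Squid web cache.
# The proxy endpoint is a hypothetical placeholder, not the site's
# real host; Squid commonly listens on port 3128.
import urllib.request

SQUID_PROXY = "http://squid.example.am:3128"  # assumed endpoint
TEST_URL = "http://frontier.cern.ch/"          # any cacheable HTTP URL

def probe_squid(proxy: str, url: str, timeout: float = 10.0) -> bool:
    """Return True if an HTTP GET through the proxy succeeds."""
    handler = urllib.request.ProxyHandler({"http": proxy})
    opener = urllib.request.build_opener(handler)
    try:
        with opener.open(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

if __name__ == "__main__":
    print("squid reachable:", probe_squid(SQUID_PROXY, TEST_URL))
```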
Monitoring and job statistics • Running jobs [plot; annotated periods: upgrade to SL6 and EMI-2, commissioning, network hardware component replacement work]
Monitoring and job statistics • Job failure by category and exit code [plot] • MAUI and queue configuration should be optimized • Software application and CVMFS (communication) problems • Communication problems
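A breakdown like the one above can be produced by tallying exit codes from an accounting log. The sketch below assumes an invented two-column log format (job ID, exit code); it is not the actual Torque or PanDA log layout:

```python
# Sketch: tally job exit codes from a simple accounting log.
# The "job_id exit_code" log format is an invented example, not the
# actual Torque/PanDA log layout.
from collections import Counter

def tally_exit_codes(path: str) -> Counter:
    """Count how many jobs finished with each exit code."""
    counts: Counter = Counter()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2 and parts[1].lstrip("-").isdigit():
                counts[int(parts[1])] += 1
    return counts

if __name__ == "__main__":
    for code, n in tally_exit_codes("jobs.log").most_common():
        label = "success" if code == 0 else "failure"
        print(f"exit {code:>4} ({label}): {n} jobs")
```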
Monitoring and job statistics • Efficiency and wall-clock consumption [plot] • Good efficiency for testing and MC production jobs
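For reference, the efficiency shown in such dashboards is conventionally the ratio of CPU time actually used to wall-clock time consumed; the slide does not give the exact dashboard definition, so this is the usual convention rather than a confirmed one:

```latex
% Usual convention for batch job efficiency
\mathrm{efficiency} \;=\; \frac{\sum_{\text{jobs}} t_{\mathrm{CPU}}}{\sum_{\text{jobs}} t_{\mathrm{wall}}}
```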
Data Transfers • Transfers of 1 GB files finished successfully; problems with transfers of bigger files • SRM errors: transfer failures for big files, which succeeded only after a large number of attempts
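The workaround described above, repeating large transfers until they succeed, amounts to a retry loop with backoff. A minimal sketch follows; the copy command is a placeholder, since the real transfers go through SRM/FTS tooling:

```python
# Sketch: retry a file transfer with exponential backoff.
# The command is a placeholder; real ATLAS transfers are driven by
# SRM/FTS tools, not a generic copy command.
import subprocess
import time

def transfer_with_retries(cmd: list[str], max_attempts: int = 5) -> bool:
    """Run the transfer command, retrying with exponential backoff."""
    delay = 30.0  # seconds before the first retry
    for attempt in range(1, max_attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return True
        if attempt < max_attempts:
            print(f"attempt {attempt} failed; retrying in {delay:.0f} s")
            time.sleep(delay)
            delay *= 2  # back off: 30 s, 60 s, 120 s, ...
    return False

if __name__ == "__main__":
    # Placeholder command for illustration only.
    ok = transfer_with_retries(["cp", "big_file.root", "/dest/"])
    print("transfer succeeded:", ok)
```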
Outline • E-Infrastructure hierarchy • Networking and Computational facilities in Armenia • ASNET AM Network • Armenian National Grid Initiative • Armenian ATLAS site (AM-04-YERPHI) • History (Background) • Site information • Monitoring and job statistics • Conclusion and issues
Conclusion and issues • The AM-04-YERPHI site is operational now. • As the site administrators become more experienced, problems are resolved faster.
Conclusion and issues • Continuous monitoring of the infrastructure by the system administrators ensures early error detection; diagnostics help to identify problems. • Many configuration problems were fixed during commissioning and maintenance, but the job scheduling configuration could still be improved. • Ensuring a reliable network is critical. Issues that still need addressing include: • Reliable connectivity and rapid transport of the data used in the Grid environment • Related work focused on strengthening fault tolerance.