
St. Petersburg State University Computing Centre and ALICE Results

Learn about the computing and communication centre at St. Petersburg State University and the progress made in the Data Challenge for ALICE in 2004.



Presentation Transcript


  1. St. Petersburg State University computing centre and the 1st results in the DC for ALICE. V. Bychkov, G. Feofilov, Yu. Galyuck, A. Zarochensev, V. I. Zolotarev, St. Petersburg State University, Russia
  Contents
  • SPbSU computing and communication centre in Petrodvorets (structure, capabilities, communications, general activity)
  • The progress achieved in summer 2004 in St. Petersburg in the DC for ALICE
  • Future plans
  20.09.2004, ALICE week off-line day

  2. SPbSU Informational-computing center: history. For historical reasons, Saint Petersburg State University consists of two geographically separated parts: one is located in the central part of St. Petersburg, the other in Petrodvorets, about 40 kilometres away. For this reason, and because many other educational centres of St. Petersburg are located in the central part, an optical channel from Petrodvorets to the central part of St. Petersburg was created during 1992-2004.

  3. SPbSU Informational-computing center External net - channels

  4. SPbSU Informational-computing center External net - channels

  5. SPbSU Informational-computing center: 2002-2003

  6. Dynamics of performance of SPbSU computational center (MFlops)

  7. SPbSU Informational-computing center: net structure (2003)

  8. SPbSU Informational-computing center: cluster photos

  9. SPbSU Informational-computing center: software evolution
  • 1999 – OS FreeBSD 3.3
  • 2000 – OpenPBS as the user job scheduling system (a minimal submission sketch follows below)
  • 2000-2001 – OS Red Hat 6.2; quantum-chemistry calculation systems CRYSTAL 95 and GAMESS
  • 2001-2004 – design and development of the Portal of High Performance Computing (WEBWS; by our local legend, the name comes from the words "web work space")
  • 2002 – first cluster for studying grid technologies and grid applications
  • 2003 – participation in the AliEn project (site http://alice.spbu.ru)
  • 2004 – participation in the Data Challenge
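To make the scheduling step above concrete, here is a minimal sketch of handing a batch job to an OpenPBS-style scheduler; the queue name, resource request and job script contents are assumptions for illustration only, not the centre's actual configuration.

# Minimal sketch of submitting a batch job to an OpenPBS-style scheduler
# like the one listed above.  The queue name, resource request and job
# script are assumed examples, not the centre's actual setup.
import subprocess
import tempfile

JOB_SCRIPT = """#!/bin/sh
#PBS -N test_job
#PBS -q workq
#PBS -l nodes=1:ppn=2
#PBS -l walltime=01:00:00
cd "$PBS_O_WORKDIR"
echo "job started on $(hostname)"
"""

def submit(script_text):
    """Write the job script to a temporary file and hand it to qsub."""
    with tempfile.NamedTemporaryFile("w", suffix=".pbs", delete=False) as f:
        f.write(script_text)
        path = f.name
    # qsub prints the new job identifier (e.g. "123.server") on stdout.
    result = subprocess.run(["qsub", path], capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    print("submitted:", submit(JOB_SCRIPT))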

  10. SPbSU Informational-computing center: software evolution 2003-… In 2003, collaboration with IBM started. Thanks to this collaboration we changed many parts of the informational-computing centre:
  • New network-monitoring system
  • New storage system with a SAN (storage area network)
  • New portal development technologies – portlets and WebSphere

  11. SPbSU Informational-computing center: net monitoring status. [Diagram of the Tivoli-based monitoring: Tivoli Data Warehouse, Tivoli Decision Support, Tivoli Enterprise Console, Tivoli NetView, DB2; reports and statistics, structure and monitoring, visualization, events.]

  12. SPbSU Informational-computing center: storage system status
  • HACMP (High Availability Cluster Multi-Processing)
  • Monitoring and management of the network – Tivoli SAN Manager
  • Management of the storage elements: IBM Total Storage Manager, IBM Total Storage Specialist, Brocade Advanced Web Tools
  • Archiving, backup and restore system: TSM – Tivoli Storage Manager
  • RDBMS: DB2 UDB 8.1
  • Content Management System CM 8.1

  13. SPbSU Informational-computing center: storage photos

  14. Portal of the High Performance Computing (WEBWS)

  15. SPbSU Informational-computing center. Portal of the High Performance Computing (WEBWS). WEBWS consists of three main parts:
  • Informational part – monitoring of the computational resources, based on Ganglia (an open-source product), and monitoring of user task queues (see the sketch below)
  • Work space – the users' work space for developing and launching tasks
  • Administrative part
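As an illustration of how the informational part could obtain its data, the sketch below polls Ganglia's gmond daemon, which by default publishes the cluster state as XML on TCP port 8649; the endpoint and the metric chosen are assumptions, not WEBWS's actual code.

# Minimal sketch (not WEBWS's real code) of polling Ganglia: the gmond
# daemon publishes cluster state as XML on TCP port 8649 by default.
# The endpoint below is an assumed example.
import socket
import xml.etree.ElementTree as ET

def read_gmond_xml(host="localhost", port=8649):
    """Read the raw XML dump that gmond sends on connect."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)            # keep bytes so ET honours the declared encoding

def report_load(xml_bytes):
    """Print the one-minute load average reported for each host."""
    root = ET.fromstring(xml_bytes)
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            if metric.get("NAME") == "load_one":
                print(host.get("NAME"), metric.get("VAL"))

if __name__ == "__main__":
    report_load(read_gmond_xml("alice.spbu.ru"))   # hypothetical endpoint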

  16. WEBWS logical structure. [Diagram: informational part, WEBWS work space, administrative part.]

  17. WEBWS logical structure. [Diagram: Internet, clusters, ADM DB, WEBWS DB.]

  18. WEBWS logical structure. [Diagram: WEBWS server, interface, PBS server, authorization system, WEBWS DB.]
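A hedged sketch of the flow this diagram suggests, assuming a SQLite stand-in for the WEBWS DB and a trivial authorization check; none of this is the portal's actual code.

# Sketch only: check the user against the portal's own authorization data,
# forward the prepared job script to the PBS server, and record the
# returned job id in the WEBWS DB.  Function names, the SQLite stand-in
# and the trivial authorization check are illustrative assumptions.
import sqlite3
import subprocess

def is_authorized(user, conn):
    row = conn.execute("SELECT 1 FROM users WHERE name = ?", (user,)).fetchone()
    return row is not None

def submit_for_user(user, script_path, conn):
    if not is_authorized(user, conn):
        raise PermissionError(user + " is not registered in WEBWS")
    # qsub prints the new job identifier on stdout.
    job_id = subprocess.run(["qsub", script_path],
                            capture_output=True, text=True, check=True).stdout.strip()
    conn.execute("INSERT INTO jobs (user, job_id) VALUES (?, ?)", (user, job_id))
    conn.commit()
    return job_id

if __name__ == "__main__":
    db = sqlite3.connect("webws.db")       # stand-in for the real WEBWS DB
    db.execute("CREATE TABLE IF NOT EXISTS users (name TEXT PRIMARY KEY)")
    db.execute("CREATE TABLE IF NOT EXISTS jobs (user TEXT, job_id TEXT)")
    db.execute("INSERT OR IGNORE INTO users VALUES ('demo')")
    print(submit_for_user("demo", "job.pbs", db))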

  19. WEBWS modules. [Diagram: WEBWS server; modules for user info, session info, user projects info, WS, Crystal, server info, …]

  20. WEBWS monitoring part - ganglia

  21. WEBWS monitoring part - ganglia

  22. SPbSU Informational-computing center: some plans for the future
  • Continue the collaboration with IBM
  • Continue the development of WEBWS
  • …
  • Continue the ALICE Data Challenge
  • Parton String Model in parallel mode and physics performance analysis for ALICE
  • Participation in MammoGrid

  23. SPbSU in Data Challenge 2004
  • 2002: Globus Toolkit 2.4 was installed, tests started (a connectivity-test sketch follows below)
  • July 2003: AliEn was installed (P. Saiz)
  • July 2004: start of test jobs in the grid cluster "alice"
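The sketch below illustrates the kind of basic connectivity test typically run after a Globus Toolkit 2.x installation, using the standard grid-proxy-init and globus-job-run commands; the gatekeeper contact string is an assumed example, not the cluster's documented endpoint.

# Hedged sketch of a basic Globus Toolkit 2.x connectivity test: create a
# proxy certificate, then ask the remote gatekeeper to run a trivial
# command via globus-job-run.  The gatekeeper contact is an assumption.
import subprocess

GATEKEEPER = "alice.spbu.ru"   # hypothetical gatekeeper contact

def run(cmd):
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout.strip()

if __name__ == "__main__":
    # grid-proxy-init normally prompts interactively for the key passphrase.
    subprocess.run(["grid-proxy-init"], check=True)
    # If GRAM accepts the job, this prints the remote host's name.
    print(run(["globus-job-run", GATEKEEPER, "/bin/hostname"]))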

  24. Cluster "alice". [Diagram: alice.spbu.ru – AliEn services; alice09.spbu.ru – SE, CE (PBS server); alice02.spbu.ru-alice08.spbu.ru – worker nodes.]

  25. Configuration of the cluster in July 2004:
  • alice: 512 MB RAM, PIII 1x733 MHz CPU
  • alice09: 256 MB RAM, Celeron 1x1200 MHz CPU
  • alice02-08: 512 MB RAM (512 MB swap), PIII 2x600 MHz CPU, 2x4.5 GB SCSI HDD

  26. Configuration of the cluster in September 2004 (upgraded):
  • alice: 512 MB RAM, PIII 1x733 MHz CPU
  • alice09: 256 MB RAM, 40 GB + 0.3 TB HDD, Celeron 1x1200 MHz CPU
  • alice02-08: 1 GB RAM (4 GB swap), PIII 2x600 MHz CPU (only one CPU is used), 40 GB IDE HDD

  27. Available disk space, 01.07-19.09

  28. Running jobs on the SPbSU CE from 01.07 to 19.09 (min 1 job, max 7 jobs)

  29. Started jobs on the SPbSU CE from 01.07 to 19.09

  30. Ganglia monitoring of alice cluster.

  31. Problems and questions
  • The information on started jobs and on running jobs does not seem to be correlated.
  • There is also no correlation with our own scripts for monitoring started and running jobs (a sketch of such a cross-check follows below).
  • We will study these problems later.
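A minimal sketch of the kind of local cross-check such monitoring scripts could perform, assuming the default qstat column layout: sample the queue periodically and log queued versus running job counts for later comparison with what the central monitoring reports.

# Hedged sketch of a local cross-check for the discrepancy noted above:
# sample the PBS queue with plain `qstat` at regular intervals and log how
# many jobs are queued versus running.  The qstat column layout and the
# sampling interval are assumptions.
import subprocess
import time
from datetime import datetime

def queue_counts():
    """Return (queued, running) counts parsed from default qstat output."""
    out = subprocess.run(["qstat"], capture_output=True, text=True, check=True).stdout
    queued = running = 0
    for line in out.splitlines()[2:]:      # skip the two header lines (assumed layout)
        fields = line.split()
        if len(fields) >= 5:
            state = fields[4]              # 'S' column: Q = queued, R = running
            queued += state == "Q"
            running += state == "R"
    return queued, running

if __name__ == "__main__":
    while True:
        q, r = queue_counts()
        print(datetime.now().isoformat(), "queued =", q, "running =", r, flush=True)
        time.sleep(300)                    # sample every 5 minutes (assumed)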

  32. Plans for the ALICE DC. SPbSU is planning to continue its DC2004 participation with the following resources:
  • alice: 512 MB RAM, 40 GB HDD, PIII 1x733 MHz CPU
  • alice09: 256 MB RAM, 40 GB + 0.3 TB HDD, Celeron 1x1200 MHz CPU
  • alice02-08: 1 GB RAM (4 GB swap), PIII 2x600 MHz CPU (both CPUs are used), 40 GB IDE HDD
