
Scientific Linux experience @ INFN/Trieste


Presentation Transcript


  1. Scientific Linux experience @ INFN/Trieste B.Gobbo – Compass R.Gomezel - T.Macorini - L.Strizzolo INFN - Trieste

  2. Outline of Presentation • Introduction • Evaluation phase • SL Test installation • Installation • Configuration • Management • SL and Compass Computing FARM

  3. Before moving to SL • When the end of official support from Red Hat was announced in 2003, all our PCs were running Red Hat Linux 7.3 and 9 • The HEP community was evaluating different roads to follow • Initially we waited for a common answer from the INFN Computing Committee while watching what was happening elsewhere • At our site we tried different distributions based on Red Hat Enterprise Linux, such as: • CentOS 3.1 • Fermi Linux • Scientific Linux 3.0.1

  4. Evaluation of the needs • As time went by we organized a meeting to discuss how to meet the needs of the people involved in the different experiments • It was clear that many distributions were rebuilds of RHEL3 from Red Hat’s source RPMs, stripped of the parts that cannot be redistributed without a license • We needed a distribution well maintained and validated by the HEP community

  5. Scientific Linux – Final test installation • As soon as Scientific Linux 3.0.1 was available we installed it on a few PC clients to gauge user reaction • We got good feedback • So we went on with the installation of SL 3.0.2 • By the time SL 3.0.3 was released we had in production: • 10 PC clients (desktop functionality) • 1 PC with server functionality

  6. Scientific Linux – Final test installation • No problems or complaints from users running their applications and software • We therefore decided to upgrade all PCs running SL to the new SL 3.0.3 and to move more PCs to the new release • Right now we have 30 PC clients and 3 PC servers running SL 3.0.3

  7. Scientific Linux - Installation • Kickstart is used to install SL on the PCs • The kickstart file is generated with buildks, a tool developed at INFN Trieste • Buildks allows us to install and configure Linux PCs automatically based on group-specific policy: • a specific network group configuration • type of PC: multiboot or single boot • ...
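The buildks tool itself is local to INFN Trieste and not reproduced in the slides; purely as an illustration, a minimal kickstart file of the kind such a tool generates could look like the sketch below (server name, paths, partition sizes and package list are all assumptions):

```shell
# Illustrative kickstart sketch -- NOT the actual buildks output.
# Booted with e.g.:  linux ks=ftp://installserver/ks/desktop.cfg
install
url --url ftp://installserver/pub/sl303/i386
lang en_US
keyboard us
clearpart --all --initlabel
part /boot --size 100
part swap  --size 1024
part /     --size 1 --grow
%packages
@ Base
%post
# group-specific policy (network group, multiboot vs. single boot, ...)
# would be applied here by the generated post-install script
```

A generator like buildks would fill in the group-specific parts (partitioning, package set, %post actions) from its policy database.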

  8. Scientific Linux - Management • Installed machines are currently managed via Linux-update, a tool developed locally • Linux-update allows us to modify, in a centralized way: • configuration files • specific installations • RPM package management • However, YUM is easier and more powerful for managing RPM packages

  9. Scientific Linux – Configuration (1/2) • A local repository has been created for the SL 3.0.3 distribution • Every night a mirror is refreshed via lftp, copying the distribution from ftp://ftp.scientific.linux.org/linux/scientific/303/i386/ • The distribution is accessible locally via anonymous FTP, open only to nodes on the local network
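Such a nightly mirror can be driven by a cron entry; a sketch, assuming a local destination path and schedule (the upstream URL is the one named above):

```shell
# /etc/cron.d/sl-mirror -- illustrative sketch; local path and schedule are assumptions.
# Every night at 02:30, mirror the SL 3.0.3 tree into the anonymous-FTP area with lftp.
30 2 * * * root lftp -e 'mirror -e /linux/scientific/303/i386 /var/ftp/pub/sl303/i386; quit' ftp.scientific.linux.org
```

lftp's `mirror -e` also deletes local files that have disappeared upstream, keeping the local tree an exact copy of the distribution.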

  10. Scientific Linux – Configuration (2/2) • Linux PCs are configured as “YUM clients” of the local FTP server hosting the distribution • Updates run at PC start-up and via a cron job, using the simple command yum update • Kernel updates are not handled by the automatic tool • After a test phase, the kernel is installed manually
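A sketch of what such a “YUM client” setup involves, assuming an invented server name, path and schedule (yum of that era reads its repositories from /etc/yum.conf):

```shell
# /etc/yum.conf fragment -- illustrative; server name and path are assumptions
[sl303-local]
name=Scientific Linux 3.0.3 - local mirror
baseurl=ftp://ftpserver.ts.infn.it/pub/sl303/i386

# /etc/cron.d/yum-update -- nightly run of the update; kernel packages are
# excluded because kernels are tested and then installed by hand
15 3 * * * root yum -y --exclude='kernel*' update
```

The same `yum update` command would also run from a boot-time script to cover machines that were off overnight.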

  11. Scientific Linux – Release upgrade • Moving from SL 3.0.2 to 3.0.3 was smooth and painless • A script was created and run on all PCs with SL 3.0.2 installed • They were updated to the new release using mainly yum update plus a few additional steps • At the end, only a reboot was necessary (for the new kernel)
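The upgrade script itself is not reproduced in the slides; a hedged sketch of the sequence described (repoint yum at the 3.0.3 tree, update, reboot) might be:

```shell
#!/bin/sh
# Illustrative sketch only -- the actual INFN Trieste script is not published.
# Repoint the yum configuration from the 3.0.2 tree to the 3.0.3 tree
# (assumed repository paths; written via a temp file for portability):
sed 's|/sl302/|/sl303/|' /etc/yum.conf > /etc/yum.conf.new &&
    mv /etc/yum.conf.new /etc/yum.conf
# Pull in all updated packages from the new release
yum -y update
# ...a few site-specific fixups would go here...
# Finally reboot so the new kernel takes effect
shutdown -r now
```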

  12. ACID (The COMPASS Trieste Compute Farm) • Current Environment • Hardware: • 32 PC clients, 4 disk servers, 1 tape server + a small tape library (STK L40), 1 Oracle server, 1 supervisor and software-repository server. All are dual-processor machines • Software: • OS: Red Hat Linux 7.3 (except the Oracle server, which runs Red Hat AS 2.1) • Just a few packages patched for local needs: kernel (NFS over TCP, 32k R/W size, F.Collin patches to the tape drivers), OpenAFS, OpenSSH (local setup), CASTOR (CERN software; small code changes, local setup) B. Gobbo

  13. ACID (The COMPASS Trieste Compute Farm) • Software • Monitoring: BigBrother 1.9e (Quest Software) • Resource management: Grid Engine 5.3 (Sun open-source software) • HSM: CASTOR 1.7.1.5 (CERN software) • RPM upgrades via the LinuxUpdate tool (developed at INFN Trieste) B. Gobbo

  14. ACID (cont.d) • Upgrades to be done before the end of the year • Hardware: • ~6 more PC client nodes (with Opteron?) • Software: • OS: move to Scientific Linux (CERN version under test) • 2 client nodes already migrated for tests • 1 server installed with Red Hat EL3 • All needed software already ported • Experiment software porting basically finished • Initially we had problems with Grid Engine 6.0: the AFS token was not exported (a major problem, as user home directories are under AFS) and qsub lacked support for AFS credentials. Grid Engine 6.0 now works anyhow. B. Gobbo

  15. ACID (cont.d) • Software • A few changes are needed to “adapt” the distribution to the local environment • Mainly removing some “too-CERN-related” items • We keep the CERN-built kernel • We are still considering “pure” SL as an alternative • In that case a few packages need to be added and the kernel has to be rebuilt • We keep a local mirror of the SLC and SL repositories • Following the CERN model: upgrades via APT (from the local repository) • We had a look at QUATTOR too • But the farm is not that big; probably such a powerful tool is not needed B. Gobbo
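Under that CERN-style model the clients pull upgrades from the local mirror with apt-rpm; an illustrative sources.list entry and upgrade command (hostname, path and component names are assumptions, not the actual site configuration):

```shell
# /etc/apt/sources.list fragment for apt-rpm -- illustrative only
rpm ftp://mirror.ts.infn.it/pub slc303/i386 os updates

# routine upgrade from the local repository
apt-get update && apt-get -y upgrade
```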
