RCF Storage and Computing for AnDY
Tom Throwe
30 March 2012
Overview
• As a RHIC experiment, AnDY will be supported by the RHIC Computing Facility (RCF) with:
  • Private networking between the facility and the AnDY counting house
  • HPSS storage
  • AFS storage
  • NFS storage
  • Computing hardware
• The level of support is to be determined
Existing Computing
• To date, AnDY has made use of a rack (30 nodes) of recycled RCF computers
• The machines are located in the AnDY counting house and have been in heavy use since the beginning of RHIC Run-11
• The cluster of dual-core, dual-CPU machines uses a Hadoop distributed file system (HDFS) and Condor to manage the data and jobs on the nodes (see the staging sketch below)
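As an illustration of how raw files might be staged onto the counting-house cluster's Hadoop file system, here is a minimal sketch. It assumes the standard `hadoop fs` command-line client is on the PATH; the directory paths and the `.daq` file pattern are hypothetical placeholders, not actual AnDY conventions.

```python
#!/usr/bin/env python
"""Stage local raw files into the cluster's HDFS (sketch; all paths hypothetical)."""
import glob
import subprocess

LOCAL_BUFFER = "/data/buffer"    # hypothetical local disk buffer
HDFS_DEST = "/andy/run11/raw"    # hypothetical HDFS destination directory

def hdfs(*args):
    """Run a 'hadoop fs' subcommand and raise if it fails."""
    subprocess.check_call(["hadoop", "fs"] + list(args))

def stage_files():
    # Create the destination directory; ignore the error if it already exists.
    subprocess.call(["hadoop", "fs", "-mkdir", HDFS_DEST])
    # Copy each local raw-data file into HDFS.
    for path in sorted(glob.glob(LOCAL_BUFFER + "/*.daq")):  # hypothetical file pattern
        hdfs("-put", path, HDFS_DEST + "/")
    # List what landed, as a simple sanity check.
    hdfs("-ls", HDFS_DEST)

if __name__ == "__main__":
    stage_files()
```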
HPSS Storage
• A direct fiber link between the counting house and the RCF HPSS servers, previously used by the BRAHMS experiment, will be reactivated
• Raw data from AnDY will be transferred from a disk buffer in the counting house to an HPSS server in the RCF over this fiber connection, using the standard HPSS pftp client (see the sketch below)
• The raw data will be stored in an HPSS COS (Class of Service) assigned to the AnDY experiment
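The transfer itself could be scripted in many ways; below is a minimal sketch of driving the HPSS pftp client from Python, assuming a pftp client that accepts ftp-style commands on standard input. The executable name, server name, directories, and file pattern are assumptions for illustration, not the actual AnDY configuration, and site-specific authentication is omitted.

```python
#!/usr/bin/env python
"""Push raw data files into HPSS with the pftp client (sketch; all names hypothetical)."""
import glob
import subprocess

HPSS_HOST = "hpss-server"        # hypothetical HPSS server name
LOCAL_BUFFER = "/data/buffer"    # hypothetical counting-house disk buffer
HPSS_DIR = "/home/andy/raw"      # hypothetical HPSS destination directory

def transfer(files):
    # Build an ftp-style command script for pftp to read on stdin.
    commands = ["bin", "cd " + HPSS_DIR]
    for path in files:
        commands.append("put " + path)
    commands.append("quit")

    # Assumes a pftp client invoked as 'pftp <host>' that reads commands
    # from stdin; authentication details are site-specific and omitted.
    proc = subprocess.Popen(["pftp", HPSS_HOST], stdin=subprocess.PIPE)
    proc.communicate(("\n".join(commands) + "\n").encode())
    return proc.returncode

if __name__ == "__main__":
    raw_files = sorted(glob.glob(LOCAL_BUFFER + "/*.daq"))  # hypothetical file pattern
    print("pftp exited with status %d" % transfer(raw_files))
```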
Non-HPSS Storage
• There will be four types of disk storage made available to the AnDY experiment:
  • Home directories in NFS – this space will be backed up
  • AFS storage for common software
  • NFS-mounted central storage for common files and results files
  • Distributed disk on the compute nodes to support running jobs, especially those with high I/O requirements
RCF Computing for AnDY
• The AnDY experiment will have dedicated hardware for data analysis
• The amount of dedicated hardware is to be determined
• In addition, AnDY users will make use of the Condor “general queue” for opportunistic use of idle compute cycles on the PHENIX and STAR farm nodes (a submission sketch follows)
• AnDY has greatly benefited from the use of the general queue over the last few months
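For reference, a minimal sketch of submitting a batch of jobs to Condor from Python is shown below. The submit-description keywords (universe, executable, output, error, log, queue) and the `$(Process)` macro are standard Condor; the executable name is a placeholder, and the `+Experiment` ClassAd attribute is a hypothetical stand-in for whatever site-specific attribute actually routes jobs into the RCF general queue.

```python
#!/usr/bin/env python
"""Write a Condor submit file and hand it to condor_submit (sketch)."""
import subprocess

SUBMIT_FILE = "analysis.sub"   # hypothetical file name

# Standard Condor submit-description keywords; '+Experiment' is a
# hypothetical placeholder, and run_analysis.sh is a placeholder executable.
SUBMIT_TEXT = """\
universe     = vanilla
executable   = run_analysis.sh
arguments    = $(Process)
output       = logs/job.$(Process).out
error        = logs/job.$(Process).err
log          = logs/job.log
+Experiment  = "andy"
queue 10
"""

def submit():
    with open(SUBMIT_FILE, "w") as f:
        f.write(SUBMIT_TEXT)
    # condor_submit parses the description file and queues the jobs.
    subprocess.check_call(["condor_submit", SUBMIT_FILE])

if __name__ == "__main__":
    submit()
```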
AnDY Computing Summary
• AnDY requires resources from the RCF:
  • Hardware
    • Fiber connection to the counting house
    • Farm compute nodes
  • Storage
    • HPSS, NFS, AFS and distributed
  • Infrastructure
    • Condor and the “general queue”
• The level of RCF resources is a management decision