Explore the integration of storage systems with advanced networks for efficient data movement in large scientific infrastructures. Learn about the challenges, achievements, and potential performance enhancements in the realm of High Energy and Nuclear Physics data systems. Discover how shared networks compare to dedicated pipelines and the future vision for seamless data transfer in cutting-edge research.
Abstract GRID Storage Element
[Diagram: on the grid side, files are staged in and out of the Storage Element over the WAN; a pool of file servers (FileSrv) serves the data across the LAN to worker nodes, which access it with POSIX-style I/O.]
Don Petravick -- Fermilab
Today’s Practical Storage Elements
• Very, very large commodity infrastructures have been built on LANs and used in HEP.
• Specialized SANs are not generally used in HEP.
• This commodity-LAN model must at least be the starting point for mingling advanced networks with large HENP data systems.
FNAL-CERN Service Challenge
• Just beginning
• Objective: 24x7 production file transfers
• Achieved: 22 TB/day
• Used R&E networks
• This flow alone would have dominated ESnet.
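As a sanity check on the scale of the achieved rate, 22 TB/day works out to roughly 2 Gbit/s sustained (a back-of-the-envelope conversion, assuming decimal terabytes):

```python
# Convert the Service Challenge's sustained rate of 22 TB/day
# into an average line rate in Gbit/s.
terabytes_per_day = 22
bits_per_day = terabytes_per_day * 1e12 * 8
seconds_per_day = 24 * 60 * 60
gbps = bits_per_day / seconds_per_day / 1e9
print(f"{gbps:.2f} Gbit/s sustained")  # ≈ 2.04 Gbit/s
```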
The Service Challenge Dwarfs Any Flow on ESnet
What’s the Potential Performance?
• Many gigabits per second, disk to disk
• This is the Service Challenge for Storage Elements
• SEs must manage concurrent local, GRID, and tape transfers
• Disk-to-disk transfer work gives upper limits
• The Bandwidth Challenge is ongoing in this booth!
R&E Networks in the USA
• National Lambda Rail
• DOE UltraScienceNet
• UltraLight
• LHCNet
• HOPI
• FNAL <-> StarLight (humbly)
Representative Networks
• DOE UltraScienceNet
  – Scheduled availability of 1 and 10 Gbit/s light paths at its POPs
• UltraLight
  – More lambdas
  – Optical switching (Glimmerglass switch controlled by MonALISA)
Shared Networks Compared to Pipes
• Shared networks: the highest performance is achieved only by knowledgeable, careful administration.
• Pipes: advanced networks provide static or dynamic pipes.
• The lower cost of all-optical paths will provide immense amounts of:
  – Congestion-free connectivity
  – Lossless connectivity, except for bit errors
  – Measurably good connectivity
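One way to see why congestion- and loss-free pipes matter: the well-known Mathis et al. approximation bounds single-stream TCP throughput by MSS/(RTT·√p), so even tiny loss rates on a shared path cap throughput far below a lambda's capacity. A small illustration (the path parameters below are assumed values, not figures from the slides):

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate, c=1.22):
    """Approximate upper bound on single-stream TCP throughput
    (Mathis et al.): rate ≈ (MSS * 8 / RTT) * C / sqrt(p)."""
    return (mss_bytes * 8 / rtt_s) * c / math.sqrt(loss_rate)

# Hypothetical transatlantic path: 1460-byte MSS, 100 ms RTT,
# one packet lost in 100,000.
rate = mathis_throughput_bps(1460, 0.100, 1e-5)
print(f"{rate / 1e6:.0f} Mbit/s")  # far below a 10 Gbit/s lambda
```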
Connecting Storage Systems on the LAN to Wide-Area Pipes
Storage Element Integration
• The SE becomes aware of extra bandwidth when a dynamic pipe is available.
• The SE transfers more files in parallel.
• The SE may alter its file transfer protocol.
• The SE must deal with ongoing transfers when the dynamic pipe becomes unavailable.
• The SE may defer and queue work until a pipe is available.
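The behaviors listed above can be sketched as a toy scheduler; every name and limit here is an illustrative assumption, not an actual Storage Element implementation:

```python
from collections import deque

class StorageElement:
    """Toy model of an SE that adapts wide-area transfer parallelism
    to the availability of a dynamic network pipe (hypothetical API)."""

    def __init__(self, base_streams=2, pipe_streams=8):
        self.base_streams = base_streams  # parallel transfers on the shared network
        self.pipe_streams = pipe_streams  # parallel transfers when a pipe is up
        self.pipe_up = False
        self.queue = deque()              # work deferred until capacity exists
        self.active = []                  # transfers currently running

    def submit(self, filename):
        """Queue a file for wide-area transfer and start it if capacity allows."""
        self.queue.append(filename)
        self._schedule()

    def pipe_event(self, up):
        """Called when a dynamic pipe becomes (un)available."""
        self.pipe_up = up
        if not up:
            # Ongoing transfers fall back to the shared network; work
            # beyond the shared-network limit is re-queued at the front.
            while len(self.active) > self.base_streams:
                self.queue.appendleft(self.active.pop())
        self._schedule()

    def _schedule(self):
        limit = self.pipe_streams if self.pipe_up else self.base_streams
        while self.queue and len(self.active) < limit:
            self.active.append(self.queue.popleft())

se = StorageElement()
for i in range(10):
    se.submit(f"file{i}")
print(len(se.active), len(se.queue))  # 2 running, 8 queued
se.pipe_event(True)                   # pipe up: raise parallelism
print(len(se.active), len(se.queue))  # 8 running, 2 queued
se.pipe_event(False)                  # pipe down: shed and re-queue work
print(len(se.active), len(se.queue))  # 2 running, 8 queued
```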
Vision
• The vision is that large-scale science is enabled by systems that move data in a state-of-the-art manner.
• A problem is that software time constants are many years.
• The tactic is to create demand and mutual understanding via interoperation of advanced networks and HEP data systems.