This presentation discusses the challenges and requirements in designing a data center to support genome sequencing over the next 5-7 years. It covers the network, computing, and storage infrastructure needed to handle the increasing data generated by next-generation sequencing technologies, the limitations of available space, and the power and cooling needs of such a data center.
The Genome Sequencing Center's Data Center Plans
Gary Stiehr
garystiehr@wustl.edu
The task
Design a data center to support sequencing over the next 5-7 years.
The challenges
What network, computing, and storage infrastructure will be required to support our efforts over the next 5-7 years? We know what data Sanger-based sequencing methods generate and how to process them. But what about the next-generation sequencing technologies from companies like 454, Solexa/Illumina, Applied Biosystems, and Helicos? How much data will they generate? How much processing will the data require?
The challenges
What about other technologies?
The challenges
So in designing a data center, we found ourselves trying to answer questions not only about the future of network, computing, and storage technologies, but also about the very uncertain future of next-gen sequencing technologies. One thing was for sure: we would need a lot of everything. One other thing was certain: there was not a lot of space to put everything.
The challenges
- Land locked.
- Space limited.
- Dense compute and disk.
- Massive power and cooling needs.
- Massive power and cooling require lots of space.
4444 Forest Park Ave
Initial estimates for computing and storage needs over the next three years were translated into power and cooling requirements. These requirements would have necessitated a $2.5M upgrade to building electrical and chilled water infrastructure. That is before any money was spent on the data center itself, and further expansion of the space was not possible.
The task
222 S Newstead Ave
The University was in the process of acquiring a building across the street from 4444 Forest Park. Estimates for purchase and razing of this space came to about $1M. As an added bonus, this space provided a floor plan about 5 times as large.
Power/Cooling Requirements
Of course, predicting requirements 5-7 years in advance is difficult:
- New computer hardware types.
- New sequencer technologies.
- New projects.
Based on historical purchase trends adjusted for current and anticipated projects, we expect 20 racks per year:
- 2/3 of racks will contain disk @ 8 kW per rack.
- 1/3 of racks will contain CPU @ 25 kW per rack.
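As a quick sanity check, this mix works out to the ~13.7 kW per-rack average cited on the next slide. A minimal sketch of the arithmetic (only the per-rack figures above come from the slides):

    # Weighted average power per rack from the assumed mix:
    # 2/3 of racks are disk at 8 kW, 1/3 are CPU at 25 kW.
    disk_fraction, disk_kw = 2 / 3, 8.0
    cpu_fraction, cpu_kw = 1 / 3, 25.0

    avg_kw_per_rack = disk_fraction * disk_kw + cpu_fraction * cpu_kw
    print(f"Average power per rack: {avg_kw_per_rack:.1f} kW")  # ~13.7 kW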
Data Center Requirements
- Power and cool an average of around 13.7 kW per rack, with some racks up to 25 kW.
- Desire to last at least six years before additional space is needed. With 20 racks per year, we needed to fit at least 120 racks.
- Each rack needs redundant power paths backed by UPS and generator.
- Cooling system needs some redundancy.
- Avoid single points of failure.
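To put the requirement in perspective, a rough full-build-out estimate based on the figures above (the slides do not give a total load figure; this is illustrative only):

    # Rough total IT load at full build-out, assuming the stated averages.
    racks_per_year, years = 20, 6
    avg_kw_per_rack = 13.7            # average from the requirements slide

    total_racks = racks_per_year * years
    total_it_load_kw = total_racks * avg_kw_per_rack
    print(f"{total_racks} racks -> ~{total_it_load_kw / 1000:.2f} MW of IT load")
    # ~1.64 MW of IT load, before cooling and electrical losses are added on top.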
Cooling Options
Chilled water-based:
  - Single closed-loop water piping a single point of failure?
  + Larger initial cost, potential initial unused capacity.
Refrigerant-based (e.g., Liebert XD-series):
  - Refrigerant piping a single point of failure (not sure)?
  - Higher maintenance costs for numerous condensers?
  - Components inside the data center (but shouldn't require much maintenance?).
  + Smaller initial cost, scales as needed (assuming piping is pre-installed).
Cooling Design
- Designed to cool an average of 15 kW per rack (with the ability to cool 25 kW racks in the mix).
- N+1 redundancy of chilled water plants and air handlers.
- Floor grates rather than perforated tiles.
- Hot/cold aisles partitioned with plastic barriers above racks and at the ends of aisles.
- Closed-loop water piping.
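For a sense of scale, the 15 kW per-rack design point across the full 120 racks converts to roughly 500 tons of cooling. The slides do not state a total tonnage; this is just an illustrative conversion:

    # Illustrative cooling-load conversion (total tonnage is not given in the slides).
    racks = 120
    design_kw_per_rack = 15.0     # average cooling design point
    kw_per_ton = 3.517            # 1 ton of refrigeration = 3.517 kW

    total_cooling_kw = racks * design_kw_per_rack
    tons = total_cooling_kw / kw_per_ton
    print(f"~{total_cooling_kw:.0f} kW of heat -> ~{tons:.0f} tons of cooling")
    # ~1800 kW, roughly 510 tons, split across N+1 chilled water plants.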
Electrical Design
Redundant paths all of the way to the utility:
- Dual utility power feeds + 2 MW generator.
- Dual transformers and associated gear.
- Dual UPS (one battery, one flywheel).
- Multiple panels in each RPP (giving each rack access to both UPSs).
Branch circuit monitoring.
A platform (partial second floor) built to hold additional electrical equipment.
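The slides only say that branch circuit monitoring is in place; the sketch below is a hypothetical check, assuming the common practice of alarming when a continuous load exceeds 80% of the breaker rating (the function name and threshold are assumptions, not from the slides):

    # Hypothetical branch-circuit monitoring check.
    def check_branch_circuit(measured_amps: float, breaker_amps: float = 30.0) -> bool:
        """Return True if the circuit is within the assumed 80% continuous-load limit."""
        limit = 0.8 * breaker_amps
        if measured_amps > limit:
            print(f"ALERT: {measured_amps:.1f} A exceeds {limit:.0f} A limit")
            return False
        return True

    check_branch_circuit(26.5)   # over the 24 A limit on a 30 A circuit -> alert
    check_branch_circuit(18.0)   # within limits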
Other Design Elements
- Can withstand 150+ MPH winds.
- Receiving and storage areas.
- Building monitoring integrated with the campus-wide system.
- LEED (Leadership in Energy and Environmental Design) certified.
- Dual fiber paths to connect to existing infrastructure.
Surprises
Out of 16,000 square feet of available floor space (including the platform), only approximately 3,200 square feet is usable data center floor space. Electrical and cooling infrastructure ate most of the space.
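Put another way, only about a fifth of the gross floor space ends up as white space, leaving under 30 square feet per planned rack. A small illustrative calculation from the numbers above:

    # Illustrative density figures derived from the numbers on this slide.
    gross_sqft, usable_sqft, planned_racks = 16_000, 3_200, 120

    print(f"Usable fraction: {usable_sqft / gross_sqft:.0%}")                # 20%
    print(f"Floor space per rack: {usable_sqft / planned_racks:.1f} sq ft")  # ~26.7 sq ft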
Construction
Ground breaking to move-in in less than 1 year (phases 1 and 2).
Phased build out (due to budget/timing); see the capacity sketch below:
Phase 1: 30 racks, 2 chillers, 3 air handlers, 1 generator
Phase 2: 60 racks, 2 air handlers
Phase 3: 90 racks, 1 chiller, 2 air handlers, 1 generator
Phase 4: 120 racks, 1 air handler, 1 generator
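At the stated growth rate of 20 racks per year, each phase's cumulative rack capacity maps onto a rough timeline. A small sketch (the assumption that phases trigger exactly when capacity fills is mine, not from the slides):

    # Rough timeline for when each phase's rack capacity is reached,
    # assuming the steady 20-racks-per-year growth from the requirements slides.
    racks_per_year = 20
    phase_capacity = {1: 30, 2: 60, 3: 90, 4: 120}   # cumulative racks per phase

    for phase, capacity in phase_capacity.items():
        print(f"Phase {phase}: {capacity} racks filled after ~{capacity / racks_per_year:.1f} years")
    # Phase 4 (120 racks) lines up with the six-year planning horizon.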
Other Considerations
- Standard racks, PDUs?
- Power whips to racks--anticipating outlet and PDU types (e.g., 1U vs. blades).
- Initially trying 30 A 208 V 3-pole circuits, power whips with L21-30R connectors.
- Blades with IEC C19/C20, disks with C13/C14.
- Some systems with N+1 power supplies--how to maintain redundancy?
- For C13, 208 V or 120 V?
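One way to reason about the whip choice: a 30 A, 208 V three-phase circuit derated to 80% delivers roughly 8.6 kVA, so a single whip per power path roughly covers an 8 kW disk rack, while a 25 kW blade rack needs several. This is standard circuit math, not a figure from the slides, and it assumes a power factor near 1:

    import math

    # Usable power from one 30 A, 208 V three-phase whip (L21-30R),
    # derated to 80% for continuous loads; power factor assumed ~1.
    volts, amps, derating = 208, 30, 0.8
    whip_kva = math.sqrt(3) * volts * amps / 1000     # ~10.8 kVA
    usable_kva = whip_kva * derating                  # ~8.6 kVA

    for rack_kw in (8.0, 25.0):                       # disk vs. CPU/blade racks
        whips = math.ceil(rack_kw / usable_kva)
        print(f"{rack_kw:>4.0f} kW rack -> {whips} whip(s) per power path")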