
Elektroniikkayhdistys 13.1.2009: Green IT (in English) and CSC's supercomputer environment (in Finnish)






Presentation Transcript


  1. Welcome to CSC! Elektroniikkayhdistys 13.1.2009: Green IT (in English), CSC's supercomputer environment (in Finnish), machine room tour (guided by gestures)

  2. Imagine your toaster being the size of a matchbox!
• 50–150 W on a postage stamp
• Watts/socket ~ constant
• Multicores demand memory: capacity and bandwidth
• Each memory DIMM takes 5–15 W
• Sockets/rack increasing
"Virtualization may offer significant energy savings for volume servers because these servers typically operate at an average processor utilization level of only 5 to 15 percent" (Dietrich 2007, US EPA 2007). "The typical U.S. volume server will consume anywhere from 60 to 90 percent of its maximum system power at such low utilization levels" (AMD 2006, Bodik et al. 2006, Dietrich 2007).
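A minimal back-of-the-envelope sketch of where a node's watts go, using the per-socket and per-DIMM figures above; the socket and DIMM counts are illustrative assumptions, not values from the slide:

    # Rough node power budget from the per-component figures on the slide.
    watts_per_socket = 100      # slide: 50-150 W "on a postage stamp" per CPU socket
    watts_per_dimm = 10         # slide: each memory DIMM takes 5-15 W
    sockets_per_node = 2        # assumption
    dimms_per_node = 8          # assumption

    node_power = sockets_per_node * watts_per_socket + dimms_per_node * watts_per_dimm
    print(f"Estimated node power: {node_power} W")   # ~280 W before fans, disks and PSU losses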

  3. Then multiply your problem by thousands
• A standard rack cabinet equals 0.77 m² / 1.44 m³
• HPC rack density (CPUs & RAM per rack) increases
• Enter the power!
  • current system cabinets 25–40 kW
  • this year 60 kW/cabinet
  • vendors predict 80–100 kW racks in 2–3 years
• It becomes impossible to feed enough air through the cabinet (wind-speed issue)
• Water is ~20 times more efficient a coolant than air (in practice)
• Liquid cooling and massive 2-tonne-plus racks
• Machine rooms face yet another challenge: the sheer mass of the computing infrastructure
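A rough sketch of the "wind-speed issue": the airflow a cabinet needs to carry away its heat at a fixed air temperature rise. The rack power is taken from the slide; the temperature rise is an assumption:

    # Airflow needed to remove rack heat: Q = rho * V_dot * cp * dT  =>  V_dot = Q / (rho * cp * dT)
    rho_air = 1.2        # kg/m^3
    cp_air = 1005        # J/(kg*K)
    rack_power = 60_000  # W, slide: "this year 60 kW/cabinet"
    delta_t = 13         # K air temperature rise across the rack (assumption)

    airflow = rack_power / (rho_air * cp_air * delta_t)       # m^3/s
    print(f"Required airflow: {airflow:.1f} m^3/s per rack")  # ~3.8 m^3/s, hard to push through one cabinet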

  4. Why has GREEN IT become an issue? Is it the price of energy we are talking about?
"At a datacenter level we estimate consumption levels in Western Europe to have exceeded 40 TWh in 2007 and this is expected to grow to more than 42 TWh in 2008 ... which translated into €4.4 billion for entire datacenters" (IDC, London, October 2, 2008)
Source: IDC, U.S. Environmental Protection Agency
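The euro and TWh figures in the IDC quote imply an average electricity price; the division below is only an illustration of how the two numbers relate:

    # Implied average electricity price from the IDC figures quoted above.
    energy_twh = 42                                  # TWh, Western European datacenters, 2008 estimate
    cost_eur = 4.4e9                                 # EUR
    price_per_kwh = cost_eur / (energy_twh * 1e9)    # 1 TWh = 1e9 kWh
    print(f"Implied price: {price_per_kwh:.3f} EUR/kWh")  # ~0.105 EUR/kWh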

  5. GREEN machine rooms? Metrics and equations for machine room efficiency:
• PUE = Power Usage Effectiveness (total facility power / IT power)
• DCiE = Data Center Infrastructure Efficiency = (1 / PUE) * 100%
• And maybe one more coming: DCP = Data Center Productivity (useful work / total facility power)
[Chart: projected data-center electricity use (TWh) for different PUE* scenarios]
Worse yet! Remember Dietrich: "Avg. IT utilization 5–15%, which takes 60–90% of system max power" - then add the overhead!
Source: The Green Grid, *U.S. Environmental Protection Agency (2007)
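A minimal sketch of how the two main metrics relate; the facility and IT power figures are made up for illustration:

    # PUE and DCiE from measured facility power and IT power (figures are illustrative).
    total_facility_power_kw = 800.0   # everything the site draws: IT + cooling + UPS losses + ...
    it_power_kw = 500.0               # power that actually reaches the IT equipment

    pue = total_facility_power_kw / it_power_kw       # Power Usage Effectiveness, always >= 1.0
    dcie = (1.0 / pue) * 100.0                        # Data Center Infrastructure Efficiency, in %
    print(f"PUE = {pue:.2f}, DCiE = {dcie:.1f} %")    # PUE = 1.60, DCiE = 62.5 %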

  6. About machine room efficiency...
• Q: Why can't PUE be 1.0 (the theoretical minimum)?
• A: In order to guarantee the operational environment you will need to:
  • Provide a reliable power supply in the form of: uninterruptible power supplies (UPS), generators, backup batteries, switchgear, cables, rails, ... In general, any electrical component has power losses and an efficiency below 100%; the more of them you put into use, the more power you lose.
  • Create coolant (cool water & air), which usually requires loads of extra energy consumed by: cooling chillers, computer room air conditioning units (CRAC), cooling towers, humidification units, pumps, direct exchange units, ...
Source: The Green Grid, EPA*

  7. How to improve machine room efficiency!
Facility power
• Reduce redundancy wherever applicable! (Tiers 1–4)
• State-of-the-art transformers (>98%)
• State-of-the-art UPS systems (>95%)
• State-of-the-art switchgear, power cables, rails, ...
• Variable-speed chillers, fans, pumps and CRACs
• Modular, upgradable facility approach
Facility cooling (interior)
• Do not over-cool! Tune the air and water temperatures as high as possible*
• Do not over-size your cooling gear; efficiency is worse at low usage levels
• Hot/cold aisle approach (air)
• Reduce the area / air volume to be cooled
• Liquid-cooled / closed racks (water is ~15x more efficient than air)
*ASHRAE
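A sketch of why even "state-of-the-art" components still add up to a visible overhead: losses along the power path multiply, and cooling comes on top. The IT load and the cooling fraction below are assumptions; the component efficiencies are the slide's figures:

    # Power-path losses compound: facility power = IT power / (transformer_eff * ups_eff) + cooling.
    transformer_eff = 0.98        # state-of-the-art transformer (slide: >98%)
    ups_eff = 0.95                # state-of-the-art UPS (slide: >95%)
    it_power_kw = 500.0           # illustrative IT load
    cooling_fraction = 0.35       # cooling power as a fraction of IT power (assumption)

    power_path_kw = it_power_kw / (transformer_eff * ups_eff)   # what must be fed in to deliver the IT load
    facility_kw = power_path_kw + cooling_fraction * it_power_kw
    print(f"PUE ~ {facility_kw / it_power_kw:.2f}")             # ~1.42 even with efficient gear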

  8. And improving further...
Facility cooling (exterior): access to a cold water supply, hence no need for large chillers
• District/remote cooling from the local energy company?
• Heat dissipation fed back into the district heating system etc. (more complex?)
• Large HPC sites consider CHP plants of their own to create power and cooling
• Economizer or water-side free cooling (in a moderate or mild climate region)
• Get the cool water from a river, deep lake, the sea or a groundwater source and maybe return it slightly warmer
• Cool/cold (<15 / <7 Celsius) outside air (nights, winter time)
• Permafrost, ice/snow - how likely? Feasibility?

  9. Green computing systems? (Green500.org)
• In computing technology, GREEN is defined as computational operations achieved per watt consumed, i.e. the MFlops/Watt ratio (the higher, the greener).
• State-of-the-art technology exceeds 530 MFlops/Watt
  • IBM PowerXCell and BlueGene systems
  • Top result of 535 MFlops/Watt
• Top-dog / petascale systems out there: see the difference in architectures and how it affects the power consumption:
  • IBM (hybrid Cell, AMD, PowerPC) Roadrunner (2.5 MW): 445 MFlops/W
  • Sun (AMD) Ranger (7 MW): 152 MFlops/W
• The hybrid system is ~3x more energy efficient than a traditional x86-based one
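A sketch of how the MFlops/Watt figure is obtained from sustained performance and power draw; the performance number below is an assumption chosen to be roughly consistent with the Roadrunner figure on the slide:

    # Green500-style efficiency: sustained MFlops divided by power draw in watts.
    sustained_tflops = 1105.0          # ~1.1 PFlops sustained (assumption, roughly Roadrunner-class)
    power_mw = 2.5                     # slide: Roadrunner draws about 2.5 MW

    mflops = sustained_tflops * 1e6    # 1 TFlops = 1e6 MFlops
    watts = power_mw * 1e6
    print(f"{mflops / watts:.0f} MFlops/W")   # ~442 MFlops/W, close to the 445 quoted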

  10. Trends in 2011–2015:
• Hosting a petascale system under different scenarios (assuming 3 MW and 1 MW systems, with facility efficiency of 1.6 and 1.25):
  • Current technology approach: 14.5 M€
  • Current computing approach with enhanced hosting: 11.4 M€
  • State-of-the-art computing with current hosting technology: 4.8 M€
  • State-of-the-art approach: 3.8 M€
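The four figures are consistent with a simple energy-cost model, facility power = system power x PUE, priced over a fixed period. The period length and electricity price below are assumptions chosen to reproduce the slide's numbers, not values stated in the presentation:

    # Energy cost for each hosting scenario: cost = system_MW * PUE * hours * price.
    price_eur_per_kwh = 0.115          # assumption
    years = 3                          # assumption
    hours = years * 8760

    scenarios = {
        "Current technology approach":                   (3.0, 1.60),
        "Current computing, enhanced hosting":           (3.0, 1.25),
        "State-of-the-art computing, current hosting":   (1.0, 1.60),
        "State-of-the-art approach":                     (1.0, 1.25),
    }
    for name, (system_mw, pue) in scenarios.items():
        cost_meur = system_mw * 1000 * pue * hours * price_eur_per_kwh / 1e6
        print(f"{name}: {cost_meur:.1f} M EUR")
    # prints ~14.5, 11.3, 4.8 and 3.8 M EUR respectively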

  11. Some conclusions on Green IT
• Make every Flop count! Optimized code on energy-efficient hardware
• Make every Watt count! Improve facility efficiency
• Make every € count! Reasonable investments, and buy GREEN energy!

  12. CSC's supercomputer environment: "LOUHI, Pohjan Akka" (the Old Woman of the North), the flagship of Finnish scientific computing, photographed in the new Pohja machine room in October 2008.

  13. What is Louhi and what can it do?
• A massively parallel supercomputer built by a traditional supercomputer company (Cray Inc.)
• Uses ordinary processors made by AMD Inc., just like home PCs (about 2,500 of them)
• The operating system is a "fine-tuned" Linux
• Brought into use in stages from April 2007 onwards, now at its full extent
• Price about 7 M€
• Effective lifetime about 4 years
• By computing power, 31st in the world and 9th in Europe
• Corresponds to about 5,000 powerful PCs
• Theoretical computing capacity about 16,000 arithmetic operations per person per second
• Main memory about 11 TB
• Disk system 70 TB (hundreds of hard disks)

  14. XT4 compute blade
• One rack contains 3 x 8 blades with 4 or 8 CPUs each
• [Blade diagram labels: CPU1, memory, net1] + fan moving 1.4 m³/s

  15. Louhi's physical dimensions and placement in the hall
• The whole system is installed on 60 x 60 cm floor tiles raised (80 cm) on steel pedestals. Load capacity 600 kg per pedestal, point-load rating of a tile 9 kN.
• Floor area 3.6 x 6 m (21.5 m²)
• Height 2 m
• Mass: 15,000 kg
• 2 x 10 compute racks, 2 data racks
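A quick check that the raised floor can carry the mass quoted above; the number of support points under the footprint is an assumption derived from the 60 x 60 cm tile grid:

    # Raised-floor load check for the figures on the slide.
    mass_kg = 15_000
    footprint_m2 = 21.5
    pedestal_capacity_kg = 600          # 600 kg per steel pedestal (slide)
    tile_area_m2 = 0.6 * 0.6            # 60 x 60 cm tiles, roughly one pedestal per tile (assumption)

    pedestals_under_footprint = footprint_m2 / tile_area_m2            # ~60 support points (assumption)
    print(f"Average floor load: {mass_kg / footprint_m2:.0f} kg/m^2")  # ~700 kg/m^2
    print(f"Average load per pedestal: {mass_kg / pedestals_under_footprint:.0f} kg")  # ~250 kg, under the 600 kg rating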

  16. Principle of the power feed
• 72 h of backup power: a 2,500 hp / 2 MW generator
• [Diagram labels: 63 A feeds; 3,000 A busbar, about 100 kg per running metre]
• Louhi's electrical power draw is 300–520 kW, fed from two UPS-protected (10 min) distribution boards
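A sketch of the current such a load implies on each distribution board; the supply voltage and power factor are typical European three-phase assumptions, not stated on the slide:

    import math

    # Current drawn per distribution board at the upper end of Louhi's power range.
    total_power_w = 520_000        # slide: 300-520 kW
    boards = 2                     # fed from two UPS-protected boards
    voltage_ll = 400.0             # line-to-line voltage, assumption
    power_factor = 0.95            # assumption

    current_per_board = (total_power_w / boards) / (math.sqrt(3) * voltage_ll * power_factor)
    print(f"~{current_per_board:.0f} A per board")   # ~395 A, well within a 3,000 A busbar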

  17. 475 kW of electrical power is roughly the heat output of ~80 electric sauna stoves running 24/7, and it all has to be moved out: from the equipment into air, from air into water, ...
[Cooling-chain diagram values: air 30–35 °C out / ~17 °C in, 75 m³/s total airflow (1.4 m³/s per rack), water ~40 l/s at roughly 9 °C to 13–15 °C]
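A heat-balance sketch relating the per-rack airflow to the air temperature rise; splitting the total heat evenly over the compute racks is an assumption:

    # Air-side heat balance per rack: Q = rho * V_dot * cp * dT  =>  dT = Q / (rho * V_dot * cp)
    total_heat_w = 475_000
    compute_racks = 20                  # slide 15: 2 x 10 compute racks (even split is an assumption)
    airflow_per_rack = 1.4              # m^3/s, from the slide
    rho_air, cp_air = 1.2, 1005         # kg/m^3, J/(kg*K)

    heat_per_rack = total_heat_w / compute_racks
    delta_t = heat_per_rack / (rho_air * airflow_per_rack * cp_air)
    print(f"Air temperature rise per rack: ~{delta_t:.0f} K")   # ~14 K, i.e. ~17 C in -> ~31 C out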

  18. ...and from water to alcohol (glycol) and up onto the roof
• Glycol pipes up to the roof (12th floor)
• Roof condensers
• 1.3 MW compressor chiller

  19. Thank you for your interest. Any questions?
