Green IT and HE/FE
Overview • The problem within an HE/FE context • Example solutions • Desktops • Research Computing • HTC • HPC • Discuss: • Research and green computing • Hardware • Software • Systems • Institutional ICT and policies
The problem within HEIs? With thanks to Prof Peter James (SustainIT)
HE/FE HAVE A LARGE FOOTPRINT • 760,000 PCs • 215,000 servers • 147,000 networked printers • 512,000 MWh of electricity • 275,000 tonnes of CO2 • Over £60 million in 2009
HEEPI • Higher Education Environmental Performance Improvement (www.heepi.org.uk) • Green Gown Awards • 70+ events with 6,000+ attendees • Benchmarking • Case studies, guidance and tools
SUSTEIT OUTPUTS • Strategic review • Technical papers: data centres, desktops, printing • Case studies • Open-source Excel tools: energy and carbon footprinting, thin client • Event presentations
Low carbon computing: a view to 2050 and beyond • Data-intensive sectors such as HE/FE will probably find themselves facing harsher targets than other sectors • Key outputs are: • Best practice measures and standards for metrics • Short term 'quick fixes' based on simple staff actions and/or low cost investment • Longer term solutions that either represent a more costly investment, or are based on more experimental technologies • Discussion of the factors that are likely to affect how these technologies develop in the future • A first attempt at a Low Carbon ICT Roadmap that puts technology developments into a framework that also takes into account what is currently known about the targets associated with the Climate Change Act • A discussion of the factors and technologies that are likely to feature in the long-term plans and decisions that senior managers in tertiary education will need to make.
Methods for driving change • Sticks: regulations, energy and other costs, stakeholder requirements • Carrots: financial and operational benefits, teaching and research
Green desktop computing • Howard Noble (OUCS), Kang Tang (OeRC), David Wallom (OeRC) • Project supported by: Joint Information Systems Committee (JISC), Oxford University Estates Department, Oxford Environmental Change Institute (ECI), Oxford e-Research Centre (OeRC), Oxford University Computing Services (OUCS)
Green IT Services in Oxford • Monitoring Service • Wake-on-LAN Service
Monitoring Service • No installation • Single sign-on • Attribute-based authorization (AuthZ) • Centrally managed data repository • Ping sweep (ICMP, network layer) vs. ARP scan (data-link layer)
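The layer distinction matters: host firewalls often drop ICMP echo requests, so a ping sweep under-counts live machines, whereas any host with an IP address must answer ARP on its local segment. As a rough illustration of what an ARP scan puts on the wire (this is not the monitoring service's own code; the function name and demo addresses are ours), an ARP "who-has" frame can be assembled like this:

```python
import struct

def build_arp_request(src_mac: bytes, src_ip: bytes, target_ip: bytes) -> bytes:
    """Assemble a broadcast Ethernet frame carrying an ARP 'who-has' request.
    Actually sending it needs a raw AF_PACKET socket (Linux, root); here we
    only build the 42-byte frame."""
    eth_header = struct.pack("!6s6sH",
                             b"\xff" * 6,   # destination: Ethernet broadcast
                             src_mac,       # source MAC
                             0x0806)        # EtherType: ARP
    arp_payload = struct.pack("!HHBBH6s4s6s4s",
                              1,            # hardware type: Ethernet
                              0x0800,       # protocol type: IPv4
                              6, 4,         # address lengths (MAC, IP)
                              1,            # opcode: request ('who-has')
                              src_mac, src_ip,
                              b"\x00" * 6,  # target MAC: unknown
                              target_ip)
    return eth_header + arp_payload

# Illustrative addresses only:
frame = build_arp_request(b"\xaa\xbb\xcc\xdd\xee\xff",
                          b"\x0a\x00\x00\x01",   # 10.0.0.1
                          b"\x0a\x00\x00\x02")   # 10.0.0.2
```

A scan simply repeats this for every address in the subnet and records which targets reply.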
Monitoring Service in Oxford • [Diagram: central monitoring server fronted by WebAuth SSO, per-subnet gateways, and the Oak LDAP directory]
Explore by category (cont’d) • Desktop • Server • Virtual Machine • Network device • Other
What do users need? A gateway server in the right place • Option A: 1.2 GHz Marvell Sheeva CPU, 512 MB RAM, 512 MB flash memory, Gigabit LAN interface and USB 2.0 port • Option B: 1.6 GHz Atom CPU, 1 GB RAM, 80 GB SATA-2 HDD, Gigabit LAN interface and USB 2.0 ports
Possible outcomes • Everybody turns off their computer • Nobody turns off their computer • Somewhere in between
Wake on LAN (WoL) Service • Encourage OFF by enabling ON • A decade-old standard • Supported by most motherboards • One gateway, two services
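The WoL "magic packet" is simple enough to sketch: six 0xFF bytes followed by the target's MAC address repeated sixteen times, sent as a UDP broadcast (conventionally to port 7 or 9). A minimal Python sketch — not the Oxford service's code; function names are ours:

```python
import socket

def magic_packet(mac: str) -> bytes:
    """WoL magic packet: 6 x 0xFF then the MAC repeated 16 times (102 bytes)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local subnet (UDP discard port 9)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

Because the packet is a subnet broadcast, it normally cannot cross routers — which is exactly why the Oxford design places a gateway machine on each subnet to emit it locally.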
Who can use the service? • Registered owner • Registered IT admin • Scheduled timer • Third party services
Service interfaces • [Diagram: machine clients (third-party services) reach the central server via SOAP with WS-Security; human users via browser HTTP requests authenticated with Kerberos]
WoL Service in Oxford • [Diagram: central OUCS WoL server fronted by WebAuth SSO, a registration server, per-subnet gateways, and the HFS service]
Secured communications • Gateway-to-central-server traffic crosses the Internet protected by X.509 signatures plus SSL encryption
WoL outside Oxford • [Diagram: central WoL server with a Shibboleth SP; remote institutions authenticate via their own IdP to reach subnet gateways, e.g. the WoL service at Liverpool University]
Desktop computers and energy consumption • Power (kW) × Time (hours) × Number of devices × Cost (£ per kWh) • 0.105 × 8760 × 16,000 × 0.12 = £1,766,000 (left on all year) • 0.105 × 1808 × 16,000 × 0.12 = £364,000 (on during working hours only)
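The slide's arithmetic can be checked directly; the slide rounds both results to the nearest thousand pounds:

```python
def annual_cost(power_kw: float, hours_on: float,
                n_devices: int, price_per_kwh: float) -> float:
    """Power (kW) x time on (h) x number of devices x tariff (GBP/kWh)."""
    return power_kw * hours_on * n_devices * price_per_kwh

# 16,000 machines drawing 105 W, at 12p/kWh:
always_on  = annual_cost(0.105, 8760, 16_000, 0.12)  # ~GBP 1,766,000/year
work_hours = annual_cost(0.105, 1808, 16_000, 0.12)  # ~GBP 364,000/year
```

The gap between the two figures, about £1.4 million a year, is the prize for switching machines off outside working hours.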
Five steps: Estimate Scenario A: 100 computers (80W) and monitors (25W) left on all year will consume 92,000 kWh over the next year: • 49,400 kg CO2eq. • £11,000 (at 12p/kWh) Scenario B: Same stock switched off at the end of each working day (over night, weekends and 25 days of holiday) will consume 19,800 kWh over the next year: • 10,600 kg CO2eq. • £2,400 (at 12p/kWh)
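Both scenarios can be reproduced with a short calculation. The working-hours figure matches an assumed 8-hour day over 236 working days (365 minus 104 weekend days and 25 holidays), and the carbon factor of 0.537 kg CO2e/kWh is implied by the slide's own numbers (49,400 kg ÷ 92,000 kWh); neither assumption is stated in the slide itself:

```python
LOAD_KW = 100 * (0.080 + 0.025)    # 100 PCs at 80 W plus monitors at 25 W
FULL_YEAR_H = 8760                 # Scenario A: left on all year
WORKING_H = (365 - 104 - 25) * 8   # Scenario B: 236 working days x 8 h
PRICE = 0.12                       # GBP per kWh
GRID = 0.537                       # kg CO2e per kWh (implied by the slide)

def scenario(hours_on: float):
    """Return (kWh, kg CO2e, GBP) for the 100-machine stock."""
    kwh = LOAD_KW * hours_on
    return kwh, kwh * GRID, kwh * PRICE

kwh_a, co2_a, cost_a = scenario(FULL_YEAR_H)  # ~92,000 kWh, ~GBP 11,000
kwh_b, co2_b, cost_b = scenario(WORKING_H)    # ~19,800 kWh, ~GBP 2,400
```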
Five steps: Research Energy Star has compiled a list of case studies (mostly for US organisations) and we have started to do the same at Oxford e.g. policy at OUCS: http://www.oucs.ox.ac.uk/greenit/oucs.xml
Five steps: Implement Four tools: • Monitor and report • Switch computers on remotely • Automatically power down computers safely and reliably • Display real time electricity meter data
Five steps: Communicate • The Carbon Reduction Commitment league table • IT-related energy costs • Staff morale: It all comes down to protecting the brand of your group and the collegiate University as a whole
Five steps: Share Write your approach up so others can learn from your experience. For more information about the 5 steps: http://www.oucs.ox.ac.uk/greenit/desktop.xml
Participating Institutions • Liverpool (national service) • Manchester • York • Southampton Solent
Research Computing • A large contributor to institutional energy consumption • A crucial research facility with a significant user community from across the university • Institutional HPC may consume ~4–5 MW • Utilisation is not always 100% • Therefore, increasing efficiency is essential: every little step counts
Possible solutions • Already in use: • Virtualisation – suitable for smaller services, not for large resource utilisation • Resource management – an interface for starting and stopping workers within a task farm/Beowulf cluster
Condor HTC Power Optimization • Integration between the Condor resource management system and power control facilities • A separate daemon manages which resources (worker nodes) are running, compared against the incoming task queue • A damping factor and 'round-robin' listing of workers ensure systems aren't turned on and off too frequently
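The daemon itself is not reproduced in these slides, but the two ideas named above can be sketched in a few lines. This is our illustration, not the project's code: the damping factor scales the response to the queue deficit so a burst of jobs does not wake every node at once, and the round-robin rotation spreads power cycles evenly across the pool:

```python
from collections import deque

class PowerManager:
    """Sketch: decide which powered-off workers to wake for a task queue.
    Damping avoids rapid on/off cycling; round-robin spreads wear."""

    def __init__(self, workers, damping=0.5):
        self.pool = deque(workers)   # round-robin wake order
        self.damping = damping       # fraction of the deficit to act on

    def workers_to_wake(self, queued_tasks: int, awake: int) -> list:
        deficit = max(0, queued_tasks - awake)
        n = int(deficit * self.damping + 0.5)  # damped, rounded response
        chosen = []
        for _ in range(min(n, len(self.pool))):
            worker = self.pool.popleft()
            chosen.append(worker)
            self.pool.append(worker)  # rotate to the back of the list
        return chosen
```

With four tasks queued and nothing awake, a damping factor of 0.5 wakes only two nodes on the first pass; the remaining deficit is handled on later passes if the queue persists.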
Powering down Supercomputers Dr Jon Lockley
Project • 9-month JISC-funded demonstrator project • Oxford Supercomputing Centre and Streamline Computing • 10–20% energy savings possible during normal operation
Background • 25–30 UK HEIs with supercomputing resources • Energy use (directly and in associated facilities) is large • Any reduction in nominal consumption would yield a large saving • Resource utilisation is managed by a relatively small number of job schedulers • Little vendor drive to reduce consumption
The plan • Actively control the compute nodes, switching them on and off depending on load • Enough intelligence to power on the right number of nodes for the work queued • Job-scheduler-independent development, allowing widespread utilisation and integration by academic and commercial systems integrators in their management stacks • Initial targets are PBS Pro, Torque and SGE • Maui and other DRM/JS as resources allow
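Keeping the decision logic separate from any one scheduler means it reduces to a small function over the queue state: given the node counts requested by queued jobs and the idle nodes already powered on, return how many nodes to power on or off. This sketch is our assumption of the shape, not the project's design; the `min_spare` hysteresis parameter (idle headroom kept online to absorb new submissions) is illustrative:

```python
def power_plan(job_node_requests, idle_on: int, min_spare: int = 2) -> int:
    """Scheduler-independent sizing: positive = nodes to power on,
    negative = surplus nodes that can be shut down.

    job_node_requests -- node count requested by each queued job
    idle_on           -- idle nodes currently powered on
    min_spare         -- idle headroom to keep online (hysteresis)
    """
    demand = sum(job_node_requests)
    surplus = idle_on - demand
    if surplus < min_spare:
        return min_spare - surplus      # not enough capacity: wake nodes
    return -(surplus - min_spare)       # excess capacity: candidates to sleep
```

A wrapper per scheduler (PBS Pro, Torque, SGE) would only need to translate its queue listing into the `job_node_requests` list and its node states into `idle_on`.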
Future Research • Hardware • Data centres • Software • Optimization of operation • Systems • Overall design