DC Power for Data Centers – a Demonstration
Open House Presentation, Summer 2006
Sun Microsystems, Newark, CA
My Ton – Ecos Consulting
Brian Fortenbery – EPRI Solutions
Bill Tschudi – Lawrence Berkeley National Laboratory
Sponsored by: California Energy Commission (CEC) Public Interest Energy Research (PIER) program and the California Institute for Energy Efficiency (CIEE)
Open House: Agenda
• Welcome & overview
  • Project background
  • Project objectives
  • Demonstration configurations
  • Technical & safety details
  • Project results
• Guided tour of equipment
• Questions/answers/discussion
Thomas Edison: “My personal desire would be to prohibit entirely the use of alternating currents. They are as unnecessary as they are dangerous. I can therefore see no justification for the introduction of a system which has no element of permanency and every element of danger to life and property.”
California Energy Commission Public Interest Energy Research High-Tech Buildings Project Objectives • Research, develop, and demonstrate innovative, energy-efficient technologies • 10-year initiative focusing on high-tech industries, e.g., data centers • Help move the market to more efficient technologies • Research and demonstration projects include technology transfer
DC Demonstration – Timeline • Stakeholders first met – Fall 2005 • Kick-off meeting – April 2006 • Equipment assembly – May 2006 • Initial “Team Open House” June 7, 2006 • Public Open House events: June 21, July 12, 26; Aug 9, 16 • End date – August 16, 2006
Industry Partners Made it Happen
Equipment and services contributors:
• Alindeska Electrical Contractors • APC • Baldwin Technologies • Cisco Systems • Cupertino Electric • Dranetz-BMI • Emerson Network Power • Industrial Network Manufacturing (IEM) • Intel • Nextek Power Systems • Pentadyne • Rosendin Electric • SatCon Power Systems • Square D/Schneider Electric • Sun Microsystems • UNIVERSAL Electric Corp.
Other Partners Collaborated
Stakeholders:
• 380voltsdc.com • CCG Facility Integration • Cingular Wireless • Dupont Fabros • EDG2, Inc. • EYP Mission Critical • Gannett • Hewlett Packard • Morrison Hershfield Corporation • NTT Facilities • RTKL • SBC Global • TDI Power • Verizon Wireless
Data Center Power Use
• Data center power use nationally is large and growing.
• Two studies estimated national data center energy use:
  • 2004 EPRI/Ecos study: 14.8 TWh
  • 2000 Arthur D. Little study: 10.1 TWh
• One terawatt-hour = 1,000,000,000 kilowatt-hours (one million megawatt-hours)
• Saving even a fraction of this energy is substantial
Typical Data Center Power Use
[Chart: cumulative power split among loads, power delivery, and cooling; roughly 50% power efficiency]
Source: Intel Corp.
Power Consumption: 100 W System Load
• Load: 100 W
• VR: 20 W
• PSU: 50 W
• Server fans: 15 W
• UPS + PDU: 20 W
• Room cooling system: 70 W
• Total: 275 W
Source: Intel Corp.
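A quick arithmetic check of this breakdown (a minimal sketch in Python using only the slide's numbers; the variable names are illustrative):

```python
# Intel breakdown from the slide above: a 100 W compute load
# drives 275 W of total facility draw.
breakdown_w = {
    "compute load": 100,
    "voltage regulators (VR)": 20,
    "power supply unit (PSU)": 50,
    "server fans": 15,
    "UPS + PDU": 20,
    "room cooling system": 70,
}

total_w = sum(breakdown_w.values())                 # 275 W
to_compute = breakdown_w["compute load"] / total_w  # ~0.36

print(f"Total facility draw: {total_w} W")
print(f"Share reaching the compute load: {to_compute:.0%}")
```

Only about 36% of the total draw reaches the compute load itself; excluding the room cooling, the delivery chain alone passes roughly 100/205, or about 49%, consistent with the ~50% power efficiency noted on the previous slide.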
This demonstration focuses on reducing the power delivery and conversion losses observed in our prior work:
• Power supplies in IT equipment
• Uninterruptible power supplies (UPS)
UPS and Power Supply Efficiency
• We observed a wide range of performance from the worst to the best systems
• Our original goal was to move the market toward the higher-performing systems
• Incentive programs, labeling, and education programs were all options, and still are
Data Center Power Delivery System
UPS 88-92% → power distribution 98-99% → power supply 68-72% → DC/DC 78-85%
The heat generated by the losses at each power conversion step requires additional cooling power; HVAC power for cooling can equal or exceed the direct losses.
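To make the compounding explicit, here is a minimal sketch (Python; the stage ranges are the ones shown above, and chain_efficiency is just an illustrative helper) of how the end-to-end delivery efficiency is the product of the individual conversion efficiencies:

```python
# End-to-end delivery efficiency is the product of the stage efficiencies.
def chain_efficiency(stages):
    eff = 1.0
    for stage in stages:
        eff *= stage
    return eff

# UPS, power distribution, power supply, DC/DC conversion
worst_case = chain_efficiency([0.88, 0.98, 0.68, 0.78])
best_case = chain_efficiency([0.92, 0.99, 0.72, 0.85])

print(f"Worst-case delivery efficiency: {worst_case:.0%}")  # ~46%
print(f"Best-case delivery efficiency:  {best_case:.0%}")   # ~56%

# Every watt lost in these conversions becomes heat; the slide notes that
# the cooling power needed to remove it can equal or exceed the loss itself,
# roughly doubling the penalty of each inefficient stage.
```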
Then we asked the question: Could some of the conversion steps be eliminated to improve efficiency? Could a demonstration be devised to measure actual savings?
DC Demonstration - Objectives The demonstration's original objectives were to show a rack-level solution: • DC-powered server equipment exists in the same form factor or can readily be built from existing components • DC-powered server equipment can provide the same level of functionality and computing performance as similarly configured and operated AC server equipment • Efficiency gains from eliminating multiple conversion steps can be measured by comparing traditional AC delivery to a DC architecture • DC system reliability is as good as or better than AC system reliability
The project team rapidly defined additional objectives: • Demonstration of 380 V DC distribution at the facility level compared to conventional AC systems • Demonstration of other DC solutions (48 V systems) • Evaluation of safety considerations • Demonstration of the ability to connect alternative energy sources (PV, fuel cells, etc.)
What the demonstration included • Side-by-side comparison of a traditional AC system with the new DC system • Facility-level distribution • Rack-level distribution • Power measurements at each conversion point • Servers modified to accept 380 V DC • Artificial loads to more fully simulate a data center
Additional items included • Racks distributing 48 V DC to illustrate that other DC solutions are available; however, no energy monitoring was provided for this configuration • DC lighting was included!
Facility-Level DC Distribution (380 V DC)
Details • Safety was reviewed by a committee of the partners; no significant issues were identified. The only concern was whether fault currents would be large enough to trip protective devices. • All distribution equipment is UL rated for DC applications • No commercially available DC connector exists in a size convenient for use with servers • Reliability should be improved: there are fewer potential points of failure, and eliminating heat sources should help • The final report will address safety and applicable codes and standards
Measured Results • Facility-level overall efficiency improvement: 10 to 20% • Smaller rack-level overall efficiency improvement, but other benefits include: • Thermal benefits • Smaller power supply in the server • A transition strategy for existing data centers
AC system loss compared to DC: 9% measured improvement at the facility level; 2-5% measured improvement at the rack level
Implications could be even better for a typical data center • Redundant UPS and server power supplies operate at reduced efficiency • Cooling loads would be reduced • The UPS system used in the AC base case performed better than benchmarked systems, so efficiency gains could be higher • Further optimization of conversion devices and voltages is possible
Data Center Power Delivery System: UPS 87-92%, transformer (XFMR) 98% (AC only), power supply 90-92%

                              UPS       XFMR      PS        Total efficiency
System                        87.00%    98.00%    90.00%    76.73%
High Efficiency (DC Option)   92.00%    100.00%   92.00%    84.64%

                              Compute load (W)   Input load (W)   Difference
System                        10,000             13,032.03        -
High Efficiency (DC Option)   10,000             11,814.74        9.34%
Data Center Power Delivery System: UPS 85-92%, power distribution 98% (AC only), power supply 73-92%

                              UPS       XFMR      PS        Total efficiency
Typical system                85.00%    98.00%    73.00%    60.81%
High Efficiency (DC Option)   92.00%    100.00%   82.00%    75.44%
Optimized DC Option           92.00%    100.00%   92.00%    84.64%

                              Compute load (W)   Input load (W)   Difference
Typical system                10,000             16,444.93        -
High Efficiency (DC Option)   10,000             13,255.57        19.39%
Optimized DC Option           10,000             11,814.74        28.16%
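The table arithmetic above can be reproduced directly. In this minimal Python sketch (input_load and the variable names are illustrative, not project code), system efficiency is the product of the UPS, transformer/distribution, and power supply efficiencies, and input load = compute load / system efficiency:

```python
def input_load(compute_w, stage_efficiencies):
    """Input power needed to deliver compute_w through the given stages."""
    system_eff = 1.0
    for eff in stage_efficiencies:
        system_eff *= eff
    return compute_w / system_eff

compute_w = 10_000
measured_ac = input_load(compute_w, [0.87, 0.98, 0.90])  # ~13,032 W
typical_ac = input_load(compute_w, [0.85, 0.98, 0.73])   # ~16,445 W
dc_82 = input_load(compute_w, [0.92, 1.00, 0.82])        # ~13,256 W
dc_92 = input_load(compute_w, [0.92, 1.00, 0.92])        # ~11,815 W

comparisons = [
    ("DC option (92% PS) vs. measured AC system", measured_ac, dc_92),
    ("DC option (82% PS) vs. typical AC system", typical_ac, dc_82),
    ("Optimized DC (92% PS) vs. typical AC system", typical_ac, dc_92),
]
for label, ac, dc in comparisons:
    print(f"{label}: {(ac - dc) / ac:.2%} less input power")
```

Run as-is, this prints 9.34%, 19.39%, and 28.16%, matching the differences shown in the two tables.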
Results: What does a 15% increase in efficiency mean to the bottom line? (Actual mileage will vary.)
Results: What does a 15% increase in efficiency mean to the electrical power grid?
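A back-of-envelope answer to both questions (a sketch only: the 1 MW facility draw and $0.10/kWh price below are illustrative assumptions, not project measurements; the 14.8 TWh national figure is the 2004 EPRI/Ecos estimate cited earlier in this deck):

```python
HOURS_PER_YEAR = 8760
savings_fraction = 0.15           # the 15% efficiency gain discussed above

# Bottom line for one hypothetical facility
facility_load_kw = 1_000          # assumed 1 MW total data-center draw
price_per_kwh = 0.10              # assumed average electricity price, $/kWh
annual_kwh = facility_load_kw * HOURS_PER_YEAR
annual_savings = annual_kwh * savings_fraction * price_per_kwh
print(f"One 1 MW facility: roughly ${annual_savings:,.0f} per year")  # ~$131,400

# The electrical power grid, nationally
national_twh = 14.8               # 2004 EPRI/Ecos data-center estimate
print(f"Nationally: roughly {national_twh * savings_fraction:.1f} TWh per year")  # ~2.2 TWh
```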
See the Results Online • Actual measured results and more information are available on the Lawrence Berkeley National Laboratory websites: • http://hightech.lbl.gov/ • http://hightech.lbl.gov/dc-powering/
Additional Information
Project coordination & contacts:
• My Ton, Ecos Consulting: mton@ecosconsulting.com
• Brian Fortenbery, EPRI Solutions: bfortenbery@eprisolutions.com
Lawrence Berkeley National Laboratory:
• Bill Tschudi, Principal Investigator: wftschudi@lbl.gov
• Dr. Evan Mills, press and publicity contact: emills@lbl.gov
THANK YOU FOR YOUR INTEREST!