
Facebook: Infrastructure at Scale Using Emerging Hardware

Presentation Transcript


  1. Facebook: Infrastructure at Scale Using Emerging Hardware Bill Jia, PhD Manager, Performance and Capacity Engineering Oct 8th, 2013

  2. Agenda

  3. Facebook Scale • 84% of monthly active users are outside of the U.S. (as of Q2 '13) • Data centers in 5 regions

  4. Facebook Stats
  • 1.15 billion users (6/2013)
  • ~700 million people use Facebook daily
  • 350+ million photos added per day (1/2013)
  • 240+ billion photos
  • 4.5 billion likes, posts and comments per day (5/2013)

  5. Cost and Efficiency
  • Infrastructure spend in 2012 (from our 10-K): "...$1.24 billion for capital expenditures related to the purchase of servers, networking equipment, storage infrastructure, and the construction of data centers."
  • Efficiency work has been a top priority for several years

  6. Architecture (cluster diagram)
  • Front-End Cluster: Web, Ads (few racks), Multifeed (few racks), other small services, Cache
  • Service Cluster: Search, Photos, Msg, others
  • Back-End Cluster: UDB, ADS-DB, Tao Leader

  7. Lots of “vanity free” servers.

  8. Life of a "hit" (timeline diagram): a request starts at a front-end Web server, which consults memcache (MC) and fans out to back-end services (Feed aggregator, Ads) and the Database, each fronted by memcache, before the request completes.
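The pattern in the diagram above, where the web tier checks memcache before falling back to the database and then refills the cache, is the classic cache-aside lookup. A minimal sketch, using hypothetical in-memory `cache` and `db` dictionaries as stand-ins for the real tiers:

```python
# Cache-aside lookup: check the cache first, fall back to the database
# on a miss, then populate the cache so the next request is a hit.
# `cache` and `db` are hypothetical in-memory stand-ins for illustration.

cache = {}                            # stands in for the memcache tier
db = {"user:42": {"name": "alice"}}   # stands in for the database tier

def get(key):
    value = cache.get(key)    # front-end path: memcache lookup
    if value is None:         # miss: go to the back-end database
        value = db.get(key)
        if value is not None:
            cache[key] = value  # fill the cache for later requests
    return value

print(get("user:42"))  # first call: miss -> db read -> cache fill
print(get("user:42"))  # second call: served from the cache
```

The second lookup never touches `db`, which is the point of putting memcache in front of every back-end hop in the diagram.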

  9. Five Standard Servers

  10. Five Server Types
  • Advantages:
    • Volume pricing
    • Re-purposing
    • Easier operations - simpler repairs, drivers, DC headcount
    • New servers allocated in hours rather than months
  • Drawbacks:
    • 40 major services; 200 minor ones - not all fit perfectly
    • The needs of a service change over time

  11. Agenda

  12. Efficiency at FB
  • Data Centers: heat management, electrical efficiency & operations
  • Servers: "vanity free" design & supply chain optimization
  • Software: horizontal wins like HPHP/HHVM, cache, db, web & service optimizations

  13. Next Opportunities?
  • Disaggregated Rack: better component/service fit; extending component useful life
  • Developing New Components: alternatives in CPU, RAM, Disk & Flash

  14. Agenda

  15. A rack of news feed servers (diagram: a leaf/aggregator network switch in front of a stack of Type-6 servers)
  • Compute: 80 processors / 640 cores
  • RAM: 5.8 TB
  • Storage: 80 TB
  • Flash: 30 TB
  The application lives on a rack of equipment, not a single server.
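The rack totals above can be sanity-checked with simple arithmetic. A sketch assuming, hypothetically, 40 two-socket Type-6 servers per rack (the slide does not state the server count):

```python
# Back-of-the-envelope check of the rack totals on the slide.
# servers_per_rack and sockets_per_server are assumptions, not slide data.
servers_per_rack = 40
sockets_per_server = 2

processors = servers_per_rack * sockets_per_server   # should match the slide's 80
cores_per_processor = 640 // processors              # slide: 640 cores total
ram_per_server_gb = 5.8 * 1024 / servers_per_rack    # slide: 5.8 TB RAM total

print(processors)                 # 80 processors, matching the slide
print(cores_per_processor)        # 8 cores per processor
print(round(ram_per_server_gb))   # ~148 GB of RAM per server
```

Under these assumptions each server carries two 8-core processors and roughly 148 GB of RAM, which is consistent with the 8-to-16 DIMM slot standard server described on the next slide.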

  16. Compute: Standard Server
  • N processors
  • 8 or 16 DIMM slots
  • No hard drive; small flash boot partition
  • Big NIC: 10 Gbps or more

  17. RAM Sled
  • Hardware: 128 GB to 512 GB; compute via FPGA, ASIC, mobile processor or desktop processor
  • Performance: 450k to 1 million key/value gets/sec
  • Cost (excluding RAM): $500 to $700, or a few dollars per GB

  18. Storage Sled (Knox)
  • Hardware: 15 drives; replace the SAS expander with a small server
  • Performance: 3k IOPS
  • Cost (excluding drives): $500 to $700, or less than $0.01 per GB

  19. Flash Sled
  • Hardware: 175 GB to 18 TB of flash
  • Performance: 600k IOPS
  • Cost (excluding flash): $500 to $700
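The per-GB overhead figures quoted for the sleds follow from the shared $500-to-$700 base cost. A sketch using the $600 midpoint and hypothetical capacity points drawn from the ranges above (256 GB for the RAM sled, 15 × 4 TB drives for Knox, 18 TB for flash - the drive size is an assumption):

```python
# Sled base cost (excluding the media itself) spread over capacity.
# The midpoint and the capacity choices are assumptions for illustration.
sled_base_cost = 600.0   # midpoint of the $500-$700 range

ram_gb = 256             # mid-range RAM sled (128 GB to 512 GB)
disk_gb = 15 * 4000      # Knox: 15 drives, assuming 4 TB each
flash_gb = 18000         # high-end flash sled (18 TB)

print(round(sled_base_cost / ram_gb, 2))    # ~$2.34/GB: "a few dollars per GB"
print(round(sled_base_cost / disk_gb, 3))   # ~$0.01/GB, matching the Knox slide
print(round(sled_base_cost / flash_gb, 3))  # ~$0.033/GB overhead for flash
```

The shared chassis cost is the same in each case; only the density of the medium behind it changes the per-GB overhead.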

  20. Three Disaggregated Rack Wins
  • Server/service fit - across services
  • Server/service fit - over time
  • Longer useful life through smarter hardware refreshes

  21. Longer Useful Life
  • Today, servers are typically kept in production for about 3 years.
  • With disaggregated rack:
    • Compute: 3 to 6 years
    • RAM sled: 5 years or more
    • Disk sled: 4 to 5 years, depending on usage
    • Flash sled: 6 years, depending on write volume
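Extending useful life lowers annualized capital cost directly. A sketch using a hypothetical $3,000 server price (a round number for illustration; only the 3-year and 6-year lifetimes come from the slide):

```python
# Annualized capex for one server at different refresh cycles.
# server_cost is a hypothetical illustration price, not a Facebook figure.
server_cost = 3000.0

annualized_3yr = server_cost / 3   # today's typical 3-year refresh cycle
annualized_6yr = server_cost / 6   # compute kept up to 6 years

print(annualized_3yr)  # 1000.0 per year
print(annualized_6yr)  # 500.0 per year: doubling lifetime halves annual capex
```

The same arithmetic applies per component, which is why refreshing RAM, disk and flash sleds on their own schedules, rather than discarding whole servers, is one of the opex wins estimated on the next slide.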

  22. Disaggregated Rack
  • Strengths:
    • Volume pricing, serviceability, etc.
    • Custom configurations
    • Hardware evolves with the service
    • Smarter technology refreshes
    • Speed of innovation
  • Potential issues:
    • Physical changes required
    • Interface overhead

  23. Approximate Win Estimates • Conservative assumptions show a 12% to 20% opex savings; more aggressive assumptions suggest between 14% and 30%. * These are reasonable estimates of what may be possible across several use cases.

  24. Agenda

  25. 20 years of the same...
  • The servers we deploy in the data center are pretty much identical to a 386 computer from 20 years ago.
  • New technologies since then: math coprocessor; SMP (N processors per server); multi-core processors; GPUs (vector math); flash memory
  • The exponential improvement of CPU, RAM, disk and NIC density has been incredible.

  26. CPU
  • Today: Server CPUs are more compute-dense versions of desktop processors. Mobile phones are driving development of lower-power processors; System-on-a-Chip (SoC) processors for that market integrate several modules into a single package to minimize energy consumption and board real estate.
  • Next: Smaller server CPUs derived from mobile processors may become compelling alternatives.

  27. RAM
  • Today: Standard DDR RAM modules were developed for desktops and servers. The DDR standard is difficult to beat on a cost/performance basis with alternative technologies.
  • Next: Lower-latency memory technology is interesting as "near RAM." Slower memory (between RAM and flash) could be accommodated as "far RAM."

  28. Storage (zettabytes of data; 1 ZB = 1 billion TB)
  • Today: Photo and data storage uses 3.5" disk drives almost exclusively. Applications and file systems expect low-latency reads and writes and unlimited write cycles; by changing these assumptions, optical and solid-state media may become viable.
  • Next: Cold, big-capacity storage with lower power density and slower performance.

  29. Flash (solid-state storage)
  • Today: Flash SSDs are commonly used in databases and applications that need low-latency, high-throughput storage.
  • SLC: premium
  • eMLC: expensive, performant, high endurance (~10K write cycles)
  • MLC: less expensive, performant, lower endurance (~3K)
  • TLC: cheap, performant (100-300)
  • WORM TLC: extremely cheap (1 write)

  30. Network Interfaces
  • Today: 10 Gbps NICs are becoming standard in servers; 40 Gbps NICs are emerging and in production.
  • Next: By focusing on rack-scale distances (3 meters) and allowing proprietary technologies, 100 Gbps could be affordable in a few years.
