
Accelerating Applications with NVM Express™ Computational Storage

Explore the potential of NVMe™ for computational storage, featuring low latency, high throughput, and management at scale. Learn about NVMe-based computational storage processors and arrays, and their applications for offloading tasks from CPUs.



Presentation Transcript


  1. Accelerating Applications with NVM Express™ Computational Storage. 2019 NVMe™ Annual Members Meeting and Developer Day, March 19, 2019. Prepared by Stephen Bates, CTO, Eideticom, and Richard Mataya, Co-Founder & EVP, NGD Systems.

  2. Agenda What?? Why?? Who?? How??

  3. WHAT??

  4. "NVMe™ is a transport" (Michael Cornwell, GM Storage, Microsoft Azure, Dec 5, 2018)

  5. One Driver to Rule Them All?! NVMe™ has been incredibly successful as a storage protocol. It is also being used for networking (NVMe-oF™, and offload platforms like AWS Nitro and Mellanox's Sexy NVMe Accelerator Platform (SNAP)). So why not extend NVMe to compute and make it the one driver to rule them all?

  6. What is Computational Storage? SNIA has defined the following: • Computational Storage Drive (CSD): a component that provides persistent data storage and computational services. • Computational Storage Processor (CSP): a component that provides computational services to a storage system without providing persistent storage. • Computational Storage Array (CSA): a collection of computational storage drives, computational storage processors and/or storage devices, combined with a body of control software.
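
For orientation only, here is a toy C sketch that maps the three SNIA classes above onto a small enum; the type, names and comments are illustrative and are not part of any SNIA or NVMe-defined structure.

```c
/* Toy mapping of the SNIA device classes onto an enum -- illustrative
 * only, not part of any SNIA or NVMe defined data structure. */
#include <stdio.h>

enum cs_device_class {
    CS_DRIVE,      /* CSD: persistent data storage + computational services */
    CS_PROCESSOR,  /* CSP: computational services, no persistent storage    */
    CS_ARRAY       /* CSA: CSDs/CSPs/storage devices + control software     */
};

int main(void)
{
    printf("CSD=%d CSP=%d CSA=%d\n", CS_DRIVE, CS_PROCESSOR, CS_ARRAY);
    return 0;
}
```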

  7. WHY??

  8. Why NVMe™? Accelerators require: • Low latency • High throughput • Low CPU overhead • Multicore awareness • Management at scale • QoS awareness. NVMe provides all of these, so the real question is "Why not NVMe?"

  9. Let’s Go Fishing for Data

  10. NVMe™ Computational Storage • An NVMe-based Computational Storage Processor (CSP) advertises zlib compression. • The operating system detects the presence of the NVMe CSP. • The device-mapper uses it to offload zlib compression to NoLoad. • This can be combined with p2pdma to further offload IO. • With standardization this can be vendor-neutral and upstreamed. (Diagram: a CPU and DRAM attached to a PCIe subsystem hosting an NVMe CSP with a Controller Memory Buffer (CMB) and several NVMe SSDs.)
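
As a point of reference, below is a minimal host-side sketch, assuming nothing beyond the standard zlib library, of the compression work this slide proposes offloading; in the offloaded path the device-mapper target would hand the buffers to the CSP instead of spending CPU cycles in compress(). The buffer sizes are arbitrary.

```c
/* Host-side baseline: the zlib work that slide 10 proposes offloading
 * to an NVMe CSP. In the offloaded path the device-mapper target would
 * hand these buffers to the CSP instead of calling compress() on the
 * CPU. Build with: cc zlib_baseline.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    unsigned char src[4096];            /* stand-in for a 4 KiB block */
    unsigned char dst[8192];            /* generously sized output    */
    uLongf dst_len = sizeof(dst);

    memset(src, 'A', sizeof(src));

    /* This call is the CPU cost a CSP removes from the host. */
    if (compress(dst, &dst_len, src, sizeof(src)) != Z_OK) {
        fprintf(stderr, "compress failed\n");
        return 1;
    }
    printf("%zu bytes -> %lu bytes\n", sizeof(src), (unsigned long)dst_len);
    return 0;
}
```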

  11. NVMe-oF™ Computational Storage • An NVMe™ CSP is represented as an NVMe Computation Namespace, so it can be exposed over Fabrics. • Compute nodes can borrow CSPs, CSDs and standard NVMe SSDs over the fabric from Computational Storage Arrays (CSAs). • NVMe Computational Storage can use the same fabrics commands used by legacy NVMe-oF. • Application code is identical regardless of whether the computation is local (PCIe) or remote (Fabrics). (Diagram: an Ethernet TOR switch connecting several compute nodes to a Computational Storage Array containing NVMe CSPs, CSDs and SSDs.)
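
A small sketch of the "identical application code" claim: once a fabric-attached namespace is connected (for example with nvme-cli), it appears to the host as an ordinary NVMe device node, so the same open() path serves both local and remote namespaces. The device path below is only an example; naming for computational namespaces is not yet standardized.

```c
/* The same open() path works whether the namespace was created by the
 * local PCIe NVMe driver or by an NVMe-oF connect; the application
 * cannot tell the difference. "/dev/nvme0n1" is only an example name. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int open_compute_namespace(const char *path)
{
    int fd = open(path, O_RDWR);
    if (fd < 0)
        perror(path);
    return fd;
}

int main(void)
{
    int fd = open_compute_namespace("/dev/nvme0n1");  /* local or fabric-attached */
    if (fd >= 0)
        close(fd);
    return 0;
}
```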

  12. Example of a Hadoop Cluster: In-Situ Processing • Ability to migrate Data Nodes into the drives themselves. • Allows the user to reduce host CPU core count. (Diagram: current example cluster configuration.)

  13. Network Monitoring Offload

  14. WHO?

  15. SNIA Computational Storage TWG

  16. HOW?

  17. NVMe™ for Computation: Software (Stack diagram: Applications and Management at the top; a user-space layer with nvme-cli, NVMe-oF tooling, libcsnvme and SPDK; the OS below that; and NVMe CSPs, CSDs and CSAs as the hardware layer.)

  18. NVMe™ for Computation: Standards • NVMe Computation Namespaces: a new namespace type with its own namespace ID, command set and admin commands. Operating systems can treat these namespaces differently from storage namespaces. • Fixed-Purpose Computation: some computation can be defined in a way that an operating system can consume directly (e.g. zlib compression tied into the crypto API in Linux). • General-Purpose Computation: some Computation Namespaces will be flexible and can be programmed and used from user-space (/dev/nvmeXcsY, anyone?). • NVMe Computation over Fabrics: user-space does not know or care whether /dev/nvmeXcsY is local (PCIe) or remote (Fabrics).
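
As a purely hypothetical sketch of the general-purpose case, the snippet below shows how user-space could drive a Computation Namespace before a standard command set exists, by sending a vendor-specific opcode through the existing Linux NVMe passthrough ioctl. The device path, opcode and CDW usage are invented placeholders; only NVME_IOCTL_IO_CMD itself is a real kernel interface.

```c
/* Hypothetical user-space access to a general-purpose Computation
 * Namespace via the real Linux NVMe passthrough ioctl. The device path,
 * the opcode 0xd0 and the cdw10 meaning are placeholders, not part of
 * any ratified NVMe command set. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    unsigned char buf[4096];                  /* data handed to the CSP  */
    memset(buf, 'A', sizeof(buf));

    int fd = open("/dev/nvme0n1", O_RDWR);    /* placeholder device path */
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct nvme_passthru_cmd cmd = {
        .opcode   = 0xd0,                     /* hypothetical compute opcode */
        .nsid     = 1,
        .addr     = (uintptr_t)buf,           /* host buffer for the command */
        .data_len = sizeof(buf),
        .cdw10    = sizeof(buf) / 512,        /* hypothetical length field   */
    };

    /* NVME_IOCTL_IO_CMD is the existing passthrough path in the Linux
     * NVMe driver; only the command contents above are invented. */
    if (ioctl(fd, NVME_IOCTL_IO_CMD, &cmd) < 0)
        perror("NVME_IOCTL_IO_CMD");
    else
        printf("completion dword0 = 0x%x\n", cmd.result);

    close(fd);
    return 0;
}
```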

  19. There are many paths to Computational Storage

  20. Processor Path in an NGD Systems NVMe™ SSD It's an NVMe SSD at the core: • No impact on host read/write • No impact on the NVMe driver • Standard protocols. But then there is more (patented IP): • Dedicated compute resources • HW acceleration for data analytics • Seamless programming model • Scalable.

  21. Call to Arms! • If this all sounds interesting, please join the SNIA Computational Storage TWG. • End-users and software people are needed! • If you have thoughts on how you would consume NVMe™ Computation, please let us know. • As SNIA starts interfacing with NVMe, please participate in the TPAR/TP discussions! NVMe™ + computation = awesome.

  22. Questions?
