Explore the potential of NVMe™ for computational storage, featuring low latency, high throughput, and management at scale. Learn about NVMe-based computational storage processors and arrays, and their applications for offloading tasks from CPUs.
Accelerating Applications with NVM Express™ Computational Storage
2019 NVMe™ Annual Members Meeting and Developer Day
March 19, 2019
Prepared by Stephen Bates, CTO, Eideticom & Richard Mataya, Co-Founder & EVP, NGD Systems
Agenda: What? Why? Who? How?
“NVMe™ is a transport.” (Michael Cornwell, GM Storage, Microsoft Azure, December 5, 2018)
One Driver to Rule Them All?! NVMe™ has been incredibly successful as a storage protocol. It is also being used for networking (NVMe-oF™ and offerings such as AWS Nitro and Mellanox's NVMe SNAP, Software-defined Network Accelerated Processing). Why not extend NVMe to compute and make it the one driver to rule them all?
What is Computational Storage? SNIA has defined the following:
• Computational Storage Drive (CSD): a component that provides persistent data storage and computational services.
• Computational Storage Processor (CSP): a component that provides computational services to a storage system without providing persistent storage.
• Computational Storage Array (CSA): a collection of computational storage drives, computational storage processors and/or storage devices, combined with a body of control software.
Why NVMe™? Accelerators require low latency, high throughput, low CPU overhead, multicore awareness, management at scale, and QoS awareness. NVMe provides exactly that:
• Low latency
• High throughput
• Low CPU overhead
• Multicore awareness
• Management at scale
• QoS awareness
The real question is "Why not NVMe?"
NVMe™ Computational Storage
• An NVMe-based Computational Storage Processor (CSP) advertises zlib compression.
• The operating system detects the presence of the NVMe CSP.
• The device-mapper uses it to offload zlib compression to the NoLoad CSP.
• This can be combined with p2pdma to further offload the IO.
• With standardization this can be vendor-neutral and upstreamed.
[Diagram: CPU and DRAM attached to a PCIe subsystem hosting an NVMe CSP with a controller memory buffer (CMB) alongside NVMe SSDs.]
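To make the offload flow concrete, here is a minimal user-space sketch, not the device-mapper/NoLoad implementation described above: it only illustrates how a buffer could be handed to an NVMe CSP using the standard Linux NVMe passthrough ioctl. The 0xD0 "compress" opcode, the namespace ID, and the device path are assumptions for illustration.

```c
/* Minimal sketch: hand a buffer to an NVMe CSP via the Linux NVMe
 * passthrough ioctl. The vendor "compress" opcode (0xD0), NSID and
 * device path are hypothetical; a real CSP defines its own command set. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    int fd = open("/dev/nvme0n1", O_RDWR);   /* CSP exposed as an NVMe namespace */
    if (fd < 0) { perror("open"); return 1; }

    char src[4096] = "data to be compressed by the CSP ...";

    struct nvme_passthru_cmd cmd = {
        .opcode   = 0xd0,                     /* hypothetical "compress" opcode */
        .nsid     = 1,                        /* assumed computation namespace ID */
        .addr     = (unsigned long long)(uintptr_t)src,
        .data_len = sizeof(src),
    };

    /* Standard NVMe passthrough: the kernel builds and submits the command. */
    if (ioctl(fd, NVME_IOCTL_IO_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_IO_CMD");
        close(fd);
        return 1;
    }

    printf("compress command completed, result = 0x%x\n", cmd.result);
    close(fd);
    return 0;
}
```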
NVMe-oF™ Computational Storage
• An NVMe™ CSP is represented as an NVMe Computation Namespace, so it can be exposed over Fabrics.
• Compute nodes can borrow CSPs, CSDs, and standard NVMe SSDs over the fabric from Computational Storage Arrays (CSAs).
• NVMe Computational Storage can use the same fabrics commands used by legacy NVMe-oF.
• Application code is identical regardless of whether the computation is local (PCIe) or remote (Fabrics).
[Diagram: compute nodes connected through an Ethernet top-of-rack switch to a Computational Storage Array containing NVMe CSPs, CSDs, and SSDs.]
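As a small illustration of the "application code is identical" point, the sketch below (assuming the sysfs layout of current Linux kernels) lists NVMe controllers and their transport. A CSP borrowed over fabrics simply shows up as another controller reporting "rdma" or "tcp" instead of "pcie", and the passthrough data path shown earlier is unchanged.

```c
/* Sketch only: walk /sys/class/nvme and print each controller's transport
 * ("pcie", "rdma", "tcp", ...). Applications don't need to care which it is. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    DIR *d = opendir("/sys/class/nvme");
    if (!d) { perror("/sys/class/nvme"); return 1; }

    struct dirent *de;
    while ((de = readdir(d)) != NULL) {
        if (strncmp(de->d_name, "nvme", 4) != 0)
            continue;

        char path[256], transport[32] = "unknown";
        snprintf(path, sizeof(path), "/sys/class/nvme/%s/transport", de->d_name);

        FILE *f = fopen(path, "r");
        if (f) {
            if (fgets(transport, sizeof(transport), f))
                transport[strcspn(transport, "\n")] = '\0';
            fclose(f);
        }
        printf("%s: transport=%s\n", de->d_name, transport);
    }
    closedir(d);
    return 0;
}
```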
Example: a Hadoop Cluster with In-Situ Processing
• Data nodes can be migrated into the drives themselves.
• This allows the user to reduce the host CPU core count.
NVMe™ for Computation: Software Stack
• Applications
• Management: nvme-cli, nvme-of tooling
• Userspace: libcsnvme, SPDK
• OS: kernel NVMe driver
• Hardware: NVMe CSPs, CSDs, and CSAs
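libcsnvme is not a published library today; the header-style sketch below is purely hypothetical and only illustrates the kind of user-space API such a layer might expose above the kernel driver or SPDK. Every function name and signature here is invented for illustration.

```c
/* Hypothetical sketch of a user-space "libcsnvme" API. None of these
 * functions exist today; names and signatures are invented to show where
 * such a library would sit between applications and the NVMe driver/SPDK. */
#include <stddef.h>
#include <stdint.h>

/* Opaque handle to a computation namespace, e.g. /dev/nvme0cs1 (hypothetical). */
typedef struct csnvme_ns csnvme_ns_t;

/* Open/close a computation namespace on a given controller. */
csnvme_ns_t *csnvme_open(const char *ctrl, uint32_t cs_nsid);
void         csnvme_close(csnvme_ns_t *ns);

/* Query which fixed-purpose functions (zlib, EC, SHA-256, ...) the CSP advertises. */
int csnvme_get_capabilities(csnvme_ns_t *ns, uint64_t *caps_bitmap);

/* Offload a compression job; src/dst could live in host DRAM or, with
 * p2pdma, in another device's controller memory buffer (CMB). */
int csnvme_compress(csnvme_ns_t *ns, const void *src, size_t src_len,
                    void *dst, size_t *dst_len);

/* Load a program into a general-purpose computation namespace and run it. */
int csnvme_program_load(csnvme_ns_t *ns, const void *image, size_t len);
int csnvme_program_exec(csnvme_ns_t *ns, uint64_t args[4]);
```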
NVMe™ for Computation: Standards
• NVMe Computation Namespaces: a new namespace type with its own namespace ID, command set, and admin commands. Operating systems can treat these namespaces differently from storage namespaces.
• Fixed-Purpose Computation: some computation can be defined in a way an operating system can consume directly (e.g. zlib compression tied into the Linux crypto API).
• General-Purpose Computation: some Computation Namespaces will be flexible and can be programmed and used from user space (/dev/nvmeXcsY, anyone?).
• NVMe Computation over Fabrics: user space does not know or care whether /dev/nvmeXcsY is local (PCIe) or remote (Fabrics).
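The sketch below shows how an OS or tool can inspect a namespace today with a standard Identify Namespace admin command via the Linux passthrough ioctl; the idea that this data would one day flag a "computation" namespace type is the proposal above, not something the current spec, or this code, can actually detect. The device path and NSID are assumptions.

```c
/* Sketch under assumptions: issue a standard Identify Namespace admin
 * command. Today the returned structure only describes storage namespaces;
 * the proposal is that a new namespace type would let the OS expose a
 * computation namespace as /dev/nvmeXcsY instead of a block device. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    int fd = open("/dev/nvme0", O_RDWR);       /* controller character device */
    if (fd < 0) { perror("open"); return 1; }

    uint8_t id_ns[4096] = { 0 };

    struct nvme_passthru_cmd cmd = {
        .opcode   = 0x06,                      /* Identify */
        .nsid     = 1,                         /* assumed namespace ID */
        .addr     = (unsigned long long)(uintptr_t)id_ns,
        .data_len = sizeof(id_ns),
        .cdw10    = 0x00,                      /* CNS 0: Identify Namespace */
    };

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        close(fd);
        return 1;
    }

    /* A future spec field marking this as a computation namespace would be
     * parsed here; for now we just dump the first bytes of the structure. */
    printf("identify returned; first bytes: %02x %02x %02x %02x\n",
           id_ns[0], id_ns[1], id_ns[2], id_ns[3]);
    close(fd);
    return 0;
}
```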
Processor Path in an NGD Systems NVMe™ SSD
It's an NVMe SSD at the core:
• No impact on host read/write
• No impact on the NVMe driver
• Standard protocols
But then there is more (patented IP):
• Dedicated compute resources
• HW acceleration for data analytics
• Seamless programming model
• Scalable
Call to Arms!
• If this all sounds interesting, please join the SNIA Computational Storage TWG.
• End users and software people are needed!
• If you have thoughts on how you would consume NVMe™ Computation, please let us know.
• As SNIA starts interfacing with NVMe, please participate in the TPAR/TP discussions!
NVMe + computation = awesome