
Architecture of Parallel Computers CSC / ECE 506 OpenFabrics Alliance Lecture 18


Presentation Transcript


  1. Architecture of Parallel Computers, CSC / ECE 506: OpenFabrics Alliance, Lecture 18. 7/17/2006, Dr. Steve Hunter

  2. Outline • Infiniband and Ethernet Review • DDP and RDMA • OpenFabrics Alliance • IP over Infiniband (IPoIB) • Sockets Direct Protocol (SDP) • Network File System (NFS) • SCSI RDMA Protocol (SRP) • iSCSI Extensions for RDMA (iSER) • Reliable Datagram Sockets (RDS) CSC / ECE 506

  3. Infiniband Goals - Review • Interconnect for server I/O and efficient interprocess communications • Standard across the industry • backed by all the major players • 200+ companies • With an architecture able to match future systems: • Low overhead • Scalable bandwidth, up and down • Scalable fanout, few to thousands • Low cost, excellent price/performance • Robust reliability, availability, and serviceability • Leverages Internet Protocol suite and paradigms CSC / ECE 506

  4. The Basic Unit: an IB Subnet - Review • The basic whole IB system is a subnet • Elements: • Endnodes • Links • Switches • What it does: communicate • endnodes with endnodes, • via message queues, • which process messages over several transport types, • and are SARed into packets, • which are placed on links, • and routed by switches. [Diagram: endnodes attached to a fabric of links and interconnected switches] CSC / ECE 506
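The message path sketched above (queue, segmentation into packets, reassembly at the far endnode) can be illustrated with a toy model. The function names and the 256-byte MTU are illustrative, not from any IB implementation.

```python
# Toy illustration of the subnet's data path: a queued message is
# segmented (SARed) into MTU-sized packets for the links, then
# reassembled at the receiving endnode.

MTU = 256  # illustrative packet payload size

def segment(message):
    """Split one queued message into link-sized packets."""
    return [message[i:i + MTU] for i in range(0, len(message), MTU)]

def reassemble(packets):
    """Receiver-side SAR: rebuild the original message from packets."""
    return b"".join(packets)

msg = bytes(1000)
pkts = segment(msg)
assert len(pkts) == 4          # 256 + 256 + 256 + 232 bytes
assert reassemble(pkts) == msg
```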

  5. End Node Attachment to IB - Review [Diagram: a host with CPUs and a memory controller attaching through an HCA (verbs, QPs, CQs, IB layers, memory tables), and an I/O controller attaching through an adapter's TCA (QPs, CQs, IB layers)] • End nodes attach to IB via Channel Adapters: • Host CAs (HCAs) • O/S APIs/KPIs not specified • Queues and memory accessible via verbs • QP, CQ, and RDMA engines • Must support three IB transports • Can include: • Dual ports • Load balancing, availability (path migration) • Attachment to the same or different subnets • Partitioning • Atomics, … • Target CAs (TCAs) • Queue access method is vendor unique • QP and CQ engines • Need only support Unreliable Datagram • ULP can be standard or proprietary • In other words, a smaller subset of required functions CSC / ECE 506

  6. InfiniBand Summary • The InfiniBand architecture is a very high performance, low latency interconnect technology based on an industry-standard approach to Remote Direct Memory Access (RDMA) • An InfiniBand fabric is built from hardware and software that are configured, monitored, and operated to deliver a variety of services to users and applications • Characteristics of the technology that differentiate it from comparable interconnects such as traditional Ethernet include: • End-to-end reliable delivery • Scalable bandwidths, from 10 to 60 Gbps available today, moving to 120 Gbps in the near future • Scalability without performance degradation • Low latency between devices • Greatly reduced server CPU utilization for protocol processing • An efficient I/O channel architecture for network and storage virtualization CSC / ECE 506

  7. Advanced Ethernet - Review • iSER / RNIC model shown with a SCSI application, alongside the TCP/IP model • It's expected the OpenFabrics effort (i.e., the OpenIB / OpenRDMA merger) will enable even more advanced functions in NIC technology [Diagram: the TCP/IP model (application examples HTTP, SMTP, FTP; transport TCP, UDP; IP network; Ethernet link / MAC; copper or optical physical) beside the iSER / RNIC stack: SCSI service via Internet SCSI (iSCSI), RDMA service via iSCSI Extensions for RDMA (iSER), Remote Direct Memory Access Protocol (RDMAP), Direct Data Placement (DDP), Markers with PDU Alignment (MPA), Transmission Control Protocol (TCP), Internet Protocol (IP), Media Access Control (MAC), physical; the offloaded layers form the RDMA NIC (RNIC)] CSC / ECE 506

  8. Advanced Ethernet Summary • The iWARP technology, implemented as an RDMA Network Interface Card (RNIC), achieves zero-copy, RDMA, and protocol offload over existing TCP/IP networks • It was demonstrated that a 10GbE-based RNIC can reduce CPU processing overhead from 80-90% to less than 10% compared to its host stack equivalent • Additionally, its achievable end-to-end latency is now 5 microseconds or less • iWARP, together with the emerging low latency (low hundreds of nanoseconds) 10 GbE switches, can also provide a powerful infrastructure for clustered computing, server-to-server processing, visualization, and file systems • The advantages of the iWARP technology include its ability to leverage the widely deployed TCP/IP infrastructure, its broad knowledge base, and its mature management and monitoring capabilities • In addition, an iWARP infrastructure is routable, eliminating the need for gateways to connect to the LAN or WAN CSC / ECE 506

  9. DDP and RDMA • IETF RFC http://rfc.net/rfc4296.html • The central idea of general-purpose DDP is that a data sender supplements the data it sends with placement information that allows the receiver's network interface to place the data directly at its final destination without any copying • DDP can be used to steer received data to its final destination, without requiring layer-specific behavior for each different layer • Data sent with such DDP information is said to be "tagged" • The central components of the DDP architecture are the "buffer", an object with beginning and ending addresses, and a method, set(), which sets the value of an octet at an address • In many cases, a buffer corresponds directly to a portion of host user memory. However, DDP does not depend on this; a buffer could be a disk file, or anything else that can be viewed as an addressable collection of octets CSC / ECE 506
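The buffer/set() model above can be sketched in a few lines. This is a hypothetical illustration of the RFC 4296 abstraction, not an implementation; the class and function names are invented for the example.

```python
# Sketch of the DDP "buffer" abstraction: an addressable collection of
# octets with a set() method that places each octet at its final address.

class Buffer:
    """A DDP buffer: octets between a beginning and an ending address."""

    def __init__(self, begin, end):
        self.begin = begin
        self.end = end
        self.octets = bytearray(end - begin)

    def set(self, address, value):
        """Set the value of one octet directly at an address (no copy)."""
        if not (self.begin <= address < self.end):
            raise ValueError("address outside buffer bounds")
        self.octets[address - self.begin] = value


def place_tagged_segment(buf, offset, payload):
    """Receiver-side placement: steer tagged data straight into the buffer."""
    for i, octet in enumerate(payload):
        buf.set(buf.begin + offset + i, octet)


buf = Buffer(0x1000, 0x2000)
place_tagged_segment(buf, 16, b"hello")
assert bytes(buf.octets[16:21]) == b"hello"
```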

  10. DDP and RDMA • Remote Direct Memory Access (RDMA) extends the capabilities of DDP with two primary functions. • It adds the ability to read from buffers registered to a socket (RDMA Read). • This allows a client protocol to perform arbitrary, bidirectional data movement without involving the remote client. • When RDMA is implemented in hardware, arbitrary data movement can be performed without involving the remote host CPU at all. • RDMA specifies a transport-independent untagged message service (Send) with characteristics that are both very efficient to implement in hardware, and convenient for client protocols. • The RDMA architecture is patterned after the traditional model for device programming, where the client requests an operation using Send-like actions (programmed I/O), the server performs the necessary data transfers for the operation (DMA reads and writes), and notifies the client of completion. • The programmed I/O+DMA model efficiently supports a high degree of concurrency and flexibility for both the client and server, even when operations have a wide range of intrinsic latencies. CSC / ECE 506
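The programmed I/O + DMA pattern above (client issues a small Send describing registered memory; the server does the actual transfer with an RDMA Read) can be mimicked in a toy model. All names here (STag, rdma_read, send) are illustrative simplifications, not a real verbs API.

```python
# Illustrative sketch of the Send + RDMA Read pattern: registered memory
# is modeled as a dict keyed by a steering tag (STag).

registered = {}  # STag -> bytearray (memory the client registered)

def register(stag, data):
    """Client registers a buffer, making it readable by the peer."""
    registered[stag] = bytearray(data)

def rdma_read(stag, offset, length):
    """Server pulls data from the client's buffer; the client CPU is idle."""
    return bytes(registered[stag][offset:offset + length])

def send(request_queue, msg):
    """Untagged Send: the client's only active step (programmed I/O)."""
    request_queue.append(msg)

# Client side: register a buffer, then Send a small request describing it.
register(stag=7, data=b"client payload to fetch")
requests = []
send(requests, {"op": "fetch", "stag": 7, "offset": 7, "len": 7})

# Server side: perform the data transfer named in the request (the "DMA").
req = requests.pop(0)
data = rdma_read(req["stag"], req["offset"], req["len"])
assert data == b"payload"
```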

  11. OpenFabrics Alliance • The OpenFabrics Alliance is an international organization of industry, academic, and research groups that has developed a unified core of open source software stacks (OpenSTAC) leveraging RDMA architectures for both the Linux and Windows operating systems, over both InfiniBand and Ethernet • RDMA is a communications technique allowing data to be transmitted from the memory of one computer to the memory of another without passing through either device's CPU, without needing extensive buffering, and without calls into an operating system kernel • The core OpenSTAC software supports all the well-known standard upper layer protocols, such as MPI, IP, SDP, NFS, SRP, iSER, and RDS, on top of Ethernet and InfiniBand (IB) infrastructures • The OpenFabrics software and supporting services enable low-latency InfiniBand and 10 GbE to deliver clustered computing, server-to-server processing, visualization, and file system access CSC / ECE 506

  12. OpenFabrics Software Stack • Key abbreviations: • SA - Subnet Administrator • MAD - Management Datagram • SMA - Subnet Manager Agent • PMA - Performance Manager Agent • IPoIB - IP over InfiniBand • SDP - Sockets Direct Protocol • SRP - SCSI RDMA Protocol (Initiator) • iSER - iSCSI Extensions for RDMA (Initiator) • RDS - Reliable Datagram Service • UDAPL - User Direct Access Programming Library • HCA - Host Channel Adapter • R-NIC - RDMA NIC [Diagram: the OpenFabrics software stack. Application level: IP-based app access, sockets-based access (IBM DB2), various MPIs, block storage access, clustered DB access (Oracle 10g RAC), access to file systems, diag tools, Open SM. User level: user APIs (UDAPL, MAD API, SDP library, user-level verbs / API). Kernel space: upper layer protocols (IPoIB, SDP, SRP, iSER, RDS, NFS-RDMA RPC, cluster file systems); mid-layer (connection manager abstraction (CMA), SA client, MAD, SMA, connection managers, InfiniBand verbs / API, R-NIC driver API); hardware-specific drivers. Hardware: InfiniBand HCA and iWARP R-NIC] CSC / ECE 506

  13. IP over IB (IPoIB) • IETF standard for mapping Internet protocols to InfiniBand • IETF IPoIB Working Group • Covers • Fabric initialization • Multicast/Broadcast • Address resolution (IPv4/IPv6) • IP datagram encapsulation (IPv4/IPv6) • MIBs CSC / ECE 506

  14. IP over IB (IPoIB) • Communication Parameters • Obtained from the Subnet Manager (SM) • P_Key (Partition Key) • SL (Service Level) • Path Rate • Link MTU (for IPv6, can be reduced with router advertisement) • GRH parameters - TClass, Flow Label, HopLimit • Obtained from address resolution • Data link layer address (GID) • A persistent data link layer address is necessary • Enables IB routers to be deployed eventually • QPN (queue pair number) CSC / ECE 506

  15. IP over IB (IPoIB) • Address Resolution • IPv4 • ARP request is sent on the broadcast MGID • ARP reply is unicast back and contains the GID and QPN • IPv6 • Neighbor discovery using the all-IP-hosts multicast address • Existing RFCs • Summary • Feels like Ethernet with a 2KB MTU • Doesn't utilize most of InfiniBand's custom hardware • e.g., SAR, Reliable Transport, Zero Copy, RDMA Reads/Writes, Kernel Bypass • SDP is the enhanced version CSC / ECE 506
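The IPv4 resolution flow above can be modeled simply: unlike Ethernet ARP, the reply maps an IP address to a (GID, QPN) pair rather than a MAC address. The table layout, function names, and sample GID/QPN values are illustrative only.

```python
# Toy model of IPoIB IPv4 address resolution: the ARP reply (unicast
# back after a request on the broadcast MGID) carries the peer's GID
# and QPN, which together serve as the data link layer address.

arp_cache = {}  # IPv4 address -> (GID, QPN)

def arp_reply(ipv4, gid, qpn):
    """Cache the (GID, QPN) pair carried in a received ARP reply."""
    arp_cache[ipv4] = (gid, qpn)

def resolve(ipv4):
    """Resolve an IP next hop to its IB link-layer address (GID, QPN)."""
    return arp_cache[ipv4]

arp_reply("10.0.0.5", gid="fe80::0002:c903:0001:2345", qpn=0x48)
assert resolve("10.0.0.5") == ("fe80::0002:c903:0001:2345", 0x48)
```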

  16. Sockets Direct Protocol (SDP) • Based on Microsoft's Winsock Direct Protocol • SDP Feature Summary • Maps sockets SOCK_STREAM to RDMA semantics • Optimizations for transaction-oriented protocols • Optimizations for mixing of small and large messages • Uses advanced InfiniBand features • Reliable Connected (RC) service • Uses RDMA Writes, Reads, and Sends • Supports Automatic Path Migration CSC / ECE 506

  17. SDP Terminology • Data Source • Side of connection which is sourcing the ULP data to be transferred • Data Sink • Side of connection which is receiving (sinking) the ULP data • Data Transfer Mechanism • To move ULP data from Data Source to Data Sink (e.g., Bcopy, Receiver Initiated Zcopy, Read Zcopy) • Flow Control Mode • State that the half connection is currently in (Combined, Pipelined, Buffered) • Bcopy Threshold • If message length is under threshold, use Bcopy mechanism. Threshold is locally defined. CSC / ECE 506

  18. SDP Modes • Flow Control Modes restrict data transfer mechanisms • Buffered Mode • Used when the receiver wishes to force all transfers to use the Bcopy mechanism • Combined Mode • Used when the receiver is not pre-posting buffers and uses the peek/select interface (Bcopy or Read Zcopy, only one outstanding) • Pipelined Mode • Highly optimized transfer mode: multiple write or read buffers outstanding; can use all data transfer mechanisms (Bcopy, Read Zcopy, Receiver Initiated Write Zcopy) CSC / ECE 506
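The two slides above (the locally defined Bcopy Threshold and the three flow control modes) combine into a simple decision rule. This is a hedged sketch: the threshold value, mode strings, and function name are invented for illustration and are not mandated by the SDP specification.

```python
# Sketch of choosing an SDP data transfer mechanism for one ULP message,
# given the message length and the half-connection's flow control mode.

BCOPY_THRESHOLD = 16 * 1024  # locally defined; an assumption, not a spec value

def pick_mechanism(msg_len, mode):
    """Choose a data transfer mechanism for one message."""
    if mode == "buffered":          # receiver forces Bcopy for all transfers
        return "bcopy"
    if msg_len < BCOPY_THRESHOLD:   # under threshold: copy via private pool
        return "bcopy"
    if mode == "combined":          # only Read Zcopy, one outstanding
        return "read-zcopy"
    return "write-zcopy"            # pipelined: all mechanisms available

assert pick_mechanism(512, "pipelined") == "bcopy"
assert pick_mechanism(64 * 1024, "pipelined") == "write-zcopy"
assert pick_mechanism(64 * 1024, "buffered") == "bcopy"
```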

  19. SDP Terminology [Diagram: Data Source and Data Sink user buffers connected through CAs over an InfiniBand Reliable Connection (RC); a buffer copy path passes through a fixed-size SDP private buffer pool, while a zero copy path moves data directly between the user buffers] • Enables buffer-copy when • Transfer is short • Application needs buffering • Enables zero-copy when • Transfer is long CSC / ECE 506

  20. Network File System (NFS) • Network File System (NFS) is a protocol originally developed by Sun Microsystems in 1984 and defined in RFCs 1094, 1813, and 3530 (which obsoletes RFC 3010). It is a distributed file system that allows a computer to access files over a network as easily as if they were on its local disks • NFS is one of many protocols built on the Open Network Computing Remote Procedure Call system (ONC RPC) • Version 2 of the protocol • originally operated entirely over UDP and was meant to keep the protocol stateless, with locking (for example) implemented outside of the core protocol • Version 3 added: • support for 64-bit file sizes and offsets, to handle files larger than 4 GB • support for asynchronous writes on the server, to improve write performance • additional file attributes in many replies, to avoid the need to refetch them • a READDIRPLUS operation, to get file handles and attributes along with file names when scanning a directory • assorted other improvements CSC / ECE 506

  21. Network File System (NFS) • Version 4 (RFC 3530) • Influenced by AFS and CIFS, it includes performance improvements, mandates strong security, and introduces a stateful protocol. Version 4 was the first version developed with the Internet Engineering Task Force (IETF) after Sun Microsystems handed over development of the NFS protocols • Various side-band protocols have been added to NFS, including: • The byte-range advisory Network Lock Manager (NLM) protocol, added to support System V UNIX file-locking APIs • The remote quota reporting (RQUOTAD) protocol, which allows NFS users to view their data storage quotas on NFS servers • WebNFS, an extension to Versions 2 and 3 that allows NFS to be more easily integrated into Web browsers and to enable operation through firewalls CSC / ECE 506

  22. SCSI RDMA Protocol (SRP) • SRP defines a SCSI protocol mapping onto the InfiniBand Architecture and/or functionally similar cluster protocols • RDMA Consortium voted to create iSER instead of porting SRP to IP • SRP doesn’t have a wide following • SRP doesn’t have a discovery or management protocol • Version 2 of SRP hasn’t been updated for 1.5 years CSC / ECE 506

  23. iSCSI Extensions for RDMA (iSER) • iSER combines SRP and iSCSI with new RDMA capabilities • iSER is maintained as part of iSCSI in the IETF • Recently extended to IB by IBM, Voltaire, HP, EMC, and others • Benefits of adding iSER to IB • Provides (almost) the same storage protocol across all RDMA networks • Easier to train staff • Bridging products are more straightforward • Moves the storage community toward an iSCSI/iSER mentality and may help with acceptance on IP • Desire for a common discovery and management protocol across iSCSI, iSER/iWARP, and IP • i.e., the same management and discovery process and software to handle IP networks and IB networks CSC / ECE 506

  24. iSCSI Extensions for RDMA (iSER) • iSCSI's main performance deficiencies stem from TCP/IP • TCP is a complex protocol requiring significant processing • Stream based, making it hard to separate data and headers • Requires copies that increase latency and CPU overhead • Uses checksums, requiring additional CRCs in the ULP • iSER eliminates these bottlenecks through: • Zero copy using RDMA • CRCs calculated by hardware • Working with message boundaries instead of streams • A transport protocol implemented in hardware (minimal CPU cycles per I/O) CSC / ECE 506

  25. iSCSI Extensions for RDMA (iSER) • iSER leverages iSCSI management, discovery, and RAS • Zero-configuration, discovery, and a global storage name server (SLP, iSNS) • Change notifications and active monitoring of devices and initiators • High availability and 3 levels of automated recovery • Multi-pathing and storage aggregation • Industry standard management interfaces (MIB) • 3rd party storage managers • Security: partitioning, authentication, central login control, etc. • Working with iSER over IB doesn't require any changes • Focused effort from both communities • More advanced than SRP CSC / ECE 506

  26. iSCSI Extensions for RDMA (iSER) • iSCSI specification: • http://www.ietf.org/rfc/rfc3720.txt • iSER and DA Introduction • http://www.rdmaconsortium.org/home/iSER_DA_intro.pdf • iSER specification • http://www.ietf.org/internet-drafts/draft-ietf-ips-iser-05.txt • iSER over IB Overview • http://www.haifa.il.ibm.com/satran/ips/iSER-in-an-IB-network-V9.pdf CSC / ECE 506

  27. Reliable Datagram Sockets (RDS) • Goals • Provide reliable datagram service • performance • scalability • high availability • simplify application code • Maintain sockets API • application code portability • faster time-to-market CSC / ECE 506

  28. Reliable Datagram Sockets (RDS) Stack Overview [Diagram: in user space, UDP applications, Oracle 10g, and socket applications; in the kernel, TCP and UDP over IP over IPoIB, alongside SDP and RDS, all running over the OpenIB access layer and the Host Channel Adapter] CSC / ECE 506

  29. Reliable Datagram Sockets (RDS) • Application is connectionless • RDS maintains a node-to-node connection • IP addressing • Uses CMA • On-demand connection setup • connect on first sendmsg() or data receive • disconnect on error or policy such as inactivity • Connection setup/teardown transparent to applications CSC / ECE 506
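The on-demand, node-level connection behavior above can be sketched as follows. The connection table and function names are hypothetical; a real RDS implementation sets up an RC queue pair here rather than a dict entry.

```python
# Sketch of RDS on-demand connection setup: the application stays
# connectionless, while the first sendmsg() to a node transparently
# establishes one shared node-to-node connection.

connections = {}  # remote node IP -> connection state

def sendmsg(node_ip, port, data):
    """Connectionless at the socket API; connected underneath."""
    conn = connections.get(node_ip)
    if conn is None:                  # first send to this node: connect
        conn = {"state": "connected", "queued": []}
        connections[node_ip] = conn
    conn["queued"].append((port, data))

sendmsg("10.0.0.9", 5001, b"a")       # triggers connection setup
sendmsg("10.0.0.9", 5002, b"b")       # reuses the node-level connection
assert len(connections) == 1
assert len(connections["10.0.0.9"]["queued"]) == 2
```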

  30. Reliable Datagram Sockets (RDS) • Data and Control Channel • Uses RC QP for node level connections • Data and Control QPs per session • Selectable MTU • b-copy send/recv • H/W flow control CSC / ECE 506

  31. The End CSC / ECE 506

  32. RDS - Send • Connection established on first send • sendmsg() • allows send pipelining • ENOBUFS returned if insufficient send buffers; application retries CSC / ECE 506

  33. RDS - Receive • Identical to UDP recvmsg() • similar blocking/non-blocking behavior • “Slow” receiver ports are stalled at sender side • combination of activity (LRU) and memory utilization used to detect slow receivers • sendmsg() to stalled destination port returns EWOULDBLOCK, application can retry • Blocking socket can wait for unblock • recvmsg() on a stalled port un-stalls it CSC / ECE 506
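The slow-receiver handling above can be modeled in a few lines: a stalled destination port makes sendmsg() fail with EWOULDBLOCK, and a recvmsg() on that port un-stalls it. This is an illustrative sketch; the function signatures and return conventions are invented, not the kernel API.

```python
# Sketch of RDS slow-receiver stalling: sends to a stalled port fail
# with EWOULDBLOCK until the receiver drains it with recvmsg().

import errno

stalled_ports = set()

def sendmsg(port, data, queue):
    """Queue data for a port, unless the port is stalled."""
    if port in stalled_ports:
        return -errno.EWOULDBLOCK   # application may retry later
    queue.append(data)
    return len(data)

def recvmsg(port, queue):
    """Receive from a port; receiving un-stalls a stalled port."""
    stalled_ports.discard(port)
    return queue.pop(0) if queue else None

q = []
stalled_ports.add(5001)              # detected as a slow receiver
assert sendmsg(5001, b"x", q) == -errno.EWOULDBLOCK
recvmsg(5001, q)                     # reader drains: port un-stalled
assert sendmsg(5001, b"x", q) == 1
```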

  34. RDS - High Availability (failover) • Use of RC and on-demand connection setup allows HA • connection setup/teardown transparent to applications • every sendmsg() could “potentially” result in a connection setup • if a path fails, connection is torn down, next send can connect on an alternate path (different port or different HCA) CSC / ECE 506

  35. Preliminary performance: RDS on OpenIB [Chart: throughput results. *Test system: dual 2.4 GHz Xeon, 2 GB memory, 4x PCI-X HCA. **For comparison, SDP achieves ~3700 Mb/sec on TCP_STREAM] CSC / ECE 506

  36. Preliminary performance: RDS on OpenIB [Chart: further results on the same test system: dual 2.4 GHz Xeon, 2 GB memory, 4x PCI-X HCA; SDP ~3700 Mb/sec TCP_STREAM for comparison] CSC / ECE 506

  37. Preliminary performance: RDS on OpenIB [Chart: performance results] CSC / ECE 506
