Kernel-Kernel Communication in a Shared-memory Multiprocessor Eliseu Chaves et al., May 1993 Presented by Tina Swenson May 27, 2010
Agenda • Introduction • Remote Invocation • Remote Memory Access • RI/RA Combinations • Case Study • Conclusion
Introduction • There’s more than one way to handle large shared-memory systems • Remote Memory Access • we’ve studied this a lot! • Remote Invocation • message passing • Trade-offs are discussed • Theories are tested with a case study
Motivation • UMA design won’t scale • NUMA was seen as the future • It is implemented in commercial CPUs • NUMA allows programmers to choose shared memory or remote invocation • The authors discuss the trade-offs
Kernel-kernel Communication • Each processor has: • Full range of kernel services • Reasonable performance • Access to all memory on the machine • Locality – key to RI success • Previous kernel experience shows that most memory accesses tend to be local to the “node” “...most memory accesses will be local even when using remote memory accesses for interkernel communication, and that the total amount of time spent waiting for replies from other processors when using remote invocation will be small...”
NUMA • NUMA without cache-coherence • 3 methods of kernel-kernel communication • Remote Memory Access • Operation executes on node i, accessing node j’s memory as needed. • Remote Invocation • Node i processor sends a message to node j processor asking j to perform i’s operations. • Bulk data transfer • Kernel moves data from node to node.
Remote Invocation (RI) • Instead of moving data around the architecture, move the operations to the data! • Message Passing
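To make the contrast concrete, here is a minimal C sketch of the same enqueue operation done via remote memory access versus remote invocation. It is not from the paper: the queue layout, `send_invocation`, and `wait_for_reply` are hypothetical stand-ins for whatever mechanism the platform provides.

```c
/* Hypothetical sketch: the same enqueue done two ways. */
#define QSIZE 64

struct queue { int tail; int items[QSIZE]; };

/* Remote memory access: node i's processor touches node j's memory
 * directly, paying the remote penalty on every reference. */
void enqueue_ra(struct queue *remote_q, int item)
{
    remote_q->items[remote_q->tail] = item;         /* remote write      */
    remote_q->tail = (remote_q->tail + 1) % QSIZE;  /* remote read+write */
}

/* Remote invocation: ship the operation to the queue's home node and
 * let that processor run it against local memory. These two primitives
 * stand in for a real mechanism (e.g. an inter-processor interrupt). */
struct invocation { void (*op)(struct queue *, int); struct queue *q; int arg; };
extern void send_invocation(int node, struct invocation *inv);
extern void wait_for_reply(int node);

static void enqueue_local(struct queue *q, int item)  /* runs on home node */
{
    q->items[q->tail] = item;
    q->tail = (q->tail + 1) % QSIZE;
}

void enqueue_ri(int home_node, struct queue *q, int item)
{
    struct invocation inv = { enqueue_local, q, item };
    send_invocation(home_node, &inv);  /* one fixed cost, however many   */
    wait_for_reply(home_node);         /* references the operation makes */
}
```

The RA version pays the remote-access penalty on every reference; the RI version pays one fixed messaging cost regardless of how much memory the operation touches.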
Interrupt-Level RI (ILRI) • Fast • For operations that can be safely executed in an interrupt handler • Limitations: • Non-blocking (thus no locks) operations only • interrupt handlers lack process context • Deadlock Prevention • severely limits when we can use ILRI
Process-Level RI (PLRI) • Slower • Requires context switch and possible synchronization with other running processes • Used for longer operations • Avoid deadlocks by blocking
Memory Considerations • If remote memory access is used, how is it affected by memory consistency models (not addressed in this paper)? • Strong consistency models will incur contention • Weak consistency models widen the cost gap between normal instructions and synchronization instructions • and require the use of memory barriers From Professor Walpole’s slides.
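As an illustration of why weakly ordered memory needs explicit barriers, here is a small C11 sketch (an assumed example, not from the paper or the slides) of publishing data with release/acquire ordering:

```c
#include <stdatomic.h>

int payload;            /* ordinary data written by the producer        */
atomic_int ready;       /* publication flag, initially 0                */

void producer(void)
{
    payload = 42;
    /* release: the payload write must become visible before the flag   */
    atomic_store_explicit(&ready, 1, memory_order_release);
}

int consumer(void)
{
    /* acquire: once the flag is seen, the payload write is visible too */
    while (!atomic_load_explicit(&ready, memory_order_acquire))
        ;
    return payload;     /* guaranteed to read 42, not a stale value     */
}
```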
Mixing RI/RA • ILRI, PLRI and shared memory are compatible, as long as guidelines are followed. “It is easy to use different mechanisms for unrelated data structures.”
Using RA with PLRI • Remote Access and Process-level Remote Invocation can be used on the same data structure if: • synchronization methods are compatible
Using RA with ILRI • Remote Access and Interrupt-level Remote Invocation can be used on the same data structure if: • A hybrid lock is used • interrupt masking AND spin locks
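A minimal sketch of such a hybrid lock, assuming hypothetical `disable_interrupts()`/`restore_interrupts()` primitives in place of the platform's real ones:

```c
#include <stdatomic.h>

/* Platform primitives, assumed here: mask/unmask local interrupts. */
extern unsigned long disable_interrupts(void);
extern void restore_interrupts(unsigned long flags);

typedef struct { atomic_flag locked; } hybrid_lock_t;  /* init: ATOMIC_FLAG_INIT */

unsigned long hybrid_lock(hybrid_lock_t *l)
{
    /* Interrupt masking: no ILRI handler can run on this processor and
     * try to take the lock we already hold (local deadlock avoided).   */
    unsigned long flags = disable_interrupts();
    /* Spin lock: exclude processors on other nodes.                    */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;
    return flags;
}

void hybrid_unlock(hybrid_lock_t *l, unsigned long flags)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
    restore_interrupts(flags);
}
```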
Using PLRI and ILRI • PLRI & ILRI can be used on the same data structure if: • Deadlock is avoided • A processor must always be able to perform incoming invocations while waiting for an outgoing invocation. • Example: do not make a PLRI while ILRIs are blocked (interrupts masked) in order to access data that is shared by normal and interrupt-level code (from Professor Walpole’s slides)
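A sketch of that rule (hypothetical names, same spirit as the earlier enqueue example): while blocked on an outgoing reply, keep draining the incoming invocation queue.

```c
/* Hypothetical messaging layer, as in the earlier sketch. */
struct invocation { void (*op)(void *arg); void *arg; };
extern void send_invocation(int node, struct invocation *inv);
extern int  reply_arrived(int node);
extern struct invocation *dequeue_incoming(void);

void invoke_and_wait(int target_node, struct invocation *out)
{
    send_invocation(target_node, out);
    while (!reply_arrived(target_node)) {
        /* Service incoming invocations instead of spinning idly, so two
         * nodes that invoke each other simultaneously both make progress. */
        struct invocation *in;
        while ((in = dequeue_incoming()) != NULL)
            in->op(in->arg);
    }
}
```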
The Costs • Latency • Impact on local operations • Contention and Throughput • Complement or clash conceptually with the kernel’s organization
Latency • Which has lower latency for a given operation, RA or RI? • Let n be the number of memory accesses the operation makes, R the remote-to-local access time ratio, and C the fixed overhead of a remote invocation (in local-access units) • RA costs roughly nR, RI roughly C + n, so: • If (R-1)n < C • then implement using RA • If operations require a lot of time (large n) • then implement using RI
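The break-even rule made concrete, with illustrative numbers rather than measurements from the paper: R = 12 comes from the case-study slide, while C = 50 local-access times is an assumed invocation overhead.

```c
#include <stdio.h>

/* RA costs about n*R local-access times; RI costs about C + n.
 * RA is cheaper exactly when (R - 1) * n < C.                    */
static int prefer_remote_access(int R, int n, int C)
{
    return (R - 1) * n < C;
}

int main(void)
{
    int R = 12;   /* remote-to-local ratio from the case-study slide */
    int C = 50;   /* assumed RI overhead, in local-access times      */
    for (int n = 1; n <= 8; n++)
        printf("n = %d -> use %s\n", n,
               prefer_remote_access(R, n, C) ? "remote access"
                                             : "remote invocation");
    return 0;     /* crossover near n = C / (R - 1), about 4.5 here  */
}
```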
Impact on Local Operations • Implicit Synchronization: • If PLRI is used for all remote accesses, the home node can operate on the data structure without explicit locks • This solution depends on the kernel not being preempted mid-operation • Explicit Synchronization: • Once remote accesses touch the structure directly, even operations local to a bus-based node must pay the cost of explicit (atomic) synchronization
Contention and Throughput • Operations are serialized at some point! • RI: serialize at the processor executing those operations • Even if the operations have no data in common • RA: serialize at the memory • If accesses compete for the same lock
Complement or Clash • Types of kernels • procedure-based • no distinction between user & kernel space • user program enters kernel via traps • fits RA • message-based • each major kernel resource is its own kernel process • ops require communication among these kernel processes • fits RI
Psyche on Butterfly Plus • Procedure-based OS • Uses shared memory as its primary kernel communication mechanism • Authors built in message-based ops • RI: reorganized code so that accesses were grouped together, allowing a single RI call • non-CC-NUMA • 1 CPU/node • R = 12:1 (remote-to-local access time ratio)
Psyche on Butterfly Plus • High degree of node locality • RI implemented optimistically • Spin locks used • Test-and-test-and-set is used to minimize latency in the absence of contention; otherwise, a simple atomic instruction is used • This can be decided on the fly
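A minimal test-and-test-and-set spin lock in C11, as a sketch of the idea the slide names (the paper's actual implementation was on Butterfly Plus hardware and is not reproduced here):

```c
#include <stdatomic.h>
#include <stdbool.h>

typedef struct { atomic_bool locked; } ttas_lock_t;

void ttas_lock(ttas_lock_t *l)
{
    for (;;) {
        /* "test": spin on a plain load, which stays cheap while the
         * lock is held, instead of hammering with atomic operations.  */
        while (atomic_load_explicit(&l->locked, memory_order_relaxed))
            ;
        /* "test-and-set": try the expensive atomic only when the lock
         * appeared free; retry if another processor got there first.  */
        if (!atomic_exchange_explicit(&l->locked, true, memory_order_acquire))
            return;
    }
}

void ttas_unlock(ttas_lock_t *l)
{
    atomic_store_explicit(&l->locked, false, memory_order_release);
}
```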
Factors Affecting the choice of RI/RA • Cost of the RI mechanism • Cost of atomic operations for synchronization • Ratio of remote to local memory access time • For cache-coherent machines: • cache line size • false sharing • caching effects reducing total cost of kernel ops.
Using PLRI, ILRI, and RA • PLRI • Use it once an operation’s cost surpasses what ILRI can safely handle • Must consider latency, throughput, and the appeal of eliminating explicit synchronization • ILRI • Node locality is hugely important • Use it for low-latency ops when you can’t do RA • Use it when the remote node is idle • The authors used ILRI for console I/O, kernel debugging, and TLB shootdown
Observations • On Butterfly Plus: • ILRI was fast • Explicit synchronization is costly • Remote references are much more expensive than local references • Except for short operations, RI had lower latency, though it might have lower throughput
Conclusions? • Careful design is required for OSs to scale on modern hardware! • Which means you’d better understand the effects of your underlying hardware. • Keep communication to a minimum no matter what solution is used. • Where has mixing of RI/RA gone? • Monday’s paper, for one. • What else? • ccNUMA is in widespread use • How is RI/RA affected?