Ralf Juengling, Portland State University

David D. Clark, “The Structuring of Systems using Upcalls”, Proc. of the 10th Symposium on Operating System Principles, pp. 171-180, 1985.

The Structuring of Systems using Upcalls
When you bake a big cake or write a big program, you will probably do it in layers.

Layers
When writing a big program you need abstractions to be able to… • Think about your code • Communicate your code to others • Test your code • Adapt your code later to changed requirements

For many applications layered abstractions are natural: • Protocol stacks • Compilers • Database management systems • Scientific computing applications • Operating systems

Layers as one way of abstracting
Clients call down into an XYZ library (stateless); the library may have any number of concurrent threads active in it if its code is reentrant.

Flow of control in layered code
• Handle device interrupts in a timely manner • Support dynamic updating of modules (e.g., device drivers), but don’t compromise safety

Solutions: • Have interrupt handlers communicate with devices and let other code communicate with interrupt handlers asynchronously (buffers, messages) • Contain modules in their own address spaces • Use IPC to let different modules communicate across protection boundaries

Additional requirements in OS kernel code
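The first solution above can be sketched in Python: an "interrupt handler" hands packets to the rest of the system through a bounded buffer and never blocks, while a protocol task drains the buffer at its own pace. This is a minimal illustration, not kernel code; the names and the string "packets" are invented for the example.

```python
import queue
import threading

# Bounded buffer between interrupt context and task context.
packet_buffer = queue.Queue(maxsize=16)

def interrupt_handler(packet):
    # Runs in "interrupt context": do the minimum and return quickly.
    try:
        packet_buffer.put_nowait(packet)
    except queue.Full:
        pass  # drop on overflow; real drivers would count such drops

def protocol_task(results):
    # Runs later, in ordinary task context.
    while True:
        packet = packet_buffer.get()
        if packet is None:                # sentinel: shut down
            return
        results.append(packet.upper())   # stand-in for protocol processing

results = []
worker = threading.Thread(target=protocol_task, args=(results,))
worker.start()
for p in ("syn", "ack"):
    interrupt_handler(p)
packet_buffer.put(None)
worker.join()
print(results)  # ['SYN', 'ACK']
```

The key property is that `interrupt_handler` never waits: communication is decoupled in time, exactly why buffering makes upward data flow asynchronous.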
…we have: • Abstraction boundaries • Protection boundaries • Downward control flow • Upward control flow … communication between layers is more costly because: • Control flow across protection boundaries (RPC, messages,…) • Upward control flow across abstraction boundaries (buffers) In kernel code…
Note: • Layers have state • Need to synchronize shared data • Calls across layers cross protection boundaries • Upward data flow is asynchronous (buffers) • For some layers there is a dedicated task (pipeline) • Downward control flow may be asynchronous or synchronous

Flow of control in kernel code
… communication between layers is more costly because: • Control flow crosses protection boundaries • Upward control flow crosses abstraction boundaries

Clark’s solution: • Let upward control flow proceed synchronously with upcalls • Get rid of protection boundaries

In kernel code…
Idea: • Leave “blanks” in lower-level code • Let higher-level code “fill in the blanks” in the form of handlers

In functional programming this technique is used every day, in OO programming every other day. Other terms: handler function, callback function, virtual method.

Does using upcalls abolish abstraction boundaries?

Upcalls
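The "blank" idea can be shown in a few lines. Here the function names `net_receive` and `transport_receive` echo the example that follows, but this snippet is an illustrative sketch, not the paper's code: the lower layer upcalls whatever handler was passed in, instead of buffering the payload for a later poll.

```python
def net_receive(raw_data, handler):
    # Lower layer: strips framing, then fills in the "blank" by
    # upcalling the handler supplied by the layer above.
    payload = raw_data.strip()
    return handler(payload)

def transport_receive(payload):
    # Higher-level code provides the blank's contents.
    return "delivered:" + payload

print(net_receive("  hello  ", transport_receive))  # delivered:hello
```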
It looks a bit more like layered library code: • Procedure calls instead of IPC • Plus upcalls • But we can’t do completely without buffering

Flow of control in kernel code
• transport-receive is a handler for net-receive • display-receive is a handler for transport-receive • A handler gets registered by an xxx-open call

Protocol package example

(Diagram: layered call graph over display-start, display-receive, transport-open, transport-receive, transport-get-port, net-open, net-receive, net-dispatch, create-task, wakeup)
Protocol package example

display-start():
  local-port = transport-open(display-receive)
end

transport-open(receive-handler):
  local-port = net-open(transport-receive)
  handler-array(local-port) = receive-handler
  return local-port
end

net-open(receive-handler):
  port = generate-uid()
  handler-array(port) = receive-handler
  task-array(port) = create-task(net-receive, port)
  return port
end
Protocol package example

transport-get-port(packet):
  // determine whose packet this is
  extract port from packet
  return port
end

net-dispatch():
  read packet from device
  restart device
  port = transport-get-port(packet)   // not quite clean: the net layer
                                      // calls up into the transport layer
  put packet on per-port queue
  task-id = task-array(port)
  wakeup-task(task-id)
end
Protocol package example

display-receive(char):
  write char to display
end

transport-receive(packet, port):
  handler = handler-array(port)
  validate packet header
  for each char in packet:
    handler(char)
end

net-receive(port):
  handler = handler-array(port)
  do forever
    remove packet from per-port queue
    handler(packet, port)
    block()
  end
end
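The protocol-package pseudocode above can be approximated as a runnable, single-threaded Python sketch. The per-port tasks, `create-task`, and `wakeup-task` are replaced here by a direct drain of the per-port queue, and packets are modeled as dicts; those simplifications, and the module-level state, are assumptions of this sketch, not the paper's design.

```python
import itertools

net_handlers = {}         # one handler array per layer, as in the slides
transport_handlers = {}
port_queues = {}          # per-port packet queues
display_output = []       # stand-in for the display device
_uid = itertools.count(1)

def display_receive(char):
    display_output.append(char)

def transport_receive(packet, port):
    handler = transport_handlers[port]
    # (validate packet header -- omitted)
    for char in packet['data']:
        handler(char)     # upcall, character by character

def transport_get_port(packet):
    # determine whose packet this is
    return packet['port']

def net_open(receive_handler):
    port = next(_uid)
    net_handlers[port] = receive_handler
    port_queues[port] = []
    return port

def transport_open(receive_handler):
    local_port = net_open(transport_receive)
    transport_handlers[local_port] = receive_handler
    return local_port

def display_start():
    return transport_open(display_receive)

def net_receive(port):
    # the paper runs this loop in a dedicated per-port task;
    # here we simply drain the queue in place
    while port_queues[port]:
        packet = port_queues[port].pop(0)
        net_handlers[port](packet, port)

def net_dispatch(packet):
    port = transport_get_port(packet)  # the "not quite clean" upward query
    port_queues[port].append(packet)
    net_receive(port)                  # stands in for wakeup-task

port = display_start()
net_dispatch({'port': port, 'data': 'hi'})
print(display_output)  # ['h', 'i']
```

Tracing the run: `display_start` registers the handler chain down through the layers, and one dispatched packet flows back up, ending as characters written to the "display".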
This must not leave any shared data inconsistent! Two things need to be recovered: • The task • The per-client data in each layer/module

Solution: • Cleanly separate shared state from per-client data • Have a per-layer cleanup procedure and arrange for the system to call it in case of a failure • Unlock everything before an upcall

What if an upcall fails?
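The recipe above can be sketched as follows. The `TransportLayer` class and its methods are invented for illustration: shared/per-client state lives under a lock, the upcall is made with no locks held, and a per-layer `cleanup` reclaims the failed client's state (here invoked directly; in the paper the system arranges the call).

```python
import threading

class TransportLayer:
    def __init__(self):
        self.lock = threading.Lock()
        self.per_client = {}      # per-client data, kept separate from shared state

    def open(self, port, handler):
        with self.lock:
            self.per_client[port] = {'handler': handler, 'delivered': 0}

    def receive(self, port, packet):
        with self.lock:           # touch per-client state only under the lock
            client = self.per_client[port]
            client['delivered'] += 1
            handler = client['handler']
        try:
            handler(packet)       # upcall with no locks held
        except Exception:
            self.cleanup(port)    # per-layer cleanup on upcall failure
            raise

    def cleanup(self, port):
        with self.lock:
            self.per_client.pop(port, None)

def bad_handler(packet):
    raise RuntimeError("client died")

layer = TransportLayer()
layer.open(7, bad_handler)
try:
    layer.receive(7, "data")
except RuntimeError:
    pass
print(7 in layer.per_client)  # False: the failed client's state was reclaimed
```

Because the lock is released before the upcall, the failing handler can never die while holding the layer's lock, so cleanup can safely re-acquire it.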
This is a source of potential, subtle bugs: an indirect recursive call may change state unexpectedly.

Some solutions: • Check state after an upcall (ugly) • Don’t allow a handler to downcall (simple & easy) • Have the upcalled procedure trigger a future action instead of down-calling (example: transport-arm-for-send)

May upcalled code call down?
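The third solution can be sketched like this. The handler records intent instead of re-entering the lower layer; `transport_arm_for_send` is modeled on the slide's example name, but the surrounding functions are invented for the illustration.

```python
send_requests = []   # actions "armed" during an upcall
sent = []

def transport_arm_for_send(port):
    # Safe to call from an upcalled handler: it only records intent.
    send_requests.append(port)

def client_receive(packet, port):
    # Upcalled handler: instead of down-calling into the transport layer
    # (which could re-enter it while its state is mid-update), arm a send.
    transport_arm_for_send(port)

def transport_deliver(packet, port):
    client_receive(packet, port)       # the upcall
    # Back in the lower layer, with its own state consistent again,
    # service whatever the handler armed:
    while send_requests:
        sent.append(('ack', send_requests.pop(0)))

transport_deliver('data', 5)
print(sent)  # [('ack', 5)]
```

The effect is the same as a downcall, but it is deferred until the lower layer has returned to a consistent state, so no indirect recursion occurs.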
Don’t really know. With downcall-only code there is a simple locking discipline: • Have each layer use its own set of locks • Have each subroutine release its lock before return • No deadlock, as a partial order is implied by the call graph

This doesn’t work when upcalls are allowed. The principle behind their recipe “release any locks before an upcall” is asymmetry of trust: • Trust the layers you depend on, but not your clients

How to use locks?
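The downcall-only discipline can be made concrete with two layers; the functions and locks here are illustrative, not from the paper. Because locks are only ever acquired in call-graph order (upper before lower) and each subroutine releases its own lock before returning, no cycle of lock waits can form.

```python
import threading

upper_lock = threading.Lock()   # each layer has its own set of locks
lower_lock = threading.Lock()
log = []

def lower_op(x):
    with lower_lock:            # lower layer's own lock,
        log.append(('lower', x))
        return x + 1            # released before return

def upper_op(x):
    with upper_lock:            # held across the downcall: fine, since
        log.append(('upper', x))
        return lower_op(x * 2)  # locks are always taken upper -> lower,
                                # the partial order rules out deadlock

print(upper_op(3))  # 7
```

With upcalls this breaks: the upcalled client might downcall and try to take an "upper" lock while a "lower" one is held, inverting the order; hence the recipe of releasing any locks before an upcall.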
We get rid of protection boundaries for the sake of performance and to make upcalls practical.

We seemingly keep abstraction boundaries intact as we: • Don’t leak information about the implementation by offering an upcall interface • Don’t know our clients, they must register handlers

But we need to observe some constraints to make it work: • Downcall policy • Locking discipline • Cleanup interface

Upcalls & abstraction boundaries
• Monitors for synchronization • Task scheduling with a “deadline priority” scheme • Dynamic priority adjustment when a higher-priority task waits for a lower-priority task (“deadline promotion”) • Inter-task communication via shared memory • High-level implementation language (CLU, anyone?) • Mark & sweep garbage collector

Oh, and “multi-task modules” are just layers with state, prepared for multiple concurrent execution.

Other things in Swift