
CSEP505: Programming Languages Lecture 9: Finish Concurrency; Start OOP






Presentation Transcript


  1. CSEP505: Programming Languages, Lecture 9: Finish Concurrency; Start OOP. Dan Grossman, Winter 2009

  2. Where are we
• Thread creation
• Communication via shared memory
• Synchronization with join, locks
• Message passing a la Concurrent ML
  • Very elegant
  • First done for Standard ML, but available in several functional languages
  • Can wrap synchronization abstractions to make new ones
  • In my opinion, quite under-appreciated
• Back to shared memory for software transactions
CSE P505 Winter 2009 Dan Grossman

  3. The basics
• send and receive return “events” immediately
• sync blocks until “the event happens”
• Separating these is key in a few slides

(* event.mli; Caml's version of CML *)
type 'a channel (* messages passed on channels *)
val new_channel : unit -> 'a channel
type 'a event (* when sync'ed on, get an 'a *)
val send : 'a channel -> 'a -> unit event
val receive : 'a channel -> 'a event
val sync : 'a event -> 'a
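A minimal use of this interface, as a sketch (assuming OCaml's Event and Thread modules from the threads library): one thread syncs on a send while the main thread syncs on the matching receive.

```ocaml
(* Minimal rendezvous: sync on a send in one thread, a receive in
   another. Build with OCaml's threads library. *)
open Event

let ch : string channel = new_channel ()

let () = ignore (Thread.create (fun () -> sync (send ch "hello")) ())

let msg = sync (receive ch)   (* blocks until the sender syncs *)
```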

  4. Simple version
Helper functions to define blocking sending/receiving
• Message sent when one thread sends and another receives
• One will block waiting for the other

let sendNow ch a = sync (send ch a) (* blocks *)
let recvNow ch = sync (receive ch)  (* blocks *)

Note: in SML, the CML book, etc.: send = sendEvt, receive = recvEvt, sendNow = send, recvNow = recv

  5. Example
Make a thread to handle changes to a bank account
• mkAcct returns 2 channels for talking to the thread
• More elegant/functional approach: loop-carried state

type action = Put of float | Get of float
type acct = action channel * float channel

let mkAcct () =
  let inCh  = new_channel () in
  let outCh = new_channel () in
  let bal = ref 0.0 in (* state *)
  let rec loop () =
    (match recvNow inCh with (* blocks *)
       Put f -> bal := !bal +. f
     | Get f -> bal := !bal -. f); (* allows overdraw *)
    sendNow outCh !bal;
    loop () in
  ignore (Thread.create loop ());
  (inCh, outCh)

  6. Example, continued
get and put functions use the channels

let get acct f =
  let inCh, outCh = acct in
  sendNow inCh (Get f);
  recvNow outCh

let put acct f =
  let inCh, outCh = acct in
  sendNow inCh (Put f);
  recvNow outCh

Outside the module, clients don't see threads or channels!
• Cannot break the communication protocol

type acct
val mkAcct : unit -> acct
val get : acct -> float -> float
val put : acct -> float -> float
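Pulling the account pieces together into one runnable sketch (assuming OCaml's Event and Thread modules; per the slides' semantics, Put adds to the balance and Get subtracts, and the server replies with the new balance):

```ocaml
(* One runnable file combining the sendNow/recvNow helpers and the
   account server. Uses OCaml's Event module, its version of CML. *)
open Event

let sendNow ch a = sync (send ch a)   (* blocks until received *)
let recvNow ch = sync (receive ch)    (* blocks until sent *)

type action = Put of float | Get of float
type acct = action channel * float channel

let mkAcct () : acct =
  let inCh  = new_channel () in
  let outCh = new_channel () in
  let bal = ref 0.0 in (* loop-carried via the closure's state *)
  let rec loop () =
    (match recvNow inCh with
     | Put f -> bal := !bal +. f
     | Get f -> bal := !bal -. f);   (* allows overdraw *)
    sendNow outCh !bal;
    loop () in
  ignore (Thread.create loop ());
  (inCh, outCh)

let get (inCh, outCh) f = sendNow inCh (Get f); recvNow outCh
let put (inCh, outCh) f = sendNow inCh (Put f); recvNow outCh

let a = mkAcct ()
let b1 = put a 10.0   (* server replies with the new balance: 10.0 *)
let b2 = get a 3.0    (* Get subtracts, so: 7.0 *)
let () = Printf.printf "%.1f %.1f\n" b1 b2
```

Build with the threads library (e.g. `ocamlfind ocamlopt -package threads.posix -linkpkg acct.ml`).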

  7. Key points
• We put the entire communication protocol behind an abstraction
  • The infinite-loop-as-server idiom works well
  • And naturally prevents races
  • Multiple requests implicitly queued by the CML implementation
• Don't think of threads like you're used to
  • “Very lightweight”
  • Asynchronous = spawn a thread to do synchronous
  • System should easily support 100,000 threads
  • Cost about as much space as an object plus “current stack”
  • Quite similar to “actors” in OOP
  • Cost no time when blocked on a channel
• Real example: a GUI where each widget is a thread

  8. Simpler example
• A stream is an infinite set of values
• Don't compute them until asked
• Again we could hide the channels and thread

let squares = new_channel ()
let rec loop i =
  sendNow squares (i * i);
  loop (i + 1)
let _ = Thread.create loop 1

let one  = recvNow squares
let four = recvNow squares
let nine = recvNow squares
…
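The slide notes we could hide the channel and thread; a sketch of one way to do that (mkSquares is a made-up name): return a thunk so clients just call a function to pull the next value.

```ocaml
(* Sketch: hiding the squares channel and thread behind a thunk.
   Uses OCaml's Event module; build with the threads library. *)
open Event

let mkSquares () : unit -> int =
  let ch = new_channel () in
  let rec loop i = sync (send ch (i * i)); loop (i + 1) in
  ignore (Thread.create loop 1);
  (fun () -> sync (receive ch))   (* clients see only this thunk *)

let next = mkSquares ()
let one  = next ()
let four = next ()
let nine = next ()
```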

  9. So far
• sendNow and recvNow allow synchronous message passing
• Abstraction lets us hide concurrency behind interfaces
• But these block until the rendezvous, which is insufficient for many important communication patterns
• Example: add : int channel -> int channel -> int
  • Must choose which to receive first, hurting performance or causing deadlock if the other is ready earlier
• Example: or : bool channel -> bool channel -> bool
  • Cannot short-circuit
• This is why we split out sync and have other primitives

  10. The cool stuff

type 'a event (* when sync'ed on, get an 'a *)
val send : 'a channel -> 'a -> unit event
val receive : 'a channel -> 'a event
val sync : 'a event -> 'a
val choose : 'a event list -> 'a event
val wrap : 'a event -> ('a -> 'b) -> 'b event

• choose: when synchronized on, block until one of the events occurs
• wrap: an event with the function as post-processing
  • Can wrap as many times as you want
• Note: skipping a couple of other key primitives (e.g., for timeouts)
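A small made-up example of choose and wrap together: block until either of two channels has a message, using wrap to tag which side won. (Names are hypothetical; assumes OCaml's Event module.)

```ocaml
(* choose blocks until one event can occur; wrap post-processes its
   result. Here only intCh has a sender, so that branch must win. *)
open Event

let intCh : int channel = new_channel ()
let strCh : string channel = new_channel ()

let either () =
  sync (choose [
    wrap (receive intCh) (fun i -> "int: " ^ string_of_int i);
    wrap (receive strCh) (fun s -> "str: " ^ s) ])

let () = ignore (Thread.create (fun () -> sync (send intCh 7)) ())
let r = either ()   (* "int: 7" *)
```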

  11. “And from or”
• choose seems great for “until one happens”
• But a little coding trick gets you “until all happen”
• Code below returns the answer on a third channel

let add in1 in2 out =
  let ans = sync (choose [
    wrap (receive in1) (fun i -> sync (receive in2) + i);
    wrap (receive in2) (fun i -> sync (receive in1) + i)]) in
  sync (send out ans)

• The 1st hw5 problem is a more straightforward use of choose/wrap

  12. Another example
• Not blocking in the case of inclusive or would take a little more cleverness
  • Spawn a thread to receive the second input (and ignore it)

(* or is an OCaml keyword, so call it or_ *)
let or_ in1 in2 out =
  let ans = sync (choose [
    wrap (receive in1) (fun b -> b || sync (receive in2));
    wrap (receive in2) (fun b -> b || sync (receive in1))]) in
  sync (send out ans)
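The “little more cleverness” the slide mentions can be sketched like this (a hypothetical variant, not from the slides, assuming OCaml's Event module): once one input arrives true, spawn a thread to receive and discard the other input, so its sender is not left blocked forever.

```ocaml
open Event

(* drain: receive and ignore one value so the other sender unblocks *)
let drain ch =
  ignore (Thread.create (fun () -> ignore (sync (receive ch))) ())

let or_ in1 in2 out =
  let ans = sync (choose [
    wrap (receive in1)
      (fun b -> if b then (drain in2; true) else sync (receive in2));
    wrap (receive in2)
      (fun b -> if b then (drain in1; true) else sync (receive in1)) ]) in
  sync (send out ans)
```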

  13. Circuits
If you're an electrical engineer:
• send and receive are ends of a gate
• wrap is combinational logic connected to a gate
• choose is a multiplexer (no control over which)
So after you wire something up, you sync to say “wait for communication from the outside”
And the abstract interfaces are related to composing circuits
If you're a UNIX hacker:
• UNIX select is “sync of choose”
• A pain that they can't be separated; you want to nest chooses

  14. Remaining comments
• The ability to build bigger events from smaller ones is very powerful
• Synchronous message passing, well, synchronizes
• Key by-design limitation is that CML supports only point-to-point communication
• By the way, Caml's implementation of CML is itself in terms of queues and locks
  • Works okay on a uniprocessor

  15. Where are we
• Thread creation
• Communication via shared memory
• Synchronization with join, locks
• Message passing a la Concurrent ML
• Back to shared memory for software transactions
  • And an important digression to memory-consistency models

  16. Atomic
An easier-to-use and harder-to-implement primitive:

// lock acquire/release
void deposit(int x){
  synchronized(this){
    int tmp = balance;
    tmp += x;
    balance = tmp;
  }
}

// (behave as if) no interleaved computation
void deposit(int x){
  atomic {
    int tmp = balance;
    tmp += x;
    balance = tmp;
  }
}

  17. Syntax irrelevant / versus TM
• In a language with higher-order functions, no need for a new statement form
  • atomic : (unit -> 'a) -> 'a
  • “Just a library” to the parser/type-checker
  • But not just a library to the compiler and run-time system
• Atomic blocks vs. transactional memory (TM)
  • One high-level language construct vs. one way to implement it
  • Neither necessarily needs the other (though they are commonly paired)
  • TM: start, sequence of reads/writes, end
  • Implicit conflict detection, abort, restart, atomic commit
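To see why atomic needs no new statement form, here is a sketch of a function with exactly the type the slide gives. Loud caveat: this version just takes one global mutex, so it gives mutual exclusion, not real optimistic TM with conflict detection and abort/restart; it only illustrates the "just a library to the type-checker" point.

```ocaml
(* Sketch: atomic as an ordinary function of type (unit -> 'a) -> 'a.
   A single global mutex, NOT transactional memory. *)
let global_lock = Mutex.create ()

let atomic (f : unit -> 'a) : 'a =
  Mutex.lock global_lock;
  match f () with
  | v -> Mutex.unlock global_lock; v
  | exception e -> Mutex.unlock global_lock; raise e

(* usage: the deposit example, in Caml clothing *)
let balance = ref 0
let deposit x =
  atomic (fun () -> let tmp = !balance in balance := tmp + x)
```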

  18. Viewpoints
Software transactions are good for:
• Software engineering (easier to avoid races & deadlocks)
• Performance (optimistic “no conflict” without locks)
Why they are good:
• Get parallelism unless there are actual run-time memory conflicts
• As easy as coarse-grained locks, but with the parallelism of fine-grained locks
• Push conflict detection/recovery to the language implementation
• Much like garbage collection: convenient but has costs
Shameless plug: The Transactional Memory / Garbage Collection Analogy (OOPSLA 2007)
• Hope to talk about it at the end of next week, time permitting
• TM-based implementations are super cool, but not this course

  19. Solving “tough” problems

synchronized int length() { … }
synchronized void getChars(…) { … }
synchronized void append(StringBuffer sb) {
  int len = sb.length();
  if (this.count + len > this.value.length)
    this.expand(…);
  sb.getChars(0, len, this.value, this.count);
  …
}

• Trivial to write append correctly in an atomic world
  • Even if the previous version didn't “expect” an append method
• And get all the parallelism you can reasonably expect

  20. Another tough problem
Operations on a double-ended queue:
  void enqueue_left(Object)
  void enqueue_right(Object)
  Object dequeue_left()
  Object dequeue_right()
Correctness:
• Behave like a queue, even when ≤ 2 elements
• Dequeuers wait if necessary, but can't “get lost”
Parallelism:
• Access both ends in parallel, except when ≤ 1 element (because the ends overlap)
Example thanks to Maurice Herlihy

  21. Good luck with that…
• One lock?
  • No parallelism
• Locks at each end?
  • Deadlock potential
  • Gets very complicated, etc.
• Waking blocked dequeuers?
  • Harder than it looks

  22. Actual solution
• A clean solution to this apparent “homework problem” would be a publishable result(!)
  • In fact it was: [Michael & Scott, PODC ’96]
• So locks and condition variables are not a “natural methodology” for this problem
• Implementation with transactions is trivial
  • Wrap the 4 operations, written sequentially, in atomic
  • With retry for dequeuing from an empty queue
  • Correct and parallel

  23. Not a panacea
• Over-sellers say barely-technically-accurate things like “deadlock is impossible”
• Really…

class Lock {
  bool b = false;
  void acquire() {
    while(true) {
      while(b) /* spin */;
      atomic {
        if(b) continue;
        b = true;
        return;
      }
    }
  }
  void release() { b = false; }
}

  24. Not a panacea
Problems “we're workin' on”:
• Abort/retry interacts poorly with “launch missiles” (irreversible effects)
• Many software TM implementations provide a weaker and under-specified semantics when there are transactional/non-transactional data races (a long and crucial story)
• Memory-consistency-model questions remain, and may be worse than with locks…

  25. Memory models
• A memory-consistency model (or just memory model) for a shared-memory language specifies “what write(s) a read can see”
• The gold standard is sequential consistency (Lamport): “the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program”

  26. Abusing SC
Assuming sequential consistency (SC), the assert below cannot fail
• Despite data races

initially x = 0, y = 0

Thread 1:        Thread 2:
x = 1;           r = y;
y = 1;           s = x;
                 assert(s >= r);

  27. You don't get SC
• Modern imperative and OO languages do not promise SC
  • (If they say anything at all)
• The hardware makes it prohibitively expensive
• Promising SC would render almost every compiler optimization unsound
  • Example: common-subexpression elimination

initially a = 0, b = 0

Thread 1:          Thread 2:
x = a + b;         b = 1;
y = a;             a = 1;
z = a + b;
assert(z >= y);

  28. Relaxed != nothing
• But (especially in a safe language) we have to promise something
• When is code “correctly synchronized”?
• What can the implementation do if the code is not “correctly synchronized”?
• The definitions are very complicated and programmers can usually ignore them, but do not assume SC

  29. Real languages
• Java: if every sequentially consistent execution of program P is data-race free, then every execution of program P is equivalent to some sequentially consistent execution
  • Not the definition; a theorem about the definition
  • The actual definition balances the needs of programmers, compilers, and hardware
  • Not defined in terms of “allowed optimizations”
  • Even bad code can't corrupt the SecurityManager
• C++ (proposed): roughly, any data race is as undefined as an array-bounds error. No such thing as a benign data race and no guarantees if you have one. (In practice, programmers will assume things, like they do with casts.)
• Many other languages: eerie silence

  30. Synchronization and ordering
• In relaxed memory models, synchronization operations typically impose ordering constraints
• Example: this code cannot violate the assertion

initially x = 0, y = 0

Thread 1:           Thread 2:
x = 1;              r = y;
sync(lk){}          sync(lk){}
y = 1;              s = x;
                    assert(s >= r);

• Recent research papers on what ordering constraints atomic blocks impose
  • Is an empty atomic block a no-op?

  31. Onto OOP
Now let's talk about object-oriented programming:
• What's different from what we have been doing
  • Boils down to one important thing
• How do we define it (will stay informal)
• Supporting extensibility
• Several other “advanced” topics

  32. OOP: the sales pitch
OOP lets you:
• Build extensible software concisely
• Exploit an intuitive analogy between interaction of physical entities and interaction of software pieces
It also:
• Raises tricky semantic and style issues that require careful investigation
• Is more complicated than functions
  • That does not necessarily mean it's worse

  33. So what is it?
OOP “looks like this”, but what is the essence?

class Pt1 extends Object {
  int x;
  int get_x() { x }
  unit set_x(int y) { self.x = y }
  int distance(Pt1 p) { p.get_x() - self.get_x() }
  constructor() { x = 0 }
}

class Pt2 extends Pt1 {
  int y;
  int get_y() { y }
  int get_x() { 34 + super.get_x() }
  constructor() { super(); y = 0 }
}

  34. Class-based OOP
In (pure) class-based OOP:
• Everything is an object
• Objects communicate via messages (handled by methods)
• Objects have their own state
• Every object is an instance of a class
• A class describes its instances' behavior
Why is this approach such a popular way to structure software?…

  35. OOP can mean many things
• An ADT (private fields)
• Inheritance: method/field extension, method override
• Implicit self/this
• Dynamic dispatch
• Subtyping
• All the above in one (class) definition
Design question: better to have small orthogonal features or one “do it all” feature?
Anyway, let's consider how “unique to OO” each is…

  36. OO for ADTs
Object/class members (fields, methods, constructors) often have visibilities
What code can invoke a member / access a field?
• Methods of the same object?
• Methods defined in the same class?
• Methods defined in a subclass?
• Methods in another “boundary” (package, assembly, friend, …)?
• Methods defined anywhere?

  37. Subtyping for hiding
• As seen before, can use upcasts to “hide” members
  • Modulo downcasts
  • Modulo binary-method problems
• With just classes, upcasting is limited
• With interfaces, can be more selective

interface I { int distance(Pt1 p); }
class Pt1 extends Object {
  …
  I f() { self }
  …
}

  38. Records of functions
If OOP = functions + private state, we already have it
• But OOP is more (e.g., inheritance)

type pt1 = { get_x    : unit -> int;
             set_x    : int -> unit;
             distance : pt1 -> int }

let pt1_constructor () =
  let x = ref 0 in
  let rec self =
    { get_x    = (fun () -> !x);
      set_x    = (fun y -> x := y);
      distance = (fun p -> p.get_x () + self.get_x ()) } in
  self
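To make the encoding concrete, here is the record-of-functions sketch as a self-contained file plus a short use (the two instances p and q are made up for illustration):

```ocaml
(* Self-contained: the record-of-functions "object" plus a use.
   distance follows the slide: p.get_x () + self.get_x (). *)
type pt1 = { get_x    : unit -> int;
             set_x    : int -> unit;
             distance : pt1 -> int }

let pt1_constructor () =
  let x = ref 0 in                      (* private state per object *)
  let rec self =
    { get_x    = (fun () -> !x);
      set_x    = (fun y -> x := y);
      distance = (fun p -> p.get_x () + self.get_x ()) } in
  self

let p = pt1_constructor ()
let q = pt1_constructor ()
let () = p.set_x 3
let () = q.set_x 4
let d = p.distance q   (* 4 + 3 = 7: each object has its own state *)
```

Note the `let rec self = { … }` ties the knot so distance can mention self, which is exactly the point the later slides revisit.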

  39. Subtyping
Most class-based OO languages purposely “confuse” classes & types
• If C is a class, then C is a type
• If C extends D (via declaration) then C ≤ D
• Subtyping is reflexive and transitive
Novel subtyping?
• New members in C: just width subtyping
• “Nominal” (by name) instead of structural
• What about override…

  40. Subtyping, continued
• If C extends D, overriding m, what do we need?
  • Arguments contravariant (assume less)
  • Result covariant (provide more)
• Many “real” languages are more restrictive
  • Often in favor of static overloading
• Some languages try to be more flexible
  • At the expense of run-time checks/casts
Good thing we studied this in a simpler setting

  41. Inheritance & override
Subclasses:
• inherit the superclass's members
• can override methods
• can use super calls
Can we code this up in Caml?
• Not exactly, because of field-name reuse and lack of subtyping
• But ignoring that, we can get close…

  42. Almost OOP?

let pt1_constructor () =
  let x = ref 0 in
  let rec self =
    { get_x    = (fun () -> !x);
      set_x    = (fun y -> x := y);
      distance = (fun p -> p.get_x () + self.get_x ()) } in
  self

(* note: field-name reuse precludes type-checking *)
let pt2_constructor () = (* "extends Pt1" *)
  let r = pt1_constructor () in
  let y = ref 0 in
  { get_x    = (fun () -> 34 + r.get_x ());
    set_x    = r.set_x;
    distance = r.distance;
    get_y    = (fun () -> !y) }

  43. Problems
Small problems:
• Have to change pt2_constructor whenever pt1_constructor changes
  • But OOP languages have tons of “fragile base class” issues too
  • Motivates C#'s versioning support
• No direct access to the “private fields” of the superclass
Big problem:
• The distance method in a pt2 doesn't behave how it does in OOP
• We do not have late binding of self (i.e., dynamic dispatch)

  44. The essence
Claim: class-based objects are:
• So-so ADTs
• Same-old record and function subtyping
• Some syntactic sugar for extension and override
• A fundamentally different rule for what self maps to in the environment

  45. More on late binding
Late binding, dynamic dispatch, and open recursion are all related issues (nearly synonyms)
Simplest example I know:

let c1 () =
  let rec r =
    { even = (fun i -> i = 0  || r.odd (i-1));
      odd  = (fun i -> i <> 0 && r.even (i-1)) } in
  r

let c2 () =
  let r1 = c1 () in
  { even = r1.even; (* still O(n) *)
    odd  = (fun i -> i mod 2 = 1) }

  46. More on late binding
Late binding, dynamic dispatch, and open recursion are all related issues (nearly synonyms)
Simplest example I know:

class C1 {
  bool even(int i) { i == 0 || odd(i-1) }
  bool odd(int i)  { i != 0 && even(i-1) }
}
class C2 extends C1 { /* even is now O(1) */
  bool odd(int i) { i % 2 == 1 }
}

  47. The big debate
Open recursion:
• Code reuse: improve even by just changing odd
• Superclass has to do less extensibility planning
Closed recursion:
• Code abuse: break even by just breaking odd
• Superclass has to do more abstraction planning
Reality: both have proved very useful; should probably just argue over “the right default”
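One way to see the debate in Caml terms (a sketch with made-up names, not from the slides): make self an explicit argument to each method and tie the knot at the call site. This recovers the late binding that slide 45's closed encoding lacked, so overriding odd really does change even.

```ocaml
(* Late binding by abstracting over self: each "method" takes the
   record it should dispatch through as an extra argument. *)
type eo = { even : eo -> int -> bool;
            odd  : eo -> int -> bool }

let c1 = { even = (fun self i -> i = 0  || self.odd self (i-1));
           odd  = (fun self i -> i <> 0 && self.even self (i-1)) }

(* "override" only odd; even is reused unchanged, yet now reaches the
   new odd because every call dispatches through self *)
let c2 = { c1 with odd = (fun _ i -> i mod 2 = 1) }

let b1 = c1.even c1 10   (* O(n) mutual recursion: true *)
let b2 = c2.even c2 10   (* one step, then the O(1) odd: true *)
let b3 = c2.even c2 7    (* false *)
```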

  48. Our plan
• Dynamic dispatch is the essence of OOP
• How can we define/implement dynamic dispatch?
  • Basics, not super-optimized versions (see CSE P501)
• How do we use/misuse overriding?
• Why are subtyping and subclassing separate concepts worth keeping separate?

  49. Defining dispatch
Methods “compile down” to functions taking self as an extra argument
• Just need self bound to “the right thing”
Approach #1:
• Each object has one “code pointer” per method
• For new C() where C extends D:
  • Start with the code pointers for D (recursive definition!)
  • If C adds m, add a code pointer for m
  • If C overrides m, change the code pointer for m
• self bound to the (whole) object in the method body

  50. Defining dispatch
Methods “compile down” to functions taking self as an extra argument
• Just need self bound to “the right thing”
Approach #2:
• Each object has one run-time tag
• For new C() where C extends D:
  • Tag is C
• self bound to the object
• A method call to m reads the tag and looks up (tag, m) in a global table
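A tiny sketch of Approach #2 (all names hypothetical): objects carry a tag, and calls go through a global (tag, method-name) table. For simplicity the table is pre-flattened, so "C inherits from D" just means copying D's entries before installing C's overrides; the override mirrors slide 33's Pt2 adding 34 in get_x.

```ocaml
(* Dispatch via run-time tags and a global method table, where each
   table entry is the method body taking self as its extra argument. *)
type tag = Pt1 | Pt2
type obj = { tag : tag; mutable x : int }

let table : (tag * string, obj -> int) Hashtbl.t = Hashtbl.create 16

let () =
  Hashtbl.replace table (Pt1, "get_x") (fun self -> self.x);
  (* Pt2 overrides get_x; a pre-flattened table holds the override *)
  Hashtbl.replace table (Pt2, "get_x") (fun self -> 34 + self.x)

(* a method call reads the receiver's tag and looks up (tag, m) *)
let invoke (o : obj) (m : string) = (Hashtbl.find table (o.tag, m)) o

let r1 = invoke { tag = Pt1; x = 5 } "get_x"   (* 5 *)
let r2 = invoke { tag = Pt2; x = 5 } "get_x"   (* 39 *)
```

Approach #1 instead stores the code pointers in the object itself; either way, the method body runs with self bound to the receiver, which is the late binding the earlier slides identified as the essence.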
