Operating Systems CMPSC 473 Processes (5) September 22, 2008 - Lecture 11 Instructor: Bhuvan Urgaonkar
Announcements • Quiz 1 will be out tonight and due in a week • Suggested reading: Chapter 4 of SGG • If you want to do more work/learn more things • Get in touch with us • We can provide more work in your projects • Honors credits? • Honors thesis? • Just for fun?! • Impress me and get good letters when you apply to grad school?
Overview of Process-related Topics • How a process is born (Done) • Parent/child relationship • fork, clone, … • How it leads its life • Loaded: Later in the course • Executed • CPU scheduling • Context switching • Where a process “lives”: Address space (Done) • OS maintains some info. for each process: PCB • Process = Address Space + PCB • How processes request services from the OS (Done) • System calls • How processes communicate (Partially done) • Some variants of processes: LWPs and threads (Start today) • How processes die
The notion of a thread • Roughly, a flow of execution that is a basic unit of CPU utilization • E.g., a process, a KCP • Note: this is not showing an address space (Fig. 4.1 from SGG) • [Figure: a single-threaded process, with code, data, and files sections plus one set of registers and one stack for the thread]
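To make this concrete, here is a minimal sketch (not from the slides) of starting a second flow of execution inside one process with POSIX threads; the worker function and its message are purely illustrative.

```c
#include <pthread.h>
#include <stdio.h>

/* A second flow of execution: it shares the process's code, data,
 * and open files, but gets its own registers and stack. */
static void *worker(void *arg)
{
    printf("worker thread says: %s\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t tid;

    /* Create the thread, then wait for it to finish. */
    if (pthread_create(&tid, NULL, worker, "hello from a thread") != 0) {
        perror("pthread_create");
        return 1;
    }
    pthread_join(tid, NULL);
    return 0;
}
```

Compile with `gcc -pthread`; until pthread_create is called, the process is exactly the single-threaded picture described in the figure above.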
Multi-process Applications • Many applications need to do multiple activities simultaneously • E.g., a Web browser • Parse the requested URL and find its IP address from a DNS server • Use system calls to send the request to some Web server • Receive the response • Assemble the response and display it • Can you give another example? • Approach #1: Write a multi-process application as follows: • Fork off multiple processes, each responsible for a certain “flow of execution” • Programmer’s choice/decision • Employ IPC mechanisms for these processes to communicate (coming up soon) • We already know about signals; how many have used pipes? Shared memory? • Employ synchronization (coming up in a few lectures) • We would like these “flows of execution” (and not just the initiating process or the entire application) to be the basic unit across which the CPU is partitioned (the schedulable entity) • Why? • What about resources other than the CPU? (Will discuss this again for VMM and I/O) • The OS design we have studied so far already achieves this
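A rough sketch of Approach #1 under POSIX (not from the slides): the parent forks a child to handle one hypothetical “flow of execution”, say receiving a response, and a pipe serves as the IPC mechanism. The division of work shown here is illustrative.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fds[2];                     /* fds[0]: read end, fds[1]: write end */
    if (pipe(fds) == -1) {
        perror("pipe");
        exit(1);
    }

    pid_t pid = fork();             /* child = one "flow of execution" */
    if (pid == -1) {
        perror("fork");
        exit(1);
    }

    if (pid == 0) {                 /* child: pretend it received a response */
        close(fds[0]);
        const char *msg = "response bytes";
        write(fds[1], msg, strlen(msg) + 1);
        close(fds[1]);
        _exit(0);
    }

    /* Parent: a separate flow of execution that consumes the data. */
    close(fds[1]);
    char buf[64];
    if (read(fds[0], buf, sizeof buf) > 0)
        printf("parent got: %s\n", buf);
    close(fds[0]);
    wait(NULL);                     /* reap the child */
    return 0;
}
```

Each fork gives the new flow its own full address space, which is exactly the redundancy the next slide complains about.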
Approach #1: Writing a multi-process application • E.g., a Web browser • What’s wrong with (or lacking in) this approach to programming? • Hint: Approach #1 has performance problems, although it is great for the programmer (why?) • Potentially a lot of redundancy in the code and data segments! • Virtual memory wastage => More contention for precious RAM => More work for the memory manager => Reduction in the computer’s throughput • [Figure: four separate address spaces in virtual memory, each with its own code, data, files, heap, registers, and stack: a URL parsing process, a network sending process, a network reception process, and a process that interprets the response, composes media together, and displays it on the browser screen]
Approach #2: Share code, data, files! • E.g., a Web browser • Share code, data, and files (mmapped) via shared memory mechanisms (coming up) • Burden on the programmer • Better yet, let the kernel or a user library handle sharing of these parts of the address spaces and let the programmer deal with synchronization issues • User-level and kernel-level threads • [Figure: one set of code, data, files, and heap shared in virtual memory, with separate registers and stacks for the URL parsing, network sending, network reception, and response interpretation/display flows]
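One way to arrange the sharing Approach #2 describes is an anonymous shared mapping set up before fork. This is a sketch under the assumption of POSIX mmap on Linux; the burden of setting it up (and synchronizing access) stays with the programmer, and the single shared integer is illustrative.

```c
#include <stdio.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* A shared, anonymous mapping: unlike ordinary memory (which becomes
     * copy-on-write private after fork), both processes see the same page. */
    int *shared = mmap(NULL, sizeof *shared, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (shared == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    *shared = 0;

    if (fork() == 0) {              /* child writes into the shared page */
        *shared = 42;
        _exit(0);
    }

    wait(NULL);                     /* crude synchronization: wait for the child */
    printf("parent sees %d\n", *shared);    /* prints 42 */
    munmap(shared, sizeof *shared);
    return 0;
}
```

A real browser would map the code and data it actually wants to share, not one integer; the point is only that the sharing must be arranged explicitly.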
Approach #3: User or kernel support to automatically share code, data, files! • E.g., a Web browser • Share code, data, and files (mmapped) via shared memory mechanisms (coming up) • Burden on the programmer • Better yet, let the kernel or a user library handle sharing of these parts of the address spaces and let the programmer deal only with synchronization issues • User-level and kernel-level threads • [Figure: the same shared code, data, files, and heap, but the four flows are now threads within one process, each with its own registers and stack]
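With thread support the sharing is automatic: all threads of a process see the same code, globals, heap, and open files, and only registers and stacks are per-thread. A minimal POSIX threads sketch (names are illustrative; the mutex anticipates the synchronization lectures still to come):

```c
#include <pthread.h>
#include <stdio.h>

static int shared_counter = 0;                      /* lives in the shared data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *flow(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);                  /* synchronization: coming up */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, flow, NULL);    /* four flows, one address space */
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    printf("counter = %d\n", shared_counter);       /* 400000: every thread saw the same variable */
    return 0;
}
```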
Multi-threading Models • User-level thread libraries • E.g., the one provided with Project 1 • Implementation: You are expected to gain this understanding as you work on Project 1 • Pop quiz: Context switch overhead smaller. Why? • What other overheads are reduced? Creation? Removal? • Kernel-level threads • There must exist some relationship between user threads and kernel threads • Why? • Which is better?
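One intuition for the pop quiz: a user-level thread switch is essentially a save/restore of registers and a stack pointer in user space, with no kernel scheduling decision. A toy sketch using the POSIX ucontext API (illustrative only; a real library such as the one in Project 1 adds a scheduler, many stacks, and cleanup):

```c
#include <stdio.h>
#include <stdlib.h>
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;

static void green_thread(void)
{
    printf("user-level thread running\n");
    /* Returning resumes uc_link, i.e., main_ctx. */
}

int main(void)
{
    char *stack = malloc(64 * 1024);        /* the user-level thread's private stack */

    getcontext(&thread_ctx);                /* start from the current context */
    thread_ctx.uc_stack.ss_sp = stack;
    thread_ctx.uc_stack.ss_size = 64 * 1024;
    thread_ctx.uc_link = &main_ctx;         /* where to go when green_thread returns */
    makecontext(&thread_ctx, green_thread, 0);

    /* Save main's registers/stack and jump to the thread: conceptually a
     * user-space context switch; the kernel's scheduler is not involved. */
    swapcontext(&main_ctx, &thread_ctx);

    printf("back in main\n");
    free(stack);
    return 0;
}
```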
Multi-threading Models: Many-to-one Model • Thread management done by a user library • Context switching, creation, removal, etc. are efficient (if designed well) • A blocking call blocks the entire process • No parallelism on multiprocessors. Why? • Green threads library on Solaris • [Figure: many user threads mapped onto a single kernel thread]
Multi-threading Models: One-to-one Model • Each user-level thread is mapped to one kernel-level thread • Allows more concurrency • If one thread blocks, another ready thread can run • Can exploit parallelism on multiprocessors • Popular: Linux, several Windows versions (NT, 2000, XP) • [Figure: each user thread mapped to its own kernel thread]
Multi-threading Models: Many-to-many Model • # user-level threads >= # kernel-level threads • Best of both previous approaches? • [Figure: many user threads multiplexed over a smaller or equal number of kernel threads]