
CS 345 Operating Systems


Presentation Transcript


    1. BYU CS 345 Computer Systems
       CS 345 – Operating Systems, Fall 2010
       Section 001: 1:00 – 1:50 pm MWF
       Section 002: 2:00 – 2:50 pm MWF
       Instructor: Paul Roper
       Office: TMCB 3370, 422-8149
       Email: proper@cs.byu.edu
       Office Hours: 9:30 – 10:50 am MWF

    2. Topics to Cover…
       - OS objectives
       - OS services
       - Resource manager
       - Evolution
       - Achievements
       - Processes
       - Memory management
       - Information protection and security
       - Scheduling and resource management
       - System architecture

    3. Processes and Threads – Chapter 4, Part II

    4. Symmetric Multiprocessing
       - The kernel can execute on any processor.
       - Typically each processor does self-scheduling from the pool of available processes or threads (timer interrupt, ready queue).
       - SMP support: any thread (including kernel threads) can run on any processor.
       - Soft affinity – try to reschedule a thread on the same processor.
       - Hard affinity – restrict a thread to certain processors.
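The self-scheduling idea above can be sketched as follows. This is an illustrative model, not a real kernel API: each "processor" is a worker thread that pulls the next ready task from a shared ready queue whenever it becomes free (the names `ready_queue` and `processor` are assumptions for the sketch).

```python
import queue
import threading

ready_queue = queue.Queue()          # shared pool of ready tasks
results = []
results_lock = threading.Lock()

def processor(cpu_id):
    """One 'CPU': repeatedly self-schedules work from the shared queue."""
    while True:
        try:
            task = ready_queue.get_nowait()   # grab the next ready task
        except queue.Empty:
            return                            # no work left for this CPU
        with results_lock:
            results.append((cpu_id, task))

for n in range(10):
    ready_queue.put(f"task-{n}")

cpus = [threading.Thread(target=processor, args=(i,)) for i in range(4)]
for c in cpus:
    c.start()
for c in cpus:
    c.join()

print(len(results))  # every task ran, on whichever CPU was free first
```

Which CPU runs which task is nondeterministic — exactly the property that soft and hard affinity constrain in a real SMP scheduler.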

    5. Symmetric Multiprocessor Organization

    6. SMP Organization
       - Generally each processor has its own cache; processors share memory and I/O.
       - Design issues:
         - Simultaneous concurrent processes or threads – kernel routines must be reentrant to allow for multiple threads.
         - Scheduling (Chap 10) – must avoid conflicts; may be able to run threads concurrently.
         - Synchronization (Chap 5) – mutual exclusion, event ordering.
         - Memory management (Chap 7, 8) – deal with multiport memory; have a unified paging scheme.
         - Reliability and fault tolerance – solutions similar to the single-processor case.

    7. Microkernels
       - Popularized by their use in the Mach OS.
       - Monolithic OS – built as a single large program; any routine can call any other routine. Used in most early systems.
       - Layered OS – based on modular programming, but major changes still had widespread effects on other layers.
       - Microkernel – only essential functions remain in the kernel. File systems, device drivers, etc. become external subsystems/processes that interact through messages passed through the kernel.

    8. Microkernel
       - Identify and isolate a small operating system core that contains only essential OS functions.
       - Move many services included in the traditional kernel to external subsystems:
         - device drivers
         - file systems
         - virtual memory manager
         - windowing system and security services
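A minimal sketch of the structure described above, under the assumption of a toy design (none of these classes or names come from a real microkernel): the "kernel" only routes messages between registered user-level servers, while services such as the file system live entirely outside it.

```python
class Kernel:
    """Toy kernel: its only job is routing messages between servers."""
    def __init__(self):
        self.servers = {}                    # server name -> handler

    def register(self, name, handler):
        self.servers[name] = handler

    def send(self, dest, message):
        # All interaction between processes goes through the kernel
        # as a message; the kernel itself implements no services.
        return self.servers[dest](message)

kernel = Kernel()

# A file "server" running outside the kernel as an ordinary process.
files = {"motd": "hello"}
def file_server(msg):
    op, name = msg
    if op == "read":
        return files.get(name)
    return None

kernel.register("fs", file_server)

# A client never calls the file system directly -- only via messages.
print(kernel.send("fs", ("read", "motd")))
```

Swapping in a different file server (or running several at once) requires no kernel change, which is the extensibility benefit claimed for microkernels.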

    9. Microkernel Design
       - Primitive memory management – the kernel handles the virtual→physical mapping; the rest is a user-mode process.
         - The VM module can decide what pages to move to/from disk and can allocate memory.
       - Three microkernel memory operations:
         - Grant – grant pages to another process (the granter gives up access to the pages).
         - Map – map pages into another address space (both can access the pages).
         - Flush – reclaim pages that were granted or mapped.
       - Interprocess communication – based on messages.
       - I/O and interrupts – handle interrupts as messages.
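The three memory operations can be modeled compactly. This is an illustrative model only (not real microkernel calls): each address space is just the set of page numbers it may access.

```python
# Each address space is a set of accessible page numbers.
spaces = {"pager": {1, 2, 3, 4}, "app": set()}

def grant(src, dst, pages):
    # Granter gives up access; the receiver gains it.
    spaces[src] -= pages
    spaces[dst] |= pages

def map_pages(src, dst, pages):
    # Both spaces can access mapped pages (src keeps them).
    spaces[dst] |= pages & spaces[src]

def flush(owner, pages):
    # Owner reclaims pages it granted or mapped to other spaces.
    for name in spaces:
        if name != owner:
            spaces[name] -= pages

grant("pager", "app", {1})       # app now owns page 1; pager gave it up
map_pages("pager", "app", {2})   # both can now access page 2
print(spaces["app"])             # {1, 2}
flush("pager", {2})              # pager reclaims page 2
print(spaces["app"])             # {1}
```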

    10. Microkernel OS

    11. Microkernel Benefits
       - Uniform interface – the same message mechanism serves user and system services.
       - Extensibility – easy to add new services; modifications need only change directly affected components; could have multiple file services.
       - Flexibility – can customize the system by omitting services.
       - Portability – nearly all processor-specific code is isolated in the kernel, and changes tend to be in logical areas.

    12. Microkernel Benefits (continued…)
       - Reliability – easy to rigorously test the kernel; fewer system calls to master; less interaction with other components.
       - Distributed system support – sending a message to another machine is as easy as sending one to this machine. Requires system-wide unique IDs, but processes don't have to know where a service resides.
       - Object-oriented OS – lends discipline to the kernel; some systems (e.g., Windows NT) incorporate OO principles into the design.

    13. Kernel Performance
       - Sending a message is generally slower than a simple kernel call; the cost depends on the size of the microkernel.
       - First-generation systems were slower, so designers tried moving critical system items back into the kernel (Mach): fewer user/system mode switches, but some microkernel benefits are lost.
       - A newer approach keeps the kernel very small: L4 is 12K of code with 7 system calls, and its speed seems to match Unix.

    14. Windows 2000 Threads
       - Thread states:
         - Ready – able to run
         - Standby – scheduled to run
         - Running
         - Waiting – blocked or suspended
         - Transition – not blocked, but can't run (e.g., paged out of memory)
         - Terminated
       - Support for OS subsystems – process creation begins with a request from an application, goes to a protected subsystem, and is passed to the executive, which returns a handle. Win32 and OS/2 use the handle to create a thread and return process/thread information.
       - In Windows 2000, a client requests a new thread, and the thread inherits limits, etc. from its parent.
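The states above form a small state machine. The transition table below is an illustrative simplification of the Windows 2000 thread lifecycle (the real set of transitions is richer than this sketch):

```python
# Allowed (from_state, to_state) pairs for the simplified lifecycle.
TRANSITIONS = {
    ("ready", "standby"),        # picked by the scheduler
    ("standby", "running"),      # dispatched to a processor
    ("running", "ready"),        # preempted
    ("running", "waiting"),      # blocked or suspended
    ("waiting", "transition"),   # unblocked, but stack paged out
    ("waiting", "ready"),        # unblocked and in memory
    ("transition", "ready"),     # paged back in
    ("running", "terminated"),
}

def step(state, new_state):
    """Advance the thread one state, rejecting illegal transitions."""
    if (state, new_state) not in TRANSITIONS:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

s = "ready"
for nxt in ("standby", "running", "waiting", "transition", "ready"):
    s = step(s, nxt)
print(s)  # back to ready after a block-and-page-out round trip
```

Note that a thread cannot go straight from Ready to Running: it must pass through Standby, the "scheduled to run" state.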

    15. Solaris Threads
       - Four thread-related concepts:
         - Process – a normal Unix process
         - User-level thread (ULT) – implemented by a thread library
         - Lightweight process (LWP) – the mapping between ULTs and kernel threads
         - Kernel thread – the fundamental kernel scheduling object, also used for system functions

    16. Linux Threads
       - A task structure is maintained for each process/thread:
         - state (executing, ready, zombie, etc.)
         - scheduling information
         - process, user, and group identifiers
         - interprocess communication info
         - links to parent, siblings, and children
         - timers (time used, interval timer)
         - file system – pointers to open files
         - virtual memory
         - processor-specific context
       - Threads are implemented as processes that share files, virtual memory, signals, etc.
       - The "clone" system call creates a thread; the <pthread.h> library provides more user-friendly thread support.
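The key consequence of sharing virtual memory is that all threads see the same variables. In C this is what `<pthread.h>` gives you; the sketch below uses Python's `threading` module to show the same idea: several threads update one shared counter, so a lock is required.

```python
import threading

counter = 0                      # lives in the one shared address space
lock = threading.Lock()

def worker(increments):
    global counter
    for _ in range(increments):
        with lock:               # mutual exclusion on the shared variable
            counter += 1

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 4000: every thread updated the same memory
```

Separate processes without shared memory would each see their own copy of `counter`; it is the shared address space that makes both the cooperation and the need for locking possible.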

    17. Chapter 10 – Multiprocessor Scheduling

    18. Classifications of Multiprocessors
       - Loosely coupled multiprocessor – each processor has its own memory and I/O channels.
       - Functionally specialized processors – e.g., an I/O processor controlled by a master processor.
       - Tightly coupled multiprocessor – processors share main memory and are controlled by the operating system.

    19. Synchronization Granularity

    20. Independent Parallelism
       - Separate processes running with no synchronization; time sharing is an example.
       - Average response time to users is lower, and it is more cost-effective than a distributed system.

    21. Very Coarse Parallelism
       - Distributed processing across network nodes to form a single computing environment.
       - In general, any collection of concurrent processes that need to communicate or synchronize can benefit from a multiprocessor architecture.
       - Good when interaction is infrequent, since network overhead slows down communication.

    22. Coarse Parallelism
       - Similar to running many processes on one processor, except the work is spread across more processors.
       - True concurrency and synchronization: multiprocessing.

    23. Medium Parallelism
       - Parallel processing or multitasking within a single application.
       - A single application is a collection of threads, which usually interact frequently.

    24. Fine-Grained Parallelism
       - A much more complex use of parallelism than is found in the use of threads.
       - Very specialized and fragmented approaches.

    25. Assigning Processes to Processors
       - How are processes/threads assigned to processors?
       - Static assignment – a process remains with one processor from activation until completion.
         - Advantages: a dedicated short-term queue for each processor; less scheduling overhead; allows for group or gang scheduling.
         - Disadvantages: one or more processors can sit idle while others are backlogged; load balancing is difficult; context transfers are costly.

    26. Assigning Processes to Processors
       - Who handles the assignment?
       - Master/slave – a single processor handles OS functions and is responsible for scheduling jobs. It tends to become a bottleneck, and failure of the master brings the system down.
       - Peer – the OS can run on any processor. The operating system is more complicated, so simple schemes are generally used.
       - Overhead is a greater problem, threads add additional concerns, and CPU utilization is not always the primary factor.

    27. Process Scheduling
       - A single queue for all processes; multiple queues are used for priorities.
       - All queues feed the common pool of processors.
       - The specific scheduling discipline is less important with more than one processor.
       - A simple FCFS discipline, or FCFS within a static priority scheme, may suffice for a multiple-processor system.
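The FCFS-over-a-common-pool idea can be sketched as a small simulation (illustrative only; `fcfs_schedule` and its return format are assumptions, not from the text): jobs are taken in arrival order and each goes to whichever processor frees up first.

```python
import heapq

def fcfs_schedule(burst_times, n_cpus):
    """Assign jobs FCFS to a pool of CPUs.

    Returns a list of (job index, start time, cpu id)."""
    # Min-heap of (time this CPU becomes free, cpu id).
    cpus = [(0, c) for c in range(n_cpus)]
    heapq.heapify(cpus)
    schedule = []
    for job, burst in enumerate(burst_times):
        free_at, cpu = heapq.heappop(cpus)     # earliest-free processor
        schedule.append((job, free_at, cpu))
        heapq.heappush(cpus, (free_at + burst, cpu))
    return schedule

# Four jobs with bursts 5, 3, 2, 4 on two processors.
print(fcfs_schedule([5, 3, 2, 4], n_cpus=2))
```

With several processors draining one queue, even this naive discipline keeps all CPUs busy whenever work exists, which is why elaborate disciplines matter less here than on a uniprocessor.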

    28. Thread Scheduling
       - A thread executes separately from the rest of its process.
       - An application can be a set of threads that cooperate and execute concurrently in the same address space.
       - Running threads on separate processors can yield a dramatic gain in performance.
       - However, applications requiring significant interaction among threads may see a significant performance impact with multiprocessing.

    29. Multiprocessor Thread Scheduling
       - Load sharing – processes are not assigned to a particular processor.
       - Gang scheduling – a set of related threads is scheduled to run on a set of processors at the same time.
       - Dedicated processor assignment – threads are assigned to a specific processor.
       - Dynamic scheduling – the number of threads can be altered during the course of execution.

    30. Load Sharing
       - Load is distributed evenly across the processors: an idle processor selects the next thread from a global queue.
       - Avoids idle processors, requires no centralized scheduler, and is widely used.
       - Queue disciplines: FCFS; smallest number of threads first; preemptive smallest number of threads first.

    31. Disadvantages of Load Sharing
       - The central queue needs mutual exclusion and may become a bottleneck when more than one processor looks for work at the same time.
       - Preempted threads are unlikely to resume execution on the same processor, so cache use is less efficient.
       - If all threads go through the global queue, all threads of a program will not gain access to the processors at the same time.

    32. Gang Scheduling
       - Schedule related threads to run on a set of processors at the same time.
       - Useful for applications where performance degrades severely when any part of the application is not running, since threads often need to synchronize with each other.
       - Interacting threads are more likely to be running and ready to interact.
       - Less scheduling overhead, since multiple processors are scheduled at once; however, processors must be allocated to gangs.
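A sketch of the gang-scheduling idea (an illustrative model; `gang_schedule` and the round-robin policy are assumptions for the example): in each time slice, every thread of one application runs simultaneously, one thread per processor, and leftover processors sit idle.

```python
def gang_schedule(gangs, n_cpus, slices):
    """gangs: {name: thread count}. Returns, per time slice, the gang
    name occupying each CPU (None = idle)."""
    timeline = []
    names = list(gangs)
    for t in range(slices):
        gang = names[t % len(names)]          # round-robin among gangs
        threads = gangs[gang]
        # All of the gang's threads run together this slice; extra
        # processors go idle -- the allocation cost noted above.
        assignment = [gang if cpu < threads else None
                      for cpu in range(n_cpus)]
        timeline.append(assignment)
    return timeline

# Gang A has 4 threads, gang B has 2, on a 4-CPU machine.
timeline = gang_schedule({"A": 4, "B": 2}, n_cpus=4, slices=2)
print(timeline)  # slice 0: A fills all 4 CPUs; slice 1: B on 2, 2 idle
```

The idle CPUs in B's slice show the trade-off: synchronizing threads always find their partners running, at the cost of wasted processors when a gang is smaller than the machine.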

    33. Dedicated Processor Assignment
       - When an application is scheduled, each of its threads is assigned to a processor.
       - Advantage: avoids process switching.
       - Disadvantage: some processors may be idle.
       - Works best when the number of threads equals the number of processors.

    34. Dynamic Scheduling
       - The number of threads in a process is altered dynamically by the application, and the operating system adjusts the load to improve utilization:
         - Assign idle processors first.
         - New arrivals may be given a processor taken from a job currently using more than one processor.
         - Otherwise, hold the request until a processor becomes available.
         - New arrivals are given a processor before existing running applications.
