Topic 6: Resource Management


Presentation Transcript


  1. Topic 6: Resource Management

  2. 6.1.1 Identify resources that need to be managed within a computer system
  These are the resources mentioned in the guide:
  • cache
  • primary memory
  • secondary storage
  • disk storage
  • processor speed
  • sound processor
  • graphics processor
  • bandwidth
  • screen resolution
  • network connectivity
  These can be grouped as:
  • Storage
  • Processing
  • I/O (Input/Output)

  3. Why do resources need to be managed at all?
  • Let's say you have two jobs to do: cooking a meal and washing some clothes.
  • You could do them like this:
  • Put the clothes into the washing machine
  • Wait while they get washed
  • Hang the clothes up
  • Start the food cooking
  • Wait till it cooks
  • Serve the food
  • But it would save time to do this:
  • Put the clothes into the washing machine
  • Start the food cooking
  • Hang the clothes up
  • Serve the food
  Computer scientists worked out very early on (i.e. in the 1960s) that the most efficient way to use a computer is to get it to do several things at the same time. This concept was originally called multi-programming. This idea, together with a set of other strategies for sharing the CPU, is now called multi-tasking.
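  The washing/cooking idea can be sketched with two Java threads: one thread stands in for the washing machine while the main thread "cooks". This is only an illustration of overlapping work, with invented class name, task names and timings, not how an operating system actually schedules processes.

    // A minimal sketch of the washing/cooking idea using two Java threads.
    // Task names and timings are invented for illustration only.
    public class Overlap {
        public static void main(String[] args) throws InterruptedException {
            Thread washing = new Thread(() -> {
                System.out.println("Washing machine started");
                sleep(3000);                       // stands in for the long wash cycle
                System.out.println("Washing done");
            });
            washing.start();                       // the wash runs "in the background"

            System.out.println("Cooking started"); // meanwhile, the main thread cooks
            sleep(2000);                           // stands in for cooking time
            System.out.println("Food served");

            washing.join();                        // wait for the wash before finishing
        }

        private static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }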

  4. Multi-tasking
  • As soon as we have a multi-tasking system, we have the problem of how to share one set of resources among a group of running programs.
  • How best to do this is the subject of a lot of research. There is no single best answer, but there are a lot of different strategies. Here are the main ones:
  • Multiple CPUs ("cores"), e.g. dual core, quad core, graphics processor, etc.
  • Time-slicing
  • Prioritisation
  • Polling
  • Interrupts
  • Blocking
  • Swapping
  A component of the operating system called the scheduler is responsible for deciding which programs get CPU time and when. There are all sorts of ways of doing it, some of which you will learn more about in your Computer System Architecture and Operating Systems courses at university. Check here for more details: http://en.wikipedia.org/wiki/Scheduling_(computing)

  5. Important Concepts
  • Multiple CPUs ("cores"), e.g. dual core, quad core, graphics processor, etc.: It is obvious that more CPUs will give greater processing power, but an extra layer of complexity is introduced in deciding which core should be used when. Another idea is to dedicate resources to a particular function. Graphics is a common one, with modern high-performance gaming computers dedicating extra processors and RAM for use by the graphics card alone.
  • Time-slicing: This is the idea that n running programs each get one nth of the available processor time. This works if all running programs are as demanding of CPU time as each other, but this is seldom the case. (A toy round-robin sketch follows this slide.)
  • Prioritisation: This is the concept that some running processes can be treated as more important than others, and so they get more CPU time.
  • Polling: This is used by the CPU to find out if a program needs CPU time. Essentially the CPU keeps asking the program (or hardware device) over and over again. Polling and interrupting are alternative methods of achieving the same end and are dealt with separately on another slide.
  • Interrupts: Instead of the CPU continually polling a process to see if it needs CPU time, it is left up to the process to "interrupt" the CPU and tell it that it needs CPU time.
  • Blocking: This is a method by which a program can declare itself unable to proceed until some condition is met, i.e. until a resource (e.g. the hard disk) has become available or some input has been provided by the user.
  • Swapping: A blocked process can be "swapped out" of memory by the OS and its state saved to disk. When it is ready to be resumed, the OS can swap it back in and start running it again. This ensures that memory is not wasted.
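  Here is the promised minimal sketch of time-slicing as a toy round-robin scheduler. Each "process" is just a counter of remaining work; the slice length, process ids and amounts of work are invented, and a real scheduler saves and restores machine state rather than decrementing counters.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Toy round-robin scheduler: each turn, the process at the front of the
    // ready queue gets a fixed slice, then goes to the back if unfinished.
    public class RoundRobin {
        public static void main(String[] args) {
            Queue<int[]> ready = new ArrayDeque<>();   // each entry: {id, work remaining}
            ready.add(new int[]{1, 7});
            ready.add(new int[]{2, 3});
            ready.add(new int[]{3, 5});
            int slice = 2;                             // the fixed time slice

            while (!ready.isEmpty()) {
                int[] p = ready.poll();
                int run = Math.min(slice, p[1]);
                p[1] -= run;
                System.out.println("Process " + p[0] + " runs for " + run + " units, " + p[1] + " left");
                if (p[1] > 0) ready.add(p);            // not finished: back of the queue
            }
        }
    }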

  6. The Problem of I/O
  • Consider the following program:

    import java.util.Scanner;

    public class NameLength {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            System.out.println("Enter your name:");
            String userName = in.nextLine();
            while (!userName.equalsIgnoreCase("x")) {
                System.out.println("Your name has " + userName.length() + " letters.");
                System.out.println("Enter your name:");
                userName = in.nextLine();
            }
        }
    }

  • What does this program spend most of its time doing?
  • The fact is, this program spends the VAST majority of its time waiting for input at the calls to in.nextLine().
  • In all computer systems, waiting for I/O (e.g. reading from or writing to disk, sending or receiving data on a network, etc.) takes orders of magnitude longer than the execution of other program instructions.
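  One rough way to see where the time goes is to time the blocking call itself, assuming the input arrives at human typing speed. The class and variable names are invented for this sketch.

    import java.util.Scanner;

    // Wrap the blocking read and the "real work" in System.nanoTime() measurements
    // to compare how long each takes.
    public class WhereTimeGoes {
        public static void main(String[] args) {
            Scanner in = new Scanner(System.in);
            System.out.println("Enter your name:");
            long before = System.nanoTime();
            String userName = in.nextLine();           // the program is blocked here
            long waited = System.nanoTime() - before;

            long start = System.nanoTime();
            int letters = userName.length();           // the "real work"
            long computed = System.nanoTime() - start;

            System.out.println("Waited for input: " + waited / 1_000_000 + " ms");
            System.out.println("Computed length (" + letters + "): " + computed + " ns");
        }
    }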

  7. Data transfer between the CPU and hardware devices
  • CPUs can process data much faster than hardware devices (or people) can.
  • Imagine having a conversation with someone who only says one word per minute, and who can only listen to what you're saying if you say it at the same slow speed.
  • Very quickly you will find that you are spending the vast majority of your time sitting waiting.
  • You might decide it's easier if she writes down her message on a piece of paper, very slowly, while you go off and do something else. You can then come back later, quickly read the message, quickly write a reply and then go off and do something else again, while she takes ages reading it and writing her reply.
  • This is precisely what happens when the CPU talks to a hardware device.
  • The piece of paper on which you write and receive your notes to and from the hardware device is called a buffer. A buffer allows the CPU to queue up a meaningful amount of work each time it communicates with a hardware device.
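  A small sketch of the "piece of paper" idea using Java's BufferedWriter, which collects writes in memory and hands them to the slow device in larger chunks. The file name is an assumption for the example.

    import java.io.BufferedWriter;
    import java.io.FileWriter;
    import java.io.IOException;

    // Writes go into an in-memory buffer and reach the (slow) disk in larger chunks.
    public class BufferedOutput {
        public static void main(String[] args) throws IOException {
            try (BufferedWriter out = new BufferedWriter(new FileWriter("notes.txt"))) {
                for (int i = 0; i < 1000; i++) {
                    out.write("line " + i);   // goes into the buffer, not straight to disk
                    out.newLine();
                }
            }                                  // closing flushes whatever is left in the buffer
        }
    }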

  8. I/O and Hardware
  • One of the main reasons I/O is so slow is that it involves hardware, i.e. actually moving stuff in the physical world.
  • That could be the read head on a disk drive, or the moving parts in a printer.
  • Moving these things is much, much slower than the speed at which electrical impulses travel around a silicon chip.
  • You can think of the time wasted by I/O as being something like having a Skype chat by snail mail. It takes you seconds to write your message, but ages to get a reply.

  9. Solving the I/O problem: Blocking
  • It clearly makes sense to do something else while you're waiting for your snail mail reply, if possible. (The alternative is known as "busy waiting", i.e. not doing anything, but not being able to yield to another process either, and is clearly undesirable.)
  • A program that is waiting for I/O and can't do anything until it arrives is said to be blocked on I/O.
  • The OS detects this, swaps it out of memory and gets on with other tasks.
  • But how does the OS know when your snail mail reply has arrived?
  • Two options:
  • It can keep checking for it (polling)
  • It can have some sort of alert system that tells it (interrupt)
  It is possible simultaneously for process A to be blocked on a resource held by process B, and process B to be blocked on a resource held by process A. This situation is known as deadlock, and operating systems employ a variety of algorithms to detect and/or prevent it. (A minimal two-thread example follows this slide.)
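  Here is the promised minimal deadlock sketch: two threads each take one lock and then wait for the lock the other holds. The lock names (a "disk" and a "printer") and timings are invented purely for illustration; run it and it will usually hang.

    // Two threads acquire the same two locks in opposite orders, so each ends up
    // waiting forever for the lock the other thread holds.
    public class DeadlockDemo {
        private static final Object diskLock = new Object();
        private static final Object printerLock = new Object();

        public static void main(String[] args) {
            new Thread(() -> {
                synchronized (diskLock) {
                    sleep(100);                       // give the other thread time to grab printerLock
                    synchronized (printerLock) {      // blocked: B holds printerLock
                        System.out.println("A has both");
                    }
                }
            }).start();
            new Thread(() -> {
                synchronized (printerLock) {
                    sleep(100);
                    synchronized (diskLock) {         // blocked: A holds diskLock
                        System.out.println("B has both");
                    }
                }
            }).start();
        }

        private static void sleep(long ms) {
            try { Thread.sleep(ms); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
    }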

  10. Hardware Interrupts
  • An interrupt is a signal that stops the CPU and forces it to do something else immediately. The interrupt does this without waiting for the current program to finish. It is unconditional and immediate, which is why it is called an interrupt. The whole point of an interrupt is that the main program can perform a task without worrying about an external event.
  • Programs cause these interrupts constantly. These are called software interrupts.
  • Some hardware can interrupt the CPU in this way. This is called a hardware interrupt.
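  A hardware interrupt cannot be demonstrated directly from Java, but Thread.interrupt() gives a rough analogy, if we accept a sleeping thread as a stand-in for a CPU waiting on an event: the waiter is signalled immediately rather than checking for itself. Names and timings are invented.

    // The worker waits; the "event" arrives later and interrupts it straight away.
    public class InterruptAnalogy {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                try {
                    Thread.sleep(60_000);              // "waiting for an event"
                    System.out.println("Woke up normally");
                } catch (InterruptedException e) {
                    System.out.println("Interrupted: handle the event now");
                }
            });
            worker.start();
            Thread.sleep(500);                         // the "event" happens half a second later
            worker.interrupt();                        // signal the worker immediately
            worker.join();
        }
    }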

  11. Polling
  • If the piece of hardware cannot interrupt the CPU, then the CPU has to keep checking with the hardware to see if it has finished.
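  A sketch of polling in Java: the main loop keeps checking a flag that a background thread (standing in for the hardware device) eventually sets. All names are invented, and the busy loop deliberately wastes CPU time, which is exactly the drawback discussed on the next slide.

    // The main thread polls a volatile flag until the "device" thread sets it.
    public class PollingDemo {
        private static volatile boolean dataReady = false;

        public static void main(String[] args) {
            new Thread(() -> {
                try { Thread.sleep(1000); } catch (InterruptedException e) { return; }
                dataReady = true;                      // the "device" finishes its work
            }).start();

            int checks = 0;
            while (!dataReady) {                       // the CPU keeps asking, over and over
                checks++;
            }
            System.out.println("Data ready after " + checks + " checks");
        }
    }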

  12. Interrupts vs Polling
  • Interrupts save CPU time because the CPU doesn't have to keep checking.
  • But too many interrupts can slow the CPU down.
  • Polling is easy to implement because the hardware doesn't need to be able to do anything special.
  • Polling gives the CPU more control over what it does.
  • Polling wastes CPU time.
  • Verdict: Almost all hardware devices use interrupts where possible.

  13. Task
  • Think of a normal household telephone.
  • Think of what happens when someone calls.
  • How do you find out that someone wants to talk?
  • Is this analogous to an interrupt or polling?
  • Once you have decided which strategy this is analogous to, interrupt or polling, describe what a telephone would be like if it used the other strategy.

  14. A third way: direct memory access
  • Because sending and receiving data to and from hardware peripherals is slow, the CPU often has to waste its time either:
  • polling the device to see if it wants to read from or write to RAM, or
  • being interrupted by the device whenever it wants to read from or write to RAM.
  • Direct memory access (DMA) allows the hardware device to bypass the CPU and access RAM directly, to save or retrieve the data it needs.
  • Instead of the CPU having to be involved in the exchange of data, a DMA controller (a bit like a mini-CPU dedicated to the task) coordinates the exchange.
  • This frees up the CPU.
  • (An interrupt will still be used to notify the CPU that the peripheral has finished its task, but no interrupts will have been necessary during data transfer.)

  15. Multi-user environments
  • Many operating systems, especially server operating systems, support multi-user environments.
  • The OS divides its time and resources up between users, just as it does between programs.
  • The OS must manage each user's data and memory space, as well as each process's data and memory space, to ensure that it is secure from access by other users or processes.

  16. Memory Management
  • Multi-tasking environment: keeping the memory space of each process safe from other running processes
  • Multi-user environment: keeping the memory space (primary and secondary) of each user safe from other users
  • Allocating and deallocating memory for each process
  • Paging: dividing virtual memory up into equal-sized blocks (pages)
  • Paging allows OSs to allocate non-contiguous chunks of memory to the same process, thus reducing fragmentation problems. (A small arithmetic sketch follows this slide.)
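  The paging arithmetic can be sketched in a few lines: with a fixed page size, a virtual address splits into a page number and an offset within that page. The 4 KiB page size and the example address are assumptions for illustration.

    // Split a virtual address into page number and offset for a fixed page size.
    public class PagingArithmetic {
        public static void main(String[] args) {
            final int PAGE_SIZE = 4096;                     // 4 KiB pages (assumed)
            long virtualAddress = 19_355;                   // an arbitrary example address

            long pageNumber = virtualAddress / PAGE_SIZE;   // which page the address is in
            long offset = virtualAddress % PAGE_SIZE;       // where within that page

            System.out.println("Address " + virtualAddress +
                    " -> page " + pageNumber + ", offset " + offset);
        }
    }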

  17. Memory Management: Virtual Memory
  • Virtual memory: the use of secondary memory as if it were primary memory
  • The OS makes it easier for programs to reference memory, because they don't need to worry about the complications of the underlying physical structure of memory and disk

  18. Types of Operating System
  • Single user, single task: Early computers used to be like this. You would write your program on punch cards and book time on the computer to run it. Users would have to queue up to use the computer. If your program generated an error, you would have to come back next week! Modern examples of single-user, single-tasking OSs are Palm OS and the operating systems of early iPhones and iPads. Mobile phones are slowly developing multi-tasking capability, though.
  • Single user, multi-tasking: A basic standalone home PC has one user who can run lots of different programs at the same time, e.g. Mac OS or Windows 7.
  • Multi-user: A network operating system, such as the one at school, in which multiple users can run multiple programs simultaneously, e.g. Windows Server 2012.

  19. Operating System Virtualization
  • Virtualization is the process of making the interface to different types of hardware or software seem uniform, thereby simplifying its use.
  • Virtual memory is an example. The OS presents a simple uniform list of bytes that a running program can access, but behind the scenes the program's data may be stored all over the place: in cache, in RAM, on disk, or on a network.
  • OSs may also virtualize storage, presenting drives as a homogeneous set of letters, when in fact some may be USB drives, some may be hard disks and some may be DVD-ROMs.
  • Dropbox is a good example of virtualization. It's a sort of virtual folder. It looks and behaves like any other folder, but behind the scenes it is quite different.
  • Virtualization is all about hiding the complexity of the system. It is similar to the concept of abstraction in programming.
