
I/O Management and Disk Scheduling



Presentation Transcript


  1. I/O Management and Disk Scheduling

  2. I/O Devices

     Device               Purpose           Partner   Data rate (KB/sec)
     Keyboard             Input             Human           .01
     Mouse                Input             Human           .02
     Voice input          Input             Human           .02
     Scanner              Input             Human       200
     Voice output         Output            Human           .06
     Line printer         Output            Human         1
     Laser printer        Output            Human       100
     Graphics display     Output            Human    30,000
     CPU to frame buffer  Output            Human       200
     Network terminal     Input or output   Machine         .05
     Network-LAN          Input or output   Machine     200
     Optical disk         Storage           Machine     500
     Magnetic tape        Storage           Machine   2,000
     Magnetic disk        Storage           Machine   2,000

  3. I/O Techniques

                                        No interrupts     Interrupts
     I/O-to-memory transfer
     through the processor              Programmed I/O    Interrupt-driven I/O
     Direct I/O-to-memory transfer                        Direct memory access (DMA)

     • DMA unit uses the bus to transfer data to/from memory
       • when the CPU is not using it, or
       • by forcing the CPU to temporarily suspend (cycle stealing)
     • I/O channels
       • selector
       • multiplexor
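The programmed-I/O row of the matrix above can be illustrated with a toy polling loop: the CPU itself moves every byte and busy-waits on a device status flag. This is a minimal sketch; `MockDevice` and its methods are assumptions for illustration, not a real driver API.

```python
# Toy model of programmed I/O: the CPU polls a status flag and copies
# each byte itself (no interrupts, no DMA).
class MockDevice:
    def __init__(self, data):
        self._data = list(data)

    def ready(self):
        """Status register: is a byte available?"""
        return bool(self._data)

    def read_byte(self):
        """Data register: deliver the next byte."""
        return self._data.pop(0)

def programmed_io_read(device, count):
    buf = []
    while len(buf) < count:
        while not device.ready():   # busy-wait: CPU does no useful work here
            pass
        buf.append(device.read_byte())
    return buf
```

Interrupt-driven I/O removes the busy-wait (the device signals readiness), and DMA removes the per-byte CPU copy as well.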

  4. I/O Organization

     [Figure: three layered models of I/O organization, each running from
     user processes down to the hardware]
     • Local peripheral device: User processes → Logical I/O → Device I/O →
       Scheduling & control → Hardware
     • Communications (remote) architecture: User processes → Communication
       architecture → Device I/O → Scheduling & control → Hardware
     • File system: User processes → File system → Physical organization →
       Device I/O → Scheduling & control → Hardware

  5. I/O Buffering

     • Why buffer?
       • I/O is too slow
       • it is not possible to swap out a whole process waiting on I/O
       • risk of single-process deadlock
       • the page involved in the I/O would have to be locked in memory
     • I/O devices are:
       • block-oriented: disks and other storage devices
       • stream-oriented: terminals, printers, com ports, mouse
     • Types of buffers (a block, a line, or a byte at a time):
       • single buffer
       • double buffer
       • circular buffer

  6. Disk Concepts: Review

     See http://home.ubalt.edu/abento/751/6iomgmt/os0606.html

  7. Disk Performance Parameters

     • Seek time: time to move the disk arm to a track
       • Ts = m × n + s (m is a constant per disk, n the number of tracks
         traversed, s the startup time)
     • Rotational delay: time waiting for a given sector to align with the head
       • hard disks: ≈ 3,600 rpm, average ≈ 8.3 msec
       • diskettes: 300-600 rpm, average ≈ 50-100 msec
     • Access time = seek time + rotational delay
     • Transfer time: time to spin the record under the head (to read or write)
       • T = b / (r × N) (b is the number of bytes to transfer, r the rotation
         speed in revolutions per second, N the number of bytes on a track)
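The formulas above translate directly into code. This is a hedged sketch of the slide's parameters; the numeric values in the test are illustrative assumptions, except that half a revolution at 3,600 rpm works out to the 8.3 msec quoted on the slide.

```python
def seek_time(m, n, s):
    """Ts = m*n + s: m is a per-disk constant, n the number of tracks
    traversed, s the startup time (all times in seconds)."""
    return m * n + s

def avg_rotational_delay(rpm):
    """On average the target sector is half a revolution away."""
    return 0.5 * (60.0 / rpm)

def transfer_time(b, rpm, bytes_per_track):
    """T = b / (r*N), with r converted to revolutions per second."""
    r = rpm / 60.0
    return b / (r * bytes_per_track)
```

For example, `avg_rotational_delay(3600)` gives about 0.0083 s (8.3 msec), and at 300 rpm it gives 0.1 s (100 msec), matching the ranges on the slide.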

  8. Disk Scheduling Policies

     The OS maintains a queue of requests for each I/O device.

     Name         Description                       Characteristic
     -- Selection according to requestor --
     RSS          random scheduling                 for analysis only
     FIFO         first-in, first-out               fairest of them all
     PRI          priority by process               control not based on disk
                                                    queue management
     LIFO         last-in, first-out                maximizes locality and
                                                    resource utilization
     -- Selection according to requested item --
     SSTF         shortest service time first       high utilization, small queues
     SCAN         back and forth over the disk      better service distribution
     C-SCAN       one way with fast return          lower service variability
     N-step-SCAN  SCAN of N records at a time       service guarantee
     FSCAN        N-step-SCAN with N = queue size   load sensitive
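Three of the policies in the table can be compared on a small workload by measuring total head movement. The sketch below uses the classic textbook request queue (head at cylinder 53); SCAN is implemented here in the LOOK variant that reverses at the last pending request rather than at the disk edge, an assumption worth noting.

```python
def total_movement(order, head):
    """Sum of head travel when requests are served in the given order."""
    total = 0
    for r in order:
        total += abs(r - head)
        head = r
    return total

def fifo(requests, head):
    """Serve requests in arrival order."""
    return list(requests)

def sstf(requests, head):
    """Shortest service time first: greedily pick the nearest request."""
    pending, order = list(requests), []
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

def scan(requests, head):
    """Elevator (LOOK variant): sweep upward, then reverse."""
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down
```

On the workload `[98, 183, 37, 122, 14, 124, 65, 67]` with the head at 53, FIFO travels 640 cylinders, SSTF 236, and SCAN 299, showing why FIFO is "fairest" but costly and SSTF keeps queues small.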

  9. Disk Cache

     • a buffer in main memory for disk sectors
     • locality of reference is what makes the cache score hits
     • read-ahead and delayed write-back
     • data delivery from the cache:
       • move the data from the cache to user memory, or
       • shared memory and pointers: pass a pointer, do not move the data
     • Replacement strategy (similar to page replacement):
       • LRU (least recently used)
       • LFU (least frequently used)
       • FBR (frequency-based replacement)
       • hits need not be physically moved to the top of the queue;
         pointers are adjusted instead
     • Empirically, misses and hits are a function of the cache size
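The LRU policy listed above can be sketched with `collections.OrderedDict`, which keeps entries in recency order. The class name, the hit/miss counters, and the backing-store callback are illustrative assumptions, not part of the slides.

```python
from collections import OrderedDict

# Toy LRU disk cache: sector numbers map to data blocks; the least
# recently used entry is evicted when the cache is full.
class LRUDiskCache:
    def __init__(self, capacity, read_from_disk):
        self._cache = OrderedDict()
        self._capacity = capacity
        self._read_from_disk = read_from_disk  # callback to the backing store
        self.hits = 0
        self.misses = 0

    def read_sector(self, sector):
        if sector in self._cache:
            self.hits += 1
            self._cache.move_to_end(sector)      # mark as most recently used
        else:
            self.misses += 1
            if len(self._cache) >= self._capacity:
                self._cache.popitem(last=False)  # evict least recently used
            self._cache[sector] = self._read_from_disk(sector)
        return self._cache[sector]
```

Note the "pointers, not data movement" bullet: `move_to_end` only relinks an entry in the recency order; the cached block itself is never copied.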
