INF1060: Introduction to Operating Systems and Data Communication. Operating Systems: Storage: Disks & File Systems. Pål Halvorsen, 5/10-2005
Overview • Disks • Disk scheduling • Memory caching • File systems
Disks • Disks ... • are used to provide persistent storage • are cheaper than main memory • have more capacity • are orders of magnitude slower • Two resources of importance • storage space • I/O bandwidth • Because... • ...there is a large speed mismatch compared to main memory (ms vs. ns, a factor of about 10^6, and this gap still increases) • ...disk I/O is often the main performance bottleneck ...we must look closer at how to manage disks
Mechanics of Disks • Spindle: the axis around which the platters rotate • Platters: circular plates covered with magnetic material to provide nonvolatile storage of bits • Tracks: concentric circles on a single platter • Sectors: segments of the track circle, usually 512 bytes each, separated by non-magnetic gaps; the gaps are often used to identify the beginning of a sector • Cylinders: corresponding tracks on the different platters are said to form a cylinder • Disk heads: read or alter the magnetism (bits) passing under them; the heads are attached to an arm enabling them to move across the platter surface
Disk Specifications • Some existing (Seagate) disks today: • Note 1: disk manufacturers usually denote GB as 10^9 bytes, whereas computer quantities often are powers of 2, i.e., GB as 2^30 bytes • Note 2: there is a difference between internal and formatted transfer rate. Internal is only between platter and head. Formatted is after the signals interfere with the electronics (cabling loss, interference, retransmissions, checksums, etc.) • Note 3: there is usually a trade-off between speed and capacity
Disk Capacity • The size (storage space) of the disk depends on • the number of platters • whether the platters use one or both sides • the number of tracks per surface • the (average) number of sectors per track • the number of bytes per sector • Example (Cheetah X15): • 4 platters using both sides: 8 surfaces • 18497 tracks per surface • 617 sectors per track (average) • 512 bytes per sector • Total capacity = 8 x 18497 x 617 x 512 ≈ 4.6 x 10^10 bytes ≈ 42.8 GB • Formatted capacity = 36.7 GB Note: there is a difference between formatted and total capacity. Some of the capacity is used for storing checksums, spare tracks, gaps, etc.
Disk Access Time • How do we retrieve data from disk? • position the head over the cylinder (track) on which the block (consisting of one or more sectors) is located • read or write the data block as the sectors move under the head when the platters rotate • The time between the moment a disk request is issued and the time the block is resident in memory is called disk latency or disk access time
Disk Access Time (figure: a request "I want block X" goes to the disk – platter, head, arm – and block x ends up in memory) Disk access time = Seek time + Rotational delay + Transfer time + Other delays
Disk Access Time: Seek Time • Seek time is the time to position the head • the heads require a minimum amount of time to start and stop moving • some time is used for actually moving the head – roughly proportional to the number of cylinders traveled • Time to move head: fixed overhead + seek time constant x number of tracks • "Typical" averages: 10 ms – 40 ms (older disks), 7.4 ms (Barracuda 180), 5.7 ms (Cheetah 36), 3.6 ms (Cheetah X15) (figure: time vs. cylinders traveled, 1 to N; moving one cylinder takes ~x, while crossing all N cylinders takes only ~10x – 20x)
Disk Access Time: Rotational Delay • Time for the disk platters to rotate so that the first of the required sectors is under the disk head • Average delay is 1/2 revolution. "Typical" averages: 8.33 ms (3,600 RPM), 5.56 ms (5,400 RPM), 4.17 ms (7,200 RPM), 3.00 ms (10,000 RPM), 2.00 ms (15,000 RPM)
Disk Access Time: Transfer Time • Time for data to be read by the disk head, i.e., the time it takes the sectors of the requested block to rotate under the head • Transfer rate = amount of data per track / time per rotation • Transfer time = amount of data to read / transfer rate • Example – Barracuda 180: 406 KB per track x 7,200 RPM ≈ 47.58 MB/s • Example – Cheetah X15: 316 KB per track x 15,000 RPM ≈ 77.15 MB/s • Transfer time is dependent on data density and rotation speed • If we have to change track, time must also be added for moving the head Note: one might achieve these transfer rates reading continuously on disk, but time must be added for seeks, etc.
Disk Access Time: Other Delays • There are several other factors which might introduce additional delays: • CPU time to issue and process I/O • contention for controller • contention for bus • contention for memory • verifying block correctness with checksums (retransmissions) • waiting in the scheduling queue • ... • Typical values: "0" (except perhaps for waiting in the queue)
Writing and Modifying Blocks • A write operation is analogous to a read operation • must add time for block allocation • a complication occurs if the write operation has to be verified – must wait another rotation and then read the block to see if it is the block we wanted to write • Total write time ≈ read time (+ time for one rotation) • A modification operation is similar to read and write operations • cannot modify a block directly: • read block into main memory • modify the block • write new content back to disk • (verify the write operation) • Total modify time ≈ read time (+ time to modify) + write time
Disk Controllers • To manage the different parts of the disk, we use a disk controller, which is a small processor capable of: • controlling the actuator moving the head to the desired track • selecting which platter and surface to use • knowing when the right sector is under the head • transferring data between main memory and disk • New controllers act like small computers themselves • both disk and controller now have their own buffers, reducing disk access time • data on damaged disk blocks/sectors is just moved to spare areas on the disk – the system above (OS) does not know this, i.e., a block may lie elsewhere than the OS thinks
Efficient Secondary Storage Usage • Must take into account the use of secondary storage • there are large access time gaps, i.e., a disk access will probably dominate the total execution time • there may be huge performance improvements if we reduce the number of disk accesses • a “slow” algorithm with few disk accesses will probably outperform a “fast” algorithm with many disk accesses • Several ways to optimize ..... • block size - 4 KB • file management / data placement - various • disk scheduling - SCAN derivate • multiple disks - a specific RAID level • prefetching - read-ahead prefetching • memory caching / replacement algorithms - LRU variant • …
Disk Scheduling • Seek time is a dominant factor of total disk I/O time • Let the operating system or disk controller choose which request to serve next depending on the head's current position and the requested block's position on disk (disk scheduling) • Note that disk scheduling ≠ CPU scheduling • a mechanical device – hard to determine (accurate) access times • disk accesses cannot/should not be preempted – they run until they finish • disk I/O is often the main performance bottleneck • General goals • short response time • high overall throughput • fairness (equal probability for all blocks to be accessed in the same time) • Tradeoff: seek and rotational delay vs. maximum response time
Disk Scheduling • Several traditional algorithms • First-Come-First-Serve (FCFS) • Shortest Seek Time First (SSTF) • SCAN (and variations) • Look (and variations) • … • A LOT of different algorithms exist depending on expected access pattern
First–Come–First–Serve (FCFS) FCFS serves the first arriving request first: • long seeks • "short" average response time incoming requests (in order of arrival, denoted by cylinder number): 12, 14, 2, 7, 21, 8, 24 (figure: head position over cylinders 1–25 vs. time as the scheduling queue is served in arrival order)
SCAN SCAN (elevator) moves the head from edge to edge and serves requests on the way: • bi-directional • compromise between response time and seek time optimizations • several optimizations: C-SCAN, LOOK, C-LOOK, … incoming requests (in order of arrival): 12, 14, 2, 7, 21, 8, 24 (figure: head position over cylinders 1–25 vs. time as the scheduling queue is served in sweep order)
SCAN vs. FCFS incoming requests (in order of arrival): 12, 14, 2, 7, 21, 8, 24 • Disk scheduling makes a difference! • In this case, we see that SCAN requires much less head movement compared to FCFS (here 37 vs. 75 tracks) (figure: head position over cylinders 1–25 vs. time for FCFS and for SCAN)
Modern Disk Scheduling • Disks used to be simple devices, and disk scheduling used to be performed by the OS (file system or device driver) only… • … but new disks are more complex • they hide their true layout, e.g., • only logical block numbers are visible • different numbers of surfaces, cylinders, sectors, etc. • they transparently move blocks to spare cylinders • e.g., due to bad disk blocks • they have different zones • constant angular velocity (CAV) disks: constant rotation speed, equal amount of data in each track, thus constant transfer time • zoned CAV disks: constant rotation speed, zones are ranges of tracks (typically few zones), the different zones hold different amounts of data, i.e., more data on outer tracks, thus variable transfer time (NB! illustration of transfer time, not rotation speed) • the head accelerates – most algorithms assume a linear movement overhead • on-device buffer caches may use read-ahead prefetching • they are "smart", with a built-in low-level scheduler (usually a SCAN derivative) • we cannot fully control the device (black box) • the OS could (should?) focus on high-level scheduling only (figures: OS view vs. real view of the disk layout)
Data Path (Intel Hub Architecture) (figure: a request from the application passes through the file system or communication system, via the memory controller hub and I/O controller hub, to the disk or network card over the PCI slots; the Pentium 4 processor with its registers and cache(s), and the RDRAM memory modules, sit on the memory side)
Buffer Caching • How do we manage a cache? • how much memory to use? • how much data to prefetch? • which data item to replace? • how do we perform lookups quickly? • … (figure: caching is possible in the application, the file system, and the communication system; going down to the disk or network card is expensive)
Buffer Caching: Windows XP • The I/O manager performs caching • centralized facility for all components (not only file data) • I/O request processing: • I/O request from process • I/O manager forwards to cache manager • if in cache: • cache manager locates the data and copies it to the process buffer via the VMM • VMM notifies the process • if on disk: • cache manager generates a page fault • VMM makes a non-cached service request • I/O manager makes a request to the file system • file system forwards to disk • disk finds data and reads it into the cache • cache manager copies the data to the process buffer via the VMM • VMM notifies the process (figure: process, and in the kernel: I/O manager, virtual memory manager (VMM), cache manager, file system drivers, disk drivers)
Buffer Caching: Linux / Unix • The file system performs caching • caches disk data (blocks) only • applications may hint on caching decisions • prefetching • I/O request processing: • I/O request from process • virtual file system forwards to local file system • local file system finds the requested block number • requests the block from the buffer cache • data located… • … in cache: return buffer memory address • … on disk: make a request to the disk driver; data is found on disk and transferred to a buffer; return buffer memory address • file system copies data to the process buffer • process is notified (figure: process, and in the kernel: virtual file system, local file systems such as FAT32 (Windows), HFS (Macintosh) and Linux ext2fs, buffer cache, disk drivers)
Files?? • A file is a collection of data – often for a specific purpose • unstructured files, e.g., Unix and Windows • structured files, e.g., MacOS (to some extent) and MVS • In this course, we consider unstructured files • for the operating system, a file is only a sequence of bytes • it is up to the application/user to interpret the meaning of the bytes • simpler file systems
File Systems • File systems organize data in files and manage access regardless of device type, e.g.: • file management – providing mechanisms for files to be stored, referenced, shared, secured, … • auxiliary storage management – allocating space for files on secondary storage • file integrity mechanisms – ensuring that information is not corrupted, intended content only • access methods – provide methods to access stored data • …
Organizing Files - Directories • A system usually has a large number of different files • To organize and quickly locate files, file systems use directories • a directory contains no user data itself • it is a file containing names and locations of other files • several types • single-level (flat) directory structure • hierarchical directory structure
Single-level Directory Systems (figure: a root directory containing four files) • CP/M • microcomputers • single-user system • VM • host computers • "minidisks": one partition per user
Hierarchical Directory Systems • Tree structure • nodes = directories, root node = root directory • leaves = files • Directories • stored on disk • have attributes just like files • subdirectories need names • To access a file • must test all directories in the path for • existence • being a directory • permissions • similar tests on the file itself
Hierarchical Directory Systems • Windows: one tree per partition or device (figure: separate trees rooted at \ for device C and device D) Complete filename example: C:\WinNT\EXPLORER.EXE
Hierarchical Directory Systems • Unix: a single acyclic graph spanning several devices (figure: one tree rooted at /, with a cdrom device mounted into it) Complete filename example: /cdrom/doc/Howto
INF1060: Introduction to Operating Systems and Data Communication. Operating Systems: Storage: Disks & File Systems (cont'd). Pål Halvorsen, 19/10-2005
File & Directory Operations • File: create, delete, open, close, read, write, append, seek, get/set attributes, rename, link, unlink, … • Directory: create, delete, opendir, closedir, readdir, rename, link, unlink, …
Example: open(), read() and close() #include <stdio.h> #include <stdlib.h> #include <fcntl.h> #include <unistd.h> #define BUFSIZE 4096 /* buffer size, chosen for illustration */ int main(void) { int fd, n; char buffer[BUFSIZE]; char *buf = buffer; if ((fd = open("my.file", O_RDONLY, 0)) == -1) { printf("Cannot open my.file!\n"); exit(1); /* EXIT_FAILURE */ } while ((n = read(fd, buf, BUFSIZE)) > 0) { /* <<USE DATA IN BUFFER>> */ } close(fd); exit(0); /* EXIT_SUCCESS */ }
open() BSD example open(name, mode, perm) – system call handling as described earlier Operating System: • sys_open() → vn_open(): • check if valid call • allocate file descriptor • if the file exists, open it for read; otherwise, create a new file. Must get the directory inode. May require disk I/O. • set access rights, flags and pointer to vnode • return the index (fd) into the file descriptor table
Example: open(), read() and close() #include <stdio.h> #include <stdlib.h> #include <fcntl.h> #include <unistd.h> #define BUFSIZE 4096 /* buffer size, chosen for illustration */ int main(void) { int fd, n; char buffer[BUFSIZE]; char *buf = buffer; if ((fd = open("my.file", O_RDONLY, 0)) == -1) { printf("Cannot open my.file!\n"); exit(1); /* EXIT_FAILURE */ } while ((n = read(fd, buf, BUFSIZE)) > 0) { /* <<USE DATA IN BUFFER>> */ } close(fd); exit(0); /* EXIT_SUCCESS */ }
read() BSD example read(fd, *buf, len) – system call handling as described earlier Operating System: • sys_read() → dofileread() → (*fp_read == vn_read)(): • check if valid call and mark the file as used • use the file descriptor as an index into the file table to find the corresponding file pointer • use the data pointer in the file structure to find the vnode • find the current offset in the file • call the local file system: VOP_READ(vp, len, offset, ...) (the data ends up in the process buffer)
Read • VOP_READ(...) is a pointer to a read function in the corresponding file system, e.g., Fast File System (FFS) • READ(): • find the corresponding inode • check if valid call – file size vs. len + offset • loop and find the corresponding blocks • find logical blocks from inode, offset, length • do block I/O, filling the buffer structure, e.g., bread(...) → bio_doread(...) → getblk(...) • return and copy the block to the user
Read • getblk(vp, blkno, size, ...): • search for the block in the buffer cache; return it if found (hash vp and blkno and follow the linked hash list) • otherwise, get a new buffer (LRU, age) • call the disk driver via VOP_STRATEGY(bp) – sleep or do something else while waiting • reorganize the LRU chain and return the buffer