IotaFS: Exploring File System Optimizations for SSDs

Henry Cook, Jon Ellithorpe, Laura Keys, Andrew Waterman
PROBLEM STATEMENT

A solid-state drive (SSD) is a non-volatile storage device that uses flash memory rather than a magnetic disk to store data. SSDs provide vastly improved read latency and random read/write access times relative to their mechanical hard disk counterparts. Unfortunately, an SSD must read and erase a large block of data before performing any write within that block, leading to reduced write performance. Most modern file systems are optimized around assumptions about the backing storage device that are poorly matched to SSDs.

RELATED WORK

Previous Linux file systems targeting NAND flash devices have assumed an MTD driver interface, whereas modern SSDs expose a SATA interface. Sun Microsystems's ZFS allows hybrid storage pools with both types of drives, and Btrfs will provide SSD optimizations when completed.

[Figure: IotaFS inode tree]
[Figure: IotaFS disk layout]

METHODOLOGY

We made a variety of modifications to the baseline file system in order to leverage the improved latency and random-access performance of SSDs. We then evaluated each of these improvements using both a synthetic and a realistic workload. We also demonstrate the effect of SSD awareness by testing our file system on an HDD as well.

SSD OPTIMIZATIONS

WRITE-THROUGH BUFFER

Many file systems buffer writes in order to mitigate the seek penalty paid when accessing an HDD. We force the buffer to flush to the backing SSD on every write, since no seek penalty is paid for random writes to an SSD (a minimal sketch of this policy follows below).

LARGER DISK BLOCKS

Some file systems use only small disk blocks. Doing so may be detrimental to SSD performance, depending on the block size of the device. We are testing a variety of block sizes to measure the impact of this parameter (see the harness sketch below).
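To make the write-through policy concrete, here is a minimal sketch in C. It is not IotaFS source: the names (iota_write_block, dev_write, struct buffer) and the 4 KB file-system block size are assumptions for illustration.

```c
/* Sketch of a write-through buffer policy: the cached buffer is
 * updated and then flushed to the device immediately, rather than
 * deferred as it would be to amortize HDD seeks. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 4096                  /* assumed file-system block */

struct buffer {
    uint64_t blockno;                    /* on-disk block number */
    uint8_t  data[BLOCK_SIZE];
};

/* Assumed low-level hook: issues one block write to the SSD. */
int dev_write(uint64_t blockno, const void *data, size_t len);

/* Write path: update the in-memory buffer, then flush right away.
 * Random writes carry no seek penalty on an SSD, so deferring the
 * flush buys little. */
int iota_write_block(struct buffer *buf, const void *src)
{
    memcpy(buf->data, src, BLOCK_SIZE);
    return dev_write(buf->blockno, buf->data, BLOCK_SIZE);
}
```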
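The block-size experiment can be run with a small user-space harness along these lines; the mount point, the 256 MB working set, and the sweep range are arbitrary choices for the example, not the actual test configuration.

```c
/* Hypothetical harness: write the same amount of data at several
 * I/O sizes and time each pass, fsync included. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define TOTAL_BYTES (256UL * 1024 * 1024)     /* data written per pass */

static double write_pass(const char *path, size_t bsize)
{
    char *buf = malloc(bsize);
    memset(buf, 0xAA, bsize);
    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t done = 0; done < TOTAL_BYTES; done += bsize)
        if (write(fd, buf, bsize) < 0)
            break;
    fsync(fd);                                /* include flush time */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    close(fd);
    free(buf);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    /* Sweep from 512 B up to the 128 KB erase block size. */
    for (size_t bsize = 512; bsize <= 128 * 1024; bsize *= 2)
        printf("%6zu B blocks: %.2f s\n", bsize,
               write_pass("/mnt/iotafs/testfile", bsize));
    return 0;
}
```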
ERASE BLOCK AWARENESS

SSDs can only write data by erasing entire 100 KB+ blocks and then rewriting them. We allocate disk blocks to files so as to localize writes to the same file within one erase block, in the hope that this locality results in fewer erases.

COPY-ON-WRITE

This policy creates a new copy of a block every time one is written. By coalescing pending writes and writing the copies to a single erase block, we limit the number of erases required. Fragmentation should have a limited impact on SSD performance. (A combined sketch of these two policies follows.)
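The sketch below combines the two policies under assumed names: dirty blocks for a file are written as fresh copies packed into one 128 KB erase block (matching the device parameter listed under SSD PARAMETERS), and the inode's block map is redirected to the copies instead of overwriting in place.

```c
/* Erase-block-aware copy-on-write flush: each batch of dirty blocks
 * costs at most one erase per 128 KB written, and old copies become
 * garbage to be reclaimed later. All names are illustrative. */
#include <stdint.h>

#define FS_BLOCK_SIZE    4096
#define ERASE_BLOCK_SIZE (128 * 1024)
#define BLOCKS_PER_ERASE (ERASE_BLOCK_SIZE / FS_BLOCK_SIZE)   /* 32 */

struct inode {
    uint64_t blockmap[1024];     /* logical block -> physical block */
};

/* Assumed allocator hook: returns the first physical block number of
 * a free, pre-erased erase block. */
uint64_t alloc_erase_block(void);

/* Assumed device hook: writes one file-system block. */
int dev_write_block(uint64_t phys, const void *data);

/* Flush ndirty dirty logical blocks of one file. Copies are packed
 * into the same erase block for locality (erase block awareness) and
 * the block map is redirected to them (copy-on-write). */
void cow_flush(struct inode *ip, const uint64_t *dirty,
               const void *const *data, int ndirty)
{
    uint64_t base = 0;
    int packed = BLOCKS_PER_ERASE;           /* forces initial alloc */

    for (int i = 0; i < ndirty; i++) {
        if (packed == BLOCKS_PER_ERASE) {    /* erase block full:    */
            base = alloc_erase_block();      /* start a fresh one    */
            packed = 0;
        }
        uint64_t phys = base + packed++;
        dev_write_block(phys, data[i]);
        ip->blockmap[dirty[i]] = phys;       /* redirect, no in-place overwrite */
    }
}
```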
BENCHMARKS

• Bonnie++: tests sequential/random reads and writes to 16 1 GB files
• Tarballing the Linux kernel: many random reads, one large sequential write

BENCHMARK RESULTS

[Figure: Bonnie++ sequential reads and writes (KB/s)]
[Figure: Bonnie++ random seek speed (seeks/sec)]
[Figure: Bonnie++ CPU utilization]
[Figure: Tarballing the Linux kernel, time to complete]
[Figure: Lines of source code]

SSD PARAMETERS

Size: 32 GB
Read bandwidth: up to 250 MB/s
Write bandwidth: up to 170 MB/s
Read latency: 75 µs
Erase block size: 128 KB
Active power: 2.4 W

FUTURE WORK

• Additional targeted benchmarks
• SSD-targeted logging for improved performance on restart after failure
• Comparison with Btrfs's SSD optimizations once that file system has been completed