Enabling SMR

Shingled Magnetic Recording (SMR) is a storage technology that increases storage density relative to conventional hard disk drives. The tradeoff for this increased density is poor random write performance (reads and sequential writes are largely unaffected). SMR drives come in three forms: drive-managed, host-managed, and host-aware. Drive-managed drives feature a shingled translation layer (STL), similar to a flash translation layer, in which the drive autonomously manages writes to prevent random write performance degradation. Host-managed drives require the host to issue sequential writes; random writes simply fail. Host-aware drives are a hybrid, preferring sequential writes but falling back on a translation layer when random writes occur.
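
As a rough illustration of these semantics, the minimal C sketch below (hypothetical; the zone size and the host_managed_write function are illustrative, not drawn from any drive firmware or standard) models a shingled zone as a write pointer that only strictly sequential writes may advance. This is the write a host-managed drive accepts, and the write a drive-managed or host-aware drive would otherwise divert through its STL:

    /*
     * Hypothetical sketch of the zone model behind the three SMR drive types.
     * A host-managed drive rejects any write that does not land exactly on the
     * zone's write pointer; a drive-managed or host-aware drive would instead
     * divert such a write through its shingled translation layer (STL).
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ZONE_BLOCKS 65536u   /* blocks per shingled zone (illustrative size) */

    struct zone {
        uint64_t start;          /* first LBA of the zone        */
        uint64_t write_ptr;      /* next LBA that may be written */
    };

    /* Host-managed semantics: only a write at the write pointer succeeds. */
    static bool host_managed_write(struct zone *z, uint64_t lba, uint64_t nblocks)
    {
        if (lba != z->write_ptr || lba + nblocks > z->start + ZONE_BLOCKS)
            return false;        /* non-sequential (or overflowing) write fails */
        z->write_ptr += nblocks; /* advance the write pointer */
        return true;
    }

    int main(void)
    {
        struct zone z = { .start = 0, .write_ptr = 0 };

        printf("sequential write: %s\n", host_managed_write(&z, 0, 8)  ? "ok" : "rejected");
        printf("random write:     %s\n", host_managed_write(&z, 64, 8) ? "ok" : "rejected");
        return 0;
    }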

The first aspect of our work provides two components for conventional HDDs: an SMR translation layer, which lets a conventional HDD be treated as an SMR drive (the performance difference between an emulated and an actual SMR drive being negligible), and a shingled translation layer, which builds on this emulation to present a drive-managed SMR drive. As a result, performance measurements on SMR drives can be taken without physical access to such a drive, and various STL policies can be tested without requiring that the STL be implemented in an actual SMR drive.
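
The sketch below is a hypothetical simplification, not the STL evaluated in this work; the l2p table and the stl_write function are illustrative names. It shows the core idea behind any shingled translation layer: random logical writes are redirected into sequential physical appends, and a logical-to-physical map records where each block landed. A real STL must also clean stale blocks and persist the map, which the sketch omits:

    /*
     * Hypothetical STL sketch: random logical writes become sequential
     * physical appends, with a logical-to-physical map (as in a flash
     * translation layer) recording where each block now lives.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define LOGICAL_BLOCKS 1024u
    #define UNMAPPED UINT64_MAX

    static uint64_t l2p[LOGICAL_BLOCKS];   /* logical -> physical map      */
    static uint64_t log_head = 0;          /* next sequential physical LBA */

    /* Any logical write, random or not, lands as an append at log_head. */
    static uint64_t stl_write(uint64_t logical_lba)
    {
        uint64_t physical = log_head++;    /* sequential append            */
        l2p[logical_lba] = physical;       /* remember where the data went */
        return physical;
    }

    static uint64_t stl_read(uint64_t logical_lba)
    {
        return l2p[logical_lba];           /* UNMAPPED if never written    */
    }

    int main(void)
    {
        for (uint64_t i = 0; i < LOGICAL_BLOCKS; i++)
            l2p[i] = UNMAPPED;

        /* A "random" write pattern still yields sequential physical LBAs. */
        stl_write(500);
        stl_write(3);
        stl_write(977);
        printf("logical block 3 lives at physical LBA %llu\n",
               (unsigned long long)stl_read(3));
        return 0;
    }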

The second aspect of our work aims to measure the cost of relying on the shingled translation layer in a drive-managed (or host-aware) drive. The benefit of a drive-managed SMR drive is that any file system can be used on top of it, regardless of how that file system handles random writes. However, this convenience may come at a performance cost compared with a file system that avoids random writes altogether. We have modified NetBSD's implementation of LFS (the log-structured file system) for use with a host-aware SMR drive, the key feature of LFS being that all writes are sequential. Benchmarks using LFS on a host-aware or host-managed SMR drive may illuminate the performance cost of supporting arbitrary file systems on a drive-managed SMR drive.
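
For intuition, the following sketch is hypothetical and not taken from NetBSD's LFS sources (inode_map and lfs_update are illustrative names). It shows the property that makes LFS a natural fit for SMR: an update appends a new copy of a block at the log tail and repoints the inode map rather than overwriting in place, so the device only ever sees sequential writes; stale copies are later reclaimed by the segment cleaner, which the sketch omits:

    /*
     * Hypothetical sketch of the LFS property relied on here: updates never
     * overwrite in place.  A new copy is appended at the log tail and the
     * inode map is repointed, so the drive sees only sequential writes.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define NFILES 16u

    static uint64_t inode_map[NFILES];   /* file id -> LBA of its latest block */
    static uint64_t log_tail = 0;        /* next sequential LBA in the log     */

    /* "Overwrite" a file block by appending a new copy and updating the map. */
    static void lfs_update(uint32_t file_id)
    {
        inode_map[file_id] = log_tail++;   /* strictly sequential device write */
    }

    int main(void)
    {
        lfs_update(7);                     /* initial write of file 7          */
        lfs_update(7);                     /* update: appended, not rewritten  */
        printf("file 7 now lives at LBA %llu; the old copy awaits cleaning\n",
               (unsigned long long)inode_map[7]);
        return 0;
    }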

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. CCF-1438983, a Graduate Research Fellowship under Fellow ID 2012116808, and a CAREER award under Grant No. CSR-1302334, as well as by collaborative research, "Workload-Aware Storage Architectures for Optimal Performance and Energy Efficiency," with Erez Zadok and Ari Kaufman of SUNY Stony Brook and Geoffrey Kuenning of Harvey Mudd College.