Our current work focuses on designing file systems and their supporting operating system architecture. File systems must evolve as system architectures change around them. We aim to identify these changes and understand how file system architecture should respond. Current work includes:
Developing a generalized logging facility from a journaling core, and using it to provide strong data protection within storage-area network environments.
Designing flexible file system interfaces and application-specific file systems.
How should the performance of file systems be measured?
How do file systems age and how can this aging be simulated to improve the relevance of benchmarks?
How do soft updates and journaling differ in performance and semantics? (With researchers at CMU and Kirk McKusick)
How do the different FFS allocation algorithms compare?
How do clustering and file system logging compare?
This research is part of the VINO project.
This paper presents two journaling file system implementations
and compares their performance with that of soft updates, all on
the same platform within a BSD operating system. Journaling uses
an auxiliary log to record meta-data operations while soft updates
uses ordered writes to ensure meta-data consistency. Journaling
can be run either synchronously or asynchronously; the choice of mode
affects the file system's semantics. We vary the journaling mode and
compare each configuration against both soft updates and the performance
upper bound of an unrecoverable file system to explore the relative costs
of the different meta-data consistency guarantees.
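The write-ahead journaling mechanism this abstract describes can be sketched in a few lines. The following is a toy illustration under my own assumptions (an in-memory "journal" standing in for the on-disk log; `JournalingFS` and its methods are hypothetical names, not the paper's implementation). It shows the write-ahead rule and why the synchronous/asynchronous choice changes semantics: a synchronous journal forces each record to the log before the operation completes, so it survives a crash; an asynchronous journal buffers records and can lose recent operations.

```python
# Toy sketch of write-ahead meta-data journaling (illustrative only;
# not the paper's implementation). Operations are logged before the
# in-place structures are updated, so recovery replays the log.

class JournalingFS:
    def __init__(self, synchronous=True):
        self.synchronous = synchronous  # sync: force the log per operation
        self.log = []        # durable journal (survives a crash)
        self.pending = []    # buffered log records (volatile, async mode)
        self.metadata = {}   # in-place meta-data, e.g. name -> inode no.

    def _commit(self):
        # Force buffered records to the durable journal.
        self.log.extend(self.pending)
        self.pending.clear()

    def create(self, name, inode):
        # Write-ahead rule: record the operation in the journal first...
        self.pending.append(("create", name, inode))
        if self.synchronous:
            self._commit()
        # ...then apply it to the in-place structures.
        self.metadata[name] = inode

    def crash(self):
        # A crash loses all volatile state: in-place updates and any
        # log records that were never forced to the journal.
        self.pending.clear()
        self.metadata = {}

    def recover(self):
        # Replay the durable journal to rebuild consistent meta-data.
        for op, name, inode in self.log:
            if op == "create":
                self.metadata[name] = inode
        return self.metadata
```

With `synchronous=True`, every create survives a crash; with `synchronous=False`, unforced operations are lost but the file system still recovers to a consistent (if older) state, which is the semantic difference the paper measures.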
Many file system benchmarks are run on empty file systems, a state that few users encounter in practice. To let researchers measure file systems in a more realistic setting, this paper presents a technique called file system aging, which uses an artificial workload to simulate the effects of many months, or even years, of activity on a test file system. The paper also demonstrates file system aging by using it to evaluate modifications to the file layout policies of the UNIX fast file system.
This paper compares two disk allocation policies implemented for the Berkeley Fast File System. A simulated ten-month workload is used to fill and fragment test file systems that use the different allocation policies.
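To make the comparison concrete, here is a minimal sketch, under my own assumptions, of how two allocation policies can be run against the same request trace and scored for layout quality. The policies shown (first-fit from block 0 versus a next-fit rotor) and all names are hypothetical illustrations, not the FFS algorithms the paper evaluates.

```python
# Toy comparison of two block-allocation policies on one trace
# (illustrative only; not the FFS allocator). Requests may be
# partially satisfied if the disk runs short of free blocks.

def allocate(trace, nblocks, policy="first"):
    free = [True] * nblocks
    rotor = 0                 # next-fit resumes scanning here
    layouts = []              # block list chosen for each request
    for want in trace:
        start = 0 if policy == "first" else rotor
        got = []
        for i in range(nblocks):
            b = (start + i) % nblocks
            if free[b]:
                free[b] = False
                got.append(b)
                if len(got) == want:
                    break
        if got:
            rotor = (got[-1] + 1) % nblocks
        layouts.append(got)
    return layouts

def contiguity(layouts):
    # Fraction of logically consecutive blocks that are also physically
    # consecutive on disk: 1.0 means perfectly contiguous layout.
    pairs = hits = 0
    for blocks in layouts:
        for a, b in zip(blocks, blocks[1:]):
            pairs += 1
            hits += (b == a + 1)
    return hits / pairs if pairs else 1.0
```

Feeding both policies the same aged, fragmented trace and comparing their `contiguity` scores mirrors, in miniature, the methodology of filling test file systems with a simulated workload and measuring the resulting layouts.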
This paper compares a log-structured file system with the conventional UNIX fast file system (FFS). The analysis of FFS includes the study of the impact of file fragmentation found in the technical report above.
This paper presents empirical data characterizing the fragmentation and performance of FFS file systems that range from several months to several years in age.