This project is no longer active; this page is no longer updated.
Related projects: [Armada], [CHARISMA], [Galley], [Parallel-I/O], [STARFISH]
Related keywords: [pario], [software]
Large parallel computing systems, especially those used for scientific computation, consume and produce huge amounts of data. Providing the necessary semantics for parallel processes accessing a file, and the necessary throughput for an application working with terabytes of data, requires a multiprocessor file system.
We developed methods for caching in parallel file systems, and in particular several novel methods for prefetching data based on the access patterns observed thus far. We implemented these methods on a Butterfly GP-1000 parallel computer and validated the results through experimentation. The most complete description of the methods and results appears in Kotz's dissertation [kotz:thesis]; three journal papers capture most of the essentials [kotz:prefetch, kotz:jpractical, kotz:jwriteback].
RAPID-Transit was a testbed for experimenting with caching and prefetching algorithms in parallel file systems (RAPID means "Read-Ahead for Parallel Independent Disks"), and was part of the larger NUMAtic project at Duke University. The testbed ran on Duke's 64-processor Butterfly GP1000. In our model, a disk was attached to every processor, and each file was striped across all disks. Duke's GP1000 had only one real disk, so our testbed simulated its disks. The implementation and some of the policies depended on the shared-memory nature of the machine; for example, there was a single shared file cache accessible to all processors. We found several policies that were successful at prefetching in a variety of parallel file-access patterns.
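To make the design concrete, the sketch below illustrates two of the ideas described above: round-robin striping of file blocks across per-processor disks, and a simple sequential read-ahead heuristic that prefetches the next block once a run of sequential reads is observed. This is a hypothetical illustration, not the RAPID-Transit code; the names, threshold, and bookkeeping are invented for the example, and the actual policies studied in the papers were more varied.

```python
def disk_for_block(block: int, num_disks: int) -> int:
    """Round-robin striping: block i of a file lives on disk i mod N.
    (Illustrative assumption; the testbed striped each file across all disks.)"""
    return block % num_disks


class SequentialPrefetcher:
    """Toy read-ahead policy: after `run_threshold` consecutive sequential
    reads, prefetch the next block. Tracks how often a prefetched block
    is later read (a prefetch 'hit')."""

    def __init__(self, run_threshold: int = 2):
        self.run_threshold = run_threshold
        self.last_block = None   # most recently read block number
        self.run = 0             # length of current sequential run
        self.prefetched = set()  # blocks fetched ahead of demand
        self.hits = 0            # prefetched blocks that were then read

    def on_read(self, block: int):
        """Record a demand read; return a block number to prefetch, or None."""
        if block in self.prefetched:
            self.hits += 1
            self.prefetched.discard(block)
        # Extend or restart the sequential run.
        if self.last_block is not None and block == self.last_block + 1:
            self.run += 1
        else:
            self.run = 1
        self.last_block = block
        # Once the run looks sequential, read ahead one block.
        if self.run >= self.run_threshold:
            self.prefetched.add(block + 1)
            return block + 1
        return None
```

For example, with 4 disks, block 5 maps to disk 1; and a process reading blocks 0, 1, 2, 3 in order triggers prefetches of blocks 2, 3, and 4, two of which become hits.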
David Kotz and Carla Ellis, Duke University.
All the source code, AS-IS, in a 2.2 MB compressed tar file (3.8 MB uncompressed and untarred).
This research was supported in part by NSF grants CCR-8721781 and CCR-8821809, and by a DARPA/NASA subcontract of NCC2-560.
The views and conclusions contained on this site and in its documents are those of the authors and should not be interpreted as necessarily representing the official position or policies, either expressed or implied, of the sponsor(s). Any mention of specific companies or products does not imply any endorsement by the authors or by the sponsor(s).
Papers are listed in reverse-chronological order.