RAPID-Transit project (1987-1991)

This project is no longer active; this page is no longer updated.

Related projects: [Armada], [CHARISMA], [Galley], [Parallel-I/O], [STARFISH]

Related keywords: [pario], [software]


Summary

Large parallel computing systems, especially those used for scientific computation, consume and produce huge amounts of data. Providing the necessary semantics for parallel processes accessing a file, and the necessary throughput for an application working with terabytes of data, requires a multiprocessor file system.

We developed methods for caching in parallel file systems, and in particular several novel methods for prefetching data based on the access pattern observed so far. We implemented these methods on a Butterfly GP1000 parallel computer and validated the results through experimentation. The most complete description of the methods and results appears in Kotz's dissertation [kotz:thesis]; three journal papers capture most of the essentials [kotz:prefetch, kotz:jpractical, kotz:jwriteback].
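
To make the prefetching idea concrete, here is a minimal sketch in C (our illustration only; the names and threshold are invented, and this is not the testbed's actual code) of a policy that watches a file's reference history and prefetches the next block once the recent references look sequential:

    #include <stdbool.h>
    #include <stdio.h>

    /* Per-file reference history (illustrative only). */
    typedef struct {
        long last_block;   /* most recently referenced block */
        int  run_length;   /* length of the current sequential run */
    } access_history;

    /* Record a reference to `block`; return true if the policy would
     * prefetch block+1 now. */
    static bool should_prefetch_next(access_history *h, long block)
    {
        if (block == h->last_block + 1)
            h->run_length++;
        else
            h->run_length = 0;          /* pattern broken; start over */
        h->last_block = block;

        /* Require the pattern to repeat before issuing prefetch I/O,
         * so one accidental sequential pair does not trigger it. */
        return h->run_length >= 2;
    }

    int main(void)
    {
        access_history h = { -1, 0 };
        long refs[] = { 0, 1, 2, 3, 9, 10 };
        for (int i = 0; i < 6; i++)
            printf("read block %ld -> prefetch next? %s\n",
                   refs[i], should_prefetch_next(&h, refs[i]) ? "yes" : "no");
        return 0;
    }

A real policy must also decide how far ahead to prefetch and when to stop; the sketch shows only the pattern-detection step.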

RAPID-Transit was a testbed for experimenting with caching and prefetching algorithms in parallel file systems (RAPID means "Read-Ahead for Parallel Independent Disks"), and was part of the larger NUMAtic project at Duke University. The testbed ran on Duke's 64-processor Butterfly GP1000. The model we used had a disk attached to every processor, with each file striped across all disks. Duke's GP1000 had only one real disk, so our testbed simulated the disks. The implementation and some of the policies depended on the shared-memory nature of the machine; for example, there was a single shared file cache accessible to all processors. We found several policies that were successful at prefetching in a variety of parallel file-access patterns.
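
The striping model is easy to state. The sketch below assumes round-robin striping (one plausible reading of "striped across all disks"; the constants and names are ours, not project code): logical block b of a file lives on disk b mod NDISKS, at per-disk offset b / NDISKS.

    #include <stdio.h>

    #define NDISKS 64   /* one (simulated) disk per processor, as in the testbed */

    typedef struct { int disk; long offset; } location;

    /* Map a logical file block to a (disk, per-disk offset) pair. */
    static location block_location(long block)
    {
        location loc;
        loc.disk   = (int)(block % NDISKS);  /* which disk holds it */
        loc.offset = block / NDISKS;         /* block index on that disk */
        return loc;
    }

    int main(void)
    {
        for (long b = 0; b < 4; b++) {
            location loc = block_location(b);
            printf("file block %ld -> disk %d, offset %ld\n",
                   b, loc.disk, loc.offset);
        }
        return 0;
    }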

People

David Kotz and Carla Ellis, Duke University.

Download source code

All the source code, AS-IS, in a 2.2 MB compressed tar file (3.8 MB once uncompressed and untarred).

Funding and acknowledgements

This research was supported in part by the US National Science Foundation under awards CCR-8721781 and CCR-8821809, and by the US Department of Defense (DARPA/NASA) subcontract NCC2-560.

The views and conclusions contained on this site and in its documents are those of the authors and should not be interpreted as necessarily representing the official position or policies, either expressed or implied, of the sponsor(s). Any mention of specific companies or products does not imply any endorsement by the authors or by the sponsor(s).

Papers tagged 'rapid-transit'

[Also available in BibTeX]

Papers are listed in reverse-chronological order; click an entry to pop up the abstract. For full information and a PDF, click the Details link. Follow updates with RSS.

2001:
David Kotz and Carla Schlatter Ellis. Practical Prefetching Techniques for Multiprocessor File Systems. High Performance Mass Storage and Parallel I/O: Technologies and Applications. September 2001. [Details]

Improvements in the processing speed of multiprocessors are outpacing improvements in the speed of disk hardware. Parallel disk I/O subsystems have been proposed as one way to close the gap between processor and disk speeds. In a previous paper we showed that prefetching and caching have the potential to deliver the performance benefits of parallel file systems to parallel applications. In this paper we describe experiments with practical prefetching policies that base decisions only on on-line reference history, and that can be implemented efficiently. We also test these policies across a range of architectural parameters.

1993:
David Kotz and Carla Schlatter Ellis. Practical Prefetching Techniques for Multiprocessor File Systems. Journal of Distributed and Parallel Databases. January 1993. [Details]

Improvements in the processing speed of multiprocessors are outpacing improvements in the speed of disk hardware. Parallel disk I/O subsystems have been proposed as one way to close the gap between processor and disk speeds. In a previous paper we showed that prefetching and caching have the potential to deliver the performance benefits of parallel file systems to parallel applications. In this paper we describe experiments with practical prefetching policies that base decisions only on on-line reference history, and that can be implemented efficiently. We also test these policies across a range of architectural parameters.

David Kotz and Carla Schlatter Ellis. Caching and Writeback Policies in Parallel File Systems. Journal of Parallel and Distributed Computing. January 1993. [Details]

Improvements in the processing speed of multiprocessors are outpacing improvements in the speed of disk hardware. Parallel disk I/O subsystems have been proposed as one way to close the gap between processor and disk speeds. Such parallel disk systems require parallel file system software to avoid performance-limiting bottlenecks. We discuss cache management techniques that can be used in a parallel file system implementation for multiprocessors with scientific workloads. We examine several writeback policies, and give results of experiments that test their performance.
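
As one concrete point in the design space (our illustration, not necessarily a policy evaluated in the paper), a "write-full" rule delays flushing a dirty cache block until every byte has been overwritten, so a sequential writer issues one disk write per block rather than one per write() call, as strict write-through would:

    #include <stdbool.h>
    #include <stdio.h>

    #define BLOCK_SIZE 4096   /* assumed cache-block size */

    typedef struct {
        bool dirty;
        int  bytes_written;   /* bytes of this block overwritten so far */
    } cache_block;

    /* Record a write of `len` bytes into a cached block; return true
     * if the policy would flush the block to disk now. */
    static bool write_and_maybe_flush(cache_block *b, int len)
    {
        b->dirty = true;
        b->bytes_written += len;
        /* Flush only once the block is completely overwritten. */
        return b->bytes_written >= BLOCK_SIZE;
    }

    int main(void)
    {
        cache_block b = { false, 0 };
        for (int i = 0; i < 4; i++)
            printf("write 1024 bytes -> flush now? %s\n",
                   write_and_maybe_flush(&b, 1024) ? "yes" : "no");
        return 0;
    }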

1991:
David Kotz. RAPID-Transit parallel file-system simulator. The software basis for my Ph.D. dissertation, 1991. [Details]

RAPID-Transit was a testbed for experimenting with caching and prefetching algorithms in parallel file systems (RAPID means “Read-Ahead for Parallel Independent Disks”), and was part of the larger NUMAtic project at Duke University. The testbed ran on Duke’s 64-processor Butterfly GP1000. The model we used had a disk attached to every processor, with each file striped across all disks. Of course, Duke’s GP1000 had only one real disk, so our testbed simulated the disks. The implementation and some of the policies depended on the shared-memory nature of the machine; for example, there was a single shared file cache accessible to all processors. We found several policies that were successful at prefetching in a variety of parallel file-access patterns.

David Kotz and Carla Schlatter Ellis. Practical Prefetching Techniques for Parallel File Systems. Proceedings of the International Conference on Parallel and Distributed Information Systems (PDIS). December 1991. [Details]

Parallel disk subsystems have been proposed as one way to close the gap between processor and disk speeds. In a previous paper we showed that prefetching and caching have the potential to deliver the performance benefits of parallel file systems to parallel applications. In this paper we describe experiments with practical prefetching policies, and show that prefetching can be implemented efficiently even for the more complex parallel file access patterns. We test these policies across a range of architectural parameters.

David Kotz and Carla Schlatter Ellis. Caching and Writeback Policies in Parallel File Systems. Proceedings of the IEEE Symposium on Parallel and Distributed Processing (SPDP). December 1991. [Details]

Improvements in the processing speed of multiprocessors are outpacing improvements in the speed of disk hardware. Parallel disk I/O subsystems have been proposed as one way to close the gap between processor and disk speeds. Such parallel disk systems require parallel file system software to avoid performance-limiting bottlenecks. We discuss cache management techniques that can be used in a parallel file system implementation. We examine several writeback policies, and give results of experiments that test their performance.

David Kotz. Prefetching and Caching Techniques in File Systems for MIMD Multiprocessors. PhD thesis, April 1991. Available as technical report CS-1991-016. [Details]

The increasing speed of the most powerful computers, especially multiprocessors, makes it difficult to provide sufficient I/O bandwidth to keep them running at full speed for the largest problems. Trends show that the difference in the speed of disk hardware and the speed of processors is increasing, with I/O severely limiting the performance of otherwise fast machines. This widening access-time gap is known as the “I/O bottleneck crisis.” One solution to the crisis, suggested by many researchers, is to use many disks in parallel to increase the overall bandwidth.

This dissertation studies some of the file system issues needed to get high performance from parallel disk systems, since parallel hardware alone cannot guarantee good performance. The target systems are large MIMD multiprocessors used for scientific applications, with large files spread over multiple disks attached in parallel. The focus is on automatic caching and prefetching techniques. We show that caching and prefetching can transparently provide the power of parallel disk hardware to both sequential and parallel applications using a conventional file system interface. We also propose a new file system interface (compatible with the conventional interface) that could make it easier to use parallel disks effectively.

Our methodology is a mixture of implementation and simulation, using a software testbed that we built to run on a BBN GP1000 multiprocessor. The testbed simulates the disks and fully implements the caching and prefetching policies. Using a synthetic workload as input, we use the testbed in an extensive set of experiments. The results show that prefetching and caching improved the performance of parallel file systems, often dramatically.
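
To illustrate what "the testbed simulates the disks" can mean (a toy model of our own; the testbed's actual disk model was surely more detailed), each simulated disk can be treated as a FIFO server with a fixed per-request service time:

    #include <stdio.h>

    #define SERVICE_TIME_MS 30.0   /* assumed constant per-request cost */

    typedef struct { double busy_until; } sim_disk;

    /* Return the completion time of a request arriving at `now`;
     * requests queue first-come, first-served. */
    static double disk_request(sim_disk *d, double now)
    {
        double start = (now > d->busy_until) ? now : d->busy_until;
        d->busy_until = start + SERVICE_TIME_MS;
        return d->busy_until;
    }

    int main(void)
    {
        sim_disk d = { 0.0 };
        /* Two requests arriving together: the second waits in the queue. */
        printf("request 1 completes at %.0f ms\n", disk_request(&d, 0.0));
        printf("request 2 completes at %.0f ms\n", disk_request(&d, 0.0));
        return 0;
    }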


1990:
David F. Kotz and Carla Schlatter Ellis. Prefetching in File Systems for MIMD Multiprocessors. IEEE Transactions on Parallel and Distributed Systems. April 1990. [Details]

The problem of providing file I/O to parallel programs has been largely neglected in the development of multiprocessor systems. There are two essential elements of any file system design intended for a highly parallel environment: parallel I/O and effective caching schemes. This paper concentrates on the second aspect of file system design, and specifically on the question of whether prefetching blocks of the file into the block cache can effectively reduce overall execution time of a parallel computation, even under favorable assumptions.

Experiments have been conducted with an interleaved file system testbed on the Butterfly Plus multiprocessor. Results of these experiments suggest that 1) the hit ratio, the accepted measure in traditional caching studies, may not be an adequate measure of performance when the workload consists of parallel computations and parallel file access patterns, 2) caching with prefetching can significantly improve the hit ratio and the average time to perform an I/O operation, and 3) an improvement in overall execution time has been observed in most cases. In spite of these gains, prefetching sometimes results in increased execution times (a negative result, given the optimistic nature of the study).

We explore why it is not trivial to translate savings on individual I/O requests into consistently better overall performance and identify the key problems that need to be addressed in order to improve the potential of prefetching techniques in this environment.
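
One way to see the gap between hit ratio and execution time (our gloss, not the paper's): with hit ratio h, the mean cost of one access is roughly t = h * t_cache + (1 - h) * t_disk, so a better h clearly lowers the mean. But a parallel computation finishes only when its slowest process does, so lowering the mean cost at some processors need not shorten the critical path, and prefetch I/O can even add contention along it.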


1989:
Carla Schlatter Ellis and David Kotz. Prefetching in File Systems for MIMD Multiprocessors. Proceedings of the International Conference on Parallel Processing (ICPP). August 1989. [Details]

The problem of providing file I/O to parallel programs has been largely neglected in the development of multiprocessor systems. There are two essential elements of any file system design intended for a highly parallel environment: parallel I/O and effective caching schemes. This paper concentrates on the second aspect of file system design, and specifically on the question of whether prefetching blocks of the file into the block cache can effectively reduce overall execution time of a parallel computation, even under favorable assumptions.

Experiments have been conducted with an interleaved file system testbed on the Butterfly Plus multiprocessor. Results of these experiments suggest that 1) the hit ratio, the accepted measure in traditional caching studies, may not be an adequate measure of performance when the workload consists of parallel computations and parallel file access patterns, 2) caching with prefetching can significantly improve the hit ratio and the average time to perform an I/O operation, and 3) an improvement in overall execution time has been observed in most cases. In spite of these gains, prefetching sometimes results in increased execution times (a negative result, given the optimistic nature of the study).

We explore why it is not trivial to translate savings on individual I/O requests into consistently better overall performance and identify the key problems that need to be addressed in order to improve the potential of prefetching techniques in this environment.


David Kotz. High-performance File System Design for MIMD Parallel Processors. A talk presented at the DARPA Workshop on Parallel Processing at UMIACS, August 1989. Audiovisual presentation. [Details]

1988:
Carla Schlatter Ellis and David Kotz. Prefetching in File Systems for MIMD Multiprocessors. Technical Report, November 1988. [Details]

The problem of providing file I/O to parallel programs has been largely neglected in the development of multiprocessor systems. There are two essential elements of any file system design intended for a highly parallel environment: parallel I/O and effective caching schemes. This paper concentrates on the second aspect of file system design, and specifically on the question of whether prefetching blocks of the file into the block cache can effectively reduce overall execution time of a parallel computation. MIMD multiprocessor architectures have a profound impact on the nature of the workloads they support. In particular, it is the collective behavior of the processes in a parallel computation that often determines the performance. The assumptions about file access patterns that underlie much of the work in uniprocessor file management are called into question. Results from experiments performed on the Butterfly Plus multiprocessor are presented showing the benefits that can be derived from prefetching (e.g., significant improvements in the cache miss ratio and the average time to perform an I/O operation). We explore why it is not trivial to translate these gains into much better overall performance.

