STARFISH: Simulation Tool for Advanced Research in File Systems (1994-2001)

This project is no longer active; this page is no longer updated.

Related projects: [Armada], [CHARISMA], [Galley], [Parallel-I/O], [RAPID-Transit]

Related keywords: [pario], [software]


Summary

Large parallel computing systems, especially those used for scientific computation, consume and produce huge amounts of data. Providing the necessary semantics for parallel processes accessing a file, and the necessary throughput for an application working with terabytes of data, requires a multiprocessor file system.

In the STARFISH project we developed the concept of disk-directed I/O, in which the application requests a large parallel data transfer to or from a parallel file, and the file system then arranges the movement of data between disks and memory in an order that suits the disks' own timing. The results show strong performance benefits, but only if suitable interfaces allow the application to present such high-level requests to the file system. The most complete paper is [kotz:jdiskdir]. An overview was presented in a 1994 talk at NASA [video].
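To make the idea concrete, here is a minimal sketch, in C, of the kind of interface disk-directed I/O depends on. All of the names below (matrix_request_t, pfs_read, ddio_read_matrix) are hypothetical illustrations, not the actual STARFISH or Proteus interfaces.

    /* A minimal sketch (not the actual STARFISH interface) contrasting a
     * traditional per-processor read with a single collective, high-level
     * request of the kind that enables disk-directed I/O.  All names here
     * are hypothetical. */

    #include <stddef.h>

    typedef enum { DIST_BLOCK, DIST_CYCLIC, DIST_BLOCK_CYCLIC } dist_t;

    typedef struct {
        const char *file;       /* parallel file holding the matrix            */
        size_t      rows, cols; /* global matrix shape                         */
        size_t      elem_size;  /* bytes per element                           */
        dist_t      dist;       /* how the matrix is spread across CP memories */
    } matrix_request_t;

    /* Traditional interface: each compute processor (CP) issues many small,
     * independent requests, so the file system never sees the global picture. */
    long pfs_read(int fd, void *buf, size_t nbytes, long offset);

    /* Disk-directed interface: every CP makes this one collective call.  The
     * request describes the whole transfer, so the I/O processors can fetch
     * blocks in whatever order suits each disk and send each block straight
     * to the memory of the CP that owns it. */
    int ddio_read_matrix(const matrix_request_t *req, void *local_part);

Because the single collective request carries the global picture, the I/O processors can schedule the disks for maximum throughput rather than reacting to many small, independently ordered reads.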

STARFISH is a simulator for experimenting with concepts in parallel file systems. It is based on Eric Brewer's Proteus simulator from MIT, version 3.01, and runs only on (MIPS-based) DECstations.

The name: STARFISH is an acronym (Simulation Tool for Advanced Research in File Systems), but it also fits the maritime theme of the Proteus simulator on which it is based (Proteus was a Greek god of the sea).

Download

Warning: I provide the code as-is, with little cleanup or added documentation. Some of the code is out-of-date and may have bugs. Other parts are incomplete. Many of the analysis scripts are fragile. The code is constantly evolving, and new public releases may be rare. But many people have asked me for it, so here it is.

See other warnings in the README file.

Usage rules: You're welcome to look at the code and even try to run it, but I really don't have time to help you out much. If you publish any results based on this simulator, please cite me and provide the URL for this page.

Copying rules: This package may be freely copied as long as it is kept intact with my name on it. You may not sell it for commercial purposes (hah! as if anyone would pay for it.) Please send me a note if you have a copy of this code, so I can keep track of how many copies there are, send you email about new versions, and so forth. Please ask me before you distribute any modified version.

Version 3 (October 1996):

Release 3.0 (tgz)

Version 2 (January 1995):

Sources (tgz)
Experimental configurations and raw data (tgz)
Figures and data (tgz) for kotz:jdiskdir

Note re: kotz:diskdir: the Version 2 code evolved after the experiments in some of those papers were run; in particular, the OSDI results were based on an earlier, buggier version of iopfs-cache. See the TR version of that paper for correct results.


People

David Kotz.

Funding and acknowledgements

This project was supported largely by the US National Science Foundation under award CCR-940919.

The views and conclusions contained on this site and in its documents are those of the authors and should not be interpreted as necessarily representing the official position or policies, either expressed or implied, of the sponsor(s). Any mention of specific companies or products does not imply any endorsement by the authors or by the sponsor(s).

Papers tagged 'starfish'

[Also available in BibTeX]

Papers are listed in reverse-chronological order, with the abstract following each entry. For full information and a PDF, follow the [Details] link.

2001:
David Kotz. Disk-directed I/O for MIMD Multiprocessors. High Performance Mass Storage and Parallel I/O: Technologies and Applications. September 2001. [Details]

Many scientific applications that run on today’s multiprocessors, such as weather forecasting and seismic analysis, are bottlenecked by their file-I/O needs. Even if the multiprocessor is configured with sufficient I/O hardware, the file-system software often fails to provide the available bandwidth to the application. Although libraries and enhanced file-system interfaces can make a significant improvement, we believe that fundamental changes are needed in the file-server software. We propose a new technique, disk-directed I/O, to allow the disk servers to determine the flow of data for maximum performance. Our simulations show that tremendous performance gains are possible both for simple reads and writes and for an out-of-core application. Indeed, our disk-directed I/O technique provided consistent high performance that was largely independent of data distribution, obtained up to 93% of peak disk bandwidth, and was as much as 18 times faster than the traditional technique.

1997:
David Kotz. Disk-directed I/O for MIMD Multiprocessors. ACM Transactions on Computer Systems. February 1997. [Details]

Many scientific applications that run on today’s multiprocessors, such as weather forecasting and seismic analysis, are bottlenecked by their file-I/O needs. Even if the multiprocessor is configured with sufficient I/O hardware, the file-system software often fails to provide the available bandwidth to the application. Although libraries and enhanced file-system interfaces can make a significant improvement, we believe that fundamental changes are needed in the file-server software. We propose a new technique, disk-directed I/O, to allow the disk servers to determine the flow of data for maximum performance. Our simulations show that tremendous performance gains are possible both for simple reads and writes and for an out-of-core application. Indeed, our disk-directed I/O technique provided consistent high performance that was largely independent of data distribution, obtained up to 93% of peak disk bandwidth, and was as much as 18 times faster than the traditional technique.

1996:
David Kotz. Tuning STARFISH. Technical Report, October 1996. [Details]

STARFISH is a parallel file-system simulator we built for our research into the concept of disk-directed I/O. In this report, we detail steps taken to tune the file systems supported by STARFISH, which include a traditional parallel file system (with caching) and a disk-directed I/O system. In particular, we now support two-phase I/O, use smarter disk scheduling, allow more outstanding requests from each compute processor to each disk, and perform gather/scatter block transfers. We also present results of the experiments driving the tuning effort.
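The report describes the tuning in detail; as a rough illustration of what "smarter disk scheduling" can mean, the hypothetical sketch below services a disk's pending block requests in ascending block order (an elevator-style sweep) rather than in arrival order. None of these names are STARFISH's actual code.

    #include <stdlib.h>

    /* Hypothetical pending-request record; not STARFISH's actual structure. */
    typedef struct {
        unsigned long block;   /* physical block number on this disk     */
        void         *buffer;  /* memory to fill (read) or drain (write) */
    } pending_t;

    void do_disk_io(pending_t *p);   /* hypothetical hook into the disk model */

    static int by_block(const void *a, const void *b)
    {
        unsigned long x = ((const pending_t *)a)->block;
        unsigned long y = ((const pending_t *)b)->block;
        return (x > y) - (x < y);
    }

    /* Elevator-style sweep: service the queue in ascending block order, so
     * the head moves in one direction, instead of in request-arrival order. */
    void service_queue(pending_t *queue, size_t n)
    {
        qsort(queue, n, sizeof *queue, by_block);
        for (size_t i = 0; i < n; i++)
            do_disk_io(&queue[i]);
    }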

David Kotz. STARFISH parallel file-system simulator. The basis for my research on disk-directed I/O; used by at least two other research groups, October 1996. Third release. [Details]

STARFISH is a simulator for experimenting with concepts in parallel file systems. It is based on Eric Brewer’s Proteus simulator from MIT, version 3.01, and runs only on (MIPS-based) DECstations. I have used this simulator in experiments for several research papers about disk-directed I/O.

Apratim Purakayastha, Carla Schlatter Ellis, and David Kotz. ENWRICH: A Compute-Processor Write Caching Scheme for Parallel File Systems. Proceedings of the Workshop on Input/Output in Parallel and Distributed Systems (IOPADS). May 1996. [Details]

Many parallel scientific applications need high-performance I/O. Unfortunately, end-to-end parallel-I/O performance has not been able to keep up with substantial improvements in parallel-I/O hardware because of poor parallel file-system software. Many radical changes, both at the interface level and the implementation level, have recently been proposed. One such proposed interface is collective I/O, which allows parallel jobs to request transfer of large contiguous objects in a single request, thereby preserving useful semantic information that would otherwise be lost if the transfer were expressed as per-processor non-contiguous requests. Kotz has proposed disk-directed I/O as an efficient implementation technique for collective-I/O operations, where the compute processors make a single collective data-transfer request, and the I/O processors thereafter take full control of the actual data transfer, exploiting their detailed knowledge of the disk-layout to attain substantially improved performance.

Recent parallel file-system usage studies show that writes to write-only files are a dominant part of the workload. Therefore, optimizing writes could have a significant impact on overall performance. In this paper, we propose ENWRICH, a compute-processor write-caching scheme for write-only files in parallel file systems. ENWRICH combines low-overhead write caching at the compute processors with high performance disk-directed I/O at the I/O processors to achieve both low latency and high bandwidth. This combination facilitates the use of the powerful disk-directed I/O technique independent of any particular choice of interface. By collecting writes over many files and applications, ENWRICH lets the I/O processors optimize disk I/O over a large pool of requests. We evaluate our design via simulated implementation and show that ENWRICH achieves high performance for various configurations and workloads.
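As a rough sketch of the caching idea (not the paper's implementation), a compute processor can append each small write, plus a small header recording its file and offset, to a local buffer and flush the whole buffer collectively when it fills; the names and sizes below are illustrative only.

    #include <stddef.h>
    #include <string.h>

    #define CACHE_BYTES (4 * 1024 * 1024)   /* illustrative cache size */

    typedef struct {
        int    file_id;   /* which write-only file this write belongs to */
        long   offset;    /* byte offset within that file                */
        size_t length;    /* data bytes follow this header in the cache  */
    } write_record_t;

    static char   cache[CACHE_BYTES];
    static size_t used = 0;

    /* Hypothetical collective flush: every compute processor calls it, and
     * the I/O processors then write the pooled data with disk-directed I/O. */
    void enwrich_collective_flush(const char *buf, size_t len);

    /* Append one small write (header + data) to the local cache; flush the
     * whole cache collectively when it fills.  Writes larger than the cache
     * are not handled in this sketch. */
    void cached_write(int file_id, long offset, const void *data, size_t len)
    {
        if (used + sizeof(write_record_t) + len > CACHE_BYTES) {
            enwrich_collective_flush(cache, used);
            used = 0;
        }
        write_record_t rec = { file_id, offset, len };
        memcpy(cache + used, &rec, sizeof rec);
        used += sizeof rec;
        memcpy(cache + used, data, len);
        used += len;
    }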


1995:
Apratim Purakayastha, Carla Schlatter Ellis, and David Kotz. ENWRICH: A Compute-Processor Write Caching Scheme for Parallel File Systems. Technical Report, October 1995. [Details]

Many parallel scientific applications need high-performance I/O. Unfortunately, end-to-end parallel-I/O performance has not been able to keep up with substantial improvements in parallel-I/O hardware because of poor parallel file-system software. Many radical changes, both at the interface level and the implementation level, have recently been proposed. One such proposed interface is collective I/O, which allows parallel jobs to request transfer of large contiguous objects in a single request, thereby preserving useful semantic information that would otherwise be lost if the transfer were expressed as per-processor non-contiguous requests. Kotz has proposed disk-directed I/O as an efficient implementation technique for collective-I/O operations, where the compute processors make a single collective data-transfer request, and the I/O processors thereafter take full control of the actual data transfer, exploiting their detailed knowledge of the disk-layout to attain substantially improved performance.

Recent parallel file-system usage studies show that writes to write-only files are a dominant part of the workload. Therefore, optimizing writes could have a significant impact on overall performance. In this paper, we propose ENWRICH, a compute-processor write-caching scheme for write-only files in parallel file systems. ENWRICH combines low-overhead write caching at the compute processors with high performance disk-directed I/O at the I/O processors to achieve both low latency and high bandwidth. This combination facilitates the use of the powerful disk-directed I/O technique independent of any particular choice of interface. By collecting writes over many files and applications, ENWRICH lets the I/O processors optimize disk I/O over a large pool of requests. We evaluate our design via simulated implementation and show that ENWRICH achieves high performance for various configurations and workloads.


David Kotz. Expanding the Potential for Disk-Directed I/O. Proceedings of the IEEE Symposium on Parallel and Distributed Processing (SPDP). October 1995. [Details]

As parallel computers are increasingly used to run scientific applications with large data sets, and as processor speeds continue to increase, it becomes more important to provide fast, effective parallel file systems for data storage and for temporary files. In an earlier work we demonstrated that a technique we call disk-directed I/O has the potential to provide consistent high performance for large, collective, structured I/O requests. In this paper we expand on this potential by demonstrating the ability of a disk-directed I/O system to read irregular subsets of data from a file, and to filter and distribute incoming data according to data-dependent functions.
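As an illustration of the "data-dependent function" idea (the paper's actual interface is not reproduced here), the application can hand the I/O processors a routing function that inspects each record as it comes off disk and names the compute processor that should receive it:

    #include <stddef.h>

    /* Hypothetical record layout and routing interface; not the paper's code. */
    typedef struct {
        double key;           /* example field used to route the record */
        char   payload[56];
    } record_t;

    /* Application-supplied routing function: record -> destination CP. */
    typedef int (*distribute_fn)(const record_t *rec, int num_cps);

    /* Example: spread records by key value rather than by file position. */
    static int route_by_key(const record_t *rec, int num_cps)
    {
        return (int)((unsigned long)rec->key % (unsigned long)num_cps);
    }

    void send_record_to_cp(int cp, const record_t *rec);  /* hypothetical send */

    /* I/O-processor side: as each block streams off disk, apply the routing
     * function to every record and forward it to the compute processor it
     * names. */
    void deliver_block(const record_t *recs, size_t n, int num_cps,
                       distribute_fn route)
    {
        for (size_t i = 0; i < n; i++)
            send_record_to_cp(route(&recs[i], num_cps), &recs[i]);
    }

In this sketch an I/O processor would call deliver_block(block, n, num_cps, route_by_key) on each block it reads, so records reach compute processors according to their contents rather than their position in the file.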

David Kotz. Interfaces for Disk-Directed I/O. Technical Report, September 1995. [Details]

In other papers I propose the idea of disk-directed I/O for multiprocessor file systems. Those papers focus on the performance advantages and capabilities of disk-directed I/O, but say little about the application-programmer’s interface or about the interface between the compute processors and I/O processors. In this short note I discuss the requirements for these interfaces, and look at many existing interfaces for parallel file systems. I conclude that many of the existing interfaces could be adapted for use in a disk-directed I/O system.

David Kotz. Disk-directed I/O for an Out-of-core Computation. Proceedings of the IEEE International Symposium on High Performance Distributed Computing (HPDC). August 1995. [Details]

New file systems are critical to obtain good I/O performance on large multiprocessors. Several researchers have suggested the use of collective file-system operations, in which all processes in an application cooperate in each I/O request. Others have suggested that the traditional low-level interface (read, write, seek) be augmented with various higher-level requests (e.g., read matrix). Collective, high-level requests permit a technique called disk-directed I/O to significantly improve performance over traditional file systems and interfaces, at least on simple I/O benchmarks. In this paper, we present the results of experiments with an “out-of-core” LU-decomposition program. Although its collective interface was awkward in some places, and forced additional synchronization, disk-directed I/O was able to obtain much better overall performance than the traditional system.
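The shape of the out-of-core computation is roughly the following (a hedged sketch; the benchmark's real code and interface are not shown): the matrix is processed one panel of columns at a time, each panel brought in by a collective read, factored in memory, and written back collectively. The ddio_* calls and factor_panel are hypothetical.

    /* Hypothetical collective calls and compute kernel; not the benchmark's
     * actual interface. */
    void ddio_read_panel(int file, int first_col, int ncols, double *panel);
    void ddio_write_panel(int file, int first_col, int ncols, const double *panel);
    void factor_panel(double *panel, int first_col, int ncols, int n);

    /* Process an n-by-n matrix one panel of columns at a time: collective
     * read, in-core factorization, collective write-back.  (A full LU
     * factorization also updates later panels; that step is elided here.) */
    void out_of_core_lu(int file, int n, int panel_cols, double *panel)
    {
        for (int col = 0; col < n; col += panel_cols) {
            int w = (n - col < panel_cols) ? (n - col) : panel_cols;
            ddio_read_panel(file, col, w, panel);    /* all CPs call together */
            factor_panel(panel, col, w, n);
            ddio_write_panel(file, col, w, panel);
        }
    }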

David Kotz and Ting Cai. Exploring the use of I/O Nodes for Computation in a MIMD Multiprocessor. Proceedings of the IPPS Workshop on Input/Output in Parallel and Distributed Systems (IOPADS). April 1995. [Details]

As parallel systems move into the production scientific-computing world, the emphasis will be on cost-effective solutions that provide high throughput for a mix of applications. Cost-effective solutions demand that a system make effective use of all of its resources. Many MIMD multiprocessors today, however, distinguish between “compute” and “I/O” nodes, the latter having attached disks and being dedicated to running the file-system server. This static division of responsibilities simplifies system management but does not necessarily lead to the best performance in workloads that need a different balance of computation and I/O.

Of course, computational processes sharing a node with a file-system service may receive less CPU time, network bandwidth, and memory bandwidth than they would on a computation-only node. In this paper we begin to examine this issue experimentally. We found that high-performance I/O does not necessarily require substantial CPU time, leaving plenty of time for application computation. There were some complex file-system requests, however, which left little CPU time available to the application. (The impact on network and memory bandwidth still needs to be determined.) For applications (or users) that cannot tolerate an occasional interruption, we recommend that they continue to use only compute nodes. For tolerant applications needing more cycles than those provided by the compute nodes, we recommend that they take full advantage of both compute and I/O nodes for computation, and that operating systems should make this possible.


David Kotz. Expanding the Potential for Disk-Directed I/O. Technical Report, March 1995. [Details]

As parallel computers are increasingly used to run scientific applications with large data sets, and as processor speeds continue to increase, it becomes more important to provide fast, effective parallel file systems for data storage and for temporary files. In an earlier work we demonstrated that a technique we call disk-directed I/O has the potential to provide consistent high performance for large, collective, structured I/O requests. In this paper we expand on this potential by demonstrating the ability of a disk-directed I/O system to read irregular subsets of data from a file, and to filter and distribute incoming data according to data-dependent functions.

David Kotz. Disk-directed I/O for an Out-of-core Computation. Technical Report, January 1995. [Details]

New file systems are critical to obtain good I/O performance on large multiprocessors. Several researchers have suggested the use of collective file-system operations, in which all processes in an application cooperate in each I/O request. Others have suggested that the traditional low-level interface (read, write, seek) be augmented with various higher-level requests (e.g., read matrix), allowing the programmer to express a complex transfer in a single (perhaps collective) request. Collective, high-level requests permit techniques like two-phase I/O and disk-directed I/O to significantly improve performance over traditional file systems and interfaces. Neither of these techniques has been tested on anything other than simple benchmarks that read or write matrices. Many applications, however, intersperse computation and I/O to work with data sets that cannot fit in main memory. In this paper, we present the results of experiments with an “out-of-core” LU-decomposition program, comparing a traditional interface and file system with a system that has a high-level, collective interface and disk-directed I/O. We found that a collective interface was awkward in some places, and forced additional synchronization. Nonetheless, disk-directed I/O was able to obtain much better performance than the traditional system.

1994:
David Kotz. HP 97560 disk simulation module. Used in STARFISH and several other research projects, 1994. [Details]

We implemented a detailed model of the HP 97560 disk drive, to replicate a model devised by Ruemmler and Wilkes (both of Hewlett-Packard).

David Kotz. Disk-directed I/O for MIMD Multiprocessors. Proceedings of the Symposium on Operating Systems Design and Implementation (OSDI). November 1994. Updated as Dartmouth TR PCS-TR94-226 on November 8, 1994. [Details]

Many scientific applications that run on today’s multiprocessors are bottlenecked by their file I/O needs. Even if the multiprocessor is configured with sufficient I/O hardware, the file-system software often fails to provide the available bandwidth to the application. Although libraries and improved file-system interfaces can make a significant improvement, we believe that fundamental changes are needed in the file-server software. We propose a new technique, disk-directed I/O, that flips the usual relationship between server and client to allow the disks (actually, disk servers) to determine the flow of data for maximum performance. Our simulations show that tremendous performance gains are possible. Indeed, disk-directed I/O provided consistent high performance that was largely independent of data distribution, and close to the maximum disk bandwidth.

David Kotz. Disk-directed I/O for MIMD Multiprocessors. Bulletin of the IEEE Technical Committee on Operating Systems and Application Environments. Autumn 1994. [Details]

Many scientific applications that run on today’s multiprocessors are bottlenecked by their file I/O needs. Even if the multiprocessor is configured with sufficient I/O hardware, the file-system software often fails to provide the available bandwidth to the application. Although libraries and improved file-system interfaces can make a significant improvement, we believe that fundamental changes are needed in the file-server software. We propose a new technique, disk-directed I/O, that flips the usual relationship between server and client to allow the disks (actually, disk servers) to determine the flow of data for maximum performance. Our simulations show that tremendous performance gains are possible. Indeed, disk-directed I/O provided consistent high performance that was largely independent of data distribution, and close to the maximum disk bandwidth.

David Kotz and Ting Cai. Exploring the use of I/O Nodes for Computation in a MIMD Multiprocessor. Technical Report, October 1994. Revised February 20, 1995. [Details]

Most MIMD multiprocessors today are configured with two distinct types of processor nodes: those that have disks attached, which are dedicated to file I/O, and those that do not have disks attached, which are used for running applications. Several architectural trends have led some to propose configuring systems so that all processors are used for application processing, even those with disks attached. We examine this idea experimentally, focusing on the impact of remote I/O requests on local computational processes. We found that in an efficient file system the I/O processors can transfer data at near peak speeds with little CPU overhead, leaving substantial CPU power for running applications. On the other hand, we found that some complex file-system features could require substantial CPU overhead. Thus, for a multiprocessor system to obtain good I/O and computational performance on a mix of applications, the file system (both operating system and libraries) must be prepared to adapt their policies to changing conditions.

David Kotz. Disk-directed I/O for MIMD Multiprocessors. Technical Report, July 1994. Revised November 8, 1994. [Details]

Many scientific applications that run on today’s multiprocessors are bottlenecked by their file I/O needs. Even if the multiprocessor is configured with sufficient I/O hardware, the file-system software often fails to provide the available bandwidth to the application. Although libraries and improved file-system interfaces can make a significant improvement, we believe that fundamental changes are needed in the file-server software. We propose a new technique, disk-directed I/O, that flips the usual relationship between server and client to allow the disks (actually, disk servers) to determine the flow of data for maximum performance. Our simulations show that tremendous performance gains are possible. Indeed, disk-directed I/O provided consistent high performance that was largely independent of data distribution, and close to the maximum disk bandwidth.

David Kotz, Song Bac Toh, and Sriram Radhakrishnan. A Detailed Simulation Model of the HP 97560 Disk Drive. Technical Report, July 1994. [Details]

We implemented a detailed model of the HP 97560 disk drive, to replicate a model devised by Ruemmler and Wilkes (both of Hewlett-Packard, HP). Our model simulates one or more disk drives attached to one or more SCSI buses, using a small discrete-event simulation module included in our implementation. The design is broken into three components: a test driver, the disk model itself, and the discrete-event simulation support. Thus, the disk model can be easily extracted and used in other simulation environments. We validated our model using traces obtained from HP, using the same “demerit” measure as Ruemmler and Wilkes. We obtained a demerit figure of 3.9%, indicating that our model was extremely accurate. This paper describes our implementation, and is meant for those wishing to understand our model or to implement their own.
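The report gives the full model; as a rough sketch of the general approach used by Ruemmler-and-Wilkes-style disk models (the constants below are illustrative placeholders, not the HP 97560's measured parameters), the service time for a request is built from a two-piece seek curve plus rotational latency and transfer time:

    #include <math.h>

    /* Illustrative parameters only; NOT the HP 97560's measured values. */
    #define SEEK_SHORT_MS   3.2     /* coefficient for the short-seek sqrt law */
    #define SEEK_LONG_MS    8.0     /* base time for long seeks                */
    #define SEEK_PER_CYL    0.008   /* extra ms per cylinder on long seeks     */
    #define SHORT_SEEK_MAX  400     /* distance where the curve changes shape  */
    #define XFER_PER_BLOCK  0.5     /* ms to transfer one block                */

    /* Two-piece seek curve: sqrt-shaped for short seeks, linear for long ones. */
    static double seek_time_ms(int cylinders)
    {
        if (cylinders == 0)
            return 0.0;
        if (cylinders < SHORT_SEEK_MAX)
            return SEEK_SHORT_MS * sqrt((double)cylinders);
        return SEEK_LONG_MS + SEEK_PER_CYL * (double)cylinders;
    }

    /* Service time for one request: seek, then wait for the target sector to
     * rotate under the head, then transfer.  In a full model the rotational
     * latency is derived from the platter's simulated angular position. */
    double request_time_ms(int seek_cyls, double rot_latency_ms, int blocks)
    {
        return seek_time_ms(seek_cyls) + rot_latency_ms
             + blocks * XFER_PER_BLOCK;
    }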


[Kotz research]