<?xml version="1.0"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>

<title>David Kotz papers for project 'pario'</title>
<description>Papers from David Kotz and his research group, about Parallel I/O.
</description>
<language>en-us</language>
<pubDate>Wed, 18 Mar 2026 17:29:02 +0000</pubDate>
<link>https://www.cs.dartmouth.edu/~kotz/research/project/pario/index.html</link>
<docs>https://validator.w3.org/feed/docs/rss2.html</docs>
<atom:link href="https://www.cs.dartmouth.edu/~kotz/research/project/pario/feed.xml" rel="self" type="application/rss+xml"/>

<item>
<title>A Holesome File System</title>
<guid>vengroff:holesome-tr</guid>
<pubDate>Sat, 01 May 2004 00:00:00 +0000</pubDate>
<description>
Darren Erik Vengroff and David Kotz.
 &lt;b&gt;A Holesome File System.&lt;/b&gt;
 Technical Report number&#160;TR2004-497, Dartmouth Computer Science, May 2004.
 Originally written in July 1995; released May 2004.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;We present a novel approach to fully dynamic management of physical disk blocks in Unix file systems. By adding a single system call, &lt;em&gt;zero()&lt;/em&gt;, to an existing file system, we permit applications to create &lt;em&gt;holes&lt;/em&gt;, that is, regions of files to which no physical disk blocks are allocated, far more flexibly than previously possible. &lt;em&gt;zero()&lt;/em&gt; can create holes in the middle of existing files. &lt;/p&gt;&lt;p&gt; Using &lt;em&gt;zero()&lt;/em&gt;, it is possible to efficiently implement applications including a variety of databases and I/O-efficient computation systems on top of the Unix file system. &lt;em&gt;zero()&lt;/em&gt; can also be used to implement an efficient file-system-based paging mechanism. In some I/O-efficient computations, the availability of &lt;em&gt;zero()&lt;/em&gt; effectively doubles disk capacity by allowing blocks of temporary files to be reallocated to new files as they are read. &lt;/p&gt;&lt;p&gt; Experiments on a Linux &lt;em&gt;ext2&lt;/em&gt; file system augmented by &lt;em&gt;zero()&lt;/em&gt; demonstrate that where their functionality overlaps, &lt;em&gt;zero()&lt;/em&gt; is more efficient than &lt;em&gt;ftruncate()&lt;/em&gt;. Additional experiments reveal that in exchange for added effective disk capacity, I/O-efficient code pays only a small performance penalty.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/vengroff-holesome-tr/index.html</link>
</item>

<item>
<title>Scientific Applications using Parallel I/O</title>
<guid>oldfield:bapp-pario</guid>
<pubDate>Sat, 01 Sep 2001 00:00:00 +0000</pubDate>
<description>
Ron Oldfield and David Kotz.
 &lt;b&gt;Scientific Applications using Parallel I/O.&lt;/b&gt;
 &lt;i&gt;High Performance Mass Storage and Parallel I/O: Technologies and Applications&lt;/i&gt;, chapter&#160;45, pages&#160;655&#8211;666.
 Edited by Hai Jin, Toni Cortes, and Rajkumar Buyya.
 Wiley-IEEE Press, September 2001.
 ISBN13:&#160;978-0-471-20809-9.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Scientific applications are increasingly being implemented on massively parallel supercomputers. Many of these applications have intense I/O demands, as well as massive computational requirements. This paper is essentially an annotated bibliography of papers and other sources of information about scientific applications using parallel I/O.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/oldfield-bapp-pario/index.html</link>
</item>

<item>
<title>I/O in Parallel and Distributed Systems</title>
<guid>kotz:encyc1</guid>
<pubDate>Fri, 01 Jan 1999 00:00:00 +0000</pubDate>
<description>
David Kotz and Ravi Jain.
 &lt;b&gt;I/O in Parallel and Distributed Systems.&lt;/b&gt;
 &lt;i&gt;Encyclopedia of Computer Science and Technology&lt;/i&gt;, pages&#160;141&#8211;154.
 Edited by Allen Kent and James G. Williams.
 Volume&#160;40, Marcel Dekker, 1999.
 ISBN13:&#160;9780824722937.
 Supplement 25.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;We sketch the reasons for the I/O bottleneck in parallel and distributed systems, pointing out that it can be viewed as a special case of a general bottleneck that arises at all levels of the memory hierarchy. We argue that because of its severity, the I/O bottleneck deserves systematic attention at all levels of system design. We then present a survey of the issues raised by the I/O bottleneck in six key areas of parallel and distributed systems: applications, algorithms, languages and compilers, run-time libraries, operating systems, and architecture.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-encyc1/index.html</link>
</item>

<item>
<title>Special Issue on Parallel I/O Systems</title>
<guid>kotz:pario-intro</guid>
<pubDate>Mon, 01 Dec 1997 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Special Issue on Parallel I/O Systems.&lt;/b&gt;
 &lt;i&gt;ACM SIGMETRICS Performance Evaluation Review&lt;/i&gt;, volume&#160;25, number&#160;3, 1&#160;page, page&#160;2.
 ACM, December 1997.
 doi:10.1145/270900.581191.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Introduction to the special issue.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-pario-intro/index.html</link>
</item>

<item>
<title>Introduction to Multiprocessor I/O Architecture</title>
<guid>kotz:pioarch</guid>
<pubDate>Mon, 01 Jan 1996 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Introduction to Multiprocessor I/O Architecture.&lt;/b&gt;
 &lt;i&gt;Input/Output in Parallel and Distributed Computer Systems&lt;/i&gt;, chapter&#160;4, pages&#160;97&#8211;124.
 Edited by Ravi Jain, John Werth, and James C. Browne.
 Volume&#160;362 in The Kluwer International Series in Engineering and Computer Science, Kluwer Academic Publishers, 1996.
 doi:10.1007/978-1-4613-1401-1_4.
 ISBN13:&#160;978-1-4613-1401-1.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;The computational performance of multiprocessors continues to improve by leaps and bounds, fueled in part by rapid improvements in processor and interconnection technology. I/O performance thus becomes ever more critical, to avoid becoming the bottleneck of system performance. In this paper we provide an introduction to I/O architectural issues in multiprocessors, with a focus on disk subsystems. While we discuss examples from actual architectures and provide pointers to interesting research in the literature, we do not attempt to provide a comprehensive survey. We concentrate on a study of the architectural design issues, and the effects of different design alternatives.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-pioarch/index.html</link>
</item>

<item>
<title>Large-Scale File Systems with the Flexibility of Databases</title>
<guid>choudhary:sdcr</guid>
<pubDate>Sun, 01 Dec 1996 00:00:00 +0000</pubDate>
<description>
Alok Choudhary and David Kotz.
 &lt;b&gt;Large-Scale File Systems with the Flexibility of Databases.&lt;/b&gt;
 &lt;i&gt;ACM Computing Surveys (CSUR)&lt;/i&gt;, volume&#160;28, number&#160;4es, 1&#160;page.
 ACM, December 1996.
 doi:10.1145/242224.242488.
 Position paper for the Working Group on Storage I/O for Large-Scale Computing, ACM Workshop on Strategic Directions in Computing Research. Available on-line only.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;We note that large-scale computing includes many applications with intensive I/O demands. A data-storage system for such applications must address two issues: locating the appropriate data set, and accessing the contents of the data set. Today, there are two extreme models of data location and management: 1) file systems, which can be fast but which require a user to manage the structure of the file-name space and, often, of the file contents; and 2) object-oriented-database systems, in which even the smallest granule of data is stored as an object with associated access methods, which is very flexible but often slow. We propose a solution that may provide the performance of file systems with the flexibility of object databases.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/choudhary-sdcr/index.html</link>
</item>

<item>
<title>Strategic directions in storage I/O issues in large-scale computing</title>
<guid>gibson:strategic-directions</guid>
<pubDate>Sun, 01 Dec 1996 00:00:00 +0000</pubDate>
<description>
Garth A. Gibson, Jeffrey Scott Vitter, John Wilkes, Alok Choudhary, Peter Corbett, Thomas H. Cormen, Carla Schlatter Ellis, Michael T. Goodrich, Peter Highnam, David Kotz, Kai Li, Richard R. Muntz, Joseph Pasquale, M. Satyanarayanan, and Darren Erik Vengroff.
 &lt;b&gt;Strategic directions in storage I/O issues in large-scale computing.&lt;/b&gt;
 &lt;i&gt;ACM Computing Surveys (CSUR)&lt;/i&gt;, volume&#160;28, number&#160;4, 15&#160;pages, pages&#160;779&#8211;793.
 ACM, December 1996.
 doi:10.1145/242223.242300.
 I am listed as a 'contributor' in the author list of this paper.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;In this article we discuss the strategic directions and challenges in the management and use of storage systems &#8212; those components of computer systems responsible for the storage and retrieval of data.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/gibson-strategic-directions/index.html</link>
</item>

<item>
<title>Applications of Parallel I/O</title>
<guid>kotz:app-pario</guid>
<pubDate>Tue, 01 Oct 1996 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Applications of Parallel I/O.&lt;/b&gt;
 Technical Report number&#160;PCS-TR96-297, Dartmouth Computer Science, October 1996.
 Release 1.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Scientific applications are increasingly being implemented on massively parallel supercomputers. Many of these applications have intense I/O demands, as well as massive computational requirements. This paper is essentially an annotated bibliography of papers and other sources of information about scientific applications using parallel I/O. It will be updated periodically.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-app-pario/index.html</link>
</item>

<item>
<title>The Panda Array I/O Library on the Galley Parallel File System</title>
<guid>thomas:thesis</guid>
<pubDate>Sat, 01 Jun 1996 00:00:00 +0000</pubDate>
<description>
Joel T. Thomas.
 &lt;b&gt;The Panda Array I/O Library on the Galley Parallel File System.&lt;/b&gt;
 Technical Report number&#160;PCS-TR96-288, Dartmouth Computer Science, Hanover, NH, June 1996.
 Senior Honors Thesis. Advisor: David Kotz.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;The Panda Array I/O library, created at the University of Illinois, Urbana-Champaign, was built especially to address the needs of high-performance scientific applications. I/O has been one of the most frustrating bottlenecks to high performance for quite some time, and the Panda project is an attempt to ameliorate this problem while still providing the user with a simple, high-level interface. The Galley File System, with its hierarchical structure of files and strided requests, is another attempt at addressing the performance problem. My project was to redesign the Panda Array library for use on the Galley file system. This project involved porting Panda's three main functions: a checkpoint function for writing a large array periodically for 'safekeeping,' a restart function that would allow a checkpointed file to be read back in, and finally a timestep function that would allow the user to write a group of large arrays several times in a sequence. Panda supports several different distributions in both the compute-node memories and I/O-node disks. &lt;/p&gt;&lt;p&gt; We have found that the Galley File System provides a good environment on which to build high-performance libraries, and that the mesh of Panda and Galley was a successful combination.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/thomas-thesis/index.html</link>
</item>

<item>
<title>Parallel File Systems</title>
<guid>kotz:lecture</guid>
<pubDate>Fri, 01 Mar 1996 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Parallel File Systems.&lt;/b&gt;
 A multimedia lecture included in the CD-ROM &#8220;Introductory Lectures on Data-Parallel Computing&#8221;, published by AK Peters, Ltd., March 1996.
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-lecture/index.html</link>
</item>

<item>
<title>Parallel I/O: Getting Ready for Prime Time</title>
<guid>reed:panel</guid>
<pubDate>Sat, 01 Jul 1995 00:00:00 +0000</pubDate>
<description>
Dan Reed, Charles Catlett, Alok Choudhary, David Kotz, and Marc Snir.
 &lt;b&gt;Parallel I/O: Getting Ready for Prime Time.&lt;/b&gt;
 &lt;i&gt;IEEE Parallel &amp; Distributed Technology: Systems &amp; Applications&lt;/i&gt;, volume&#160;3, number&#160;2, pages&#160;64&#8211;71.
 IEEE, Summer 1995.
 doi:10.1109/MPDT.1995.9283668.
 Edited transcript of panel discussion at the 1994 International Conference on Parallel Processing.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;During the &lt;em&gt;International Conference on Parallel Processing&lt;/em&gt;, held August 15-19, 1994, we convened a panel to discuss the state of the art in parallel I/O, tools and techniques to address current problems, and challenges for the future. The following is an edited transcript of that panel.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/reed-panel/index.html</link>
</item>

<item>
<title>HP 97560 disk simulation module</title>
<guid>kotz:diskmodel-sw</guid>
<pubDate>Sat, 01 Jan 1994 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;HP 97560 disk simulation module.&lt;/b&gt;
 Used in STARFISH and several other research projects, 1994.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;We implemented a detailed model of the HP 97560 disk drive, to replicate a model devised by Ruemmler and Wilkes (both of Hewlett-Packard).&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-diskmodel-sw/index.html</link>
</item>

<item>
<title>Bibliography about Parallel I/O</title>
<guid>kotz:pario-sw</guid>
<pubDate>Sat, 01 Jan 1994 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Bibliography about Parallel I/O.&lt;/b&gt;
 BibTeX bibliography, 1994.
 First released in 1994, and updated periodically through 2011.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;A bibliography of many references on parallel I/O and multiprocessor file-systems issues. As of the fifth edition, it is available in HTML format.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-pario-sw/index.html</link>
</item>

<item>
<title>A Detailed Simulation Model of the HP 97560 Disk Drive</title>
<guid>kotz:diskmodel</guid>
<pubDate>Fri, 01 Jul 1994 00:00:00 +0000</pubDate>
<description>
David Kotz, Song Bac Toh, and Sriram Radhakrishnan.
 &lt;b&gt;A Detailed Simulation Model of the HP 97560 Disk Drive.&lt;/b&gt;
 Technical Report number&#160;PCS-TR94-220, Dartmouth Computer Science, July 1994.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;We implemented a detailed model of the HP 97560 disk drive, to replicate a model devised by Ruemmler and Wilkes (both of Hewlett-Packard, HP). Our model simulates one or more disk drives attached to one or more SCSI buses, using a small discrete-event simulation module included in our implementation. The design is broken into three components: a test driver, the disk model itself, and the discrete-event simulation support. Thus, the disk model can be easily extracted and used in other simulation environments. We validated our model using traces obtained from HP, using the same &#8220;demerit&#8221; measure as Ruemmler and Wilkes. We obtained a demerit figure of 3.9%, indicating that our model was extremely accurate. This paper describes our implementation, and is meant for those wishing to understand our model or to implement their own.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-diskmodel/index.html</link>
</item>

<item>
<title>Integrating Theory and Practice in Parallel File Systems</title>
<guid>cormen:integrate</guid>
<pubDate>Tue, 01 Jun 1993 00:00:00 +0000</pubDate>
<description>
Thomas H. Cormen and David Kotz.
 &lt;b&gt;Integrating Theory and Practice in Parallel File Systems.&lt;/b&gt;
 &lt;i&gt;Proceedings of the Dartmouth Institute for Advanced Graduate Studies (DAGS)&lt;/i&gt;, pages&#160;64&#8211;74.
 Dartmouth Institute for Advanced Graduate Studies (DAGS), Dartmouth College, Hanover, NH, June 1993.
 Revised as Dartmouth PCS-TR93-188 on 9/20/94.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Several algorithms for parallel disk systems have appeared in the literature recently, and they are asymptotically optimal in terms of the number of disk accesses. Scalable systems with parallel disks must be able to run these algorithms. We present for the first time a list of capabilities that must be provided by the system to support these optimal algorithms: control over declustering, querying about the configuration, independent I/O, and turning off parity, file caching, and prefetching. We summarize recent theoretical and empirical work that justifies the need for these capabilities. In addition, we sketch an organization for a parallel file interface with low-level primitives and higher-level operations.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/cormen-integrate/index.html</link>
</item>

<item>
<title>Throughput of Existing Multiprocessor File Systems (an informal study)</title>
<guid>kotz:throughput</guid>
<pubDate>Sat, 01 May 1993 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Throughput of Existing Multiprocessor File Systems (an informal study).&lt;/b&gt;
 Technical Report number&#160;PCS-TR93-190, Dept. of Math and Computer Science, Dartmouth College, May 1993.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Fast file systems are critical for high-performance scientific computing, since many scientific applications have tremendous I/O requirements. Many parallel supercomputers have only recently obtained fully parallel I/O architectures and file systems, which are necessary for scalable I/O performance. Scalability aside, I show here that many systems lack sufficient absolute performance. I do this by surveying the performance reported in the literature, summarized in an informal table.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-throughput/index.html</link>
</item>

<item>
<title>Integrating Theory and Practice in Parallel File Systems</title>
<guid>cormen:integrate-tr</guid>
<pubDate>Mon, 01 Mar 1993 00:00:00 +0000</pubDate>
<description>
Thomas H. Cormen and David Kotz.
 &lt;b&gt;Integrating Theory and Practice in Parallel File Systems.&lt;/b&gt;
 Technical Report number&#160;PCS-TR93-188, Dept. of Math and Computer Science, Dartmouth College, March 1993.
 Revised 9/20/94.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Several algorithms for parallel disk systems have appeared in the literature recently, and they are asymptotically optimal in terms of the number of disk accesses. Scalable systems with parallel disks must be able to run these algorithms. We present a list of capabilities that must be provided by the system to support these optimal algorithms: control over declustering, querying about the configuration, independent I/O, turning off file caching and prefetching, and bypassing parity. We summarize recent theoretical and empirical work that justifies the need for these capabilities.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/cormen-integrate-tr/index.html</link>
</item>

<item>
<title>The Scalable I/O Initiative</title>
<guid>bershad:scalable-io</guid>
<pubDate>Tue, 23 Feb 1993 00:00:00 +0000</pubDate>
<description>
B. Bershad, A. Chien, A. Choudhary, T. Cormen, E. DeBenedictis, D. DeWitt, D. Ecklund, W. Gropp, R. Grossman, R. Kendall, K. Kennedy, C. Koelbel, D. Kotz, K. Li, P. Lyster, D. Marinescu, P. Messina, R. Moore, S. O'Malley, D. Payne, T. Pratt, D. Reed, J. Saltz, R. Stevens, S. Wallach, and R. Williams.
 &lt;b&gt;The Scalable I/O Initiative.&lt;/b&gt;
 Technical Report, Caltech Concurrent Supercomputing Consortium, February 23, 1993.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;This white paper describes a collaborative project that brings together systems software developers, computer vendors, and applications teams to develop hardware and software systems to support scalable I/O for high performance computer systems. The project is organized around the provision of a full-scale testbed for the development and evaluation of new systems software for scalable I/O. In addition, research projects will be formed to address the scalable I/O problem from a number of perspectives, such as languages, compilers, file systems, networking software, persistent object stores, and low level system services.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/bershad-scalable-io/index.html</link>
</item>

<item>
<title>Multiprocessor File System Interfaces</title>
<guid>kotz:fsint2</guid>
<pubDate>Fri, 01 Jan 1993 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Multiprocessor File System Interfaces.&lt;/b&gt;
 &lt;i&gt;Proceedings of the International Conference on Parallel and Distributed Information Systems (PDIS)&lt;/i&gt;, pages&#160;194&#8211;201.
 IEEE, January 1993.
 doi:10.1109/PDIS.1993.253093.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Increasingly, file systems for multiprocessors are designed with parallel access to multiple disks, to keep I/O from becoming a serious bottleneck for parallel applications. Although file system software can transparently provide high-performance access to parallel disks, a new file system interface is needed to facilitate parallel access to a file from a parallel application. We describe the difficulties faced when using the conventional (Unix-like) interface in parallel applications, and then outline ways to extend the conventional interface to provide convenient access to the file for parallel programs, while retaining the traditional interface for programs that have no need for explicitly parallel file access. Our interface includes a single naming scheme, a &lt;em&gt;multiopen&lt;/em&gt; operation, local and global file pointers, mapped file pointers, logical records, &lt;em&gt;multifiles&lt;/em&gt;, and logical coercion for backward compatibility.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-fsint2/index.html</link>
</item>

<item>
<title>Multiprocessor File System Interfaces</title>
<guid>kotz:fsint2p</guid>
<pubDate>Fri, 01 May 1992 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Multiprocessor File System Interfaces.&lt;/b&gt;
 &lt;i&gt;Proceedings of the USENIX File Systems Workshop (WOFS)&lt;/i&gt;, pages&#160;149&#8211;150.
 USENIX Association, May 1992.
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-fsint2p/index.html</link>
</item>

<item>
<title>Multiprocessor File System Interfaces</title>
<guid>kotz:fsint</guid>
<pubDate>Sun, 01 Mar 1992 00:00:00 +0000</pubDate>
<description>
David Kotz.
 &lt;b&gt;Multiprocessor File System Interfaces.&lt;/b&gt;
 Technical Report number&#160;PCS-TR92-179, Dept. of Math and Computer Science, Dartmouth College, March 1992.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Increasingly, file systems for multiprocessors are designed with parallel access to multiple disks, to keep I/O from becoming a serious bottleneck for parallel applications. Although file system software can transparently provide high-performance access to parallel disks, a new file system interface is needed to facilitate parallel access to a file from a parallel application. We describe the difficulties faced when using the conventional (Unix-like) interface in parallel applications, and then outline ways to extend the conventional interface to provide convenient access to the file for parallel programs, while retaining the traditional interface for programs that have no need for explicitly parallel file access. Our interface includes a single naming scheme, a &lt;em&gt;multiopen&lt;/em&gt; operation, local and global file pointers, mapped file pointers, logical records, &lt;em&gt;multifiles&lt;/em&gt;, and logical coercion for backward compatibility.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/kotz-fsint/index.html</link>
</item>

<item>
<title>NUMAtic Project and the DUnX OS</title>
<guid>ellis:numatic</guid>
<pubDate>Tue, 01 Jan 1991 00:00:00 +0000</pubDate>
<description>
Carla Ellis, Mark Holliday, Rick LaRowe, David Kotz, Vick Khera, Steve Owen, and Chris Connelly.
 &lt;b&gt;NUMAtic Project and the DUnX OS.&lt;/b&gt;
 &lt;i&gt;IEEE Technical Committee on Operating Systems and Application Environments (Newsletter)&lt;/i&gt;, volume&#160;5, number&#160;4, pages&#160;12&#8211;14.
 IEEE, Winter 1991.
 &lt;p&gt;&lt;b&gt;Abstract:&lt;/b&gt;&lt;/p&gt;
&lt;p&gt;Nonuniformity of memory access is an almost inevitable feature of the memory architecture in any shared memory multiprocessor design that can scale to large numbers of processors. The NUMAtic project is a program of experimental research exploring memory management on NUMA machines.&lt;/p&gt;
 
</description>
<link>https://www.cs.dartmouth.edu/~kotz/research/ellis-numatic/index.html</link>
</item>

</channel>
</rss>
