Last modified: Mon May 18 16:49:45 CEST 2020
My research interests have evolved over the decades to include security and privacy, the Internet of Things, mobile and wearable computing, mobile health (mHealth), pervasive computing for healthcare, wireless networks, mobile ad hoc networks, mobile agents, parallel file systems, and concurrent data structures. This page provides a brief overview of some of my research projects, roughly in reverse-chronological order. Not everything is covered; you may also browse the complete list of papers, or explore the index of papers (by keyword).
Much of my research activity is affiliated with the Dartmouth Center for Technology and Behavioral Health (CTBH), Dartmouth Augmented Health Lab (AH), Dartmouth Institute for Security, Technology, and Society (ISTS), or Swiss Center for Digital Health Interventions (CDHI). In the past, I organized the Center for Mobile Computing.
In October 2020 my team will launch the Security and Privacy in the Lifecycle of IoT for Consumer Environments (SPLICE) project, which focuses on privacy issues in smart homes.
Our work has addressed security and privacy in many contexts, most recently in mobile and wearable systems.
See also past projects DIST, MAP, NetSANI, Kerf, D'Agents, MetroSense, SASOS, Snowflake, Solar, THaW, and TISH.
In the Amanuensis project we explore the combination of blockchain and trusted-execution-environment (TEE) technology to support strong integrity and confidentiality guarantees in the storage of health-related data, including robust provenance information for data derived from original sources.
We expect that wearable and portable medical sensors will enable long-term continuous medical monitoring for many people, such as patients with chronic medical conditions (such as diabetes), people seeking to change behavior (e.g., losing weight or quitting smoking), or athletes wishing to monitor their condition and performance. The resulting data may be used directly by the person, or shared with others: with a physician for treatment, with an insurance company for coverage, or with a trainer or coach. Such systems have huge potential to benefit the quality of healthcare and quality of life for many people.
See also Amanuensis, and past projects Amulet, THaW, and TISH.
In the SIMBA project we explore opportunities to sense physiological states (such as stress) and a person's receptivity to behavioral interventions.
We also explored the viability of commercially available off-the-shelf sensors for stress monitoring. The idea is to use cheap, non-clinical sensors to capture physiological signals, and to infer the wearer's stress level from that data. In particular, we developed algorithms to use a popular off-the-shelf heart-rate monitor, the Polar H7, and found we were able to detect stressful events with an F1 score of 0.81, on par with clinical-grade sensors.
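As an illustration of the general approach (not our actual features or algorithm), stress inference from a heart-rate monitor often starts from heart-rate variability (HRV): RR intervals from a chest strap such as the Polar H7 are split into windows, an HRV feature such as RMSSD is computed per window, and windows with unusually low HRV relative to the wearer's baseline are flagged, since lower HRV is commonly associated with higher stress. A minimal Python sketch with made-up data and thresholds:

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive RR-interval differences,
    a standard heart-rate-variability (HRV) feature."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def stress_windows(rr_stream_ms, window=30, threshold_ratio=0.7):
    """Flag windows whose HRV drops well below the person's baseline.
    Assumes the first window is a calm baseline; real systems would
    calibrate per person and use richer features and a classifier."""
    windows = [rr_stream_ms[i:i + window]
               for i in range(0, len(rr_stream_ms) - window + 1, window)]
    baseline = rmssd(windows[0])
    return [rmssd(w) < threshold_ratio * baseline for w in windows]

# Example: a calm period (variable RR intervals) followed by a
# stressed period (faster, steadier heartbeats -> lower HRV).
calm = [800, 820, 790, 830, 805, 815] * 5
stressed = [650, 652, 651, 649, 650, 651] * 5
print(stress_windows(calm + stressed, window=30))  # second window flagged
```

A real detector would replace the fixed threshold with a trained classifier over many such features; this sketch only shows why inexpensive RR-interval data carries a usable stress signal.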
Recent mHealth intervention designs, including Just-In-Time Adaptive Interventions (JITAIs), show great potential because they aim to provide the right type and amount of support, at the right time. Timing the delivery of a JITAI such that the user is receptive and available to engage with the intervention is crucial for a JITAI to succeed. Our work explored the factors affecting users' receptivity towards JITAIs, and built machine-learning models to detect receptivity.
In the Auracle project we developed a wearable device that can detect eating and related behaviors, and validated its performance in lab and field studies with adults and children; we also explored mechanisms for a person to interact with a head-worn device like the Auracle. See also: the Auracle website.
In the Amulet project we developed a custom wrist-worn computing platform for mobile health (mHealth). For more information about the Amulet project, and a complete list of its papers (not just those involving David Kotz and his students), see the Amulet website.
In the Trustworthy Health and Wellness (THaW) project, a broad effort involving multiple universities, my group focused mostly on wearable and portable devices for use in health monitoring and management, with an emphasis on the security and privacy issues that arise with these devices and their apps. We considered wearable, mobile, or home-based technologies being used by patients or clinical staff, and addressed issues of data integrity and authenticity, person identification and authentication, and usability. For a complete list of papers and people involved in THaW, see the thaw.org website.
In the TISH project we explored wearable and portable devices for use in health monitoring and management, with an emphasis on the security and privacy issues that arise with these devices and their apps. We considered wearable, mobile, or home-based technologies being used by patients or clinical staff, and addressed issues of data integrity and authenticity, person identification and authentication, usability, and ultra-low-power wearable mHealth devices.
When Wi-Fi emerged in 2001, there were few large deployments, and Dartmouth was one of the first universities to deploy a campus-wide Wi-Fi network. In 2001-02 we conducted the largest-ever characterization effort on a wireless network; this effort began a long series of Wi-Fi measurement projects and enabled much of the work described below.
In the CRAWDAD project we created a repository of data and tools collected by the mobile-computing and wireless-networking community, which we host here at Dartmouth. As of May 2019 it housed 125 datasets and 22 tools, and had 14,141 registered users from 126 countries. At the time, at least 2,869 papers had used CRAWDAD datasets or mentioned CRAWDAD. See also: the project web page.
Wireless mesh networks provide Wi-Fi service to mobile clients, much like an infrastructure wireless network, but the backhaul connection between access points is itself an ad hoc wireless network. One large challenge in mesh networks is management. We developed the MeshMon system, which can inform a sysadmin about the health of the mesh network and help diagnose any problems with the network.
The mobility-modeling and MANET projects involved three areas of research:
Mobility modeling: most research in mobile computing, including many papers on ad hoc networks, wireless networks, and pervasive computing, used inadequate mobility models based on variations of random-walk behavior. Building upon traces collected from Dartmouth's wireless network, we derived mobility models and parameters that more closely match the mobility behaviors of real users.
Mobility prediction: we developed and evaluated methods to predict the next access point where a Wi-Fi device was likely to associate, based on its past history.
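One simple family of such predictors is the order-k Markov predictor, which predicts the access point that most often followed the device's k most recent associations in its past trace. A toy Python sketch (hypothetical AP names; our work evaluated several predictor families, not just this one):

```python
from collections import Counter, defaultdict

def train_markov(trace, order=2):
    """Count which access point followed each length-`order` history."""
    model = defaultdict(Counter)
    for i in range(len(trace) - order):
        history = tuple(trace[i:i + order])
        model[history][trace[i + order]] += 1
    return model

def predict_next(model, recent, order=2):
    """Predict the most likely next AP given the most recent history;
    returns None when this history has never been seen before."""
    history = tuple(recent[-order:])
    if history not in model:
        return None
    return model[history].most_common(1)[0][0]

# Hypothetical association trace for one Wi-Fi device.
trace = ["library", "cafe", "dorm", "library", "cafe", "dorm",
         "library", "cafe", "gym"]
model = train_markov(trace, order=2)
print(predict_next(model, ["library", "cafe"]))  # most common successor
```

In practice such predictors must also handle never-seen histories (e.g., by falling back to a lower-order model), which is one of the design questions our evaluation explored.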
MANET and ad hoc networks: we evaluated the relative performance of mobile ad hoc network (MANET) simulations and MANET experiments. In the process, we identified the common assumptions made in MANET research and quantitatively showed how simulation results will not match reality unless good models are used. We conducted the largest-ever outdoor experiment with multiple routing algorithms, and developed new ways to drive a simulator with conditions that match those in the experiment.
We developed Dartmouth Internet Security Testbed (DIST), a large-scale deployment designed to support research on wireless-network security challenges, comprising a wireless-network measurement infrastructure and a suite of Wi-Fi capable mobile devices. DIST built on MAP and supported the work in NetSANI.
We took a three-point MAP (Measure, Analyze, Protect) approach to develop an integrated and extensible framework to address attacks on Wi-Fi networks. We aimed to develop an integrated set of new components that allow a Wi-Fi network operator to measure and analyze Wi-Fi and VoWLAN activity, and to identify and defend against MAC-layer attacks on that infrastructure in real time. Some of our most important contributions concerned methods for optimizing the capture of traffic across the many channels of 802.11.
The Network Trace Sanitization and ANonymization Infrastructure (NetSANI) project aimed to increase network-trace sharing by making it safer and easier to sanitize network traces (remove sensitive information). To this end, the NetSANI goal was to be a flexible and extensible suite of software tools for sanitizing network traces, based on user-specified sanitization goals and user-specified research goals. We never quite achieved that goal, but we conducted some anonymization (and de-anonymization) research.
See also THaW and TISH.
In our MetroSense research we developed the AnonySense system, which includes novel mechanisms for the anonymous collection of sensor data from people who volunteer their cell phones as part of a distributed sensing platform, addressing a key challenge in the important area of participatory and opportunistic urban sensing. In a subproject called PLACE (Privacy in Location-Aware Computing Environments), we also developed a method for access control called virtual walls, which is an intuitive method for controlling access to contextual sensor data.
Kerf was a set of tools designed to help system administrators analyze intrusions in their network of workstations. Kerf tools collect host and network log data in secure databases, allow administrators sophisticated searches using our SQL-language variant (SawQL, pronounced saw-kwill), and present the results through a browsable graphical interface.
In the Snowflake project, we tackled the problem of naming and sharing resources across administrative boundaries. We developed a theory and implementation for restricted delegation, building on the classic "speaks-for" relation that forms the foundation of many authorization logics. In Snowflake, principals can delegate authority to other principals, but in a limited way; in earlier work, a principal could only delegate all of its authority. The work is theoretically well-founded and yet practical to implement.
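The restricted speaks-for idea can be sketched in a few lines (a hypothetical API, not Snowflake's actual logic or implementation): each delegation carries a restriction, and a speaker speaks for a principal on a statement only if some chain of delegations permits that statement.

```python
# (delegate, grantor, restriction) triples; the restriction is a
# predicate over statements, limiting what the delegate may say.
delegations = []

def delegate(grantor, delegate_name, restriction):
    delegations.append((delegate_name, grantor, restriction))

def speaks_for(speaker, principal, statement):
    """Does `speaker` speak for `principal` on this statement?
    (Toy version: no cycle detection, no revocation.)"""
    if speaker == principal:
        return True
    for d, g, restrict in delegations:
        if d == speaker and restrict(statement):
            # speaker speaks for g on this statement; follow the chain up
            if speaks_for(g, principal, statement):
                return True
    return False

# Alice lets Bob act for her, but only on read requests.
delegate("alice", "bob", lambda s: s["action"] == "read")
print(speaks_for("bob", "alice", {"action": "read", "file": "notes"}))
print(speaks_for("bob", "alice", {"action": "write", "file": "notes"}))
```

The contrast with unrestricted speaks-for is the `restriction` predicate: without it, Bob would speak for Alice on every statement, which is exactly the all-or-nothing delegation that Snowflake set out to avoid.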
In the SASOS project we explored the potential for "single address-space operating systems". In the mid 1990s there was a lot of interest in operating systems that used a single, large address space, made possible by the new 64-bit microprocessors, to hold all processes and persistent data. Although the concept was interesting, it required that an address, once used, never be reused. We measured the usage of live computer systems to estimate how quickly such an address space would be consumed.
Mobile agents are software programs that can move from host to host at times and to places of their own choosing. They are a form of active mobile code that opens up new possibilities in distributed computing. Our team created Agent Tcl, one of the first comprehensive mobile-agent software platforms in the research community. In a five-year DARPA-funded effort we transformed Agent Tcl into D'Agents, which supported Java and Scheme as well as Tcl, and which enabled our research on performance aspects of mobile code, the security challenges in mobile code, and market-based control of mobile agents and distributed systems.
We developed the Solar system, a comprehensive middleware framework for the development of context-aware applications. Solar is based on a publish-subscribe model, allowing applications to subscribe to streams of events carrying context data. Through a novel context naming system, applications can identify the desired sources, which themselves may be a named output of a tree of operators that aggregate many other sources. Solar also contributed novel methods for data-flow management, recognizing that some sensor-based context systems may produce far more data (events) than can be carried by an underlying wireless network or can be consumed by operators and applications. Furthermore, we developed a theory and implementation of context-sensitive authorization, the first distributed approach that respects confidentiality and integrity goals. In context-sensitive authorization systems, the authorization policies (e.g., for access to physical resources like a room or virtual resources like a database) depend on the context (e.g., location or activity) of the person requesting access to the resource.
We conducted an extensive series of research projects related to parallel file systems and, more generally, file I/O for parallel computers. The pario page provides an overview, including an extensive bibliography and archive of material related to this topic. Our specific research projects are summarized below.
We developed the Armada parallel file system to allow a programmer more flexibility in specifying how data could flow from a set of I/O nodes to a set of computation nodes, in the context of large-scale computational grids. In these grids, network latency is significant, and it is important to pipeline the data flow. Armada allows the programmer to specify the data-transformation operators between the computation nodes and the I/O nodes, and internally optimizes the structure before automatically deploying the operators to intermediate nodes.
One of the big challenges facing research on parallel file systems was to develop a solid understanding of the workload: what parallel programmers actually do with parallel file systems. We launched a cooperative effort, called CHARISMA, to collect and analyze file-system traces from multiple applications on several different file systems. The CHARISMA project was unique in recording individual read and write requests in live, multiprogramming, parallel workloads (rather than from selected or non-parallel applications). The resulting papers are some of the only work to characterize production parallel computer systems.
We designed and implemented the Galley parallel file system to meet the needs of parallel scientific applications. Galley demonstrated the power of a split-level interface: a low-level interface that allowed efficient data transfers and in particular the ability of I/O nodes in a multiprocessor to execute some of the file-system code, and a set of high-level interfaces that may be specific to a programming language or application domain and thus most convenient for the programmer.
In the STARFISH project we developed the concept of disk-directed I/O, in which the application process requested a large parallel data transfer to or from a parallel file, and then the file system arranged the transfer of information between disks and memory in a way that suited the disks' own timing. The results show strong performance benefits, but only if suitable interfaces allow the application to make such requests known to the file system at a high level. STARFISH is the name of our simulator for experimenting with concepts in parallel file systems.
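The core idea of disk-directed I/O can be seen as a scheduling decision: given one large collective request, the I/O node services blocks in the order that suits the disk, not the order the application listed them. A much-simplified sketch (hypothetical names; real disk-directed I/O also manages memory buffers and network transfers between I/O and compute nodes):

```python
def disk_directed_schedule(request_blocks, disk_layout):
    """Given one large collective request (a list of block ids) and the
    blocks' physical positions on disk, service the blocks in ascending
    disk order to minimize seek time -- the essence of disk-directed I/O.
    Traditional request-at-a-time I/O would service them in arrival order."""
    return sorted(request_blocks, key=lambda b: disk_layout[b])

# Hypothetical layout: block id -> physical position on disk.
layout = {"b3": 7, "b0": 1, "b7": 2, "b5": 9}
print(disk_directed_schedule(["b3", "b0", "b7", "b5"], layout))
# Serviced in disk order: b0, b7, b3, b5
```

The key point the paragraph makes is visible here: the reordering is only possible because the application handed over the whole request at once, which is why the high-level interface matters.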
RAPID-Transit was a testbed for experimenting with caching and prefetching algorithms in parallel file systems (RAPID means "Read-Ahead for Parallel Independent Disks"), and was part of the larger NUMAtic project at Duke University. The testbed ran on Duke's 64-processor Butterfly GP1000. In our model, every processor had an attached disk, and each file was striped across all disks. Of course, Duke's GP1000 had only one real disk, so our testbed simulated its disks.
We developed the DAPPLE programming language, implemented as a C++ class library designed to provide the illusion of a data-parallel programming language on conventional hardware and with conventional compilers. DAPPLE defines Vectors and Matrices as basic classes, with all the C operators overloaded to provide for elementwise arithmetic. In addition, DAPPLE provides typical data-parallel operations such as scans, permutations, and reductions. Finally, DAPPLE provides a parallel if-then-else statement to restrict the context of the above operations to subsets of vectors or matrices.
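DAPPLE itself is a C++ class library, but the flavor of its overloaded elementwise operators, reductions, and context-restricted operations can be mimicked in a short Python sketch (illustrative method names, not DAPPLE's actual API):

```python
class Vector:
    """A toy data-parallel vector in the spirit of DAPPLE: elementwise
    operators, reductions, and context-restricted assignment.
    (DAPPLE is a C++ library; this only illustrates the programming style.)"""
    def __init__(self, data):
        self.data = list(data)
    def __add__(self, other):                  # elementwise arithmetic
        return Vector(a + b for a, b in zip(self.data, other.data))
    def __gt__(self, other):                   # elementwise comparison -> mask
        return [a > b for a, b in zip(self.data, other.data)]
    def sum(self):                             # a reduction
        return sum(self.data)
    def where(self, mask, value):
        """Restrict an operation to a context, like DAPPLE's parallel
        if-then-else: take `value` where mask is true, keep self elsewhere."""
        return Vector(v if m else d
                      for d, m, v in zip(self.data, mask, value.data))

x = Vector([1, 2, 3, 4])
y = Vector([4, 3, 2, 1])
z = x + y                  # elementwise add: [5, 5, 5, 5]
mask = x > y               # [False, False, True, True]
w = x.where(mask, y)       # [1, 2, 2, 1]
print(z.data, w.data, z.sum())
```

As in DAPPLE, the program reads like sequential code over whole vectors while the per-element loops stay hidden inside the class, which is the "illusion of a data-parallel language" the paragraph describes.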