In wireless body area networks, an important open security problem is the one-body authentication problem: given a group of wireless sensors, can one detect whether they are all placed on the same body? Prior work has shown this is possible for sensors placed near each other; however, these techniques fail for sensors placed at extreme locations on the body. For example, suppose sensor A is placed on the ankle, sensor B on the waist, and sensor C on the wrist. Prior work would detect sensors A and B as being on the same body, as well as B and C. However, detecting that A and C are on the same body has proven difficult. We wish to take a step toward a solution to this problem.
We make the novel observation that perhaps there is a transitive relationship we can exploit. That is, if we know A and B are on the same body and B and C are on the same body, then it must be true that A and C are on the same body because of their common relationship with B.
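This transitive inference can be sketched with a standard union-find (disjoint-set) structure; the sensor names and the list of detected pairs below are hypothetical, and the pairwise detection itself is assumed to come from prior techniques.

```python
# Sketch: inferring same-body membership by transitivity with union-find.
# The detected pairs below are hypothetical example input.

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        # Locate the representative of x's group, with path halving.
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        # Merge the groups containing a and b.
        self.parent[self.find(a)] = self.find(b)

    def same_body(self, a, b):
        return self.find(a) == self.find(b)

uf = UnionFind()
for a, b in [("A", "B"), ("B", "C")]:  # pairs detected as same-body
    uf.union(a, b)

print(uf.same_body("A", "C"))  # True: inferred transitively through B
```

Each pairwise detection becomes a union, so any two sensors connected through a chain of detections end up in the same group.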
However, there is one caveat: we cannot know, a priori, where the sensors are placed on the body. Thus, an important first step is detecting the location of a sensor on the body, and that is the problem we wish to solve. We plan to consider at least 5 locations: the ankles, wrists, and waist. Additionally, we would like to be as energy conscious as possible, so choosing light-weight features and classification algorithms is a goal because we are working with energy-constrained devices.
Given a stream of accelerometer samples:
S = { s_1=<x_1, y_1, z_1>, ..., s_n=<x_n, y_n, z_n> }
we first want to extract the magnitude for each sample:
S_m = { ||s_1||, ..., ||s_n|| }
We take the magnitude of each sample because we want to be agnostic to orientation: we cannot expect users to apply a sensor in the same orientation every time they wear it, so it is best that our method not depend on orientation.
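As a minimal sketch of this step, the magnitude of each 3-axis sample is just its Euclidean norm; the sample values below are made up for illustration.

```python
import math

# Sketch: orientation-agnostic magnitudes from raw 3-axis samples.
# These sample values are hypothetical.
samples = [(0.0, 0.0, 1.0), (3.0, 4.0, 0.0), (1.0, 2.0, 2.0)]
magnitudes = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
print(magnitudes)  # [1.0, 5.0, 3.0]
```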
Next, because we are going to extract features from this time-series data, we need to examine a window of data. Choosing the size of this window is important since we want the features to capture something representative about the stream. Thus, we will experiment with both window size and overlap percentage.
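A minimal sketch of the windowing step follows; the window size and overlap values are placeholders, since choosing them well is exactly what we plan to experiment with.

```python
def windows(stream, size, overlap):
    """Yield fixed-size windows over the stream with the given
    fractional overlap (0 <= overlap < 1). Both parameters are
    placeholders to be tuned experimentally."""
    step = max(1, int(size * (1 - overlap)))
    for start in range(0, len(stream) - size + 1, step):
        yield stream[start:start + size]

# e.g. size-4 windows with 50% overlap over 10 samples:
print(list(windows(list(range(10)), size=4, overlap=0.5)))
# step of 2 gives windows [0..3], [2..5], [4..7], [6..9]
```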
Given a window of magnitudes, we need to extract some features from the stream. For these features, we defer to the activity recognition literature, which commonly extracts statistics such as the mean, standard deviation, and energy of the signal within a window.
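As an illustration, a small feature extractor over one window of magnitudes might look like the following; the particular features shown (mean, standard deviation, average energy) are common light-weight choices from the literature, not our final feature set.

```python
import math

def features(window):
    """Sketch: light-weight per-window features (mean, standard
    deviation, average energy). The exact feature set is illustrative,
    not the final choice."""
    n = len(window)
    mean = sum(window) / n
    var = sum((m - mean) ** 2 for m in window) / n
    energy = sum(m * m for m in window) / n
    return [mean, math.sqrt(var), energy]

print(features([1.0, 2.0, 3.0, 2.0]))  # [2.0, 0.7071067811865476, 4.5]
```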
Once we have a feature vector for each window, we can train various classifiers and see how well they perform using 10-fold cross validation. We plan to test a naive Bayes classifier, a nearest-neighbor classifier, and support vector machines with various kernels (linear, polynomial, and radial basis).
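The evaluation loop can be sketched as below with a hand-rolled nearest-neighbor classifier and k-fold split; in practice we would use a machine-learning library, and the toy dataset here (two well-separated "locations") is fabricated purely to show the pipeline.

```python
import random

def one_nn(train, query):
    """Classify query by its nearest neighbor in train (squared
    Euclidean distance). train is a list of (features, label) pairs."""
    def dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(train, key=lambda pair: dist(pair[0], query))[1]

def cross_validate(data, k=10, seed=0):
    """Average accuracy of 1-NN under k-fold cross-validation."""
    data = data[:]
    random.Random(seed).shuffle(data)
    folds = [data[i::k] for i in range(k)]
    correct = 0
    for i, test in enumerate(folds):
        train = [d for j, fold in enumerate(folds) if j != i for d in fold]
        correct += sum(one_nn(train, fv) == label for fv, label in test)
    return correct / len(data)

# Hypothetical toy data: two trivially separable "locations".
data = [([0.1 * i, 0.0], "ankle") for i in range(20)] + \
       [([10.0 + 0.1 * i, 5.0], "wrist") for i in range(20)]
print(cross_validate(data))  # 1.0 on this separable toy set
```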
We plan to collect our own dataset using iPhones. Each iPhone is equipped with a 3-axis accelerometer capable of sensing at 100Hz; in practice, however, we have only been able to reliably push them to 80Hz. We already have an application capable of collecting accelerometer data, but we will need to re-instrument it because we want to use several iPhones and have them take time-synchronized measurements. Once we have the necessary infrastructure in place, we plan to recruit several individuals, each of whom will wear an iPhone on their ankle, wrist, and belt. They will then walk a 10-minute course at least 3 times. This should give us the data we need to verify whether our method is suitable.
In the first week, we plan to re-instrument our accelerometer application so that multiple iPhones can quickly coordinate to take time-synchronized measurements. This is a relatively easy task since iPhones can communicate over an ad-hoc network via Bluetooth.
In the second week, we plan to recruit participants and collect the data as described above. This should bring us right up to the project milestone.
In the remaining weeks, we plan to test our method using the data we previously collected. Barring any unforeseen circumstances, this phase should take us right up to the end of the term.