Indoor Location Recognition

Hong Lu & Daniel Peebles

Introduction

Modern mobile phones typically include GPS for sensing location. This feature is commonly used to provide rich location-aware applications on platforms such as Apple's iPhone and Google's Android. While GPS-based localization works well outdoors, satellite signals are blocked or heavily attenuated inside buildings, so it fails indoors.

In this situation, mobile platforms typically fall back on Wi-Fi-based localization, which, while potentially effective, lacks a standard set of base stations from which to infer location. Instead, services like Skyhook use a vehicle-mounted wireless scanner to profile wireless signal strengths. This usually works, but because the samples are taken from a vehicle, the system will always think the user is on a road, which is hardly ideal for determining indoor location!
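For concreteness, the basic idea behind Wi-Fi fingerprinting is nearest-neighbor matching in RSSI space. The sketch below is purely illustrative: the access-point names, RSSI values, and the `MISSING` sentinel are invented, not real survey data.

```python
import math

# Hypothetical fingerprint database: location -> RSSI (dBm) per access point.
fingerprints = {
    "lobby":   {"ap1": -40, "ap2": -70, "ap3": -85},
    "lab_201": {"ap1": -75, "ap2": -45, "ap3": -60},
    "hallway": {"ap1": -60, "ap2": -60, "ap3": -70},
}

MISSING = -100  # RSSI assumed for an access point not heard in a scan

def locate(scan):
    """Return the fingerprinted location closest to `scan` in RSSI space."""
    def dist(fp):
        aps = set(fp) | set(scan)
        return math.sqrt(sum((fp.get(ap, MISSING) - scan.get(ap, MISSING)) ** 2
                             for ap in aps))
    return min(fingerprints, key=lambda loc: dist(fingerprints[loc]))

print(locate({"ap1": -42, "ap2": -72, "ap3": -80}))  # prints "lobby"
```

A survey done from a vehicle effectively restricts the fingerprint database to road-side locations, which is why such systems snap users onto roads.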

We propose to achieve high-accuracy indoor localization by exploiting the full sensor suite available on the iPhone: audio, accelerometer data, (possibly) ambient light level, camera snapshots, and Wi-Fi scan results.

Method

We already have an application for jailbroken iPhones that we use to log data. Currently, it supports logging accelerometer, GPS, and audio, but could easily be expanded to capture images and Wi-Fi scans.

Using our software, we will sample locations in Sudikoff, and collect as much data as we can. We will either wear the iPhone on armbands during the data collection phase, or hold it out to take photographs at different locations in the building. We will make a point of collecting data at different times of day to account for lighting differences and differences in social behavior.

After collecting data, we will compute features from it and use the captured images to map each sample to an actual position on a 3D plan of Sudikoff. Features will include statistical and information-theoretic measures in the time and frequency domains (possibly cepstral) on the time-series data, and similar (possibly scale-invariant) measures on the images. We will also include raw Wi-Fi RSSI measurements for all access points in the building.
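As a sketch of the kind of time- and frequency-domain features we have in mind for a single sensor channel (the exact feature set is still open; these particular statistics are just examples):

```python
import numpy as np

def time_series_features(x, eps=1e-12):
    """Basic time- and frequency-domain features for one sensor channel."""
    x = np.asarray(x, dtype=float)
    feats = {
        "mean": x.mean(),
        "std": x.std(),
        # Fraction of samples where the mean-centered signal changes sign.
        "zero_crossing_rate": np.mean(np.abs(np.diff(np.sign(x - x.mean()))) > 0),
    }
    # Frequency domain: normalized power spectrum and its entropy, an
    # information-theoretic measure of how spread out the signal energy is.
    psd = np.abs(np.fft.rfft(x - x.mean())) ** 2
    p = psd / (psd.sum() + eps)
    feats["spectral_entropy"] = -np.sum(p * np.log2(p + eps))
    return feats

# Example: features of a noisy 5 Hz tone sampled at 100 Hz for one second.
t = np.arange(0, 1, 0.01)
sig = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.randn(t.size)
print(time_series_features(sig))
```

The same kinds of statistics apply per-frame to audio; image features (e.g. scale-invariant descriptors) would be computed separately.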

We then intend to use Gaussian process regression to learn a function mapping our high-dimensional feature vector (possibly including time) to a 3D point (with uncertainty) within our model. Depending on the results of this method, we will determine whether further smoothing is needed and whether our feature set needs adjusting. We will also evaluate how removing specific sensors and features affects overall performance.
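A minimal sketch of the regression step, assuming a squared-exponential (RBF) kernel and treating the three output coordinates as independent GPs with a shared kernel; the hyperparameters and the toy data are placeholders, not our actual setup:

```python
import numpy as np

def rbf_kernel(A, B, length_scale=1.0, variance=1.0):
    """Squared-exponential kernel between the rows of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(X_train, Y_train, X_test, noise=0.1, **kw):
    """GP regression posterior mean and per-point variance.

    Y_train may have multiple columns (e.g. x, y, z coordinates), each
    treated as an independent GP with the same kernel.
    """
    K = rbf_kernel(X_train, X_train, **kw) + noise**2 * np.eye(len(X_train))
    Ks = rbf_kernel(X_test, X_train, **kw)
    Kss = rbf_kernel(X_test, X_test, **kw)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, Y_train))
    mean = Ks @ alpha                      # posterior mean positions
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v**2, 0)   # posterior variance (uncertainty)
    return mean, var

# Toy check: 2-D feature vectors mapped to synthetic 3-D positions.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
Y = np.column_stack([X[:, 0], X[:, 1], X[:, 0] * X[:, 1]])
mean, var = gp_predict(X, Y, X[:3])
```

In practice our feature vectors will be far higher-dimensional than this toy example, and kernel hyperparameters would be fit to the data rather than fixed.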

Timeline

Adding the extra capabilities to our logger should take no more than a week or two, and data collection will be quick after that. We aim for a relatively dense dataset covering Sudikoff, and want to cover multiple days to avoid capturing transient anomalies. By the milestone, we expect to have the majority of our data collected and initial work on implementing the Gaussian process complete (we intend to write this code ourselves to familiarize ourselves with GPs).

Related Work