Vocal Resonance: Using Internal Body Voice for Wearable Authentication
[liu:vocalresonance]
Rui Liu, Cory Cornelius, Reza Rawassizadeh, Ron Peterson, and David Kotz. Vocal Resonance: Using Internal Body Voice for Wearable Authentication. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT) (UbiComp), volume 2, number 1, article 19, 23 pages. ACM, March 2018. doi:10.1145/3191751. ©Copyright ACM.

Abstract:
We observe the advent of body-area networks of pervasive wearable devices, whether for health monitoring, personal assistance, entertainment, or home automation. For many devices, it is critical to identify the wearer, allowing sensor data to be properly labeled or personalized behavior to be properly achieved. In this paper we propose the use of vocal resonance, that is, the sound of the person's voice as it travels through the person's body -- a method we anticipate would be suitable for devices worn on the head, neck, or chest. In this regard, we go well beyond the simple challenge of speaker recognition: we want to know who is wearing the device. We explore two machine-learning approaches that analyze voice samples from a small throat-mounted microphone and allow the device to determine whether (a) the speaker is indeed the expected person, and (b) the microphone-enabled device is physically on the speaker's body. We collected data from 29 subjects, demonstrated the feasibility of a prototype, and show that our DNN method achieved a balanced accuracy of 0.914 for identification and 0.961 for verification using an LSTM-based deep-learning model, while our efficient GMM method achieved a balanced accuracy of 0.875 for identification and 0.942 for verification.
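As a rough illustration of the GMM-style verification pipeline the abstract describes, the sketch below fits a Gaussian mixture to acoustic features of the enrolled wearer and accepts or rejects new voice segments by thresholding their average log-likelihood, scoring the outcome with balanced accuracy (the metric reported in the paper). This is not the authors' implementation: the placeholder feature vectors, mixture size, and threshold are assumptions made only for the example.

    # Minimal sketch of GMM-based wearer verification (not the authors' code).
    # Placeholder "MFCC-like" feature frames stand in for features extracted
    # from the throat-mounted microphone's audio.
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.metrics import balanced_accuracy_score

    rng = np.random.default_rng(0)

    # Rows are frames, columns are feature coefficients (placeholder data).
    enrolled_frames = rng.normal(loc=0.0, scale=1.0, size=(500, 13))  # enrolled wearer
    impostor_frames = rng.normal(loc=1.5, scale=1.0, size=(500, 13))  # someone else

    # Enrollment: model the wearer's feature distribution with a GMM.
    gmm = GaussianMixture(n_components=8, covariance_type="diag", random_state=0)
    gmm.fit(enrolled_frames)

    def verify(frames, threshold):
        """Accept if the average per-frame log-likelihood exceeds the threshold."""
        return gmm.score(frames) > threshold  # score() is the mean log-likelihood

    # Evaluate with balanced accuracy (mean of true-positive and true-negative
    # rates); the threshold here is an arbitrary example value.
    threshold = -20.0
    segments = [enrolled_frames[i:i + 50] for i in range(0, 500, 50)] + \
               [impostor_frames[i:i + 50] for i in range(0, 500, 50)]
    labels = [1] * 10 + [0] * 10
    predictions = [int(verify(seg, threshold)) for seg in segments]
    print("balanced accuracy:", balanced_accuracy_score(labels, predictions))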
Citable with [BibTeX]
Projects: [thaw]
Keywords: [authentication] [iot] [security] [sensors] [wearable]
Available from the publisher: [DOI]
Available from the author: [bib] [pdf]
This pdf was produced by the publisher and its posting here is permitted by the publisher.