SoundSense: Scalable Sound Sensing for People-Centric Applications on Mobile Phones

by Hong Lu, Wei Pan, Nicholas D. Lane, Tanzeem Choudhury, Andrew T. Campbell
Citations: 139 (10 self)
BibTeX

@MISC{Lu_soundsense:scalable,
    author = {Hong Lu and Wei Pan and Nicholas D. Lane and Tanzeem Choudhury and Andrew T. Campbell},
    title = {SoundSense: Scalable Sound Sensing for People-Centric Applications on Mobile Phones},
    year = {}
}


Abstract

Top-end mobile phones include a number of specialized (e.g., accelerometer, compass, GPS) and general-purpose sensors (e.g., microphone, camera) that enable new people-centric sensing applications. Perhaps the most ubiquitous and unexploited sensor on mobile phones is the microphone – a powerful sensor that is capable of making sophisticated inferences about human activity, location, and social events from sound. In this paper, we exploit this untapped sensor not in the context of human communications but as an enabler of new sensing applications. We propose SoundSense, a scalable framework for modeling sound events on mobile phones. SoundSense is implemented on the Apple iPhone and represents the first general-purpose sound sensing system specifically designed to work on resource-limited phones. The architecture and algorithms are designed for scalability, and SoundSense uses a combination of supervised and unsupervised learning techniques to classify general sound types (e.g., music, voice) and to discover novel sound events specific to individual users. The system runs solely on the mobile phone with no back-end interactions. Through the implementation and evaluation of two proof-of-concept people-centric sensing applications, we demonstrate that SoundSense is capable of recognizing meaningful sound events that occur in users' everyday lives.
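The abstract's two-stage design (supervised classification of general sound types, plus unsupervised discovery of user-specific novel sounds) can be sketched roughly as follows. This is not the authors' implementation: the feature vectors, centroids, and distance thresholds below are illustrative placeholders, and a nearest-centroid rule stands in for whatever classifiers SoundSense actually uses.

```python
# Hedged sketch of a two-stage sound pipeline in the spirit of SoundSense:
# stage 1 assigns a frame's features to a coarse, pre-trained category;
# stage 2 groups unrecognized frames into user-specific "novel sound" clusters.
# All centroids and thresholds are made-up placeholders, not real model values.

from math import dist

COARSE_CENTROIDS = {        # hypothetical feature-space centroids
    "voice": (0.2, 0.8),
    "music": (0.8, 0.6),
    "ambient": (0.1, 0.1),
}
KNOWN_THRESHOLD = 0.3       # max distance to accept a coarse label
NOVEL_THRESHOLD = 0.2       # max distance to join an existing novel cluster

novel_clusters = []         # centroids of discovered user-specific sounds


def classify_frame(features):
    """Return a coarse label, or a novel-cluster id for unfamiliar sounds."""
    # Stage 1: supervised coarse classification (nearest centroid).
    label, d = min(
        ((name, dist(features, c)) for name, c in COARSE_CENTROIDS.items()),
        key=lambda pair: pair[1],
    )
    if d <= KNOWN_THRESHOLD:
        return label
    # Stage 2: unsupervised discovery of recurring novel sounds.
    for i, c in enumerate(novel_clusters):
        if dist(features, c) <= NOVEL_THRESHOLD:
            return f"novel-{i}"
    novel_clusters.append(tuple(features))
    return f"novel-{len(novel_clusters) - 1}"
```

A frame near a known centroid gets its coarse label; a frame far from every known and novel centroid seeds a new cluster, so a recurring unfamiliar sound is recognized the next time it occurs – mirroring how the system can learn sounds specific to an individual user without back-end support.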

Keyphrases

mobile phone, scalable sound sensing, people-centric application, sound event, unexploited sensor, human activity, top-end mobile phone, novel sound event, subject descriptor, human communication, new people-centric sensing application, untapped sensor, general sound type, user everyday life, powerful sensor, back-end interaction, general-purpose sensor, individual user, meaningful sound event, Apple iPhone, proof-of-concept people-centric sensing application, sophisticated inference, social event, scalable framework, first general purpose sound

Developed at and hosted by The College of Information Sciences and Technology

© 2007-2019 The Pennsylvania State University