About us
We are a research lab within the University of Michigan's Computer Science and Engineering department. Our mission is to deliver rich, meaningful, and interactive sonic experiences for everyone through research at the forefronts of human-computer interaction, accessible computing, audio AI, and sound UX. We focus on the interplay between sounds (including speech and music) and people's sensory abilities (spanning, for example, deaf, blind, ADHD, autistic, and non-disabled people); our inventions include both delivering sound information accessibly and using sounds to make the world more accessible and perceivable (e.g., audio-based navigation systems for blind people).
We embrace the term ‘accessibility’ in its broadest sense, encompassing not only tailored experiences for people with disabilities, but also the seamless and effortless delivery of information to all users. We focus on accessibility, since we view it as a window into the future, recognizing that people with disabilities have historically been early adopters of many modern technologies such as telephones, headphones, email, messaging, and smart speakers.
Our team consists of people from diverse backgrounds, including designers, engineers, musicians, psychologists, physicians, and sociologists, allowing us to examine sound accessibility challenges from a multi-stakeholder perspective. Our research process is iterative, spanning design, building, evaluation, and deployment, and has often resulted in tangible products with substantial real-world impact (e.g., one deployed app has over 100,000 users). Our work has also directly influenced products at leading tech companies such as Microsoft, Google, and Apple, has received paper awards at premier HCI conferences, and has been featured in leading press venues (e.g., CNN, Forbes, New Scientist).
Currently, with generous support from the National Institutes of Health (NIH), Google, and Michigan Medicine, we are focusing on the following research areas:
AI Sound Awareness Systems. The questions we are actively exploring in this area lie at the intersection of AI and HCI: How can we build interfaces that help Deaf people, who may not be able to access sounds themselves, record sounds to teach and train their own personalized AI sound recognition models? How can AI sound recognition technology dynamically adapt and deliver relevant information in ever-changing user contexts and environments?
Projects: HomeSound | SoundWatch | AdaptiveSound | ProtoSound | HACSound | SoundWeaver
AR/VR Hearing Accessibility Toolkits. What toolkits can help developers meet hearing accessibility standards in their AR/VR apps? How do we balance the cognitive load that our new accessibility tools introduce for both developers and end users?
Projects: SoundVR | SoundModVR
Next-Generation Hearables. How can next-generation earphones seamlessly deliver both real and virtual information for multiple user groups? How can hearable technology help manage auditory hypersensitivity in affected groups (e.g., autistic individuals)? How can we make user-configurable hearing aids that adapt to physical changes (e.g., growing ears in DHH children), the user's environment (e.g., home vs. outdoor acoustics), and user needs (e.g., high- vs. low-focus tasks)?
Projects: MaskSound | SonicMold | SoundShift
HealthTech for Deaf/Disabled People. How can technologies improve communication and healthcare access for Deaf/disabled people? What interfaces and technology form factors are most suitable for deployment and adoption in high-stakes settings? How do different stakeholders, including doctors, patients, and medical staff, react to the newly deployed technologies?
Projects: CaptionMed | CARTGPT | HoloSound | SoundActions
We're actively recruiting students and postdocs! If you are interested, please apply to work with us.
Recent News
Jul 28: Two demos and one poster accepted to ASSETS/UIST 2024!
Jul 02: Two papers, SoundModVR and MaskSound, accepted to ASSETS 2024!
Jun 03: Alexander Wang joined our lab. Welcome, Alex!
May 22: Our paper SoundShift, which conceptualizes mixed-reality audio manipulations, accepted to DIS 2024! Congrats, Ruei-Che and team!
Mar 11: Our undergraduate student, Hriday Chhabria, accepted to the CMU REU program! Hope you have a great time this summer, Hriday.
Feb 21: Our undergraduate student, Wren Wood, accepted to the PhD program at Clemson University! Congrats, Wren!
Jan 23: Our Master's student, Jeremy Huang, has been accepted to the UMich CSE PhD program. That's two pieces of good news for Jeremy this month (the CHI paper being the first). Congrats, Jeremy!
Jan 19: Our paper detailing our brand new human-AI collaborative approach for sound recognition has been accepted to CHI 2024! We can't wait to present our work in Hawaii later this year!
Nov 10: Professor Dhruv Jain invited to give a talk on accessibility research in the Introduction to HCI class at the University of Michigan.
Oct 24: SoundWatch received a best student paper nomination at ASSETS 2023! Congrats, Jeremy and team!
Aug 28: A new PhD student, Xinyun Cao, joined our lab. Welcome Xinyun!
Aug 17: New funding alert! Our NIH funding proposal on "Developing Patient Education Materials to Address the Needs of Patients with Sensory Disabilities" has been accepted!
Aug 10: Professor Dhruv Jain invited to give a talk on Sound Accessibility at Google.
Jun 30: Two papers from our lab, SoundWatch field study and AdaptiveSound, accepted to ASSETS 2023!
Apr 19: Professor Dhruv Jain awarded the Google Research Scholar Award.
Mar 16: Professor Dhruv Jain elected as the inaugural ACM SIGCHI VP for Accessibility!
Feb 14: Professor Dhruv Jain honored with the SIGCHI Outstanding Dissertation Award.
Jan 30: Professor Dhruv Jain honored with the William Chan Memorial Dissertation Award.