Soundability Lab

Building interactive systems to deliver better sound experiences to everyone

About us

We are a research lab within the University of Michigan's Computer Science and Engineering department. Our mission is to deliver interactive, rich, and meaningful sonic experiences for everyone. Our primary research areas span human-computer interaction, accessible computing, and sound design. We focus on the interplay between sounds (including speech and music) and sensory abilities (such as deafness, blindness, ADHD, autism, and non-disability), and we work on projects that both deliver sound information accessibly and use sound to make the world more accessible (e.g., audio-based navigation systems for blind people).

We embrace the term ‘accessibility’ in its broadest sense, encompassing not only tailored experiences for people with disabilities, but also the seamless and effortless delivery of information to all users. By prioritizing accessibility, we are able to gain early insight into the future, recognizing that individuals with disabilities have often been early adopters of many everyday technologies, from telephones and earphones to email, texting, subtitles, and smart speakers.

Our team consists of people from diverse backgrounds, including designers, engineers, musicians, architects, psychologists, doctors, and sociologists. This diversity allows us to approach technical sound accessibility challenges from a multi-stakeholder perspective. We follow an iterative design, building, evaluation, and deployment approach, resulting not only in valuable research insights in the field of human-computer interaction but also in tangible products with immediate real-world impact. Our work has been recognized with best-paper awards at premier conferences such as CHI, ASSETS, and UIST, featured in prominent press outlets (e.g., CNN, Forbes, New Scientist), released publicly (e.g., one app has over 100,000 users), and has influenced products at leading technology companies such as Google, Apple, and Microsoft.

Currently, we are focusing on the following research areas, with generous support from the National Institutes of Health (NIH), Google, and Michigan Medicine:

Interactive AI for Sound Accessibility. How can Deaf people, who cannot hear sounds, teach and train their own AI sound recognition models? How can sound recognition models adapt to changing contexts and environments? What sound cues will help provide holistic sound awareness to deaf and hard of hearing people, and how can AI help?
Projects: HomeSound | SoundWatch | AdaptiveSound | ProtoSound | InteractiveSound

Personalizable Soundscapes and Hearables. How can multiple intrusive environmental sound cues be delivered seamlessly? How can we dynamically adapt music based on the user's environment and context of use? How can earbuds be dynamically personalized to each user's hearing profile and fit? What promising sound interfaces can help manage hypersensitivity?
Projects: MaskSound | 3DSoundBuds

AR/VR Sound Experiences and Toolkits. How can developers easily integrate sound accessibility into their emerging VR apps? What features should VR developer toolkits support? What rich sound experiences can be enabled using AR technology? How can we seamlessly blend sound in the real and virtual world?
Projects: SoundVR | SoundBlender

Medical Communication Accessibility. How can speech technology improve communication for Deaf/disabled people in healthcare settings? What interfaces and form factors are most suitable for deployment in these settings? How do various stakeholders, including patients, physicians, and staff, react to these technologies?
Projects: MedCaption | HoloSound | CartGPT | SoundActions

We're continuously recruiting. If you are interested in these areas, please apply to work with us.

Recent News

Jan 23: Our Masters student, Jeremy Huang, has been accepted to the UMich CSE PhD program. That's two pieces of good news for Jeremy this month (the CHI paper being the first). Congrats, Jeremy!
Jan 19: Our paper detailing our brand new human-AI collaborative approach for sound recognition has been accepted to CHI 2024! We can't wait to present our work in Hawaii later this year!
Nov 10: Professor Dhruv Jain invited to give a talk on accessibility research in the Introduction to HCI class at the University of Michigan.
Oct 24: SoundWatch receives a best student paper nomination at ASSETS 2023! Congrats, Jeremy and team!
Aug 28: We welcome a new PhD student, Xinyun Cao, into our lab. Welcome Xinyun!
Aug 17: New funding notice! Our NIH funding proposal on "Developing Patient Education Materials to Address the Needs of Patients with Sensory Disabilities" has been accepted!
Aug 10: Professor Dhruv Jain invited for a talk on Sound Accessibility at Google.
Jun 30: Two papers from our lab, SoundWatch field study and AdaptiveSound, accepted to ASSETS 2023!
Apr 19: Professor Dhruv Jain awarded the Google Research Scholar Award.
Mar 16: Professor Dhruv Jain elected as the inaugural ACM SIGCHI VP for Accessibility!
Feb 14: Professor Dhruv Jain honored with the SIGCHI Outstanding Dissertation Award.
Jan 30: Professor Dhruv Jain honored with the William Chan Memorial Dissertation Award.

Our Team

Dhruv "DJ" Jain
Assistant Professor, Computer Science & Engineering (Lab head)

Xinyun Cao
PhD Student, Computer Science & Engineering

Jeremy Huang
PhD Student, Computer Science & Engineering

Liang-Yuan Wu
MS Student, Computer Science & Engineering

Andy Jin
Undergraduate Student, Computer Science & Engineering

Yuni Park
Undergraduate Research Assistant, Computer Science & Engineering

Hriday Chhabria
Undergraduate Student, Computer Science & Engineering

Reyna Wood
Undergraduate Student, Computer Science & Engineering

Rue-Chei Chang
PhD Student, Computer Science & Engineering

Anhong Guo
Assistant Professor, Computer Science & Engineering (Collaborator)

Xinyue Chen
PhD Student, Computer Science & Engineering

Xu Wang
Assistant Professor, Computer Science & Engineering (Collaborator)

Elijah Bouma-Sims
PhD Student, Carnegie Mellon University (Collaborator)

Lorrie Cranor
Professor, Carnegie Mellon University (Collaborator)

Michael M. McKee
Associate Professor, Michigan Medicine (Collaborator)

Alumni

Emily Tsai
Masters Student, School of Information

Mansanjam Kaur
Masters Student, School of Information

Yifan Zhu
Masters Student, Computer Science & Engineering

Andrew Dailey
Undergraduate Student, Computer Science & Engineering

Publications

We publish our research work in the most prestigious human-computer interaction and accessibility venues including CHI, UIST, and ASSETS. Eight of our articles have been honored with awards.


SoundWatch Field Study

Real-World Feasibility of Sound Recognition
(Best paper honorable mention)
ASSETS 2023: PAPER | CODE

Classes Taught by DJ

EECS 495: Accessible Computing

This upper-level undergraduate class serves as an introduction to accessibility and uses a curriculum designed by Professor Dhruv Jain. Students learn essential concepts related to accessibility, disability theory, and user-centric design, and contribute to a studio-style team project in collaboration with clients with a disability and relevant stakeholders we recruit. This intense 14-week class requires working in teams to lead a full-scale, end-to-end accessibility project from conceptualization to design, implementation, and evaluation. The goal is to reach a level of proficiency comparable to that of a well-launched employee team in the computing industry. Projects often culminate in real-world deployments and app releases.

Read more →

EECS 598: Advanced Accessibility

This graduate-level class focuses on advanced topics in accessibility, including disability theory, user research, and their impact on technology. It includes guest lectures by esteemed researchers and practitioners in the field of accessibility.

Read more →

Talks

Sound Sensing for Deaf and Hard of Hearing Users

Navigating Graduate School with a Disability

Deep Learning for Sound Awareness on Smartwatches

Field Study of a Tactile Sound Awareness Device

Field Deployment of an In-Home Sound Awareness System

Autoethnography of a Hard of Hearing Traveler

Exploring Sound Awareness in the Home

Online Survey of Wearable Sound Awareness

Towards Accessible Conversations in a Mobile Context

Immersive Scuba Diving Simulator Using Virtual Reality

HMD Visualizations to Support Sound Awareness

Lab Openings

Prospective PhD students: We are recruiting PhD students with either of the following skill sets: (1) prior HCI and user-study experience, to lead projects focused on Deaf/disabled populations, or (2) prior applied AI experience with sound and audio, combined with the ability to build user-facing systems and conduct user research. If you fit either profile, please email DJ at profdj [at] umich [dot] edu with a brief justification of your skill set (e.g., through relevant research experience), some example projects you'd like to pursue that build on our lab's work, and your CV.

Undergraduates/Masters students: Please complete this form and we will get back to you!

Prospective postdocs: Please email DJ with your research interests, your dissertation draft, and your CV.