About us
We are a research lab on the University of Michigan campus focused on building interactive systems to address pressing accessibility challenges. Our team draws on multidisciplinary backgrounds—including engineering, design, architecture, healthcare, sociology, and psychology—allowing us to tackle accessibility problems holistically. Our projects undergo the full design, development, and evaluation cycle—starting from understanding a problem through a multi-stakeholder perspective, to building an end-to-end usable solution to address the problem, and, finally, deploying and studying our solution over extended use periods in the field. Due to this holistic focus, we're able to achieve immediate real-world impact; our research work has been publicly released (e.g., one system has been used by over 100,000 disabled people) and has directly impacted products at leading tech companies such as Microsoft, Google, and Apple.
Currently, we are pushing boundaries in the following three research areas, with generous support from National Institutes of Health (NIH), Google, and Michigan Medicine:
1. Interactive AI for Accessibility. Involving end-users in the AI model training and personalization pipeline can increase a model's reliability, flexibility, and scalability. However, developing interfaces for people with limited sensory abilities to interact with AI is challenging. For example, how can deaf and hard of hearing people, who may not hear the sounds themselves, record sounds to train a sound recognition model? Or, how can blind people assess the correctness and reliability of an image classification model? We're prototyping interfaces that help Deaf/Disabled people record training data, assess the quality of their samples, train an AI model, and evaluate its correctness all by themselves.
2. Next-Generation Hearables. What capabilities should the next generation of hearables (e.g., earbuds, hearing aids) support, and how can innovations in AI sound recognition technology help? Who can benefit from the enhanced capabilities (e.g., deaf and hard of hearing people, people with autism, blind people, even non-disabled people), and what are the most promising interfaces for delivering these enhancements? Our lab is inventing algorithms, system pipelines, and interfaces to support novel and exciting interactions with the audio environment previously possible only in the realm of science fiction.
3. Accessibility Toolkits for AR/VR Devices. XR devices are poised to permeate every facet of our lives; however, accessibility remains a challenge. Indeed, accessibility has long been an afterthought when designing modern technologies, leading to sub-par and inaccessible user experiences. Fortunately, since XR is still emerging, we have a great opportunity to build accessibility into this medium from the start. Our lab is building and studying software toolkits that help AR and VR developers easily integrate accessibility into their apps.
We're continuously recruiting. If you are interested in working in these areas, please apply.
Recent News
Jan 23: Our Master's student, Jeremy Huang, has been accepted to the UMich CSE PhD program. That's two pieces of good news for Jeremy this month (the CHI paper being the first). Congrats, Jeremy!
Jan 19: Our paper detailing our brand new human-AI collaborative approach for sound recognition has been accepted to CHI 2024! We can't wait to present our work in Hawaii later this year!
Nov 10: Professor Dhruv Jain invited to give a talk on accessibility research in the Introduction to HCI class at the University of Michigan.
Oct 24: SoundWatch receives a Best Student Paper nomination at ASSETS 2023! Congrats, Jeremy and team!
Aug 28: We welcome a new PhD student, Xinyun Cao, into our lab. Welcome Xinyun!
Aug 17: New funding notice! Our NIH funding proposal on "Developing Patient Education Materials to Address the Needs of Patients with Sensory Disabilities" has been accepted!
Aug 10: Professor Dhruv Jain invited to give a talk on Sound Accessibility at Google.
Jun 30: Two papers from our lab, SoundWatch field study and AdaptiveSound, accepted to ASSETS 2023!
Apr 19: Professor Dhruv Jain awarded the Google Research Scholar Award.
Mar 16: Professor Dhruv Jain elected as the inaugural ACM SIGCHI VP for Accessibility!
Feb 14: Professor Dhruv Jain honored with the SIGCHI Outstanding Dissertation Award.
Jan 30: Professor Dhruv Jain honored with the William Chan Memorial Dissertation Award.