Soundability Lab

Transforming the Human Experience of Sound

About us

We are a research lab within the University of Michigan's Computer Science and Engineering department. Our mission is to deliver rich, meaningful, and interactive sonic experiences for everyone through research in human-computer interaction, audio AI, accessible computing, and sound UX. We have two focus areas: (1) sound accessibility, which includes designing systems and interfaces to deliver sound information accessibly and seamlessly to end-users, and (2) hearing health, which includes developing hardware, algorithms, and apps for next-generation earphones and hearing aids.

We embrace the term ‘accessibility’ in its broadest sense, encompassing not only tailored experiences for people with disabilities, but also the seamless and effortless delivery of information to all users. Our focus on accessibility provides us a window into the future, as people with disabilities have often been early adopters of many modern technologies such as telephones, headphones, email, messaging, and smart speakers.

Our lab is primarily composed of HCI and AI experts. We also regularly collaborate with scientists from medicine, psychology, sociology, music, and design backgrounds. Our multi-stakeholder approach has led to significant community impact. Many of our technologies have been publicly released (e.g., one deployed app has over 100,000 users) and have directly influenced products at companies like Microsoft, Google, and Apple. Our research has also earned multiple paper awards at premier HCI venues, has been featured in leading media outlets (e.g., CNN, Forbes, New Scientist), and is included in academic curricula worldwide.

Our current impact areas include media accessibility (e.g., enhanced captioning for movies, accessible sound augmentations for VR) and healthcare accessibility (e.g., technologies to support communication within mixed-ability physician teams, modeling patients' hearing health to improve hearing aids). Key research focuses and questions are:

Intent-Driven Sound Awareness Systems. How can sound awareness technologies model users' intent and deliver context-aware sound feedback?
Projects: AdaptiveSound | ProtoSound | HACSound | SoundWeaver

Sound Accessibility in Media. How can generative AI improve accessibility of sounds in mainstream media or new media?
Projects: SoundVR | SoundModVR | SoundShift

Next-Generation Hearing Aids & Earphones. How can next-generation earphones extract desired sounds or suppress unwanted noise to provide a seamless hearing experience? How can we dynamically capture audiometric data, model human auditory perception, and diagnose hearing-related medical conditions on the edge?
Projects: MaskSound | SonicMold | SoundShift

In the future, we envision a world where technologies will expand human hearing, enabling highly personalized, seamless, and fully accessible soundscapes that dynamically adapt to users' intent, environment, and social context. We call this vision "auditory superintelligence".

If our vision and current focus areas appeal to you, please apply. We are actively recruiting PhD students and postdocs to join us in shaping the future of sound accessibility!

Recent News

Oct 30: Our CARTGPT work received the best poster award at ASSETS!
Oct 11: Soundability lab students are presenting 7 papers, demos, and posters at the upcoming UIST and ASSETS 2024 conferences!
Sep 30: We were awarded the Google Academic Research Award for Leo and Jeremy's project!
Jul 28: Two demos and one poster accepted to ASSETS/UIST 2024!
Jul 02: Two papers, SoundModVR and MaskSound, accepted to ASSETS 2024!
May 22: Our paper SoundShift, which conceptualizes mixed reality audio manipulations, accepted to DIS 2024! Congrats, Rue-Chei and team!
Mar 11: Our undergraduate student, Hriday Chhabria, accepted to the CMU REU program! Hope you have a great time this summer, Hriday.
Feb 21: Our undergraduate student, Wren Wood, accepted to the PhD program at Clemson University! Congrats, Wren!
Jan 23: Our Masters student, Jeremy Huang, has been accepted to the UMich CSE PhD program. That's two pieces of good news for Jeremy this month (the CHI paper being the first). Congrats, Jeremy!
Jan 19: Our paper detailing our brand new human-AI collaborative approach for sound recognition has been accepted to CHI 2024! We can't wait to present our work in Hawaii later this year!
Oct 24: SoundWatch received a best student paper nomination at ASSETS 2023! Congrats, Jeremy and team!
Aug 17: New funding alert! Our NIH funding proposal on "Developing Patient Education Materials to Address the Needs of Patients with Sensory Disabilities" has been accepted!
Mar 16: Professor Dhruv Jain elected as the inaugural ACM SIGCHI VP for Accessibility!

Our Team

Dhruv "DJ" Jain
Assistant Professor, Computer Science & Engineering (Lab head)

Xinyun Cao
PhD Student, Computer Science & Engineering

Jeremy Huang
PhD Student, Computer Science & Engineering

Alexander Wang
Visiting Researcher, Computer Science & Engineering

Liang-Yuan Wu
MS Student, Computer Science & Engineering

Hriday Chhabria
Undergraduate Student, Computer Science & Engineering

Hanlong Liu
Undergraduate Student, Computer Science & Engineering

Yuni Park
Undergraduate Research Assistant, Computer Science & Engineering

Andy Jin
Undergraduate Student, Computer Science & Engineering

Rue-Chei Chang
PhD Student, Computer Science & Engineering

Anhong Guo
Assistant Professor, Computer Science & Engineering (Collaborator)

Xinyue Chen
PhD Student, Computer Science & Engineering

Xu Wang
Assistant Professor, Computer Science & Engineering (Collaborator)

Elijah Bouma-Sims
PhD Student, Carnegie Mellon University (Collaborator)

Lorrie Faith Cranor
Professor, Carnegie Mellon University (Collaborator)

Michael M. McKee
Associate Professor, Michigan Medicine (Collaborator)

Alumni

Wren "Reyna" Wood
Undergraduate Student, Computer Science & Engineering

Emily Tsai
Masters Student, School of Information

Mansanjam Kaur
Masters Student, School of Information

Andrew Dailey
Undergraduate Student, Computer Science & Engineering

Publications

We publish our research work in the most prestigious human-computer interaction and accessibility venues including CHI, UIST, and ASSETS. Nine of our articles have been honored with awards.

CARTGPT (Award)
ASSETS 2024 (Poster): PAPER

SoundWatch Field Study
Real-World Feasibility of Sound Recognition
(Best paper honorable mention)
ASSETS 2023: PAPER | CODE

Classes Taught by DJ

EECS 495: Accessible Computing

This upper-level undergraduate class serves as an introduction to accessibility and uses a curriculum designed by Professor Dhruv Jain. Students learn essential concepts related to accessibility, disability theory, and user-centered design, and contribute to a studio-style team project in collaboration with clients with a disability and relevant stakeholders we recruit. This intensive 14-week class requires working in teams to lead a full-scale, end-to-end accessibility project from conceptualization to design, implementation, and evaluation. The goal is to reach a level of proficiency comparable to that of a well-launched employee team in the computing industry. Often, projects culminate in real-world deployments and app releases.

Read more →

EECS 598: Advanced Accessibility

This graduate-level class focuses on advanced topics in accessibility, including disability theory, user research, and their impact on technology. It includes guest lectures by esteemed researchers and practitioners in the field of accessibility.

Read more →

Talks

Sound Sensing for Deaf and Hard of Hearing Users

Navigating Graduate School with a Disability

Deep Learning for Sound Awareness on Smartwatches

Field Study of a Tactile Sound Awareness Device

Field Deployment of an In-Home Sound Awareness System

Autoethnography of a Hard of Hearing Traveler

Exploring Sound Awareness in the Home

Online Survey of Wearable Sound Awareness

Towards Accessible Conversations in a Mobile Context

Immersive Scuba Diving Simulator Using Virtual Reality

HMD Visualizations to Support Sound Awareness

Videos

Lab Openings

Prospective PhD students: Our lab has openings for up to three PhD students (beginning Fall 2025) in two areas: (1) data science and AI for acoustics and/or hearing health, and (2) AR/VR interaction design for sound accessibility. Please see our research focus. If you believe you are the right fit, apply to the UMich CSE PhD program and email Prof. DJ at profdj [at] umich [dot] edu with: (1) a brief description of yourself and your skillset, supported by relevant prior experience, (2) some examples of projects you'd like to pursue in your PhD, and (3) your CV. We look forward to hearing from you!

Undergraduates/Masters students: Please complete this online intake form and we will get back to you when we have openings.

Potential postdocs: We are recruiting a postdoc in the area of sound accessibility with a start date of your choice in 2025. If interested, please email Prof. DJ with your research interests, a draft of your dissertation (an early writeup is fine), and your CV. Official posting coming soon!