About us
We are a research lab at the University of Michigan focused on building interactive systems to address pressing accessibility challenges. Our diverse team spans multidisciplinary backgrounds—including engineers, designers, architects, healthcare specialists, sociologists, and psychologists—allowing us to tackle accessibility problems holistically. Our projects undergo the full design, development, and evaluation cycle—starting from understanding a problem through a multi-stakeholder perspective, to building an end-to-end usable solution to address the problem, and, finally, deploying and studying that solution over extended use periods in the field. This holistic focus lets us achieve immediate real-world impact: our research has been publicly released (e.g., one system has been used by over 100,000 disabled people) and has directly influenced products at leading tech companies such as Microsoft, Google, and Apple.
Currently, we are most passionate about advancing the following four research areas:
1. Interactive AI for Deaf/Disabled People. Involving end users in the model training and personalization pipeline can increase a model's reliability, flexibility, and scalability. However, developing interfaces for non-expert users to interact with AI is challenging, and even more so for people with limited sensory abilities. For example, how can deaf and hard of hearing people, who may not be able to hear sounds themselves, record sounds to train a sound recognition model? Or, how can blind people assess the correctness and reliability of an image classification model? We're prototyping interfaces that help Deaf/Disabled people record training data, assess the quality of their samples, train an AI model, and evaluate its correctness, all by themselves.
2. Sound Recognition Technologies for Deaf and Hard of Hearing People. Current real-world sound recognition technologies categorize sounds into discrete events (e.g., 'washer running' or 'dog barking') but do not convey the state of these sounds (e.g., is my washing machine spinning too hard? Or, is my dog barking softly or in excitement/anger?). This state, or semantic, information is important for building a holistic picture of surrounding sound activity. We're building systems to sense, process, and convey this state information to deaf and hard of hearing (DHH) end users.
3. Customized Audio Experiences for Everyone. Building on our team's expertise in sound accessibility, we're exploring: how can we use audio augmented reality (AR) technology to deliver customized sound experiences in different contexts? This has many powerful applications—for example, enabling autistic people to manage auditory hypersensitivity in noisy environments, supporting blind people in navigating seamlessly through auditory cues while remaining aware of their surroundings, or helping anyone, including non-disabled people, prioritize how sound information is delivered in different contexts (e.g., while driving).
4. Social VR for Mixed-Ability Groups. What happens when people with different abilities (e.g., deaf, blind, and non-disabled) join a meeting or play a multiplayer game together? How will each person learn about and communicate access needs to the others? We're prototyping virtual reality (VR) interfaces, apps, and toolkits to support mixed-ability social interaction. Our hope is to use VR to model the future of mixed-ability interaction and inspire the next generation of technology creation in the real world.
If you are interested in any of these areas, please apply to work with us.