We are researchers who build and study interactive systems to solve pressing accessibility challenges. Our long-term vision is that accessibility will be integrated, by default, in all devices, systems, and environments. To achieve this vision, our lab has three focus areas:
Explore interfaces and systems that empower end-users to build or personalize assistive technologies on their own. This will enable future assistive technologies to accommodate the diversity of individual needs and preferences. Our current focus is on sound accessibility (e.g., deaf & hard of hearing people, people with auditory processing disorders). Skillset required: interactive machine learning | data visualization | HCI | sound engineering
Make toolkits, APIs, or guidelines to support developers in integrating accessibility into their apps, software, and devices. Our current focus is on XR (AR, VR) software. Skillset required: design | software engineering | prototyping
Explore technologies to ease communication between healthcare professionals and people with disabilities. Our current focus is on deaf & hard of hearing patients. Skillset required: qualitative & quantitative analysis | prototyping
For all three areas, we deeply involve end-users throughout our design and build pipeline: conducting studies to understand their needs, building technology systems to address those needs, and deploying and studying our systems with users in their natural environments. This strong focus on people-centric design has allowed us to make substantial real-world impact, from publicly releasing our technologies (e.g., one deployed system has over 100,000 users) to directly influencing products at Microsoft, Google, and Apple. Our research is published in the most prestigious human-computer interaction and computer science venues, such as CHI, UIST, and ASSETS, and has been honored with multiple awards. Much of what we build empowers non-disabled users as well, because accessibility affects everyone: we all find conversations difficult to hear in noisy bars or a phone difficult to see in direct sunlight, and we can all benefit from adaptable technologies.
If this interests you and you want to work with us, please apply below.
Apr 19: We are honored to receive the Google Research Scholar Award to advance Jeremy and Hriday's project on sound awareness!
Mar 16: Professor Dhruv Jain appointed as the ACM SIGCHI VP for Accessibility!
Feb 14: Professor Dhruv Jain honored with the SIGCHI Outstanding Dissertation Award!
Jan 30: Professor Dhruv Jain honored with the William Chan Memorial Dissertation Award!
Nov 22: Our Mixed Ability Autoethnography paper invited to feature in CACM Research Highlights!
We are always on the lookout for passionate students and collaborators to help us achieve our vision.
Prospective PhD students: Please read through the focus areas and the requisite skillsets mentioned above. If you have any of these skills and are interested in contributing to these areas, please send DJ an email at profdj [at] umich [dot] edu with a brief justification of your skill set (e.g., through relevant research experience), a list of potential project ideas you'd like to pursue, and your CV.
Undergrads/Masters/High School students: Please complete this form and we will reach out to you!
Prospective postdocs: Please send DJ an email with your reasoning for pursuing a postdoc position, a draft of your dissertation (if available), and your CV.