What is it about?
The authors designed SocialCueSwitch, a toolkit that makes social interactions in virtual reality (VR) accessible to people with different sensory abilities. In VR, social cues like gestures, eye contact, and proximity are typically conveyed visually or aurally, excluding people who are blind or have low vision (BLV) and those who are deaf or hard of hearing (DHH). SocialCueSwitch lets these cues be represented in multiple ways, including haptic feedback (touch), so users can choose the methods that best suit their needs: spatial audio can indicate who is speaking for BLV users, for example, while visual indicators or captions can help DHH users. Designed for easy integration into existing VR systems, the toolkit aims to make VR more inclusive and customizable, enhancing social interactions for everyone.
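To make the idea of per-user cue remapping concrete, here is a minimal TypeScript sketch of how a detected social cue could be routed to whichever sensory modalities a user selects. All names and types here are hypothetical illustrations of the concept, not the toolkit's actual API.

```typescript
// Hypothetical sketch of cue-to-modality remapping (not the real SocialCueSwitch API).

type SocialCue = "speaking" | "eyeContact" | "proximity" | "gesture";
type Modality = "spatialAudio" | "haptic" | "caption" | "visualHighlight";

// Each user maps every social cue to the set of modalities that suits them.
type CueProfile = Record<SocialCue, Modality[]>;

// Example profile for a BLV user: prefer audio and haptics over visuals.
const blvProfile: CueProfile = {
  speaking: ["spatialAudio"],
  eyeContact: ["haptic"],
  proximity: ["spatialAudio", "haptic"],
  gesture: ["haptic"],
};

// Example profile for a DHH user: prefer captions and visual indicators.
const dhhProfile: CueProfile = {
  speaking: ["caption", "visualHighlight"],
  eyeContact: ["visualHighlight"],
  proximity: ["visualHighlight"],
  gesture: ["caption"],
};

// Dispatch a detected cue to every output channel the user's profile selects.
function renderCue(cue: SocialCue, sourceAvatar: string, profile: CueProfile): void {
  for (const modality of profile[cue]) {
    console.log(`Render '${cue}' from ${sourceAvatar} via ${modality}`);
  }
}

renderCue("speaking", "Avatar-7", blvProfile); // -> rendered as spatial audio
renderCue("speaking", "Avatar-7", dhhProfile); // -> rendered as caption + highlight
```

The design point this illustrates is that the cue detection logic stays the same for every user; only the output mapping changes, which is what makes the experience customizable per sensory preference.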
Why is it important?
SocialCueSwitch addresses the accessibility of VR environments by focusing on social cues. While many existing VR accessibility tools address literal cues, such as identifying colors or reading text, SocialCueSwitch instead translates social interactions, like gestures, eye contact, and proximity, into multiple sensory modalities, including haptic feedback. This lets users, especially those who are BLV or DHH, receive these crucial social cues in ways that suit their sensory preferences. By enabling such nuanced, customizable interactions, SocialCueSwitch not only makes VR more inclusive but also enhances the social experience for a diverse range of users by providing clearer feedback across modalities. The system may also benefit anyone who wants clearer indications of social cues, such as people who are neurodiverse.
Read the Original
This page is a summary of: SocialCueSwitch: Towards Customizable Accessibility by Representing Social Cues in Multiple Senses, May 2024, ACM (Association for Computing Machinery), DOI: 10.1145/3613905.3651109.