With the goal of increasing access to making, engineers at Stanford University have collaborated with members of the blind and visually impaired community to develop a touch-based display that mimics the geometry of 3D objects designed on a computer.
Creating a 3D object with computer software is often the first step in producing it physically, and it can be burdensome for people who are blind or visually impaired. Even with 3D modeling software that offers more accessible ways of inputting designs, blind users still have to evaluate their work either by producing a physical version they can touch or by listening to a description provided by a sighted person.
“Design tools empower users to create and contribute to society but, with every design choice, they also limit who can and cannot participate,” said Alexa Siu, a graduate student in mechanical engineering at Stanford, who developed, tested and refined the system featured in this research. “This project is about empowering a blind user to be able to design and create independently without relying on sighted mediators because that reduces creativity, agency and availability.”
Follmer lab member Kai Zhang is a 2014 Stanford Graduate Fellow.