Partnership Award

By Patchen Barss
Forethought rather than afterthought: that’s the guiding principle of a groundbreaking five-day co-design festival hosted in 2023 by Ontario Tech University with the Canadian National Institute for the Blind (CNIB) and OCAD University. The festival brought together participants who identify as blind, partially sighted, Deafblind, or living with sensory processing disabilities to identify areas for design improvement in AI and to share insights on best practices with immersive technologies such as augmented and virtual reality. The virtual and in-person workshops actively involved the people affected by AI, who shared firsthand experiences of accessibility barriers, allowing the researchers to feed those lived experiences directly into their designs. By building accessibility into technology from the start, this work is ensuring that Dr. Peter Lewis and Dr. Mahadeo Sukhai’s new “explainable AI” tools are transparent, inclusive, and equitable, and it will shape recommendations for a national standard for disability-inclusive AI.
Peter Lewis, Canada Research Chair in Trustworthy Artificial Intelligence at Ontario Tech, and Mahadeo Sukhai, former vice-president of research and Chief Accessibility Officer at CNIB, now Chief Operating Officer of IDEA-STEM with adjunct faculty positions at Ontario Tech University, Queen’s University, and OCAD U, are focusing on an area of AI research where accessibility is both particularly important and uniquely challenging. “The early partnership should be illustrative for anyone engaging in research collaborations between the academic and nonprofit sectors, where both parties participate as equals. We lived that ideal and hope it will be an example for others in the future,” Sukhai says.
“Many AI systems operate as what are known as 'black boxes'—we don't know how they make their decisions, or which factors drive their output. Explainable AI provides people with this information, which helps determine whether the AI's output is trustworthy. For example, we know there are ongoing issues with AI systems around bias and discrimination. Explainability can help expose that,” Lewis explains. “However, today's explainable AI techniques almost all involve complex visuals, such as heatmaps or infographics. This fundamentally excludes people with sight loss from accessing the information. Unless we make them more accessible, many people will continue to be excluded from being able to exercise their rights to understand why an AI system that affects them did what it did.”
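To make the idea concrete, here is a minimal, hypothetical sketch (not the researchers’ actual tool) of what a non-visual explanation can look like: instead of rendering a heatmap, the per-feature contributions of a simple linear model are turned into plain sentences that a screen reader can speak, with the most influential factors first.

```python
def explain_as_text(feature_names, values, weights):
    """Return a screen-reader-friendly explanation of one linear-model
    prediction: each feature's contribution becomes a plain sentence."""
    # Contribution of each feature = its value times its learned weight.
    contributions = [(name, v * w)
                     for name, v, w in zip(feature_names, values, weights)]
    # Speak the most influential factors first.
    contributions.sort(key=lambda c: abs(c[1]), reverse=True)
    lines = []
    for name, c in contributions:
        direction = "raised" if c > 0 else "lowered"
        lines.append(f"{name} {direction} the score by {abs(c):.2f}.")
    return " ".join(lines)

# Hypothetical features and weights, purely for illustration:
print(explain_as_text(
    ["income", "age", "debt"],
    [50.0, 30.0, 10.0],
    [0.02, 0.01, -0.05],
))
# → income raised the score by 1.00. debt lowered the score by 0.50.
#   age raised the score by 0.30.
```

The same textual form could also feed a speech synthesizer or braille display, which is the kind of multi-modal output the project is working toward.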
The original co-design event inspired a wide range of innovations including options to replace images with sounds—instead of data visualizations, visually impaired people could access “data sonifications.”
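As a toy illustration of the sonification idea (assumed for this article, not the project’s actual method), the sketch below maps a data series to pitches, so that a rising trend is heard as a rising melody. Higher values become higher tones; only the Python standard library is used.

```python
import math
import struct
import wave

def sonify(values, path="sonification.wav", rate=44100, note_len=0.25):
    """Write a mono WAV file in which each data point becomes a short
    tone; higher values map linearly to higher pitches (220-880 Hz)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for flat data
    frames = bytearray()
    for v in values:
        freq = 220 + 660 * (v - lo) / span  # linear value-to-pitch map
        for i in range(int(rate * note_len)):
            sample = int(20000 * math.sin(2 * math.pi * freq * i / rate))
            frames += struct.pack("<h", sample)  # 16-bit signed samples
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

# A rising-and-falling series becomes an audible rise and fall:
sonify([3, 5, 2, 8, 6])
```

Real sonification systems add refinements such as smoothing between notes and audible axis markers, but even this bare mapping conveys the shape of the data without any visual display.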
As has been the case with many technological advances, accessibility considerations can also have benefits well beyond the community they were originally designed to serve.
“One of the things we find time and again is that building accessible-by-design tools and spaces helps everybody, not just people with specific needs,” Lewis says. “Our research is leading to multi-modal ways to interact with AI tools, to suit people's preferences, moving beyond outputs just designed by and for technologists.”