
Some Interactive Visualizations Can Be Made Accessible to Screen-Reader Users by Adding One Line of Code

Interactive visualizations have changed the way we understand our lives. For example, they can show the number of coronavirus infections in each state.

But these visualizations are often not accessible to people who use screen readers, software programs that scan the contents of a computer screen and make them available via a synthesized voice or Braille. Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities or motion sensitivity.

University of Washington researchers worked with screen-reader users to design VoxLens, a JavaScript plugin that, with one additional line of code, allows people to interact with visualizations.

Using voice-activated commands, VoxLens users can hear a graph’s data translated into sound, listen to a high-level summary of the information, or ask specific questions about the data, such as the mean or the minimum value.
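As a rough illustration of the “high-level summary” mode described above (not VoxLens’s actual implementation), the sketch below computes a few descriptive statistics from a hypothetical data series and reads them aloud with the browser’s standard Web Speech API:

```js
// Hedged sketch: compute a high-level summary of a chart's data
// series and speak it aloud. The data and wording are hypothetical;
// only the Web Speech API call (speechSynthesis) is standard.
const data = [12, 7, 3, 9, 15]; // hypothetical y-values from a chart

const mean = data.reduce((sum, v) => sum + v, 0) / data.length;
const min = Math.min(...data);
const max = Math.max(...data);

const summary =
  `This chart has ${data.length} data points. ` +
  `The mean is ${mean.toFixed(1)}, the minimum is ${min}, ` +
  `and the maximum is ${max}.`;

// speechSynthesis is the browser's built-in text-to-speech interface.
window.speechSynthesis.speak(new SpeechSynthesisUtterance(summary));
```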

The team presented this project May 3 at CHI 2022 in New Orleans.

“If I’m looking at a graph, I can pull out whatever information I am interested in, maybe it’s the overall trend or maybe it’s the maximum,” said lead author Ather Sharif, a UW doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Right now, screen-reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen-reader users a platform where they can extract as much or as little information as they want.”

Screen readers can inform users about the text on a screen because it’s what researchers call “one-dimensional information.”

“There is a start and an end of a sentence and everything else comes in between,” said co-senior author Jacob O. Wobbrock, UW professor in the Information School. “But as soon as you move things into two-dimensional spaces, such as visualizations, there’s no clear start and finish. It’s just not structured in the same way, which means there’s no obvious entry point or sequencing for screen readers.”

The researchers started this project by working with five screen-reader users with partial or complete blindness to figure out how a potential tool could work.

“In the field of accessibility, it’s really important to follow the principle of ‘nothing about us without us,’” Sharif said. “We’re not going to build something and then see how it works. We’re going to build it taking users’ feedback into account. We want to build what they need.”

To implement VoxLens, visualization designers only need to add a single line of code.

“We didn’t want people to jump from one visualization to another and experience inconsistent information,” Sharif said. “We made VoxLens a public library, which means that you’re going to hear the same kind of summary for all visualizations. Designers can just add that one line of code and then we do the rest.”
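For a sense of what that integration might look like, here is a hedged sketch of adding VoxLens to a D3 chart. The package name, function name and argument order are assumptions based on the project’s public description; consult the VoxLens documentation for the exact signature.

```js
// Hedged sketch of wiring VoxLens into a D3 chart. The call shape is
// an assumption: one function call, passing the charting library's
// name, the chart's DOM element, the underlying data, and options
// telling VoxLens which fields to describe.
import voxlens from 'voxlens'; // npm package name is an assumption

const element = document.querySelector('#chart'); // hypothetical container

const data = [
  { month: 'Jan', cases: 120 },
  { month: 'Feb', cases: 95 },
  { month: 'Mar', cases: 210 },
];

// ... build the D3 chart into `element` as usual ...

// The single added line: VoxLens takes over generating summaries,
// sonification and voice-command answers for this visualization.
voxlens('d3', element, data, { x: 'month', y: 'cases' });
```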

The researchers tested VoxLens by recruiting 22 screen-reader users who were either completely or partially blind. After learning how to use VoxLens, participants completed nine tasks, each of which involved answering questions about a visualization.

VoxLens users completed the tasks with 122% more accuracy and 36% less interaction time than study participants who did not have access to this tool.

“We want people to interact with a graph as much as they want, but we also don’t want them to spend an hour trying to find what the maximum is,” Sharif said. “In our study, interaction time refers to how long it takes to extract information, and that’s why reducing it is a good thing.”

The team also interviewed six participants about their experiences.

“We wanted to make sure that these accuracy and interaction time numbers we saw were reflected in how the participants were feeling about VoxLens,” Sharif said. “We got really positive feedback. Someone told us they’ve been trying to access visualizations for the past 12 years and this was the first time they were able to do so easily.”

For now, VoxLens works only for visualizations created with JavaScript libraries, such as D3, chart.js or Google Sheets, though the team is working to expand support to other popular visualization platforms. The researchers also acknowledged that the voice-recognition system can be frustrating to use.

“This work is part of a much larger agenda for us removing bias in design,” said co-senior author Katharina Reinecke, UW associate professor in the Allen School. “When we build technology, we tend to think of people who are like us and who have the same abilities as we do. For example, D3 has really revolutionized access to visualizations online and improved how people can understand information. But there are values ingrained in it and people are left out. It’s really important that we start thinking more about how to make technology useful for everybody.”

Additional co-authors on this paper are Olivia Wang, a UW undergraduate student in the Allen School, and Alida Muongchan, a UW undergraduate student studying human centered design and engineering.

This research was funded by the Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences.