Interactive visualizations have changed the way we understand our lives. For example, they can show the number of coronavirus infections in each country.
However, these graphics are often not accessible to people who use screen readers, software programs that scan screen content and present it through synthesized speech or Braille. Millions of Americans use screen readers for a variety of reasons, including complete or partial blindness, learning disabilities, or sensitivity to motion.
The team presented this project May 3 at CHI 2022 in New Orleans.
“If I’m looking at a chart, I can pull out whatever information interests me, maybe it’s the overall trend or maybe the maximum,” said lead author Ather Sharif, a doctoral student in the Paul G. Allen School of Computer Science & Engineering. “Right now, screen reader users either get very little or no information about online visualizations, which, in light of the COVID-19 pandemic, can sometimes be a matter of life and death. The goal of our project is to give screen reader users a platform where they can extract as much or as little information as they want.”
Screen readers can read aloud the text on a screen, which the researchers call “one-dimensional information.”
“There is a beginning and an end of a sentence and everything else comes in between,” said co-author Jacob O. Wobbrock, a UW professor in the Information School. “But as soon as you move things into two-dimensional spaces, such as visualizations, there’s no clear beginning and end. It’s just not structured in the same way, which means there’s no obvious entry point or sequence for screen readers.”
The team started the project by working with five screen reader users with partial or complete blindness to explore how a potential tool could work.
“In accessibility research, it’s really important to follow the principle of ‘nothing about us without us,’” Sharif said. “We’re not going to build something and then see how it works. We’re going to build it with user feedback. We want to build what they need.”
To implement VoxLens, visualization designers only need to add one line of code.
“We didn’t want people to jump from one visualization to another and experience inconsistent information,” Sharif said. “We made VoxLens a public library, which means that you’re going to hear the same kind of summary for all visualizations. Designers can just add that one line of code and then we do the rest.”
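The article describes VoxLens as generating consistent spoken summaries of charts, letting users pull out information such as the overall trend or the maximum. As an illustration only, and not the actual VoxLens API (which the article does not detail), a summary generator of that kind might look like this hypothetical sketch:

```javascript
// Hypothetical helper, not the real VoxLens library: builds the kind of
// textual summary a screen reader could speak for a data series.
function summarizeSeries(label, points) {
  // points: array of { x, y } pairs, e.g. weekly case counts
  const ys = points.map(p => p.y);
  const max = Math.max(...ys);
  const min = Math.min(...ys);
  const mean = ys.reduce((a, b) => a + b, 0) / ys.length;
  // Crude trend estimate: compare the last value with the first
  const trend = ys[ys.length - 1] > ys[0] ? "upward" : "downward or flat";
  const maxAt = points[ys.indexOf(max)].x;
  return `${label}: ${points.length} data points, ` +
         `maximum ${max} at ${maxAt}, minimum ${min}, ` +
         `average ${mean.toFixed(1)}, overall ${trend} trend.`;
}

// Example: a small series of weekly infection counts
const summary = summarizeSeries("Weekly infections", [
  { x: "Week 1", y: 120 },
  { x: "Week 2", y: 340 },
  { x: "Week 3", y: 280 },
]);
console.log(summary);
// → "Weekly infections: 3 data points, maximum 340 at Week 2,
//    minimum 120, average 246.7, overall upward trend."
```

In the real tool, a summary like this would be handed to the screen reader to speak, so every chart on the page yields the same style of description.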
The researchers evaluated VoxLens by recruiting 22 screen reader users who were either completely or partially blind. Participants learned how to use VoxLens and then completed nine tasks, each of which involved answering questions about a visualization.
Compared with participants in an earlier study who did not have access to the tool, VoxLens users completed the tasks with 122% increased accuracy and 36% decreased interaction time.
“We want people to interact with a chart as much as they want, but we also don’t want them to spend an hour trying to find the maximum,” Sharif said. “In our study, interaction time refers to how long it takes to extract information, so reducing it is a good thing.”
The team also interviewed six participants about their experiences.
“We wanted to make sure that the accuracy and interaction-time numbers we were seeing were also reflected in how the participants felt about VoxLens,” Sharif said. “We got really positive feedback. Someone told us they’d been trying to access visualizations for the last 12 years, and this was the first time they could do so easily.”
“This work is part of a much larger agenda for us – removing bias in design,” said co-author Katharina Reinecke, a UW associate professor in the Allen School. “When we build technology, we tend to think about people who are like us and who have the same abilities as us. For example, D3 has really revolutionized access to visualizations online and improved how people can understand information. But there are values ingrained in it, and people are left out. It’s really important that we start thinking more about how to make technology useful for everybody.”
Other co-authors on this paper are Olivia Wang, a UW undergraduate in the Allen School, and Alida Muongchan, a UW undergraduate studying human centered design and engineering. This research was funded by the Mani Charitable Foundation, the University of Washington Center for an Informed Public, and the University of Washington Center for Research and Education on Accessible Technology and Experiences.
Article publication date: April 29, 2022