Making data visualizations accessible to blind and visually impaired people
By: Jessica Lachance, Elections Canada
Introduction
Throughout the 21st century, the amount and variety of data available to citizens, researchers, industry, and government has grown exponentially (see: Volume of data/information created, captured, copied, and consumed worldwide from 2010 to 2020, with forecasts from 2021 to 2025). With that growth comes an expectation that we'll use the data to inform policies, business decisions and consumer choice.
The human mind can't efficiently interpret this amount of raw data, so we need to summarize our data to understand its features. The dominant tools to help us understand data are data visualizations. Visualizations range from simple static images to interactive software that lets you choose specific data and display curated summaries.
Despite their benefits, data visualizations present significant barriers for Blind and visually impaired (BVI) people. These barriers leave BVI people less able to participate effectively in public discourse and in their workplaces, or to make informed choices. For example, current alt-text guidelines aren't always based on well-researched evidence, leaving gaps that prevent BVI people from gleaning statistical information at the same speed as sighted users. How does someone understand what it means to "flatten the curve" if they can't see the associated graph? (See: Making data visualizations more accessible).
We'll further explore these key needs and the methods used to help improve the accessibility of data visualizations and provide non-visual alternatives to present data. We'll then study a few solutions that work for a number of visualization types.
Current accessibility guidelines
The Web Content Accessibility Guidelines (WCAG) 2.1 are the international web accessibility standard. Web accessibility is founded on four main principles – content should be perceivable, operable, understandable and robust (see: Understanding the Four Principles of Accessibility). The Canada.ca Content Style Guide, which bases its rules on WCAG 2.1, says that writers should include a "long description" for charts and a shorter alt-text for a high-level description. For charts, it suggests that an HTML table could be used as the long description.
But researchers and advocates argue that this doesn't always meet BVI people's needs: HTML tables require BVI users to exert more effort (that is, a higher cognitive load) to answer simple questions such as "which data series has the maximum?" – information sighted users can glean at a glance. Alt-texts also don't always provide sufficient detail, especially when it comes to the spatial information of the graph.
Features of accessible data visualizations
We found that each researcher uniquely defined what makes a data visualization operable and what information is necessary to make it understandable and robust. The following dimensions must be considered to make data visualizations accessible.
Data-related tasks should require an equal cognitive load and equal effort: As noted above, HTML tables require a higher cognitive load to identify statistical features, like the minimum or maximum. That's not to say HTML tables aren't valuable. In general, BVI users appreciate this feature, and most sighted users do, too. However, when the HTML table is the only interface, it can sometimes be overwhelming.
Provide information at varying levels of complexity: Not all data-related tasks are equal. Information with varying levels of complexity can be shown with a visualization. One interesting article from Lundgard and Satyanarayan (see: Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content) defines four distinct levels of semantic content a data visualization description can convey.
The four levels are:
- Listing visualization construction properties (e.g., axes, chart type, colours)
- Reporting statistical concepts and relations
- Identifying perceptual and cognitive phenomena
- Extracting domain-specific insights
Their study found that both Blind and sighted participants found level 3 content most useful. Blind participants found level 2 content to be more useful than their sighted counterparts but found level 4 content much less useful.
These different semantic levels also align with Shneiderman's Visual Information-Seeking mantra to "Overview first, zoom and filter, then details-on-demand," as written in his highly influential work at the dawn of online visualizations (see: Schneiderman's Mantra | Data Visualization). This also applies to BVI users – they want to picture the overview of the chart, zoom in to it, and filter it to get statistical concepts, relations, and details.
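To make the first two semantic levels concrete, here is a minimal sketch of how descriptions at levels 1 and 2 could be generated programmatically. The function names, data, and wording are illustrative, not taken from Lundgard and Satyanarayan's study:

```typescript
// Sketch: generating level-1 (construction properties) and level-2
// (statistical concepts) descriptions for a simple bar chart.
interface Series {
  title: string;
  chartType: string;
  labels: string[];
  values: number[];
}

// Level 1: visualization construction properties (chart type, title, size).
function level1Description(s: Series): string {
  return `${s.chartType} chart titled "${s.title}" with ${s.values.length} categories.`;
}

// Level 2: statistical concepts and relations (extrema and their labels).
function level2Description(s: Series): string {
  const max = Math.max(...s.values);
  const min = Math.min(...s.values);
  const maxLabel = s.labels[s.values.indexOf(max)];
  const minLabel = s.labels[s.values.indexOf(min)];
  return `Values range from ${min} (${minLabel}) to ${max} (${maxLabel}).`;
}

// Hypothetical example data.
const turnout: Series = {
  title: "Voter turnout by age group",
  chartType: "Bar",
  labels: ["18-24", "25-44", "45-64", "65+"],
  values: [54, 62, 71, 75],
};
console.log(level1Description(turnout));
console.log(level2Description(turnout));
```

A description that combines both levels already answers the "which is the maximum?" question that a bare HTML table leaves to the reader.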
Paint a mental picture: Some existing guidelines state that details like graph colours or axis descriptions should be omitted to reduce cognitive load. But multiple studies found these details to be important because they helped BVI users picture the charts. That mental picture helped them communicate results with sighted colleagues, understand data visualizations as a statistical tool, and make sense of less common chart types.
BVI users want to know "what the author wants you to know": The majority of BVI users found that levels 2 and 3, where we start to understand the gist of the data, are also where the current accessibility guidelines are most lacking. Trends and features are easier to remember when presented alongside physical descriptions of the graph. This mental picture helps users easily recall where to find statistical information like the minimum, the maximum, or where two lines intersect.
Present data objectively: BVI participants in studies emphasized that accessible descriptions shouldn't contain the editorialized content associated with level 4 semantic content. BVI users should be able to verify any claims made about the data themselves, rather than being given an editorialized view beyond what a sighted user would have access to. They also expressed a preference for descriptions written in an objective tone.
Make sure the solution is appropriate for web-browsing: We can also consider the scope of the solution. A disability dongle is a well-intentioned solution that prioritizes form over its function as an accessibility device. Making data visualizations accessible doesn't require a fancy new tool. Ideally, solutions should be compatible with common technology for BVI people, like braille displays and screen readers. When solutions require costly software or hardware to operate, their operability and robustness suffer.
Alternative accessible solutions
We described six key needs of BVI users when engaging with data visualizations:
- Provide information at varying levels of complexity
- Paint a mental picture of the visualization
- Let BVI users know what you, as the author, want them to know
- Present the data objectively
- Integrate with accessibility tools BVI users already have, or be easily integrated within the browser
- Have a solution that works for many different kinds of graphs
We've already described how dominant recommendations fall short. This section describes some of the alternative accessible solutions for data visualizations and their pros and cons.
Sonification
Sonification refers to the use of sound scales to map data in a way that is analogous to colour scales. Sounds can change pitch, timbre, or volume to represent changes in data. Research into graph sonification focuses on how to map data to sounds to optimize BVI user understanding.
Sonification, like visualization, can be processed in parallel, making it well-suited for multidimensional data. With integrations of a 3D soundscape device, sonification can even be used to plot data with spatial relationships, like choropleths. Check out the Data Sonification Archive for examples.
BVI users found that sonification helped them "visualize" the graph, but there was a learning curve to overcome. Lack of standards also creates challenges for designers trying to integrate sonification scales into the online environment, though several new open-source libraries (TwoTone Data Sonification, Highcharts Sonification Studio) promise to create sonification scales from data.
Haptics
Haptics refer to the sense of touch, for example through sensations of force or friction against one's finger. Haptics can help users feel the ups and downs of a line graph or the height of a bar graph against gridlines.
Research into haptic visualization argued that many BVI people learn through touch, therefore tactile representation of the graph can make it easier to link what they may have learned in school to what is being presented in front of them. But in practice, that's not always the case. Some users found the different frictions "disturbing" and "confusing," leading them to incorrectly perceive the layout of a complex graph. This could be because the range of values perceivable by the eyes is "orders of magnitude" greater than what can be perceived by touch. So, while haptics may be useful for gaining the gist of a graph, they're not well-suited to identifying precise data points.
Another drawback is that without an easily available library to work with, the average website designer isn't able to create a data visualization that maps to haptic feedback. The lack of haptic libraries and the need for a specialized haptic device could result in just creating a disability dongle.
Accessible descriptions
Accessible descriptions involve writing a description of the data visualization. While this sounds like alt-texts in WCAG 2.1, the strategies below take it a step further than the current guidelines.
Alt-texts are not all bad. One study asked participants to evaluate the quality of alt-texts found in academic journals, and the participants appreciated when they contained information recommended by alt-text guidelines. Where these guidelines fall short is in the first three points mentioned in the introduction. Alt-texts are great at conveying information like the subject and graph type, but don't give BVI users a summary of the data or statistical features.
For designers, it would be a tedious, if not impossible, task to write out these descriptions by hand – more so when the charts are interactive and allow users to choose different views of a graph. As a result, research for the next generation of accessible descriptions focuses on three possibilities:
- Allowing BVI users to "navigate" a visualization, as they do for a current webpage
- Programmatically generate natural language descriptions
- Presenting an interactive query mode where users can ask questions about the data in natural language
Navigable Scalable Vector Graphics
HTML structures a webpage into hierarchical sections, and each tag describes the content. Navigating these structures is familiar to BVI users who use screen readers to navigate the web.
Some researchers and web developers are promoting the use of HTML elements to create data visualizations and present accessible descriptions for BVI people. Scalable Vector Graphics (SVGs) have been popular for programmatically building data visualizations online, and are used by Chart.js, D3.js - Data-Driven Documents, Google Charts, and others. When done right, each tag in an SVG can be organized so a tree of shapes can be traversed by screen-readers in a meaningful way (for example: Semiotic, Highcharts' accessibility module, Accessibility in d3 Bar Charts | a11y with Lindsey).
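The idea can be sketched by hand with a few lines of string building. The structure below uses the `graphics-document` and `graphics-symbol` roles from the WAI-ARIA Graphics Module so each bar becomes a labelled node in the accessibility tree; the layout and labels are illustrative, and real libraries like Highcharts' accessibility module generate much richer trees:

```typescript
// Sketch: build an SVG bar chart whose elements carry roles and labels
// so a screen reader can traverse the bars as a meaningful tree.
function accessibleBarChart(labels: string[], values: number[]): string {
  const max = Math.max(...values);
  const bars = values
    .map((v, i) => {
      const h = (v / max) * 100; // bar height as a percentage of the tallest
      // Each bar is a labelled node a screen reader can announce.
      return `  <rect role="graphics-symbol" aria-label="${labels[i]}: ${v}" ` +
        `x="${i * 30}" y="${100 - h}" width="20" height="${h}"></rect>`;
    })
    .join("\n");
  return (
    `<svg role="graphics-document" aria-label="Bar chart, ${values.length} bars" ` +
    `viewBox="0 0 ${values.length * 30} 100">\n${bars}\n</svg>`
  );
}

console.log(accessibleBarChart(["A", "B", "C"], [10, 20, 15]));
```

A screen reader moving through this SVG announces each bar's label and value in order, instead of skipping the graphic entirely.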
Natural language generation
Natural language generation is a domain of machine learning that automatically generates text that sounds like a human wrote it. This could be used to create a description of a graph and a summary of its data. Several libraries currently exist (like VoxLens, evoGraphs) but are limited to 2D charts, like bar or line charts.
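Production systems rely on trained language models, but the basic idea can be sketched with a template filled from computed statistics. Everything below – the function, the data, and the wording – is a hypothetical simplification, not how any of the named libraries work internally:

```typescript
// Sketch: a template-based description of a series' overall trend,
// a simplified stand-in for learned natural language generation.
function trendDescription(name: string, values: number[]): string {
  const first = values[0];
  const last = values[values.length - 1];
  const direction =
    last > first ? "increased" : last < first ? "decreased" : "remained stable";
  return `${name} ${direction} from ${first} to ${last} over the period.`;
}

console.log(trendDescription("Internet use", [79, 83, 91, 94]));
// "Internet use increased from 79 to 94 over the period."
```

Even this toy version shows the appeal: the description regenerates automatically whenever the underlying data changes, which hand-written alt-text cannot do.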
Interactive query models
Interactive query modes allow users to control how much information they receive at once. These natural language interfaces allow users to ask questions of varying complexity without overwhelming them and complement other methods described in this article.
Interactive query models can also be intuitive to learn. In one study, VoxLens users were given the option to sonify a graph or use an interactive dialogue. Most preferred the interactive dialogue.
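A toy sketch of such a dialogue, matching keywords in a question against precomputed statistics (real systems like VoxLens parse far richer natural language; the matching, data, and wording here are illustrative):

```typescript
// Sketch: answer simple questions about a data series by keyword
// matching. Real interfaces handle far more varied phrasing.
function answer(query: string, labels: string[], values: number[]): string {
  const q = query.toLowerCase();
  if (q.includes("max")) {
    const i = values.indexOf(Math.max(...values));
    return `The maximum is ${values[i]}, at ${labels[i]}.`;
  }
  if (q.includes("min")) {
    const i = values.indexOf(Math.min(...values));
    return `The minimum is ${values[i]}, at ${labels[i]}.`;
  }
  if (q.includes("average") || q.includes("mean")) {
    const mean = values.reduce((a, b) => a + b, 0) / values.length;
    return `The average is ${mean.toFixed(1)}.`;
  }
  return "Sorry, I can answer questions about the maximum, minimum, or average.";
}

const days = ["Mon", "Tue", "Wed", "Thu"];
const visits = [12, 30, 22, 18];
console.log(answer("What is the maximum?", days, visits));
// "The maximum is 30, at Tue."
```

Because the user asks only for what they need, the response stays short – the "details-on-demand" end of Shneiderman's mantra.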
But interactive query modes, especially in natural language, push the limits of our current computational capacities, especially for nuanced or context-dependent analysis of data. The current challenge for both natural language generation and interactive query models is the capacity to make a solution that is robust for even complex visualizations.
Multi-modal visualizations
A multi-modal approach, as the name implies, combines the previously described techniques to communicate the data. The main advantages are:
- The weakness of one method can be reinforced by another method
- Multi-modal solutions account for a wider range of user preferences
- Having multiple sensory inputs can reduce the cognitive load it takes to understand data
The most common multi-modal solution is the combination of sonification and haptics. Because touch is a familiar learning method for some BVI people, it can reinforce the information being provided by sonification, which is less known. Conversely, sound hints can reinforce haptic perception, like playing a sound when a user switches haptically to a different data series marked by a different friction. However, multi-modal approaches that use haptics suffer from the same drawback as haptics alone – they require specialized hardware.
Multi-modal approaches that combine sonification and accessible descriptions have found better success. VoxLens saw a 122% increase in task accuracy and a 36% reduction in total interaction time.
In summary, multi-modal approaches that combine sonification and accessible descriptions create accessible data visualizations that require an equitable cognitive load, provide information at all levels of semantic content, and stay lightweight for web-browsing – if the limitations on the kinds of data visualizations supported can be overcome.
Conclusion
At the beginning of this article, we asked how someone would understand what it means to "flatten the curve" if they can't see the associated graph. We looked at key considerations, and we learned that providing a table of values alone wouldn't easily allow BVI users to find the maximum of each curve.
To balance the desire to add information without increasing the cognitive load, BVI users and researchers suggest having an accessible method of querying the data that would be beneficial for tasks like retrieving the minimum or maximum, or highlighting areas of the graph.
Data is getting more complex, and with the Accessible Canada Act requiring a barrier-free Canada by 2040 (see: Towards an Accessible Canada), we all have a role to play in ensuring our data visualizations are barrier-free.
Meet the Data Scientist
If you have any questions about my article or would like to discuss this further, I invite you to Meet the Data Scientist, an event where authors meet the readers, present their topic and discuss their findings.
Thursday, March 16
2:00 to 3:30 p.m. ET
MS Teams – link will be provided to the registrants by email
Register for the Data Science Network's Meet the Data Scientist Presentation. We hope to see you there!
Subscribe to the Data Science Network for the Federal Public Service newsletter to keep up with the latest data science news.
Referenced Tools
- TwoTone Data Sonification
- Highcharts Sonification Studio
- Chart.js
- D3.js - Data-Driven Documents
- Google Charts
- Semiotic
- Highcharts' accessibility module
- evoGraphs
Further reading
- Ali, S., Muralidharan, L., Alfieri, F., Agrawal, M., & Jorgensen, J. (2020). Sonify: Making Visual Graphs Accessible. In T. Ahram, R. Taiar, S. Colson, & A. Choplin (Eds.), Human Interaction and Emerging Technologies (pp. 454–459). Springer International Publishing.
- Choi, J., Jung, S., Park, D. G., Choo, J., & Elmqvist, N. (2019). Visualizing for the Non-Visual: Enabling the Visually Impaired to Use Visualization. Computer Graphics Forum, 38(3), 249–260.
- Chundury, P., Patnaik, B., Reyazuddin, Y., Tang, C., Lazar, J., & Elmqvist, N. (2021). Towards Understanding Sensory Substitution for Accessible Visualization: An Interview Study. IEEE Transactions on Visualization and Computer Graphics, 28(1), 1084–1094.
- Fan, D., Siu, A. F., Law, W.-S. A., Zhen, R. R., O'Modhrain, S., & Follmer, S. (2022). Slide-Tone and Tilt-Tone: 1-DOF Haptic Techniques for Conveying Shape Characteristics of Graphs to Blind Users. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1–19.
- Fritz, J. P., & Barner, K. E. (1999). Design of a haptic data visualization system for people with visual impairments. IEEE Transactions on Rehabilitation Engineering, 7(3), 372–384.
- Godfrey, A. J. R., Murrell, P., & Sorge, V. (2018). An Accessible Interaction Model for Data Visualisation in Statistics. In K. Miesenberger & G. Kouroupetroglou (Eds.), Computers Helping People with Special Needs (pp. 590–597). Springer International Publishing.
- Jung, C., Mehta, S., Kulkarni, A., Zhao, Y., & Kim, Y.-S. (2021). Communicating Visualizations without Visuals: Investigation of Visualization Alternative Text for People with Visual Impairments. IEEE Transactions on Visualization and Computer Graphics, 28(1), 1095–1105.
- Lundgard, A., & Satyanarayan, A. (2021). Accessible Visualization via Natural Language Descriptions: A Four-Level Model of Semantic Content. IEEE Transactions on Visualization and Computer Graphics, 28(1), 1073–1083.
- Lunn, D., Harper, S., & Bechhofer, S. (2011). Identifying Behavioral Strategies of Visually Impaired Users to Improve Access to Web Content. ACM Transactions on Accessible Computing, 3(4), 13:1-13:35.
- Murillo-Morales, T., & Miesenberger, K. (2020). AUDiaL: A Natural Language Interface to Make Statistical Charts Accessible to Blind Persons. In K. Miesenberger, R. Manduchi, M. Covarrubias Rodriguez, & P. Peňáz (Eds.), Computers Helping People with Special Needs (pp. 373–384). Springer International Publishing.
- Sawe, N., Chafe, C., & Treviño, J. (2020). Using Data Sonification to Overcome Science Literacy, Numeracy, and Visualization Barriers in Science Communication. Frontiers in Communication, 5.
- Sharif, A., & Forouraghi, B. (2018). evoGraphs—A jQuery plugin to create web accessible graphs. 2018 15th IEEE Annual Consumer Communications & Networking Conference (CCNC), 1–4.
- Sharif, A., Wang, O. H., Muongchan, A. T., Reinecke, K., & Wobbrock, J. O. (2022). VoxLens: Making Online Data Visualizations Accessible with an Interactive JavaScript Plug-In. Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, 1–19.
- Shneiderman, B. (1996). The eyes have it: A task by data type taxonomy for information visualizations. Proceedings 1996 IEEE Symposium on Visual Languages, 336–343.
- Strantz, A. (2021a). Using Web Standards to Design Accessible Data Visualizations in Professional Communication. IEEE Transactions on Professional Communication, 64(3), 288–301.
- Strantz, A. (2021b). Beyond "Alt-Text": Creating Accessible Data Visualizations with Code. The 39th ACM International Conference on Design of Communication, 331–337.
- Yu, W., Ramloll, R., & Brewster, S. (2001). Haptic graphs for blind computer users. In S. Brewster & R. Murray-Smith (Eds.), Haptic Human-Computer Interaction (pp. 41–51). Springer.
- Zhao, H., Smith, B. K., Norman, K., Plaisant, C., & Shneiderman, B. (2005). Interactive sonification of choropleth maps. IEEE MultiMedia, 12(2), 26–35.
- Zong, J., Lee, C., Lundgard, A., Jang, J., Hajas, D., & Satyanarayan, A. (2022). Rich Screen Reader Experiences for Accessible Data Visualization. Computer Graphics Forum, 41(3), 15–27.