Data visualizations play a crucial role in communicating patterns in quantitative data, making data visualization literacy a key target of STEM education. However, it is currently unclear to what degree different assessments of data visualization literacy measure the same underlying constructs. Here, we administered two widely used graph comprehension assessments (Galesic & Garcia-Retamero, 2011; Lee, Kim, & Kwon, 2016) to both a university-based convenience sample and a demographically representative sample of adult participants in the United States (N = 1,113). Our analysis of individual variability in test performance suggests that overall scores are correlated between assessments and associated with the amount of prior coursework in mathematics. However, further exploration of individual error patterns suggests that these assessments probe somewhat distinct components of data visualization literacy, and we do not find evidence that these components correspond to the categories that guided the design of either test (e.g., questions that require retrieving values rather than making comparisons). Together, these findings suggest opportunities for the development of more comprehensive assessments of data visualization literacy that are organized by components that better account for detailed behavioral patterns.
/analysis: directory with scripts for all data cleaning, processing, analyses, and results presented in the manuscript
/data: directory containing raw experiment and post-experiment survey data, as well as cleaned and processed data
/results: directory containing all figures presented in the manuscript