VIS presentations at V-VSS 2020

Here’s your 2020 guide to VIS-related content at V-VSS! Make sure you check out the following presentations:

The Set Size Bias in Ensemble Comparison (Or Why Showing Raw Data May Be Misleading) [741]
Steve Haroz

Abstract: Ensemble perception is characterized by the rapid ability to estimate a summary statistic from a set without needing serial inspection. But which stimulus properties influence how that summary is made? In a within-subject experiment with per-trial feedback, subjects chose which set had a larger average value. Using data visualizations as stimuli, subjects were asked which of two sets had a higher position (dot plots), a larger size (floating bar graphs), or redundantly coded highest position and largest size (regular bar graphs). The experiment also varied set size (1vs1, 12vs12, 20vs20, 12vs20, and 20vs12), mean difference between the sets (0 to 80 pixels in 10 pixel increments), and which set had the largest single value. With 25 repetitions per condition, each subject ran over 5,000 trials. For single-item comparisons, position was unsurprisingly more precise than length alone. However, for set comparison, the noisiness of ensemble coding appears to overpower these differences, so position, length, and the redundant combination have indistinguishable discriminability, which contradicts Cleveland & McGill (1984). Moreover, for all visual features, responses were biased towards the larger set size. Previous results (Yuan, Haroz, & Franconeri 2018) suggested that this bias is caused by estimating a sum or total area. But because the effect occurs in the position (dot plot) condition, where sum and total area are unhelpful, that model is unlikely. Additional analyses did not reveal a bias towards the set with the largest single value, the smallest single value, or the largest range of values. These results imply that this bias is holistic and not driven by simpler proxies. As showing raw data rather than only summary statistics is common advice in visualization design, the set size bias could cause people to misinterpret visualizations that do not have the same number of items in each group.

Toward a Science of Effect Size Perception: the Case of Introductory Psychology Textbooks [1185]
Sarah Kerns, Elisabeth Grinspoon, Laura Germine & Jeremy Wilmer

Abstract: In an increasingly data-producing world, effective visualization is an indispensable tool for understanding and communicating evidence. While vision science already provides broad cognitive guideposts for graph design, graphs themselves raise new constraints and questions that remain relatively unexplored. One central question is how the magnitude, or effect size, of a difference is interpreted. Here, we take several steps toward understanding effect size perception via the case example of college-level Introductory Psychology textbooks, selected for their reach to millions of students per year. A survey of all 23 major introductory textbooks found that graphs of central tendency (means) indicate the distribution of individual data points less than five percent of the time, and are thus formally ambiguous with regard to effect size. To understand how this ambiguity is commonly interpreted, we needed a measure of effect size perception. After multiple rounds of piloting (45 total rounds, 300+ total participants), we settled on a drawing-based measure whereby participants “sketch hypothetical individual values, using dots” onto their own representation of a given bar graph. Effect sizes are then read out directly from these drawings, in standard deviation (SD) units, by a trained coder. Next, we selected two textbook graphs for their large effect sizes of 2.00 and 0.70 SDs, and, using this measure, we tested 112 educated participants. In their drawings we observed inflated, widely varying drawn effect sizes for both graphs, with median drawn effect sizes of 200% and 1143% of the true effect size, and interquartile ranges of 100-300% and 714-2000%, respectively.
The present work therefore documents an influential domain where the norm in graphical communication is formally ambiguous with regard to effect size, develops a straightforward approach to measuring effect size perception, and provides an existence proof of widely varying, highly inflated effect size perceptions.
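The drawn effect sizes above are read out in standard deviation (SD) units. As a minimal sketch of that kind of readout (with hypothetical dot values, and assuming the standard pooled-SD form of Cohen's d rather than the study's exact coding procedure):

```python
import statistics

def cohens_d(group_a, group_b):
    """Effect size in SD units: mean difference divided by the pooled SD."""
    na, nb = len(group_a), len(group_b)
    var_a = statistics.variance(group_a)  # sample variance (n - 1 denominator)
    var_b = statistics.variance(group_b)
    pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

# Hypothetical dot values read off a participant's sketched bars:
drawn_a = [10, 12, 14, 16, 18]
drawn_b = [4, 6, 8, 10, 12]
print(round(cohens_d(drawn_a, drawn_b), 2))  # → 1.9
```

A coder comparing this drawn value (about 1.9 SDs) to a graph's true effect size (say, 0.70 SDs) would score the drawing as roughly 270% of the true effect, which is how the percentages above are derived.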

Further Evidence that Probability Density Shape is a Proxy for Correlation [1481]
Madison Elliott & Ronald Rensink

Abstract: Previous work demonstrated a discrimination performance cost for selecting “target” correlation populations among irrelevant “distractor” populations in two-class scatterplots (Elliott, 2016). This cost cannot be eliminated by increasing featural differences, e.g., color, between the dots in the two populations (Elliott & Rensink, VSS 2018). These findings do not agree with predictions from feature-based attention models (Wolfe, 1994), motivating us to investigate whether feature information can in fact be used to select target correlation populations. Observers performed a correlation discrimination task for scatterplots containing a target and a distractor population. Both populations had the same mean, standard deviation, color, and number of dots; the resulting two-class plots were distinguished by the correlation of the target population only. In the first of two counterbalanced conditions, targets were more correlated than the distractors; in the second, they were less. Results showed that observers can successfully discriminate two-class plots based on the correlation of their target populations. Increased JNDs were found when targets had higher correlations than distractors, replicating the results of Elliott (2016); however, there was no cost for targets with lower correlations. This asymmetry supports the proposal (Rensink, 2017) that estimation of correlation in scatterplots is based on the width of the probability density function corresponding to the dot cloud; for a two-class plot this appears to be a single density function dominated by the width of the lower-correlation (and thus wider) population. In addition, there is a resistance to feature selection: performance is the same regardless of the difference in features between target and distractor populations. 
This suggests that a two-class scatterplot is coded as a single ensemble, with observers unable to select items based on the value of their features because ensemble structure is prioritized over item-level feature information (Brady & Alvarez, 2011).

Correlation structure does not affect number judgment [1511]
Ford Atwater, Madison Elliott, Caitlin Coyiuto & Ronald Rensink

Abstract: Past work showed a performance cost for discriminating two-class scatterplots formed of a “target” population and an irrelevant “distractor” population (Elliott, 2016). As long as a second population was present, discrimination was the same regardless of color similarity between the two, resulting in “all-or-nothing performance” (Elliott & Rensink, VSS 2019). When the same dots were randomly distributed in a numerosity estimation task, severe performance costs were found for similar target and distractor colors, but no cost for opposite-colored distractors, showing “graded performance” similar to that encountered in visual search (Duncan & Humphreys, 1989). A possible explanation for this difference is that the structured spatial distributions of stimuli in the correlation task differed from the random distributions in the number estimation task. This prompted us to investigate whether numerosity estimates would be affected by correlation structure, and vice versa. Observers performed two within-subjects tasks: a correlation task and a numerosity discrimination task. Crucially, the stimuli were always the same: two-class scatterplots with target and distractor populations distinguished by color. Trial blocks were fully counterbalanced according to target dot number (50, 100, 150) and target dot correlation (.3, .6, .9). Distractor populations were always drawn with 100 dots at Pearson’s r = .3. Results showed that correlation JND slopes were unaffected by number of dots in the target population, and JND intercepts increased as the number of dots decreased, consistent with one-class scatterplot performance (Rensink, 2017). Number JNDs increased with the number of dots, consistent with past findings (Feigenson et al., 2004). Critically, number JNDs did not vary with target correlation, showing that the structure and geometric density of the target population do not affect our ability to select and estimate number information.
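Scatterplot populations with a specified Pearson correlation, like the target and distractor populations in the two abstracts above, can be generated with a standard construction (a sketch of the general technique, not necessarily the stimulus code used in these studies): with x and e as independent standard normals, y = r·x + sqrt(1 − r²)·e has expected correlation r with x.

```python
import math
import random

def correlated_dots(n, r, seed=0):
    """Generate n (x, y) dots whose expected Pearson correlation is r,
    via y = r*x + sqrt(1 - r^2)*e with x, e independent standard normals."""
    rng = random.Random(seed)
    dots = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        e = rng.gauss(0, 1)
        dots.append((x, r * x + math.sqrt(1 - r * r) * e))
    return dots

def pearson(dots):
    """Sample Pearson correlation of a list of (x, y) pairs."""
    n = len(dots)
    mx = sum(x for x, _ in dots) / n
    my = sum(y for _, y in dots) / n
    sxy = sum((x - mx) * (y - my) for x, y in dots)
    sxx = sum((x - mx) ** 2 for x, _ in dots)
    syy = sum((y - my) ** 2 for _, y in dots)
    return sxy / math.sqrt(sxx * syy)

# A target population at r = .9 and a 100-dot distractor at r = .3,
# mirroring the design described above:
target = correlated_dots(100, 0.9, seed=1)
distractor = correlated_dots(100, 0.3, seed=2)
print(pearson(target), pearson(distractor))
```

Sample correlations will scatter around the nominal r values; for precise stimulus control, experiments typically resample or rescale until the realized correlation falls within tolerance.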

Shape size judgments are influenced by fill and contour closure [1640]
David Burlinson & Danielle Szafir

Abstract: Shape and size are two important visual channels that underpin our ability to reason about the visual world in settings ranging from natural scenes to informational charts and diagrams. Strong evidence in the vision science (Makovski, 2017) and data visualization (Smart & Szafir, 2019) literature suggests that shape and size perception are inextricably linked to each other, with asymmetric influences of either channel upon the other. To better understand these influences and begin exploring the visual features that contribute to them, we designed an experiment to address the following question: “how do people judge the size of simple 2D shapes varying in geometric properties at multiple scales?” We asked 82 subjects on Amazon Mechanical Turk to adjust different target shapes until they appeared the same size as an array of homogeneous background shapes, and collected response data on combinations of representative filled, unfilled, and open shape categories at three levels of size. We analyzed the delta between target and background shape size judgments using a generalized linear model, and found statistical significance for the role of size, target and distractor features, and the interaction of shape and size (all with P < .001). As shape size increased, the delta between target and background shapes decreased. In medium and large conditions, open shapes needed to be made smaller to appear the same size as filled or unfilled shapes, lending support to the open-object illusion. These findings have practical and theoretical implications. Visualization tools and designers would benefit from sets of symbols normalized for perceptual size at different scales; future studies can explore more situated tasks and contexts to that end. 
Furthermore, theories underlying shape perception should account for characteristics such as visual density, geometric properties, and contour closure, as these features produced significant differences in perceived size in this study.

VSS 2018 – Highlights!

Highlights from our Satellite Event!

Visualization Meets Vision Science

Talk Schedule:

2:00-2:05 pm: Introduction
2:05-2:11 pm: Open vs Closed Perceptual Categories Persist in the Context of Overplotting and with Real-World Data – David Burlinson
2:17-2:23 pm: Perceptual averaging in visual communication: Ensemble representations in the perception of scientific data in graphs – Stefan Uddenberg
2:23-2:29 pm: Data Visualization in Introductory Psychology Textbooks – Jeremy Wilmer
2:29-2:35 pm: Using Analogies to Teach Novel Graphics – Emily Laitin
2:35-2:41 pm: Missing the forest and the trees in animated charts – Nicole Jardine
2:41-2:47 pm: Exploring Attention on Large-scale Visualizations using ZoomMaps, a Zoomable Crowdsourced Interface – Anelise Newman
2:47-2:53 pm: The role of spatial organization for interpreting colormap data visualizations – Shannon Sibrel
2:53-3:08 pm: All presenters on panel Q&A
3:08-3:23 pm: Guest speaker Brian Fisher
3:23-3:38 pm: JOV special edition editor presentation and Q&A
3:38-4:30 pm: “Meet & Greet” with refreshments

This event was sponsored by Adobe Inc., the Visual Thinking Lab at Northwestern, and Colorado Boulder’s VisuaLab.

VSS 2018 Highlights Part I – St. Pete Beach, FL

We organized a symposium: Vision and Visualization – Inspiring Novel Research Directions in Vision Science

We’ve posted slides from the four presenters, as well as the original description from our submission, below.

Talk Slides

Information visualization and the study of visual perception
(Ron Rensink)

Where do people look on data visualizations?
(Aude Oliva & Zoya Bylinskii)

Segmentation, structure, and shape perception in data visualizations
(Steven Franconeri)

Color Perception in Data Visualizations
(Danielle Szafir)


Symposium Description

Data is ubiquitous in the modern world, and its communication, analysis, and interpretation are critical scientific issues. Visualizations leverage the capabilities of the visual system, allowing us to intuitively explore and generate novel understandings of data in ways that fully-automated approaches cannot. Visualization research builds an empirical framework around design guidelines, perceptual evaluation of design techniques, and a basic understanding of the visual processes associated with viewing data displays. Vision science offers the methodologies and phenomena that can provide foundational insight into these questions. Challenges in visualization map directly to many vision science topics, such as finding data of interest (visual search), estimating data means and variance (ensemble coding), and determining optimal display properties (crowding, salience, color perception). Given the growing interest in psychological work that advances basic knowledge and allows for immediate translation, visualization provides an exciting new context for vision scientists to confirm existing hypotheses and explore new questions. This symposium will illustrate how interdisciplinary work across vision science and visualization simultaneously improves visualization techniques while advancing our understanding of the visual system, and inspire new research opportunities at the intersection of these two fields.

Historically, the crossover between visualization and vision science relied heavily on canonical findings, but this has changed significantly in recent years. Visualization work has recently incorporated and iterated on newer vision research, and the results have been met with great excitement from both sides (e.g., Rensink & Baldridge, 2010; Haroz & Whitney, 2012; Harrison et al., 2014; Borkin et al., 2016; Szafir et al., 2016). Unfortunately, very little of this work is presented regularly at VSS, and there is currently no dedicated venue for collaborative exchanges between the two research communities. This symposium showcases the current state of vision science and visualization research integration, and aspires to make VSS a home for future exchanges. Visualization would benefit from sampling a wider set of vision topics and methods, while vision scientists would gain a new real-world context that simultaneously provokes insight about the visual system and holds translational impact.

Ron, Danielle, Cindy, Madison, Christie, and Steve.

InfoVis at VSS! – St. Pete, Florida

We hosted our first event on May 23, 2017: an informal VSS satellite meeting for vision science researchers to learn about and discuss the field of information visualization!

The meeting began with a presentation about what InfoVis research is like, and why cognitive psychologists might care about it. Next, research talks were given by Madison Elliott, Cindy Xiong, Christie Nothelfer, Danielle Albers-Szafir, and Zoya Bylinskii, who all conduct research at the intersection of visualization and vision science. The meeting concluded with a round table introduction, discussion, and contact/idea exchange. Hopefully this meeting was the start of many more collaborative events!


  • Changing task demands limits feature based attention. Madison Elliott & Ron Rensink (University of British Columbia).
  • Curse of knowledge in visual data communication. Cindy Xiong & Steve Franconeri (Northwestern University).
  • Rapid feature-selection benefits from feature redundancy. Christie Nothelfer & Steve Franconeri (Northwestern University).
  • Designing for data and vision: ensembles, constancy, and color models. Danielle Albers Szafir (University of Colorado – Boulder).
  • How studying the perception of visualizations is like studying the perception of scenes. Zoya Bylinskii & Aude Oliva (Massachusetts Institute of Technology).

Slides from opening talk available here: VSS_InfoVis