Welcome to our website: a place where researchers can explore and promote a promising new partnership between visualization and vision science!
Sponsored by: Adobe Inc., CU Boulder VisuaLab, Northwestern U Visual Thinking Lab
Following three years of fun interdisciplinary events at both VIS and VSS, we are holding the first official VIS event for collaboration, interaction, and peer-reviewed research sharing between vision scientists and visualization researchers. The specific goal of this workshop is to provide a forum where vision science and visualization researchers can share cutting-edge research at this interdisciplinary intersection, in preparation for publishing and presenting it at IEEE VIS and in the upcoming Journal of Vision Special Issue.
Note: paper drafts can be downloaded via IEEE Xplore and here with the workshop password.
Invited Speaker Abstracts and Slides:
Visual search – Jeremy Wolfe
Decades of research on visual search have given us quite a good understanding of how people look for targets in scenes containing distracting items. Knowing how people search is not the same as knowing how to design searchable visual stimuli, especially if we want users to be able to search those stimuli for a variety of different targets. Still, the topics of search and searchability must be related, so we will explore what the rules governing the deployment of visual attention might suggest to the creators of new visualizations.
Working memory – Timothy Brady
When processing complex visual displays, people often need to hold information actively in mind to facilitate comparison or integration. Decades of research have shown that our ability to hold information actively in mind is incredibly limited (e.g., we can miss large changes to scenes if we happen to not be holding in mind the right information), and simple rules like “people can remember 3-4 things” are popular ways to conceive of these limits. In this talk, I discuss what aspects of visual information people can easily hold in mind; what things are extremely difficult to hold in mind; and how these limits relate to visualization design.
Visual magnitudes – Darko Odic
The perception of visual magnitudes – length, area, time, number, etc. – has been one of the foundational questions since the dawn of empirical psychology, stretching back from Weber and Helmholtz to today. In this talk, I will share a number of insights, new and old, about how we perceive number, time, and space representations throughout our entire lifespan, focusing especially on issues that might be relevant for data visualization. I will first discuss findings about how our perceptual system deals with competing magnitude dimensions: situations in which, e.g., both number and length are competing for attention. Next, I will share several findings demonstrating that surface area perception is susceptible to various surprising inconsistencies and illusions, whereby we perceive collections of objects to be cumulatively smaller than they really are. Finally, I will share findings on how perceptual magnitude representations allow us to easily find the maximal and minimal element in a set.
Thanks to everyone who attended this event!
2:00-2:05 pm: Introduction
2:05-2:11 pm: Open vs Closed Perceptual Categories Persist in the Context of Overplotting and with Real-World Data – David Burlinson
2:17-2:23 pm: Perceptual averaging in visual communication: Ensemble representations in the perception of scientific data in graphs – Stefan Uddenberg
2:23-2:29 pm: Data Visualization in Introductory Psychology Textbooks – Jeremy Wilmer
2:29-2:35 pm: Using Analogies to Teach Novel Graphics – Emily Laitin
2:35-2:41 pm: Missing the forest and the trees in animated charts – Nicole Jardine
2:41-2:47 pm: Exploring Attention on Large-scale Visualizations using ZoomMaps, a Zoomable Crowdsourced Interface – Anelise Newman
2:47-2:53 pm: The role of spatial organization for interpreting colormap data visualizations – Shannon Sibrel
2:53-3:08 pm: All presenters on panel Q&A
3:08-3:23 pm: Guest speaker Brian Fisher
3:23-3:38 pm: JOV special issue editor presentation and Q&A
3:38-4:30 pm: “Meet & Greet” with refreshments
This event was sponsored by Adobe Inc., the Visual Thinking Lab at Northwestern, and Colorado Boulder’s VisuaLab.
Slides are not available from this meetup, as many presenters gave talks on unpublished, ongoing work for critique and feedback.
Selected Lightning Talk Abstracts
Maureen Stone (Tableau Research)
Rainbows Gone Good: Multi-hued color gradients applied to spatial data (aka rainbow color maps) can create a lot of problems, which are well and vigorously documented. But let’s not throw out the visual power of hue shifting just because most rainbow designs are bad. It’s possible to design good “rainbows,” especially for dark backgrounds. I will show some examples designed for Tableau’s recent density mark (aka heat map) feature.
Robert Kosara (Tableau Research)
Distortion for Science: 3D Pie Charts to Settle the Arc vs. Area Question
Ghulam Jilani Quadri (University of South Florida)
Modelling Cluster Multi-factor Perception in Scatterplots using Merge Trees: In the representation of information, design choices matter for communicating effectively. Human perception plays an important role in what we infer from a visualization, and a better understanding of perception helps improve visualization design both quantitatively and qualitatively. Over the last decade of information visualization research, many perception-based experiments have been carried out to understand the effectiveness of visualizations for viewers. Understanding a viewer’s ability to rapidly and accurately identify the clusters in a scatterplot is the theme of this work.
We present a rigorous empirical study of the visual perception of clustering in scatterplots, modeled around a topological data structure known as the merge tree. We tested our cluster identification model on scatterplots varying several factors to understand their main and interaction effects. Our two-stage psychophysical study consists of a calibration experiment and an Amazon Mechanical Turk (AMT) study. On the basis of the calibration results, we selected factor ranges for the wider AMT experiments.
Factors: the number of data points, point size, and distribution (i.e., cluster sigma) are studied to understand their main and interaction effects on the identification of features in the visual encoding. We performed experiments measuring the effects of cluster distance and visual density on cluster identification in scatterplots.
Evaluation and analysis of the results are categorized into 1) a distance-based model and 2) a density-based model. The distance-based model analyzes the effects of the variables (number of data points, point size, and distribution) with respect to the distance between assumed cluster centers; the density-based model uses topology to study the effect of visual density and to predict the number of clusters that will be visible. The merge tree provides the threshold value for predicting the number of clusters based on the perceptual analysis. Using these numbers, we can redesign and test the model.
Our initial results and analysis support our hypothesis: the main effects of the factors are significant, as is their interaction effect, and the main- and interaction-effect plots bear this out. This work is a variation on visual summary tasks and sensitivity analysis. Transferring these specialized perceptual findings to generalized visualization applications is ongoing work.
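The merge-tree thresholding described above can be illustrated with a toy one-dimensional sketch (our own illustrative code, not the authors' implementation): peaks of a sampled density profile count as separate clusters only when they rise at least a persistence threshold above the saddle at which they merge into a taller peak.

```python
def persistent_peak_count(values, threshold):
    """Count peaks of a 1D profile whose persistence (height above
    the saddle where they merge into a taller peak) is at least
    `threshold` -- a toy stand-in for merge-tree cluster counting."""
    parent = {}  # union-find forest over sample indices
    birth = {}   # height at which each component was born (its peak)

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    peaks = 0
    # Sweep from the highest sample downward; each new sample either
    # starts a component (a candidate peak) or merges neighbors.
    for i in sorted(range(len(values)), key=lambda i: -values[i]):
        parent[i] = i
        birth[i] = values[i]
        for j in (i - 1, i + 1):
            if j in parent:
                ri, rj = find(i), find(j)
                if ri == rj:
                    continue
                lo, hi = sorted((ri, rj), key=lambda r: birth[r])
                # The lower-born peak dies at the current height.
                if birth[lo] - values[i] >= threshold:
                    peaks += 1
                parent[lo] = hi
    return peaks + 1  # the globally tallest peak always survives
```

For the profile [1, 3, 1, 4, 1], a threshold of 1 yields two clusters, while a threshold of 3 merges the smaller peak into the larger one, yielding one.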
Steve Haroz (INRIA)
Extreme Visual-Linguistic Interference in Recall of Graph Information: In data visualization, gaze is directed towards the text and specifically titles, which can affect what is recalled from memory (Borkin, Bylinskii, et al. 2016; Matzen et al. 2017). In a more controlled setting, we investigate how wording in a line graph’s title impacts recall of the trend’s slope. In an experiment, we showed subjects a simple graph with an increasing or decreasing trend, paired with a title that is either strongly stated (“Strong Increase in Contraceptive Use”), more neutral (“The Increase in Contraceptive Use”), or subtle (“Slight Increase in Contraceptive Use”). To avoid rehearsal, subjects then performed a series of letter recall tasks, before being asked to recall the title and choose between two slopes, one was shallower and the other was steeper than the original. In trials with strongly stated titles, subjects were biased towards reporting a steeper slope, whereas we did not find an effect for the neutral or subtle conditions.
Ron Rensink (University of British Columbia)
Visual Features as Information Carriers: To determine the extent to which visual features can carry quantitative information, observers can be tested on their ability to estimate and discriminate Pearson correlation in graphical representations of artificial datasets. In these representations, the first dimension of each data element is represented by the horizontal position of a short vertical line (as in a scatterplot), while the second is represented by some visual feature (e.g., orientation or color). In the tests here, the visual feature used was physical luminance (intensity). Results show performance that is broadly similar to that for scatterplots (Rensink, 2017): just noticeable differences (JNDs) were roughly proportional to the distance from perfect correlation, and estimated correlations were logarithmic functions of the same quantity. These results support the proposal that luminance can be an information carrier—i.e., it can convey quantitative information in much the same way as spatial position—although its inherent noise may be greater. They also support the suggestion that the information obtained from the graphical representation is extracted by processes involved in ensemble coding, with at least some degree of binding between the luminance and position of each ensemble element.
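The two regularities reported here have a simple functional form that can be sketched as follows (an illustrative model with placeholder constants, not Rensink's fitted parameters):

```python
import math

def correlation_jnd(r, k=0.2):
    """JND for correlation, modeled as proportional to the distance
    from perfect correlation (r = 1); k is a placeholder constant."""
    return k * (1.0 - r)

def perceived_correlation(r, b=1.0):
    """Subjective correlation, modeled as a logarithmic function of
    the distance from r = 1; b is a placeholder constant. This form
    maps r = 0 to 0 and r = 1 to 1, underestimating in between."""
    return 1.0 - math.log(1.0 + b * (1.0 - r)) / math.log(1.0 + b)
```

Under this sketch, discrimination is easiest near perfect correlation (the JND shrinks as r approaches 1), and intermediate correlations are systematically underestimated.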
Alex Kale (University of Washington)
Benefits of Bayesian Hierarchical Models for Inferring Just-Noticeable Differences: Vision scientists and data visualization researchers often use just-noticeable differences (JNDs) – the amount of change in a feature of a stimulus (e.g., color, orientation, motion) that is detectable to an observer with 75% accuracy – as a measure of observers’ perceptual sensitivity to changes in a stimulus. Researchers estimate JNDs by modeling accuracy as a function of stimulus intensity (i.e., units of change in the facet of the stimulus under study), a process called fitting psychometric functions (PFs). We argue that the current procedure for fitting PFs is lossy: researchers fit PFs to observer responses then use only the estimated JNDs to make statistical inferences about the perceptual sensitivity of a population of observers. By separating the estimation of JNDs and statistical inference into two different models, researchers are discarding important information about uncertainty in PF fits. This can lead to overconfidence in the reliability of average JND estimates at the population level. We argue that fitting PFs and statistical inference should be integrated into a single Bayesian hierarchical regression model. Such a model would simultaneously estimate PFs for individual observers, hyperparameters describing the population of observers (e.g., the mean and standard deviation of JNDs), and the average impact of experimental manipulations like visualization conditions on JNDs. We demonstrate this approach with data collected in a recent crowdsourced experiment on uncertainty visualization. Incorporating JND estimation with statistical inference inside one unified model will lead to more conservative estimates of effects in studies of perceptual sensitivity and more accurate characterization of uncertainty in those estimates.
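As a deliberately simplified illustration of the quantities involved (not the authors' model), a two-alternative forced-choice psychometric function can be parameterized so that its threshold parameter is exactly the 75%-accuracy JND; a hierarchical model would then place population-level priors on such per-observer thresholds, partially pooling them toward the group mean. The `shrinkage` weight below is a stand-in for what a Bayesian model would infer from the data:

```python
import math

def pf_accuracy(intensity, jnd, slope=1.0):
    """Logistic psychometric function for a 2AFC task: accuracy rises
    from the 50% guess rate toward 100%. By construction, accuracy is
    exactly 75% when intensity equals the `jnd` parameter."""
    p = 1.0 / (1.0 + math.exp(-slope * (intensity - jnd)))
    return 0.5 + 0.5 * p

def partial_pool(observer_jnds, shrinkage=0.5):
    """Toy version of the hierarchical idea: each observer's JND
    estimate is pulled toward the group mean, with `shrinkage` in
    [0, 1] standing in for the weight a Bayesian model would infer."""
    mean = sum(observer_jnds) / len(observer_jnds)
    return [(1 - shrinkage) * j + shrinkage * mean for j in observer_jnds]
```

The pooling step is what makes population-level estimates more conservative: extreme individual fits, which often reflect noisy data, are drawn toward the group mean rather than taken at face value.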
Eric Alexander (Carleton College)
Exploring Crowding Effects on Font Size Encodings: Word clouds are a form of text visualization that take a set of words sized according to some data attribute (often frequency within a document) and jumble them together. As a visualization technique, they are often used and often maligned. Proponents say they give a playful and engaging high-level overview, while detractors dismiss them as “pop” visualizations that fail to help users perform the tasks for which they are often employed. Empirical evidence for these stances has been varied: word clouds have been shown to be bad at search tasks, but decent at helping users perceive the gist of a group of related words.
It is still an open question whether people can accurately read the data encoded in a word cloud. Given the ways in which two words’ shapes may vary aside from their font size, font size encodings have in the past been thought too difficult for users to read. Despite this impression, in work published in 2017, we found that across 17 experiments covering a wide range of word shape variations, participants were able to accurately perceive even minute differences in font size, suggesting that font size encodings may be more effective than previously thought. However, there are a number of factors that we still need to investigate, including color, word orientation, semantic meanings of words, and more.
In collaboration with Professor Danielle Albers Szafir (University of Colorado-Boulder), I plan to explore one such additional factor: the effect of crowding on word cloud legibility. “Crowding” refers to the phenomenon in which it becomes much more difficult to identify visual targets when they are surrounded by other potential targets. As the density of a word cloud rises, then, it is possible that a participant’s ability to make accurate judgments about font size encodings may decrease. This is particularly likely to happen at the periphery of their vision. While the limitations of peripheral vision may not be an issue for tasks that only require identification of a single word’s font size, higher level tasks like gist-forming are likely to incorporate perception not just of a center word, but of the words around it, as well. Understanding the limits of perception when scanning dense word clouds is crucial for identifying their utility in real-world tasks.
We will be investigating these questions through a series of experiments conducted on Amazon’s Mechanical Turk. By presenting participants with word clouds of varying densities and asking them to make judgments about the words at both the center and peripheral edges of the clouds, we hope to be able to model how their performance deteriorates as a function of increased crowding. We believe this work lies at a fruitful juncture of visualization and vision science, and hope it will allow us to more responsibly deploy font size encodings in a wide variety of fields and settings.
Caitlyn McColeman (Northwestern University)
The Interaction between Visual Encodings and Value on Perceptual Biases in Ratio Judgements: Perhaps the most familiar example of studying the perception of data visualization comes from Cleveland and McGill (1984), wherein they tested participants’ ability to make ratio judgements using different types of perceptual encodings (including position, length, area, and shading). Their analysis was informed by earlier psychophysical studies in which the relationship between the objective change in stimulation and the subjective experience of that change was tested. In most sensory domains, a single unit increase in stimulation is perceived non-linearly as values increase, a relationship famously formalized by Weber’s Law.
Cleveland and McGill (1984) conjectured that bar graphs representing one portion of a filled whole elicit Weber’s law from two directions: one from the bottom and one from the top of the rectangle.
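One way to make this conjecture concrete (our own illustrative formalization, with a placeholder Weber fraction k, not Cleveland and McGill's equations): a lone bar's just-noticeable change grows with its value v, while a bar stacked inside a frame offers two anchors, the bottom (distance v) and the frame's top (distance 100 - v), so precision is governed by whichever anchor is nearer.

```python
def lone_bar_jnd(v, k=0.05):
    """Weber's law for a single unframed bar: the detectable change
    grows in proportion to the bar's value v (in percent)."""
    return k * v

def stacked_bar_jnd(v, k=0.05):
    """Two-direction conjecture: a framed bar can be judged from the
    bottom (distance v) or from the frame's top (distance 100 - v),
    so the effective JND follows the nearer anchor."""
    return k * min(v, 100 - v)
```

Under this sketch, error is symmetric about 50% for the stacked bar (e.g., 10% and 90% are equally discriminable), whereas the lone bar's error keeps growing with v.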
We have tested and extended the above conjecture. In three conditions, we test participants’ ability to redraw ratio values (from 1-99%). One of three types of graphs is shown briefly on the screen, and then participants are asked to redraw the presented value using a mouse cursor. One condition (the “stacked bar” condition) is a single stacked bar, where the filled-in bar of the represents the ratio value of interest and the length of the frame around it represents the whole. In a second condition (the “side-by-side bar” condition), the ratio is represented again by a bar, but rather than stacked within a frame, it is represented separately from a larger bar that represents one. In a third, control condition (the “bar only condition”), we show only a single bar representing a magnitude without any reference with which to report a ratio.
The results from the study were largely consistent with the psychophysical take: as values in the “bar only” condition increased, errors from participants redrawing them increased non-linearly.
The “side-by-side bar” condition showed a general increase in error for values 0-40% and a decrease close to 50%, with an interesting bimodality in the remaining values (51-99%).
In the “stacked bar” condition, errors in participants’ drawings were mostly consistent with the prediction from Cleveland and McGill: essentially there were two Weber’s Law functions observed from the top and the bottom of the rectangle frame, excepting a remarkably accurate pattern of responses near 50%.
We propose that these findings may serve as evidence for categorical perception in the context of data visualization.
Madison Elliott (University of British Columbia)
Understanding Complex Ensemble Perception with Multi-Class Scatterplots: Our visual system rapidly extracts ensembles to help us understand our environment (Haberman & Whitney, 2012). However, it is not yet understood how multiple ensemble dimensions are used, or how attention can select one ensemble over another. As a first step, we investigated feature selection in attention for multi-dimensional ensembles. Specifically, we examined whether increasing featural differences, which aids perceptual grouping (Moore & Egeth, 1997), would boost selectivity for one ensemble over another.
The perception of correlation in scatterplots appears to be an ensemble process (Rensink, 2017), and adding an irrelevant set of data points causes interference (Elliott & Rensink, VSS 2016; 2017). To investigate this more thoroughly, observers performed a correlation discrimination task for scatterplots containing both a “target” ensemble and an irrelevant “distractor” ensemble (Elliott & Rensink, VSS 2017) where target ensembles were distinguished by the color, shape, or color and shape combinations of their elements. Both tasks used ΔE from Szafir (2017) to create a precise experimental color space that takes into account stimulus area and mark type. Distractor colors varied in equal perceptual steps along three axes: luminance, chroma, and hue, which allowed us to investigate whether individual color dimensions influenced selection.
Below is a list of papers and presentations from members of VisXVision. An asterisk (*) marks the VisXVision member in lists that include unaffiliated authors.
Anamaria Crisan & Madison Elliott* – How to evaluate an evaluation study? Comparing and contrasting practices in vis with those of other disciplines
Robert Kosara & Steve Haroz – Skipping the Replication Crisis in Visualization: Threats to Study Validity and How to Address Them
Steve Haroz – Open Practices in Visualization Research
Dominik Moritz*, Chenglong Wang, Greg L. Nelson, Halden Lin, Adam M. Smith, Bill Howe, & Jeffrey Heer – Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco
Hayeong Song and Danielle Albers Szafir* – Where’s My Data? Evaluating Visualizations with Missing Data
Alex Kale*, Francis Nguyen, Matthew Kay*, and Jessica Hullman – Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data
Brian Ondov, Nicole Jardine*, Niklas Elmqvist, and Steven Franconeri* – Face to Face: Evaluating Visual Comparison
Theresa-Marie Rhyne – Keynote Speaker at VisGuides – Formulating a Colorization Guide for VIS
Attentional Selection of Multiple Correlation Ensembles
Madison Elliott & Ron Rensink
Effects of title wording on memory of trends in line graphs
Anelise Newman, Zoya Bylinskii, Steve Haroz, Spandan Madan, Fredo Durand, Aude Oliva
Interpreting color-coding systems: the effects of concept activation on color inference
Kathleen Foley, Laurent Lessard, Karen B. Schloss
We’ve posted slides from the four presenters, as well as the original description from our submission, below…
Where do people look on data visualizations?
(Aude Oliva & Zoya Bylinskii)
Segmentation, structure, and shape perception in data visualizations
Color Perception in Data Visualizations
Data is ubiquitous in the modern world, and its communication, analysis, and interpretation are critical scientific issues. Visualizations leverage the capabilities of the visual system, allowing us to intuitively explore and generate novel understandings of data in ways that fully-automated approaches cannot. Visualization research builds an empirical framework around design guidelines, perceptual evaluation of design techniques, and a basic understanding of the visual processes associated with viewing data displays. Vision science offers the methodologies and phenomena that can provide foundational insight into these questions. Challenges in visualization map directly to many vision science topics, such as finding data of interest (visual search), estimating data means and variance (ensemble coding), and determining optimal display properties (crowding, salience, color perception). Given the growing interest in psychological work that advances basic knowledge and allows for immediate translation, visualization provides an exciting new context for vision scientists to confirm existing hypotheses and explore new questions. This symposium will illustrate how interdisciplinary work across vision science and visualization simultaneously improves visualization techniques while advancing our understanding of the visual system, and inspire new research opportunities at the intersection of these two fields.
Historically, the crossover between visualization and vision science relied heavily on canonical findings, but this has changed significantly in recent years. Visualization work has recently incorporated and iterated on newer vision research, and the results have been met with great excitement from both sides (e.g., Rensink & Baldridge, 2010; Haroz & Whitney, 2012; Harrison et al., 2014; Borkin et al., 2016; Szafir et al., 2016). Unfortunately, very little of this work is presented regularly at VSS, and there is currently no dedicated venue for collaborative exchanges between the two research communities. This symposium showcases the current state of vision science and visualization research integration, and aspires to make VSS a home for future exchanges. Visualization would benefit from sampling a wider set of vision topics and methods, while vision scientists would gain a new real-world context that simultaneously provokes insight about the visual system and holds translational impact.
Congratulations Danielle! Read her profile here 🙂
“Discover Pasteur’s Quadrant: Four research communities that will inspire your work”. (FYI: we’re expecting video + slides to be posted on the OPAM website soon. For now, find a link to the program here.)
Tamara Munzner’s panel talk, “Data Visualization as a Driver for Visual Cognition Research“, was declared (by the OPAM keynote speaker Jeremy Wolfe) to open up millennia’s worth of dissertation material for visioneers. Find her slides here!
Presenters shared lightning talks about their latest work at the intersection of vision science and visualization!
Abstract archives and presentation slides can be found here:
Fumeng Yang – Correlation Judgment
David Burlinson – Open vs Closed Shapes
Maureen Stone – Color Design for Tableau 10
Steve Haroz – From Spatial Frequencies to ISOTYPE
Alex Kale – Uncertainty in Visualizations
Nam Wook Kim – BubbleView
Zoya Bylinskii – Predicting Attention for Design Feedback
Christie Nothelfer – How Do We Read Line Charts?
Madison Elliott – Task Demands Affect Feature Selection
Read our full panel submission with abstracts here!
Panelist Bios and Talk Slides:
Ruth Rosenholtz is a Principal Research Scientist in MIT’s Department of Brain and Cognitive Sciences, and a member of CSAIL. She has a B.S. in Engineering from Swarthmore College, and an M.S. and Ph.D. in EECS from UC Berkeley. Her lab studies human vision, including visual search, perceptual organization, visual clutter, and peripheral vision. Her work focuses on developing predictive computational models of visual processing, and applying such models to design of user interfaces and information visualizations. She joined MIT in 2003 after 7 years at the Palo Alto Research Center (formerly Xerox PARC).
Ronald Rensink is an Associate Professor in the departments of Computer Science and Psychology at the University of British Columbia (UBC). His research interests include visual perception (especially visual attention), information visualization and visual analytics. He obtained a PhD in Computer Science from UBC in 1992, followed by a postdoc in Psychology at Harvard University, and then several years as a scientist at Cambridge Basic Research, an MIT-Nissan lab in Cambridge MA. He is currently part of the UBC Cognitive Systems Program, an interdisciplinary program combining Computer Science, Linguistics, Philosophy, and Psychology.
Steven Franconeri is Professor of Psychology at Northwestern University, and Director of the Northwestern Cognitive Science Program. His lab studies visual thinking, graph comprehension, and data visualization. He completed his Ph.D. in Experimental Psychology at Harvard University with a National Defense Science and Engineering Fellowship, followed by a Killam Postdoctoral Fellowship at UBC. He has received the Psychonomics Early Career Award and an NSF CAREER award, and his work is funded by the NSF, NIH, and the Department of Education.
Karen Schloss is an Assistant Professor at the University of Wisconsin – Madison in the Department of Psychology and Wisconsin Institute for Discovery. Her Visual Perception and Cognition Lab studies color cognition, information visualization, perceptual organization, and navigation in virtual environments. She received her BA from Barnard College, Columbia University in 2005, with a major in Psychology and a minor in Architecture. She completed her Ph.D. in Psychology at the University of California, Berkeley in 2011 and continued on as a Postdoctoral Scholar from 2011-2013. She spent three years as an Assistant Professor of Research in the Department of Cognitive, Linguistic, and Psychological Sciences at Brown University before joining the faculty at UW – Madison in 2016.