VIS 2019 – Highlights and Talk Slides

We hosted a workshop with poster presentations, short paper talks, and invited speakers Jeremy Wolfe (Harvard), Timothy Brady (UCSD), and Darko Odic (UBC):


Novel Directions in Vision Science and Visualization Research

Sponsored by: Adobe Inc., CU Boulder VisuaLab, Northwestern U Visual Thinking Lab

Following the last three years of fun interdisciplinary events at both VIS and VSS, we are holding the first official event at VIS for collaboration, interaction, and peer-reviewed research-sharing between vision scientists and visualization researchers. The specific goal of this workshop is to provide a forum where vision science and visualization researchers can share cutting-edge research at this interdisciplinary intersection, in preparation for publishing and presenting it both at IEEE VIS and in the upcoming Journal of Vision Special Issue.


Short Papers:

Note: paper drafts can be downloaded via IEEE Xplore and here with the workshop password.

Invited Speaker Abstracts and Slides:

Visual search – Jeremy Wolfe

If you want people to find “it”, what should “it” look like?

Decades of research on visual search have given us quite a good understanding of how people look for targets in scenes containing distracting items. Knowing how people search is not the same as knowing how to design searchable visual stimuli, especially if we want users to be able to search those stimuli for a variety of different targets. Still, the topics of search and searchability must be related, so we will explore what the rules governing the deployment of visual attention might suggest to the creators of new visualizations.


Working memory – Timothy Brady

How much visual information we can hold in mind at once: The role of visual ensembles & semantic knowledge.

When processing complex visual displays, people often need to hold information actively in mind to facilitate comparison or integration. Decades of research have shown that our ability to hold information actively in mind is incredibly limited (e.g., we can miss large changes to scenes if we happen not to be holding the right information in mind), and simple rules like "people can remember 3-4 things" are popular ways to conceive of these limits. In this talk, I discuss what aspects of visual information people can easily hold in mind; what things are extremely difficult to hold in mind; and how these limits relate to visualization design.


Visual magnitudes – Darko Odic

How perception perceives number, time, and space.

The perception of visual magnitudes – length, area, time, number, etc. – has been one of the foundational questions since the dawn of empirical psychology, stretching from Weber and Helmholtz to today. In this talk, I will share a number of insights, new and old, about how we perceive number, time, and space throughout our entire lifespan, focusing especially on issues that might be relevant for data visualization. I will first discuss findings about how our perceptual system deals with competing magnitude dimensions: situations in which, e.g., both number and length are competing for attention. Next, I will share several findings demonstrating that surface area perception is susceptible to various surprising inconsistencies and illusions, whereby we perceive collections of objects to be cumulatively smaller than they really are. Finally, I will share findings on how perceptual magnitude representations allow us to easily find the maximal and minimal element in a set.

Thanks to everyone who attended this event!

VIS 2018 Highlights – Berlin, Germany

We Hosted Another Fun Meetup!


Slides are not available from this meetup, as many presenters gave talks on unpublished and ongoing work for critique and feedback.

Selected Lightning Talk Abstracts


Maureen Stone (Tableau Research)

Rainbows Gone Good: Multi-hued color gradients applied to spatial data (aka rainbow color maps) can create a lot of problems, which are well and vigorously documented. But let's not throw out the visual power of hue shifting just because most rainbow designs are bad. It's possible to design good "rainbows," especially for dark backgrounds. I will show some examples designed for Tableau's recent density mark (aka heat map) feature.
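To make the principle concrete, here is a minimal sketch of one way to build such a "good rainbow": sweep hue while forcing lightness to rise monotonically in a perceptually uniform color space. This is an illustration of the general idea only, not Tableau's actual design process, and the lightness range, hue sweep, and chroma below are arbitrary choices.

```python
# A minimal sketch (not Tableau's actual design process): hue shifts across
# the spectrum while lightness J' rises monotonically, built in the
# perceptually uniform CAM02-UCS space. Ranges are illustrative choices.
import numpy as np
from colorspacious import cspace_convert  # pip install colorspacious

n = 256
t = np.linspace(0, 1, n)
Jp = 15 + 75 * t                       # lightness: dark -> light, monotone
theta = np.deg2rad(-80 + 340 * t)      # hue angle sweeping across hues
C = 30.0                               # moderate, roughly constant chroma
jab = np.column_stack([Jp, C * np.cos(theta), C * np.sin(theta)])

# Convert to sRGB; clip colors that fall outside the displayable gamut.
rgb = np.clip(cspace_convert(jab, "CAM02-UCS", "sRGB1"), 0.0, 1.0)
# rgb is an n x 3 array usable as a colormap over a dark background.
```

Because lightness increases monotonically, a map built this way still reads correctly in grayscale – the property most bad rainbows lack.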


Robert Kosara (Tableau Research)

Distortion for Science: 3D Pie Charts to Settle the Arc vs. Area Question


Ghulam Jilani Quadri (University of South Florida)

Modelling Cluster Multi-factor Perception in Scatterplots using Merge Trees: In the representation of information, design choices matter for effective communication. Human perception plays an important role in what we infer from a visualization, and a better understanding of perception helps in designing visualizations both quantitatively and qualitatively. Surveying the last decade of information visualization, many perception-based experiments have been carried out to understand the effectiveness of visualization with respect to viewers. Understanding a viewer's ability to rapidly and accurately perceive the clusters in a scatterplot is the theme of this work.

We present a rigorous empirical study on the visual perception of clustering in scatterplots, modeled around a topological data structure known as the merge tree. We tested our cluster identification model in scatterplots on a variety of multi-factor variables to understand the main and interaction effects. Our two-stage psychophysical study consists of a calibration experiment and an Amazon Mechanical Turk (AMT) study. On the basis of the calibration results, we selected the multi-factor variable ranges that would be most effective in the wider AMT experiments.

Factors: the number of data points, point size, and distribution (i.e., cluster sigma) are studied to understand their main and interaction effects on the identification of features in a visual encoding. We performed experiments to measure the effect of cluster distance and visual density on the identification of clusters in scatterplots.

Evaluation and analysis of the results are categorized into 1) a distance-based model and 2) a density-based model. The distance-based model analyzes the effects of the variables (number of data points, size of data points, and distribution) with respect to the distance between assumed cluster centers; the density-based model uses topology to study the effect of visual density and to predict the number of clusters that will be visible. The merge tree provides the threshold value to predict the number of clusters based on the perceptual analysis; using these numbers, we can redesign and test the model (see the sketch below).
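As a rough illustration of the thresholding idea (my sketch, not the authors' implementation): single-linkage hierarchical clustering produces a merge hierarchy closely related to the merge tree of the point-set distance function, and sweeping a distance threshold over it predicts how many clusters remain distinct at each scale.

```python
# Illustrative sketch (not the authors' code): single-linkage clustering
# yields a merge hierarchy akin to a merge tree; cutting it at a threshold
# predicts how many clusters remain distinct at that scale.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Two synthetic point clusters with different sigmas (cf. the factors above)
pts = np.vstack([
    rng.normal([0.0, 0.0], 0.3, size=(50, 2)),
    rng.normal([3.0, 3.0], 0.6, size=(50, 2)),
])

Z = linkage(pts, method="single")      # hierarchical merge structure
for t in (0.5, 1.0, 2.0, 4.0):         # candidate merge thresholds
    labels = fcluster(Z, t=t, criterion="distance")
    print(f"threshold {t}: {labels.max()} predicted cluster(s)")
```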

Our initial results and analysis support our hypothesis: the main effects of the multi-factor variables are significant, along with their interaction effects, and the main- and interaction-effect plots bear this out. This work is a variation on visual summary tasks and sensitivity analysis. Transferring these specialized findings about perception to generalized applications in visualization is ongoing work.


Steve Haroz (INRIA)

Extreme Visual-Linguistic Interference in Recall of Graph Information: In data visualization, gaze is directed towards text, and specifically titles, which can affect what is recalled from memory (Borkin, Bylinskii, et al. 2016; Matzen et al. 2017). In a more controlled setting, we investigate how the wording of a line graph's title impacts recall of the trend's slope. In an experiment, we showed subjects a simple graph with an increasing or decreasing trend, paired with a title that was either strongly stated ("Strong Increase in Contraceptive Use"), more neutral ("The Increase in Contraceptive Use"), or subtle ("Slight Increase in Contraceptive Use"). To avoid rehearsal, subjects then performed a series of letter recall tasks before being asked to recall the title and choose between two slopes, one shallower and one steeper than the original. In trials with strongly stated titles, subjects were biased towards reporting a steeper slope, whereas we found no such effect in the neutral or subtle conditions.


Ron Rensink (University of British Columbia)

Visual Features as Information Carriers: To determine the extent to which visual features can carry quantitative information, observers can be tested on their ability to estimate and discriminate Pearson correlation in graphical representations of artificial datasets. In these representations, the first dimension of each data element is represented by the horizontal position of a short vertical line (as in a scatterplot), while the second is represented by some visual feature (e.g., orientation or color). In the tests here, the visual feature used was physical luminance (intensity). Results show performance that is broadly similar to that for scatterplots (Rensink, 2017): just noticeable differences (JNDs) were roughly proportional to the distance from perfect correlation, and estimated correlations were logarithmic functions of the same quantity. These results support the proposal that luminance can be an information carrier—i.e., it can convey quantitative information in much the same way as spatial position—although its inherent noise may be greater. They also support the suggestion that the information obtained from the graphical representation is extracted by processes involved in ensemble coding, with at least some degree of binding between the luminance and position of each ensemble element.
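In schematic form, the two regularities described here can be written as below, where r is the true Pearson correlation and k and λ are free parameters fit per visual feature. The exact functional forms are in the cited work; the estimation formula here is merely one Fechner-style function that is logarithmic in distance from perfect correlation, shown for orientation only.

$$\mathrm{JND}(r) \approx k\,(1 - r), \qquad \hat{g}(r) \approx 1 - \lambda\,\ln\!\left(1 + \frac{1 - r}{\lambda}\right)$$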


Alex Kale (University of Washington)

Benefits of Bayesian Hierarchical Models for Inferring Just-Noticeable Differences: Vision scientists and data visualization researchers often use just-noticeable differences (JNDs) – the amount of change in a feature of a stimulus (e.g., color, orientation, motion) that is detectable to an observer with 75% accuracy – as a measure of observers' perceptual sensitivity to changes in a stimulus. Researchers estimate JNDs by modeling accuracy as a function of stimulus intensity (i.e., units of change in the facet of the stimulus under study), a process called fitting psychometric functions (PFs). We argue that the current procedure for fitting PFs is lossy: researchers fit PFs to observer responses, then use only the estimated JNDs to make statistical inferences about the perceptual sensitivity of a population of observers. By separating the estimation of JNDs and statistical inference into two different models, researchers are discarding important information about uncertainty in PF fits. This can lead to overconfidence in the reliability of average JND estimates at the population level. We argue that fitting PFs and statistical inference should be integrated into a single Bayesian hierarchical regression model. Such a model would simultaneously estimate PFs for individual observers, hyperparameters describing the population of observers (e.g., the mean and standard deviation of JNDs), and the average impact of experimental manipulations like visualization conditions on JNDs. We demonstrate this approach with data collected in a recent crowdsourced experiment on uncertainty visualization. Incorporating JND estimation with statistical inference inside one unified model will lead to more conservative estimates of effects in studies of perceptual sensitivity and more accurate characterization of uncertainty in those estimates.
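A minimal sketch of this kind of model on synthetic data (my illustration, not the authors' model or code). It assumes a logistic psychometric function, for which the JND – the distance from the 50% point to the 75% point – equals the slope scale times ln 3, and uses PyMC for inference:

```python
# Hierarchical psychometric-function fit: per-observer slopes are partially
# pooled toward population hyperparameters, and JNDs are derived inside the
# same posterior. A sketch under assumptions, not the authors' exact model.
import numpy as np
import pymc as pm

rng = np.random.default_rng(0)
n_obs, n_trials = 8, 120
true_s = np.exp(rng.normal(np.log(0.5), 0.3, n_obs))    # per-observer slopes
obs = np.repeat(np.arange(n_obs), n_trials)             # observer per trial
x = rng.uniform(-2, 2, n_obs * n_trials)                # signed intensity
y = rng.binomial(1, 1 / (1 + np.exp(-x / true_s[obs]))) # simulated responses

with pm.Model():
    # Population-level hyperpriors over log slope scales
    mu_log_s = pm.Normal("mu_log_s", 0.0, 1.0)
    sd_log_s = pm.HalfNormal("sd_log_s", 1.0)
    # Per-observer slopes, partially pooled toward the population mean
    log_s = pm.Normal("log_s", mu_log_s, sd_log_s, shape=n_obs)
    s = pm.math.exp(log_s)
    # Logistic PF links stimulus intensity to response probability
    pm.Bernoulli("y", p=pm.math.invlogit(x / s[obs]), observed=y)
    # JND of a logistic PF: slope scale times ln(3)
    pm.Deterministic("jnd", s * np.log(3.0))
    trace = pm.sample(1000, tune=1000, target_accept=0.9)

print(trace.posterior["jnd"].mean(dim=("chain", "draw")))
```

Because the per-observer slopes, the population hyperparameters, and the derived JNDs are estimated in one model, uncertainty in each PF fit propagates directly into the population-level estimates, which is exactly the benefit the abstract argues for.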


Eric Alexander (Carleton College)

Exploring Crowding Effects on Font Size Encodings: Word clouds are a form of text visualization that take a set of words sized according to some data attribute (often frequency within a document) and jumble them together. As a visualization technique, they are often used and often maligned. Proponents say they give a playful and engaging high-level overview, while detractors dismiss them as “pop” visualizations that fail to help users perform the tasks for which they are often employed. Empirical evidence for these stances has been varied: word clouds have been shown to be bad at search tasks, but decent at helping users perceive the gist of a group of related words.

It is still partially in question whether people can read the data being encoded in a word cloud accurately. Given the ways in which two words’ shapes may vary aside from their font size, font size encodings have in the past been thought too difficult for users to read. Despite this impression, in work published in 2017, we found that across 17 experiments covering a wide range of word shape variations, participants were able to accurately perceive even minute differences in font size, seeming to suggest that font size encodings may be more effective than previously thought. However, there are a number of factors that we still need to investigate, including color, word orientation, semantic meanings of words, and more.

In collaboration with Professor Danielle Albers Szafir (University of Colorado-Boulder), I plan to explore one such additional factor: the effect of crowding on word cloud legibility. "Crowding" refers to the phenomenon in which it becomes much more difficult to identify visual targets when they are surrounded by other potential targets. As the density of a word cloud rises, then, it is possible that a participant's ability to make accurate judgments about font size encodings may decrease. This is particularly likely to happen in the periphery of vision. While the limitations of peripheral vision may not be an issue for tasks that only require identification of a single word's font size, higher-level tasks like gist-forming are likely to incorporate perception not just of a central word, but of the words around it as well. Understanding the limits of perception when scanning dense word clouds is crucial for identifying their utility in real-world tasks.
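A classic rule of thumb for crowding, not cited in the abstract but useful for intuition, is Bouma's law: two items interfere in peripheral vision when their spacing falls below roughly half the viewing eccentricity,

$$s_{\mathrm{critical}} \approx 0.5\,\varphi,$$

so a word sitting 10° into the periphery can be crowded by neighbors within about 5° of it – a bound that dense word clouds will routinely violate.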

We will be investigating these questions through a series of experiments conducted on Amazon’s Mechanical Turk. By presenting participants with word clouds of varying densities and asking them to make judgments about the words at both the center and peripheral edges of the clouds, we hope to be able to model how their performance deteriorates as a function of increased crowding. We believe this work lies at a fruitful juncture of visualization and vision science, and hope it will allow us to more responsibly deploy font size encodings in a wide variety of fields and settings.


Caitlyn McColeman (Northwestern University)

The Interaction between Visual Encodings and Value on Perceptual Biases in Ratio Judgements: Perhaps the most familiar example of studying the perception of data visualization comes from Cleveland and McGill (1984), who tested participants' ability to make ratio judgements using different types of perceptual encodings (including position, length, area, and shading). Their analysis was informed by earlier psychophysical studies in which the relationship between an objective change in stimulation and the subjective experience of that change was tested. In most senses, a single-unit increase is perceived non-linearly as values increase, a relationship famously formalized by Weber's Law.

Cleveland and McGill (1984) conjectured that bar graphs representing one portion of a filled whole elicit Weber's Law from two directions: one from the bottom and one from the top of the rectangle.
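One schematic way to formalize that conjecture (my reading, not the authors' exact model): Weber's Law states that the just-detectable change ΔI in a magnitude I satisfies ΔI/I = k, so an observer judging a stacked bar showing proportion p can anchor on whichever segment is smaller – the filled part (p, from the bottom) or the unfilled part (1 − p, from the top) – giving

$$\frac{\Delta I}{I} = k \quad\Longrightarrow\quad \sigma(p) \propto \min(p,\ 1 - p),$$

i.e., redrawing error should rise toward the middle of the scale from both ends.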

We have tested and extended this conjecture. In three conditions, we test participants' ability to redraw ratio values (from 1-99%). One of three types of graphs is shown briefly on the screen, and participants are then asked to redraw the presented value using a mouse cursor. In one condition (the "stacked bar" condition), a single stacked bar is shown, where the filled-in portion of the bar represents the ratio value of interest and the length of the frame around it represents the whole. In a second condition (the "side-by-side bar" condition), the ratio is again represented by a bar, but rather than being stacked within a frame, it is shown separately beside a larger bar that represents the whole. In a third, control condition (the "bar only" condition), we show only a single bar representing a magnitude, without any reference against which to report a ratio.

The results from the study were largely consistent with the psychophysical account: as values in the "bar only" condition increased, participants' redrawing errors increased non-linearly.

The “side-by-side bar” condition showed a general increase in error for values 0-40% and a decrease close to 50%, with an interesting bimodality in the remaining values (51-99%).

In the “stacked bar” condition, errors in participants’ drawings were mostly consistent with the prediction from Cleveland and McGill: essentially there were two Weber’s Law functions observed from the top and the bottom of the rectangle frame, excepting a remarkably accurate pattern of responses near 50%.

We propose that these findings may serve as evidence for categorical perception in the context of data visualization.


Madison Elliott (University of British Columbia)

Understanding Complex Ensemble Perception with Multi-Class Scatterplots: Our visual system rapidly extracts ensembles to help us understand our environment (Haberman & Whitney, 2012). However, it is not yet understood how multiple ensemble dimensions are used, or how attention can select one ensemble over another. As a first step, we investigated feature selection in attention for multi-dimensional ensembles. Specifically, we examined whether increasing featural differences, which aids perceptual grouping (Moore & Egeth, 1997), would boost selectivity for one ensemble over another.

The perception of correlation in scatterplots appears to be an ensemble process (Rensink, 2017), and adding an irrelevant set of data points causes interference (Elliott & Rensink, VSS 2016; 2017).  To investigate this more thoroughly, observers performed a correlation discrimination task for scatterplots containing both a “target” ensemble and an irrelevant “distractor” ensemble (Elliott & Rensink, VSS 2017) where target ensembles were distinguished by the color, shape, or color and shape combinations of their elements. Both tasks used ΔE from Szafir (2017) to create a precise experimental color space that takes into account stimulus area and mark type. Distractor colors varied in equal perceptual steps along three axes: luminance, chroma, and hue, which allowed us to investigate whether individual color dimensions influenced selection.
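For readers unfamiliar with ΔE: in its simplest (CIE76) form it is Euclidean distance in CIELAB, so "equal perceptual steps" means stepping a fixed ΔE along one axis at a time. The toy sketch below illustrates the idea with hypothetical colors and step sizes; Szafir's (2017) model additionally corrects discriminability for mark size and type, which this simplified version omits.

```python
# Illustrative sketch of "equal perceptual steps": CIE76 Delta-E is Euclidean
# distance in CIELAB, so fixed-distance steps along one axis (here L*, i.e.
# luminance) yield equally discriminable distractor colors. The target color
# and step size are hypothetical.
import numpy as np

def delta_e_cie76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in L*a*b* space."""
    return np.linalg.norm(np.asarray(lab1) - np.asarray(lab2), axis=-1)

target = np.array([60.0, 25.0, -15.0])          # hypothetical target color
step = 5.0                                      # Delta-E units per step
luminance_axis = np.array([1.0, 0.0, 0.0])      # vary L* only
distractors = target + step * np.outer(np.arange(1, 5), luminance_axis)
print(delta_e_cie76(distractors, target))       # -> [ 5. 10. 15. 20.]
```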

Below is a list of papers and presentations from members of VisXVision. A * marks the VisXVision member in author lists that also include non-member authors.

BELIV Workshop Papers

Anamaria Crisan & Madison Elliott* – How to evaluate an evaluation study? Comparing and contrasting practices in vis with those of other disciplines

Robert Kosara & Steve Haroz – Skipping the Replication Crisis in Visualization: Threats to Study Validity and How to Address Them

Steve Haroz – Open Practices in Visualization Research

IEEE VIS Conference Papers

Dominik Moritz*, Chenglong Wang, Greg L. Nelson, Halden Lin, Adam M. Smith, Bill Howe, & Jeffrey Heer – Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco

Hayeong Song and Danielle Albers Szafir* – Where’s My Data? Evaluating Visualizations with Missing Data

Alex Kale*, Francis Nguyen, Matthew Kay*, and Jessica Hullman – Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data

Brian Ondov, Nicole Jardine*, Niklas Elmqvist, and Steven Franconeri* – Face to Face: Evaluating Visual Comparison

Other Conference Presentations

Theresa-Marie Rhyne – Keynote Speaker at VisGuides – Formulating a Colorization Guide for VIS