Welcome to our website: a place where researchers can explore and promote a promising new partnership between visualization and vision science!
Here’s your 2020 guide to VIS-related content at V-VSS! Make sure you check out the following presentations:
The Set Size Bias in Ensemble Comparison (Or Why Showing Raw Data May Be Misleading) 
Abstract: Ensemble perception is characterized by the ability to rapidly estimate a summary statistic of a set without serially inspecting its items. But which stimulus properties influence how that summary is made? In a within-subject experiment with per-trial feedback, subjects chose which set had a larger average value. Using data visualizations as stimuli, subjects were asked which of two sets had a higher position (dot plots), a larger size (floating bar graphs), or a redundantly coded higher position and larger size (regular bar graphs). The experiment also varied set size (1 vs 1, 12 vs 12, 20 vs 20, 12 vs 20, and 20 vs 12), the mean difference between the sets (0 to 80 pixels in 10-pixel increments), and which set contained the largest single value. With 25 repetitions per condition, each subject ran over 5,000 trials. For single-item comparisons, position was unsurprisingly more precise than length alone. However, for set comparison, the noisiness of ensemble coding appears to overpower these differences, so position, length, and the redundant combination have indistinguishable discriminability, contradicting Cleveland & McGill (1984). Moreover, for all visual features, responses were biased towards the larger set. Previous results (Yuan, Haroz, & Franconeri, 2018) suggested that this bias is caused by estimating a sum or total area. But because the effect occurs in the position (dot plot) condition, where sum or total area is unhelpful, that model is unlikely. Additional analyses did not reveal a bias towards the set with the largest single value, the smallest single value, or the largest range of values. These results imply that the bias is holistic and not driven by simpler proxies. As showing raw data rather than only summary statistics is common advice in visualization design, the set size bias could cause people to misinterpret visualizations that do not have the same number of items in each group.
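As a quick sanity check on that trial count, the factorial design described in the abstract can be enumerated directly. This is a sketch: the full crossing of all factors, including the three encodings, is our reading of the abstract rather than a stated design table.

```python
from itertools import product

# Factors as described in the abstract (full crossing is an assumption)
encodings = ["position", "length", "redundant"]          # dot, floating bar, regular bar
set_sizes = ["1v1", "12v12", "20v20", "12v20", "20v12"]  # set-size pairings
mean_diffs = range(0, 81, 10)                            # 0..80 px in 10 px steps
largest_in = ["left", "right"]                           # which set holds the largest value
reps = 25                                                # repetitions per condition

conditions = list(product(encodings, set_sizes, mean_diffs, largest_in))
total_trials = len(conditions) * reps
print(total_trials)  # 3 * 5 * 9 * 2 * 25 = 6750, i.e. "over 5,000 trials"
```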
Toward a Science of Effect Size Perception: the Case of Introductory Psychology Textbooks 
Sarah Kerns, Elisabeth Grinspoon, Laura Germine & Jeremy Wilmer
Abstract: In an increasingly data-producing world, effective visualization is an indispensable tool for understanding and communicating evidence. While vision science already provides broad cognitive guideposts for graph design, graphs themselves raise new constraints and questions that remain relatively unexplored. One central question is how the magnitude, or effect size, of a difference is interpreted. Here, we take several steps toward understanding effect size perception via the case example of college-level Introductory Psychology textbooks, selected for their reach to millions of students per year. A survey of all 23 major introductory textbooks found that graphs of central tendency (means) indicate the distribution of individual data points less than five percent of the time, and are thus formally ambiguous with regard to effect size. To understand how this ambiguity is commonly interpreted, we needed a measure of effect size perception. After multiple rounds of piloting (45 total rounds, 300+ total participants), we settled on a drawing-based measure whereby participants “sketch hypothetical individual values, using dots” onto their own representation of a given bar graph. Effect sizes are then read out directly from these drawings, in standard deviation (SD) units, by a trained coder. Next, we selected two textbook graphs for their large effect sizes of 2.00 and 0.70 SDs, and, using this measure, we tested 112 educated participants. In their drawings we observed inflated, widely varying drawn effect sizes for both graphs, with median drawn effect sizes of 200% and 1143% of the true effect size and interquartile ranges of 100-300% and 714-2000%, respectively.
The present work therefore documents an influential domain where the norm in graphical communication is formally ambiguous with regard to effect size, develops a straightforward approach to measuring effect size perception, and provides an existence proof of widely varying, highly inflated effect size perceptions.
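To make the SD-unit readout concrete, here is a minimal sketch of how a drawn effect size could be computed from a participant's dots: the mean difference between the two drawn groups divided by their pooled standard deviation (Cohen's d). The `drawn_effect_size` helper and the sample dot values are illustrative, not the authors' actual coding procedure.

```python
from statistics import mean, stdev

def drawn_effect_size(group_a, group_b):
    """Cohen's d: mean difference between groups in pooled-SD units."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_b) - mean(group_a)) / pooled_var ** 0.5

# A hypothetical participant's drawn dot values for the two bars
a = [48, 50, 52, 49, 51]
b = [58, 60, 62, 59, 61]
d = drawn_effect_size(a, b)
true_d = 2.00  # the textbook graph's true effect size, in SD units
print(round(d, 2), f"{d / true_d:.0%} of the true effect size")
```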
Further Evidence that Probability Density Shape is a Proxy for Correlation 
Madison Elliott & Ronald Rensink
Abstract: Previous work demonstrated a discrimination performance cost for selecting “target” correlation populations among irrelevant “distractor” populations in two-class scatterplots (Elliott, 2016). This cost cannot be eliminated by increasing featural differences, e.g., color, between the dots in the two populations (Elliott & Rensink, VSS 2018). These findings do not agree with predictions from feature-based attention models (Wolfe, 1994), motivating us to investigate whether feature information can in fact be used to select target correlation populations. Observers performed a correlation discrimination task for scatterplots containing a target and a distractor population. Both populations had the same mean, standard deviation, color, and number of dots; the resulting two-class plots were distinguished by the correlation of the target population only. In the first of two counterbalanced conditions, targets were more correlated than the distractors; in the second, they were less. Results showed that observers can successfully discriminate two-class plots based on the correlation of their target populations. Increased JNDs were found when targets had higher correlations than distractors, replicating the results of Elliott (2016); however, there was no cost for targets with lower correlations. This asymmetry supports the proposal (Rensink, 2017) that estimation of correlation in scatterplots is based on the width of the probability density function corresponding to the dot cloud; for a two-class plot this appears to be a single density function dominated by the width of the lower-correlation (and thus wider) population. In addition, there is a resistance to feature selection: performance is the same regardless of the difference in features between target and distractor populations. 
This suggests that a two-class scatterplot is coded as a single ensemble, with observers unable to select items based on the value of their features because ensemble structure is prioritized over item-level feature information (Brady & Alvarez, 2011).
Correlation structure does not affect number judgment 
Ford Atwater, Madison Elliott, Caitlin Coyiuto & Ronald Rensink
Abstract: Past work showed a performance cost for discriminating two-class scatterplots formed of a “target” population and an irrelevant “distractor” population (Elliott, 2016). As long as a second population was present, discrimination was the same regardless of color similarity between the two, resulting in “all-or-nothing performance” (Elliott & Rensink, VSS 2019). When the same dots were randomly distributed in a numerosity estimation task, severe performance costs were found for similar target and distractor colors, but no cost for opposite-colored distractors, showing “graded performance” similar to that encountered in visual search (Duncan & Humphreys, 1989). A possible explanation for this difference is that the structured spatial distributions of stimuli in the correlation task differed from the random distributions in the number estimation task. This prompted us to investigate whether numerosity estimates would be affected by correlation structure, and vice versa. Observers performed two within-subjects tasks: a correlation task and a numerosity discrimination task. Crucially, the stimuli were always the same: two-class scatterplots with target and distractor populations distinguished by color. Trial blocks were fully counterbalanced according to target dot number (50, 100, 150) and target dot correlation (.3, .6, .9). Distractor populations were always drawn with 100 dots at Pearson’s r = .3. Results showed that correlation JND slopes were unaffected by the number of dots in the target population, and JND intercepts increased as the number of dots decreased, consistent with one-class scatterplot performance (Rensink, 2017). Number JNDs increased with the number of dots, consistent with past findings (Feigenson et al., 2004). Critically, number JNDs did not vary with target correlation, showing that the structure and geometric density of the target population do not affect our ability to select and estimate number information.
Shape size judgments are influenced by fill and contour closure 
David Burlinson & Danielle Szafir
Abstract: Shape and size are two important visual channels that underpin our ability to reason about the visual world in settings ranging from natural scenes to informational charts and diagrams. Strong evidence in the vision science (Makovski, 2017) and data visualization (Smart & Szafir, 2019) literature suggests that shape and size perception are inextricably linked, with asymmetric influences of each channel upon the other. To better understand these influences and begin exploring the visual features that contribute to them, we designed an experiment to address the following question: how do people judge the size of simple 2D shapes varying in geometric properties at multiple scales? We asked 82 subjects on Amazon Mechanical Turk to adjust different target shapes until they appeared the same size as an array of homogeneous background shapes, and collected response data on combinations of representative filled, unfilled, and open shape categories at three levels of size. We analyzed the delta between target and background shape size judgments using a generalized linear model, and found statistically significant effects of size, target and background shape features, and the interaction of shape and size (all p < .001). As shape size increased, the delta between target and background shapes decreased. In medium and large conditions, open shapes needed to be made smaller to appear the same size as filled or unfilled shapes, lending support to the open-object illusion. These findings have practical and theoretical implications. Visualization tools and designers would benefit from sets of symbols normalized for perceptual size at different scales; future studies can explore more situated tasks and contexts to that end.
Furthermore, theories underlying shape perception should account for characteristics such as visual density, geometric properties, and contour closure, as these features produced significant differences in perceived size in this study.
Our workshop was recorded and is now available for viewing and download here.
Sponsored by: Adobe Inc., CU Boulder VisuaLab, Northwestern U Visual Thinking Lab
Following the last three years of fun interdisciplinary events at both VIS and VSS, we are holding the first official event at VIS for collaboration, interaction, and peer-reviewed research-sharing between vision scientists and visualization researchers. The specific goal of this workshop is to provide a forum where vision science and visualization researchers can share cutting-edge research at this interdisciplinary intersection, in preparation for publishing and presenting it at IEEE VIS as well as in the upcoming Journal of Vision Special Issue.
Note: paper drafts can be downloaded via IEEE Xplore and here with the workshop password.
Invited Speaker Abstracts and Slides:
Visual search – Jeremy Wolfe
Decades of research on visual search have given us quite a good understanding of how people look for targets in scenes containing distracting items. Knowing how people search is not the same as knowing how to design searchable visual stimuli, especially if we want users to be able to search those stimuli for a variety of different targets. Still, the topics of search and searchability must be related so we will explore what the rules governing the deployment of visual attention might suggest to the creators of new visualizations.
Working memory – Timothy Brady
When processing complex visual displays, people often need to hold information actively in mind to facilitate comparison or integration. Decades of research have shown that our ability to hold information actively in mind is incredibly limited (e.g., we can miss large changes to scenes if we happen to not be holding in mind the right information), and simple rules like people can remember 3-4 things are popular ways to conceive of these limits. In this talk, I discuss what aspects of visual information people can easily hold in mind; what things are extremely difficult to hold in mind; and how these limits relate to visualization design.
Visual magnitudes – Darko Odic
The perception of visual magnitudes – length, area, time, number, etc. – has been one of the foundational questions since the dawn of empirical psychology, stretching back from Weber and Helmholtz to today. In this talk, I will share a number of insights, new and old, about how we perceive number, time, and space representations throughout our entire lifespan, focusing especially on issues that might be relevant for data visualization. I will first discuss findings about how our perceptual system deals with competing magnitude dimensions: situations in which, e.g., both number and length are competing for attention. Next, I will share several findings demonstrating that surface area perception is susceptible to various surprising inconsistencies and illusions, whereby we perceive collections of objects to be cumulatively smaller than they really are. Finally, I will share findings on how perceptual magnitude representations allow us to easily find the maximal and minimal element in a set.
Thanks to everyone who attended this event!
2:00-2:05 pm: Introduction
2:05-2:11 pm: Open vs Closed Perceptual Categories Persist in the Context of Overplotting and with Real-World Data – David Burlinson
2:17-2:23 pm: Perceptual averaging in visual communication: Ensemble representations in the perception of scientific data in graphs – Stefan Uddenberg
2:23-2:29 pm: Data Visualization in Introductory Psychology Textbooks – Jeremy Wilmer
2:29-2:35 pm: Using Analogies to Teach Novel Graphics – Emily Laitin
2:35-2:41 pm: Missing the forest and the trees in animated charts – Nicole Jardine
2:41-2:47 pm: Exploring Attention on Large-scale Visualizations using ZoomMaps, a Zoomable Crowdsourced Interface – Anelise Newman
2:47-2:53 pm: The role of spatial organization for interpreting colormap data visualizations – Shannon Sibrel
2:53-3:08 pm: All presenters on panel Q&A
3:08-3:23 pm: Guest speaker Brian Fisher
3:23-3:38 pm: JOV special edition editor presentation and Q&A
3:38-4:30 pm: “Meet & Greet” with refreshments
This event was sponsored by Adobe Inc., the Visual Thinking Lab at Northwestern, and Colorado Boulder’s VisuaLab.
Slides are not available from this meetup, as many presenters gave talks on unpublished and ongoing work for critique and feedback.
Selected Lightning Talk Abstracts
Maureen Stone (Tableau Research)
Rainbows Gone Good: Multi-hued color gradients applied to spatial data (aka rainbow color maps) can create a lot of problems, which are well and vigorously documented. But let’s not throw out the visual power of hue shifting just because most rainbow designs are bad. It’s possible to design good “rainbows,” especially for dark backgrounds. I’ll show some examples recently designed for Tableau’s density mark (aka heat map) feature.
Robert Kosara (Tableau Research)
Distortion for Science: 3D Pie Charts to Settle the Arc vs. Area Question
Ghulam Jilani Quadri (University of South Florida)
Modelling Cluster Multi-factor Perception in Scatterplots using Merge Trees: In the representation of information, design choices matter for effectively communicating that information. Human perception plays an important role in what we infer from a visualization, and a better understanding of perception helps visualization design both quantitatively and qualitatively. Surveying the last decade of information visualization, many perception-based experiments have been carried out to understand the effectiveness of visualizations with respect to viewers. Understanding a viewer’s ability to rapidly and accurately identify the clusters in a scatterplot is the theme of this work.
We present a rigorous empirical study on the visual perception of clustering in scatterplots, modeled around a topological data structure known as the merge tree. We tested our cluster identification model on a variety of multi-factor variables to understand their main and interaction effects. Our two-stage psychophysical study consists of a calibration experiment and an Amazon Mechanical Turk (AMT) study. Based on the calibration results, we selected the multi-factor variable ranges that would be most informative in the wider AMT experiments.
Factors: the number of data points, point size, and distribution (i.e., cluster sigma) are studied to understand their main and interaction effects on the identification of features in a visual encoding. We performed experiments measuring the effect of cluster distance and visual density on cluster identification in scatterplots.
Evaluation and analysis of the results fall into two categories: 1) a distance-based model and 2) a density-based model. The distance-based model analyzes the effect of the variables (number of data points, size of data points, and distribution) with respect to the distance between assumed cluster centers; the density-based model uses topology to study the effect of visual density and to predict the number of clusters that will be visible. The merge tree provides the threshold value used to predict the number of clusters based on the perceptual analysis. Using these numbers, we can redesign and test the model.
Our initial results and analysis support our hypothesis: the main effects of the multi-factor variables are significant, as are the interaction effects, and the main- and interaction-effect plots bear this out. This work is a variation on visual summary tasks and sensitivity analysis. Transferring these specialized findings about perception to general visualization applications is ongoing work.
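As a simplified illustration of the density-based idea (ours, in 1D; the work itself uses 2D scatterplot densities and full merge trees): for any threshold, the number of superlevel-set components of a density profile predicts how many clusters are visible at that threshold. The `clusters_at_level` helper and the toy density profile are illustrative.

```python
def clusters_at_level(density, level):
    """Count connected components of the superlevel set {i : density[i] >= level}.
    Each component corresponds to one candidate cluster at this threshold."""
    count, inside = 0, False
    for d in density:
        if d >= level:
            if not inside:
                count += 1
                inside = True
        else:
            inside = False
    return count

# A toy 1D density profile with three bumps of decreasing height
density = [0.1, 0.9, 0.2, 0.8, 0.1, 0.7, 0.05]
for level in (0.05, 0.5, 0.85):
    print(level, clusters_at_level(density, level))
```

Raising the threshold merges or drops shallow bumps, which is exactly the structure the merge tree records.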
Steve Haroz (INRIA)
Extreme Visual-Linguistic Interference in Recall of Graph Information: In data visualization, gaze is directed towards text, and specifically titles, which can affect what is recalled from memory (Borkin, Bylinskii, et al. 2016; Matzen et al. 2017). In a more controlled setting, we investigate how the wording of a line graph’s title impacts recall of the trend’s slope. In an experiment, we showed subjects a simple graph with an increasing or decreasing trend, paired with a title that was either strongly stated (“Strong Increase in Contraceptive Use”), more neutral (“The Increase in Contraceptive Use”), or subtle (“Slight Increase in Contraceptive Use”). To avoid rehearsal, subjects then performed a series of letter recall tasks before being asked to recall the title and choose between two slopes, one shallower and the other steeper than the original. In trials with strongly stated titles, subjects were biased towards reporting a steeper slope, whereas we did not find an effect for the neutral or subtle conditions.
Ron Rensink (University of British Columbia)
Visual Features as Information Carriers: To determine the extent to which visual features can carry quantitative information, observers can be tested on their ability to estimate and discriminate Pearson correlation in graphical representations of artificial datasets. In these representations, the first dimension of each data element is represented by the horizontal position of a short vertical line (as in a scatterplot), while the second is represented by some visual feature (e.g., orientation or color). In the tests here, the visual feature used was physical luminance (intensity). Results show performance that is broadly similar to that for scatterplots (Rensink, 2017): just noticeable differences (JNDs) were roughly proportional to the distance from perfect correlation, and estimated correlations were logarithmic functions of the same quantity. These results support the proposal that luminance can be an information carrier—i.e., it can convey quantitative information in much the same way as spatial position—although its inherent noise may be greater. They also support the suggestion that the information obtained from the graphical representation is extracted by processes involved in ensemble coding, with at least some degree of binding between the luminance and position of each ensemble element.
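For readers who want to reproduce stimuli of this kind, data with a target Pearson correlation can be synthesized by mixing a shared Gaussian component with independent noise; x would then drive horizontal position and y the luminance of each line. This is a generic sketch with illustrative names, not the authors' stimulus code.

```python
import math, random

def correlated_pairs(r, n, seed=0):
    """Generate n (x, y) pairs whose population Pearson correlation is r."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        x = rng.gauss(0, 1)
        # Mix a shared component (weight r) with independent noise
        y = r * x + math.sqrt(1 - r * r) * rng.gauss(0, 1)
        pairs.append((x, y))
    return pairs

def pearson(pairs):
    """Sample Pearson correlation of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sx = math.sqrt(sum((x - mx) ** 2 for x, _ in pairs))
    sy = math.sqrt(sum((y - my) ** 2 for _, y in pairs))
    return sxy / (sx * sy)

data = correlated_pairs(0.8, 5000)
print(round(pearson(data), 2))  # sample r close to the target 0.8
```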
Alex Kale (University of Washington)
Benefits of Bayesian Hierarchical Models for Inferring Just-Noticeable Differences: Vision scientists and data visualization researchers often use just-noticeable differences (JNDs), the amount of change in a feature of a stimulus (e.g., color, orientation, motion) that is detectable to an observer with 75% accuracy, as a measure of observers’ perceptual sensitivity to changes in a stimulus. Researchers estimate JNDs by modeling accuracy as a function of stimulus intensity (i.e., units of change in the facet of the stimulus under study), a process called fitting psychometric functions (PFs). We argue that the current procedure for fitting PFs is lossy: researchers fit PFs to observer responses and then use only the estimated JNDs to make statistical inferences about the perceptual sensitivity of a population of observers. By separating JND estimation and statistical inference into two different models, researchers discard important information about uncertainty in the PF fits. This can lead to overconfidence in the reliability of average JND estimates at the population level. We argue that PF fitting and statistical inference should be integrated into a single Bayesian hierarchical regression model. Such a model would simultaneously estimate PFs for individual observers, hyperparameters describing the population of observers (e.g., the mean and standard deviation of JNDs), and the average impact of experimental manipulations, such as visualization conditions, on JNDs. We demonstrate this approach with data collected in a recent crowdsourced experiment on uncertainty visualization. Incorporating JND estimation and statistical inference into one unified model will lead to more conservative estimates of effects in studies of perceptual sensitivity and more accurate characterization of uncertainty in those estimates.
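To make the two-step procedure concrete, here is a minimal, non-Bayesian sketch of the first step the abstract critiques: fit a simple psychometric function to simulated 2AFC responses by grid-search maximum likelihood, then read off the JND where accuracy crosses 75%. The exponential-shaped PF and all parameter values are illustrative; the proposed hierarchical model would replace this per-observer point estimate with a joint posterior.

```python
import math, random

def p_correct(x, tau):
    """Toy 2AFC psychometric function: 50% (guessing) at x = 0, saturating at 100%."""
    return 1 - 0.5 * math.exp(-x / tau)

def fit_tau(intensities, responses, grid):
    """Grid-search maximum-likelihood estimate of tau."""
    def loglik(tau):
        ll = 0.0
        for x, correct in zip(intensities, responses):
            p = min(max(p_correct(x, tau), 1e-9), 1 - 1e-9)
            ll += math.log(p if correct else 1 - p)
        return ll
    return max(grid, key=loglik)

# Simulate one observer's trials
rng = random.Random(1)
true_tau = 4.0
xs = [rng.uniform(0, 20) for _ in range(2000)]
resp = [rng.random() < p_correct(x, true_tau) for x in xs]

tau_hat = fit_tau(xs, resp, grid=[t / 10 for t in range(10, 100)])
jnd = tau_hat * math.log(2)  # intensity where p_correct crosses 75%
print(round(tau_hat, 1), round(jnd, 2))
```

The lossy part is the last line: only `jnd` moves on to the group-level analysis, while the likelihood surface around `tau_hat` is thrown away.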
Eric Alexander (Carleton College)
Exploring Crowding Effects on Font Size Encodings: Word clouds are a form of text visualization that take a set of words sized according to some data attribute (often frequency within a document) and jumble them together. As a visualization technique, they are often used and often maligned. Proponents say they give a playful and engaging high-level overview, while detractors dismiss them as “pop” visualizations that fail to help users perform the tasks for which they are often employed. Empirical evidence for these stances has been varied: word clouds have been shown to be bad at search tasks, but decent at helping users perceive the gist of a group of related words.
It is still partially in question whether people can read the data being encoded in a word cloud accurately. Given the ways in which two words’ shapes may vary aside from their font size, font size encodings have in the past been thought too difficult for users to read. Despite this impression, in work published in 2017, we found that across 17 experiments covering a wide range of word shape variations, participants were able to accurately perceive even minute differences in font size, seeming to suggest that font size encodings may be more effective than previously thought. However, there are a number of factors that we still need to investigate, including color, word orientation, semantic meanings of words, and more.
In collaboration with Professor Danielle Albers Szafir (University of Colorado-Boulder), I plan to explore one such additional factor: the effect of crowding on word cloud legibility. “Crowding” refers to the phenomenon in which it becomes much more difficult to identify visual targets when they are surrounded by other potential targets. As the density of a word cloud rises, then, it is possible that a participant’s ability to make accurate judgments about font size encodings may decrease. This is particularly likely to happen at the periphery of their vision. While the limitations of peripheral vision may not be an issue for tasks that only require identification of a single word’s font size, higher-level tasks like gist-forming are likely to incorporate perception not just of a center word, but of the words around it as well. Understanding the limits of perception when scanning dense word clouds is crucial for identifying their utility in real-world tasks.
We will be investigating these questions through a series of experiments conducted on Amazon’s Mechanical Turk. By presenting participants with word clouds of varying densities and asking them to make judgments about the words at both the center and peripheral edges of the clouds, we hope to be able to model how their performance deteriorates as a function of increased crowding. We believe this work lies at a fruitful juncture of visualization and vision science, and hope it will allow us to more responsibly deploy font size encodings in a wide variety of fields and settings.
Caitlyn McColeman (Northwestern University)
The Interaction between Visual Encodings and Value on Perceptual Biases in Ratio Judgements: Perhaps the most familiar example of studying the perception of data visualization comes from Cleveland and McGill (1984), wherein they tested participants’ ability to make ratio judgements using different types of perceptual encodings (including position, length, area, and shading). Their analysis was informed by earlier psychophysical studies in which the relationship between the objective change in stimulation and the subjective experience of that change was tested. For most senses, the perceptual impact of a single-unit increase in stimulation changes non-linearly as values increase, a relationship famously formalized as Weber’s Law.
Cleveland and McGill (1984) conjectured that bar graphs representing one portion of a filled whole elicit Weber’s Law from two directions: one from the bottom and one from the top of the rectangle.
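The conjecture can be written down directly: if redrawing error grows Weber-like with distance from an anchor, and a stacked bar offers anchors at both edges of the frame, the expected error for a ratio v (in percent) is governed by the nearer edge, giving an error profile that peaks near 50%. The Weber fraction k below is illustrative, not a fitted value.

```python
def weber_error(v, k=0.05):
    """Single-anchor Weber-like error: proportional to the value itself."""
    return k * v

def two_anchor_error(v, k=0.05):
    """Cleveland & McGill's conjecture: error anchored at both the bottom (0%)
    and the top (100%) of the frame, so the nearer edge dominates."""
    return k * min(v, 100 - v)

profile = [two_anchor_error(v) for v in range(0, 101, 10)]
print(profile)  # rises toward 50%, then falls symmetrically
```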
We have tested and extended the above conjecture. In three conditions, we test participants’ ability to redraw ratio values (from 1-99%). One of three types of graphs is shown briefly on the screen, and then participants are asked to redraw the presented value using a mouse cursor. In one condition (the “stacked bar” condition), a single stacked bar is shown, where the filled-in bar represents the ratio value of interest and the length of the frame around it represents the whole. In a second condition (the “side-by-side bar” condition), the ratio is again represented by a bar, but rather than being stacked within a frame, it is shown separately alongside a larger bar that represents the whole. In a third, control condition (the “bar only” condition), we show only a single bar representing a magnitude, without any reference with which to report a ratio.
The results from the study were largely consistent with the psychophysical take: as values in the “bar only” condition increased, errors from participants redrawing them increased non-linearly.
The “side-by-side bar” condition showed a general increase in error for values 0-40% and a decrease close to 50%, with an interesting bimodality in the remaining values (51-99%).
In the “stacked bar” condition, errors in participants’ drawings were mostly consistent with the prediction from Cleveland and McGill: essentially there were two Weber’s Law functions observed from the top and the bottom of the rectangle frame, excepting a remarkably accurate pattern of responses near 50%.
We propose that these findings may serve as evidence for categorical perception in the context of data visualization.
Madison Elliott (University of British Columbia)
Understanding Complex Ensemble Perception with Multi-Class Scatterplots: Our visual system rapidly extracts ensembles to help us understand our environment (Haberman & Whitney, 2012). However, it is not yet understood how multiple ensemble dimensions are used, or how attention can select one ensemble over another. As a first step, we investigated feature selection in attention for multi-dimensional ensembles. Specifically, we examined whether increasing featural differences, which aids perceptual grouping (Moore & Egeth, 1997), would boost selectivity for one ensemble over another.
The perception of correlation in scatterplots appears to be an ensemble process (Rensink, 2017), and adding an irrelevant set of data points causes interference (Elliott & Rensink, VSS 2016; 2017). To investigate this more thoroughly, observers performed a correlation discrimination task for scatterplots containing both a “target” ensemble and an irrelevant “distractor” ensemble (Elliott & Rensink, VSS 2017) where target ensembles were distinguished by the color, shape, or color and shape combinations of their elements. Both tasks used ΔE from Szafir (2017) to create a precise experimental color space that takes into account stimulus area and mark type. Distractor colors varied in equal perceptual steps along three axes: luminance, chroma, and hue, which allowed us to investigate whether individual color dimensions influenced selection.
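For context, Szafir's (2017) ΔE builds on the standard CIELAB color difference, rescaling discriminability by mark size and type. Here is a sketch of just the underlying CIE76 distance; the mark-size rescaling is not shown, and the sample colors are illustrative.

```python
import math

def cie76_delta_e(lab1, lab2):
    """Basic CIE76 color difference: Euclidean distance in CIELAB.
    Szafir (2017) adjusts discriminability thresholds by mark size and type;
    that rescaling is not shown here."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

# Two illustrative CIELAB colors differing in a* and b* only
target = (60.0, 40.0, 20.0)
distractor = (60.0, 46.0, 28.0)
print(round(cie76_delta_e(target, distractor), 1))  # 10.0
```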
Below is a list of papers and presentations from members of VisXVision. A * marks the VisXVision member in author lists that include non-members.
Anamaria Crisan & Madison Elliott* – How to evaluate an evaluation study? Comparing and contrasting practices in vis with those of other disciplines
Robert Kosara & Steve Haroz – Skipping the Replication Crisis in Visualization: Threats to Study Validity and How to Address Them
Steve Haroz – Open Practices in Visualization Research
Dominik Moritz*, Chenglong Wang, Greg L. Nelson, Halden Lin, Adam M. Smith, Bill Howe, & Jeffrey Heer – Formalizing Visualization Design Knowledge as Constraints: Actionable and Extensible Models in Draco
Hayeong Song and Danielle Albers Szafir* – Where’s My Data? Evaluating Visualizations with Missing Data
Alex Kale*, Francis Nguyen, Matthew Kay*, and Jessica Hullman – Hypothetical Outcome Plots Help Untrained Observers Judge Trends in Ambiguous Data
Brian Ondov, Nicole Jardine*, Niklas Elmqvist, and Steven Franconeri* – Face to Face: Evaluating Visual Comparison
Theresa-Marie Rhyne – Keynote Speaker at VisGuides – Formulating a Colorization Guide for VIS
Attentional Selection of Multiple Correlation Ensembles
Madison Elliott & Ron Rensink
Effects of title wording on memory of trends in line graphs
Anelise Newman, Zoya Bylinskii, Steve Haroz, Spandan Madan, Fredo Durand, Aude Oliva
Interpreting color-coding systems: the effects of concept activation on color inference
Kathleen Foley, Laurent Lessard, Karen B. Schloss
We’ve posted slides from the four presenters, as well as the original description from our submission, below…
Where do people look on data visualizations?
(Aude Oliva & Zoya Bylinskii)
Segmentation, structure, and shape perception in data visualizations
Color Perception in Data Visualizations
Data is ubiquitous in the modern world, and its communication, analysis, and interpretation are critical scientific issues. Visualizations leverage the capabilities of the visual system, allowing us to intuitively explore and generate novel understandings of data in ways that fully-automated approaches cannot. Visualization research builds an empirical framework around design guidelines, perceptual evaluation of design techniques, and a basic understanding of the visual processes associated with viewing data displays. Vision science offers the methodologies and phenomena that can provide foundational insight into these questions. Challenges in visualization map directly to many vision science topics, such as finding data of interest (visual search), estimating data means and variance (ensemble coding), and determining optimal display properties (crowding, salience, color perception). Given the growing interest in psychological work that advances basic knowledge and allows for immediate translation, visualization provides an exciting new context for vision scientists to confirm existing hypotheses and explore new questions. This symposium will illustrate how interdisciplinary work across vision science and visualization simultaneously improves visualization techniques while advancing our understanding of the visual system, and inspire new research opportunities at the intersection of these two fields.
Historically, the crossover between visualization and vision science relied heavily on canonical findings, but this has changed significantly in recent years. Visualization work has recently incorporated and iterated on newer vision research, and the results have been met with great excitement from both sides (e.g., Rensink & Baldridge, 2010; Haroz & Whitney, 2012; Harrison et al., 2014; Borkin et al., 2016; Szafir et al., 2016). Unfortunately, very little of this work is presented regularly at VSS, and there is currently no dedicated venue for collaborative exchanges between the two research communities. This symposium showcases the current state of vision science and visualization research integration, and aspires to make VSS a home for future exchanges. Visualization would benefit from sampling a wider set of vision topics and methods, while vision scientists would gain a new real-world context that simultaneously provokes insight about the visual system and holds translational impact.
Congratulations Danielle! Read her profile here 🙂
“Discover Pasteur’s Quadrant: Four research communities that will inspire your work”. (FYI: we’re expecting video and slides to be posted on the OPAM website sometime soon; for now, find a link to the program here.)
Tamara Munzner’s panel talk, “Data Visualization as a Driver for Visual Cognition Research”, was declared (by OPAM keynote speaker Jeremy Wolfe) to open millennia’s worth of dissertation material for visioneers. Find her slides here!