The 2017 IEEE VIS conference took place in Phoenix, AZ, USA during the first week of October (the 1st through the 6th). Many papers were presented, but this report touches on those that are interesting or immediately applicable to visualization practitioners (and my product team!). Admittedly, this list has a strong InfoVis bent. If you’re not sure what the distinction between VAST, InfoVis, and SciVis is… consider yourself lucky. Someone on the internet must have ranted about it before. If not, bug me and maybe I’ll take a stab!

Rather than organize this summary by day (which doesn’t really augment the experience for anyone but attendees), I’ll try to organize these highlights by topic. It’s important to note that there is very little research on the use of dashboards, which are currently the public face of visualization for much of the business and government world. The work by Zening Qu and Jessica Hullman at the University of Washington (discussed below) is a distinct outlier in this regard.

TOPICS

Keeping Multiple Views Consistent: Constraints, Validations, and Exceptions in Visualization Authoring

Zening Qu and Jessica Hullman

When creating or navigating around a dashboard, viewers often make changes to the data-to-view mappings that modify the aggregation granularity and the separation of data marks. In this scenario, the visualizations often become inconsistent with previous iterations: the displayed range of the data changes, the meanings of marks change, and comparison between filtered views becomes difficult. In this paper, the authors describe an organization of these inconsistencies and suggest potential solutions to resolve them.

Through a study of how people use visualization authoring tools (namely, Tableau and Excel), the authors identify a number of inconsistencies that arise between design iterations. However, they also identify several analysis scenarios where the inconsistencies are not critical for effectively communicating the data. They discuss these trade-offs and highlight the need for additional research to build understanding and consistency in this space.
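To make the flavor of these validations a bit more concrete, here is a minimal sketch (my own illustration, not the authors’ system) of a check for one inconsistency they describe: two views that encode the same field but display different axis ranges. The view structure and field names are hypothetical.

```python
# A minimal sketch (not the authors' system): flag one common inconsistency,
# where two views encode the same field but display different axis ranges.
# The `views` structure and field names here are hypothetical.

def find_domain_mismatches(views):
    """views: list of dicts like {"name": ..., "field": ..., "domain": (lo, hi)}."""
    by_field = {}
    for view in views:
        by_field.setdefault(view["field"], []).append(view)

    warnings = []
    for field, group in by_field.items():
        domains = {v["domain"] for v in group}
        if len(domains) > 1:  # same field, different displayed ranges
            names = ", ".join(v["name"] for v in group)
            warnings.append(f"'{field}' uses different axis domains across: {names}")
    return warnings

views = [
    {"name": "overview", "field": "sales", "domain": (0, 100)},
    {"name": "detail",   "field": "sales", "domain": (40, 60)},
]
print(find_domain_mismatches(views))
```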

Warning, Bias May Occur: A Proposed Approach to Detecting Cognitive Bias in Interactive Visual Analytics

Emily Wall, Leslie Blaha, Lyndsey Franklin, and Alex Endert

When using exploratory visualizations, it may be unclear whether the viewer has looked at all representative elements of a dataset. This paper frames the problem from a cognitive-bias perspective, proposing that the viewer develops an intuition about which entities are important to consider, a strategy that could preclude consideration of relevant items. As a solution, the presented system analyzes visualization usage to identify scenarios where large subspaces of the data or interactive elements are never utilized or inspected.

This is an interesting dual to the “blank slate” problem: “did you look at everything yet?” The paper demonstrates a strategy of keeping track of a viewer’s exploration through a dataset, which can subsequently be used to identify subspaces that lack coverage in the current view. Taking advantage of automatically-captured provenance to augment the viewer’s exploration is a fantastic way to move forward.
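As a rough sketch of the coverage idea (again, my own illustration rather than the authors’ metric), one could log which records a viewer has hovered or selected and then report the attribute ranges that were never touched:

```python
# A rough sketch of coverage tracking, not the paper's method: bin an attribute
# and flag bins that contain no records the viewer has interacted with.
# Field names and bin counts here are hypothetical.

def coverage_gaps(records, interacted_ids, attribute, n_bins=4):
    values = [r[attribute] for r in records]
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1  # guard against a degenerate range

    touched = set()
    for r in records:
        if r["id"] in interacted_ids:
            touched.add(min(int((r[attribute] - lo) / width), n_bins - 1))
    return [i for i in range(n_bins) if i not in touched]

records = [{"id": i, "price": p} for i, p in enumerate([5, 12, 18, 33, 47, 61, 74, 90])]
print(coverage_gaps(records, interacted_ids={0, 1, 2}, attribute="price"))
# -> the bins covering higher prices were never inspected
```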

Sequence Synopsis: Optimize Visual Summary of Temporal Event Data

Yuanzhe Chen, Panpan Xu, and Liu Ren

A paper that talks about creating visual summaries of temporal event data! You may know “temporal event data” from the electronic health record (EHR) domain, where these records signify medications, tests, health outcomes, and other types of events over time. Machine logs such as application telemetry or user activity logs are also quite cheap to instrument, but using these logs for exploration is very difficult; often these data are used reactively, as a post-mortem to a problem surfaced elsewhere.

Visualization of sequence data was a hot topic, with two temporal-sequence workshops held at VIS last year (2016): the EventEvent workshop and the Logging Interactive Visualizations & Visualizing Interaction Logs workshop. These workshops surely led to several papers on temporal sequence analysis. This paper in particular provides a representative taxonomy of operations that analysts want to perform with this log data, and provides some initial methods for supporting the interactive exploration of this data, even at scales of millions of events.

Scatterplots: Tasks, Data, and Designs

Alper Sarikaya and Michael Gleicher

(disclaimer: I am the first author!)

This paper stems from the fact that we couldn’t find any one place for scatterplot designs when we were attempting to empirically evaluate and compare several scatterplot-like designs to one another. As a result, we ended up using the visualization research literature as a resource to collect and abstract the factors of analysis task and data characteristics that affect appropriate scatterplot designs. Alongside these two factors, we also identified different components of scatterplot design and grouped them by how they affect the design: modifying the marks, grouping the marks, moving the marks, adding other graph amenities, and supporting interaction intents.

With these three factors of task, data, and design, we moved toward showing how the factors can be linked together to highlight incompatible designs, and how design strategies change as data characteristics change (e.g., increasing the number of points shown). The framework is purposely open for future empirical experimentation, but we believe the results are immediately actionable in visualization authoring tools, provided the designer inputs an analysis scenario (e.g., a collection of analysis tasks to support along with a set of data characteristics).
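To sketch how this might surface in an authoring tool, imagine mapping a declared analysis scenario to candidate design adjustments. The rules below are my own toy placeholders for illustration, not the actual linkage from the paper.

```python
# A toy sketch of how a task/data/design linkage could surface in an authoring
# tool: map a declared analysis scenario to candidate design adjustments.
# The rules below are illustrative placeholders, not the paper's actual guidance.

def suggest_designs(tasks, n_points, has_classes):
    suggestions = []
    if n_points > 10_000:
        # dense plots: individual marks stop being readable
        suggestions.append("aggregate marks (binning or density estimation)")
    if "identify_outliers" in tasks:
        suggestions.append("keep individual marks visible, or overlay outliers on the aggregate")
    if has_classes and "compare_classes" in tasks:
        suggestions.append("encode class with a categorical color mapping")
    return suggestions

print(suggest_designs(tasks={"identify_outliers", "compare_classes"},
                      n_points=50_000, has_classes=True))
```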

Considerations for Visualizing Comparison

Michael Gleicher

(disclaimer: Michael Gleicher was [okay, is forever!] my PhD advisor at the University of Wisconsin-Madison)

This is a quite theoretical paper that frames the problem of supporting comparison across a wide variety of visualizations. More or less, it explains our lab’s workflow and considerations when designing systems for stakeholders using a wide variety of visualization techniques. In the paper, four considerations of comparison are presented: the items being compared, the challenges in comparison, the strategy of comparison, and a supporting design for comparison. Like many visualization design papers (much like the scatterplot paper above), the organization is not the definitive way to scaffold designing for comparison, but it provides a framework for surfacing common design pitfalls.

The talk walks through quite a different path than the paper (although they discuss the same content!), so it may be worth it to watch the talk recording (17:50).

Priming and Anchoring Effects in Visualization

André Calero Valdez, Martina Ziefle, and Michael Sedlmair

This project investigates the cognitive effects of priming and anchoring when it comes to accuracy judgments about visualizations. They provide a nice overview of what priming and anchoring are (being influenced by a previous stimulus and being given a frame of reference, respectively), and discuss their iterative experimental design to uncover what role, if any, these biases play in interpreting visualizations. This paper is an example of an empirical study that uses crowdsourcing tools (in this case, Amazon’s Mechanical Turk) to rapidly collect a wide range of participants.

[put image here]

In the study, they use a cluster separability task in a scatterplot to gauge how priming and anchoring affect viewers’ judgments. In particular, they arrive at the initial conclusion that priming users tends to bias their judgment toward the initial stimulus they received. The experimental design leaves a bit to be desired, however, as the final experiment only collects four responses per participant and the factors were not within-subjects. Regardless, the project is a good example of iterating toward a way to test for biases in visualization interpretation.

TACO: Visualizing Changes in Tables Over Time

Christina Niederer, Holger Stitz, Reem Hourieh, Florian Grassinger, Wolfgang Aigner, and Marc Streit

This sort of system seems a long time coming: TACO presents a way to visualize comparisons in a table’s data and construction over time. Where the seminal Potter’s Wheel paper discusses data cleaning and interfaces for making anomalies in the data apparent, TACO tries to surface important changes in the table’s makeup, including added and removed columns and rows, changed cells, and so on.
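As a bare-bones sketch of the kind of table diff TACO visualizes (my own illustration, not their implementation), one could report added and removed columns along with changed cells between two snapshots:

```python
# A bare-bones sketch of a table diff between two snapshots (my illustration,
# not TACO's implementation): report added/removed columns and changed cells.

def diff_tables(old, new):
    """old, new: dicts mapping column name -> list of cell values (same row order assumed)."""
    added_cols = sorted(set(new) - set(old))
    removed_cols = sorted(set(old) - set(new))

    changed_cells = []
    for col in set(old) & set(new):
        for row, (a, b) in enumerate(zip(old[col], new[col])):
            if a != b:
                changed_cells.append((row, col, a, b))
    return added_cols, removed_cols, changed_cells

old = {"city": ["Phoenix", "Tucson"], "pop": [1600000, 530000]}
new = {"city": ["Phoenix", "Tucson"], "pop": [1626000, 530000], "state": ["AZ", "AZ"]}
print(diff_tables(old, new))
```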

They provide an organization of table modifications, and have an example video on their project page.

MyBrush: Brushing and Linking with Personal Agency

Philipp Koytek, Charles Perin, Jo Vermeulen, Elisabeth André, and Sheelagh Carpendale

The MyBrush system enables extremely flexible linking between discrete visualizations through an interactive brushing technique. Linking helps to identify the proportionality of a selected set of entities by showing those same entities in an alternate view. As part of the paper, they present an abbreviated taxonomy of linking techniques, covering how different techniques highlight sources, link mechanisms, and identify targets. This organization is very useful for seeing how systems do (and do not) support the many different types of linking, and is a first step toward understanding the merits and disadvantages of each combination of techniques.
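To make the source/link/target distinction concrete, here is a hypothetical sketch of what a configurable brush specification might look like; the field names and option values are mine, not MyBrush’s.

```python
# A hypothetical sketch of a configurable brush, to make the source/link/target
# distinction concrete. Field names and option values are mine, not MyBrush's.
from dataclasses import dataclass, field

@dataclass
class BrushSpec:
    source_view: str                      # where the selection is made
    link: str = "highlight"               # e.g., "highlight" or "filter"
    target_views: list = field(default_factory=list)  # where the effect appears

brushes = [
    BrushSpec(source_view="scatterplot", link="highlight", target_views=["histogram"]),
    BrushSpec(source_view="histogram", link="filter", target_views=["scatterplot", "table"]),
]
for b in brushes:
    print(f"{b.source_view} --{b.link}--> {', '.join(b.target_views)}")
```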

[TODO: incorporate youtube embedding: https://bojan.io/blog/2016/10/better-youtube-videos-embedding-and-on-demand-loading-in-jekyll/] https://www.youtube.com/watch?v=1iAtPhWEV9I

The system as presented seems to expose a lot of decisions to the visualization viewer, nearly to the point of decision overload. Regardless, the flexibility of the implementation highlights the wide space of design decisions involved in implementing a visualization authoring system. In particular, it would be fantastic if we could understand why a viewer would pick one set of techniques over another: is the viewer trying to understand proportionality or membership, or are they looking to identify anomalies?

A demonstration of the system is available in the above video, and the project page is available on the InnoVis group website.

Modeling Color Difference for Visualization Design

Danielle Albers Szafir

With a well-deserved best paper award for InfoVis, Prof. Szafir described her empirical results on how mark sizes in common visualization types affect the perception of differences between colors. As a quick example, the colors of points in a scatterplot are much more difficult to distinguish when the points are small. When the points grow in size, it is easier to distinguish point colors from one another, helping judgments of series membership (which class of points does this point belong to?) and value (what is the value communicated by this color?).

[include teaser image here]

Actionably, the author provides experiment-driven models of color differentiation based on mark size. This means, for example, that the visualization community at large has access to a set of equations that predict whether two colors can be distinguished as a function of mark size. This could lead to more principled selection of color mappings for a wide variety of mark types, including points in a scatterplot, lines in a line chart, and bars in a bar chart. In visualization authoring tools specifically, this model of color difference can be used to coax users into making appropriate and distinguishable color choices, while respecting the original selections.
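As a sketch of how such a model might be consumed by a tool, consider a discriminability check whose threshold depends on mark size. The threshold function and coefficients below are made-up placeholders for illustration; the real numbers come from the paper’s fitted models.

```python
# A sketch of consuming a size-dependent color-difference model in an authoring
# tool. The threshold function uses made-up placeholder numbers; the real
# coefficients come from the paper's fitted models.
from math import sqrt

def delta_e(lab1, lab2):
    """Euclidean distance in CIELAB (a simple stand-in for a full delta-E formula)."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(lab1, lab2)))

def discriminability_threshold(mark_size_deg):
    """Smaller marks need a larger color difference to be told apart.
    Placeholder form: threshold grows as 1 / size. Not the paper's fit."""
    return 5.0 + 5.0 / max(mark_size_deg, 0.1)

def distinguishable(lab1, lab2, mark_size_deg):
    return delta_e(lab1, lab2) >= discriminability_threshold(mark_size_deg)

# two similar blues (hypothetical CIELAB coordinates)
c1 = (50.0, 10.0, -40.0)
c2 = (55.0, 14.0, -32.0)
print(distinguishable(c1, c2, mark_size_deg=0.5))  # tiny scatterplot points: False
print(distinguishable(c1, c2, mark_size_deg=4.0))  # large marks: True
```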

Reflection on Reflection in Design Studies

Jason Dykes, Miriah Meyer, Remco Chang, Uta Hinrichs, Nathalie Henry Riche, Petra Isenberg, Heidi Lam, and Tamara Munzner

A bit of a meta-discussion here (as indicated by the title), this panel explored ideas for making design studies more useful outside the realm of the visualization academic community. Design studies (which you can read more about in the design study methodology pitfalls paper) describe the design process of generating appropriate designs, ideally in a generalized way that informs others toward new design strategies and techniques.

The panel focused on how to elevate and promote the value of what are sometimes seen as reports on engineering processes, including describing results in terms of existing visualization taxonomies and explicitly building off of one another instead of presenting one-off descriptions of very domain-specific systems.

Beyond Tasks: An Activity Typology for Visual Analytics

Darren Edge, Nathalie Henry Riche, Jonathan Larson, and Christopher White

[consider wiping, I don’t have much to say here]