The CU VisuaLab brings researchers together to tackle innovative questions about visualization, data analytics, and computer graphics driven by real-world challenges. Below is a sample of ongoing projects in the VisuaLab. For more information about these projects or others, please contact Dr. Szafir.

Modeling Color for Visualization


Color is commonly used to encode values in a visualization. However, we know little about how the complexities of visualization affect the perception and design of color encodings. Shape, size, shading, and even the viewing device all alter the colors people see in a visualization, and consequently their ability to use that visualization effectively. We leverage sampling-based models to better understand and predict how color manifests in different types of visualizations. How do the marks used in a visualization affect how people perceive color? How can we create encodings that remain robust across the many devices and contexts where visualizations now appear? How can we empower designers to craft effective color encodings? This project also explores how these questions extend beyond color, informing design tools that pair perception and automation to encourage more effective visualization.
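
As a concrete illustration, the sketch below computes the standard CIELAB Delta E*ab distance between two sRGB colors, the kind of baseline perceptual metric that sampling-based color models build on. This is a minimal sketch, not the lab's published model; in particular, the size- and mark-dependent adjustments described in the publications below are not reproduced here.

    import numpy as np

    def srgb_to_lab(rgb):
        """Convert an sRGB color (components in [0, 1]) to CIELAB (D65 white)."""
        rgb = np.asarray(rgb, dtype=float)
        # Undo the sRGB gamma encoding to recover linear RGB.
        linear = np.where(rgb <= 0.04045, rgb / 12.92,
                          ((rgb + 0.055) / 1.055) ** 2.4)
        # Linear RGB -> CIE XYZ using the sRGB/D65 matrix.
        m = np.array([[0.4124, 0.3576, 0.1805],
                      [0.2126, 0.7152, 0.0722],
                      [0.0193, 0.1192, 0.9505]])
        xyz = m @ linear
        # Normalize by the D65 reference white, then apply CIELAB's
        # piecewise cube-root compression.
        xyz /= np.array([0.95047, 1.0, 1.08883])
        f = np.where(xyz > (6 / 29) ** 3, np.cbrt(xyz),
                     xyz / (3 * (6 / 29) ** 2) + 4 / 29)
        fx, fy, fz = f
        return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])

    def delta_e(rgb1, rgb2):
        """Euclidean distance in CIELAB: the classic Delta E*ab difference."""
        return float(np.linalg.norm(srgb_to_lab(rgb1) - srgb_to_lab(rgb2)))

    # Two blues that are distinguishable as large swatches may not be on
    # small scatterplot marks; Delta E*ab captures only the baseline difference.
    print(delta_e([0.2, 0.4, 0.8], [0.25, 0.4, 0.75]))

A mark-aware model might, for example, rescale this baseline difference using factors fit to crowdsourced judgments at each mark size.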

VisuaLab Personnel: Hemang Bansal, Stephen Smart
Funding: National Science Foundation

Example Publications:
D. A. Szafir, A. Sarikaya, & M. Gleicher. Lightness Constancy in Surface Visualization. IEEE Transactions on Visualization and Computer Graphics, 22(9), 2016.

D. A. Szafir, M. Stone, & M. Gleicher. Adapting Color Difference for Design. Proceedings of the IS&T 22nd Color and Imaging Conference, 2014.


Human-Machine Collaborative Sensemaking


As the volume of available data increases, analytics systems must leverage automated analysis methods to make sense of data. However, these methods often remove expert knowledge from the analytic process, relying on black-box statistical techniques that can obscure important patterns. In these projects, we explore how visualizations might enable fluid collaboration between analysts and statistical methods to reintegrate people into big data processes. Our systems explore how people might leverage data synthesized across multiple sources, how statistical processes might learn from expert behavior, and how analysts can intuitively provide input into statistical processes. This research also examines how interactive visualizations may help analysts understand the processes underlying machine learning, reducing barriers to its use and interpretation in practice.
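
As one concrete illustration of this analyst-in-the-loop pattern, the sketch below implements simple uncertainty sampling with scikit-learn: the model repeatedly queries a (simulated) analyst for the label it is least certain about, then retrains. This is a minimal sketch of the general idea, not the systems built in these projects; the dataset and query strategy are illustrative assumptions.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for an analyst's data; ground-truth labels
    # simulate the analyst's answers to each query.
    X, y_true = make_classification(n_samples=500, n_features=10, random_state=0)

    labeled = list(range(10))              # small seed set the analyst labeled
    unlabeled = list(range(10, len(X)))

    model = LogisticRegression(max_iter=1000)
    for _ in range(20):                    # twenty rounds of analyst feedback
        model.fit(X[labeled], y_true[labeled])
        # Query the unlabeled point the model is least confident about.
        proba = model.predict_proba(X[unlabeled])
        query = unlabeled[int(np.argmin(proba.max(axis=1)))]
        # A real system would surface this point in a visualization and ask
        # the analyst; here the true label stands in for their response.
        labeled.append(query)
        unlabeled.remove(query)

    print("accuracy after feedback:", model.score(X, y_true))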

VisuaLab Personnel: Michael Iuzzolino, Hayeong Song, Tetsumichi Umada
Collaborators: Michael Paul (Paul Lab), Luke Burks, Jeremy Muesing, Nisar Ahmed (COHRINT Lab), John Hatelid (Lockheed Martin), Jed Brubaker (IDLab), & Casey Fiesler (Fiesler Lab)
Funding: U.S. Air Force

Example Publications:
A. Sarikaya, D. A. Szafir, & M. Gleicher. Visualizing Validation of Protein Surface Classifiers. Computer Graphics Forum, 33(3), 2014.


Scaling Up Visualizations through Vision Science


Our understanding of visualization design is conventionally based on how well people can compare pairs of points. As people face ever larger datasets, visualization must move beyond small-scale design thinking to understand how design might support people in understanding large collections of data points. Drawing from psychology, this work uses experimentation to understand how people estimate properties across collections of points in a visualization (a process known as visual aggregation) and how visualizations might be designed to support these judgments. The results from these efforts have driven scalable systems in domains ranging from biology to the humanities.
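
To make the notion of visual aggregation concrete, the sketch below simulates one hypothetical experimental trial: an observer judges which of two point clouds has the higher mean, with an assumed noise term standing in for perceptual estimation error. The trial structure and noise model are illustrative assumptions, not the lab's actual experiment code.

    import numpy as np

    rng = np.random.default_rng(42)

    def aggregation_trial(mean_a, mean_b, n_points=50, perceptual_noise=0.5):
        """One two-alternative trial: which point cloud has the higher mean?"""
        # In a real display these values would be encoded as position,
        # size, or color across a collection of marks.
        cloud_a = rng.normal(mean_a, 1.0, n_points)
        cloud_b = rng.normal(mean_b, 1.0, n_points)
        # The observer's estimate of each mean, perturbed by assumed noise.
        est_a = cloud_a.mean() + rng.normal(0.0, perceptual_noise)
        est_b = cloud_b.mean() + rng.normal(0.0, perceptual_noise)
        return (est_a > est_b) == (mean_a > mean_b)

    # Proportion correct over many trials at a fixed mean difference.
    trials = [aggregation_trial(0.0, 0.3) for _ in range(1000)]
    print("proportion correct:", np.mean(trials))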

VisuaLab Personnel: Pratima Sherkane, Ryan Mustari

Example Publications:
D. A. Szafir, D. Stuffer, Y. Sohail, & M. Gleicher. TextDNA: Visualizing Word Usage using Configurable Colorfields. Computer Graphics Forum, 35(3), 2016.

D. A. Szafir, S. Haroz, M. Gleicher, & S. Franconeri. Four Types of Ensemble Coding in Data Visualizations. Journal of Vision, 16(5):11, 2016.



Designing for Novel Interfaces


The space of consumer display technologies is evolving rapidly, giving people access to displays of different shapes, sizes, and capabilities, such as mobile phones, head-mounted displays (HMDs), and smartwatches. New displays afford new opportunities for analytics tools that help people make sense of our increasingly data-driven world. This project examines how people perceive and interact with visual information on different display technologies. We develop guidelines, techniques, and tools that effectively leverage the capabilities of these technologies to enhance the ubiquity, accessibility, and effectiveness of data analytics and immersive visual applications.

VisuaLab Personnel: Matthew Whitlock
Collaborators: Catherine Diaz, Michael Walker, & Daniel Szafir (Iron Lab)
Funding: University of Colorado Innovative Seed Program

Example Publications:
C. Diaz, M. Walker, D. A. Szafir, & D. Szafir. Designing for Depth Perceptions in Augmented Reality. Proceedings of the 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2017.