% ---- 2018 ----
@article{Aygar:2018:CSM:3190502.3147914,
  title     = {The Contribution of Stereoscopic and Motion Depth Cues to the Perception of Structures in 3D Point Clouds},
  author    = {Erol Aygar and Colin Ware and David Rogers},
  url       = {http://doi.acm.org/10.1145/3147914},
  doi       = {10.1145/3147914},
  issn      = {1544-3558},
  year      = {2018},
  date      = {2018-01-01},
  journal   = {ACM Trans. Appl. Percept.},
  volume    = {15},
  number    = {2},
  pages     = {9:1--9:13},
  publisher = {ACM},
  address   = {New York, NY, USA},
  abstract  = {Particle-based simulations are used across many science domains, and it is well known that stereoscopic viewing and kinetic depth enhance our ability to perceive the 3D structure of such data. But the relative advantages of stereo and kinetic depth have not been studied for point cloud data, although they have been studied for 3D networks. This article reports two experiments assessing human ability to perceive 3D structures in point clouds as a function of different viewing parameters. In the first study, the number of discrete views was varied to determine the extent to which smooth motion is needed. Also, half the trials had stereoscopic viewing and half had no stereo. The results showed kinetic depth to be more beneficial than stereo viewing in terms of accuracy, so long as the motion was smooth. The second experiment varied the amplitude of oscillatory motion from 0 to 16 degrees. The results showed an increase in detection rate with amplitude, with the best amplitudes being 4 degrees and greater. Overall, motion was shown to yield greater accuracy, but at the expense of longer response times in comparison with stereoscopic viewing.},
  pubstate  = {published},
  tppubtype = {article}
}
% ---- 2017 ----
@inproceedings{LAPR-2017-027464,
  title     = {Employing Color Theory to Visualize Volume-rendered Multivariate Ensembles of Asteroid Impact Simulations},
  author    = {Francesca Samsel and John Patchett and David Rogers and Karen Tsai},
  url       = {http://dl.acm.org/citation.cfm?doid=3027063.3053337},
  doi       = {10.1145/3027063.3053337},
  isbn      = {978-1-4503-4656-6},
  year      = {2017},
  date      = {2017-05-06},
  booktitle = {Proceedings of the 2017 CHI Conference Extended Abstracts on Human Factors in Computing Systems},
  pages     = {1126--1134},
  publisher = {ACM},
  abstract  = {We describe explorations and innovations developed to help scientists understand an ensemble of large-scale simulations of asteroid impacts in the ocean. The simulations were run to help scientists determine the characteristics of asteroids that NASA should track, so that communities at risk from impact can be given advance notice. Of relevance to the CHI community are 1) hands-on workflow issues specific to exploring ensembles of large scientific data, 2) innovations in exploring such data ensembles with color, and 3) examples of multidisciplinary collaboration.},
  note      = {LA-UR-17-20419},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{7967188,
  title     = {Characterizing and Modeling Power and Energy for Extreme-Scale In-Situ Visualization},
  author    = {Vignesh Adhinarayanan and Wu-chun Feng and David Rogers and James Ahrens and Scott Pakin},
  url       = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2017/08/CharacterizingandModelingPowerandEnergyforExtreme-ScaleIn-SituVisualization.pdf},
  doi       = {10.1109/IPDPS.2017.113},
  year      = {2017},
  date      = {2017-05-01},
  booktitle = {2017 IEEE International Parallel and Distributed Processing Symposium (IPDPS)},
  pages     = {978--987},
  abstract  = {Plans for exascale computing have identified power and energy as looming problems for simulations running at that scale. In particular, writing to disk all the data generated by these simulations is becoming prohibitively expensive due to the energy consumption of the supercomputer while it idles waiting for data to be written to permanent storage. In addition, the power cost of data movement is also steadily increasing. A solution to this problem is to write only a small fraction of the data generated while still maintaining the cognitive fidelity of the visualization. With domain scientists increasingly amenable to adopting an in-situ framework that can identify and extract valuable data from extremely large simulation results and write them to permanent storage as compact images, a large-scale simulation will commit to disk a reduced dataset of data extracts that will be much smaller than the raw results, resulting in savings of both power and energy. The goal of this paper is two-fold: (i) to understand the role of in-situ techniques in combating the power and energy issues of extreme-scale visualization and (ii) to create a model of performance, power, energy, and storage to facilitate what-if analysis. Our experiments on a specially instrumented, dedicated 150-node cluster show that while it is difficult to achieve power savings in practice using in-situ techniques, applications can achieve significant energy savings due to shorter write times for in-situ visualization. We present a characterization of power and energy for in-situ visualization; an application-aware, architecture-specific methodology for modeling and analysis of such in-situ workflows; and results that uncover indirect power savings in visualization workflows for high-performance computing (HPC).},
  note      = {LA-UR-16-22435},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{info:lanl-repo/lareport/LA-UR-17-22224,
  title        = {Intuitive Colormaps for Environmental Visualization},
  author       = {Francesca Samsel and Terece Turton and Phillip Wolfram and Roxana Bujack},
  editor       = {Karsten Rink and Ariane Middel and Dirk Zeckzer and Roxana Bujack},
  url          = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2017/08/IntuitiveColormapsforEnvironmentalVisualization.pdf},
  doi          = {10.2312/envirvis.20171105},
  isbn         = {978-3-03868-040-6},
  year         = {2017},
  date         = {2017-03-16},
  booktitle    = {Workshop on Visualisation in Environmental Sciences (EnvirVis)},
  publisher    = {The Eurographics Association},
  abstract     = {Visualizations benefit from the use of intuitive colors, enabling an observer to make use of more automatic, subconscious channels. In this paper, we apply the concept of intuitive color to the generation of thematic colormaps for the environmental sciences. In particular, we provide custom sets of colormaps for water, atmosphere, land, and vegetation. These have been integrated into the online tool ColorMoves: The Environment, to enable the environmental scientist to tailor them precisely to the data and tasks in a simple drag-and-drop workflow.},
  howpublished = {EnvirVis ; 2017-06-12 - 2017-06-13 ; Barcelona, Spain},
  note         = {LA-UR-17-22224},
  pubstate     = {published},
  tppubtype    = {inproceedings}
}
@inproceedings{eurorv3.20171107,
  title     = {Evaluating the Perceptual Uniformity of Color Sequences for Feature Discrimination},
  author    = {Colin Ware and Terece Turton and Francesca Samsel and Roxana Bujack and David Rogers},
  editor    = {Kai Lawonn and Noeska Smit and Douglas Cunningham},
  url       = {https://diglib.eg.org/handle/10.2312/eurorv320171107},
  doi       = {10.2312/eurorv3.20171107},
  isbn      = {978-3-03868-041-3},
  year      = {2017},
  date      = {2017-01-01},
  booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
  publisher = {The Eurographics Association},
  abstract  = {Probably the most common method for visualizing univariate data maps is through pseudocoloring, and one of the most commonly cited requirements of a good colormap is that it be perceptually uniform. This means that differences between adjacent colors in the sequence should be equally distinct. The practical value of uniformity is for features in the data to be equally distinctive no matter where they lie in the colormap, but there are reasons for thinking that uniformity in terms of feature detection may not be achieved by current methods, which are based on the use of uniform color spaces. In this paper we provide a new method for directly evaluating colormaps in terms of their capacity for feature resolution. We apply the method in a study using Amazon Mechanical Turk to evaluate seven colormaps. Among other findings, the results show that two new double-ended sequences have the highest discriminative power and good uniformity. Ways in which the technique can be applied include the design of colormaps for uniformity, and a method for evaluating colormaps through feature discrimination curves for differently sized features.},
  note      = {LA-UR-17-24206},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{Turton2017crowdsourced,
  title     = {A Crowdsourced Approach to Colormap Assessment},
  author    = {Terece Turton and Colin Ware and Francesca Samsel and David Rogers},
  editor    = {Kai Lawonn and Noeska Smit and Douglas Cunningham},
  url       = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2017/08/ACrowdsourcedApproachtoColormapAssessment.pdf},
  doi       = {10.2312/eurorv3.20171106},
  isbn      = {978-3-03868-041-3},
  year      = {2017},
  date      = {2017-01-01},
  booktitle = {EuroVis Workshop on Reproducibility, Verification, and Validation in Visualization (EuroRV3)},
  publisher = {The Eurographics Association},
  abstract  = {Despite continual research and discussion on the perceptual effects of color in scientific visualization, psychophysical testing is often limited. In-person lab studies can be expensive and time-consuming, while results can be difficult to extrapolate from meticulously controlled laboratory conditions to the real world of the visualization user. We draw on lessons learned from the use of crowdsourced participant pools in the behavioral sciences and information visualization to apply a crowdsourced approach to a classic psychophysical experiment assessing the ability of a colormap to impart metric information. We use an online presentation analogous to the color key task from Ware's 1988 paper, Color Sequences for Univariate Maps, testing colormaps similar to those in the original paper along with contemporary colormap standards and new alternatives in the scientific visualization domain. We explore the issue of potential contamination from color-vision-deficient (CVD) participants and establish that perceptual color research can appropriately leverage a crowdsourced participant pool without significant CVD concerns. The updated version of the Ware color key task also provides a method to assess and compare colormaps.},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{Turton2017etk,
  title     = {ETK: An Evaluation Toolkit for Visualization User Studies},
  author    = {Terece Turton and Anne Berres and David Rogers and James Ahrens},
  editor    = {Barbora Kozlikova and Tobias Schreck and Thomas Wischgoll},
  url       = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2017/08/ETKAnEvaluationToolkitforVisualizationUserStudies.pdf},
  doi       = {10.2312/eurovisshort.20171131},
  isbn      = {978-3-03868-043-7},
  year      = {2017},
  date      = {2017-01-01},
  booktitle = {EuroVis 2017 – Short Papers},
  publisher = {The Eurographics Association},
  abstract  = {This paper describes the design and features of the Evaluation Toolkit (ETK), a set of JavaScript/HTML/CSS modules leveraging the Qualtrics JavaScript API that can be used to automate image-based perceptual user evaluation studies. Automating the presentation of the images can greatly decrease the time to build and implement an evaluation study while minimizing the length and complexity of a study built within Qualtrics, along with decreasing the possibility of error in image presentation. The ETK modules each focus on automating a specific psychophysical or experimental approach. Because each module is an extension or plug-in to a Qualtrics question, the resultant study can be easily used in a laboratory setting or in a crowdsourced approach. We present the open source repository of ETK with the six modules that currently make up the toolkit and invite the community to explore, utilize, and contribute to the toolkit.},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{bujack2017good,
  title     = {The Good, the Bad, and the Ugly: A Theoretical Framework for the Assessment of Continuous Colormaps},
  author    = {Roxana Bujack and Terece Turton and Francesca Samsel and Colin Ware and David Rogers and James Ahrens},
  url       = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2017/10/TheGoodtheBadandtheUgly.pdf},
  year      = {2017},
  date      = {2017-01-01},
  booktitle = {IEEE Visualization},
  abstract  = {A myriad of design rules for what constitutes a “good” colormap can be found in the literature. Some common rules include order, uniformity, and high discriminative power. However, the meaning of many of these terms is often ambiguous or open to interpretation. At times, different authors may use the same term to describe different concepts, or the same rule is described by varying nomenclature. These ambiguities stand in the way of collaborative work, the design of experiments to assess the characteristics of colormaps, and automated colormap generation. In this paper, we review current and historical guidelines for colormap design. We propose a specified taxonomy and provide unambiguous mathematical definitions for the most common design rules.},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
@inproceedings{Lange2017trajectoryMapper,
  title     = {Trajectory Mapper: Interactive Widgets and Artist-Designed Encodings for Visualizing Multivariate Trajectory Data},
  author    = {Devin Lange and Francesca Samsel and Ioannis Karamouzas and S. J. Guy and Rodney Dockter and Timothy Kowalewski and Daniel Keefe},
  editor    = {Barbora Kozlikova and Tobias Schreck and Thomas Wischgoll},
  url       = {http://ecxproject.org/wp-content/uploads/sites/18/2018/04/trajectory-mapper-interactive-widgets-and-artist-designed-encodings-for-visualizing-multivariate-trajectory-data.pdf},
  doi       = {10.2312/eurovisshort.20171141},
  isbn      = {978-3-03868-043-7},
  year      = {2017},
  date      = {2017-01-01},
  booktitle = {EuroVis 2017 – Short Papers},
  publisher = {The Eurographics Association},
  abstract  = {We present Trajectory Mapper, a system of novel interactive widgets and artist-designed visual encodings to support exploratory multivariate visualization of spatial trajectories. Trajectories are rendered using a three-way multi-texturing algorithm so that the color, texture, and shape of each mark can be manipulated separately in response to data. Visual encodings designed by artists and arranged in categories (e.g., divergent, linear, structured) are utilized as strong starting points for visual exploration. Interactive widgets including linked parallel coordinates plots, 3D camera controls, and projection to arbitrary 3D planes facilitate data exploration. An innovative visual mapper menu enables rapid experimentation with alternative data mappings using the artist-designed or custom encodings, which can be created with no programming using image editing software. In addition to system design details and insights, two applications with collaborating domain science users are presented. The first requires analyzing 2D crowd simulations, and the second, 3D tool traces from laparoscopic surgery training exercises.},
  pubstate  = {published},
  tppubtype = {inproceedings}
}
% ---- 2016 ----
@article{ware2016optimizing,
  title     = {Optimizing for Visual Cognition in High Performance Scientific Computing},
  author    = {Colin Ware and David Rogers and Mark Petersen and James Ahrens and Erol Aygar},
  url       = {http://www.ingentaconnect.com/content/ist/ei/2016/00002016/00000016/art00041},
  doi       = {10.2352/ISSN.2470-1173.2016.16.HVEI-130},
  issn      = {2470-1173},
  year      = {2016},
  date      = {2016-02-14},
  journal   = {Electronic Imaging},
  volume    = {2016},
  number    = {16},
  pages     = {1--9},
  publisher = {Society for Imaging Science and Technology},
  abstract  = {High performance scientific computing is undergoing radical changes as we move to exascale ($10^{18}$ FLOPS), and as a consequence products for visualization must increasingly be generated in situ as opposed to after a model run. This changes both the nature of the data products and the overall cognitive workflow. Currently, data is saved in the form of model dumps, but these are both extremely large and not ideal for visualization. Instead, we need methods for saving model data in ways that are both compact and optimized for visualization. For example, our results show that animated representations are more perceptually efficient than static views even for steady flows, so we need ways of compressing vector field data for animated visualization. As another example, motion parallax is essential to perceiving structures in dark matter simulations, so we need ways of saving large particle systems optimized for perception. Turning to the cognitive workflow, when scientists and engineers allocate their time to high performance computer simulations, their effort is distributed between pre- and post-run work. To better understand the tradeoffs, we created an analytics game to model the optimization of high performance computer codes simulating ocean dynamics. Visualization is a key part of this process. The results from two analytics game experiments suggest that simple changes can have a large impact on overall cognitive efficiency. Our first experiment showed that study participants continued to look at images for much longer than optimal. A second experiment revealed a large reduction in cognitive efficiency as working memory demands increased. We conclude with recommendations for systems design.},
  pubstate  = {published},
  tppubtype = {article}
}
Samsel, Francesca; Klassen, Sebastian; Petersen, Mark; Turton, Terece; Abram, Greg; Rogers, David; Ahrens, James: Interactive Colormapping: Enabling Multiple Data Ranges, Detailed Views of Ocean Salinity. In: Proceedings of the 34th Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '16), ACM, San Jose, California, 2016 (LA-UR-15-20105).

@inproceedings{Samsel:2016:CIP:2702613.2702975,
  title = {Interactive Colormapping: Enabling Multiple Data Ranges, Detailed Views of Ocean Salinity},
  author = {Francesca Samsel and Sebastian Klassen and Mark Petersen and Terece Turton and Greg Abram and David Rogers and James Ahrens},
  url = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2016/06/InteractiveColormapping.pdf},
  year = {2016},
  date = {2016-01-01},
  booktitle = {Proceedings of the 34th Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},
  publisher = {ACM},
  address = {San Jose, California},
  series = {CHI EA '16},
  abstract = {Ocean salinity is a critical component to understanding climate change. Salinity concentrations and temperature drive large ocean currents, which in turn drive global weather patterns. Melting ice caps lower salinity at the poles, while river deltas bring fresh water into the ocean worldwide. These processes slow ocean currents, changing weather patterns and producing extreme climate events which disproportionately affect those living in poverty. Analysis of salinity presents a unique visualization challenge. Important data are found in narrow data ranges, varying with global location. Changing values of salinity are important in understanding ocean currents, but are difficult to map to colors using traditional tools. Commonly used colormaps may not provide sufficient detail for this data. Current editing tools do not easily enable a scientist to explore the subtleties of salinity. We present a workflow, enabled by an interactive colormap tool, that allows a scientist to interactively apply sophisticated colormaps to scalar data. The intuitive and immediate interaction of the scientist with the data is a critical contribution of this work.},
  note = {LA-UR-15-20105},
  keywords = {},
  pubstate = {published},
  tppubtype = {inproceedings}
}
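The central idea — devoting most of a colormap's range to the narrow band where the interesting salinity values live — can be sketched with unevenly spaced bin edges. This is an illustrative sketch, not the authors' tool; the PSU band edges below are made-up values:

```python
import numpy as np

# Unevenly spaced bin edges: the 34-36 PSU band, where most open-ocean
# salinity values fall, gets most of the color bins (illustrative values).
bounds = np.array([30.0, 33.0, 34.0, 34.5, 35.0, 35.5, 36.0, 38.0])

def salinity_bin(values):
    """Map salinity values to color-bin indices 0..len(bounds)-2."""
    return np.clip(np.digitize(values, bounds) - 1, 0, len(bounds) - 2)

# Nearby values around 35 PSU land in distinct bins, while values far
# outside the band of interest share a bin.
print(salinity_bin(np.array([31.2, 34.6, 35.1, 37.0])))  # -> [0 3 4 6]
```

In matplotlib, the same effect is commonly obtained by pairing `matplotlib.colors.BoundaryNorm` with a `ListedColormap` over such uneven boundaries.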
Patchett, John; Samsel, Francesca; Tsai, Karen; Gisler, Galen; Rogers, David; Abram, Greg; Turton, Terece: Visualization and Analysis of Threats from Asteroid Ocean Impacts. In: 2016 ACM/IEEE International Conference for High Performance Computing, Networking, Storage, and Analysis (SC), 2016 (Winner, Best Scientific Visualization & Data Analytics Showcase; LA-UR-16-26258).

@inproceedings{Patchett2016asteroidvis,
  title = {Visualization and Analysis of Threats from Asteroid Ocean Impacts},
  author = {John Patchett and Francesca Samsel and Karen Tsai and Galen Gisler and David Rogers and Greg Abram and Terece Turton},
  url = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2017/08/VisualizationAndAnalysisOfThreatsFromAsteroidOceanImpacts.pdf},
  year = {2016},
  date = {2016-01-01},
  journal = {2016 ACM/IEEE International Conference for High Performance Computing, Networking, Storage, and Analysis (SC)},
  abstract = {An asteroid colliding with Earth can have grave consequences. An impact in the ocean has complex effects as the kinetic energy of the asteroid is transferred to the water, potentially causing a tsunami or other distant effect. Scientists at Los Alamos National Laboratory are using the xRage simulation code on high performance computing (HPC) systems to understand the range of possible behaviors of an asteroid impacting the ocean. By running ensembles of large-scale 3D simulations, scientists can study a set of potential factors for asteroid-generated tsunamis (AGTs), such as angle of impact, asteroid mass, and air-burst elevation. These studies help scientists understand the consequences of asteroid impacts, such as water dispersal into the atmosphere, which can affect the global climate, or tsunami creation, which can place population centers at risk. The results of these simulations will support NASA’s Office of Planetary Defense in deciding how best to track near-Earth objects (NEOs).},
  note = {Winner, Best Scientific Visualization & Data Analytics Showcase; LA-UR-16-26258},
  keywords = {},
  pubstate = {published},
  tppubtype = {inproceedings}
}
Ware, Colin; Bolan, Daniel; Miller, Ricky; Rogers, David; Ahrens, James: Animated Versus Static Views of Steady Flow Patterns. In: Proceedings of the ACM Symposium on Applied Perception (SAP '16), pp. 77–84, ACM, Anaheim, California, 2016, ISBN: 978-1-4503-4383-1.

@inproceedings{Ware:2016:AVS:2931002.2931012,
  title = {Animated Versus Static Views of Steady Flow Patterns},
  author = {Colin Ware and Daniel Bolan and Ricky Miller and David Rogers and James Ahrens},
  url = {http://doi.acm.org/10.1145/2931002.2931012},
  doi = {10.1145/2931002.2931012},
  isbn = {978-1-4503-4383-1},
  year = {2016},
  date = {2016-01-01},
  booktitle = {Proceedings of the ACM Symposium on Applied Perception},
  pages = {77--84},
  publisher = {ACM},
  address = {Anaheim, California},
  series = {SAP '16},
  abstract = {Two experiments were conducted to test the hypothesis that animated representations of vector fields are more effective than common static representations, even for steady flow. We compared four flow visualization methods: animated streamlets, animated orthogonal line segments (short lines elongated orthogonal to the flow direction but animated in the direction of flow), static equally spaced streamlines, and static arrow grids. The first experiment involved a pattern detection task in which the participant searched for an anomalous flow pattern in a field of similar patterns. The results showed that both animation methods produced more accurate and faster responses. The second experiment involved mentally tracing an advection path from a central dot in the flow field and marking where the path would cross the boundary of a surrounding circle. For this task the animated streamlets resulted in better performance than the other methods, but the animated orthogonal particles resulted in the worst performance. We conclude with recommendations for the representation of steady flow patterns.},
  keywords = {},
  pubstate = {published},
  tppubtype = {inproceedings}
}
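The advection task in the second experiment has a direct computational analogue: a streamlet path is the integral curve of the steady field traced from a seed point. A minimal sketch using forward-Euler integration; the rotational field is an illustrative stand-in, not data from the paper:

```python
import numpy as np

def advect(seed, field, dt=0.01, steps=500):
    """Trace a particle path through a steady 2D vector field (forward Euler)."""
    p = np.array(seed, dtype=float)
    path = [p.copy()]
    for _ in range(steps):
        p = p + dt * field(p)  # step along the local flow direction
        path.append(p.copy())
    return np.array(path)

# Illustrative steady field: rigid rotation about the origin.
rotation = lambda p: np.array([-p[1], p[0]])

# ~2*pi worth of steps brings the particle back near its start
# (with a small outward drift, a known artifact of forward Euler).
path = advect([1.0, 0.0], rotation, dt=0.01, steps=628)
```

Animated streamlets render many such short paths advancing a step each frame, while the static streamline condition draws the full integral curves at once.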
2015
Adhinarayanan, Vignesh: Performance, Power and Energy of In-situ and Post-processing Visualization: A Case Study in Climate Simulation. Presentation, 05.10.2015 (LA-UR-15-27749).

@misc{Adhinarayanan2015,
  title = {Performance, Power and Energy of In-situ and Post-processing Visualization: A Case Study in Climate Simulation},
  author = {Vignesh Adhinarayanan},
  url = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2016/07/PerformancePowerAndEnergyOfInSituAndPostProcessingVisualization.pdf},
  year = {2015},
  date = {2015-10-05},
  abstract = {This presentation summarizes a summer study of the performance, power, and energy trade-offs among traditional post-processing, modern post-processing, and in-situ visualization pipelines. It includes both detailed sub-component level power measurements within a node to gain detailed insights and measurements at scale to understand problems unique to big supercomputers.},
  note = {LA-UR-15-27749},
  keywords = {},
  pubstate = {published},
  tppubtype = {presentation}
}
Adhinarayanan, Vignesh; Feng, Wu-chun; Woodring, Jonathan; Rogers, David; Ahrens, James: On the Greenness of In-Situ and Post-Processing Visualization Pipelines. In: 11th Workshop on High-Performance, Power-Aware Computing (HPPAC), Hyderabad, India, 2015 (LA-UR-15-21414).

@inproceedings{vignesh-in-situ-hppac15,
  title = {On the Greenness of In-Situ and Post-Processing Visualization Pipelines},
  author = {Vignesh Adhinarayanan and Wu-chun Feng and Jonathan Woodring and David Rogers and James Ahrens},
  url = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2016/06/OnTheGreenessOfIn-SituAndPost-ProcessingVisualizationPipelines.pdf},
  year = {2015},
  date = {2015-05-01},
  booktitle = {11th Workshop on High-Performance, Power-Aware Computing (HPPAC)},
  address = {Hyderabad, India},
  abstract = {Post-processing visualization pipelines are traditionally used to gain insight from simulation data. However, changes to the system architecture for high-performance computing (HPC), dictated by the exascale goal, have limited the applicability of post-processing visualization. As an alternative, in-situ pipelines are proposed in order to enhance the knowledge discovery process via “real-time” visualization. Quantitative studies have already shown how in-situ visualization can improve performance and reduce storage needs at the cost of scientific exploration capabilities. However, to fully understand the trade-off space, a head-to-head comparison of power and energy (between the two types of visualization pipelines) is necessary. Thus, in this work, we study the greenness (i.e., power, energy, and energy efficiency) of the in-situ and the post-processing visualization pipelines, using a proxy heat-transfer simulation as an example. For a realistic I/O load, the in-situ pipeline consumes 43% less energy than the post-processing pipeline. Contrary to expectations, our findings also show that only 9% of the total energy is saved by reducing off-chip data movement, while the rest of the savings comes from reducing the system idle time. This suggests an alternative set of optimization techniques for reducing the power consumption of the traditional post-processing pipeline.},
  note = {LA-UR-15-21414},
  keywords = {},
  pubstate = {published},
  tppubtype = {inproceedings}
}
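At its simplest, the pipeline comparison in this study is a matter of summing power × time over each phase of a pipeline. The sketch below illustrates only that bookkeeping; the phase powers and durations are made-up placeholders, not the paper's measurements (which found 43% savings):

```python
# Toy energy model: E = sum of (power_watts * seconds) over pipeline phases.
# All numbers below are illustrative placeholders, not measured values.
def pipeline_energy(phases):
    return sum(power * seconds for power, seconds in phases)

post_processing = [
    (300, 100),  # run simulation, write full model dumps
    (250, 40),   # I/O: move dumps to storage
    (200, 60),   # read dumps back and render offline
]
in_situ = [
    (320, 100),  # simulation with visualization in the same run
    (150, 10),   # write small images/extracts only
]

e_post = pipeline_energy(post_processing)
e_insitu = pipeline_energy(in_situ)
print(f"in-situ uses {1 - e_insitu / e_post:.0%} less energy in this toy case")
```

The paper's more surprising result — that most savings come from reduced idle time rather than reduced data movement — corresponds to shrinking the duration of the I/O and read-back phases, not just their power draw.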
Samsel, Francesca; Petersen, Mark; Geld, Terece; Abram, Greg; Wendelberger, Joanne; Ahrens, James: Colormaps that Improve Perception of High-Resolution Ocean Data. Presentation, 23.04.2015 (LA-UR-15-20105).

@misc{Samsel2015,
  title = {Colormaps that Improve Perception of High-Resolution Ocean Data},
  author = {Francesca Samsel and Mark Petersen and Terece Geld and Greg Abram and Joanne Wendelberger and James Ahrens},
  url = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2016/08/ColormapsThatImprovePerceptionOfHigh-ResolutionOceanData.pdf},
  year = {2015},
  date = {2015-04-23},
  abstract = {Scientists from the Climate, Ocean and Sea Ice Modeling Team (COSIM) at Los Alamos National Laboratory (LANL) are interested in gaining a deeper understanding of three primary ocean currents: the Gulf Stream, the Kuroshio Current, and the Agulhas Current & Retroflection. To address these needs, visual artist Francesca Samsel teamed up with experts from computer science, climate science, statistics, and perceptual science. By engaging an artist specializing in color, we created colormaps that provide the ability to see greater detail in these high-resolution datasets. The new colormaps, applied to the POP dataset, enabled scientists to see areas of interest that were unclear with standard colormaps. Improvements in the perceptual range of color allowed scientists to highlight structures within specific ocean currents. Work with the COSIM team members drove development of nested colormaps, which provide further detail to the scientists.},
  note = {LA-UR-15-20105},
  keywords = {},
  pubstate = {published},
  tppubtype = {presentation}
}
Samsel, Francesca; Petersen, Mark; Geld, Terece; Abram, Greg; Wendelberger, Joanne; Ahrens, James: Colormaps That Improve Perception of High-Resolution Ocean Data. In: Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '15), pp. 703–710, ACM, Seoul, Republic of Korea, 2015, ISBN: 978-1-4503-3146-3 (LA-UR-15-20105).

@inproceedings{Samsel:2015:CIP:2702613.2702975,
  title = {Colormaps That Improve Perception of High-Resolution Ocean Data},
  author = {Francesca Samsel and Mark Petersen and Terece Geld and Greg Abram and Joanne Wendelberger and James Ahrens},
  url = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2016/06/ColormapsThatImprovePerceptionOfHigh-ResolutionOceanData.pdf},
  doi = {10.1145/2702613.2702975},
  isbn = {978-1-4503-3146-3},
  year = {2015},
  date = {2015-01-01},
  booktitle = {Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems},
  pages = {703--710},
  publisher = {ACM},
  address = {Seoul, Republic of Korea},
  series = {CHI EA '15},
  abstract = {Scientists from the Climate, Ocean and Sea Ice Modeling Team (COSIM) at Los Alamos National Laboratory (LANL) are interested in gaining a deeper understanding of three primary ocean currents: the Gulf Stream, the Kuroshio Current, and the Agulhas Current & Retroflection. To address these needs, visual artist Francesca Samsel teamed up with experts from computer science, climate science, statistics, and perceptual science. By engaging an artist specializing in color, we created colormaps that provide the ability to see greater detail in these high-resolution datasets. The new colormaps, applied to the POP dataset, enabled scientists to see areas of interest that were unclear with standard colormaps. Improvements in the perceptual range of color allowed scientists to highlight structures within specific ocean currents. Work with the COSIM team members drove development of nested colormaps, which provide further detail to the scientists.},
  note = {LA-UR-15-20105},
  keywords = {},
  pubstate = {published},
  tppubtype = {inproceedings}
}
Samsel, Francesca; Petersen, Mark; Abram, Greg; Turton, Terece; Rogers, David; Ahrens, James: Visualization of ocean currents and eddies in a high-resolution global ocean-climate model. In: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis 2015, 2015 (LA-UR-15-20105).

@inproceedings{samsel2015visualization,
  title = {Visualization of ocean currents and eddies in a high-resolution global ocean-climate model},
  author = {Francesca Samsel and Mark Petersen and Greg Abram and Terece Turton and David Rogers and James Ahrens},
  url = {http://datascience.dsscale.org/wp-content/uploads/sites/3/2017/08/VisualizationofOceanCurrentsandEddiesinaHigh-resoutionOceanModel.pdf},
  year = {2015},
  date = {2015-01-01},
  booktitle = {Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis 2015},
  abstract = {Climate change research relies on models to better understand and predict the complex, interdependent processes that affect the atmosphere, ocean, and land. These models are computationally intensive and produce terabytes to petabytes of data. Visualization and analysis is increasingly difficult, yet it is critical to gaining scientific insight from large simulations. The recently developed Model for Prediction Across Scales-Ocean (MPAS-Ocean) is designed to investigate climate change at global high resolution (5 to 10 km grid cells) on high performance computing platforms. In the accompanying video, we use state-of-the-art visualization techniques to explore the physical processes in the ocean relevant to climate change. These include heat transport, turbulence and eddies, weakening of the meridional overturning circulation, and interaction between a warming ocean and Antarctic ice shelves. The project exemplifies the benefits of tight collaboration among scientists, artists, computer scientists, and visualization specialists.},
  note = {LA-UR-15-20105},
  keywords = {},
  pubstate = {published},
  tppubtype = {inproceedings}
}
In submission:
- Colin Ware, Terece L. Turton, Roxana Bujack, Francesca Samsel, Piyush Shrivastava, David H. Rogers. Measuring and Modeling the Feature Discrimination Threshold Functions of Colormaps. IEEE Transactions on Visualization and Computer Graphics (Submitted July 2017).