ViA: A perceptual visualization assistant


Conference


C. G. Healey, R. St. Amant, M. Elhaddad
Proceedings 28th Applied Imagery Pattern Recognition Workshop, 1999, pp. 1-11


Cite

APA
Healey, C. G., St. Amant, R., & Elhaddad, M. (1999). ViA: A perceptual visualization assistant. In Proceedings 28th Applied Imagery Pattern Recognition Workshop (pp. 1–11).


Chicago/Turabian
Healey, C. G., R. St. Amant, and M. Elhaddad. “ViA: A Perceptual Visualization Assistant.” In Proceedings 28th Applied Imagery Pattern Recognition Workshop, 1–11. 1999.


MLA
Healey, C. G., et al. “ViA: A Perceptual Visualization Assistant.” Proceedings 28th Applied Imagery Pattern Recognition Workshop, 1999, pp. 1–11.


BibTeX

@conference{c1999a,
  title = {ViA: A perceptual visualization assistant},
  year = {1999},
  pages = {1-11},
  booktitle = {Proceedings 28th Applied Imagery Pattern Recognition Workshop},
  author = {Healey, C. G. and St. Amant, R. and Elhaddad, M.}
}

Abstract

This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal data-attribute to visual-feature mappings. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically generated set of mappings that are perceptually salient, but that also respect the context of the dataset and the users' preferences about how they want to visualize their data.
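
The core loop the abstract describes can be sketched in a few lines of Python: score each candidate data-attribute to visual-feature mapping with an evaluation function, then keep the most promising one. This is a minimal illustration under stated assumptions, not the authors' implementation; the attribute names, feature names, and pairing weights below are hypothetical stand-ins for ViA's psychophysically derived guidelines, and the exhaustive loop stands in for its mixed-initiative planning search.

from itertools import permutations

ATTRIBUTES = ["temperature", "pressure", "velocity"]    # hypothetical dataset attributes
FEATURES = ["hue", "luminance", "size", "orientation"]  # candidate visual features

# Hypothetical per-pairing weights standing in for ViA's perceptual
# evaluation engines (higher = more perceptually salient pairing).
PAIR_WEIGHT = {
    ("temperature", "hue"): 0.9, ("temperature", "luminance"): 0.6,
    ("pressure", "luminance"): 0.8, ("pressure", "size"): 0.5,
    ("velocity", "orientation"): 0.9, ("velocity", "size"): 0.7,
}

def evaluate(mapping):
    """Score a mapping {attribute: feature}; unlisted pairings score low."""
    return sum(PAIR_WEIGHT.get((a, f), 0.1) for a, f in mapping.items())

def best_mapping():
    """Exhaustive search over injective attribute-to-feature mappings.
    ViA instead prunes this space using hints from its evaluation
    engines and the users' answers about their dataset and tasks."""
    best, best_score = None, float("-inf")
    for feats in permutations(FEATURES, len(ATTRIBUTES)):
        mapping = dict(zip(ATTRIBUTES, feats))
        score = evaluate(mapping)
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

if __name__ == "__main__":
    mapping, score = best_mapping()
    print(f"score={score:.2f}  mapping={mapping}")

Running the sketch prints the highest-scoring assignment (here, temperature to hue, pressure to luminance, velocity to orientation). In ViA, the evaluation engines would additionally return hints on how a rejected mapping might be improved, which the planner uses to guide the search.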

