Automated Visual Presentation: From Heterogeneous Information to Coherent Visual Discourse
Michelle X. Zhou and Steven K. Feiner
Automated visual presentation systems should be able to design effective presentations for heterogeneous (quantitative and qualitative) information. They should also be able to work in static or interactive environments and be capable of employing a wide range of visual media and techniques. In this paper, we focus on three tasks in building visual presentation systems: establishing a thorough understanding of the presentation-related characteristics of domain-specific information; classifying several types of visual information and capturing their distinct syntactic, semantic, and pragmatic features; and formulating a set of design principles.
We define a data-analysis taxonomy to characterize heterogeneous information. In addition, we have modeled presentation context information, such as audience identity, to produce user-centered visual designs. To utilize and manipulate visual information, we have classified it into visual objects and visual tools based upon its role in the visual presentation process. To guide the visual design process, we have formulated a set of design principles that ensure the expressiveness and effectiveness of a design. To test and evaluate our work, we have developed a prototype system called IMPROVISE based on these research results. We use examples generated by IMPROVISE to illustrate how it constructs visual presentations.
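As a rough illustration only (this sketch is not from the paper, and all names in it are hypothetical), the distinction drawn above between visual objects, which encode domain data, and visual tools, which organize or manipulate those objects, could be modeled as a small type hierarchy:

```python
# Hypothetical sketch: modeling the visual-object / visual-tool
# classification as two simple dataclasses. The field names and the
# bar/table-layout example are illustrative assumptions, not the
# paper's actual representation.
from dataclasses import dataclass, field

@dataclass
class VisualObject:
    """A visual entity that encodes domain information (e.g., a bar in a chart)."""
    name: str
    encodes: dict  # maps data attributes to visual properties

@dataclass
class VisualTool:
    """A visual construct used to organize or manipulate visual objects
    (e.g., a table layout or a highlighting technique)."""
    name: str
    operates_on: list = field(default_factory=list)  # names of VisualObjects

# Example: a bar encoding a quantitative attribute, arranged by a layout tool.
bar = VisualObject("bar", encodes={"dosage": "height"})
layout = VisualTool("table-layout", operates_on=[bar.name])
print(layout.operates_on)
```

The point of separating the two roles is that design rules can then be stated per role: expressiveness rules constrain what a visual object may encode, while tools are selected by how they compose and arrange objects.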
The research described in this overview is supported in part by DARPA Contract DAAL01-94-K-0119, the Columbia University Center for Advanced Technology in High Performance Computing and Communications in Healthcare (funded by the New York State Science and Technology Foundation), the Columbia Center for Telecommunications Research under NSF Grant ECD-88-11111, and ONR Contract N00014-97-1-0838.