Visual Task Characterization for Automated Visual Discourse Synthesis

Michelle X. Zhou and Steven K. Feiner

Department of Computer Science
Columbia University
500 West 120th St., 450 CS Building
New York, NY 10027
+1 212 939 7000
{zhou, feiner}@cs.columbia.edu




Abstract

To develop a comprehensive and systematic approach to the automated design of visual discourse, we introduce a visual task taxonomy that interfaces high-level presentation intents with low-level visual techniques. In our approach, visual tasks describe presentation intents through their visual accomplishments, and suggest desired visual techniques through their visual implications. Therefore, we can characterize visual tasks by their visual accomplishments and implications. Through this characterization, visual tasks can guide the visual discourse synthesis process by specifying what presentation intents can be achieved and how to achieve them.
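As a hypothetical illustration of this idea (a sketch under our own assumptions, not the paper's actual formalism), a visual task can be modeled as a record pairing the presentation intents it can accomplish with the visual techniques it implies, so that a synthesis process can select tasks by intent and retrieve the techniques they suggest. All names, task entries, and structures below are assumptions made for illustration only.

# Hypothetical sketch: a visual task links presentation intents (its visual
# accomplishments) to visual techniques (its visual implications).
# Task names and entries are illustrative, not the paper's taxonomy.
from dataclasses import dataclass, field


@dataclass
class VisualTask:
    name: str
    accomplishments: set = field(default_factory=set)  # presentation intents achieved
    implications: set = field(default_factory=set)     # visual techniques suggested


def tasks_for_intent(tasks, intent):
    """Return the tasks whose visual accomplishments cover the given intent."""
    return [t for t in tasks if intent in t.accomplishments]


# Example taxonomy entries (illustrative only).
TASKS = [
    VisualTask("emphasize", {"draw attention"}, {"highlight", "enlarge"}),
    VisualTask("reveal", {"show internal structure"}, {"cutaway", "open-up"}),
]

if __name__ == "__main__":
    for task in tasks_for_intent(TASKS, "draw attention"):
        print(task.name, "->", sorted(task.implications))

In such a scheme, the accomplishments answer what presentation intents can be achieved, while the implications answer how to achieve them, mirroring the two-sided characterization described above.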

Acknowledgments

The research described in this overview is supported in part by DARPA Contract DAAL01-94-K-0119, the Columbia University Center for Advanced Technology in High Performance Computing and Communications in Healthcare (funded by the New York State Science and Technology Foundation), the Columbia Center for Telecommunications Research under NSF Grant ECD-88-11111, and ONR Contract N00014-97-1-0838.

Permission to make digital/hard copies of all or part of this material for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copyright is by permission of the ACM/SIGCHI. To copy otherwise, to republish, to post on servers, or to redistribute to lists requires prior specific permission and/or a fee.

Copyright © 1997 ACM 0-89791-839-8/96/01..$3.50
