Knowledge-Based Visual Presentation System
Michelle X. Zhou and Steven K. Feiner
Our research focuses on automatically generating coherent visual discourse. We use the term visual discourse to refer to a series of connected visual displays. To remain coherent, a visual discourse must maintain visual consistency within or among displays, have smooth transitions between displays, and effectively integrate new information into existing displays.
To develop systems that can automatically generate effective visual presentations, we start from a set of inputs. Our input comes from two sources: the information to be presented, which we call domain information, and the user tasks. To transform this input into final visual presentations, we have conducted research in the following four areas:
To accurately describe a wide variety of information, we need to understand what data properties are related to visual presentation design and how they are related. The task of abstracting presentation-related commonalities among different data is called data characterization. Through data characterization, we add meta-data structures on top of the domain information and enable the design system to map different types of data onto appropriate visual elements.
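As a minimal sketch of the idea, the meta-data layered on top of domain information might be represented as a record of presentation-related properties, with a toy rule that maps them to visual encodings. The names and the rule below are hypothetical illustrations, not the system's actual representation.

```cpp
#include <cassert>
#include <string>

// Hypothetical data-characterization record: meta-data layered on
// top of a domain attribute to guide the visual mapping.
enum class DataType { Nominal, Ordinal, Quantitative };

struct DataCharacterization {
    std::string attribute;   // e.g., "link bandwidth"
    DataType type;           // scale of measurement
    bool ordered;            // does the domain impose an order?
    int cardinality;         // number of distinct values
};

// A toy mapping rule: quantitative data suits positional encodings,
// ordinal data suits lightness ramps, nominal data suits color hue.
std::string suggestEncoding(const DataCharacterization& c) {
    if (c.type == DataType::Quantitative) return "position";
    if (c.type == DataType::Ordinal)      return "lightness";
    return "hue";
}
```

The point is that the design system reasons over these abstract properties rather than over the raw domain values themselves.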
User tasks such as "summarize a set of information" or "search for a piece of data" are usually specified at a very high level. In contrast, the visual techniques a computer system can use to help accomplish these tasks are encoded at a very low level (e.g., resizing or coloring an object). To connect high-level user tasks with low-level visual techniques, we have introduced a middle-level abstraction: the visual task specification. Each visual task directly specifies a desired visual effect: through its visual accomplishments, it indicates which high-level user tasks can be achieved using this visual task; through its visual implications, it suggests which visual techniques may be used to render the desired visual effect.
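One way to picture this middle layer is as a record that carries both links at once: upward to the user tasks it can serve, and downward to the techniques that can realize it. The structure and names below are an illustrative assumption, not the system's actual encoding.

```cpp
#include <cassert>
#include <algorithm>
#include <string>
#include <vector>

// Hypothetical middle-level visual task: a desired visual effect,
// its visual accomplishments (high-level user tasks it can serve),
// and its visual implications (low-level techniques that realize it).
struct VisualTask {
    std::string effect;
    std::vector<std::string> accomplishments;
    std::vector<std::string> implications;
};

// Can this visual task help achieve the given high-level user task?
bool serves(const VisualTask& t, const std::string& userTask) {
    return std::find(t.accomplishments.begin(), t.accomplishments.end(),
                     userTask) != t.accomplishments.end();
}
```

With such records, the system can select visual tasks top-down from a user task and then expand them bottom-up into concrete techniques.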
Just as a human graphic designer must learn a visual language before designing any type of graphics, a visual presentation system must be equipped with a visual language. To cover a wide range of visual design patterns and visual transformation techniques for a wide variety of tasks and users, we have designed a general visual language that includes a visual object hierarchy, a set of visual techniques, and a set of visual design guidelines.
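A visual object hierarchy of this kind is naturally expressed as a composite structure, where grouped visual objects aggregate primitives. The fragment below is only a hedged sketch of that organizing idea; the class names are hypothetical.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Hypothetical fragment of a visual object hierarchy: composite
// visual objects (groups) aggregate primitive glyphs, mirroring how
// a visual language organizes its vocabulary.
struct VisualObject {
    virtual ~VisualObject() = default;
    virtual int primitiveCount() const = 0;
};

struct Glyph : VisualObject {
    int primitiveCount() const override { return 1; }
};

struct Group : VisualObject {
    std::vector<std::unique_ptr<VisualObject>> children;
    int primitiveCount() const override {
        int n = 0;
        for (const auto& c : children) n += c->primitiveCount();
        return n;
    }
};
```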
In essence, an automated visual presentation system is an expert system: it needs a powerful inference engine capable of reasoning about various kinds of knowledge and inferring a visual presentation. In our approach, we transform visual design problems into planning problems with specific constraints. Visual techniques serve as planning operators, domain and visual information is encoded as planning objects, and design guidelines are specified as constraints. To create an efficient and flexible inference engine, we have combined a hierarchical-decompositional, partial-order planning algorithm with practical computational features such as variable specifications.
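The mapping from visual techniques to planning operators can be sketched in the classical STRIPS-like style: an operator is applicable when its preconditions hold in the current visual state, and applying it adds its effects. This is a simplified illustration of the operator idea only, not the hierarchical partial-order planner itself; all names are assumed.

```cpp
#include <cassert>
#include <set>
#include <string>

// Hypothetical visual technique cast as a planning operator.
struct Operator {
    std::string name;
    std::set<std::string> preconditions;  // must hold in the state
    std::set<std::string> effects;        // added when the operator fires
};

// Check applicability against the current visual state.
bool applicable(const Operator& op, const std::set<std::string>& state) {
    for (const auto& p : op.preconditions)
        if (!state.count(p)) return false;
    return true;
}

// Apply the operator, producing the successor visual state.
std::set<std::string> apply(const Operator& op, std::set<std::string> state) {
    state.insert(op.effects.begin(), op.effects.end());
    return state;
}
```

Design guidelines would then act as constraints that prune which operator applications the planner is allowed to consider.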
We have also proposed a general framework for constructing automated graphics generation systems. The framework comprises four components: a knowledge base, an inference engine, a visual realizer, and an interaction handler. Any graphics generation system can be built by instantiating each of these four components.
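The data flow among these components can be pictured as a pipeline: the inference engine consults the knowledge base to produce a design plan, which the visual realizer turns into a display (with the interaction handler feeding user input back into the next cycle). The stub below sketches three of the four components under that assumption; the interfaces are invented for illustration.

```cpp
#include <cassert>
#include <string>

// Hypothetical stubs for three of the framework's four components.
struct KnowledgeBase {
    std::string lookup(const std::string& q) const { return "rule:" + q; }
};

struct InferenceEngine {
    // Consult the knowledge base and produce a design plan for a goal.
    std::string design(const KnowledgeBase& kb, const std::string& goal) const {
        return "plan(" + kb.lookup(goal) + ")";
    }
};

struct VisualRealizer {
    // Render the plan into a concrete display.
    std::string render(const std::string& plan) const {
        return "display[" + plan + "]";
    }
};
```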
IMPROVISE (Illustrative Metaphor Production in Reactive Object-oriented VISual Environments) is a knowledge-based system that can automatically generate coherent visual discourse. We are building IMPROVISE so it can serve as a proof-of-concept for our proposed framework. The current system is being used in two application domains: computer network management and summarization of medical records.
NetMaster is a collaborative research project between the Department of Computer Science and the COMET group in CTR at Columbia University. Network management is a complex task; visualizing a network's structure and behavior can help network operators and researchers better understand network activities. However, handcrafting every single display, or manually navigating within and between displays for various types of tasks in different situations, is rather time-consuming. NetMaster aims to automatically visualize various types of ATM network management activities. Our system uses both general knowledge about graphic design and domain-specific knowledge about network structures and various types of network activities. We have focused on two major tasks: examining the physical or virtual structures of network entities (e.g., nodes and links), and monitoring the traffic status inside those entities.
MedAide is the graphics generation component of an AI system called MAGIC, which automatically generates multimedia summaries of patient data containing coordinated text, speech, and graphics. (MAGIC is a collaborative effort with the Natural Language Processing Group, the Knowledge Representation and Reasoning Group, and the Department of Medical Informatics.) MedAide automatically generates coherent visual displays and communicates with other media generators through a media coordinator to produce coherent multimedia presentations. A large amount of medical information is available on-line at Columbia Presbyterian Hospital, but the relevant information is not in a form that caregivers can easily access. Different caregivers have different information needs, and some information must be presented within a limited time frame. Therefore, the goal of MAGIC is to generate multimedia presentations that are customized for specific users and meet critical time constraints. Based on these criteria, MedAide automatically generates visual presentations tailored to each user's needs.
IMPROVISE is implemented in C++ and CLIPS. The knowledge-based design component is written in CLIPS, while the rendering component is written in C++ using SGI's Open Inventor/OpenGL, an interactive 3D graphics toolkit. The system runs on a 250 MHz R4400 SGI Indigo2 with a Maximum Impact graphics board.