Mobile Augmented Reality Systems


Table of Contents:

  1. Principal Investigator.
  2. Productivity Measures.
  3. Summary of Objectives and Approach.
  4. Detailed Summary of Technical Progress.
  5. Transitions and DOD Interactions.
  6. Software and Hardware Prototypes.
  7. List of Publications.
  8. Invited and Contributed Presentations.
  9. Honors, Prizes or Awards Received.
  10. Project Personnel Promotions.
  11. Project Staff.
  12. URLs.
  13. Keywords.
  14. Business Office.
  15. Expenditures.
  16. Current and Former Students.
  17. Book Plans.
  18. Sabbatical Plans.
  19. Related Research.
  20. History.


Principal Investigator.


Productivity Measures.


Summary of Objectives and Approach.

  1. Augmented reality refers to the use of head-tracked see-through displays that overlay graphics, audio, and other modalities on the real world. Our research addresses software support for wearable systems that augment what the mobile user sees and hears in outdoor and indoor environments. One aspect of our work concerns the development of mobile hybrid user interfaces. We use this term to refer to systems that use multiple heterogeneous displays and interaction devices in synergistic combination to capitalize on the advantages of each. For example, one of our prototypes combines a personal see-through headworn display and a handheld display that present complementary information.
  2. We are developing flexible software technologies for prototyping mobile augmented reality systems, including their extension to mobile hybrid user interfaces. Our emphasis has been on the development of distributed infrastructure and on its application to 3D graphics. We are also exploring hybrid tracking technologies that combine differential GPS position tracking with orientation tracking based on magnetometers, inclinometers, and gyroscopes; one way such readings might be combined into a single head pose is sketched after this list. Our work is being tested through the development of prototype applications that deliver strategic information describing the user's environment. Our initial domain involves providing information about our local campus infrastructure.
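
  The following toy Python sketch illustrates that idea; the class names, field names, and the simple complementary-filter blend are our assumptions for illustration, not project code. Differential GPS supplies position in a local tangent plane, gyroscope rates keep orientation responsive, and the magnetometer and inclinometer provide drift-free absolute references.

      import math
      from dataclasses import dataclass

      EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

      @dataclass
      class Pose:
          x: float      # east, meters
          y: float      # north, meters
          z: float      # up, meters
          yaw: float    # radians
          pitch: float  # radians
          roll: float   # radians

      def gps_to_local(lat, lon, alt, ref_lat, ref_lon, ref_alt):
          """Approximate east/north/up offsets in meters from a reference point."""
          north = math.radians(lat - ref_lat) * EARTH_RADIUS_M
          east = math.radians(lon - ref_lon) * EARTH_RADIUS_M * math.cos(math.radians(ref_lat))
          return east, north, alt - ref_alt

      class HybridTracker:
          def __init__(self, ref_lat, ref_lon, ref_alt, blend=0.02):
              self.ref = (ref_lat, ref_lon, ref_alt)
              self.blend = blend            # weight given to absolute sensors
              self.x = self.y = self.z = 0.0
              self.yaw = self.pitch = self.roll = 0.0

          def on_gps_fix(self, lat, lon, alt):
              self.x, self.y, self.z = gps_to_local(lat, lon, alt, *self.ref)

          def on_inertial_sample(self, gyro_rates, dt, mag_yaw, incl_pitch, incl_roll):
              yaw_rate, pitch_rate, roll_rate = gyro_rates
              # Integrate gyroscope rates for low-latency response...
              self.yaw += yaw_rate * dt
              self.pitch += pitch_rate * dt
              self.roll += roll_rate * dt
              # ...then blend toward the drift-free absolute readings.
              a = self.blend
              self.yaw = (1.0 - a) * self.yaw + a * mag_yaw
              self.pitch = (1.0 - a) * self.pitch + a * incl_pitch
              self.roll = (1.0 - a) * self.roll + a * incl_roll

          def pose(self):
              return Pose(self.x, self.y, self.z, self.yaw, self.pitch, self.roll)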


Detailed Summary of Technical Progress.

  1. We have continued to develop COTERIE (Columbia Object-oriented Testbed for Exploratory Research in Interactive Environments), a toolkit that provides language-level support for building distributed virtual environments. COTERIE is based on the distributed data-object paradigm for distributed shared memory, and is implemented in Modula-3. Any data object in COTERIE can be declared to be a shared object that is fully replicated in every process that is interested in it. These shared objects support asynchronous data propagation with atomic, serializable updates, and asynchronous notification of updates. (A toy sketch of this replication pattern appears after this list.) Unlike other VE toolkits, COTERIE is based on a set of general-purpose parallel and distributed language constructs designed to accommodate the needs of virtual environments research.

    This year we continued to enhance the basic infrastructure and add support for more interaction devices. We also finished work on the core components of ``Repo'' (the version of the Obliq language augmented with our replicated objects, which we referred to as ``Obliq*'' last year), and added more libraries as we needed them, such as HTTP client and server functionality and enhanced file system support.

  2. Over the last few months, we have been using both COTERIE's and Repo's shared data objects to create a distributed 3D graphics library, based on Obliq-3D, called ``Repo-3D.'' All current 3D graphics libraries are non-distributed: a library is used to create a graphical display on one machine. Therefore, to build a distributed, interactive, graphical application, be it a VE system or some other form of groupware, the data that describes the global state of the system must be distributed using facilities separate from the graphics libraries. This means that programmers must maintain separate data structures for the global state and the local graphical display. Our work on Repo-3D is driven by the belief that by making the basic building blocks of 3D graphics programming (geometry, lights, cameras, hierarchical groups of objects, and object properties) into shared objects, distributed graphics applications can be built much more quickly and with fewer errors. (A toy sketch of this approach also appears after this list.)

  3. We extended the set of COTERIE prototype augmented-reality applications that we began implementing last year, and let attendees try out our ARC (Augmented Reality for Construction) project at ACM '97 in San Jose, March 1-4, 1997.
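
  To make the shared-object model of item 1 concrete, the following toy Python sketch shows the replication pattern. COTERIE itself is implemented in Modula-3, and none of these names come from its API; this is only an illustration of the idea. A sequencer imposes a single serial order on updates, every interested replica holds a full local copy and applies updates in that order, and listeners are notified asynchronously.

      import threading
      from queue import Queue

      class Sequencer:
          """Assigns one global order to updates and fans them out to every replica."""
          def __init__(self):
              self._replicas = []
              self._lock = threading.Lock()

          def register(self, replica):
              self._replicas.append(replica)

          def submit(self, key, value):
              with self._lock:              # atomic, serializable update order
                  for replica in self._replicas:
                      replica._enqueue(key, value)

      class SharedDict:
          """One replica: a full local copy, updated and notified asynchronously."""
          def __init__(self, sequencer):
              self._data = {}
              self._listeners = []
              self._inbox = Queue()
              self._sequencer = sequencer
              sequencer.register(self)
              threading.Thread(target=self._apply_loop, daemon=True).start()

          def set(self, key, value):
              self._sequencer.submit(key, value)   # writes go through the sequencer

          def get(self, key):
              return self._data.get(key)           # reads are served locally

          def on_change(self, callback):
              self._listeners.append(callback)

          def _enqueue(self, key, value):
              self._inbox.put((key, value))

          def _apply_loop(self):
              while True:
                  key, value = self._inbox.get()
                  self._data[key] = value
                  for callback in self._listeners:  # asynchronous notification
                      callback(key, value)

  In this toy version all replicas share one address space; COTERIE carries the same pattern across process and machine boundaries.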
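
  In the same spirit, the Repo-3D idea of item 2 can be sketched by building on the SharedDict above (again, the names are illustrative and are not the Repo-3D API): a scene-graph node's properties are themselves the replicated state, so a change made in one process triggers a redraw wherever a replica exists, with no separate application-level protocol for global state.

      class ConsoleRenderer:
          """Stand-in for a real 3D renderer; just reports redraw requests."""
          def request_redraw(self):
              print("redraw requested")

      class SharedTransformNode:
          """A scene-graph node whose properties live in a replicated object."""
          def __init__(self, sequencer, renderer):
              self._props = SharedDict(sequencer)        # replicated state
              self._children = []
              self._props.on_change(lambda key, value: renderer.request_redraw())

          def add_child(self, node):
              self._children.append(node)

          def set_translation(self, x, y, z):
              self._props.set("translation", (x, y, z))  # propagates to all replicas

          def translation(self):
              return self._props.get("translation")

      # Example: node = SharedTransformNode(Sequencer(), ConsoleRenderer())
      #          node.set_translation(0.0, 1.5, 0.0)     # every replica redraws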


Transitions and DOD Interactions.

  1. Visited Harold Smith of the Tri-Service CADD/GIS Technology Center, Information Technology Laboratory, Waterways Experiment Station, Vicksburg, MS, April 18, 1997, to discuss the use of augmented reality for siting and assembling temporary structures.
  2. Presented poster on our research at ONR Computer Technology Gathering, Dahlgren, VA, May 20, 1997.
  3. Demonstrated our work at Columbia to a DOD group touring information visualization research labs, organized by Nahum Gershon (MITRE); the group included Kevin Mills (DARPA), Robert Douglass (DARPA), F.T. Case (DARPA), and Ken Boff (Armstrong Lab).
  4. Demonstrated our work at Columbia to Bob Williams (bobw@henry.nawcad.navy.mil) of the Naval Air Warfare Center, and discussed applications of augmented reality for assisting fighter pilots.


Software and Hardware Prototypes.

  1. Prototype Name: COTERIE


List of Publications.

  1. Feiner, S., MacIntyre, B., Höllerer, T., and Webster, A. A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment. Proc. ISWC '97 (Int. Symp. on Wearable Computers), Cambridge, MA, October 13-14, 1997, 74-81. Describes the software and hardware architecture of an experimental backpack-based augmented reality system that combines handheld and headworn displays, developed using our COTERIE distributed infrastructure.
  2. Zhou, M. and Feiner, S. The representation and use of a visual lexicon for automated graphics generation. Proc. IJCAI '97 (1997 Int. Joint Conf. on Artificial Intelligence), Nagoya, Japan, August 23-29, 1997, 1056-1062. Introduces a comprehensive set of parametrized primitive visual objects, and describes how they are used in the automated design of graphical presentations.
  3. Webster, A., Feiner, S., MacIntyre, B., Massie, W., and Krueger, T. Augmented reality applications in architectural construction. In D. Bertol (ed.), Designing Digital Space: An Architect's Guide to Virtual Reality, John Wiley & Sons, New York, 1997, 193-200. Presents several experimental applications developed using our COTERIE infrastructure: architectural anatomy and ARC.


Invited and Contributed Presentations.

  1. Seeing on top of the world. Invited talk in Department of Computer Science Seminar Series, University of British Columbia, Vancouver, Canada, January 30, 1997.
  2. ARC: Augmented Reality for Construction. (with A. Webster, B. MacIntyre, and T. Höllerer) Invited demonstration, ACM '97, San Jose, CA, March 1-4, 1997.
  3. 3D User Interfaces for visualizing information. Invited talk at P1000 Information Visualization Program Meeting, Office of Research and Development, Directorate of Science and Technology, Gettysburg, PA, April 30-May 1, 1997.
  4. Keynote panel: Visualization of information (with B. Spence, A. Gagalowitz, B. Shneiderman, and M. Gross). CODATA Euro-American Workshop on Visualization of Information and Data. Ministère de l'Education Nationale de l'Enseignement Supérieur de la Recherche, Paris, France, June 24-25, 1997.
  5. Knowledge-based 3D graphics. Invited talk at IJCAI '97 Workshop on Intelligent Multimodal Systems, Nagoya, Japan, August 23, 1997.
  6. The future of UIST (Chair: R. Jacob, with D. Olsen, J. Foley, and J. Mackinlay). ACM UIST '97, Banff, Alberta, October 14-17, 1997.
  7. Seeing on top of the world: Research in augmented reality. Invited talk at Euro-VR '97, Amsterdam, The Netherlands, November 10-11, 1997.
  8. Seeing on top of the world. Invited talk at Real Time Computer Graphics for Virtual Environments, UK Engineering and Physical Sciences Research Council, London, England, December 18, 1997.


Honors, Prizes or Awards Received.

  1. Member of executive committee, IEEE Task Force on Human-Centered Information Systems (1997-present)
  2. Member of IBM Mobile Scientific Advisory Board (1996-present)
  3. Associate editor, ACM Transactions on Graphics (1995-present)
  4. Member of executive board, IEEE Technical Committee on Visualization, Graphics, and Interaction (formerly IEEE Technical Committee on Computer Graphics) (1993-present)
  5. Member of editorial board, IEEE Transactions on Visualization and Computer Graphics (1994-present), The Virtual Reality Society (1994-present), Electronic Publishing (1988-present)
  6. North-American co-chair of program committee, ACM VRST '97 (Virtual Reality Software and Technology)
  7. Associate papers chair, ACM CHI '98 (Human Factors in Computing Systems)
  8. Member of papers committee, ACM SIGGRAPH '98
  9. Member of technical sketches committee, ACM SIGGRAPH '97
  10. Member of symposium committee, IEEE InfoVis '97
  11. Member of program committee, IEEE VRAIS '97, IEEE VRAIS '98, Graphics Interface '97, Computer Graphics International '97, IJCAI '97 Workshop on Intelligent Multimodal Systems, ISWC '97 (1st Int. Symp. on Wearable Computers), IEEE Visualization '97, IUI '98 (Int. Conf. on Intelligent User Interfaces), 1998 Int. Conf. on Web-Based Modeling and Simulation, ACM VRST '98 (Virtual Reality Software and Technology)


Project Personnel Promotions.


Project Staff.

  1. Name: Dr. Steven K. Feiner


URLs.

  1. Annual Report FY97
  2. QUAD FY97
  3. Annual Report FY96
  4. QUAD FY96


Keywords.

  1. Augmented Reality
  2. Distributed Virtual Environments
  3. Mobile Computing
  4. Wearable Computing


Business Office.


Expenditures.

  1. FY97: 41%


Current and Former Students.

  1. Name: Mr. Blair MacIntyre
  2. Name: Mr. Tobias Höllerer


Book Plans.

  1. Topic: Computer graphics


Sabbatical Plans.

  1. Person: Dr. Steven Feiner


Related Research.

  1. CMU VuMan Project
  2. Registration Errors in Augmented Reality, UNC Chapel Hill
  3. U. Toronto ETC-Lab
  4. NPSNET
  5. MIT Wearable Computing Project
  6. Augmented Reality and Computer Augmented Environments


History.

  1. Microsoft's MS Chat automatically generates a comic-strip representation of the IRC ``chat session'' in which a user is participating. It uses heuristics to determine the characters to place in each panel, and when to start a new panel; the characters' gestures, expressions, positions, and orientations; the shape and layout of word balloons; and panel zoom level. MS Chat was created by my former Ph.D. student David Kurlander, based in part on the idea of a graphical history, which he developed between 1988 and 1993 and reported on in his Ph.D. dissertation.
  2. As a graduate student of Andy van Dam at Brown University, I was supported in part by funding from Marv Denicoff. With the goal of using computers to improve technical documentation, we developed the Interactive Graphical Document system (IGD) from 1979-82. IGD supported the creation of graphical hypermedia documents. A document's pages not only presented static pictures and text, but served as the interface to interactive animations and simulations. Pages could be linked to other pages and nested in a recursive chapter hierarchy. IGD included a window-based layout system, through which authors viewed, created, and edited iconic representations of the pages, chapters, and links.
  3. IGD was the first system to allow users to view and edit a hypertext as a directed graph, whose nodes were depicted as scaled miniatures of the pages, complete with all their graphical content. It was also the first system to support automatically generated navigable graphical displays of user history (the forerunner of HyperCard's display of recently seen pages), of all links into and out of a page, and of the document's index. Hierarchical containment within chapters was used to reduce the visual complexity of large documents to a manageable level by limiting the number of nodes and links that were drawn, while allowing the user to drill down arbitrarily deeply where desired. (Today this would be called a user-controlled fisheye view.) IGD was also one of the first systems to allow non-programmers to create a graphical interface entirely through a direct-manipulation, WYSIWYG interface. An author drew pictures, placed them on pages, created ``buttons'' within the pictures, and attached to the buttons assorted actions, such as linking to another page or running an animation.
  4. This work is reflected in current and past commercial ``card-based'' hypertext systems, such as HyperCard. With the current popularity of the WWW, many of the navigation facilities originally developed for IGD are being rediscovered by a new generation of researchers.
  5. IGD's documents were only as good as the abilities and efforts of the human authors and designers who created them. Recognizing this bottleneck to effective technical documentation prompted my dissertation work on APEX (Automated Pictorial EXplanations), a knowledge-based system that synthesized sequences of static 3D pictures that illustrate the actions performed by a problem solver. This work, coupled with drastic improvements in the performance and size of graphics processors and displays, led to my current research on knowledge-based virtual environments, and augmented reality.