
Wearable Augmented Reality

Traditionally, immersive environments and augmented reality have required large, immobile installations. Flight simulators and even head-up displays (HUDs), for example, have demanded powerful, fixed equipment. However, recent work has reduced the requirements for virtual reality to one or two workstations, a display, and a single camera. For instance, the Pfinder [18] and Survive [15] systems need relatively modest computational resources to produce an interactive virtual experience: they visually track the user and support interaction with agents as well as arcade gaming. The natural progression is a fully wearable, person-centric augmented reality with head-mounted displays, where vision tracks the world instead of the user.

The use of such personal imaging and augmented reality has been discussed and investigated in [17], [12], and [16]. This previous research has demonstrated both the advantages and the challenges of applying computer vision to the external environment in support of a wearable augmented reality experience.

Other current research in interactive environments has stressed the importance of maintaining intermediate representations between the real and the virtual; these help preserve the rich multi-modality we have come to expect [7]. For instance, Ishii and Ullmer [8] include physical objects as input to the computer to preserve tangibility in the virtual experience. Feiner et al. [6] overlay graphical output on see-through head-mounted displays to retain the rich visual stimuli of the real world.

Tony Jebara
Wed Feb 18 18:52:15 EST 1998