
Background and Related Work

This section reviews related areas, compares other systems to DyPERS, and highlights the new contributions emphasized by the proposed system.

Ubiquitous vs. Wearable Computing: Both wearable/personal computing and ubiquitous computing offer interesting routes to augmenting human capabilities with computers. However, wearable computers attempt to augment the user directly and provide a mobile platform, while ubiquitous computing augments the surrounding physical environment with a network of machines and sensors. Weiser [Weiser, 1991] discusses the merits of ubiquitous computing, while Mann [Mann, 1997] argues in favor of mobile, personal audio-visual augmentation with his wearable platform.

Memory Augmentation: Memory augmentation has evolved from simple pencil-and-paper paradigms to sophisticated personal digital assistants (PDAs) and beyond. Closely related memory augmentation systems include the ``Forget-me-not'' system [Lamming and Flynn, 1993], a personal information manager inspired by Weiser's ubiquitous computing paradigm, and the Remembrance Agent [Rhodes and Starner, 1996], a text-based, context-driven wearable augmented reality memory system. Both systems collect and organize data that is relevant to the human user for subsequent retrieval.

Augmented Reality: Augmented reality systems form a more natural interface between user and machine, which is a critical feature for a system like DyPERS. In [Kakez et al., 1997] a virtually documented environment system is described which assists the user in a performance task by registering synthetic multimedia data acquired using a head-mounted video camera. However, information is retrieved explicitly by the user via speech commands.

On the other hand, the retrieval process is automated in [Levine, 1997], a predecessor of DyPERS. This system used machine vision to locate `visual cues' and then overlaid a stabilized image, messages, or clips on top of the user's view of the cue object (via a HUD). The visual cues and the images/messages had to be prepared offline, and the collection process was not automated. In addition, the machine vision algorithm used was limited to 2D objects viewed head-on and at an appropriate distance. An earlier version, described in [Starner et al., 1997], further simplified the machine vision by using colored bar code tags as the visual cue.

In [Rekimoto and Nagao, 1995] the NaviCam system is described as a portable computer with a video camera which detects pre-tagged objects. Users view the real world together with context-sensitive information generated by the computer. NaviCam is extended in the Ubiquitous Talker [Rekimoto and Nagao, 1995] to include a speech dialogue interface. Other applications include a navigation system, WalkNavi [Nagao and Rekimoto, 1996]. Audio Aura [Mynatt et al., 1997] is an active badge distributed system that augments the physical world with auditory cues; users passively trigger the transmission of these cues as they move through their workplace. Finally, Jebara [Jebara et al., 1997] proposes a vision-based wearable enhanced reality system called Stochasticks for augmenting a billiards game with computer-generated shot planning.

Perceptual Interfaces: Most human-computer interaction is still limited to keyboards and pointing devices. The usability bottleneck that plagues interactive systems lies not in performing the processing task itself but rather in communicating requests and results between the system and the user [Jacob et al., 1993]. Faster, more natural, and more convenient means for users to exchange information with computers are needed. This communication bottleneck has spurred increased research in providing perceptual capabilities (speech, vision, haptics) to the interface. Such perceptual interfaces are likely to be a major model for future human-computer interaction [Turk, 1997].

