DyPERS, 'Dynamic Personal Enhanced Reality System', is a wearable
system that uses augmented reality and computer vision to
autonomously retrieve 'media memories' based on associations with
real objects the user encounters. These memories are audio and video
clips recorded by the user and overlaid on the real objects the user
looks at.
The user's visual and auditory scene is stored in real-time
by the system (upon request) and is then associated (by user input)
with a snapshot of a visual object. The object acts as a key which is
detected by a real-time vision system when it is in view, triggering
DyPERS to play back the appropriate audio-visual sequence. The vision
system is a probabilistic algorithm capable of discriminating
between hundreds of everyday objects under varying viewing conditions
(lighting, pose changes, etc.). The record-and-associate paradigm
of the system has many potential applications. Results of the use of
the system in a museum's tour scenario are described.
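The record-and-associate paradigm can be sketched in a few lines. This is a minimal illustration only: the class and method names are hypothetical, and the real system recognizes objects with a probabilistic vision algorithm rather than the exact-key lookup used here.

```python
# Minimal sketch of the record-and-associate paradigm.
# All names are hypothetical; DyPERS itself matches objects with a
# probabilistic vision system, not exact string keys.

class RecordAndAssociate:
    def __init__(self):
        # Maps an object "key" (snapshot) to a recorded A/V clip.
        self.associations = {}

    def associate(self, object_key, av_clip):
        # User records a clip and binds it to a snapshot of an object.
        self.associations[object_key] = av_clip

    def on_object_detected(self, object_key):
        # When the vision system recognizes a known object, return the
        # associated clip for playback; unknown objects trigger nothing.
        return self.associations.get(object_key)

system = RecordAndAssociate()
system.associate("painting_42", "curator_explanation_clip")
print(system.on_object_detected("painting_42"))   # the associated clip
print(system.on_object_detected("unknown_vase"))  # None: no playback
```

The essential point is the indirection: the object itself carries no media, it merely serves as a key that the recognition system resolves to a stored memory.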