Projects

SpaceTokens: Interactive Map Widgets for Location-centric Interactions


Map users often need to interact with multiple important locations repeatedly. For example, a traveler may frequently check her hotel or a train station on a map, use them to localize an unfamiliar location, or investigate routes involving them. Ironically, these location-centric tasks cannot be performed using locations directly; users must instead pan and zoom the map or access locations through a menu. We propose SpaceTokens, interactive widgets that act as clones of locations and that users can create and place on map edges like virtual whiteboard magnets. SpaceTokens make location a first-class citizen of map interaction, empowering users to perform location-centric tasks rapidly and directly with locations: users can select combinations of on-screen locations and SpaceTokens to control the map window, or connect them to create routes.
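
As a rough illustration of how a selection could drive the map window, here is a minimal Python sketch (the names SpaceToken and fit_viewport are hypothetical, not from the actual system): the selected locations define a bounding box that the map zooms to fit.

```python
from dataclasses import dataclass

@dataclass
class SpaceToken:
    """A docked clone of a map location, pinned to a screen edge."""
    name: str
    lat: float
    lon: float

def fit_viewport(selection, padding=0.1):
    """Compute a map viewport (bounding box) that shows every selected
    location/SpaceToken, with a small margin around the extremes."""
    lats = [t.lat for t in selection]
    lons = [t.lon for t in selection]
    dlat = (max(lats) - min(lats)) * padding
    dlon = (max(lons) - min(lons)) * padding
    return (min(lats) - dlat, min(lons) - dlon,
            max(lats) + dlat, max(lons) + dlon)

# Selecting a docked hotel token and an on-screen station reframes the
# map so both are visible at once (coordinates are made up).
hotel = SpaceToken("hotel", 40.7128, -74.0060)
station = SpaceToken("station", 40.7506, -73.9935)
print(fit_viewport([hotel, station]))
```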

SpaceBar: A Scrollbar for a Route


Reviewing a route requires both macro and micro reading, i.e., seeing both overview and detail. However, overview and detail are polar opposites: map users often need to repeatedly zoom in to see detail, then zoom out to regain overview. These repetitive, excessive interactions mar the user experience and prevent users from processing information efficiently. We introduce SpaceBar, a scrollbar-like instrument that associates a simple linear slider with a complex nonlinear route. Like a scrollbar, a SpaceBar has an elevator indicator that serves as a 1D overview+detail indicator. A user can change the size and position of the elevator indicator to change the visible portion of a route; conversely, the elevator indicator updates dynamically as the user interacts with the route. SpaceBar helps a user comprehend and interact with a route efficiently.
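
Under the hood, the key mapping is from the linear slider to arc length along the route. The Python sketch below is a hedged illustration of that mapping (all names are illustrative, not the actual implementation):

```python
import numpy as np

def route_segment(route, start_frac, size_frac):
    """Map the elevator indicator (position + size, both as fractions of
    the slider) to the corresponding portion of a polyline route, using
    arc length so the mapping is uniform along the route."""
    route = np.asarray(route, dtype=float)          # (N, 2) lat/lon points
    seg = np.linalg.norm(np.diff(route, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])     # cumulative arc length
    s /= s[-1]                                      # normalize to [0, 1]
    lo, hi = start_frac, min(start_frac + size_frac, 1.0)
    mask = (s >= lo) & (s <= hi)
    # Exact segment endpoints would need interpolation; omitted for brevity.
    return route[mask]                              # points to keep in view

# Dragging the indicator to cover the middle 20% of the slider selects
# the middle 20% (by arc length) of the route.
route = [(0, 0), (1, 0), (1, 2), (3, 2)]
print(route_segment(route, 0.4, 0.2))
```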

Personalized Compass: A Compact Visualization for Direction and Location


Maps on mobile and wearable devices often make it difficult to determine the location of a point of interest (POI). For example, a POI may lie outside the map or on a background with no meaningful cues. To address this issue, we present Personalized Compass, a self-contained, compact graphical location indicator. Personalized Compass uses personal a priori POIs to establish a reference frame, within which the POI in question can then be localized. Graphically, a personalized compass combines a multi-needle compass with an abstract overview map.
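
Each needle reduces to standard geographic math. As a hedged Python sketch (not the actual implementation, and the coordinates are made up), one needle amounts to a bearing and distance between two lat/lon points:

```python
import math

def needle(origin, poi):
    """Bearing (degrees clockwise from north) and great-circle distance
    (km) from origin to poi; one such pair per compass needle."""
    lat1, lon1 = map(math.radians, origin)
    lat2, lon2 = map(math.radians, poi)
    dlon = lon2 - lon1
    y = math.sin(dlon) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(dlon))
    bearing = math.degrees(math.atan2(y, x)) % 360.0
    # Haversine distance on a spherical Earth (radius 6371 km).
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    dist_km = 6371.0 * 2 * math.asin(math.sqrt(a))
    return bearing, dist_km

poi = (40.748, -73.985)   # POI in question (hypothetical coordinates)
home = (40.80, -73.96)    # personal a priori POI (hypothetical)
print(needle(poi, home))  # one needle of the personalized compass
```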

Focal Sweep Videography with Deformable Optics


A number of cameras have been introduced that sweep the focal plane using mechanical motion. However, mechanical motion makes video capture impractical and is unsuitable for cameras with long focal lengths. In this project, we present a focal sweep telephoto camera that uses a variable focus lens to sweep the focal plane. Our camera requires no mechanical motion and can sweep the focal plane periodically at high speeds. We use our prototype camera to capture extended depth of field (EDOF) videos at 20 fps, and demonstrate space-time refocusing for scenes with a wide depth range. In addition, we capture periodic focal stacks, and show how they can be used for several interesting applications, such as video refocusing and trajectory estimation of moving objects.
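
To give a sense of the sweep, here is a hedged Python sketch (the triangular waveform and all parameter values are illustrative assumptions, not the actual drive signal of the prototype):

```python
import numpy as np

def focus_schedule(t, sweep_hz=20.0, dpt_near=10.0, dpt_far=1.0):
    """Optical power (diopters) of the variable focus lens at time t.
    A triangular wave sweeps the focal plane from near to far and back
    once per period; exposing each frame over one full sweep yields an
    EDOF image without any mechanical motion."""
    phase = (t * sweep_hz) % 1.0
    tri = 1.0 - abs(2.0 * phase - 1.0)   # 0 -> 1 -> 0 each period
    return dpt_far + (dpt_near - dpt_far) * tri

# Sample the schedule over one 50 ms frame (one full sweep at 20 fps).
t = np.linspace(0.0, 1.0 / 20.0, 6)
print(focus_schedule(t))
```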

Focal Sweep Photography for Space-Time Refocusing


A conventional camera has a limited depth of field (DOF), which often results in defocus blur and loss of image detail. Image refocusing allows a user to interactively change the plane of focus and DOF of an image after it is captured. One way to achieve refocusing is to capture the entire light field, but this requires a significant compromise of spatial resolution because of the dimensionality gap: the captured information (a light field) is 4-D, while the information required for refocusing (a focal stack) is only 3-D. In this project, we present an imaging system that directly captures a focal stack by physically sweeping the focal plane. We first describe how to sweep the focal plane so that the aggregate DOF of the focal stack covers the entire desired depth range without gaps or overlaps. Since the focal stack is captured over a duration of time during which scene objects can move, we refer to it as a duration focal stack. We then propose an algorithm for computing a space-time in-focus index map from the focal stack, which represents the time at which each pixel is best focused. The algorithm is designed to enable a seamless refocusing experience, even for textureless regions and at depth discontinuities.
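
A common baseline for such an index map is a per-pixel focus-measure argmax over the stack, sketched below in Python with NumPy/SciPy. This is a generic baseline, not the paper's algorithm, which additionally handles textureless regions and depth discontinuities:

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def in_focus_index_map(stack, win=9):
    """Baseline in-focus index map: for each pixel, pick the slice of a
    focal stack (T, H, W grayscale frames) with the highest local focus
    measure (windowed Laplacian energy)."""
    measures = []
    for frame in stack:
        energy = laplace(frame.astype(float)) ** 2
        measures.append(uniform_filter(energy, size=win))
    measures = np.stack(measures)          # (T, H, W)
    return np.argmax(measures, axis=0)     # best-focused slice per pixel

# Refocusing at a clicked pixel (y, x) then displays stack[index[y, x]].
stack = np.random.rand(8, 64, 64)          # toy duration focal stack
index = in_focus_index_map(stack)
print(index.shape, index.min(), index.max())
```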

Gigapixel Computational Imaging


Today, consumer cameras produce photographs with tens of millions of pixels, and the recent trend in image sensor resolution suggests that we will soon have cameras with billions of pixels. However, the resolution of any camera is fundamentally limited by geometric aberrations. We derive a scaling law showing that, by using computation to correct for aberrations, we can create cameras with unprecedented resolution that have low lens complexity and a compact form factor. In this project, we present an architecture for gigapixel imaging that is compact and utilizes a simple optical design: a ball lens shared by several small planar sensors, followed by a post-capture image processing stage. Several variants of this architecture are shown for capturing a contiguous hemispherical field of view, as well as a complete spherical field of view. We demonstrate the effectiveness of our architecture with example images captured by two proof-of-concept gigapixel cameras.
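
The post-capture stage is essentially deconvolution. As a simplified stand-in (Wiener deconvolution of one sensor tile with an assumed known, shift-invariant PSF; a rough simplification of the actual correction, which must cope with spatially varying blur), a minimal Python sketch:

```python
import numpy as np

def wiener_deblur(image, psf, nsr=0.01):
    """Simplified post-capture aberration correction: Wiener
    deconvolution of one sensor tile with the (assumed known,
    shift-invariant) blur kernel of the ball lens."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

# Toy example: blur a tile with a Gaussian PSF, then restore it.
tile = np.random.rand(128, 128)
yy, xx = np.mgrid[-64:64, -64:64]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
blurred = np.real(np.fft.ifft2(
    np.fft.fft2(tile) * np.fft.fft2(np.fft.ifftshift(psf))))
print(wiener_deblur(blurred, psf).shape)
```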