Haptic Sensing

Note: this work was done by our friend and colleague Paul Michelman, who passed away unexpectedly on June 30, 2000.

Contact: Peter Allen <allen@cs.columbia.edu>

One area of research has been investigating the use of tactile information to recover 3-D object information. While acquisition of 3-D scene information has focused on either passive 2-D imaging methods (stereopsis, structure from motion, etc.) or 3-D range sensing methods (structured lighting, laser scanning, etc.), little work has been done using active touch sensing with a multi-fingered robotic hand to acquire scene descriptions, even though it is a well-developed human capability. Touch sensing differs from more passive sensing modalities such as vision in a number of ways. A multi-fingered robotic hand with touch sensors can probe, move, and change its environment. In addition, touch sensing generates far less data than vision methods; this is especially intriguing in light of psychological evidence showing that humans can recover shape and a number of other object attributes very reliably using touch alone.

In Allen and Michelman [7][5][4], shape recovery experiments are described using active strategies for global shape recovery, contour following, and surface normal computation. Our approach is to find gross object shape initially and then use a hypothesize-and-test method to generate more detailed information about an object, as discussed in Allen [1].

The first active strategy is grasping by containment, which is used to find an initial global estimate of shape that can then be refined by more specific, localized sensing. Superquadrics are used as the global shape model. Once a superquadric has been fit to the initial grasp data, we have a strong hypothesis about the object's shape. Of particular importance are the shape parameters: the class of shape they indicate can be used to direct further exploration. For example, if the shape parameters indicate a rectangular object, a strategy can trace out the plane and perform a least-squares fit of the trace data to test the surface's planarity.

The second procedure we have developed is the planar surface strategy. After making contact with an object, the hand and arm move to the boundaries of the object's surface to map it out and determine its normal.

The third exploratory procedure we have implemented is surface contour following with a two-fingered grasp. This procedure allows us to determine an object's contour, which previous vision research has shown to be a strong shape cue.

During these experiments, the hand was equipped with Interlink force-sensing resistive tactile sensors. The exploratory procedures were tested on a variety of objects, such as blocks, wedges, cylinders, and bottles. The overall Utah/MIT hand system we have been using is described in Allen, Michelman, and Roberts [3][2].
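The two geometric tests mentioned above can be sketched in a few lines. This is a minimal illustration, not the lab's actual code: it uses the standard superquadric inside-outside function (in the superquadric's canonical frame, ignoring the pose recovery step) and an SVD-based least-squares plane fit; the function names, parameters, and tolerance are invented for the example.

```python
import numpy as np

def superquadric_inside_outside(points, a1, a2, a3, e1, e2):
    """Standard superquadric inside-outside function F in the canonical frame.

    F is ~1 for points on the surface, <1 inside, >1 outside.
    a1, a2, a3 are the axis lengths; e1, e2 are the shape parameters
    (e1 = e2 = 1 gives an ellipsoid; small values give box-like shapes).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    xy = (np.abs(x / a1) ** (2.0 / e2) +
          np.abs(y / a2) ** (2.0 / e2)) ** (e2 / e1)
    return xy + np.abs(z / a3) ** (2.0 / e1)

def fit_plane(points):
    """Least-squares plane through 3-D contact points.

    Returns the unit normal, the centroid, and the RMS residual; a small
    residual supports the hypothesis that the traced surface is planar.
    """
    centroid = points.mean(axis=0)
    # The plane normal is the right singular vector associated with the
    # smallest singular value of the centered point cloud.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    residuals = (points - centroid) @ normal
    rms = np.sqrt(np.mean(residuals ** 2))
    return normal, centroid, rms
```

For example, trace data from a flat face would yield an RMS residual near zero, while data from a cylindrical surface would not, letting the planarity hypothesis be accepted or rejected against a threshold.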

Click for an MPEG video of our experiment (warning: the file is 4,880,128 bytes, so it may take a while to download).
