A number of sophisticated multi-fingered, dextrous hands have been built in the last few years, showing great promise in extending robotic capabilities for tasks such as assembly, inspection, and repair. While the mechanical design of these hands has been a major advance in robotics, building intelligent, robust, task-based control of them has proven very difficult. Our work has centered on the specific goal of extending the capabilities of dextrous robotic hands so that they may become a major component of any robotic system. Our research has progressed from single-finger experiments to the creation of a sophisticated, integrated multi-fingered dextrous hand control system \cite{alle87,alle88a,alle90a,alle90b,alle90c,alle92d,alle92c}. Our work in creating control and sensing primitives has allowed us to explore the use of robotic hands in tasks ranging from object recognition to the study of generalized grasping and the implementation of active force control strategies. Below, we outline five areas in which we are working to make dextrous robotic hands a major component of a robotic system.

Haptic Object Recognition

Humans have a highly developed haptic perception system. By haptic, we mean the interplay of the skin's tactile receptors with the joints, muscles, and bones. If you blindfold yourself, you can still recognize an object's shape, size, texture, compliance, and function by manipulating it with your hands. Using this human capability as a goal, our research has developed robotic analogs of three human haptic sensing strategies (called Exploratory Procedures, or EPs) to recover the 3-D shape of objects using the hand/arm system we have developed. We are committed to full 3-D object recognition; many other researchers are still studying recognition of 2-D, polygonal objects. The robotic EPs we have built allow the system to autonomously recognize and reconstruct the shape of objects by having the dextrous hand 1) grasp an object and encompass it, 2) follow planar surfaces to find edges, vertices, and the extents of surfaces, and 3) use the fingers to follow the contours of objects and reconstruct the object's surfaces. This work will be expanded to include more EPs, such as procedures to identify articulated parts, quantify surface texture, and determine object compliance. We also expect to work with more complex objects containing multiple segments. Within the next few years, we expect to acquire an additional dextrous robotic hand, and two-handed exploration, which we believe is a necessary part of any robotic grasping system, will begin. We also note the need to integrate other sensing modalities, such as vision, to serve as a front end to further exploration of an object by touch. The EPs we are building can be thought of as a set of primitive haptic functions that serve as the building blocks for an active, autonomous haptic recognition system. As we develop these EPs, we can begin to answer important questions about the sufficiency of these primitives in performing broad classes of grasping and recognition tasks; this will allow us to design the next generation of robot hands and hand controllers, as well as determine correct strategies for using robotic hands in assembly and related tasks.
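To make the idea of EPs as composable building blocks concrete, the following is a minimal Python sketch of how the three procedures above might be organized and sequenced. All class, method, and object names here (ExploratoryProcedure, hand, arm, model_builder, etc.) are illustrative assumptions, not the laboratory's actual software interfaces.

\begin{verbatim}
# Minimal sketch of exploratory procedures (EPs) as composable primitives.
# All names below are illustrative assumptions, not the lab's actual API.
from abc import ABC, abstractmethod


class ExploratoryProcedure(ABC):
    """One primitive haptic sensing strategy run by the hand/arm system."""

    @abstractmethod
    def execute(self, hand, arm):
        """Run the EP and return partial shape data (e.g., contact points)."""


class GraspAndEnclose(ExploratoryProcedure):
    def execute(self, hand, arm):
        hand.close_around_object()               # encompass the object
        return hand.joint_angles_and_contacts()  # coarse volumetric estimate


class PlanarSurfaceFollow(ExploratoryProcedure):
    def execute(self, hand, arm):
        return arm.trace_plane_until_edge()      # edges, vertices, extents


class ContourFollow(ExploratoryProcedure):
    def execute(self, hand, arm):
        return hand.finger_trace_contour()       # surface curves for shape


def haptic_recognition(hand, arm, eps, model_builder):
    """Compose EPs as building blocks of an active recognition system."""
    for ep in eps:
        model_builder.integrate(ep.execute(hand, arm))
    return model_builder.best_hypothesis()
\end{verbatim}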

Force Control and Tool Usage

This new research will investigate the ways in which tools can be used by dextrous robot hands. The ultimate goal of the research is to demonstrate the feasibility of implementing tool usage tasks on robot hands, along with strategies for implementing a set of tool tasks. Although the goal of dextrous manipulation research is the development of manipulators that exhibit dexterity in all environments---structured and unstructured---tool usage tasks present constrained manipulation problems with a fixed task decomposition, which adds structure to the problem and allows experimentation to proceed in a modular, scientific fashion. In addition, they admit easily specified geometric models, and they establish criteria for determining the success or failure of any experimental results. The important issues raised by the research in force control and hand tool usage are how to control the position of the tool during the execution of a task and how to sense and control the forces of interaction between the tool and the environment. A typical task has a three-level structure, sketched below. At the top level there is a specification in symbolic terms of the task to be achieved: ``tighten the screw at a specified location with a known screwdriver.'' That command is decomposed into a sequence of commands in a geometric space that are useful to the robot: ``align the tool tip with the axis of the screw.'' Finally, the task-frame commands are converted into commands that the low-level servomechanisms can obey. We intend to investigate the decomposition of tasks from the symbolic level to the servomechanism level, with the overall goal of creating a framework that allows symbolic descriptions of tasks to generate robust and correct low-level control mechanisms \cite{mich93}. Figure \ref{utah-hand2} shows our robotic hand manipulating a block.
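The sketch below illustrates the three-level decomposition using the screwdriver example. The specific command names, setpoint quantities, and controller interface are assumptions made for the example; they are not the implemented framework.

\begin{verbatim}
# Sketch of the three-level task decomposition: symbolic -> geometric -> servo.
# Command names, setpoints, and the controller interface are assumed here.
import math


def symbolic_level(task):
    """Top level: a symbolic task, e.g. 'tighten screw S with screwdriver T'."""
    if task["action"] == "tighten-screw":
        return [("align-tool-axis", task["screw_pose"]),
                ("insert-tip", task["screw_pose"]),
                ("turn-tool", {"angle": 2 * math.pi, "torque_limit": 0.5})]
    raise ValueError("unknown symbolic task")


def geometric_level(command):
    """Middle level: convert a task-frame command into Cartesian setpoints."""
    name, params = command
    if name == "align-tool-axis":
        return {"pose_setpoint": params, "force_setpoint": None}
    if name == "insert-tip":
        return {"pose_setpoint": params, "force_setpoint": (0, 0, -2.0)}  # N
    if name == "turn-tool":
        return {"twist_setpoint": params["angle"],
                "torque_limit": params["torque_limit"]}


def servo_level(setpoints, controller):
    """Bottom level: hand the setpoints to the position/force servo loops."""
    controller.track(setpoints)


def execute_task(task, controller):
    for cmd in symbolic_level(task):
        servo_level(geometric_level(cmd), controller)
\end{verbatim}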

Reduced Degree-of-Freedom Dextrous Hands

We have entered into a joint research agreement with Toshiba Corporation to exploit the reduced degree-of-freedom Flexible Micro-Actuator (FMA) robotic hand developed by Toshiba. Our laboratory was selected for this research because of our experience with multi-fingered dextrous hands and our expertise in sensor-based control of multiple degree-of-freedom systems. This hand is a radical departure from the more anthropomorphic hands that have been built. Our task is to implement sensors on this hand and to develop closed-loop feedback control algorithms that allow task-directed manipulation of objects with it. The scientific goal is to determine whether fewer degrees of freedom are sufficient for generalized grasping tasks; this is important because reduced-DOF systems are much easier to control than more complex devices. We have also begun experiments with continuous pneumatic servo control, as opposed to the discrete system Toshiba has implemented, to see whether continuous positioning is necessary (a control-loop sketch appears below). The results of this research project will be important in determining the amount of complexity needed for the robotic hands of the future. This joint research is also a model for the industrial collaboration we think is important to making progress in robotics.
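As a rough illustration of what continuous pneumatic servoing means in contrast to discrete valve switching, here is a minimal PI control loop that modulates a commanded valve pressure toward a target bending angle. The sensor and valve interfaces, gains, and loop rate are assumptions for the sketch only.

\begin{verbatim}
# Minimal sketch of continuous pneumatic servo control of one FMA finger.
# read_angle/set_valve, gains, and timing are assumptions, not Toshiba's or
# our implemented system.
import time


def pneumatic_servo(read_angle, set_valve, target_angle,
                    kp=2.0, ki=0.5, dt=0.005, steps=2000):
    """Continuously modulate valve pressure (rather than switching it in
    discrete steps) so the finger can hold intermediate positions."""
    integral = 0.0
    for _ in range(steps):
        error = target_angle - read_angle()
        integral += error * dt
        set_valve(kp * error + ki * integral)  # commanded pressure
        time.sleep(dt)
\end{verbatim}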

Designing and Building New Tactile Sensors

In complex control tasks related to grasping with a hand, sensors are required for parameters such as temperature, force, texture, and the surface properties of objects; this research aims to fill that need. Accordingly, we are designing and constructing new tactile sensors, small enough to be mounted on a robotic hand's fingers, which are inexpensive and robust and can provide high-bandwidth, high-resolution response. To date, no effective tactile sensor has been produced that fulfills these requirements. We are fabricating and testing both PTF (Polymer Thick Film) and micromachined silicon-based tactile sensor arrays. We will attach these sensors (prototypes already exist) to the dextrous robotic hand's fingers, with concurrent development of a glove-like system that packages the full set of haptic measurement sensors in an environmentally robust enclosure. The advantages of these sensors over previous designs are numerous: they are inexpensive to fabricate, they can be made in redundant arrays for fault tolerance, they are extremely small, and the use of silicon device technology means that sensor transduction, signal conditioning, and normalization can be performed on the sensor itself. This is a joint research project with the Department of Electrical Engineering \cite{whitea5}.
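For illustration, the following sketch shows the kind of per-element offset calibration and normalization that the text suggests could eventually run on the sensor itself. The array geometry, calibration procedure, and read_raw_frame interface are assumptions made for the example.

\begin{verbatim}
# Illustrative per-taxel offset calibration and normalization for a tactile
# array.  Array size and the read_raw_frame interface are assumed here.
import numpy as np

ROWS, COLS = 8, 8  # assumed array geometry


def calibrate(read_raw_frame, n=100):
    """Average unloaded frames to estimate each taxel's zero-force offset."""
    frames = np.stack([read_raw_frame() for _ in range(n)])
    return frames.mean(axis=0)


def normalized_frame(read_raw_frame, offsets, gain=1.0):
    """Subtract offsets and scale so taxel outputs are directly comparable."""
    return gain * (read_raw_frame() - offsets)
\end{verbatim}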
