Multi-Tracker


We developed a patch tracker that uses multiple, independent trackers working together to follow an object over long periods of time.  We applied this system to tracking the gripper of the IREP.


Each tracker produces a confidence level, which is used to determine how well that tracker has performed.  If one tracker fails, the others can help it “recover”.  In this way, different features of the object are exploited to best keep hold of the object in dynamically-changing environments.  We use:


  1. Color Tracking: we use the technique of Robert Collins (PSU) to compute tuned color features that are locally discriminative in a log-likelihood framework.  This captures the color features of the object.  The color likelihood yields a confidence level in the area of the object.


  2. Correlation-based Tracking: using the last known patch location, we compute a normalized cross-correlation surface to find the best location of the tracked patch.  This captures the large-scale geometry of the object.  The peak correlation value is converted into a confidence level.


  3. Optical Flow Tracking: we use optical flow of small corner features with the KLT tracking method to track individual locations on the object locally.  This captures the small-scale geometry of the object.  By correlating a small patch around each corner feature with its new location, we obtain a confidence for the optical flow method.
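The discriminative color score in step 1 can be sketched as a per-bin log-likelihood ratio between object and background color histograms. This is a minimal illustration, not the authors' implementation; `color_log_likelihood` and its dictionary-based histograms are hypothetical simplifications.

```python
import math

def color_log_likelihood(obj_hist, bg_hist, eps=1e-6):
    """Per-bin log-likelihood ratio of object vs. background color
    histograms.  Positive scores mark colors that are locally
    discriminative for the object; negative scores mark background.
    Histograms are dicts mapping a color-bin index to a count
    (a simplified stand-in for the tuned color features)."""
    n_obj = sum(obj_hist.values()) or 1
    n_bg = sum(bg_hist.values()) or 1
    scores = {}
    for b in set(obj_hist) | set(bg_hist):
        p = obj_hist.get(b, 0) / n_obj + eps   # P(bin | object)
        q = bg_hist.get(b, 0) / n_bg + eps     # P(bin | background)
        scores[b] = math.log(p / q)
    return scores
```

Back-projecting these scores onto the image pixels yields the color likelihood surface whose mass over the track region gives the color tracker's confidence.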
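The confidence used in steps 2 and 3 comes from normalized cross-correlation between a stored patch and its candidate match. A minimal sketch, assuming flattened grayscale patches of equal size; the clamp in `correlation_confidence` is our illustrative choice, not the mapping used in the actual system.

```python
import math

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-size grayscale
    patches (flattened to 1-D lists).  Returns a value in [-1, 1]."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [a - mean_a for a in patch_a]
    db = [b - mean_b for b in patch_b]
    num = sum(x * y for x, y in zip(da, db))
    den = math.sqrt(sum(x * x for x in da) * sum(y * y for y in db))
    return num / den if den else 0.0

def correlation_confidence(score):
    # Simple clamp of the NCC score into [0, 1]; negative correlation
    # is treated as zero confidence (an assumption for illustration).
    return max(0.0, score)
```

Evaluating `ncc` over a search window around the last known location produces the correlation surface; its peak gives both the new patch position and the tracker's confidence.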


In the end, we take the solutions from those of the three trackers that report a high enough confidence and use their median as the result.  Then, on the next frame, we feed the overall tracker location back into each of the individual trackers as a starting location; this helps recover any individual trackers that have failed.
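The thresholded median fusion described above can be sketched as follows; `fuse_tracks` and its (x, y) estimate format are hypothetical names for illustration.

```python
from statistics import median

def fuse_tracks(estimates, confidences, thresholds):
    """Combine (x, y) location estimates from the individual trackers:
    keep those whose confidence clears that tracker's threshold, then
    take the per-axis median of the survivors."""
    kept = [e for e, c, t in zip(estimates, confidences, thresholds)
            if c >= t]
    if not kept:
        return None  # every tracker failed; caller keeps the last location
    return (median(e[0] for e in kept), median(e[1] for e in kept))
```

The fused location is then handed back to all three trackers as the starting point for the next frame, which is what lets a failed tracker re-acquire the object.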


Three screenshots from tracking the gripper from the IREP robot, shown separated from the arm, as it is tracked over a changing background.  The color and NCC likelihoods are shown overlaid in the upper-left corner of each image and the corner features used in the optical flow tracker are shown with previous and current positions within the track box. The purple bounding box shows the final track location as output from the multi-tracker.  Below the overlays is a status message indicating when a particular tracker has failed on the current frame, or if all are succeeding, as shown by the message ’Status OK’. For the overlays, the colormap defines red as high values and blue as low values.


Four screenshots from tracking one snake arm from the IREP robot with a gripper and wrist attached through severe pose changes. The checkerboard markers are ignored. The upper-left corner image shows both color (top) and correlation (bottom) likelihoods. The purple bounding box is the final track measurement as output from the multi-tracker.


Track confidence shown as a function of frame number. The y-axis shows P(correct), the probability of detecting the object correctly according to each tracker’s algorithm. The blue curve (top) shows the confidence outputs from the correlation tracker along with its threshold for success/failure; the red curve (middle) shows the color tracker; and the green curve (bottom) shows the optical flow tracker. Thresholds for each tracker were determined empirically for optimal performance.


Using a stereo device we previously developed for in-vivo surgical imaging, we were able to recover the 3-D position and orientation of the gripper shown above with the Multi-Tracker.  By tracking the gripper in one camera, we detect the corresponding patch in the other camera, use simple shape analysis to detect the tip and base of the gripper, and then triangulate using the camera calibration parameters to obtain the position and orientation as it moves through space.
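For a rectified stereo pair, the triangulation step reduces to depth-from-disparity; this simplified sketch stands in for full calibrated triangulation, and the function name and parameters are illustrative assumptions.

```python
def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """Recover a 3-D point from a rectified stereo correspondence.
    (u_left, v) and (u_right, v) are the pixel coordinates of the same
    point in the left and right images; f is the focal length in
    pixels, baseline the camera separation, (cx, cy) the principal
    point.  Returns (x, y, z) in the left camera frame, or None when
    the disparity is degenerate."""
    d = u_left - u_right            # disparity
    if d <= 0:
        return None                 # point at infinity or a bad match
    z = f * baseline / d            # depth from disparity
    x = (u_left - cx) * z / f
    y = (v - cy) * z / f
    return (x, y, z)
```

Triangulating both the tip and the base of the gripper this way gives two 3-D points, and the unit vector from base to tip provides the orientation.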