
System Overview

A block diagram description of the overall algorithm is depicted in Figure 5.1. The diagram shows the flow through the individual modules used by the system. The algorithm's structure is complex and somewhat fragmented since it must handle multiple scales and multiple candidate face detections in an image.


  
Figure 5.1: Block diagram description of the overall algorithm

We begin at a large scale, with the image reduced so that operators acting upon it detect the largest objects in the scene. The face blob localization module finds all blobs in the image at the current scale and then transmits the coordinates of the strongest blob to the facial contour estimation module. If no face-like contour is present around the blob, the facial contour estimation module sends a failure signal to the blob detector, which, in turn, provides it with another blob to process. If, however, a facial contour exists, the algorithm proceeds to the eye localization module.
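The hand-off between these first two modules can be summarized with a short control-flow sketch. This is a minimal illustration rather than the actual implementation: the helpers detect_blobs and fit_facial_contour, and the blob "strength" field, are hypothetical stand-ins for the blob localization and contour estimation modules.

\begin{verbatim}
def first_face_candidate(image, detect_blobs, fit_facial_contour):
    """Return (blob, contour) for the strongest blob that admits a
    face-like contour, or None if every blob is rejected."""
    # Blobs are examined in order of decreasing strength, mirroring the
    # "strongest blob first" behaviour of the blob localization module.
    for blob in sorted(detect_blobs(image),
                       key=lambda b: b["strength"], reverse=True):
        contour = fit_facial_contour(image, blob)
        if contour is not None:      # a face-like contour exists: proceed
            return blob, contour
        # contour is None: the "failure signal" -- try the next blob
    return None
\end{verbatim}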

The eye localization stage finds all eye-like blobs in the facial contour's eye band and sends the dominant pair to the mouth localization stage. We then find the nose line and the iris. To find the exact position of the nose, we sample 10 nose anchor points along the nose line and generate a normalized mug-shot from each. The DFFS (distance from feature space) is computed for each mug-shot, and the nose anchor point that yields the minimal DFFS is selected. The nose is then fully localized and we compute a final normalization to obtain a high-resolution mug-shot image (a probe). This probe image is recognized in the recognition module, which finds its closest match in the database. The match and its distance from the probe image are then stored, and we loop back to the face blob localization stage to process the remaining face blobs in the image.
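The nose-line search lends itself to a compact sketch. The following is a rough illustration under stated assumptions, not the thesis code: normalize_mugshot is a hypothetical callable that produces a fixed-size, flattened mug-shot for a given nose anchor point, and the eigenspace is given by a mean vector mu and an orthonormal eigenvector matrix U (one eigenvector per column).

\begin{verbatim}
import numpy as np

def dffs(x, mu, U):
    """Distance from feature space: the energy of (x - mu) that lies
    outside the span of the orthonormal columns of U."""
    r = x - mu
    return float(np.sum(r ** 2) - np.sum((U.T @ r) ** 2))

def best_nose_anchor(image, nose_line_pts, normalize_mugshot, mu, U,
                     n_samples=10):
    """Sample the nose line at n_samples anchor points, normalize a
    mug-shot at each one and return (best_anchor, best_dffs)."""
    idx = np.linspace(0, len(nose_line_pts) - 1, n_samples).astype(int)
    scored = [(dffs(normalize_mugshot(image, nose_line_pts[i]), mu, U),
               nose_line_pts[i]) for i in idx]
    best_score, best_anchor = min(scored, key=lambda t: t[0])
    return best_anchor, best_score
\end{verbatim}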

If none of the 10 normalizations along the nose line yields an adequate DFFS, the DFFS threshold stage generates a failure signal and informs the eye localization stage to transmit another pair of eye blobs. Similarly, the lack of a valid mouth or nose line generates a failure from the corresponding module, which likewise issues a request for another pair of eyes from the eye localization module.

If the eye localization module has transmitted all the possible eye blobs and none have successfully passed through all the subsequent stages, it generates a failure signal itself. This informs the face blob localization stage to transmit another blob to the face contour stage, forcing the search to process another face blob elsewhere in the image.
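The failure-signal hierarchy described in the two preceding paragraphs amounts to a nested backtracking loop. The sketch below is a hypothetical rendering of that control flow, with each stage modelled as a callable that returns None on failure; the names and signatures are illustrative, not the actual modules.

\begin{verbatim}
def process_face_blob(image, contour, find_eye_pairs, find_mouth,
                      find_nose_line, best_nose_probe):
    """Try every candidate eye pair for one face blob.  Returns a probe
    image on success, or None so the caller can move to the next blob."""
    for eyes in find_eye_pairs(image, contour):      # dominant pair first
        mouth = find_mouth(image, contour, eyes)
        if mouth is None:                            # no valid mouth:
            continue                                 #   next eye pair
        nose_line = find_nose_line(image, eyes, mouth)
        if nose_line is None:                        # no valid nose line
            continue
        probe = best_nose_probe(image, nose_line)    # DFFS threshold stage
        if probe is not None:
            return probe                             # full success
        # inadequate DFFS on all 10 normalizations: try the next eye pair
    return None   # all eye pairs exhausted -> failure back to blob stage
\end{verbatim}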

Once all blobs detected by the blob localization module have been investigated, it generates a signal to the 'Reduce Scale' module, which produces an image at a new scale at which face blob localization is re-executed. Thus, we have a new set of smaller face blobs to investigate. This process continues, allowing the algorithm to search each scale progressively (from large to small scales) for face blobs. Once the algorithm has reached the smallest allowable scale ($4 \times$), all face blobs have been processed. The system then stops searching and generates its recognition output.
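The outer scale loop can be sketched as follows, assuming the input behaves like a 2-D NumPy array and that find_faces_at_scale wraps the per-scale pipeline above. Only the final $4 \times$ factor comes from the text; the other reduction factors are illustrative.

\begin{verbatim}
def subsample(image, factor):
    """Reduce the image by keeping every factor-th pixel (a simple
    stand-in for whatever reduction the 'Reduce Scale' module performs;
    image is assumed to be a 2-D NumPy array)."""
    return image[::factor, ::factor]

def multi_scale_search(image, find_faces_at_scale, factors=(32, 16, 8, 4)):
    """Run the face search at each scale, from the coarsest reduction
    (largest faces) down to the smallest allowable one (4x), and
    accumulate every (match_identity, distance) result."""
    results = []
    for factor in factors:               # largest reduction first
        reduced = subsample(image, factor)
        results.extend(find_faces_at_scale(reduced, factor))
    return results
\end{verbatim}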

Throughout the search, the algorithm will have localized several face-like objects that were used as probe images to query its database of faces. Each probe image is matched to a database member and the distance from the probe image to that member is stored. The probe image with the lowest distance to a database member is the one that most closely resembles a member of our database, so we return this face as the recognition result.
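As a final, minimal sketch (assuming each result is stored as a (match_identity, distance) pair, as in the scale-loop sketch above), the decision rule is simply a minimum over the accumulated distances:

\begin{verbatim}
def recognition_output(results):
    """Return the (match_identity, distance) pair with the smallest
    distance, or None if no face-like object survived the search."""
    if not results:
        return None
    return min(results, key=lambda r: r[1])
\end{verbatim}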

