
Occlusion and Back Surfaces

It is possible that certain points in the range data correspond to surface patches whose normals point away from the 2D image. For instance, the back of the head can be projected onto the 2D image, yet it should not be coated with the intensity values it intersects since its surface points away from the image plane, not towards it. Furthermore, certain components of the range data may occlude others. For example, if the face is slightly rotated, the nose will overlap part of the cheek. Thus, it is necessary to avoid ``colorizing'' the occluded range data points, since the intensity values at their projected locations belong to the surfaces occluding them.
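
Purely as an illustration of the problem (the system described below deliberately avoids such per-pixel checks), occluded range points could be detected with a simple depth-buffer test during projection: for each image pixel, only the range point nearest the camera keeps the intensity value there. The sketch assumes the range data is an N x 3 array of camera-frame points and that a placeholder project() function implements the 2D projection of the preceding sections; both are assumptions for illustration.

    import numpy as np

    def visible_mask(points_3d, project, image_shape):
        """Keep, for each image pixel, only the range point nearest the camera.

        points_3d   : (N, 3) array of range points in camera coordinates
                      (z increases away from the image plane).
        project     : placeholder function mapping (N, 3) points to (N, 2)
                      pixel coordinates (the inverse 3D projection).
        image_shape : (rows, cols) of the intensity image.
        Returns a boolean mask marking the points that are not occluded.
        """
        pixels = np.round(project(points_3d)).astype(int)
        depth = np.full(image_shape, np.inf)            # z-buffer
        nearest = np.full(image_shape, -1, dtype=int)   # index of winning point

        for i, (u, v) in enumerate(pixels):
            if 0 <= v < image_shape[0] and 0 <= u < image_shape[1]:
                if points_3d[i, 2] < depth[v, u]:
                    depth[v, u] = points_3d[i, 2]
                    nearest[v, u] = i

        mask = np.zeros(len(points_3d), dtype=bool)
        mask[nearest[nearest >= 0]] = True
        return mask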

To prevent the erroneous ``colorization'' of 3D surface patches that are occluded or point away from the intensity image, it is necessary to compute the normal at each point in the 3D range data image and to perform exhaustive polygon occlusion checks or polygon rendering. However, real-time constraints prevent us from implementing such techniques in a fast face-recognition system. Instead, we use the symmetry of the human face to perform mirroring. This simple and efficient, though suboptimal, technique is described below.
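
For reference, a rough sketch of the normal-based test alluded to above follows; it is not the mirroring technique actually adopted. A normal can be approximated by crossing the local tangents of the range grid, and a point is flagged as back-facing when its normal has a positive component along the viewing direction. The (H, W, 3) grid layout, the +z viewing convention, and the function name are assumptions for illustration.

    import numpy as np

    def backfacing_mask(grid, view_dir=np.array([0.0, 0.0, 1.0])):
        """Flag range-image points whose surface patches face away from the camera.

        grid     : (H, W, 3) array of 3D points, one per range-image pixel.
        view_dir : camera viewing direction; here assumed to be +z
                   (adjust the sign for other coordinate conventions).
        Returns an (H, W) boolean array, True where the patch is back-facing.
        """
        # Tangent vectors from central differences along the grid axes.
        du = np.empty_like(grid)
        dv = np.empty_like(grid)
        du[:, 1:-1] = grid[:, 2:] - grid[:, :-2]
        du[:, [0, -1]] = du[:, [1, -2]]
        dv[1:-1, :] = grid[2:, :] - grid[:-2, :]
        dv[[0, -1], :] = dv[[1, -2], :]

        # Surface normal as the cross product of the two tangents.
        normals = np.cross(du, dv)

        # Back-facing when the normal points along the viewing direction,
        # i.e. away from the image plane.
        return np.einsum('ijk,k->ij', normals, view_dir) > 0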

Furthermore, we choose to crop the 3D model so that only the front of the face will be utilized in texture mapping. The back of the head, the neck and the top of the head will not be useful for recognition, so there is no need to compute their projections onto the image or to worry about the validity of their colorization by tracking their surface normals.
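
One way such cropping could be realized, sketched here under the assumption of the same (H, W, 3) range grid and a purely illustrative depth margin, is to discard every point lying more than the margin behind the frontmost point (roughly the nose tip).

    import numpy as np

    def crop_frontal_face(grid, depth_margin=120.0):
        """Discard range points lying more than depth_margin (in range units,
        e.g. mm) behind the frontmost point, keeping only the front of the face.

        grid : (H, W, 3) array of 3D points; z increases away from the camera.
        Returns a float array with rejected points replaced by NaN.
        """
        z = grid[..., 2]
        z_front = np.nanmin(z)                     # depth of the nose tip
        keep = z <= z_front + depth_margin         # frontal band of the head
        return np.where(keep[..., None], grid, np.nan)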

