This is a writeup of some of the work I did during my postgraduate studies. The purpose of this research was to find solutions to the problem of real-time facial reflectance capture on constrained hardware, where facial reflectance means the albedo and roughness textures used by physically based BRDFs.
Constrained hardware refers to webcams or phone cameras, paired with laptop hardware or low-power mobile chips with limited compute and graphics processing capability.
|Test images were rendered in Blender using HDRIs for lighting.|
Using the normal and color data, spherical harmonics can be extracted from the image. Lighting extraction can then be performed using inverse rendering (Marschner, Guenter and Raghupathy, 2000).
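The lighting extraction step can be sketched as a least-squares fit of the first nine real spherical harmonic coefficients against the observed normals and colors. This is a minimal sketch, not the exact pipeline used in the project; the function names are my own, and the basis constants follow the standard real SH convention.

```python
import numpy as np

def sh_basis(normals):
    """First 9 real spherical harmonic basis values for unit normals (N, 3)."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    return np.stack([
        np.full_like(x, 0.282095),   # Y00
        0.488603 * y,                # Y1-1
        0.488603 * z,                # Y10
        0.488603 * x,                # Y11
        1.092548 * x * y,            # Y2-2
        1.092548 * y * z,            # Y2-1
        0.315392 * (3 * z * z - 1),  # Y20
        1.092548 * x * z,            # Y21
        0.546274 * (x * x - y * y),  # Y22
    ], axis=1)                       # shape (N, 9)

def fit_lighting(normals, intensities):
    """Least-squares fit of 9 SH lighting coefficients per color channel.

    normals: (N, 3) unit normals; intensities: (N, 3) linear RGB samples.
    Returns a (9, 3) coefficient matrix.
    """
    basis = sh_basis(normals)        # (N, 9)
    coeffs, *_ = np.linalg.lstsq(basis, intensities, rcond=None)
    return coeffs
```

Evaluating `sh_basis(normals) @ coeffs` then reconstructs the estimated diffuse shading for each sample.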
Image = Diffuse * Albedo + Specular.
Rearranging the equation by dividing through by the diffuse component results in an image that contains only the albedo and specular terms. Spherical harmonics are used to approximate the diffuse component and have been shown to be up to 98% accurate for this task (Ramamoorthi, 2006).
Image / Diffuse = Albedo + Specular.
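The division step itself is simple; the only practical wrinkle is guarding against near-zero shading in deep shadow. A minimal sketch (the `eps` guard value is my own assumption):

```python
import numpy as np

def remove_diffuse(image, diffuse, eps=1e-4):
    """Divide out the SH-estimated diffuse shading, leaving albedo + specular.

    image, diffuse: (H, W, 3) linear-light arrays. eps clamps the
    denominator so texels in deep shadow do not blow up.
    """
    return image / np.maximum(diffuse, eps)
```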
The result of diffuse removal is shown above. The bright highlights at grazing angles can be removed by manually accounting for the Fresnel effect.
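One way to account for the Fresnel effect manually is to down-weight samples at grazing angles using Schlick's approximation. This is a hedged sketch of that idea, not the method used here; `f0 = 0.04` is a common dielectric base reflectance, and `grazing_weight` is a helper name of my own invention.

```python
import numpy as np

def schlick_fresnel(cos_theta, f0=0.04):
    """Schlick's approximation of the Fresnel term.

    cos_theta: dot(normal, view) clamped to [0, 1].
    """
    c = np.clip(cos_theta, 0.0, 1.0)
    return f0 + (1.0 - f0) * (1.0 - c) ** 5

def grazing_weight(normals, view_dir):
    """Confidence weight that falls off at grazing angles, where the
    Fresnel response (and texture stretch) dominates.

    normals: (N, 3) unit normals; view_dir: (3,) unit view direction.
    """
    cos_theta = np.clip(normals @ view_dir, 0.0, 1.0)
    return cos_theta * (1.0 - schlick_fresnel(cos_theta))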
However, a more powerful technique is to fold highlight removal into a single step with a process called corrective fields (Ichim et al., 2015).
Corrective fields remove many of the artifacts introduced by grazing angles and, as a bonus, help to reduce the specular component that is still present in the image.
There are still two issues with the calculated result.
- The image still contains specular information.
- The image is limited to a single angle so the texture is stretched on the sides of faces, or missing if part of the face is occluded.
|An example of multiple viewpoints merged into a single texture.|
The above image combines many angles to obtain the final result. Diffuse lighting has been removed from the image, however specular highlights are still clearly visible.
An interesting fact about specular highlights is that they are view dependent, whereas albedo is not. Using the inverse rendering via spherical harmonics technique described above, a texture containing albedo + specular can be obtained for each view. Because albedo is unaffected by viewing angle while specular is not, any variation in luminosity between views can be attributed to a change in specular intensity.
In theory, by choosing the minimum value of a point on the surface across multiple viewing angles, we can find the angle with the lowest specular response and use that value as the most accurate estimate of the surface's albedo.
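The minimum-selection idea can be sketched per texel over a stack of per-view textures. This is a minimal illustration under my own assumptions: textures are resampled into a shared UV space, with NaN marking texels a view does not cover, and Rec. 709 luma stands in for luminance.

```python
import numpy as np

def min_luminance_select(textures):
    """Pick, per texel, the view whose sample has the lowest luminance.

    textures: (V, H, W, 3) stack of per-view textures in a shared UV
    space, with NaN where a view does not cover a texel.
    """
    # Rec. 709 luma as a luminance proxy; uncovered texels get +inf
    # so they are never selected.
    luma = np.nansum(textures * np.array([0.2126, 0.7152, 0.0722]), axis=-1)
    luma = np.where(np.isnan(textures).any(axis=-1), np.inf, luma)
    best = np.argmin(luma, axis=0)           # (H, W) index of darkest view
    h, w = np.indices(best.shape)
    return textures[best, h, w]              # (H, W, 3)
```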
Choosing the minimum is technically correct, but in practice, limitations of the capture process and errors in the spherical harmonic estimation mean that naively choosing the darkest pixel often results in very visible seams in the image.
|Seams appear when naively sampling based on minimum pixel intensity.|
|Pre-merge extraction (top) vs post-merge extraction (bottom)|
|Weighted least squares linear solve|
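The weighted least-squares merge named in the caption above can be sketched in its simplest form. Per texel, minimising the weighted squared error against all views has a closed-form solution: the weighted average. This is my own illustrative simplification; adding a spatial smoothness penalty, which is what actually suppresses seams, would turn it into a sparse linear system instead.

```python
import numpy as np

def weighted_merge(textures, weights, eps=1e-8):
    """Closed-form minimiser of sum_v w_v * (t - I_v)^2 per texel.

    textures: (V, H, W, 3) per-view textures in a shared UV space;
    weights: (V, H, W) per-view confidence (e.g. based on view angle).
    Without a smoothness prior the weighted least-squares solve
    reduces to this weighted average.
    """
    w = weights[..., None]                   # (V, H, W, 1)
    return (w * textures).sum(axis=0) / (w.sum(axis=0) + eps)
```

With view-angle-based weights, texels seen head-on dominate, which softens the transitions that hard minimum selection leaves behind.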