Unrealistic Avatar
A full run of the pipeline using the VIBE tooling.
Introduction
These tests use the core tooling of the project's target audience, BUAS. Their VIBE tooling was used to create avatars whose expressions will be analysed in the pipeline. The most important part of the test is the generation of metrics using OpenFace: can OpenFace analyse these avatars and give usable results for further analysis?
The test is split into three smaller tests: one using a normal expression, one using a slightly distorted or more unrealistic expression, and one using an expression that is impossible to create in real life. The OpenFace results (or lack thereof) will be discussed under each section. How the analysis itself is performed will not be elaborated on. For more information, see Full Run of Pipeline.
As the results of the analysis module are less important here (there is not yet a calibrated reference set of parameters), the default "full-run-test" parameter set will be used again.
Test 1: Normal Expression
This test uses the most normal expression - one any person could make. The expected result is for the analysis to perform in a similar manner to the previous tests.


OpenFace Analysis
The image was uploaded and a job created for it:

When checking the associated container, the OpenFace service was clearly able to analyse the image.

As a final check, the data parsed from the .csv file and placed in the database will be verified. Though the accuracy of the contents of the .csv file cannot be verified, the fact that it is there makes it clear that there was no irregularity in the structure of the .csv file (meaning both the OpenFace analysis and the analysis module passed normally).

The final analysis results are also similar to the previous tests.

Conclusion
The analysis performed similarly to the previous full run. This was expected, as this image is the most human-like expression.
Test 2: Slightly Distorted Expression
This image displays a slightly odder expression, but one that could conceivably be made in real life. The expectation is for this test to perform similarly to the others.


Analysis
The image was uploaded and a job created for it:

When checking the associated container, the OpenFace service was again able to analyse the image.

Finally, the data parsed from the .csv file and placed in the database will again be verified.

And the final results:

Conclusion
Again, the test has performed similarly to other images. There were no peculiarities to be seen.
Test 3: Incredibly Distorted Expression
This image is clearly something not possible in real life. The question here is whether OpenFace will recognize this image as a face at all - and if it does, how useful the results will be.


Analysis
The image was uploaded and a job created for it:

When checking the associated container, the OpenFace service was again able to analyse the image.

Moving on to the parsed .csv data in the database...

While the structure of the data remains the same, some AU values are clearly higher than any seen so far. This suggests another possible sanity check: a maximum sum of AU values, or something similar. More research is needed before a clear picture emerges, however.
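The maximum-sum idea could be sketched like this. Everything here is an assumption for illustration: the `au_sum_check` helper is hypothetical and the `max_total` ceiling of 8.0 is a placeholder, not a calibrated value (OpenFace AU intensities range roughly 0-5 per unit, but a sensible ceiling would have to come from the reference material mentioned in the conclusion).

```python
def au_sum_check(au_intensities, max_total=8.0):
    """Flag a frame whose summed AU intensities exceed a (hypothetical) ceiling.

    au_intensities: mapping of AU column name (e.g. "AU01_r") to intensity.
    max_total: placeholder threshold; a real value would need calibration.
    Returns (total, passed).
    """
    total = sum(au_intensities.values())
    return total, total <= max_total
```

A check like this would not make the extreme face unanalysable - as the test shows, OpenFace handles it - but it could flag results whose parameters fall outside a realistic range.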
Finally, the final analysis results:

Conclusion
Though it produced more extreme values, the distorted face was just as analysable by OpenFace as the more neutral images.
Overall Conclusion
From the (limited) testing so far, it seems the pipeline is able to analyse the avatars, even when rather distorted. The key to good, usable results will be creating tailored parameter sets. These can be based on reference material, or generic parameter sets associated with certain emotions.