Full Run of the Pipeline
Step 1: Creating a parameter set.
This starts by visiting the Avatar Webapp page:
On the main page, click on "Upload parameter set".
Once there, the desired parameters are filled in. For this test, each deviation is set to a random number between 1 and 10, and each reference value is either 0 or 1 (some action units will not be active, hence the 0), with a few other random values mixed in. The set will be named "full-run-test". Set names cannot contain spaces, as a name with spaces cannot be turned into a valid URL.
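For reference, such a parameter set can be sketched programmatically. The snippet below is purely illustrative: the action-unit list, field names and structure are assumptions rather than the webapp's actual schema; only the no-spaces rule reflects the constraint described above.

```python
import random

# Hypothetical sketch: the AU list, field names and structure below are
# assumptions for illustration, not the webapp's actual schema.
ACTION_UNITS = ["AU01", "AU02", "AU04", "AU06", "AU12", "AU25"]

def build_parameter_set(name: str) -> dict:
    # Set names may not contain spaces, since they are later embedded in a URL.
    if " " in name:
        raise ValueError("Parameter set names cannot contain spaces")

    return {
        "name": name,
        "parameters": {
            au: {
                # Reference value: 0 for inactive AUs, otherwise 1 (or another value).
                "reference": random.choice([0, 1]),
                # Allowed deviation: a random number between 1 and 10 for this test.
                "deviation": random.randint(1, 10),
            }
            for au in ACTION_UNITS
        },
    }

parameter_set = build_parameter_set("full-run-test")
```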
Adding the parameter set; a name containing spaces would cause the submission to fail.
After clicking on "Submit", the set is stored and the next step starts.
Step 2: Uploading an image.
This test will use an expressive, well-taken photo, which will hopefully give a best-case-scenario result.
The image is uploaded using the "Choose file" button. The default (and only supported) type at the time of writing is Wild; this determines which analysis OpenFace performs. After selecting the parameter set, clicking on "Submit" starts the analysis process.
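Programmatically, this upload step would amount to a multipart form post along these lines. This is only a sketch: the endpoint URL, field names and response format are assumptions, not the webapp's actual API.

```python
import requests

# Hypothetical sketch of the upload step as an HTTP request. The endpoint URL
# and form-field names are assumptions; the real webapp form may differ.
WEBAPP_URL = "https://avatar-webapp.example.com"

with open("expressive-photo.jpg", "rb") as image:
    response = requests.post(
        f"{WEBAPP_URL}/upload",
        files={"file": image},
        data={
            "type": "Wild",                   # analysis type OpenFace should perform
            "parameter_set": "full-run-test", # the set created in step 1
        },
        timeout=30,
    )

response.raise_for_status()
print("Job ID:", response.json().get("job_id"))  # assumed response field
```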
The data filled in before submitting.
Once the data has been submitted, a success page is shown displaying the ID of the job. This ID can be used later to determine which container the image has been placed in and which logs are associated with it. The image is then uploaded to the BLOB storage.
The job ID is shown after submitting.
Triggering the OpenFace Service
As mentioned previously, the OpenFace service is triggered by a function app that runs whenever something is uploaded to the BLOB storage. First, it is important to verify that the image was actually uploaded. This can be done by checking whether there is a container with the aforementioned job ID:
An overview of some containers, including the one created just now.
The image has been uploaded correctly. The analysis type and parameter set are added to the name of the file for later use in the analysis service. The logs of the cloud function show that a call to the OpenFace service was triggered once the image was uploaded: the function builds a URL based on the uploaded blob and performs a simple REST call to it.
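This manual check can also be scripted. The sketch below assumes the azure-storage-blob package (v12); the connection string and job ID are placeholders.

```python
from azure.storage.blob import BlobServiceClient

# Sketch of the manual verification step, assuming azure-storage-blob (v12);
# the connection string and job ID are placeholders.
service = BlobServiceClient.from_connection_string("<storage-connection-string>")
job_id = "<job-id-from-the-success-page>"

container = service.get_container_client(job_id)
if container.exists():
    # The blob name encodes the analysis type and parameter set for later use.
    for blob in container.list_blobs():
        print(blob.name)
else:
    print(f"No container found for job {job_id}")
```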
Logs from the cloud function indicating it was triggered by the image being uploaded.
OpenFace Service
This service has now been triggered and analyses the uploaded image (covered in more detail elsewhere). The results can already be seen: the .csv file has been placed in the same container that houses the image.
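For reference, the action-unit values in the OpenFace output can be read back with pandas. This is a sketch based on OpenFace's usual CSV layout (intensity columns ending in _r, presence columns ending in _c); the file name is a placeholder.

```python
import pandas as pd

# Sketch: reading the OpenFace output for a single image. OpenFace CSVs
# typically contain action-unit intensity columns named "AUxx_r" and presence
# columns "AUxx_c"; column names may carry leading spaces, hence
# skipinitialspace=True.
df = pd.read_csv("openface-output.csv", skipinitialspace=True)

au_intensity = {col: df[col].iloc[0] for col in df.columns if col.endswith("_r")}
au_presence = {col: df[col].iloc[0] for col in df.columns if col.endswith("_c")}
print(au_intensity)
```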
A screenshot of the container showing the .csv file has been uploaded.
This can also be verified via a simple query against the logs uploaded to Azure.
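Such a query could, for example, be run with the azure-monitor-query package. The workspace ID, table name and message format below are assumptions; only the general approach is shown.

```python
from datetime import timedelta
from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

# Sketch of the log query, assuming the azure-monitor-query package.
# The workspace ID, table name and message format are assumptions.
client = LogsQueryClient(DefaultAzureCredential())

query = """
AppTraces
| where Message has "<job-id-from-the-success-page>"
| order by TimeGenerated desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",
    query=query,
    timespan=timedelta(hours=1),
)

# Print the returned rows (assumes the query completed successfully).
for table in response.tables:
    for row in table.rows:
        print(row)
```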
Logs containing the aforementioned image.
Triggering the Analysis Service
Similarly to how the OpenFace service was triggered, the cloud function makes another call: it differentiates based on the type of file that was uploaded and performs the corresponding REST call, as sketched below.
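A minimal sketch of that dispatch logic, assuming hypothetical service URLs (the real function app may be written in a different language and use different endpoints):

```python
import requests

# Sketch of the dispatch logic inside the cloud function (assumed service URLs;
# the actual implementation may differ).
OPENFACE_URL = "https://openface-service.example.com/analyse"
ANALYSIS_URL = "https://analysis-service.example.com/analyse"

def on_blob_uploaded(container: str, blob_name: str) -> None:
    if blob_name.endswith(".csv"):
        # OpenFace output: trigger the analysis service.
        target = ANALYSIS_URL
    else:
        # Uploaded image: trigger the OpenFace service.
        target = OPENFACE_URL

    # Build a simple REST call from the blob's location.
    requests.post(
        target,
        json={"container": container, "blob": blob_name},
        timeout=30,
    )
```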
Screenshot displaying logs corresponding to the .csv file being uploaded.
Analysis Service
The analysis service performs the core analysis and stores the results in the database. This can be confirmed both by viewing the end result and by checking the logs:
Logs from the analysis service related to the created job.
Viewing Results
To further highlight the functionality of the Analysis service, the final results table can be seen in the front-end. First, visit the reports page:
A screenshot of the reports page.
By selecting and submitting the desired report, the calculated results are shown in a table. Selecting the results for "87459401b" displays the following:
Here, each AU is shown with its respective result (whether or not it was within the tolerable limits). More details on the actual results comparison can be found in the other validation articles.
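As an illustration only, one plausible reading of such a comparison is that a measured value counts as acceptable when it lies within the configured deviation of its reference value; the actual rule is described in those articles and may differ.

```python
# Hypothetical sketch of the comparison; the actual rule is described in the
# other validation articles and may differ.
def within_tolerance(measured: float, reference: float, deviation: float) -> bool:
    return abs(measured - reference) <= deviation

# Example: AU12 with reference 1, deviation 2 and a measured intensity of 2.4.
print(within_tolerance(2.4, 1, 2))  # True
```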
This article has shown that the pipeline can be used to upload media, analyse it and view the results. Aside from that, the status (logs) of the pipeline can also be traced and queried.