Ecological Validation
This chapter describes whether the project fulfils the demands that were set out and examines what the result means in its intended context.
Comparing the Ideal and Current Pipelines
Current Pipeline
At the end of my internship I have delivered a functioning proof-of-concept of the Avatar Validation Pipeline. Users can upload images (creating a job), have them analysed, and have the resulting metrics compared against user-defined parameter sets. This comparison is then summarised in a basic report.
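The comparison step described above can be sketched as follows. This is a minimal illustration, not the actual pipeline code: the names `ParameterRange` and `compare_metrics` are hypothetical, and the example metric name merely resembles an OpenFace action-unit intensity.

```python
from dataclasses import dataclass


@dataclass
class ParameterRange:
    """One entry of a user-defined parameter set (hypothetical name)."""
    name: str
    minimum: float
    maximum: float


def compare_metrics(metrics: dict[str, float],
                    ranges: list[ParameterRange]) -> list[str]:
    """Compare analysed metrics against a parameter set, yielding report lines."""
    report = []
    for r in ranges:
        value = metrics.get(r.name)
        if value is None:
            report.append(f"{r.name}: MISSING")
        elif r.minimum <= value <= r.maximum:
            report.append(f"{r.name}: OK ({value})")
        else:
            report.append(
                f"{r.name}: OUT OF RANGE ({value}, "
                f"expected {r.minimum}-{r.maximum})")
    return report
```

The real report generation is richer than this, but the core idea is the same: each metric is checked against its user-supplied bounds and the outcome is written out per metric.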
The status of these jobs can be monitored using the logs published to Azure. These logs can be queried based on the unique ID given to each job.
Ideal Pipeline
As the final discussion with BUAS has not yet taken place, the exact shape of the ideal pipeline is not yet clear. Some desired characteristics have been mentioned previously, however:
The ability to acquire more metrics than those provided by OpenFace.
The ability to interpret avatar generation logs instead of using images for analysis.
Live interpretation and analysis.
Elaborating on the Differences
The goal of the project was to set up a pipeline to aid in validating the realism of avatars generated by the BUAS tooling. This entailed limiting the breadth and depth of the functionality in certain directions.
More Metrics than OpenFace
During the first meeting with BUAS, one of the highlighted points was that OpenFace data is incomplete when looking at the totality of metrics used during the avatar creation process. Because of this, OpenFace alone can only give an indication; more in-depth analysis would require another analysis suite. The work required to create such a suite would, however, be far outside the scope of this project.
Media vs Replay Data
While the end result is similar, using replay data (avatar logs to reproduce expressions) as opposed to media (images or video) is far more practical in the long run. Media requires extensive data storage and is more cumbersome to work with. The difficulty here would be to create an interpreter for the replay data and slot it into the pipeline. This would also require extensive collaboration with BUAS. Because of this, working with media was by far the more straightforward option for this project.
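To make the interpreter idea concrete, the sketch below parses one hypothetical replay-log record into a timestamped expression frame. The format shown (JSON lines with a `ts` field and blendshape weights under `shapes`) is entirely assumed; the real format would be defined by the BUAS tooling, which is exactly why the collaboration mentioned above would be needed.

```python
import json


def parse_replay_line(line: str) -> dict:
    """Parse one replay-log record into a timestamped expression frame.

    The record layout used here is hypothetical: a JSON object with a
    timestamp under "ts" and blendshape weights under "shapes".
    """
    record = json.loads(line)
    return {
        "timestamp": float(record["ts"]),
        "blendshapes": {k: float(v)
                        for k, v in record.get("shapes", {}).items()},
    }
```

An interpreter of this kind would let the existing analysis and comparison stages run on reproduced expressions instead of stored media, avoiding the storage burden of images and video.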
Live Interpretation and Analysis
The final goal would be to be able to stream video and analyse it in real-time. The results would then be displayed in a dashboard of sorts.
Based on my current knowledge, this would be the most difficult addition. The pipeline architecture is focused on asynchronous jobs. Anything is possible with enough work, but this is not something that lends itself well to live updates. Recording in real-time and analysing the data at specific intervals would be relatively easy to implement, however.
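The interval-based compromise mentioned above could look roughly like this: incoming frames are buffered and handed to the existing analysis step in fixed-size batches, approximating "near-live" results without abandoning the asynchronous-job design. The function name and signature are illustrative only.

```python
def analyse_in_batches(frames, batch_size, analyse):
    """Buffer incoming frames and run `analyse` on every full batch.

    Leftover frames at the end of the stream are analysed as a final,
    smaller batch. Returns the list of per-batch results.
    """
    buffer, results = [], []
    for frame in frames:
        buffer.append(frame)
        if len(buffer) == batch_size:
            results.append(analyse(buffer))
            buffer = []
    if buffer:  # flush the partial batch at end of stream
        results.append(analyse(buffer))
    return results
```

Each batch would map onto one asynchronous job in the current architecture, so only the recording side is new; true frame-by-frame streaming would require a different design.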
Moving Towards a Complete Pipeline
Aside from the factors elaborated on above, there are various other features which would be desirable for the application. More on that here.