Analysis Module
Comparing OpenFace data to parameters and storing the results.
Overview
The analysis module takes the OpenFace data of a specific item of media and compares it to the chosen set of parameters. The results of this comparison are then forwarded to the reporting module.
Functionality
Each parameter set contains a reference value for each AU, along with the degree to which the uploaded media is allowed to deviate from that value, expressed as a percentage.

The values for every AU have been determined by the OpenFace module prior to this.
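As an illustration of the percentage-based deviation rule: a reference value of 1.20 with a maximum deviation of 10% allows measured values between 1.08 and 1.32. A minimal sketch of that check (the helper name and the numbers are illustrative, not part of the module):

def within_allowed_deviation(measured, reference, max_deviation_pct):
    """Return True if `measured` lies within ±max_deviation_pct% of `reference`."""
    lower = reference * (100 - max_deviation_pct) / 100
    upper = reference * (100 + max_deviation_pct) / 100
    return lower <= measured <= upper

# Example: reference 1.20 with 10% deviation allowed -> band [1.08, 1.32]
print(within_allowed_deviation(1.15, 1.20, 10))  # True
print(within_allowed_deviation(1.40, 1.20, 10))  # False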
The Analysis
The first step in the analysis is to retrieve the AU data calculated previously by the OpenFace service. This is done by providing the service with the correct container ID and downloading the .csv file inside. The name of the .csv file has the parameter set and analysis type embedded in it, so no further data is required.
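This retrieval step might look roughly like the sketch below, using the azure-storage-blob client. The connection string, the helper name download_openface_csv, and the assumption that the container holds a single .csv are illustrative rather than taken from the module:

from azure.storage.blob import ContainerClient

def download_openface_csv(connection_string, container_id):
    """Download the first .csv blob found in the given container (illustrative helper)."""
    container = ContainerClient.from_connection_string(connection_string, container_id)
    for blob in container.list_blobs():
        if blob.name.endswith(".csv"):
            data = container.download_blob(blob.name).readall()
            return blob.name, data.decode("utf-8")
    raise FileNotFoundError(f"No .csv blob found in container {container_id}")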
Once this file has been downloaded, the data is parsed and the AU values are stored in the database as an Action Unit Report. This allows the data to be tied to the final Analysis Report later on.
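The parsing step might look roughly like the following sketch. The import path, the ActionUnitReport field names, and the OpenFace column handling (intensity columns such as AU01_r, one aggregated row per media item) are assumptions for illustration:

import csv
import io

# Assumed import path and model names, inferred from the comparison code below
from analysis.models import ActionUnitReport, MetricEntry

def store_action_unit_report(csv_name, csv_text):
    """Parse the OpenFace CSV and store each AU intensity as a MetricEntry (illustrative sketch)."""
    reader = csv.DictReader(io.StringIO(csv_text))
    row = next(reader)  # assuming, for illustration, one row of aggregated values per media item
    report = ActionUnitReport(name=csv_name)  # field names are assumptions
    report.save()
    for column, value in row.items():
        column = column.strip()
        if column.startswith("AU") and column.endswith("_r"):  # OpenFace intensity columns, e.g. AU01_r
            MetricEntry(action_unit_report=report,
                        action_unit=column,
                        action_unit_value=float(value)).save()
    return report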
Now that all the data has been stored, the core analysis can be performed by comparing the stored AU values with the chosen parameter set.
def compare_set_with_action_units(self, report, set):
    def sort_tuple_vals(tuple):
        tuple.sort(key=lambda x: x[0])
        return tuple

    # Set up list of tuples with reference value, max deviation and the AU label
    try:
        parameters = Parameter.objects.filter(parameter_set=set)
        ref_values_with_max_dev = []
        for item in parameters:
            ref_values_with_max_dev.append((item.action_unit, item.reference_value, item.maximum_deviation))
    except Exception as e:
        self.logger.exception(e, extra=self.properties)
        self.logger.critical("\nCould not set up parameter set for analysis.", extra=self.properties)

    try:
        outcomes = []
        for ref in ref_values_with_max_dev:
            metric = MetricEntry.objects.get(action_unit_report=report, action_unit__icontains=ref[0])
            # Check if equal to reference value or between max and min values
            if metric.action_unit_value == ref[1] or (ref[1] * ((100 - ref[2]) / 100)) <= metric.action_unit_value <= (ref[1] * ((100 + ref[2]) / 100)):
                outcomes.append((metric.action_unit, True))
            else:
                outcomes.append((metric.action_unit, False))
        self.store_analysis_report(outcomes, report, set)
    except Exception as e:
        self.logger.exception(e, extra=self.properties)
        self.logger.critical("\nError occurred when comparing AU metrics to reference values.", extra=self.properties)
For each AU, a calculation is made comparing the OpenFace value against the parameter reference value and the allowed deviation. The results are compiled into a list of tuples, each containing the action unit and a boolean indicating whether its value lies within the allowed limits. Once the list is complete, it is forwarded to the function that stores the data.
def store_analysis_report(self, outcomes, report, set):
    analysis_report = AnalysisReport(action_unit_report=report, parameter_set=set, job_id=self.properties['custom_dimensions']['UUID'], name=self.csv_name)
    analysis_report.save()
    for item in outcomes:
        result = AnalysisResult(analysis_report=analysis_report, action_unit=item[0], result=item[1])
        result.save()
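For reference, the models used above could be shaped roughly as follows. The field types, options, and related model names are assumptions inferred from the code, not the module's actual definitions:

from django.db import models

class AnalysisReport(models.Model):
    # Ties the analysis back to the stored AU data and the parameter set used
    action_unit_report = models.ForeignKey('ActionUnitReport', on_delete=models.CASCADE)
    parameter_set = models.ForeignKey('ParameterSet', on_delete=models.CASCADE)
    job_id = models.CharField(max_length=64)
    name = models.CharField(max_length=255)

class AnalysisResult(models.Model):
    # One row per action unit: did it fall within the allowed deviation?
    analysis_report = models.ForeignKey(AnalysisReport, on_delete=models.CASCADE)
    action_unit = models.CharField(max_length=16)
    result = models.BooleanField()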
Technical Specifications
Like the front-end, the analysis module is a Django application. It contains no templates, as it has no user-facing functionality; instead it is accessed through REST calls from the Azure Functions.
Parameter sets are retrieved from the database, while OpenFace data is retrieved from Blob Storage.
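A minimal sketch of what such a REST entry point could look like; the URL pattern, view name, payload, and the Analyser orchestration class are assumptions for illustration rather than the module's actual API:

# urls.py (illustrative)
from django.urls import path
from . import views

urlpatterns = [
    path('analysis/<str:job_id>/', views.run_analysis, name='run_analysis'),
]

# views.py (illustrative)
from django.http import JsonResponse
from django.views.decorators.http import require_POST

@require_POST
def run_analysis(request, job_id):
    # An Azure Function would POST here once the OpenFace container has finished,
    # triggering CSV retrieval, comparison and report storage for the given job.
    # analyser = Analyser(job_id)   # hypothetical orchestration class
    # analyser.run()
    return JsonResponse({'job_id': job_id, 'status': 'analysis started'})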