Week 7 (11/10 - 15/10)

Development and feedback

Working on the MVP

Week 7 was entirely focused on development. My goal was to convert the initial C++ scripts into a Django application, and this succeeded!

The application starts with the main screen, where the user can provide an image or video URL and choose its type.

The main page which allows the user to provide an image or video URL and choose its type.
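
For context, the form behind this screen is conceptually simple: a URL field plus a choice field for the type. The sketch below is an assumption of what it could look like; only the 'IW' and 'IP' type codes are taken from the analysis code further down, and the video types are omitted since they do not appear in that code.

from django import forms

# A minimal sketch of the submission form; the form and field names are
# assumptions for illustration. Only the 'IW' and 'IP' codes appear in the
# analysis code shown later in this post.
TYPE_CHOICES = [
    ('IW', 'Image (Wild)'),
    ('IP', 'Image (Posed)'),
]

class AnalysisRequestForm(forms.Form):
    url = forms.URLField(label='Image or video URL')
    type = forms.ChoiceField(label='Type', choices=TYPE_CHOICES)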

Once the URL has been submitted, the image or video is processed and analysed using OpenFace. The output of this analysis is written to a .csv file, which is then read and stored in a database.

def analyse_image(self):
    # Generate a random 10-character name to identify this analysis.
    name = self.generate_name(10)

    # Download the image from the submitted URL into the test_images folder.
    os.system("cd ~/test_images && curl %s > %s.png" % (self.url, name))

    # Run OpenFace's FaceLandmarkImg on the image; the -wild flag tells
    # OpenFace to expect an unconstrained "in the wild" image.
    if self.type == 'IW':
        print('Analyse (Wild)')
        os.system("cd ~/OpenFaceHeadless/OpenFace/build/ && ./bin/FaceLandmarkImg -f ~/test_images/%s.png -wild" % (name))
    elif self.type == 'IP':
        print('Analyse (Posed)')
        os.system("cd ~/OpenFaceHeadless/OpenFace/build/ && ./bin/FaceLandmarkImg -f ~/test_images/%s.png" % (name))

    # Create a report entry for this analysis and persist it.
    report = ActionUnitReport(name=name, url=self.url, date=self.date, type=self.type)
    report.save()

    # Read the .csv file OpenFace produced and store its contents in the database.
    return self.get_image_csv(name, report)
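
The get_image_csv method called at the end is not shown here, but roughly speaking it reads the .csv file OpenFace writes and stores the Action Unit columns in the database. The sketch below is an illustration of that step, not the actual implementation: the output path and the ActionUnitResult model are assumptions, while the AU01_r / AU01_c column names follow OpenFace's output format.

import csv
import os

def get_image_csv(self, name, report):
    # Sketch only: OpenFace writes its output to a "processed" directory by
    # default, so the .csv should end up under the build directory. The exact
    # path and the ActionUnitResult model are assumptions.
    csv_path = os.path.expanduser(
        "~/OpenFaceHeadless/OpenFace/build/processed/%s.csv" % name)

    with open(csv_path) as csv_file:
        reader = csv.DictReader(csv_file)
        for row in reader:
            # Column headers can contain leading spaces, so strip them first.
            row = {key.strip(): value for key, value in row.items()}
            for column, value in row.items():
                # Keep only the Action Unit columns (AU01_r, AU01_c, ...).
                if column.startswith('AU'):
                    ActionUnitResult.objects.create(
                        report=report, action_unit=column, value=float(value))

    return report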

Once the data has been stored in the database, the results are shown to the user. These results are then ready to be used in any subsequent analysis!

A sample of the results.

Results are stored and tied to Action Unit Reports. These reports have a name, date and URL. Administrators can view and edit these in the admin panel.

An overview of reports in the admin panel.
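
For completeness, the model and its admin registration look roughly like the sketch below. The field names (name, url, date, type) match the constructor call in the analysis code above; the field types, the type choices and the admin columns are assumptions.

from django.contrib import admin
from django.db import models

class ActionUnitReport(models.Model):
    # Field names match the constructor call shown earlier; the field types
    # and choices are assumptions for illustration.
    TYPE_CHOICES = [
        ('IW', 'Image (Wild)'),
        ('IP', 'Image (Posed)'),
    ]

    name = models.CharField(max_length=10)
    url = models.URLField()
    date = models.DateTimeField()
    type = models.CharField(max_length=2, choices=TYPE_CHOICES)

    def __str__(self):
        return self.name

@admin.register(ActionUnitReport)
class ActionUnitReportAdmin(admin.ModelAdmin):
    # Columns shown in the admin overview of reports.
    list_display = ('name', 'url', 'date', 'type')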

Feedback

The other main event this week was a more technical meeting with one of the project's developers. This meeting changed and clarified the focus of the project somewhat: I had previously assumed that the analysis determining whether a face is realistic was the most important part, but the emphasis should instead be placed on the pipeline itself and the traceability of jobs.

Concretely, this means it should be possible to run the above analysis (with only a rudimentary check for realism) ten or more times in parallel, while being able to track the status of each of these jobs. GitHub Actions was mentioned as a reference point, and I will use it as inspiration; a rough sketch of what such traceability could look like follows the screenshot below.

A screenshot of the job overview GitHub Actions provides.
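
To make the idea of job traceability concrete for myself, one possible shape is a job record with a status that can be polled, much like the per-job status GitHub Actions shows. The sketch below is purely an illustration under that assumption, not a design decision; the actual architecture is part of next week's research.

from django.db import models

class AnalysisJob(models.Model):
    # Illustration only: each analysis run becomes a job record whose status
    # can be queried while ten or more of them run in parallel. The model and
    # status values are assumptions, not decisions.
    STATUS_CHOICES = [
        ('queued', 'Queued'),
        ('running', 'Running'),
        ('succeeded', 'Succeeded'),
        ('failed', 'Failed'),
    ]

    url = models.URLField()
    status = models.CharField(max_length=10, choices=STATUS_CHOICES, default='queued')
    created_at = models.DateTimeField(auto_now_add=True)
    finished_at = models.DateTimeField(null=True, blank=True)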

More documentation supporting my architectural and technological choices was also requested.

Spoilers for next week

  • Re-evaluate architecture based on the renewed focus

  • Better support and document my prior choices

  • Research how to achieve the goals stated in the feedback I received, including what kind of cloud infrastructure would help me achieve them
