Interfacing with OpenFace in the Cloud

An overview of options when it comes to using OpenFace in the cloud

Interfacing with OpenFace in a cloud-based context poses a particular challenge, as OpenFace is an entirely separate application that is normally driven (locally) from the terminal. Running terminal commands on a remote Docker container from another Docker container is, unfortunately, not an option. Some research was therefore needed to tackle this problem.

One large container

One option is to avoid the problem altogether by integrating the two applications into a single container. This is done simply by adding OpenFace to the Dockerfile of the Django application. This has various benefits: the web app is not dependent on the state of a remote OpenFace instance, it is easy to interface with (little to no code would need to be changed), and it is architecturally simple and easy to maintain. It is not without drawbacks, however, as it does create extra overhead. Each instance of the web app would need its own instance of OpenFace, which may prove unnecessary. Conversely, it would also not be possible to run multiple separate OpenFace containers to serve a single web-app instance. In short, this approach loses flexibility.
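With both applications in one container, interfacing reduces to a local subprocess call. A minimal sketch of what that could look like, assuming OpenFace's FeatureExtraction CLI and a hypothetical install path (the actual path depends on where the Dockerfile places the build):

```python
import subprocess

# Assumed install location of the OpenFace binaries inside the combined image;
# adjust to wherever the Dockerfile actually places the build.
OPENFACE_BIN = "/opt/OpenFace/build/bin/FeatureExtraction"

def build_openface_command(video_path, out_dir):
    """Assemble a FeatureExtraction invocation for a single video."""
    return [OPENFACE_BIN, "-f", str(video_path), "-out_dir", str(out_dir)]

def run_analysis(video_path, out_dir):
    """Run OpenFace as a local subprocess (only works inside the combined container)."""
    cmd = build_openface_command(video_path, out_dir)
    return subprocess.run(cmd, check=True, capture_output=True, text=True)
```

Because the binary lives in the same filesystem as Django, no networking code is needed at all, which is the main reason so little of the existing code would have to change.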

Multiple separate containers

OpenFace has its own Docker image. This means it is relatively straightforward to deploy an isolated instance of OpenFace. How Django communicates with this container can vary, however.

SSH into one container from the other

This is mentioned mostly as an aside, since it is a relatively hacky (and likely very unsafe) way of getting two containers to communicate. In this case, the Django container would essentially SSH into the OpenFace container and run commands that way. In my opinion this goes against the principles behind containers.

Run on the same VM

Running both containers on the same VM is one of the more straightforward implementations. In this case, a bridge network could be used to allow the two to communicate. Another option is to run commands on the host from the Django container, after which the results can be written to a properly shared Docker volume. Like the SSH method, running commands in this way is somewhat "hacky", but it is still safer.
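The bridge-network variant can be sketched with a hypothetical Docker Compose file; service names, the shared volume, and the image tag are assumptions, not the current setup:

```yaml
# Hypothetical docker-compose.yml: Compose puts both services on a default
# bridge network, so Django can reach the other container by its service
# name ("openface"), while a shared volume carries videos and results.
services:
  web:
    build: ./django-app          # assumed location of the Django Dockerfile
    ports:
      - "8000:8000"
    volumes:
      - media:/app/media         # shared volume for input videos / results
  openface:
    image: algebr/openface:latest   # assumed tag of the OpenFace image
    volumes:
      - media:/data
volumes:
  media:
```

Note that the bridge network alone only provides connectivity; something inside the OpenFace container still has to listen for requests, which is what the API option below addresses.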

Have a separate API in a container with OpenFace and interface over HTTP

This is probably the most straightforward "safe" option. A small REST server would receive a request from the main Django app and run a command (similar to the current implementation) to perform an OpenFace analysis. The results would then be forwarded to the main application, after which they can be stored in the database. Alternatively, the current analysis modules could simply be split off and bundled with OpenFace.
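A minimal sketch of such a REST endpoint, using only the Python standard library; the `/analyze` route, the binary path, and the output directory are all assumptions, and for illustration the handler only echoes the OpenFace command it would run rather than executing it:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Assumed install location of OpenFace inside the analysis container.
OPENFACE_BIN = "/opt/OpenFace/build/bin/FeatureExtraction"

class AnalysisHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/analyze":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        video = payload.get("video", "")
        # The real service would run this command via subprocess and return the
        # parsed results; here we only echo the command for illustration.
        cmd = [OPENFACE_BIN, "-f", video, "-out_dir", "/data/processed"]
        body = json.dumps({"command": cmd}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo server quiet

def serve(port=0):
    """Start the analysis server on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), AnalysisHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

In practice a small framework (e.g. Flask or FastAPI) would likely replace the raw `http.server` plumbing, but the shape is the same: Django POSTs a job, the sidecar runs OpenFace, and the JSON response travels back for storage in the database.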

Azure WebJob

WebJobs are Azure services tied to App Service deployments. They host a small script or program that can either run concurrently with the main application or be started on demand. One possibility is to deploy OpenFace as a WebJob and interface with it using the WebJobs API. This has not yet been investigated fully, however, and may not be applicable in this context.
