Developing Jetson solutions on your laptop

TL;DR – how to use cross-platform tech in Docker to build arm64 solutions on an x86 platform

At Seechange, we want our partners and customers to be able to integrate easily with our platform.

Models can be developed elsewhere and deployed by SeeWare to deliver inference data in our secure, scalable environment.

One of the challenges for developers creating models for deployment to edge devices is cross-platform development.

For example, the initial development environment is typically an x86-based laptop, MacBook or virtual machine.

The target hardware could be an Arm-based Nvidia Jetson, a Google TPU, a virtual GPU instance in the cloud or some other combination.

Hardware at the edge is developing rapidly and Seechange is expanding our range of targets to match.

Target requirements can be quite complex:

If the solution is running accelerated on Nvidia hardware, which version of JetPack is being used?

Is the model based on PyTorch or TensorRT? Is it written in C++ or Python?

Is the Nvidia hardware at the edge, running on an Arm-based platform or in an x86-based server?

Seechange maintains a set of Docker images that are updated as AI toolsets evolve and our solution develops.

Each of our Docker images has a complete framework in which to build model assets, available for each type of target hardware.

So, back to the purpose of this post: how can a model be built on a laptop and deployed to an edge device running on a different CPU architecture?

The concept of cross-platform development in Linux has been available for many years through platform emulation, such as QEMU.

Docker has recently made this easy to implement in Docker Desktop with buildx.
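As a sketch of what this looks like in practice (the Dockerfile contents, image names and tags below are illustrative examples, not Seechange images), you can write an ordinary Dockerfile and ask buildx to build it for arm64 from an x86 host:

```shell
# Write a minimal Dockerfile (base image and tag are illustrative)
cat > Dockerfile <<'EOF'
FROM arm64v8/ubuntu:20.04
RUN uname -m > /arch.txt
EOF

# Build for arm64 from an x86 host; buildx runs the arm64 build steps
# under QEMU emulation. Guarded so the snippet is harmless on machines
# where Docker is not installed.
if command -v docker >/dev/null 2>&1; then
  docker buildx build --platform linux/arm64 -t example/jetson-env:latest .
fi
```

The `--platform` flag is the only change from a normal build; everything else in the Dockerfile stays the same.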

So, as an example, to run a Seechange build environment for an Nvidia Jetson AGX/NX from your laptop, you should understand the following:

First choose your development hardware and run through ONE of the following options:

  • For Windows 10 or a current macOS, install Docker Desktop.

  • If you want to work in the Windows Subsystem for Linux (WSL), please make sure you are running WSL2.

  • As an alternative to Docker Desktop, if you are a hardcore Linux developer, install Docker and docker-compose:

# install the qemu packages

$ sudo apt-get install qemu binfmt-support qemu-user-static

# Register the QEMU binfmt handlers to enable Docker multi-arch support

$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Test the emulation environment

$ docker run --rm -t arm64v8/ubuntu uname -m

# should respond with 'aarch64'

  • Read about support of multiple system architectures in Docker Desktop multi-arch support.

  • Once either Docker Desktop or Docker plus QEMU is in place, you can run any supported platform architecture simply by adding the --platform flag to your Docker command line:

$ docker run -it --platform linux/aarch64 <docker image> /bin/bash
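The same idea carries over to docker-compose, mentioned above: a `platform` key in the service definition pins the architecture for that service. A minimal sketch (the service name and image are illustrative, not a Seechange image):

```shell
# Generate a minimal docker-compose.yml that pins a service to arm64
# (service name and image are illustrative examples)
cat > docker-compose.yml <<'EOF'
services:
  build-env:
    image: arm64v8/ubuntu:20.04
    platform: linux/arm64
    command: uname -m    # should print 'aarch64' under emulation
EOF

# Start the service if docker-compose is available on this machine
if command -v docker-compose >/dev/null 2>&1; then
  docker-compose up
fi
```

With the platform pinned in the compose file, every developer on the team gets the same arm64 build environment regardless of their host CPU.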

Next steps?

Please get in touch and talk to us about running your model in SeeWare.