Running the SeeWare edge dev environment on an NVIDIA Jetson Nano

It is possible to use the Nvidia Jetson dev kit devices for the build environment, although there are restrictions due to limited disk storage and RAM, especially on the Jetson Nano.

In this document the native build environment is run on a Nano, with the addition of a low-cost USB3 SSD add-on.

Running on the Nvidia Jetson Nano

The advantage of the Jetson Nano is that it is the cheapest hardware on which to run CUDA-accelerated software at the edge.

It comes with a power supply and the Jetson Nano compute module on a carrier board that exposes 4 USB 3.0 ports, standard HDMI and DisplayPort, RJ45 1Gb/s ethernet and a micro-SD card socket.
The Jetson Nano dev kit with 4GB RAM costs ~£100.

Please note that the 2GB RAM version does not have sufficient RAM for effective development.

To start, go through the initialisation of your Nano, following the steps outlined here: https://developer.nvidia.com/embedded/learn/get-started-jetson-nano-devkit

The current build is JetPack 4.6 and is a ~6GB zip file. You will need a 32GB micro SD card for this, but the card is only needed for the setup.

The default configuration uses the SD card as the main system disk, and SD cards really do not have sufficient read/write performance for development activities.

Fortunately, current and future versions of JetPack transfer /boot from the SD card to NOR memory on the Nano (QSPI-NOR) on first install, so that the root filesystem can live on either an SD card or USB storage.

This move to placing /boot in NOR means that the SD card image can be transferred to a USB SSD, with a typical performance boost of 5-10x over the fastest SD card.
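If you want to quantify the difference yourself, a rough sequential write test can be run with dd; this is a minimal sketch, run from a writable directory on the device under test:

# Write 512MB directly to disk, bypassing the page cache, and report the throughput
dd if=/dev/zero of=./ddtest bs=1M count=512 oflag=direct
rm ./ddtest

Run it once from a directory on the SD card and once from one on the SSD to compare the MB/s figures dd reports.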

SSD hardware is designed for constant read/write, unlike SD cards, so it really makes sense to buy a USB3 SSD device; a 120GB version costs ~£30.

Note that a USB3 SSD is not the same as a USB3 memory stick; just search online for ‘USB3 SSD’.
Refer to https://jetsonhacks.com/2021/03/10/jetson-nano-boot-from-usb/ for full instructions.
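Once the root filesystem has been moved, it is worth confirming that it really is mounted from the USB SSD rather than the SD card; assuming the SSD enumerates as /dev/sda:

findmnt /    # the SOURCE column should show /dev/sda1, not /dev/mmcblk0p1
lsblk        # shows the whole block device tree, including the SD card and the SSD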

The second issue is lack of RAM for complex make/compile development. By default, the Jetson Nano image provides virtual swap space using compressed blocks of RAM (zram, configured by the nvzramconfig service), which consumes the very RAM we are short of.

With the USB SSD in place, we can dispense with zram in favour of a standard swap file; a script to set one up is available from JetsonHacks: https://jetsonhacks.com/2019/04/14/jetson-nano-use-more-memory/

A 6GB swap file is reasonable for the 4GB RAM device.
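If you would rather not use the script, the same result can be achieved by hand; a minimal sketch, assuming the swap file lives at /mnt/swapfile on the SSD-backed root filesystem:

sudo fallocate -l 6G /mnt/swapfile    # reserve 6GB of space
sudo chmod 600 /mnt/swapfile          # swap files must not be world-readable
sudo mkswap /mnt/swapfile             # format it as swap
sudo swapon /mnt/swapfile             # enable it immediately
echo '/mnt/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # re-enable at every boot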
Note that the zram swap configured by nvzramconfig needs to be disabled:

sudo systemctl stop nvzramconfig.service
sudo systemctl disable nvzramconfig.service

Make sure to reboot after creating the new swapfile and disabling the nvzram.
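After the reboot, the swap configuration can be sanity-checked with:

swapon --show    # should list the new swap file and no zram devices
free -h          # the Swap total should now read around 6G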
The Nvidia JetPack SD card image has Docker installed, so the SeeChange build environment can be pulled from Docker Hub directly onto the Nano.

Pull the relevant build environment from Docker Hub

If you haven’t already, create yourself an account on https://hub.docker.com/ and log in.
Search for ‘insightarm/cpp’.

There are several build environment Docker images available and they are labelled using a specific convention, for example:

– 4-0-0 is the build revision for the engine
– p3448 is the Nvidia internal code for the Jetson Nano platform
– The third part of the tag indicates the JetPack SDK version (4.2.2, 4.3 or 4.4)

Note that the build revision should match the revision of the engine source provided separately by SeeChange as a tarball.

Click on Tags and filter for ‘4-0-0-p3448’ to find images that are relevant for the Nano at build revision 4.0.0.
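If you prefer the command line, the image can also be pulled directly on the Nano once you have logged in; for example, for the JetPack 4.3 variant used below:

docker login                                  # Docker Hub credentials
docker pull insightarm/cpp:4-0-0-p3448-4.3    # ~15GB, so expect this to take a while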

Docker Image Set-up (Local Development Environment)

There are lots of clever ways to build a personalised Docker environment, but it is possible to pull the image and run a bash shell with:

docker run -it --name engine_builder insightarm/cpp:4-0-0-p3448-4.3 /bin/bash

This will take some time as it pulls and extracts around 15GB of image layers, and should finally deliver a shell prompt.
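Because the container was given a name with --name, you can exit the shell and re-enter the same container later without creating a fresh one:

docker start -ai engine_builder    # restart the stopped container and attach to its bash shell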

Just FYI, in Docker Hub click on the highlighted name directly under the word TAG to see the build script for the image

Running

docker images

will list the available images. Copy the IMAGE ID of the build environment into

docker run -it <IMAGE ID> /bin/bash

Building the asset

The engine source tarball provided by SeeChange is a stripped-down version of the engine source code, sufficient to build your C++ or Python model or library into an asset that can be loaded onto an Nvidia Jetson device via the SeeWare adminUI.

Docker allows a folder (volume) in your local development environment to be mapped into a Docker container.

Create a local source folder (~/src) and expand the engine source code tarball into it.

The Docker command line option ‘-v local:container’ bind-mounts your local folder onto an accessible folder within the build environment container (/tmp).

So, to make the local source folder available inside the Docker container

docker run -it -v /home/$USER/src:/tmp insightarm/cpp:4-0-0-p3448-4.3 /bin/bash

This will open the Docker container at a bash shell prompt; if you look in the ‘/tmp’ directory you will see the engine framework.

The next step is to run through a cmake, make, make test, make install process to build the example inferrer in the Docker container

cd /tmp/SeeChange-Engine-4.0.0/engine/engine_bindings/
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=$PWD/../ ../src
make -j$(nproc --ignore=2)
make test
make install

The example from cpp/src/engine/example has been built and is available here

build/engine/example/libengine_example.so
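A quick sanity check that the library was built for the Nano’s architecture (assuming the file utility is present in the container):

file build/engine/example/libengine_example.so    # should report a 64-bit ARM aarch64 shared object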

Packaging the asset

This file and a descriptive json file should be bundled together into a tarball, which can then be imported as an asset via the SeeWare adminUI.

The json file has the following structure

{
    "module_name": "engine.inference.CppInferrer",
    "class_name": "CppInferrer",
    "cpp": {
        "library_name": "libengine_example.so",
        "function_name": "CreateInferrer"
    },
    "model_name": "demo_cpp_v1"
}

Note that the “model_name” value, the name of the json file and the name of the tarball should all match.
It is possible to create the tarball manually like so:

tar czf demo_cpp_v1.tar.gz demo_cpp_v1.json libengine_example.so
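The contents can be checked before the tarball is imported:

tar tzf demo_cpp_v1.tar.gz    # should list demo_cpp_v1.json and libengine_example.so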

A more flexible way to create the tarball is provided by a Python script, asset_maker.py. Here is an example, starting from the build folder:

cd ../..
export PYTHONPATH=$PWD:$PYTHONPATH
./engine_bindings/asset_maker.py --asset cpp_example_simple_inferrer.tar.gz --metadata '{"cpp":{"libraryName":"libengine_example.so","functionName":"CreateExampleSimpleInferrer"}}' --file engine_bindings/libengine_example.so

The result is a loadable asset that can be added for deployment via a solution in the SeeWare admin UI. It will simply draw random bounding boxes, with a single-pixel red line, on any stream that has the Nano associated as its processor.