How to deploy an Intel OpenVINO Open Model Zoo model on OpenNCC

The key step in developing an Edge-AI camera on OpenNCC is deploying a deep learning neural network model at the edge. The model may be trained with Caffe*, TensorFlow*, MXNet*, Kaldi*, PaddlePaddle*, or ONNX*.

OpenNCC SDK installation

Please install the SDK development environment according to the documentation, for example:

  • <YOUR OPENNCC SDK INSTALL PATH>/Platform/Linux/readme.md

  • We use Linux for the demo; other platforms are also well supported by OpenNCC.

If you don't have the SDK yet, you can clone it from the openncc GitHub repository.

  • To use the Model Optimizer and BLOB Converter, please make sure the OpenVINO Toolkit is already installed. You can download OpenVINO here. You MUST choose the 2020.3 LTS version, which has been comprehensively tested with OpenNCC.

  • If you want to download a model from the Open Model Zoo, you need the Model Downloader, or you can download it from the website.

Steps to deploy a deep learning model at the edge

The following figure shows a complete AI model development process:


If we start with an existing model, the steps are simplified as follows:

Step1: Prepare a trained model

Use the Open Model Zoo to find open-source, pretrained, and preoptimized models ready for inference, or use your own deep-learning model.

Download a model from the Open Model Zoo, for example:

  • $./downloader.py --name person-detection-retail-0002

  • Note: OpenNCC supports the FP16 format; please use an FP16 model.
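Since OpenNCC only accepts FP16, you can restrict the download to FP16 IRs. A minimal sketch that builds (but does not run) the downloader invocation; the `--precisions` flag is part of the Open Model Zoo tools, while the script path and output directory here are assumptions:

```python
# Sketch: build a Model Downloader command restricted to FP16 models.
# The downloader.py location and output directory are assumptions.

def downloader_cmd(model_name, precisions="FP16", output_dir="models"):
    """Return argv for the Open Model Zoo downloader.py."""
    return ["./downloader.py",
            "--name", model_name,
            "--precisions", precisions,   # skip FP32/INT8 variants
            "-o", output_dir]

cmd = downloader_cmd("person-detection-retail-0002")
print(" ".join(cmd))
```

Passing the resulting list to `subprocess.run` (with the Open Model Zoo tools on your host) would perform the actual download.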


Step2: Model Optimizer

Run the trained model through the Model Optimizer to convert it to an Intermediate Representation (IR), which is represented by a pair of files (.xml and .bin). These files describe the network topology and contain the model's weights and biases in binary form.

  • If you downloaded the model from the Open Model Zoo, it already comes as IR files, so you don't need to run the Model Optimizer.

  • If you use your own model, you need to run the Model Optimizer; please follow the Model Optimizer Developer Guide.
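Once you have an IR pair, a quick sanity check before converting it is to parse the .xml topology. A sketch using only Python's standard library; the sample IR below is a heavily simplified illustration (real IR files carry many more attributes):

```python
import xml.etree.ElementTree as ET

def summarize_ir(xml_text):
    """Return (network name, layer count) from an IR .xml document."""
    net = ET.fromstring(xml_text)          # the IR root element is <net>
    layers = net.find("layers")
    count = len(layers) if layers is not None else 0
    return net.get("name"), count

# Minimal synthetic IR, for illustration only:
sample = """<net name="person-detection" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="detection_output" type="DetectionOutput"/>
  </layers>
</net>"""
print(summarize_ir(sample))  # -> ('person-detection', 2)
```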

Step3: Convert to BLOB format

After model optimization is complete (i.e., you have the two files, .xml and .bin), the model needs to be converted to the BLOB format before it can be deployed on OpenNCC.

Run the myriad_compile tool to pack the IR files into a BLOB file; OpenNCC uses the BLOB file to run inference with the model.

  • Example:

$ ./myriad_compile -m input_xxx-fp16.xml -o output_xxx.blob -VPU_PLATFORM VPU_2480 -VPU_NUMBER_OF_SHAVES 8 -VPU_NUMBER_OF_CMX_SLICES 8

  • Note:

myriad_compile is a tool from the OpenVINO Toolkit; you need to install OpenVINO on your development host. The tool is located at:

/opt/intel/openvino/deployment_tools/inference_engine/lib/intel64/myriad_compile
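The invocation above can be wrapped in a small helper that checks for the full IR pair before calling the tool. A sketch under the assumption of the default install path; the helper only builds the command line, it does not execute it:

```python
from pathlib import Path

# Default OpenVINO install location (an assumption; adjust for your host).
MYRIAD_COMPILE = ("/opt/intel/openvino/deployment_tools/"
                  "inference_engine/lib/intel64/myriad_compile")

def myriad_compile_cmd(xml_path, blob_path, shaves=8, cmx_slices=8):
    """Build argv for compiling an FP16 IR pair into a .blob for VPU_2480."""
    xml = Path(xml_path)
    bin_file = xml.with_suffix(".bin")
    if not bin_file.exists():              # the IR is a pair: .xml + .bin
        raise FileNotFoundError(f"missing weights file: {bin_file}")
    return [MYRIAD_COMPILE, "-m", str(xml), "-o", str(blob_path),
            "-VPU_PLATFORM", "VPU_2480",
            "-VPU_NUMBER_OF_SHAVES", str(shaves),
            "-VPU_NUMBER_OF_CMX_SLICES", str(cmx_slices)]
```

Feeding the result to `subprocess.run` on a host with OpenVINO 2020.3 installed reproduces the example command shown above.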


Step4: Inference on OpenNCC and extract the results

Use the OpenNCC SDK to load the BLOB file, run inference, and output the results on OpenNCC cameras.

The OpenNCC SDK would output two types of streams:

* Normal video stream, supporting YUV420, YUV422, MJPG, H.264, and H.265

* AI metadata stream: the binary inference results associated with frame-based data. The specific output structure depends on the model running on OpenNCC. Take the person-detection-retail-0002 model as an example:


  • Outputs:

The net outputs a "detection_output" blob with shape [1x1xNx7], where N is the number of detected pedestrians. Each detection has the format [image_id, label, conf, x_min, y_min, x_max, y_max], where:
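For this model, the metadata stream can be decoded on the host by walking the flat [1x1xNx7] blob and filtering by confidence. A minimal sketch with plain Python lists; the confidence threshold, the pixel conversion, and the end-of-detections sentinel handling are assumptions for illustration, not part of the OpenNCC SDK:

```python
def parse_detections(blob, img_w, img_h, conf_threshold=0.5):
    """Decode a flat [1x1xNx7] detection_output blob into pixel-space boxes.

    Each group of 7 floats is [image_id, label, conf, x_min, y_min, x_max, y_max],
    with box coordinates normalized to [0, 1].
    """
    boxes = []
    for i in range(0, len(blob), 7):
        image_id, label, conf, x0, y0, x1, y1 = blob[i:i + 7]
        if image_id < 0:          # a negative image_id marks the end of detections
            break
        if conf >= conf_threshold:
            boxes.append((int(label), conf,
                          round(x0 * img_w), round(y0 * img_h),
                          round(x1 * img_w), round(y1 * img_h)))
    return boxes

# Two detections, one below the confidence threshold:
flat = [0, 1, 0.9, 0.1, 0.2, 0.3, 0.4,
        0, 1, 0.2, 0.5, 0.5, 0.6, 0.6]
print(parse_detections(flat, 640, 480))  # keeps only the 0.9-confidence box
```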