
Introduction

iam ML provides machine learning inference with the Xilinx Deep Learning Processing Unit (DPU) implemented in the FPGA section of the System-on-Chip (SoC). The DPU is a configurable accelerator for convolutional neural network computation. It supports network features such as convolution, deconvolution, pooling, batch normalization and dense layers.

Two versions of iam ML with different FPGA sizes are available. They differ in performance and in the model complexity they can support; see Xilinx PG338 DPU v3.2 (external PDF) for details.

iam Version | DPU   | Peak Ops/Clock @ 325 MHz
------------|-------|-------------------------
Zu2         | B1152 | 1152
Zu5         | B4096 | 4096

Attached file: xilinx_pg338-dpu.pdf

Video: Machine Learning with iam

Embedded video: https://www.youtube.com/watch?v=6FJaaO-Pizw (starting at 3:05)

See all Video Tutorial Sessions.

Workflow

Machine learning models are usually trained with neural network frameworks such as TensorFlow or Caffe in a Python environment. The trained model can also be executed directly in this training environment for evaluation purposes. To execute a model on the DPU accelerator, the model has to be quantized and the DPU instruction code for the specific model has to be generated. This is done with the Xilinx Vitis AI toolset, which provides the required model conversion functions for the DPU accelerator.
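As a minimal sketch of the hand-over point between training and the Vitis AI tools: the quantizer works on a frozen TensorFlow graph, so the trained checkpoint is first converted to a self-contained protobuf. The checkpoint path and output node names below are hypothetical placeholders, not part of the iam ML samples.

```python
# Minimal sketch (TensorFlow 1.15): freeze a trained graph so it can be
# passed to the Vitis AI quantizer. Paths and node names are placeholders.
import tensorflow as tf

CHECKPOINT = "./train/model.ckpt"      # hypothetical checkpoint from training
OUTPUT_NODES = ["dense_out/BiasAdd"]   # hypothetical output node name(s)

with tf.Session() as sess:
    # Restore the trained variables into the default graph.
    saver = tf.train.import_meta_graph(CHECKPOINT + ".meta")
    saver.restore(sess, CHECKPOINT)

    # Convert variables to constants so the graph is self-contained.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, OUTPUT_NODES)

    # Write the frozen graph; this file is the input to quantization.
    tf.io.write_graph(frozen, ".", "frozen_graph.pb", as_text=False)
```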

Software Requirements

Model training can be done with TensorFlow, Caffe or PyTorch. When using TensorFlow for training, TensorFlow 1.15.x is required. For better training performance, a GPU-enabled TensorFlow installation is highly recommended. See Tensorflow install and older-versions-of-tensorflow for installation instructions.
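A quick sanity check of the training environment could look like the following sketch, which only verifies the TensorFlow version and whether GPU acceleration is visible; it is not part of the iam ML tooling.

```python
# Check that the training environment matches the stated requirements:
# TensorFlow 1.15.x, ideally with GPU support available.
import tensorflow as tf

print("TensorFlow version:", tf.__version__)          # expected: 1.15.x
assert tf.__version__.startswith("1.15"), "training requires TensorFlow 1.15.x"

# GPU support is optional but strongly recommended for training speed.
print("GPU available:", tf.test.is_gpu_available())
```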

For quantization and compilation, the Vitis AI 1.2.1 toolset has to be used; see the Vitis AI User Guide. The Xilinx Vitis AI toolset is provided as a Docker image, which requires a Docker environment. Since no complex training is performed during this step, GPU support is not mandatory. See Vitis AI and Vitis-AI Release 1.2.1 for installation instructions.
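During quantization, the Vitis AI TensorFlow quantizer (vai_q_tensorflow) calibrates the model with a user-supplied Python input function that returns one batch of preprocessed data per calibration iteration. The sketch below shows the general shape of such a function; the file name, input node name and preprocessing are hypothetical and must match the actual model.

```python
# Sketch of a calibration input function for the Vitis AI TensorFlow
# quantizer. It is called once per calibration iteration and the returned
# dict is fed into the frozen graph. Names and data are placeholders.
import numpy as np

CALIB_BATCH = 16
INPUT_NODE = "input_1"                        # hypothetical input node name
calib_images = np.load("calib_images.npy")    # hypothetical calibration data

def calib_input(iter):
    """Return one batch of preprocessed calibration data for iteration `iter`."""
    start = iter * CALIB_BATCH
    batch = calib_images[start:start + CALIB_BATCH].astype(np.float32) / 255.0
    return {INPUT_NODE: batch}
```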

Several iam ML sample applications are available. Model training scripts, Vitis AI instructions and iam application code are provided. Training is done using TensorFlow. The application repository also contains a detailed description of the software requirements and the development workflow.

iam ML Applications Repository

There is a Git repository for an iam ML sample application: