Zynq UltraScale+ MPSoC Base TRD 2017.1 - Design Module 9


Design Overview


This module shows how to add two image-processing filters between the capture and the display. The 2D filter and dense optical flow algorithm are implemented in hardware.
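As a software reference for what the hardware 2D filter computes, the sketch below shows a minimal 3x3 convolution with configurable coefficients over a grayscale image. The function name and signature are illustrative only; the shipped accelerator is generated from the HLS C/C++ sources in the filter2d_optflow library, not from this code.

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// Illustrative 2D convolution: 3x3 kernel with configurable coefficients.
// Border pixels are handled by replicating the nearest edge pixel.
std::vector<uint8_t> filter2d(const std::vector<uint8_t>& src,
                              int width, int height,
                              const std::array<int, 9>& coeffs) {
    std::vector<uint8_t> dst(src.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int acc = 0;
            for (int ky = -1; ky <= 1; ++ky) {
                for (int kx = -1; kx <= 1; ++kx) {
                    // Clamp coordinates at the image border (edge replication).
                    int sy = std::min(std::max(y + ky, 0), height - 1);
                    int sx = std::min(std::max(x + kx, 0), width - 1);
                    acc += coeffs[(ky + 1) * 3 + (kx + 1)] * src[sy * width + sx];
                }
            }
            // Saturate the accumulator to the 8-bit pixel range.
            dst[y * width + x] = static_cast<uint8_t>(std::min(std::max(acc, 0), 255));
        }
    }
    return dst;
}
```

With an identity kernel (center coefficient 1, all others 0) the output equals the input; edge-detect or sharpen kernels are selected in the TRD GUI by changing the coefficient set.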





Design Components


This module requires the following components:
  • zcu102_base_trd (SDSoC)
  • filter2d_optflow (HW)
  • video_lib
  • video_qt2



Build Flow Tutorials


2D Filter and Optical Flow Combined Sample


This tutorial shows how to build the hls_video version of filter2d combined with the xfopencv dense optical flow algorithm sample based on the Base TRD SDSoC platform.

  • Open the existing SDx workspace from design module 8 using the SDx tool.
    % cd $TRD_HOME/apu/video_app
    % sdx -workspace . &
  • Create a new SDx Project.
  • Enter 'filter2d_optflow' as project name.
  • Click 'Add Custom Platform', browse to the $TRD_HOME/apu/sdsoc_pfm directory and confirm. Select the newly added 'zcu102_base_trd (custom)' platform for production silicon or 'zcu102_es2_base_trd (custom)' for ES2 silicon from the list and click 'Next'.
    Note: You can skip this step if you have added the platform in a previous design module.
  • Check the 'Shared Library' box and click 'Next'.
  • Select 'Filter 2D and Optical Flow Library' template and click 'Finish'.
  • Verify that the accelerator functions have been added to the HW functions list in the project settings panel and that the clock frequency is set to 299.97 MHz. Wait until the C/C++ indexer has finished, indicated by the progress icon in the lower right corner; only then change the active build configuration to 'Release'.
  • Right-click the filter2d_optflow project, select 'C/C++ Build Settings'. Navigate to the 'Build Artifacts' tab and add the output prefix 'lib'. Click OK.
  • Right-click the filter2d_optflow project and select 'Build Project'.
  • Copy the content of the generated sd_card folder to the dm9 SD card directory.
    % mkdir -p $TRD_HOME/images/dm9
    % cp -r filter2d_optflow/Release/sd_card/* $TRD_HOME/images/dm9

Video Qt Application


This tutorial shows how to build the video library and the video Qt application.

  • Source the Qt setup script and generate the Qt Makefile.
    % cd $TRD_HOME/apu/video_app/video_qt2
    % source qmake_set_env.sh
    % qmake video_qt2-dm9.pro -r -spec linux-oe-g++
  • Close SDx and reopen the same workspace so the Qt environment gets picked up correctly by Eclipse. If you sourced the qmake_set_env.sh script in the same shell before opening SDx, you can skip this step.
  • Right-click the video_qt2 project and click 'Build Project'.
  • Copy the generated video_qt2 executable to the dm9 SD card directory.
    % cp video_qt2 $TRD_HOME/images/dm9/



Run Flow Tutorial


  • See here for board setup instructions.
  • Copy all the files from the $TRD_HOME/images/dm9 SD card directory to a FAT formatted SD card.
  • Power on the board to boot the images; make sure the INIT_B, DONE, and all power rail LEDs are lit green.
  • After ~30 seconds, the display will turn on and the application will start automatically, targeting the maximum supported resolution of the monitor (3840x2160, 1920x1080, or 1280x720). The application detects whether DP Tx or HDMI Tx is connected and outputs on the corresponding display device.
  • The SD card file system is mounted as read-only at /media/card. To remount the file system as read-write, run
    % remount
  • To re-start the TRD application with the max supported resolution, run
    % run_video.sh
  • To re-start the TRD application with a specific supported resolution, use the -r switch, e.g. for 1920x1080, run
    % run_video.sh -r 1920x1080
  • The user can now control the application from the GUI's control bar (bottom) displayed on the monitor.
  • The user can select from the following video source options:
    • TPG (SW): virtual video device that emulates a USB webcam purely in software
    • USB: USB Webcam using the universal video class (UVC) driver
    • TPG (PL): Test Pattern Generator implemented in the PL
    • HDMI: HDMI input implemented in the PL
  • The user can select from the following accelerator options:
    • 2D convolution filter with configurable coefficients
    • Dense optical flow algorithm (shown in figure)
  • The supported accelerator modes depend on the selected filter:
    • OFF - accelerator is disabled/bypassed
    • SW - accelerator is run on A53
    • HW - accelerator is run on PL
  • The video info panel (top left) shows essential settings/statistics.
  • The CPU utilization graph (top right) shows CPU load for each of the four A53 cores.
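The OFF/SW/HW accelerator modes listed above can be pictured as a simple per-frame dispatch. The sketch below uses hypothetical names (a pixel-inversion stand-in for the real algorithm); the actual application dispatches through video_lib and the SDSoC-generated stubs in the filter2d_optflow library, and the HW path is only callable on the target board.

```cpp
#include <cstdint>
#include <vector>

// Accelerator modes as shown in the GUI: OFF / SW / HW.
enum class AccelMode { Off, Sw, Hw };

// Stand-in software filter (pixel inversion) used for both paths below,
// since the PL accelerator cannot be invoked off-target.
static std::vector<uint8_t> filter_sw(const std::vector<uint8_t>& f) {
    std::vector<uint8_t> out(f.size());
    for (std::size_t i = 0; i < f.size(); ++i)
        out[i] = static_cast<uint8_t>(255 - f[i]);
    return out;
}

// Hypothetical per-frame dispatch over the three accelerator modes.
std::vector<uint8_t> process_frame(const std::vector<uint8_t>& frame,
                                   AccelMode mode) {
    switch (mode) {
    case AccelMode::Off:
        return frame;            // bypass: frame passes through unmodified
    case AccelMode::Sw:
        return filter_sw(frame); // algorithm runs on the A53 cores
    case AccelMode::Hw:
        // On the board this would call the SDSoC stub that drives the PL
        // accelerator; in this sketch it falls back to the software path.
        return filter_sw(frame);
    }
    return frame;
}
```

Switching modes in the GUI changes only which path a frame takes between capture and display, which is why the video info panel and CPU utilization graph react immediately to the selection.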



Return to the Design Tutorials Overview.

© Copyright 2019 - 2022 Xilinx Inc.