OpenAMP 2018.1


This page includes information to complement and clarify UG1186 "Getting Started Guide" for Zynq UltraScale+ MPSoC.

Quick try!

Here are the basic steps to boot Linux and run an OpenAMP application using pre-built images.

e.g. for ZCU102:
The echo-test application sends packets from Linux running on the quad-core Cortex-A53 to a single Cortex-R5 running FreeRTOS, which sends them back.
  • Extract the BOOT.BIN, image.ub and openamp.dtb files from a pre-built Petalinux BSP tarball to the SD card
  • host shell$ tar xvf xilinx-zcu102-v2017.3-final.bsp --strip-components=4 --wildcards */BOOT.BIN */image.ub */openamp.dtb
    host shell$ cp BOOT.BIN image.ub openamp.dtb <your sd card>
Note: Alternatively, if you already created a Petalinux project with a provided BSP for your board, pre-built images can also be found under the <your project>/pre-built/linux/images/ directory.
  • Go to u-boot prompt and boot Linux from sdcard
  • ...
    Hit any key to stop autoboot:  0 
    ZynqMP> mmcinfo && fatload mmc 0 ${netstart} ${kernel_img} && fatload mmc 0 0x14000000 openamp.dtb
    Device: sdhci@ff170000
    ...
    reading image.ub
    31514140 bytes read in 2063 ms (14.6 MiB/s)
    reading openamp.dtb
    38320 bytes read in 18 ms (2 MiB/s)
    ZynqMP> bootm $netstart $netstart 0x14000000
    ...
Note: As an alternative to the sd-boot steps above, you can jtag-boot the board. For this you need a connected JTAG cable, installed JTAG drivers, and a Petalinux project created from a provided BSP. Go into the <your project>/pre-built/linux/images directory, replace the file system.dtb with openamp.dtb, then enter: "petalinux-boot --jtag --prebuilt 3"
  • At the Linux login prompt, enter 'root' for both user and password, then run the echo-test demo
  • plnx_aarch64 login: root
    Password: 
    root@plnx_aarch64:~# echo image_echo_test > /sys/class/remoteproc/remoteproc0/firmware 
    root@plnx_aarch64:~# echo start > /sys/class/remoteproc/remoteproc0/state   
    [  177.375451] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
    [  177.384705] remoteproc remoteproc0: Booting fw image image_echo_test, size 644144
    [  177.396832] remoteproc remoteproc0: registered virtio0 (type 7)
    [  177.399108] virtio_rpmsg_bus virtio0: rpmsg host is online
    [  177.412370] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
    Starting application...
    Try to init remoteproc resource
    Init remoteproc resource succeeded
    Waiting for events...
    [  177.422089] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
    [  177.442121] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x1
    root@plnx_aarch64:~# modprobe rpmsg_user_dev_driver
    [  188.089835] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: rpmsg_user_dev_rpmsg_drv_probe
    [  188.101250] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: new channel: 0x400 -> 0x1!
    root@plnx_aarch64:~# echo_test
     Echo test start 
     Open rpmsg dev! 
    [  190.364739] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: Sent init_msg to target 0x1.

Docs and source code:

Documents:

  • The following document describes libmetal APIs:

URLs to source code:


Xilinx Openamp and Libmetal related code:

The following locations provide access to the code:

Additional examples:



ZynqMP Linux Master running on APU with RPMsg in kernel space and 2 RPU slaves.


Enabling Linux Drivers and other packages
Proceed as indicated in UG1186 to enable Linux remoteproc driver support and other openamp packages.

Device tree:
  • Add the following to <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
/ {
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        rproc_0_reserved: rproc@3ed00000 {
            no-map;
            reg = <0x0 0x3ed00000 0x0 0x1000000>;
        };
    };
 
    power-domains {
        pd_r5_0: pd_r5_0 {
            #power-domain-cells = <0x0>;
            pd-id = <0x7>;
        };
        pd_r5_1: pd_r5_1 {
            #power-domain-cells = <0x0>;
            pd-id = <0x8>;
        };
        pd_tcm_0_a: pd_tcm_0_a {
            #power-domain-cells = <0x0>;
            pd-id = <0xf>;
        };
        pd_tcm_0_b: pd_tcm_0_b {
            #power-domain-cells = <0x0>;
            pd-id = <0x10>;
        };
        pd_tcm_1_a: pd_tcm_1_a {
            #power-domain-cells = <0x0>;
            pd-id = <0x11>;
        };
        pd_tcm_1_b: pd_tcm_1_b {
            #power-domain-cells = <0x0>;
            pd-id = <0x12>;
        };
    };
 
    amba {
        r5_0_tcm_a: tcm@ffe00000 {
            compatible = "mmio-sram";
            reg = <0 0xFFE00000 0x0 0x10000>;
            pd-handle = <&pd_tcm_0_a>;
        };
        r5_0_tcm_b: tcm@ffe20000 {
            compatible = "mmio-sram";
            reg = <0 0xFFE20000 0x0 0x10000>;
            pd-handle = <&pd_tcm_0_b>;
        };
        r5_1_tcm_a: tcm@ffe90000 {
            compatible = "mmio-sram";
            reg = <0 0xFFE90000 0x0 0x10000>;
            pd-handle = <&pd_tcm_1_a>;
        };
        r5_1_tcm_b: tcm@ffeb0000 {
            compatible = "mmio-sram";
            reg = <0 0xFFEB0000 0x0 0x10000>;
            pd-handle = <&pd_tcm_1_b>;
        };
 
        elf_ddr_0: ddr@3ed00000 {
            compatible = "mmio-sram";
            reg = <0 0x3ed00000 0x0 0x40000>;
        };
 
        elf_ddr_1: ddr@3ed40000 {
            compatible = "mmio-sram";
            reg = <0 0x3ed40000 0x0 0x40000>;
        };
 
        test_r50: zynqmp_r5_rproc@0 {
            compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
            reg = <0x0 0xff9a0100 0 0x100>, <0x0 0xff340000 0 0x100>, <0x0 0xff9a0000 0 0x100>;
            reg-names = "rpu_base", "ipi", "rpu_glbl_base";
            dma-ranges;
            core_conf = "split0";
            srams = <&r5_0_tcm_a &r5_0_tcm_b &elf_ddr_0>;
            pd-handle = <&pd_r5_0>;
            interrupt-parent = <&gic>;
            interrupts = <0 29 4>;
 
        } ;
 
        test_r51: zynqmp_r5_rproc@1 {
            compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
            reg = <0x0 0xff9a0200 0 0x100>, <0x0 0xff340000 0 0x100>, <0x0 0xff9a0000 0 0x100>;
            reg-names = "rpu_base", "ipi", "rpu_glbl_base";
            dma-ranges;
            core_conf = "split1";
            srams = <&r5_1_tcm_a &r5_1_tcm_b &elf_ddr_1>;
            pd-handle = <&pd_r5_1>;
            interrupt-parent = <&gic>;
            interrupts = <0 29 4>;
 
        } ;
 
    };
};
 


  • rebuild the device tree

    petalinux-build -c device-tree
Building remote processor demo applications with Xilinx SDK:
For RPU 0 (cortex_r5_0)
Proceed as documented in UG1186 to generate remote processor openamp applications with Xilinx SDK.
RPU 0 is also used by default for the pre-built applications provided with Petalinux BSPs.

For RPU 1 (cortex_r5_1)
Remote processor application (echo_test, matrix multiply, rpc demo) code is by default set to run on RPU 0 and needs to be slightly modified for RPU-1.
When RPU-1 is selected in Xilinx SDK, the generated code needs to be modified as follows:
  • Check that RING_TX, RING_RX and the RSC_RPROC_MEM entries in rsc_table.c are within the reserved memory section defined in the device tree but do not overlap with any other sections within. (E.g. The DDR for RPU 0/1, vring device nodes, etc.)
  • Check that the linker script addresses match and fit the DTS zynqmp_r5_rproc memory sections.

Example: Running two echo_test application concurrently on Linux, each communicating to a separate RPU
  1. Use Petalinux to build/boot your target and then login to Linux console serial port.
  2. If you haven't added the remote processor firmware applications to your Linux root filesystem (see UG1186 ch. 3), you can tftp them to the target directory /lib/firmware
  3. Check that the remoteproc driver is already loaded (normally it is if your device tree is properly configured):

    root@plnx_aarch64:/lib/firmware# lsmod
        Tainted: G  
    virtio_rpmsg_bus 20480 0 - Live 0xffffff800098e000
    rpmsg_core 16384 1 virtio_rpmsg_bus, Live 0xffffff800097c000
    zynqmp_r5_remoteproc 16384 0 - Live 0xffffff800096a000
    remoteproc 40960 1 zynqmp_r5_remoteproc, Live 0xffffff8000959000
    virtio 16384 2 virtio_rpmsg_bus,remoteproc, Live 0xffffff8000951000
    virtio_ring 20480 2 virtio_rpmsg_bus,remoteproc, Live 0xffffff8000948000
    uio_pdrv_genirq 16384 0 - Live 0xffffff8000940000
  4. Load rpmsg_user_dev_driver LKM:

    root@plnx_aarch64:/lib/firmware# modprobe rpmsg_user_dev_driver
  5. Start RPU-0:

    root@plnx_aarch64:/lib/firmware# echo image_echo_test_r5_0 > /sys/class/remoteproc/remoteproc0/firmware
    root@plnx_aarch64:/lib/firmware#
    root@plnx_aarch64:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc0/state
    root@plnx_aarch64:/lib/firmware#
    [70982.961635] remoteproc remoteproc0: powering up ff9a0100.zynqmp_r5_rproc
    [70982.971366] remoteproc remoteproc0: Booting fw image image_echo_test_r5_0, size 638724
    [70982.985672] virtio_rpmsg_bus virtio0: rpmsg host is online
    [70982.993691] remoteproc remoteproc0: registered virtio0 (type 7)
    [70983.002197] zynqmp_r5_remoteproc ff9a0100.zynqmp_r5_rproc: RPU boot from TCM.
    Starting application...
    Try to init remoteproc resource
    Init remoteproc resource succeeded
    Waiting for events...
    [70983.012367] remoteproc remoteproc0: remote processor ff9a0100.zynqmp_r5_rproc is now up
    [70983.032821] virtio_rpmsg_bus virtio0: creating channel rpmsg-openamp-demo-channel addr 0x1
    [70983.043731] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: rpmsg_user_dev_rpmsg_drv_probe
    root@plnx_aarch64:/lib/firmware# [70983.055535] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: new channel: 0x400 -> 0x1!
  6. Start RPU-1:

    root@plnx_aarch64:/lib/firmware# echo image_echo_test_r5_1 > /sys/class/remoteproc/remoteproc1/firmware
    root@plnx_aarch64:/lib/firmware#
    root@plnx_aarch64:/lib/firmware# echo start > /sys/class/remoteproc/remoteproc1/state 
    [71185.157615] remoteproc remoteproc1: powering up ff9a0200.zynqmp_r5_rproc
    [71185.167453] remoteproc remoteproc1: Booting fw image image_echo_test_r5_1, size 639140
    [71185.182180] virtio_rpmsg_bus virtio1: rpmsg host is online
    [71185.190226] remoteproc remoteproc1: registered virtio1 (type 7)
    [71185.198724] zynqmp_r5_remoteproc ff9a0200.zynqmp_r5_rproc: RPU boot from TCM.
    Starting application...
    Try to init remoteproc resource
    Init remoteproc resource succeeded
    Waiting for events...
    [71185.208915] remoteproc remoteproc1: remote processor ff9a0200.zynqmp_r5_rproc is now up
    [71185.229420] virtio_rpmsg_bus virtio1: creating channel rpmsg-openamp-demo-channel addr 0x1
    [71185.240367] rpmsg_user_dev_driver virtio1:rpmsg-openamp-demo-channel: rpmsg_user_dev_rpmsg_drv_probe
    root@plnx_aarch64:/lib/firmware# [71185.252200] rpmsg_user_dev_driver virtio1:rpmsg-openamp-demo-channel: new channel: 0x400 -> 0x1!
  7. Run echo_test Linux application with RPU-0 using either the serial port or other telnet or ssh connection:

    root@plnx_aarch64:/lib/firmware# echo_test
     Echo test start 
     Open rpmsg dev! 
    [71507.962881] rpmsg_user_dev_driver virtio0:rpmsg-openamp-demo-channel: Sent init_msg to target 0x1.
  8. Run a concurrent echo_test Linux application with RPU-1 using another connection (telnet, ssh...):

    root@plnx_aarch64:/lib/firmware# echo_test -d /dev/rpmsg1 
     Echo test start 
     Open rpmsg dev! 
Note: The order in which you start the RPU determines which /dev/rpmsgX device is being used with that RPU.
In the above case /dev/rpmsg0 is used for RPU-0.
If however RPU-1 was started first, it would have been associated with /dev/rpmsg0 and RPU-0 would have been using /dev/rpmsg1.

ZynqMP Linux Master running on APU with RPMsg in userspace and 2 RPU slaves.


Setting Device Tree
1. Add the following device tree content to <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi


/ {
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        /* Reserved memory for both firmware and shared memory */
        rproc_0_reserved: rproc@3ed00000 {
            no-map;
            reg = <0x0 0x3ed00000 0x0 0x8000000>;
        };
    };
 
    amba {
 
 
        vring: vring@0 {
            compatible = "vring_uio";
            reg = <0x0 0x3ed40000 0x0 0x40000>;
        };
        shm0: shm@0 {
            compatible = "shm_uio";
            reg = <0x0 0x3ed80000 0x0 0x80000>;
        };
 
        vring1: vring@1 {
            compatible = "vring_uio";
            reg = <0x0 0x3ef00000 0x0 0x40000>;
        };
        shm1: shm@1 {
            compatible = "shm_uio";
            reg = <0x0 0x3ef40000 0x0 0x80000>;
        };
 
        ipi0: ipi@0 {
            compatible = "ipi_uio";
            reg = <0x0 0xff340000 0x0 0x1000>;
            interrupt-parent = <&gic>;
            interrupts = <0 29 4>;
        };
 
 
    };
};


2. Enable the OpenAMP and libmetal packages with "petalinux-config -c rootfs":
Filesystem Packages --->
   libs --->
       libmetal   --->
         [*] libmetal
       open-amp   --->
         [*] open-amp
 
 
Building Linux OpenAMP user space application with XSDK
  • Create Empty Application for Linux and for A53
    • OS: Linux
    • Processor: psu_cortexa53
    • Linux sysroot: the sysroot from the Petalinux project:
      • <plnx proj>/build/tmp/sysroots/plnx_aarch_64
    • Once project is built, select properties:
      • C/C++ Build --> Settings
        • Tool Settings Tab Libraries
          • Libraries (-l): add "metal" and "open_amp"
          • Miscellaneous
            • in Linker Flags, add --sysroot=<plnx proj>/build/tmp/sysroots/plnx_aarch_64
    • Copy the following files for OpenAMP application for ZynqMP into the directory:
      • platform_info.c, platform_info.h, rsc_table.c and rsc_table.h from here
      • helper.c from here
    • And the Linux files for one of the three OpenAMP applications in the link:
        • OpenAMP echo-test
        • OpenAMP matrix multiplication Demo
        • OpenAMP RPC Demo
  • When building the application, note a few configuration parameters:
    • If building Linux application to communicate with RPU 1:
      • in platform_info.c, change IPI_MASK to 0x200
    • The RSC_RPROC_MEM entries must be within the corresponding vring device tree node.
    • Update the following inside of the Linux application's platform_info.c to reflect possible changes to the device tree nodes:
      • IPI_DEV_NAME
      • VRING_DEV_NAME
      • SHM_DEV_NAME
    • This means that if the new vring entry in the device tree is at 0x3ef00000, then the string for VRING_DEV_NAME should be "3ef00000.vring", as shown in sysfs on the target.
    • Update RING_TX and RING_RX to reflect the vring entry in the device tree.
Install XSDK-built Linux applications in the Petalinux project
The Linux applications can be installed via a yocto recipe as follows:
SUMMARY = "Simple test application"
SECTION = "PETALINUX/apps"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
SRC_URI = "file://<myapp>"
S = "${WORKDIR}"
INSANE_SKIP_${PN} = "arch"
do_install() {
             install -d ${D}/lib/firmware
             install -m 0644 ${S}/<myapp> ${D}/lib/firmware/<myapp>
}
FILES_${PN} = "/lib/firmware/<myapp>"
Building remote processor demo applications to run on RPU 0 (cortex_r5_0) with Xilinx SDK
Proceed as documented in UG1186 to generate remote processor openamp applications with Xilinx SDK.

Building remote processor demo applications to run on RPU 1 (cortex_r5_1) with Xilinx SDK
Remote processor application (echo_test, matrix multiply, rpc demo) code is by default set to run on RPU 0 and needs to be slightly modified for RPU-1.
When RPU-1 is selected in Xilinx SDK, the generated code needs to be modified as follows:
  • Edit rsc_table.c such that RING_TX and RING_RX are within the memory for the vring device tree node.
  • Check that the linker script addresses are within the reserved memory but do not overlap other sections.

Example: Running two proxy_app-openamp application concurrently on Linux, each communicating to one RPU

1. Generate BOOT.BIN with RPU firmware built from XSDK
Below is a BIF example:
the_ROM_image:
{
         [fsbl_config] a53_x64
         [bootloader] <plnx project>/images/linux/zynqmp_fsbl.elf
         [destination_device=pl] <plnx project>/images/linux/download.bit
         [destination_cpu=pmu] <plnx project>/images/linux/pmufw.elf
         [destination_cpu=r5-0] /path/to/openamp_firmware_on_rpu0.elf
         [destination_cpu=r5-1] /path/to/openamp_firmware_on_rpu1.elf
         [destination_cpu=a53-0, exception_level=el-3, trustzone] <plnx project>/images/linux/bl31.elf
         [destination_cpu=a53-0, exception_level=el-2] <plnx project>/images/linux/u-boot.elf
}
2. When booting, the RPU firmware for RPU 0 and RPU 1 should produce output resembling the following:
Starting application...
Try to init remoteproc resource
Init remoteproc resource succeeded
Waiting for events...
3. Run the proxy_app-openamp Linux application with RPU-0 using either the serial port or another telnet or ssh connection:
root@plnx_aarch64# proxy_app-openamp
metal: info: metal_uio_dev_open: No IRQ for device 3ed40000.vring.
metal: info: Initializating I/Os redirection...
metal: info: metal_uio_dev_open: No IRQ for device 3ed40000.vring.
metal: info: metal_uio_dev_open: No IRQ for device 3ed80000.shm.
Master> Remote proc resource initialized.
Master> RPMSG channel has created.
Remote>Baremetal Remote Procedure Call (RPC) Demonstration
4. Run a concurrent proxy_app-openamp Linux application with RPU-1 using another connection (telnet, ssh, ...):
root@xilinx-zcu102-2017_3:~# linux_proxy_userspace_to_r5_1.elf
metal: info:      metal_uio_dev_open: No IRQ for device 3ef00000.vring.
Initializating I/Os redirection...
metal: info:      metal_uio_dev_open: No IRQ for device 3ef00000.vring.
metal: info:      metal_uio_dev_open: No IRQ for device 3ef40000.shm.
Master> Remote proc resource initialized.
Master> RPMSG channel has created.
Remote>Baremetal Remote Procedure Call (RPC) Demonstration

ZynqMP Linux Master running on APU with RPMsg in kernel space and one RPU slave.


When running the RPUs in split mode with only one RPU as an OpenAMP slave, the second RPU can still run another, non-OpenAMP application.

  • RPU-0 slave:
    Petalinux BSPs provide a default template to generate a DTB with support for OpenAMP running on RPU-0, see:
    <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/openamp-overlay.dtsi
    Add its content to the file <petalinux project>/project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi
  • RPU-1 slave:
    Proceed as for the two-RPU configuration above and edit your device tree to remove the unused 'zynqmp_r5_rproc' entry and its associated nodes (tcm, pd, ...) that are no longer needed.

ZynqMP Linux Master running on APU Linux loads OpenAMP RPU Firmware


Overview
The information below is intended to provide guidance to users who wish to set up Linux on the APU plus bare-metal/RTOS on the RPU. This configuration relies on the FSBL to start the software running on the APU; APU Linux then uses remoteproc to load the RPU.

To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
  • As directed in UG1186, create an application inside the Petalinux project to install the firmware into the Linux host's file system in /lib/firmware.
To create a template for a yocto recipe to install the firmware, do the following:
  1. Create yocto application inside of Petalinux project
    • petalinux-create -t apps --template install -n <app_name> --enable
  2. Copy the firmware (.elf file) into the project-spec/meta-user/recipes-apps/<app_name>/files/ directory
  3. Modify the project-spec/meta-user/recipes-apps/<app_name>/<app_name>.bb to install the remote processor firmware in the RootFS as follows:
SUMMARY = "Simple test application"
SECTION = "PETALINUX/apps"
LICENSE = "MIT"
LIC_FILES_CHKSUM = "file://${COMMON_LICENSE_DIR}/MIT;md5=0835ade698e0bcf8506ecda2f7b4f302"
SRC_URI = "file:<myfirmware>"
S = "${WORKDIR}"
INSANE_SKIP_${PN} = "arch"
do_install() {
  install -d ${D}/lib/firmware
  install -m 0644 ${S}/<myfirmware> ${D}/lib/firmware/<myfirmware>
}
FILES_${PN} = "/lib/firmware/<myfirmware>"


  • Build Linux images with the "petalinux-build" command inside the PetaLinux project.


2. Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
/ {
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        rproc_0_reserved: rproc@3ed00000 {
            no-map;
            /* DDR memory reserved for RPU firmware.
             * If you want to use predefined shared memory,
             * you should also reserve it here.
             */
            reg = <0x0 0x3ed00000 0x0 0x1000000>;
        };
    };
 
    power-domains {
        /* For TCM memories you will need to specify the power-domain
         * IDs, as the APU uses them to request access through the
         * PMU firmware.
         */
         pd_r5_0: pd_r5_0 {
            #power-domain-cells = <0x0>;
            pd-id = <0x7>;
          };
        pd_tcm_0_a: pd_tcm_0_a {
            #power-domain-cells = <0x0>;
            pd-id = <0xf>;
        };
        pd_tcm_0_b: pd_tcm_0_b {
            #power-domain-cells = <0x0>;
            pd-id = <0x10>;
        };
 
    };
 
    amba {
         /* You will need to specify the firmware memory as "mmio-sram". */
         r5_0_tcm_a: tcm@ffe00000 {
            compatible = "mmio-sram";
            reg = <0 0xFFE00000 0x0 0x10000>;
            pd-handle = <&pd_tcm_0_a>;
        };
        r5_0_tcm_b: tcm@ffe20000 {
            compatible = "mmio-sram";
            reg = <0 0xFFE20000 0x0 0x10000>;
            pd-handle = <&pd_tcm_0_b>;
        };
 
        elf_ddr_0: ddr@3ed00000 {
            compatible = "mmio-sram";
            reg = <0 0x3ed00000 0x0 0x40000>;
        };
 
        test_r50: zynqmp_r5_rproc@0 {
            compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
            reg = <0x0 0xff9a0100 0 0x100>, <0x0 0xff9a0000 0 0x100>;
            reg-names = "rpu_base", "rpu_glbl_base";
            dma-ranges;
            core_conf = "split0";
            /* Specify the firmware memories here */
            srams = <&r5_0_tcm_a &r5_0_tcm_b &elf_ddr_0>;
            pd-handle = <&pd_r5_0>;
        } ;
    };
};
 
 
3. Run the following to build your petalinux project.
petalinux-build
4. After booting the Petalinux project, run the following to boot the RPU firmware on the RPU.
echo <name of firmware> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
 
5. Run the Linux application
6. Stop the firmware
echo stop > /sys/class/remoteproc/remoteproc0/state

ZynqMP Linux loads RPU, Linux OpenAMP Application talks to RPU OpenAMP Application


Overview
The information below is intended to provide guidance to users who wish to set up Linux on the APU plus bare-metal/RTOS on the RPU. APU Linux will use remoteproc to load the RPU. Linux running on the APU will communicate with the RPU via the OpenAMP library implementation of RPMsg.

Setting up Remote Firmware
The user can, for example, use a structure similar to the OpenAMP RPU applications created in the "Building Remote Applications" sections of UG1186.

To Boot RPU Firmware via APU with Linux
These instructions assume the user has already generated firmware for the RPU and that the user is using Petalinux to create their embedded Linux solution.
  • As directed in UG1186, create a yocto recipe inside the Petalinux project to install the firmware into the Linux host's file system in /lib/firmware. Refer to the previous example, "ZynqMP Linux Master running on APU Linux loads OpenAMP RPU Firmware", for a guide on how to create such a yocto recipe.
  • Modify the device tree at project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi. For example:
/include/ "system-conf.dtsi"
/{
    reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;
        rproc_0_reserved: rproc@3ed00000 {
            no-map;
            reg = <0x0 0x3ed00000 0x0 0x1000000>;
        };
    };
 
    power-domains {
        pd_r5_0: pd_r5_0 {
            #power-domain-cells = <0x0>;
            pd-id = <0x7>;
        };
        pd_tcm_0_a: pd_tcm_0_a {
            #power-domain-cells = <0x0>;
            pd-id = <0xf>;
        };
        pd_tcm_0_b: pd_tcm_0_b {
            #power-domain-cells = <0x0>;
            pd-id = <0x10>;
        };
    };
 
    amba {
        r5_0_tcm_a: tcm@ffe00000 {
            compatible = "mmio-sram";
            reg = <0x0 0xFFE00000 0x0 0x10000>;
            pd-handle = <&pd_tcm_0_a>;
        };
        r5_0_tcm_b: tcm@ffe20000 {
            compatible = "mmio-sram";
            reg = <0x0 0xFFE20000 0x0 0x10000>;
            pd-handle = <&pd_tcm_0_b>;
        };
        elf_ddr_0: ddr@3ed00000 {
            compatible = "mmio-sram";
            reg = <0x0 0x3ed00000 0x0 0x40000>;
        };
        test_r50: zynqmp_r5_rproc@0 {
            compatible = "xlnx,zynqmp-r5-remoteproc-1.0";
            reg = <0x0 0xff9a0100 0x0 0x100>, <0x0 0xff9a0000 0x0 0x100>;
            reg-names = "rpu_base", "rpu_glbl_base";
            dma-ranges;
            core_conf = "split0";
            srams = <&r5_0_tcm_a &r5_0_tcm_b &elf_ddr_0>;
            pd-handle = <&pd_r5_0>;
        } ;
        /* UIO device node for vring device memory */
        vring: vring@0 {
            compatible = "vring_uio";
            reg = <0x0 0x3ed40000 0x0 0x40000>;
        };
        /* UIO device node for shared memory device memory */
        shm0: shm@0 {
            compatible = "shm_uio";
            reg = <0x0 0x3ed80000 0x0 0x80000>;
        };
        /* UIO device node for IPI device */
        ipi0: ipi@0 {
            compatible = "ipi_uio";
            reg = <0x0 0xff340000 0x0 0x1000>;
            interrupt-parent = <&gic>;
            interrupts = <0 29 4>;
        };
    };
 
};
 

Build Petalinux with required packages

1. Enable the required packages with petalinux tools. For example, go to your petalinux project's topmost directory and start the rootfs configuration utility:
petalinux-config -c rootfs
2. Enable the required rootfs packages for the application. If you are running the sample applications from UG1186, the packages would be enabled by the following:
Filesystem Packages
  --> libs
      --> libmetal
          --> [ * ] libmetal
      --> open-amp
          --> [ * ] open-amp
  --> misc
      --> openamp-fw-echo-testd
          --> [ * ] openamp-fw-echo-testd
      --> openamp-fw-mat-muld
          --> [ * ] openamp-fw-mat-muld
      --> openamp-fw-rpc-demod
          --> [ * ] openamp-fw-rpc-demod
      --> rpmsg-echo-test
          --> [ * ] rpmsg-echo-test
      --> rpmsg-mat-mul
          --> [ * ] rpmsg-mat-mul
      --> rpmsg-proxy-app
          --> [ * ] rpmsg-proxy-app


3. Then build the petalinux project.
petalinux-build
Load and start the firmware step by step:
  • Log into Linux, then start the RPU firmware, e.g.:
echo <fw_name> > /sys/class/remoteproc/remoteproc0/firmware
echo start > /sys/class/remoteproc/remoteproc0/state
  • Run the Linux Application
  • Stop the firmware
echo stop > /sys/class/remoteproc/remoteproc0/state

ZynqMP on APU Linux communicate with RPU via Shared Memory without OpenAMP

Overview
The information below is intended to provide guidance to users who wish to set up Linux on the APU plus bare-metal, RTOS, etc. on the RPU. We assume that Linux and the RPU will communicate via shared memory. IPI can be used to further coordinate communication between processors. The use of IPI is documented in the section titled
"ZynqMP Linux Master running on APU Linux loads and runs arbitrary RPU Firmware; APU communicate with RPU via RPMsg in Userspace".

Device Tree Settings for Linux
To make the shared memory device accessible to Linux running on APU, there must be some modifications in the device tree.
For example, if configuring the device tree for the OpenAMP echo_test demo found here, a shared memory node is placed in the amba section. E.g.:
/{
    amba {
        /* UIO device node for shared memory device memory */
        shm0: shm@0 {
            compatible = "shm_uio";
            reg = <0x0 0x3ed80000 0x0 0x80000>;
        };
    };
};
Configuring the Petalinux project
The OpenAMP applications use Libmetal to access shared memory, so the libmetal package in your petalinux project must be enabled. This package can be enabled in the rootfs configuration using the petalinux-config utility.

run:
petalinux-config -c rootfs
 
and then in the utility enable the following packages:
Filesystem Packages
  --> libs
      --> libmetal
          --> [ * ] libmetal
      --> open-amp
          --> [ * ] open-amp
  --> misc
      --> openamp-fw-echo-testd
          --> [ * ] openamp-fw-echo-testd
      --> openamp-fw-mat-muld
          --> [ * ] openamp-fw-mat-muld
      --> openamp-fw-rpc-demod
          --> [ * ] openamp-fw-rpc-demod
      --> rpmsg-echo-test
          --> [ * ] rpmsg-echo-test
      --> rpmsg-mat-mul
          --> [ * ] rpmsg-mat-mul
      --> rpmsg-proxy-app
          --> [ * ] rpmsg-proxy-app
 


Communicating via Shared memory
The information below assumes that the shared memory node is visible in Linux userspace.

Using the Libmetal API, we can read from and write to shared memory with the following functions:
static inline uint64_t metal_io_read(struct metal_io_region *io, unsigned long offset, memory_order order, int width);
int metal_io_block_read(struct metal_io_region *io, unsigned long offset, void *restrict dst, int len);
and
static inline void metal_io_write(struct metal_io_region *io, unsigned long offset, uint64_t value, memory_order order, int width);
int metal_io_block_write(struct metal_io_region *io, unsigned long offset, const void *restrict src, int len);


An example showing the use of these functions in Linux userspace can be found here. The link includes examples of reading from and writing to shared memory, as well as initialization and cleanup of Libmetal resources.


How to Generate BOOT.BIN
This section assumes that you have already run petalinux-build in your Petalinux project to build all the necessary components for your embedded Linux solution, in addition to the firmware to run on an RPU.
Use Petalinux tools to construct the BOOT.BIN that you can then put on an SD card to boot your ZynqMP board.
Below is a sample bootgen.bif file that you can create or modify in the top-level directory of your Petalinux project to help construct the BOOT.BIN:
the_ROM_image:
 {
           [fsbl_config] a53_x64
           [bootloader,  destination_cpu=a53-0] ./images/linux/zynqmp_fsbl.elf
           [pmufw_image, destination_cpu=a53-0] ./images/linux/pmufw.elf
           [destination_cpu=a53-0, exception_level=el-3, trustzone] ./images/linux/bl31.elf
           [destination_cpu=a53-0, exception_level=el-2] ./images/linux/u-boot.elf
   }
Using this .bif file and petalinux tools, we will build a BOOT.BIN file that you can use for your ZynqMP board.
petalinux-package --boot --force --u-boot ./images/linux/u-boot.elf --cpu r5-0 --add /path/to/firmware
Here we have shown a few things:

  • We specify which RPU the data file (your firmware) goes to with the --cpu option and r5-0. You can also use the r5-1 or r5-lockstep options.
  • The --add option, whose argument specifies the path to your firmware.
  • The --force option overwrites the existing BOOT.BIN file in the current directory.
  • The --u-boot option specifies the location of u-boot.elf.

Xen DomU support for OpenAMP running in RPMsg userspace

  • Extract ZCU102 PetaLinux BSP
  • Follow http://www.wiki.xilinx.com/Building+the+Xen+Hypervisor+with+PetaLinux+2017.3 to run DomU
  • Before building Dom0, add the following to "project-spec/meta-user/recipes-bsp/device-tree/files/system-user.dtsi" in the PetaLinux project, after making the XEN-related additions described in the XEN wiki. Mark the vring, shm and IPI devices that will be used in DomU as "disabled" and "xen,passthrough":
/{
    reserved-memory {
                #address-cells = <2>;
                #size-cells = <2>;
                ranges;
                rproc_0_reserved: rproc@3ed00000 {
                        no-map;
                        reg = <0x0 0x3ed00000 0x0 0x1000000>;
                };
        };
    amba {
 
                vring: vring@3ed40000 {
                        compatible = "vring_uio";
                        reg = <0x0 0x3ed40000 0x0 0x40000>;
                        status = "disabled";
                        xen,passthrough;
                };
                shm0: shm@3ed80000 {
                        compatible = "shm_uio";
                        status = "disabled";
                        xen,passthrough;
                        reg = <0x0 0x3ed80000 0x0 0x80000>;
                };
                ipi0: ipi@ff340000 {
                        compatible = "ipi_uio";
                        reg = <0x0 0xff340000 0x0 0x1000>;
                        interrupt-parent = <&gic>;
                        interrupts = <0 29 4>;
                        status = "disabled";
                        xen,passthrough;
                };
        };
 
};
  • Run "petalinux-build" to build Dom0 image
  • Create another PetaLinux project based on the zcu102 BSP for DomU Linux
    • In DomU Linux, add the OpenAMP libraries and demo applications, but do not include the XEN tools: they would make the rootfs too big for an INITRAMFS, and the default XEN DomU Linux example boots from an INITRAMFS.

  • In the DomU PetaLinux project, enable openamp packages with "petalinux-config -c rootfs":
 
Filesystem Packages -->
    misc -->
        packagegroup-petalinux-openamp   --->
            [*] packagegroup-petalinux-openamp
  • In the DomU project, create a passthrough DTS (compiled to e.g. openamp-passthrough.dtb) that includes the vring, shared memory and IPI device nodes, so that DomU Linux can see those devices:
/dts-v1/;
 
/{
        #address-cells = <0x2>;
        #size-cells = <0x2>;
 
        passthrough {
                compatible = "simple-bus";
                ranges;
                #address-cells = <0x2>;
                #size-cells = <0x2>;
 
                vring: vring@0 {
                        compatible = "vring_uio";
                        reg = <0x0 0x3ed40000 0x0 0x40000>;
                };
                shm0: shm@0 {
                        compatible = "shm_uio";
                        reg = <0x0 0x3ed80000 0x0 0x80000>;
                };
                ipi0: ipi@0 {
                        compatible = "ipi_uio";
                        reg = <0x0 0xff340000 0x0 0x1000>;
                        interrupt-parent = <0x1>;
                        interrupts = <0 29 4>;
                };
        };
};

  • In the DomU PetaLinux project, run "petalinux-build" to compile
  • In the RPU baremetal OpenAMP application, the application talks to RPMsg in Linux userspace instead of RPMsg/Remoteproc in Linux kernel space. The Linux side will therefore not write back resource table information, as the resource table is not in the shared memory. In this case, use static information in the resource table: set the vdev status in the resource table to "VIRTIO_CONFIG_STATUS_DRIVER_OK", otherwise the RPU will keep waiting for the vdev status in the resource table to become "DRIVER_OK". E.g.:

 
/* Virtio device entry */
{
    RSC_VDEV, VIRTIO_ID_RPMSG, 0, RPMSG_IPU_C0_FEATURES, 0, 0, VIRTIO_CONFIG_STATUS_DRIVER_OK,
    NUM_VRINGS, {0, 0},
},
  • Create BOOT.BIN with the RPU ELF. Here is a BIF example:
 
the_ROM_image:
{
        [fsbl_config] a53_x64
        [bootloader] images/linux/zynqmp_fsbl.elf
        [pmufw_image] images/linux/pmufw.elf
        [destination_cpu=r5-0] /ws/jliang/sdk/2017.3/r5-0-echo-test/r5-0-echo-test/Release/r5-0-echo-test.elf
        [destination_cpu=a53-0, exception_level=el-3, trustzone] images/linux/bl31.elf
        [destination_cpu=a53-0, exception_level=el-2] images/linux/u-boot.elf
}

  • Follow the Xilinx XEN wiki to boot into Dom0 with this BOOT.BIN.
    • Please note that as you can see in the BIF file, the RPU boots from FSBL.

  • Get the DomU Image and OpenAMP passthrough DTB into Dom0 "/boot/" directory.
    • E.g. tftp the "Image", and the "openamp-passthrough.dtb" to the target's "/boot/" directory

  • Go to "/etc/xen"
  • Use "example-passnet.cfg" as a starting point for the DomU configuration:
# cp example-passnet.cfg openamp.cfg
 
  • Edit "openamp.cfg" as follows:
name = "guest0"
kernel = "/boot/Image"
extra = "console=hvc0 rdinit=/sbin/init"
memory = 256
vcpus = 2
device_tree = "/boot/openamp-passthrough.dtb"
irqs = [ 61 ]
iomem = [ "0xff340,1", "0x3ed40,64", "0x3ed80,128" ]

  • "iomem": [ "START_ADDR,NUM_OF_4K_PAGES", ...]
  • Start the DomU guest from Dom0 inside "/etc/xen" directory:

# xl create -c openamp.cfg
 
  • After booting into the DomU guest, run the following to try the echo test, assuming the RPU firmware runs the echo test:



#### DomU guest #####
# echo_test-openamp





Feature Changes

Module Name | Change | Link
OpenAMP echo test and matrix multiplication demos | Run without manual input from the user; instead, add a -n command line option to specify the number of iterations | Link to patch
Xen Dom0 and DomU support for OpenAMP running in RPMsg userspace | Support for these two configurations in 2018.1 |
RPMsg char driver support (experimental) | Able to run RPMsg OpenAMP demos using the rpmsg char driver; the patch enabling this is not yet upstream | Link to patch
libmetal MicroBlaze baremetal support | Add MicroBlaze baremetal support to libmetal | Link to source code
Fix for OpenAMP echo demo running on RPU0 and RPU1 hanging for large payload numbers | Update the GIC driver in embeddedsw so that relevant interrupts can be mapped or unmapped to a CPU | Link to patch



© Copyright 2019 - 2022 Xilinx Inc.