Linux ZynqMP GQSPI Driver

Introduction

This page provides information about the ZynqMP and Versal QSPI driver, spi-zynqmp-gqspi.c, which can be found on the Xilinx Git at drivers/spi/spi-zynqmp-gqspi.c

The MTD layer handles all flash devices used with QSPI. This layer supports flash devices from different vendors (Micron/Numonyx, Winbond and Spansion being the most common) and of different sizes, from 128 Mbit to 1 Gbit. The features of these flash devices are not all alike and are therefore handled through flags indicating the supported capabilities.
This layer was customized by Xilinx to support parallel and stacked configurations. It can be found at drivers/mtd/spi-nor/core.c on the Xilinx Git

HW/IP Features

The GQSPI controller used in ZynqMP and Versal supports the following features:
  • Supports low-level (generic) access
  • Supports future commands
  • Supports 3-, 4-, 6-…N-byte addressing
  • Supports Command Queuing (Generic FIFO depth is 32)
  • Supports 4 or 8-bit interface
  • Supports 2 Chip Select Lines
  • Supports 4-Bit Bi-Directional I/O signals
  • Supports x1/x2/x4 Read/Write
  • Supports 44-bit address space on AXI in DMA mode
  • Supports byte stripe when two data buses are connected
  • Supports single interrupt for QSPI/DMA Interrupt status

Driver Features

The ZynqMP and Versal GQSPI driver supports the following features:
  • Supports DMA for receiving aligned data from the RX FIFO.
  • Supports PIO read for receiving unaligned data from the RX FIFO.
  • Supports PIO write for programming pages to the flash.
  • Supports extended addressing.
  • Supports Single, Dual Parallel and Dual Stacked configurations.
  • Supports Normal and Quad read modes

Missing Features and known Issues/Limitations in Driver

  • This driver supports Generic QSPI (GQSPI), not Linear QSPI (LQSPI)

Important AR links

  • Kernel boot failed while mounting JFFS2 filesystem in QSPI boot mode - AR-71114

Kernel Configuration Options

The following config option needs to be enabled:
CONFIG_SPI_ZYNQMP_GQSPI
It depends on SPI_MASTER, SPI_MEM and HAS_DMA.

If required, enable MTD block devices support - MTD_BLKDEVS
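A minimal .config fragment covering these options might look like the following. This is a sketch; exact option dependencies vary by kernel version, and some of the MTD options below are only needed for the flash tests later on this page:

```
CONFIG_SPI=y
CONFIG_SPI_MEM=y
CONFIG_SPI_ZYNQMP_GQSPI=y
CONFIG_MTD=y
CONFIG_MTD_BLKDEVS=y
CONFIG_MTD_SPI_NOR=y
```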

Device-tree

Refer to spi-zynqmp-qspi.yaml for the complete description.
These are some specific points to be noted about the QSPI properties:
- parallel-memories - Define if flashes are connected in parallel.
- stacked-memories - Define if flashes are connected in stacked mode.
For details of the above connection modes, refer to spi-peripheral-props.yaml and spi-controller.yaml

Example:
The following example shows adding a QSPI node to the device tree in single mode.
spi@ff0f0000 {
    compatible = "xlnx,zynqmp-qspi-1.0", "xlnx,versal-qspi-1.0";
    status = "okay";
    clock-names = "ref_clk", "pclk";
    interrupts = <0x0 0xf 0x4>;
    interrupt-parent = <0x1>;
    num-cs = <0x1>;
    reg = <0x0 0xff0f0000 0x0 0x1000 0x0 0xc0000000 0x0 0x8000000>;
    #address-cells = <0x1>;
    #size-cells = <0x0>;
    power-domains = <0x13>;
    clocks = <0x14 0x14>;
 
    flash@0 {
        compatible = "m25p80";
        reg = <0x0>;
        #address-cells = <0x1>;
        #size-cells = <0x1>;
        spi-max-frequency = <0x66ff300>;
        spi-tx-bus-width = <0x4>;
        spi-rx-bus-width = <0x4>;
 
        partition@0 {
            label = "qspi-fsbl-uboot";
            reg = <0x0 0x4000000>;
        };
        partition@1e40000 {
            label = "qspi-linux";
            reg = <0x1e40000 0x1a00000>;
        };
        partition@3840000 {
            label = "qspi-device-tree";
            reg = <0x3840000 0x10000>;
        };
        partition@100000 {
            label = "qspi-rootfs";
            reg = <0x100000 0x500000>;
        };
        partition@620000 {
            label = "qspi-bitstream";
            reg = <0x620000 0x5e0000>;
        };
    };
};
 
* If the driver is required to operate only in I/O mode, add the has-io-mode property to the QSPI node in the device tree.
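As a sketch of the dual parallel case, the flash node gets one reg entry per chip select and the parallel-memories property. The values below are illustrative (two hypothetical 64 MB devices); see spi-peripheral-props.yaml for the authoritative binding:

```dts
flash@0 {
    compatible = "jedec,spi-nor";
    reg = <0x0>, <0x1>;                    /* both chip selects */
    parallel-memories = /bits/ 64 <0x4000000 0x4000000>; /* 64 MB each */
    spi-max-frequency = <108000000>;
    spi-tx-bus-width = <4>;
    spi-rx-bus-width = <4>;
};
```

For a stacked connection, the stacked-memories property is used in the same way, again taking one 64-bit size per device.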
 
 

Testing Procedure


This section details common tests using jffs2 and flashcp.
To test different flash sizes and configurations (single, parallel, stacked), the above device tree should be modified and the relevant hardware and design used.

QSPI flash testing with flashcp


#List the MTD partitions present and select a partition
cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00400000 00020000 "qspi-fsbl-uboot"
mtd1: 01a00000 00020000 "qspi-linux"
mtd2: 00010000 00020000 "qspi-device-tree"
mtd3: 00500000 00020000 "qspi-rootfs"
mtd4: 005e0000 00020000 "qspi-bitstream"
#Create a file to be written to the flash
dd if=/dev/urandom of=./sample.bin bs=1024 count=4096
4096+0 records in
4096+0 records out
4194304 bytes (4.0MB) copied, 3.227253 seconds, 1.2MB/s
#Write the file to the partition - this erases the partition, writes the file and verifies
flashcp -v ./sample.bin /dev/mtd0
Erasing block: 32/32 (100%)
Writing kb: 4088/4096 (99%)
Verifying kb: 4088/4096 (99%)
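The erase/write/verify pattern that flashcp performs can also be checked by hand. The sketch below reproduces that pattern on plain files (the hypothetical /tmp paths stand in for /dev/mtd0, so it runs without flash hardware):

```shell
# Create a test image, "flash" it by copying, then read back and compare
dd if=/dev/urandom of=/tmp/sample.bin bs=1024 count=16 2>/dev/null
cp /tmp/sample.bin /tmp/mtd0.img   # stands in for: flashcp -v ./sample.bin /dev/mtd0
cmp /tmp/sample.bin /tmp/mtd0.img && echo verified
```

On real hardware, the read-back side would use dd from /dev/mtd0 instead of the copy above.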

QSPI flash testing with jffs2

#List the MTD partitions present and select a partition
cat /proc/mtd
dev:    size   erasesize  name
mtd0: 00400000 00020000 "qspi-fsbl-uboot"
mtd1: 01a00000 00020000 "qspi-linux"
mtd2: 00010000 00020000 "qspi-device-tree"
mtd3: 05000000 00020000 "qspi-rootfs"
mtd4: 005e0000 00020000 "qspi-bitstream"


#Erase the whole partition with jffs2 cleanmarkers
flash_eraseall -j /dev/mtd3
Erasing 128 Kibyte @ 5c0000 - 97% complete. Cleanmarker written at 5c0000.
Erasing 128 Kibyte @ 5e0000 - 100% complete.
#create a directory
mkdir qspi_flash0
#Mount the partition to spi_flash0
mount -t jffs2 /dev/mtdblock3 /qspi_flash0
#Create a file to be written to the flash
dd if=/dev/urandom of=./sample.bin bs=1024 count=4096
4096+0 records in
4096+0 records out
4194304 bytes (4.0MB) copied, 3.227253 seconds, 1.2MB/s
#Write the file to the flash
cp ./sample.bin /qspi_flash0/
#Check the presence of the file(s) in spi_flash0
ls /qspi_flash0
sample.bin
#Unmount
umount /qspi_flash0
#Mount again - mtd3 to qspi_flash0
mount -t jffs2 /dev/mtdblock3 /qspi_flash0
#Compare the files - there should be no differences
diff ./sample.bin /qspi_flash0/sample.bin
#Unmount
umount /qspi_flash0
The data can be verified again after a power on reset if desired.

Expected Output

root@xilinx-zcu102-2017_3:~#
root@xilinx-zcu102-2017_3:~# cat /proc/mtd
dev: size erasesize name
mtd0: 00000000 00001000 "boot"
mtd1: 01e00000 00001000 "bootenv"
mtd2: 01e40000 00001000 "kernel"
mtd3: 04240000 00001000 "spare"
root@xilinx-zcu102-2017_3:~#
root@xilinx-zcu102-2017_3:~# dmesg | grep "spi"
[ 3.251532] m25p80 spi0.0: n25q512a (65536 Kbytes)
[ 3.256268] 4 ofpart partitions found on MTD device spi0.0
[ 3.261723] Creating 4 MTD partitions on "spi0.0":
[ 3.266495] 0x000000000000-0x000001e00000 : "boot"
[ 3.271737] 0x000001e00000-0x000001e40000 : "bootenv"
[ 3.277138] 0x000001e40000-0x000004240000 : "kernel"
[ 3.292288] 0x000004240000-0x000004000000 : "spare"
root@xilinx-zcu102-2017_3:~#


MTD Speed test:
root@xilinx-zcu102-2017_3:~#
root@xilinx-zcu102-2017_3:~# insmod /lib/modules/`uname -r`/kernel/drivers/mtd/tests/mtd_speedtest.ko dev=1
[ 171.224852] =================================================
[ 171.236148] mtd_speedtest: MTD device: 1
[ 171.241983] mtd_speedtest: not NAND flash, assume page size is 512 bytes.
[ 171.250738] mtd_speedtest: MTD device size 262144, eraseblock size 4096, page size 512, count of eraseblocks 64, pages per eraseblock 8, OOB size 0
[ 171.290736] random: crng init done
[ 173.685005] mtd_speedtest: testing eraseblock write speed
[ 174.021975] mtd_speedtest: eraseblock write speed is 778 KiB/s
[ 174.029997] mtd_speedtest: testing eraseblock read speed
[ 174.049392] mtd_speedtest: eraseblock read speed is 23272 KiB/s
[ 176.490258] mtd_speedtest: testing page write speed
[ 176.810197] mtd_speedtest: page write speed is 820 KiB/s
[ 176.817678] mtd_speedtest: testing page read speed
[ 176.865196] mtd_speedtest: page read speed is 6400 KiB/s
[ 179.355572] mtd_speedtest: testing 2 page write speed
[ 179.692173] mtd_speedtest: 2 page write speed is 778 KiB/s
[ 179.699762] mtd_speedtest: testing 2 page read speed
[ 179.732696] mtd_speedtest: 2 page read speed is 10240 KiB/s
[ 179.740320] mtd_speedtest: Testing erase speed
[ 182.248575] mtd_speedtest: erase speed is 102 KiB/s
[ 182.255472] mtd_speedtest: Testing 2x multi-block erase speed
[ 182.524665] mtd_speedtest: 2x multi-block erase speed is 980 KiB/s
[ 182.532856] mtd_speedtest: Testing 4x multi-block erase speed
[ 182.801950] mtd_speedtest: 4x multi-block erase speed is 980 KiB/s
[ 182.810074] mtd_speedtest: Testing 8x multi-block erase speed
[ 183.079170] mtd_speedtest: 8x multi-block erase speed is 980 KiB/s
[ 183.087321] mtd_speedtest: Testing 16x multi-block erase speed
[ 183.356782] mtd_speedtest: 16x multi-block erase speed is 980 KiB/s
[ 183.364980] mtd_speedtest: Testing 32x multi-block erase speed
[ 183.634492] mtd_speedtest: 32x multi-block erase speed is 980 KiB/s
[ 183.642689] mtd_speedtest: Testing 64x multi-block erase speed
[ 183.911867] mtd_speedtest: 64x multi-block erase speed is 980 KiB/s
[ 183.920022] mtd_speedtest: finished
[ 183.925333] =================================================
root@xilinx-zcu102-2017_3:~#

Performance Details

Single

Read: 32000 KB/s
Write: 759 KB/s

Dual

Read: 51200 KB/s
Write: 907 KB/s

Mainline Status

Not Mainlined

Change Log

  • 2023.2

    • Summary
      • Fix flash lock failure on ISSI flash parts
      • Fix timeout issue with Winbond multi-die flash
    • Commits
  • 2023.1

    • Summary
      • Replace all spi->chip_select and spi->cs_gpiod references with function call
      • Add stacked memories support as per new DT bindings
      • Add parallel memories support as per new DT bindings
    • Commits
  • 2022.2

    • Summary
      • Handle error for dma_set_mask
      • Fix call trace during QSPI suspend
    • Commits
  • 2022.1

    • Summary
      • Set Upper flash to Quad and 4Byte Mode in stacked connection mode
      • Reset SPI_MASTER_U_PAGE flag after chip erase operation
      • Fix kernel-doc warning
      • Avoid updating the flash_info struct in dual and stacked modes
      • Fix use-after-free in zynqmp_qspi_exec_op
      • Remove txfifo empty check during chip select/deselect
    • Commits

Related Links

© Copyright 2019 - 2022 Xilinx Inc. Privacy Policy