This document covers the setup, building and deployment of QEMU using Yocto/OE for a tertiary 'SDK' platform; specifically, the target platform is Windows via MinGW. This documentation assumes a reasonable amount of prior knowledge of Yocto/OE and QEMU.

Setup host for Yocto/OpenEmbedded

Set up your build host to be ready for building with Yocto/OE. Full information regarding dependencies, supported host platforms, etc. is available in the Yocto documentation:
Yocto Project Reference Manual - System Requirements
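As an illustration, on a Debian/Ubuntu host the essential packages listed in that manual can typically be installed with something like the following (the exact package set depends on your distribution and the Yocto release, so treat this as a sketch rather than a definitive list):
$ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib \
      build-essential chrpath socat cpio python python3 python3-pip \
      python3-pexpect xz-utils debianutils iputils-ping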

Preparing OpenEmbedded, bitbake and other layers

This build uses the openembedded-core, meta-mingw and meta-xilinx layers; additionally, bitbake is required.
First clone the openembedded-core layer, then clone bitbake and the other layers inside the openembedded-core directory.
$ git clone git://git.openembedded.org/openembedded-core
$ cd openembedded-core
$ git clone git://git.openembedded.org/bitbake
$ git clone https://git.yoctoproject.org/git/meta-mingw
$ git clone https://git.yoctoproject.org/git/meta-xilinx
NOTE: The repos above are cloned at their default branches, which are not release branches but the latest available code. This means that the steps here may need to be changed, or may even break, depending on the current upstream state.
Once a release with the relevant code is made, these steps will be updated to use it.

Setting up build configuration

Prepare a build environment using the oe-init-build-env script from the openembedded-core repository.
$ <path>/openembedded-core/oe-init-build-env <target build directory>
This generates some default configuration in the conf/bblayers.conf and conf/local.conf files. First update conf/bblayers.conf to specify the additional layers, e.g.:
...
BBLAYERS ?= " \
        <path-to-layers>/openembedded-core/meta \
        <path-to-layers>/openembedded-core/meta-mingw \
        <path-to-layers>/openembedded-core/meta-xilinx \
        "
Change/add the following options in conf/local.conf to configure the target SDK platform (x86_64-mingw32 is 64-bit Windows) as well as which packages to build for the target SDK (e.g. QEMU). In this case nativesdk-qemu-xilinx refers to the Xilinx QEMU fork built for the target SDK (aka SDKMACHINE); nativesdk-qemu would refer to mainline QEMU.
...
SDKMACHINE ?= "x86_64-mingw32"
...
TOOLCHAIN_HOST_TASK = "nativesdk-qemu-xilinx"

Building

The buildtools-tarball target was configured in the previous step to include the nativesdk QEMU target; building buildtools-tarball will fetch and build all required dependencies and QEMU itself.
$ bitbake buildtools-tarball
This process will take a significant amount of time to complete, as a number of components must be fetched and built, including a cross compiler targeting x86_64-mingw32.

Deploy

Acquiring binaries from build

The build will collect and generate an archive for the nativesdk target containing the QEMU binaries and all of their dependencies (*.dll files, etc.). This archive is deployed to the tmp-glibc/deploy directory, specifically:
tmp-glibc/deploy/sdk/x86_64-buildtools-nativesdk-standalone-nodistro.0.tar.xz
The archive can be deployed directly to a Windows machine and extracted with a tool like 7-Zip, or it can be straightforwardly converted into something more easily consumed on a Windows system.
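For example, with the 7-Zip command line tool (assuming 7z.exe is on PATH), the .tar.xz archive is extracted in two steps, first the xz compression and then the tar archive:
> 7z x x86_64-buildtools-nativesdk-standalone-nodistro.0.tar.xz
> 7z x x86_64-buildtools-nativesdk-standalone-nodistro.0.tar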
The binaries and data files are all located in the sysroots/x86_64-oesdk-mingw32 directory.
Note though that by default the Xilinx QEMU binaries are populated into the usr/bin/qemu-xilinx directory and its data files into the usr/share/qemu-xilinx directory, so as not to interfere with the mainline QEMU binaries/data. The Xilinx QEMU binaries should be moved/copied into the same directory as the *.dll files (or vice versa), or the PATH environment variable should be used to point at the directory containing the *.dll files.
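As an illustration, assuming the archive was extracted to C:\qemu-sdk (a path chosen purely for this example), the directory containing the *.dll files can be added to PATH from the Command Prompt before invoking a Xilinx QEMU binary:
> set PATH=C:\qemu-sdk\sysroots\x86_64-oesdk-mingw32\usr\bin;%PATH%
> C:\qemu-sdk\sysroots\x86_64-oesdk-mingw32\usr\bin\qemu-xilinx\qemu-system-aarch64.exe -version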

Notes when using binaries on Windows

If the QEMU data directory is not located as a subdirectory of the directory containing the QEMU binaries, QEMU's -L option will need to be used to point at the location of the data files.
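For example, using the hypothetical extraction path from above:
> qemu-system-aarch64.exe -L C:\qemu-sdk\sysroots\x86_64-oesdk-mingw32\usr\share\qemu-xilinx <other options>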
When QEMU is built with support for SDL, two binaries will exist for each system target, e.g. qemu-system-*w.exe and qemu-system-*.exe. The *w.exe binaries are marked as GUI programs and will execute without using the Windows console. Note also that on older Windows platforms the *w.exe binaries are required when using graphics features via SDL.

Optional Setup

Using a local QEMU source tree via EXTERNALSRC

When developing changes or working against the QEMU source git repository, it can be useful to build that external source tree from within the Yocto/OE environment. This can be achieved with the externalsrc.bbclass. Adding the following to conf/local.conf will point the QEMU recipe at the local source. By default the QEMU recipe builds with separate source and build directories, so the build output will still be contained in the OE tmpdir.
# Enable use of externalsrc
INHERIT += "externalsrc"
 
# Set up to use external source for only the nativesdk version of Xilinx QEMU
EXTERNALSRC_pn-nativesdk-qemu-xilinx = "<path to qemu git repository>"

Limiting the targets QEMU builds

The target 'guest' architectures that are built for QEMU can be limited to a subset using the QEMU_TARGETS variable. This can be set in conf/local.conf. An example of this would be:
QEMU_TARGETS = "aarch64 microblazeel"

License compliance

Yocto has built-in tooling for this; it is fully documented in the Yocto Project Development Manual:
Maintaining Open Source License Compliance During Your Product's Lifecycle
A simple setup that enables the output of license files and source content can be achieved with the following in conf/local.conf:
# archive the source code
INHERIT += "archiver"
ARCHIVER_MODE[src] = "original"
# populate licenses in output and in the images
COPY_LIC_MANIFEST = "1"
COPY_LIC_DIRS = "1"
LICENSE_CREATE_PACKAGE = "1"
Build or rebuild the desired target, e.g. bitbake buildtools-tarball. The licenses will then be populated into the following directories:

Licenses output

The licenses for each package are generated and output in the deploy/licenses directory, grouped by PN. For each package a recipeinfo file is deployed which contains the license expression and version information about the recipe. Additionally the license files themselves are included. A typical output would look similar to:
* nativesdk-qemu-xilinx
    * COPYING
    * COPYING.LIB
    * generic_GPLv2
    * generic_LGPLv2.1
    * recipeinfo

Sources output

The sources for each build (combination of package and architecture target) are generated and output in the deploy/source directory. Sources are grouped first by target architecture and then by PN. Multiple files exist where there are multiple sources required for a particular package. A typical output would look similar to:
* x86_64-oesdk-mingw32
    * nativesdk-qemu-xilinx-2017.1-r0
        * nativesdk-qemu-xilinx-2017.1-r0.tar.gz

License manifest for SDK deployables

Currently OpenEmbedded-Core (morty/2.2) does not generate license manifests for SDK deployables like buildtools-tarball. However, the information that is generated can be used to build a manifest. The example bash command below, when executed from the deploy/ directory, will output the licenses used for each host package shipped in the buildtools-tarball deployable.
$ for i in $(cat sdk/x86_64-buildtools-nativesdk-standalone-nodistro.0.host.manifest | cut -d" " -f 1); do \
  if [ -e licenses/$i ]; then \
    echo $i: $(cat licenses/$i/recipeinfo | head -1); \
  fi; \
done
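The output is one line per package. A typical line would look similar to the following (the exact license expression depends on the recipes built, so this is illustrative only):
nativesdk-qemu-xilinx: LICENSE: GPLv2 & LGPLv2.1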

Running the binaries on Windows

To run the binaries on Windows you need to run them from a command line environment, either Command Prompt or PowerShell. You will need to pass in all of the command line arguments as usual, but with Windows-compatible file paths and options.

If using multi-arch QEMU, you will also need to add one of the two extra options below to each instance of QEMU. Make sure that the instance of QEMU that is started first has the server property and that the second instance does not.
-global remote-port.chardesc=tcp:127.0.0.1:4000,server
-global remote-port.chardesc=tcp:127.0.0.1:4000
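As an illustrative sketch (machine-specific options elided, and the aarch64/microblazeel pairing chosen only as an example), the two instances might be started as follows, server first:
> qemu-system-aarch64.exe <machine options> -global remote-port.chardesc=tcp:127.0.0.1:4000,server
> qemu-system-microblazeel.exe <machine options> -global remote-port.chardesc=tcp:127.0.0.1:4000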