This page describes the steps necessary to get Fedora for RISC-V running, either on emulated or real hardware.


= Quickstart =


This section assumes that you have already set up libvirt/QEMU on your machine and you're familiar with them, so it only highlights the details that are specific to RISC-V; if your host still needs to be prepared, see the Host setup section below. It also assumes that you're running Fedora 40 as the host.


First of all, you need to download a disk image from https://dl.fedoraproject.org/pub/alt/risc-v/disk_images/Fedora-40/


As of this writing, the most recent image is <code>Fedora-Minimal-40-20240502.n.0-sda.raw.xz</code> so I will be using that throughout the section. If you're using a different image, you will need to adjust things accordingly.
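
For example, you could fetch it with <code>wget</code> (assuming the file sits directly under the directory linked above; adjust the name if you picked a different image):

<pre>
$ wget https://dl.fedoraproject.org/pub/alt/risc-v/disk_images/Fedora-40/Fedora-Minimal-40-20240502.n.0-sda.raw.xz
</pre>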


Once you've downloaded the image, start by uncompressing it:

<pre>
$ unxz Fedora-Minimal-40-20240502.n.0-sda.raw.xz
</pre>


You need to figure out the root filesystem's UUID so that you can later pass this information to the kernel. The <code>virt-filesystems</code> utility, part of the <code>guestfs-tools</code> package, takes care of that:
 
<pre>
$ virt-filesystems \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw \
    --long \
    --uuid \
  | grep ^btrfsvol: \
  | awk '{print $7}' \
  | sort -u
ae525e47-51d5-4c98-8442-351d530612c3
</pre>


Additionally, you need to extract the kernel and initrd from the disk image. The <code>virt-get-kernel</code> tool automates this step:
 
<pre>
$ virt-get-kernel \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw
download: /boot/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 -> ./vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64
download: /boot/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img -> ./initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img
</pre>


Now move all the files to a directory that libvirt has access to:
 
<pre>
$ sudo mv \
    Fedora-Minimal-40-20240502.n.0-sda.raw \
    vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 \
    initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img \
    /var/lib/libvirt/images/
</pre>


At this point, everything is ready and you can create the libvirt VM:

<pre>
$ virt-install \
    --import \
    --name fedora-riscv \
    --osinfo fedora40 \
    --arch riscv64 \
    --vcpus 4 \
    --ram 4096 \
    --boot uefi,kernel=/var/lib/libvirt/images/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64,initrd=/var/lib/libvirt/images/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img,cmdline='root=UUID=ae525e47-51d5-4c98-8442-351d530612c3 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi' \
    --disk path=/var/lib/libvirt/images/Fedora-Minimal-40-20240502.n.0-sda.raw \
    --network default \
    --tpm none \
    --graphics none
</pre>


Note how the UUID discovered earlier is included in the kernel command line. Quoting is also very important to get right.
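
If you want to double-check that the quoting survived intact, one way (using the VM name chosen above) is to look at the generated domain XML once the guest has been defined:

<pre>
$ virsh dumpxml fedora-riscv | grep -i cmdline
</pre>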


Disabling the TPM with <code>--tpm none</code> is only necessary as a temporary measure due to issues currently affecting swtpm in Fedora 40. If you want to, you can try omitting that option and see whether it works.
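
Once swtpm is fixed, it should be possible to add a TPM back to the existing guest; a hypothetical example (check the virt-xml(1) man page for the exact syntax supported by your version) would be:

<pre>
$ virt-xml fedora-riscv --add-device --tpm default
</pre>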


You should see a bunch of output coming from edk2 (the UEFI implementation we're using), followed by the usual kernel boot messages and, eventually, a login prompt. Please be patient, as the use of emulation makes everything significantly slower. Additionally, a SELinux relabel followed by a reboot will be performed as part of the import process, which slows things down further. Subsequent boots will be a lot faster.
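
If you detach from the console, or <code>virt-install</code> exits before the boot has finished, you can reattach at any time with:

<pre>
$ virsh console fedora-riscv
</pre>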


To shut down the VM, run <code>poweroff</code> inside the guest OS. To boot it up again, use:
 
<pre>
$ virsh start fedora-riscv --console
</pre>
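
You can detach from the serial console at any time with <code>Ctrl+]</code>; shutting the guest down from the host side can also be attempted with:

<pre>
$ virsh shutdown fedora-riscv
</pre>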


= UKI images =


Images built around a Unified Kernel Image (UKI) can be found in the same location but follow a different naming convention. As of this writing, the most recent one is <code>Fedora.riscv64-40-20240429.n.0.qcow2</code>.


The steps are similar to those described above, except that instead of dealing with kernel and initrd separately you need to extract a single file:


<pre>
$ virt-copy-out \
    -a Fedora.riscv64-40-20240429.n.0.qcow2 \
    /boot/efi/EFI/Linux/6.8.7-300.4.riscv64.fc40.riscv64.efi \
    .
</pre>


The <code>virt-install</code> command line is slightly different too; in particular, the <code>--boot</code> option becomes:
 
<pre>
--boot uefi,kernel=/var/lib/libvirt/images/6.8.7-300.4.riscv64.fc40.riscv64.efi,cmdline='root=UUID=57cbf0ca-8b99-45ae-ae9d-3715598f11c4 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi'
</pre>


These changes are enough to get the image to boot, but there are no passwords set up so you won't be able to log in. In order to address that, it's necessary to create a configuration file for <code>cloud-init</code>, for example with the following contents:


<pre>
#cloud-config

password: fedora_rocks!
chpasswd:
  expire: false
</pre>


Save this as <code>user-data.yml</code>, then add the following options to your <code>virt-install</code> command line:


<pre>
--controller scsi,model=virtio-scsi \
--cloud-init user-data=user-data.yml
</pre>


The configuration data should be picked up during boot, setting the default user's password as requested and allowing you to log in.
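
Putting the pieces together, a complete <code>virt-install</code> invocation for the UKI image could look like the following sketch (the VM name is arbitrary, the qcow2 image and the UKI are assumed to have been moved to <code>/var/lib/libvirt/images/</code>, and the UUID is the one shown above):

<pre>
$ virt-install \
    --import \
    --name fedora-riscv-uki \
    --osinfo fedora40 \
    --arch riscv64 \
    --vcpus 4 \
    --ram 4096 \
    --boot uefi,kernel=/var/lib/libvirt/images/6.8.7-300.4.riscv64.fc40.riscv64.efi,cmdline='root=UUID=57cbf0ca-8b99-45ae-ae9d-3715598f11c4 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi' \
    --disk path=/var/lib/libvirt/images/Fedora.riscv64-40-20240429.n.0.qcow2 \
    --network default \
    --controller scsi,model=virtio-scsi \
    --cloud-init user-data=user-data.yml \
    --tpm none \
    --graphics none
</pre>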

= Host setup =


The steps outlined above assume that your machine is already set up for running RISC-V VMs. If that's not the case, read on.


At the very least, the following packages will need to be installed:


<pre>
$ sudo dnf install \
    libvirt-daemon-driver-qemu \
    libvirt-daemon-driver-network \
    libvirt-daemon-config-network \
    libvirt-client \
    virt-install \
    qemu-system-riscv-core \
    edk2-riscv64
</pre>


This will result in a fairly minimal install, suitable for running headless VMs. If you'd rather have a fully-featured install, add <code>libvirt-daemon-qemu</code> and <code>libvirt-daemon-config-nwfilter</code> to the list. Be warned though: doing so will result in significantly more packages being dragged in, some of which you might not care about (e.g. support for several additional architectures).
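
If you go that route, the install command from above simply gains the two extra packages:

<pre>
$ sudo dnf install \
    libvirt-daemon-qemu \
    libvirt-daemon-config-nwfilter \
    libvirt-daemon-driver-qemu \
    libvirt-daemon-driver-network \
    libvirt-daemon-config-network \
    libvirt-client \
    virt-install \
    qemu-system-riscv-core \
    edk2-riscv64
</pre>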
 
In order to grant your user access to libvirt and allow it to manage VMs, it needs to be made a member of the corresponding group:


<pre>
$ sudo usermod -a -G libvirt $(whoami)
</pre>
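
The new membership only takes effect at your next login; after logging back in, you can confirm it with:

<pre>
$ id -nG | grep -w libvirt
</pre>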


Finally, the default libvirt URI needs to be configured:


<pre>
$ mkdir -p ~/.config/libvirt && \
  echo 'uri_default = "qemu:///system"' >~/.config/libvirt/libvirt.conf
</pre>


Now reboot the host. This is necessary because the changes to group membership won't be effective until the next login, and because the libvirt services are not automatically started during package installation.
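
If a full reboot is inconvenient, logging out and back in (so the group change applies) and starting the libvirt services by hand may be enough; with the modular libvirt daemons used by current Fedora that would look roughly like the sketch below, although the exact unit names depend on which drivers you installed:

<pre>
$ sudo systemctl enable --now virtqemud.socket virtnetworkd.socket
</pre>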


After rebooting and logging back in, <code>virsh</code> should work and the default network should be up:


<pre>
$ virsh uri
qemu:///system

$ virsh net-list
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes
</pre>


All done! You can now start creating RISC-V VMs.
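
Once a guest is running, a convenient way to find the IP address to SSH into is to ask libvirt for its DHCP lease:

<pre>
$ virsh domifaddr fedora-riscv
</pre>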
 