This page describes the steps necessary to get Fedora for RISC-V running, either on emulated or real hardware.

Quickstart

This section assumes that you have already set up libvirt/QEMU on your machine and you're familiar with them, so it only highlights the details that are specific to RISC-V. It also assumes that you're running Fedora 40 as the host.

First of all, you need to download a disk image from https://dl.fedoraproject.org/pub/alt/risc-v/disk_images/Fedora-40/

As of this writing, the most recent image is Fedora-Minimal-40-20240502.n.0-sda.raw.xz, so I will be using that throughout this section. If you're using a different image, you will need to adjust things accordingly.
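
For example, assuming the file sits directly under that directory (the exact path below is illustrative, so adjust it to match the actual listing), it can be fetched with any HTTP client:

$ wget https://dl.fedoraproject.org/pub/alt/risc-v/disk_images/Fedora-40/Fedora-Minimal-40-20240502.n.0-sda.raw.xz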

Once you've downloaded the image, start by uncompressing it:

$ unxz Fedora-Minimal-40-20240502.n.0-sda.raw.xz

You need to figure out the root filesystem's UUID so that you can later pass this information to the kernel. The virt-filesystems utility, part of the guestfs-tools package, takes care of that:

$ virt-filesystems \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw \
    --long \
    --uuid \
  | grep ^btrfsvol: \
  | awk '{print $7}' \
  | sort -u
ae525e47-51d5-4c98-8442-351d530612c3

Additionally, you need to extract the kernel and initrd from the disk image. The virt-get-kernel tool automates this step:

$ virt-get-kernel \
    -a Fedora-Minimal-40-20240502.n.0-sda.raw
download: /boot/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 -> ./vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64
download: /boot/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img -> ./initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img

Now move all the files to a directory that libvirt has access to:

$ sudo mv \
    Fedora-Minimal-40-20240502.n.0-sda.raw \
    vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64 \
    initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img \
    /var/lib/libvirt/images/
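
If SELinux is enforcing on the host, the moved files keep the label they were created with; restoring the default context for the images directory is a reasonable precaution (libvirt can often relabel things on its own, so treat this as optional):

$ sudo restorecon -Rv /var/lib/libvirt/images/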

At this point, everything is ready and you can create the libvirt VM:

$ virt-install \
    --import \
    --name fedora-riscv \
    --osinfo fedora40 \
    --arch riscv64 \
    --vcpus 4 \
    --ram 4096 \
    --boot uefi,kernel=/var/lib/libvirt/images/vmlinuz-6.8.7-300.4.riscv64.fc40.riscv64,initrd=/var/lib/libvirt/images/initramfs-6.8.7-300.4.riscv64.fc40.riscv64.img,cmdline='root=UUID=ae525e47-51d5-4c98-8442-351d530612c3 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi' \
    --disk path=/var/lib/libvirt/images/Fedora-Minimal-40-20240502.n.0-sda.raw \
    --network default \
    --tpm none \
    --graphics none

Note how the UUID discovered earlier is included in the kernel command line. Also pay close attention to the quoting: the cmdline value contains spaces, so it has to be quoted exactly as shown.

Disabling the TPM with --tpm none is only necessary as a temporary measure due to issues currently affecting swtpm in Fedora 40. If you want to, you can try omitting that option and see whether it works.

You should see a bunch of output coming from edk2 (the UEFI implementation we're using), followed by the usual kernel boot messages and, eventually, a login prompt. Please be patient, as the use of emulation makes everything significantly slower. Additionally, an SELinux relabel followed by a reboot will be performed as part of the import process, which slows things down further. Subsequent boots will be a lot faster.

To shut down the VM, run poweroff inside the guest OS. To boot it up again, use:

$ virsh start fedora-riscv --console
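
If you detach from the serial console (the escape character is ^]) while leaving the VM running, you can reattach to it later with:

$ virsh console fedora-riscv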


UKI images

UKI (Unified Kernel Image) images can be found in the same location but follow a different naming convention. As of this writing, the most recent image is Fedora.riscv64-40-20240429.n.0.qcow2.

The steps are similar to those described above, except that instead of dealing with kernel and initrd separately you need to extract a single file:

$ virt-copy-out \
    -a Fedora.riscv64-40-20240429.n.0.qcow2 \
    /boot/efi/EFI/Linux/6.8.7-300.4.riscv64.fc40.riscv64.efi \
    .

The virt-install command line is slightly different too; in particular, the --boot option becomes:

--boot uefi,kernel=/var/lib/libvirt/images/6.8.7-300.4.riscv64.fc40.riscv64.efi,cmdline='root=UUID=57cbf0ca-8b99-45ae-ae9d-3715598f11c4 ro rootflags=subvol=root rhgb LANG=en_US.UTF-8 console=ttyS0 earlycon=sbi'
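
The root filesystem UUID in the cmdline is found the same way as before, by pointing virt-filesystems at the qcow2 image; the value shown above is simply the one from the image available at the time of writing:

$ virt-filesystems \
    -a Fedora.riscv64-40-20240429.n.0.qcow2 \
    --long \
    --uuid \
  | grep ^btrfsvol: \
  | awk '{print $7}' \
  | sort -u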

These changes are enough to get the image to boot, but no passwords are set up, so you won't be able to log in. To address that, create a configuration file for cloud-init, for example with the following contents:

#cloud-config

password: fedora_rocks!
chpasswd:
  expire: false

Save this as user-data.yml, then add the following options to your virt-install command line:

--controller scsi,model=virtio-scsi \
--cloud-init user-data=user-data.yml

The configuration data should be picked up during boot, setting the default user's password as requested and allowing you to log in.
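
Assuming the image follows the usual Fedora cloud-init defaults (an assumption, not something verified against this particular image), the account in question is the fedora user, so logging in would look roughly like this:

fedora-riscv login: fedora
Password: <the fedora_rocks! value from user-data.yml>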


Host setup

The steps outlined above assume that your machine is already set up for running RISC-V VMs. If that's not the case, read on.

At the very least, the following packages will need to be installed:

$ sudo dnf install \
    libvirt-daemon-driver-qemu \
    libvirt-daemon-driver-network \
    libvirt-daemon-config-network \
    libvirt-client \
    virt-install \
    qemu-system-riscv-core \
    edk2-riscv64

This will result in a fairly minimal install, suitable for running headless VMs. If you'd rather have a fully-featured install, add libvirt-daemon-qemu and libvirt-daemon-config-nwfilter to the list. Be warned though: doing so will result in significantly more packages being dragged in, some of which you might not care about (e.g. support for several additional architectures).
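
In that case, the extra packages can simply be installed on top of the minimal set, for example:

$ sudo dnf install libvirt-daemon-qemu libvirt-daemon-config-nwfilter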

To grant your user access to libvirt and allow it to manage VMs, make it a member of the corresponding group:

$ sudo usermod -a -G libvirt $(whoami)

Finally, the default libvirt URI needs to be configured:

$ mkdir -p ~/.config/libvirt && \
  echo 'uri_default = "qemu:///system"' >~/.config/libvirt/libvirt.conf

Now reboot the host. This is necessary because the changes to group membership won't be effective until the next login, and because the libvirt services are not automatically started during package installation.

After rebooting and logging back in, virsh should work and the default network should be up:

$ virsh uri
qemu:///system

$ virsh net-list
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes

All done! You can now start creating RISC-V VMs.
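
As a final sanity check, listing all domains should work without errors; on a freshly configured host the list will simply be empty (the exact output formatting may vary between virsh versions):

$ virsh list --all
 Id   Name   State
--------------------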