Fedora People

Fedora Workstation 40 – what are we working on

Posted by Christian F.K. Schaller on March 28, 2024 06:56 PM
Fedora Workstation 40 Beta has just come out, so I thought I'd share a bit about some of the things we are currently working on for Fedora Workstation, as well as some major changes coming in from the community.

Flatpak

Flatpaks have been a key part of our strategy for desktop applications for a while now, and we are working on a multitude of things to make Flatpak an even stronger technology going forward. Christian Hergert is working on figuring out how applications that require system daemons can work with Flatpaks, using his own Sysprof project as the proof-of-concept application. The general idea here is to rely on the work that has happened in systemd around sysext/confext/portablectl, trying to figure out how we can get a system service installed from a Flatpak and the necessary bits wired up properly. The other part of this work, figuring out how to give applications permissions that today are handled with udev rules, is being worked on by Hubert Figuière, based on earlier work by Georges Stavracas on behalf of the GNOME Foundation, thanks to sponsorship from the Sovereign Tech Fund. So hopefully we will get both of these important issues resolved soon. Kalev Lember is working on polishing up the Flatpak support in Foreman (and Satellite) to ensure there are good tools for managing Flatpaks when you have a fleet of systems to manage, building on the work of Stephan Bergman. Finally, Jan Horak and Jan Grulich are working hard on polishing up the experience of using Firefox from a fully sandboxed Flatpak. This work is mainly about working with the upstream community to get some needed portals over the finish line and polishing up some UI issues in Firefox, like this one.

Toolbx

Toolbx, our project for handling developer containers, is picking up pace, with Debarshi Ray currently working on getting full NVIDIA binary driver support for the containers. One of our main goals for Toolbx at the moment is making it a great tool for AI development, so getting the NVIDIA and CUDA support squared away is critical. Debarshi has also spent quite a lot of time cleaning up the Toolbx website, providing easier access to the documentation there and updating it. We are also moving to the new Ptyxis (formerly Prompt) terminal application created by Christian Hergert in Fedora Workstation 40. This gives us a great GTK4 terminal, and we also believe we will be able to further integrate Toolbx and Ptyxis going forward, creating an even better user experience.

Nova

As you probably know, we have been the core maintainers of the Nouveau project for years, keeping this open source upstream NVIDIA GPU driver alive. We plan to keep doing that, but the opportunities offered by the availability of the new GSP firmware for NVIDIA hardware mean we should now be able to offer a full-featured and performant driver. Co-hosting both the old and the new way of doing things in the same upstream kernel driver has turned out to be counterproductive, though, so we are now looking to split the driver in two. For older, pre-GSP NVIDIA hardware we will keep the old Nouveau driver around as is. For GSP-based hardware we are launching a new driver called Nova. It is important to note that Nova is thus not a competitor to Nouveau, but a continuation of it. The idea is that the new driver will be primarily written in Rust, based on work already done in the community. We are also evaluating whether some of the existing Nouveau code should be copied into the new driver, since we already spent quite a bit of time trying to integrate GSP there. Worst case, if we can't reuse code, we will use the lessons learned from Nouveau with GSP to implement the support in Nova more quickly. Contributing to this effort from our team at Red Hat are Danilo Krummrich, Dave Airlie, Lyude Paul, Abdiel Janulgue and Phillip Stanner.

Explicit Sync and VRR

Another exciting development that has been a priority for us is explicit sync, which is especially critical for the NVIDIA driver, but which might also provide performance improvements for other GPU architectures going forward. So a big thank you to Michel Dänzer, Olivier Fourdan, Carlos Garnacho, the NVIDIA folks, Simon Ser and the rest of the community for working on this. This work has just finished upstream, so we will look at backporting it into Fedora Workstation 40. Another major Fedora Workstation 40 feature is experimental support for Variable Refresh Rate, or VRR, in GNOME Shell. The feature was mostly developed by community member Dor Askayo, but Jonas Ådahl, Michel Dänzer, Carlos Garnacho and Sebastian Wick have all contributed code reviews and fixes. In Fedora Workstation 40 you need to enable it using the command

gsettings set org.gnome.mutter experimental-features "['variable-refresh-rate']"
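
If you want to check that the setting took effect, or back it out later, the matching gsettings calls below should do it (note that the reset clears any other experimental features you may have enabled on that key, not just VRR):

gsettings get org.gnome.mutter experimental-features
gsettings reset org.gnome.mutter experimental-features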

PipeWire

I already covered PipeWire in my post a week ago, but to quickly summarize here too: using PipeWire for video handling is now finally getting to the stage where it is actually happening. Both Firefox and OBS Studio now come with PipeWire support, and hopefully we can also get Chromium and Chrome to start taking a serious look at merging the patches for this soon. What's more, Wim spent time fixing FireWire FFADO bugs, so hopefully this makes FireWire equipment fully usable and performant with PipeWire for our pro-audio community users. Wim did point out, when I spoke to him, that the FFADO drivers had obviously never had any consumer other than JACK, so when he tried to allow for more functionality the drivers quickly broke down. Wim has therefore limited the feature set of the PipeWire FFADO module to be an exact match of how these drivers were being used by JACK. If the upstream kernel maintainer is able to fix the issues Wim found, we could look at providing a fuller feature set. In Fedora Workstation 40, the de-duplication support for v4l vs. libcamera devices should work as soon as we update WirePlumber to the new 0.5 release.

To hear more about PipeWire and the latest developments be sure to check out this interview with Wim Taymans by the good folks over at Destination Linux.

Remote Desktop

Another major feature landing in Fedora Workstation 40, which Jonas Ådahl and Ray Strode have spent a lot of effort on, is finalizing the remote desktop support for GNOME on Wayland. There has been support for remote connections to already logged-in sessions for a while, but with these updates you can do the login remotely too, so the session no longer needs to already be running on the remote machine. This work will also enable third-party solutions to do remote logins on Wayland systems, so while I am not at liberty to mention names, be on the lookout for more third-party Wayland remoting software becoming available this year.

This work is also important for helping Anaconda with its Wayland transition, as remote graphical install is an important feature there. What you should see is Anaconda using GNOME Kiosk mode and the GNOME remote support to handle this going forward, thus enabling a Wayland-native Anaconda.

HDR

Another feature we have been working on for a long time is HDR, or High Dynamic Range. We wanted to do it properly, and we also needed to work with a wide range of partners in the industry to make it happen. So over the last year we have been contributing to improve various standards around color handling and acceleration to prepare the ground, and working on and contributing to key libraries needed to, for instance, gather the required information from GPUs and screens. Things are coming together now, and Jonas Ådahl and Sebastian Wick are going to focus on getting Mutter HDR-capable. Once that work is done we are by no means finished, but it should put us close to at least being able to run some simple use cases (like some fullscreen applications) while we work out the finer points of running SDR and HDR applications side by side, for instance.

PyTorch

We want to make Fedora Workstation a great place to do AI development and testing. The first step in that effort is packaging up PyTorch and making sure it can have working hardware acceleration out of the box. Tom Rix has been leading that effort on our end, and you will see the first fruits of that labor in Fedora Workstation 40, where PyTorch should work with GPU acceleration on AMD hardware (ROCm) out of the box. We hope and expect to be able to provide the same for NVIDIA and Intel graphics eventually too, but this is definitely a step-by-step effort.

Dors/Cluc and DevConf.cz: Two open source events worth visiting

Posted by Bogomil Shopov - Bogo on March 28, 2024 11:45 AM

I hit 10k reads (posted on three platforms) on my article about the three events you should visit in Bulgaria focused on Free and open-source software (FOSS). I decided to expand your knowledge with two more that I can recommend.

I know that 10k hits are nothing, but I am proud of the results for such a niche topic.

Here are my next two proposals:

Dors/Cluc

Zagreb, Croatia
15-19 May, 2024

Let me start by introducing you to a great event that has been organized for over 30 years—yes, 30! The organizers are proud that this is Europe’s oldest conference on GNU/Linux and free software.

It’s held in Zagreb, Croatia, and offers many ways to learn new stuff – Sessions, workshops, small mini-events on a particular topic, and many ways to network and meet people.

Why don’t you combine your thirst for knowledge with a trip to Zagreb where, apart from the food, drinks, and history, you will understand why there are chandeliers from a Las Vegas Casino in a Cathedral?

The team is running a 30% discount campaign for the next few days.

DevConf

Brno, Czechia
13-15 June, 2024

When I started living in the Czech Republic, the people from Prague tried to convince me that the city of Brno was a hoax and that it didn’t exist. I am still not convinced, and I plan to go and visit the DevConf this year to change my mind :)

Apart from the obvious joke, DevConf in Brno has been held almost every year since 2009. The topics may vary through the years, but open source is always the hero. The primary sponsor is Red Hat, and you might see more focus on technologies and principles related to that software company, but this is usually okay.

This year’s conference will last three days and include ten different themes, including the good ol’ AI.

Attendance is free of charge, and no registration is required. Visit Brno, ensure it’s real, meet many new people, and learn something new.


P.S. I am not associated with any of the events; I just want to support their enormous effort.

A new provisioning tool built with mgmt

Posted by James Just James on March 27, 2024 08:58 PM
Today I’m announcing a new type of provisioning tool. This is both the culmination of a long road, and the start of a new era. Please read on for all of the details. Feel free to skip to the relevant sections you’re interested in if you don’t want all of the background. Ten years: The vision for this specific tool started around ten years ago. Previously, as a sysadmin, I spent a lot of my time using a configuration management tool called puppet.

Alerting on One Identity Cloud PAM Essentials logs using syslog-ng

Posted by Peter Czanik on March 27, 2024 01:22 PM

One Identity Cloud PAM Essentials is the latest security product by One Identity. It provides asset management as well as secure and monitored remote access for One Identity Cloud users to hosts on their local network. I had a chance to test PAM Essentials while it was still in development. While doing so, I also integrated it with syslog-ng.

From my previous blog, you could learn what PAM Essentials is, and how you can collect its logs using syslog-ng. This blog will show you how to work with the collected log messages and create alerts when somebody connects to a host on your local network using PAM Essentials.

https://www.syslog-ng.com/community/b/blog/posts/alerting-on-one-identity-cloud-pam-essentials-logs-using-syslog-ng

syslog-ng logo

Build custom images for Testing Farm

Posted by Fedora Magazine on March 27, 2024 08:00 AM

You may know Testing Farm from the article written by David Kornel and Jakub Stejskal, which highlighted the primary advantages of this testing system. You should review that earlier article, as this one will not go through the basics of Testing Farm usage. It will only delve into the reasons for using your own custom images and explore potential automation with HashiCorp Packer.

AWS images

The Testing Farm automatically deploys Amazon Web Services (AWS) machines using a default set of Amazon Machine Images (AMIs) available to users. This curated subset includes popular community images such as CentOS and Fedora Linux. While these AMIs typically represent bare operating systems, they don’t have to remain in that state.

Think of these AMIs as analogous to container images. You have the flexibility to embed all installation and configuration steps directly into the image itself. By doing so, you can preconfigure the environment, ensuring that everything is ready before the actual Testing Farm job begins.

The Trade-Off

However, there’s a trade-off. While customizing AMIs streamlines the process, building them manually can be challenging and time-consuming. The effort involved in creating a well-prepared AMI is substantial.

In an upcoming section of this article, we’ll delve into a practical solution. We’ll explore how to use Hashicorp Packer, a powerful tool for creating machine images, and illustrate its application in the context of the Debezium project.

Benefits of custom images for Testing Farm

There might be some confusion surrounding the rationale for creating custom images, especially considering the investment of time, effort, and resources. However, this question is highly relevant, and the answer lies in a straightforward concept: time efficiency.

Imagine you are testing web applications within containers. You must deploy the database, web server, and other supporting systems each time you perform testing. For instance, when testing against an Oracle database, the container image alone can be nearly 10 GB. Pulling this image for every pull request (PR) takes several minutes.

By building a custom Amazon Machine Image (AMI) that includes this giant image, you eliminate the need to pull it repeatedly. This initial investment pays off significantly in the long run. Additionally, there’s another advantage: reducing unnecessary information exposure to developers. With a preconfigured system, developers can focus solely on the tests without being burdened by extraneous details.

In summary, custom images streamline the testing process, enhance efficiency, and provide a cleaner development experience for your team. Of course, this solution might not be ideal for all use cases and should be used only if it adds value to your testing scenarios. For example, if you are testing packages for Fedora Linux or CentOS and integration with it, you should always use the latest available image on Testing Farm to mitigate the risks associated with a custom image being outdated.

Automate the process with Packer

The trade-off when considering using custom images is that you must create them. This requirement might discourage some developers from pursuing this route. However, there’s good news: Packer significantly improves this experience.

Initially developed by HashiCorp, Packer is a powerful tool for creating consistent Virtual Machine Images for various platforms. AWS (Amazon Web Services) is one of the supported platforms. Virtual Machine images used in an AWS environment are called AMI (Amazon Machine Images).

Image builds are described in HCL format, and Packer provides a rich set of provisioners. These provisioners act as plugins, allowing developers to execute specific tools within the machine from which Packer generates the image snapshot.

Among the most interesting provisioners are:

  • File — Copies files from the current machine to the EC2 instance.
  • Shell — Executes shell scripts within the machine.
  • Ansible — Enables direct execution of Ansible playbooks on the machine.

In the sections that follow, we’ll explore practical examples and how Packer can enhance your image-building process.

Debezium use-case

So far, we have discussed the reasons for using custom images and why you should automate the build, but how can you do that? Let’s showcase it on an actual project! We onboarded the Debezium project onto Testing Farm last year. Debezium is the de facto industry standard for CDC (Change Data Capture) streaming. Debezium currently supports about fourteen databases, each with a different setup and hardware needs, but if there is one common trait, it is memory consumption. If those databases run in a container with a minimal amount of RAM (Random Access Memory), they tend to do things like flushing to disk, which is very annoying for testing because you need to rerun tests after failures, add longer wait times, or apply other workarounds.

Because of that, we have moved part of the testing to Testing Farm, where we ask for sufficient hardware to ensure the databases have enough disk space and RAM so the tests are “stable”. One of the supported databases for Debezium is Oracle DBMS. As pointed out earlier, Oracle’s container images are quite large, so we had to build an AMI image to give our community the fastest possible feedback on PRs.

First, we started working on an Ansible playbook that installs everything necessary to run the database and our test suite. The playbook looks like this:

# oracle_docker.yml

---

- name: Debezium testing environment playbook
  hosts: 'all'
  become: yes
  become_method: sudo

  tasks:
  - name: Add Docker-ce repository
    yum_repository:
      name: docker-ce
      description: Repository from Docker
      baseurl: https://download.docker.com/linux/centos/8/x86_64/stable
      gpgcheck: no

  - name: Update all packages
    yum:
      name: "*"
      state: latest
      exclude: ansible*

  - name: Install dependencies
    yum:
      name: ['wget', 'java-17-openjdk-devel', 'make', 'git', 'zip', 'coreutils', 'libaio']
      state: present

  - name: Install Docker dependencies
    yum:
      name: ['docker-ce', 'docker-ce-cli', 'containerd.io', 'docker-buildx-plugin']
      state: present

  - name: Unzip oracle libs
    unarchive:
      src: /tmp/oracle-libs.zip
      dest: /root/
      remote_src: true

  - name: Install Oracle sqlplus
    shell: |
      wget https://download.oracle.com/otn_software/linux/instantclient/2113000/oracle-instantclient-basic-21.13.0.0.0-1.el8.x86_64.rpm -O sqlplus-basic.rpm
      wget https://download.oracle.com/otn_software/linux/instantclient/2113000/oracle-instantclient-sqlplus-21.13.0.0.0-1.el8.x86_64.rpm -O sqlplus-adv.rpm
      rpm -i sqlplus-basic.rpm
      rpm -i sqlplus-adv.rpm

  - name: Prepare Oracle script
    copy:
      src: /tmp/install-oracle-driver.sh
      dest: /root/install-oracle-driver.sh
      remote_src: true

  - name: Make executable
    shell: chmod +x /root/install-oracle-driver.sh

  - name: Install maven
    shell: |
      mkdir -p /usr/share/maven /usr/share/maven/ref
      curl -fsSL -o /tmp/apache-maven.tar.gz https://apache.osuosl.org/maven/maven-3/3.8.8/binaries/apache-maven-3.8.8-bin.tar.gz
      tar -xzf /tmp/apache-maven.tar.gz -C /usr/share/maven --strip-components=1
      rm -f /tmp/apache-maven.tar.gz
      ln -s /usr/share/maven/bin/mvn /usr/bin/mvn

  - name: Start docker daemon
    systemd:
      name: docker
      state: started
      enabled: true

  - name: Pull Oracle images from quay
    shell: |
      docker pull custom.repo/oracle:{{ oracle_tag }}
    when: use_custom|bool == true

  - name: Pull Oracle images from official repository
    shell: |
      docker pull container-registry.oracle.com/database/free:23.3.0.0
    when: use_custom|bool == false

  - name: Logout from registries
    shell: |
      docker logout quay.io
    when: use_quay|bool == true

As you can see, this playbook does everything:

  • Docker installation
  • SQLPlus installation
  • Running some side Oracle init script
  • Installing Maven and all other test suite dependencies
  • Pulling the image

Once all those steps are finished, the machine should be fully prepared to run the test suite and start the database, and we can create an image from a snapshot of it. OK, now it’s time to look at the Packer descriptor.

# ami-build.pkr.hcl
packer {
  required_plugins {
    amazon = {
      source  = "github.com/hashicorp/amazon"
      version = "~> 1.2.6"
    }
    ansible = {
      source  = "github.com/hashicorp/ansible"
      version = "~> 1"
    }
  }
}

variable "aws_access_key" {
  type      = string
  sensitive = true
}

variable "aws_secret_key" {
  type      = string
  sensitive = true
}

variable "aws_region" {
  type    = string
  default = "us-east-2"
}

variable "aws_instance_type" {
  type    = string
  default = "t3.small"
}

variable "aws_ssh_username" {
  type    = string
  default = "centos"
}

variable "image_name" {
  type = string
}

variable "oracle_image" {
  type = string
}

variable "source_ami" {
  type    = string
  default = "ami-080baaeff069b7464"
}

variable "aws_volume_type" {
  type    = string
  default = "gp3"
}

source "amazon-ebs" "debezium" {
  access_key            = var.aws_access_key
  secret_key            = var.aws_secret_key
  source_ami            = var.source_ami
  region                = var.aws_region
  force_deregister      = true
  force_delete_snapshot = true
  instance_type         = var.aws_instance_type
  ssh_username          = var.aws_ssh_username
  ami_name              = var.image_name
  ami_users             = ["125523088429"]

  # choose the most free subnet which matches the filters
  # https://www.packer.io/plugins/builders/amazon/ebs#subnet_filter
  subnet_filter {
    filters = {
      "tag:Class": "build"
    }
    most_free = true
    random    = false
  }

  launch_block_device_mappings {
    device_name           = "/dev/sda1"
    delete_on_termination = "true"
    volume_type           = var.aws_volume_type
    volume_size           = 30
  }
}

build {
  sources = ["source.amazon-ebs.debezium"]
  name    = "debezium-oracle-packer"

  provisioner "file" {
    source      = "./provisioners/files/oracle-libs.zip"
    destination = "/tmp/oracle-libs.zip"
  }

  provisioner "file" {
    source      = "./provisioners/files/install-oracle-driver.sh"
    destination = "/tmp/install-oracle-driver.sh"
  }

  provisioner "shell" {
    script = "./provisioners/scripts/bootstrap.sh"
  }

  provisioner "ansible" {
    playbook_file   = "./provisioners/ansible/oracle_docker.yml"
    extra_arguments = ["-vv", "-e", "oracle_tag=${var.oracle_image}"]
    # Required workaround for Ansible 2.8+
    # https://www.packer.io/docs/provisioners/ansible/ansible#troubleshooting
    use_proxy = false
  }
}

The descriptor above contains all the information Packer needs to build the AMI. At the start you can see the definitions of all the variables; these are mostly just configuration or sensitive values. Next comes the configuration of the Amazon plugin (this is what enables the AMI build). Besides the usual configuration like secrets and regions, you must also pass source_ami. This field defines the base image for our build; for Debezium, we are using CentOS Stream 8.

The next important field is ssh_username. This field can be tricky because some distros have more than one possible username. For CentOS, it is usually centos or ec2-user. Be careful setting this, because debugging during the build process is challenging.

The last important thing, specifically regarding Testing Farm, is the ami_users field. This field contains an array of users with whom Packer will share the new AMI. This step is necessary to use your image in the Testing Farm environment.

The last part of the descriptor contains all the provisioners you want to run before the AMI is created. For Debezium, we just copy some libraries and init scripts, run the bootstrap script (which installs the initial dependencies, EPEL and Ansible; you can find it below), and trigger the Ansible playbook (shown above).

# bootstrap.sh
#!/bin/bash

set -ex

sudo yum install -y epel-release
sudo yum install -y ansible

Once all the provisioners and descriptors are complete, put them into the correct file structure. For Debezium, we use the following:

testing-farm-build
├── ami-build.pkr.hcl
└── provisioners
    ├── ansible
    │   ├── oracle_docker.yml
    │   └── variables.yml
    ├── files
    │   └── install-oracle-driver.sh
    └── scripts
        └── bootstrap.sh

Then, you just have to step into the root directory (testing-farm-build) and start the build. You can begin the Packer build with the following command:

packer build -var="aws_secret_key=${AWS_SECRET_ACCESS_KEY}" -var="aws_access_key=${AWS_ACCESS_KEY_ID}" -var="image_name=${AMI_NAME}" -var="aws_ssh_username=centos" . 

You can pass whatever variables you want directly on the command line. If you do not want to export something as an environment variable, simply leave it out here and Packer will ask you for it during the build process.
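
Before kicking off a full build, it can also be worth letting Packer sanity-check the template. A small sketch, assuming the same template as above (the dummy variable values are just placeholders to satisfy the required variables; validation does not create anything in AWS):

# install the required plugins declared in the template, then validate it without building
packer init .
packer validate -var="aws_access_key=dummy" -var="aws_secret_key=dummy" -var="image_name=test" -var="oracle_image=latest" .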

Once your AMI is built, you are only one step away from using your image in the Testing Farm environment. You have to open a PR on the Testing Farm infrastructure repository and make the following additions:

  • Add a new regex matcher for your AMI names to the image map – for example.
  • Add your AWS account ID as a new image owner from which Testing Farm will gather images – for example.

After Testing Farm maintainers merge those PRs, your images will be available for provisioning in a couple of minutes. Once they are ready, you should be able to see them here.

Conclusion

Building your custom image for the Testing Farm unlocks a world of possibilities for enhancing your testing workflow. Creating a tailored image can accelerate test runs and provide targeted feedback to your community. And best of all, the entire image build process can be seamlessly automated using Packer with minimal effort. This article should be a helpful guide for fellow Testing Farm users looking to optimize their experience. If you have any questions or need assistance during setup, feel free to reach out — I’m here to help!

Fedora Linux 40 Beta is available for testing

Posted by Charles-Antoine Couret on March 26, 2024 02:28 PM

This Tuesday, March 26, the Fedora Project community will be delighted to learn of the availability of the Beta version of Fedora Linux 40.

Despite the stability risks of a Beta release, it is important to test it! By reporting bugs now, you will discover the new features before everyone else while improving the quality of Fedora Linux 40, and reducing the risk of delays at the same time. Development releases lack testers and feedback to reach their goals.

For now, the final release is scheduled for April 16 or 23.

User experience

  • Switch to GNOME 46;
  • The KDE Plasma desktop environment moves to a new major version with Plasma 6;
  • The firefox.desktop file is renamed to org.mozilla.firefox.desktop so it can be used from the GNOME search bar.

Hardware support

  • ROCm 6 is shipped to improve AI and high-performance computing support on AMD graphics cards;
  • Move to phase 2 of Unified Kernel Image (UKI) support (unifying kernel, initrd, kernel command line and signature) on UEFI platforms, although nothing changes by default in this regard.

Internationalization

  • The IBus input method framework moves to version 1.5.30;
  • ibus-anthy is updated to 1.5.16 for Japanese input.

System administration

  • NetworkManager now tries, by default, to detect conflicting use of an IPv4 address with the Address Conflict Detection protocol before assigning it to the machine;
  • NetworkManager will use a random MAC address by default for each different Wi-Fi network, and this address will remain stable for a given network, reconciling privacy and ease of use;
  • systemd system units will use many hardening options by default to improve service security;
  • SELinux policy entries that referenced the /var/run directory now reference /run;
  • SSSD no longer supports the files provider for managing local users;
  • DNF will no longer download the filelists metadata provided by packages by default;
  • fwupd, the firmware update tool, will use passim as a cache to share firmware update metadata over the local network;
  • Fedora Silverblue and Kinoite systems get bootupd for updating the bootloader;
  • The libuser package is marked for removal in Fedora 41, while the passwd package is removed;
  • The cyrus-sasl-ntlm package has been removed;
  • The pam_userdb module moves from the BerkeleyDB database to GDBM;
  • The bogofilter antispam filter uses SQLite instead of BerkeleyDB for its internal database;
  • The 389 LDAP server moves from version 2.4.4 to version 3.0.0;
  • The iotop package is replaced by iotop-c;
  • The Kubernetes container orchestrator moves from version 1.28 to 1.29;
  • Its packages are also restructured;
  • While Podman is updated to version 5;
  • The wget2 package replaces wget, providing a new major version;
  • The PostgreSQL database server migrates to version 16;
  • The MySQL and MariaDB packages are reworked and updated to version 10.11.

Development

  • The GNU toolchain is updated: GCC 14.0, binutils 2.41, glibc 2.39 and gdb 14.1;
  • The LLVM compiler suite is updated to version 18;
  • The Boost C++ library is updated to version 1.83;
  • The Go language moves to version 1.22;
  • The reference JDK for Java moves from version 17 to 21;
  • Ruby is updated to 3.3;
  • PHP uses version 8.3;
  • The PyTorch machine learning toolkit makes its debut in Fedora;
  • The python-sqlalchemy package uses the project's new 2.x major branch, with the python-sqlalchemy1.4 package provided to keep compatibility;
  • The Pydantic data validation library now uses version 2;
  • The Thread Building Blocks library moves from 2020.3 to 2021.8;
  • The OpenSSL 1.1 library is removed, leaving only the latest version of the 3.x branch;
  • The zlib and minizip libraries are now provided by their zlib-ng and minizip-ng variants;
  • Python 3.7 is no longer provided.

Fedora Project

  • The Cloud edition will be built with the Kiwi utility in Koji;
  • While the Workstation edition will have its ISO generated with the Image Builder tool;
  • The minimal ARM image will be built with the OSBuild tool;
  • Fedora IoT will get Bootable Containers images;
  • It will also get Simplified Provisioning images;
  • And all of it will be built using the rpm-ostree unified core;
  • Fedora will be built with DNF 5 internally;
  • The forge macros move from the redhat-rpm-config package to forge-srpm-macros;
  • Package builds will fail if the linker detects certain classes of vulnerability in the binary being built;
  • Phase 3 of the move to SPDX license identifiers for package licenses, instead of Fedora's own license names;
  • The end of building updates in the Delta RPM format;
  • Continuation of the effort to build the JDKs only once and repackage them for all variants of the system;
  • Packages are built with more compiler warnings treated as errors when building C projects;
  • Immutable images such as Silverblue will be branded Atomic, to avoid the term immutable, which confuses users.

Testing

During the development of a new Fedora Linux release, such as this Beta, the project holds test days almost every week. The goal is to spend a day testing a specific feature such as the kernel, Fedora Silverblue, upgrades, GNOME, internationalization, etc. The quality assurance team designs and proposes a series of tests that are generally simple to run. You just have to follow them and indicate whether the result is as expected. If it is not, a bug report should be opened so that a fix can be prepared.

It is very easy to follow and usually takes little time (15 minutes to an hour at most) if you have a usable Beta at hand.

The tests to run and the reports to file are available via the following page. I regularly announce on my blog when a test day is planned.

If the adventure interests you, the images are available via torrent or from the official website.

If you already have Fedora Linux 39 or 38 on your machine, you can upgrade to the Beta. It amounts to one big update, and your applications and data are preserved.

In either case, we recommend backing up your data beforehand.

If you run into a bug, don't forget to reread the documentation on reporting issues on Bugzilla, or to contribute to translations on Weblate. Also remember to check the already known bugs for Fedora 40.

Happy testing, everyone!

Announcing Fedora Linux 40 Beta

Posted by Fedora Magazine on March 26, 2024 02:00 PM

The Fedora Project is pleased to announce the immediate availability of Fedora Linux 40 Beta, the next step towards our planned Fedora Linux 40 release at the end of April.

Get the prerelease of any of our editions from our project website:

Or, try one of our many different desktop variants (like KDE Plasma, Xfce, or Cinnamon) from Fedora Linux Spins.

You can also update an existing system to the beta using DNF system-upgrade.
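
If you take the upgrade route, the usual DNF system-upgrade flow looks roughly like this (back up first, and adjust the release version if needed):

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=40
sudo dnf system-upgrade reboot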

Beta release highlights

Some key things to try in this release!

PyTorch is a popular open-source machine learning framework. We want to make using this tool in Fedora Linux as easy as possible, and it’s now available for you to install with one easy command: sudo dnf install python3-torch

Note that for this release, we’ve only included CPU support, but this lays the groundwork for future updates with support for accelerators like GPUs and NPUs. For now, this is suitable for playing around with the technology, and possibly for some light inference loads.
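
A quick way to confirm the CPU-only build imports correctly after installation (nothing Fedora-specific, just a standard PyTorch check; it should print the version and False for CUDA):

python3 -c "import torch; print(torch.__version__, torch.cuda.is_available())"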

Fedora IoT now uses ostree native containers, or “bootable containers”. This showcases the next generation of the ostree technology for operating system composition. Read more in the documentation from ostree and bootc.

Also on the immutable OS front, we’ve revived the “Atomic Desktop” brand for the growing collection of desktop spins based on ostree. An ever-expanding collection of obscure mineral names was fun, but hard to keep straight. We’re keeping well-known Silverblue and Kinoite, and other desktop environments will be, for example, Fedora Sway Atomic and Fedora Budgie Atomic.

Other notable updates

Fedora KDE Desktop now ships with Plasma 6, which, thanks to a lot of hard work from the Fedora KDE Special Interest Group and the upstream KDE project, is Wayland-only. (Don’t worry — X11-native apps will still run under Wayland.)

Fedora Workstation 40 Beta brings us GNOME 46. We’re bringing you Podman 5 for container management. The AMD ROCm accelerator framework is updated to version 6. And, we’ve got the updated language stacks you expect from a new release: LLVM 18 (that’s clang and friends), as well as GCC 14 (with newer glibc, binutils, and gdb).

There are many other changes big and small across the release. See the official Fedora Linux 40 Change Set for more, and check your favorite software for improvements — and, since this is a beta… possibly bugs!

Testing needed

As with any beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora Quality team via the test mailing list or in the #quality channel on Fedora Chat. As testing progresses, common issues are tracked in the “Common Issues” category on Ask Fedora.

For tips on reporting a bug effectively, read how to file a bug.

What is the beta release?

A beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.

[Short Tip] Get all columns in a table

Posted by Roland Wolters on March 25, 2024 10:21 PM
<figure class="alignright size-thumbnail"></figure>

When working with larger data structures in Nushell, tables are often wider than the terminal, so some columns are truncated, indicated by three dots (...). But how can we expand the dots?

❯ ls -la
╭───┬──────────────────┬──────┬────────┬──────────┬─────╮
│ # │ name │ type │ target │ readonly │ ... │
├───┼──────────────────┼──────┼────────┼──────────┼─────┤
│ 0 │ 213-3123-43432.p │ file │ │ false │ ... │
│ │ df │ │ │ │ │
│ 1 │ barcode-picture. │ file │ │ false │ ... │
│ │ jpg │ │ │ │ │
│ 2 │ print-me-by-tomo │ file │ │ false │ ... │
│ │ rrow.pdf │ │ │ │ │
╰───┴──────────────────┴──────┴────────┴──────────┴─────╯

The answer is simple but, surprisingly, not easily found. The “Working with tables” documentation of Nushell, for example, weirdly doesn’t mention it. The trick is to use the columns command to get a list of all column names:

❯ ls -la|columns
╭────┬───────────╮
│ 0 │ name │
│ 1 │ type │
│ 2 │ target │
│ 3 │ readonly │
│ 4 │ mode │
│ 5 │ num_links │
│ 6 │ inode │
│ 7 │ user │
│ 8 │ group │
│ 9 │ size │
│ 10 │ created │
│ 11 │ accessed │
│ 12 │ modified │
╰────┴───────────╯

And once you know that command, you can easily find the corresponding Nushell documentation: nushell.sh/commands/docs/columns.html
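
Once you know the column names, you can also pull the truncated ones into view explicitly, for example with select (one way to do it; pick whichever columns you need):

❯ ls -la | select name user size modified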

Fedora Ops Architect Weekly

Posted by Fedora Community Blog on March 25, 2024 04:33 PM

Hi folks, welcome to the weekly from your Fedora operations architect. This is an exciting week in the project, as our Fedora Linux 40 Beta goes live tomorrow! Read on for more information.

Fedora Linux 40

Beta is GO!

Tomorrow, March 26th, our Fedora Linux 40 Beta will be released, and I cannot thank our wonderful community enough for all the hard work they have put in over the last few months to create it. When it lands, testing how the release behaves, filing bugs, and posting fixes would be hugely appreciated, as our Beta is what we will polish and refine for the official final release in a few weeks. You can learn how and where to file a bug on our docs page.

Reminder: the Final Freeze is due to start in one week, on April 2nd, 2024. Please try to prioritize F40 Beta testing and fixes this week so that any fixes are submitted and applied before we enter the freeze period. This really helps our QA and release engineering teams on the far side of the freeze to build and test our final release candidate compose(s) in good time to find any pesky bugs.

Save the Dates!

  • Flock to Fedora is returning this year from August 7th – 10th in Rochester, New York, USA, and the call for proposals has officially opened! The deadline is April 21st; check out the blog post for more details on tracks, themes and venue.
  • Open Source Summit Europe has a call for proposals currently open – the deadline is April 30th, and the conference is set for September 14th – 18th in Vienna, Austria.
  • The deadline for DevConf.cz has now closed. The schedule will go live towards the end of April, and the conference itself will take place from Thursday 13th to Saturday 15th June. The event is free to attend once you register for tickets, so keep an eye on their website for when registration opens.

Fedora Linux 41 Release

Fedora Linux 41 Changes

Announced Changes

Accepted Changes

Help Wanted

Lots of Test Days! Check them out on the QA calendar in Fedocal for component-specific days. Help is always greatly appreciated. We also have some packages needing new maintainers and others needing reviews. See the links below to adopt and review packages!

The post Fedora Ops Architect Weekly appeared first on Fedora Community Blog.

Next Open NeuroFedora meeting: 25 March 1300 UTC

Posted by The NeuroFedora Blog on March 25, 2024 09:17 AM
Photo by William White on Unsplash.


Please join us at the next regular Open NeuroFedora team meeting on Monday 25 March at 1300 UTC. The meeting is a public meeting, and open for everyone to attend. You can join us in the Fedora meeting channel on chat.fedoraproject.org (our Matrix instance). Note that you can also access this channel from other Matrix home servers, so you do not have to create a Fedora account just to attend the meeting.

You can use this link to convert the meeting time to your local time. Or, you can also use this command in the terminal:

$ date -d 'Monday, March 25, 2024 13:00 UTC'

The meeting will be chaired by @Penguinpee. The agenda for the meeting is:

We hope to see you there!

[Spanish] MTProxy on Fedora/CentOS Stream/RHEL

Posted by Álex Sáez on March 25, 2024 08:37 AM

While I usually write in English for no specific reason, this post was originally written in Spanish due to the absurd precautionary measure of blocking Telegram.


If you are going to use Ubuntu, this guide is great; it even includes a few things, such as registering the proxy, that I have deliberately left out.

There are many ways to get around a block on Telegram, from changing your DNS to using a VPN, but in my opinion the best one is MTProxy. Although the project seems to have been dormant for a few years, it still works, and to get out of a tight spot it is an ideal solution.

It is not yet known how they are going to block the use of the platform (although, seeing how this kind of thing has been done so far, they will almost certainly block the domains). If that is the case, I personally use NextDNS to block certain pages.

The block is not going to happen.

However, changing your DNS does not seem to me an adequate measure for continuing to use Telegram. It is not always feasible to change it: some routers supplied by Internet service providers do not allow such modifications.

What about a VPN? That solution is a bit drastic. It certainly works. But unless you know what you are doing, you would be routing all of the device's traffic through the VPN, and maybe that is not what you want. Maybe you do not want to appear to be in France 24 hours a day. Or maybe you cannot put all your devices on a VPN, such as your work computer. On top of that, VPN services are of dubious trustworthiness. If you are going to take this route, my recommendation is to set up the VPN yourself, using OpenVPN or WireGuard, or to use a service that you pay for and trust. I personally use the first option, but ProtonVPN has given me good results in the past.

Why a proxy, and MTProxy in particular? It is quite simple. Telegram supports this natively in all the official applications. It supports SOCKS5 and MTProto, and once you have the service set up, you can share the link with the people you care about. Telegram traffic is the only thing that goes through the proxy, so the connections of the rest of the applications on the device are not affected at all.

So if you have a machine running Fedora, CentOS Stream 9 or RHEL 9 (it may work with earlier versions, but I have not tried), follow these steps. For now I use Linode and a server in Amsterdam; the virtual machine provider is the least important part. Also make sure you know how to secure and maintain a publicly exposed machine.

It is assumed that the machine is clean; if it is a machine you already had, the firewall steps may give you trouble. But if you already have a machine, you know what you are doing :)

We are going to run all the commands as root, and they are heavily based on the project's own documentation, with some small changes to bring it up to date. The service will not be running as root, don't worry :P

Let's install the dependencies we need to compile MTProxy.

dnf install openssl-devel zlib-devel
dnf groupinstall "Development tools"

Now we need to download the project.

git clone https://github.com/TelegramMessenger/MTProxy
cd MTProxy

There is currently a problem in MTProxy if we try to compile it; a while ago a user opened a pull request with the fix, so let's apply their patch:

curl -L -O https://patch-diff.githubusercontent.com/raw/TelegramMessenger/MTProxy/pull/531.patch
git am 531.patch
rm -f 531.patch

To compile it, we only have to run:

make

You may see some warnings during compilation and, like any good installation, we are going to ignore them :P (I did not see them on CentOS Stream 9 but I did on Fedora 39, so it may be down to different GCC settings; I have not spent much time on this.)

In general, a manually installed application usually goes in /opt, and that is where we are going to put it (read man hier if you are curious).

mkdir /opt/MTProxy
cp objs/bin/mtproto-proxy /opt/MTProxy/
cd /opt/MTProxy/

Now we have to get the secret and the configuration that Telegram provides. The configuration can change, so they advise renewing it daily (an ideal candidate for a cron job :))

curl -s https://core.telegram.org/getProxySecret -o proxy-secret
curl -s https://core.telegram.org/getProxyConfig -o proxy-multi.conf

The contents of /opt/MTProxy should now be: mtproto-proxy, proxy-multi.conf and proxy-secret.

Now we need the secret that we will use to authenticate our clients against our server. It can be anything you make up, but what the project suggests is ideal: simply save the output of this command somewhere:

head -c 16 /dev/urandom | xxd -ps

We need to make sure that the firewall, which usually comes enabled by default, does not get in our way.

firewall-cmd --permanent --new-service=MTProxy
firewall-cmd --permanent --service=MTProxy --add-port=4242/tcp
firewall-cmd --permanent --add-service=MTProxy
firewall-cmd --reload

If everything went well, you should have seen several success messages. But you can always check that MTProxy is enabled in the firewall with:

firewall-cmd --list-all | grep services

Although we already have everything we need, the icing on the cake is to configure MTProxy as a service running as an unprivileged user with no shell and no home directory.

First we create the user and give it ownership of /opt/MTProxy.

useradd -M -s /sbin/nologin mtproxy
chown -R mtproxy:mtproxy /opt/MTProxy

Now we need to create the service. Note that you need to tweak the lines a bit to include the secret we generated earlier, before pasting this into the terminal.

cat <<EOF > /etc/systemd/system/MTProxy.service
[Unit]
Description=MTProxy
After=network.target

[Service]
Type=simple
User=mtproxy
Group=mtproxy
WorkingDirectory=/opt/MTProxy
ExecStart=/opt/MTProxy/mtproto-proxy -H 4242 -S <THE SECRET GOES HERE> --aes-pwd proxy-secret proxy-multi.conf --log mtproxy.log -M 1
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

After this, we are ready to run the service:

systemctl daemon-reload
systemctl enable MTProxy.service
systemctl start MTProxy.service
systemctl status MTProxy.service
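
Two small extras that fit here: following the service logs, and refreshing the Telegram-provided configuration daily as suggested earlier (the cron path is only an example):

# follow the proxy logs (the unit also writes mtproxy.log in /opt/MTProxy)
journalctl -u MTProxy.service -f

# e.g. in /etc/cron.daily/mtproxy-refresh: fetch a fresh config and restart the proxy
curl -s https://core.telegram.org/getProxyConfig -o /opt/MTProxy/proxy-multi.conf
chown mtproxy:mtproxy /opt/MTProxy/proxy-multi.conf
systemctl restart MTProxy.service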

To configure it easily, adapt the following link, or share it with whoever you want:

https://t.me/proxy?server=<YOUR PUBLIC IP>&port=<THE PORT>&secret=<THE SECRET>

If you find any mistakes, please let me know so I can update this post.

Why did I choose Fedora Server?

Posted by Fedora Magazine on March 25, 2024 08:00 AM

I thought it would be a good idea to share my experience implementing servers for personal use. It wasn’t easy to know the best fit for my workload and it has been a moving target, so it was critical to understand and update my needs before taking one route or another.

There are plenty of articles discussing which OS is more appropriate; some will warn against Fedora Server or even CentOS Stream regarding stability, but it all comes down to the use case. So the context is what makes the difference.

RHEL is predictable with Insights as a bonus

I started using RHEL (with a developer license) to implement my services. At the time it was the obvious choice, because I needed predictable package versioning. I implemented various services using PHP and databases which needed consistent versioning of dependencies.

RHEL also gave me the additional bonus of Insights, which is a convenient tool to see CVEs, patches, and other interesting data. But apart from the initial hype while learning its capabilities, I stopped using it almost completely because my server was always up to date, and there wasn’t anything to see in the dashboard. I concluded, therefore, that despite the potential of Insights, it wasn’t something I really needed.

CentOS Stream brings you upgrades ahead of time

RHEL versioning helps cases where staying in one minor version for a long time is paramount to keep applications working. But it wasn’t my case. I was always upgrading through minor versions as soon as they were available. So I looked at CentOS Stream as an appealing alternative. It would give me the same stability with the additional benefit of getting the upgrades ahead of time. I made the move and migrated to CentOS Stream.

I was reluctant to use containers in those days, thinking that having my workload installed directly on the server was more efficient. The presumption was that I was running Apache, MariaDB, Postgres, PHP, etc. only once. But there is a caveat with this simplified view, because some of those services fork multiple instances to support the various requests anyway.

Moving services into containers

A realization came when one of the services needed a newer PHP version which wasn’t available in the standard repo. So I resorted to Modules to install the newer version. Unfortunately, I encountered some issues with dependencies on EPEL that wouldn’t work with some of the newer PHP packages. Also, not all my services worked well with the latest PHP, and so on, and so forth. Long story short: I scrapped all the services and implemented them as containers.

It was like being born again. Each service running in its own optimized environment happened to be the perfect solution, and I just kept Apache Server as a reverse proxy.

I run my server in the cloud, so resources are pretty limited. Surprisingly, CPU and RAM consumption didn’t jump as much as I thought it would, meaning I didn’t have to upgrade my cloud service plan.

Fedora Server turned out to be the best fit

Everything was good until I started using Quadlet to implement my containers. Quadlet is an amazing tool that replaces the deprecated podman-generate-systemd to create systemd units to handle containers’ lifecycle.
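
For readers who have not seen it, a Quadlet unit is just a small .container file that Podman turns into a regular systemd service. A minimal sketch (the image and port are only an example, not part of my setup):

# /etc/containers/systemd/whoami.container
[Unit]
Description=Example web service managed by Quadlet

[Container]
Image=docker.io/traefik/whoami:latest
PublishPort=8080:80

[Install]
WantedBy=multi-user.target

After a systemctl daemon-reload, Podman generates and manages whoami.service from this file.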

Quadlet is available starting with Podman 4.6, but it is limited to single containers. Support for Pods will only arrive with Podman 5.0. The plan is to include Podman 5.0 in Fedora 40, which in turn will branch out into CentOS Stream 10. This means that if I stay on CentOS Stream, I will need to wait approximately 8 months to enjoy this new feature.

I was also looking forward to DNF5, but unfortunately it didn’t make it in time for Fedora 40. This means it will only be available in CentOS Stream 11, in another 4 years. Who knows what other cool upgrades I may be missing now or will miss in the future.

After the move to CentOS Stream, I came to another realization: I didn’t need a server with predictable package versioning anymore. So you see where I’m going. On the one hand, I’m not getting any particular benefit from running CentOS Stream (or RHEL), because all my workload is containerized. On the other hand, I’m missing the latest software that would make my life easier and more enjoyable. So moving to Fedora Server is a no-brainer.

Another factor I hadn’t thought of before is the upgrade workflow. Staying with CentOS Stream doesn’t guarantee a pathway between major versions, so it is likely that I would need to do a fresh install. Fedora Server, on the other hand, guarantees a pathway and workflow to move between major releases.

So, all in all, the change makes a lot of sense for my use case. And I’m assuming this is a common scenario.

Week 12 in Packit

Posted by Weekly status of Packit Team on March 25, 2024 12:00 AM

Week 12 (March 19th – March 25th)

  • Packit no longer shows status checks for not yet triggered manual tests. (packit-service#2375)
  • packit validate-config now checks whether upstream_project_url is set if pull_from_upstream job is configured. (packit#2254)
  • We have fixed an issue in %prep section processing. For instance, if the %patches macro appeared there, it would have been converted to %patch es, causing failure when executing %prep later. (specfile#356)

Episode 421 – CISA’s new SSDF attestation form

Posted by Josh Bressers on March 25, 2024 12:00 AM

Josh and Kurt talk about the new SSDF attestation form from CISA. The current form isn’t very complicated, and the SSDF has a lot of room for interpretation. But this is the start of something big. It’s going to take a long time to see big changes in supply chain security, but we’re confident they will come.

https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_421_CISA_new_SSDF_attestation_form.mp3

Show Notes

Contribute at the Fedora Linux Test Week for Kernel 6.8

Posted by Fedora Magazine on March 23, 2024 05:14 PM

The kernel team is working on final integration for Linux kernel 6.8. This version was just recently released, and will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Sunday, March 24, 2024 to Sunday, March 31, 2024. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test week has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test week web application. If you’re available on or around the days of the event, please do some testing and report your results. We have a document which provides all the necessary steps.

Happy testing, and we hope to see you on one of the test days.

Cloning Drives - Data Recovery with Open-Source Tools (part 5)

Posted by Steven Pritchard on March 22, 2024 01:06 PM

This is part 5 of a multi-part series. See part 1 for the beginning of the series.

Cloning hard drives with dd_rescue

In cases where a hard drive is failing, often simply cloning the drive is all that is required to recover data. There are many other situations where cloning a drive is important though, such as when attempting to recover from a broken partition table or major filesystem corruption.

The primary tool for cloning drives is called dd_rescue. Running dd_rescue -h or simply dd_rescue with no options will give you a summary of the various command-line options:

dd_rescue Version 1.14, garloff@suse.de, GNU GPL
 ($Id: dd_rescue.c,v 1.59 2007/08/26 13:42:44 garloff Exp $)
dd_rescue copies data from one file (or block device) to another.
USAGE: dd_rescue [options] infile outfile
Options: -s ipos start position in input file (default=0),
	     -S opos start position in output file (def=ipos),
	     -b softbs block size for copy operation (def=65536),
	     -B hardbs fallback block size in case of errs (def=512),
	     -e maxerr exit after maxerr errors (def=0=infinite),
	     -m maxxfer maximum amount of data to be transfered (def=0=inf),
	     -y syncfrq frequency of fsync calls on outfile (def=512*softbs),
	     -l logfile name of a file to log errors and summary to (def=""),
	     -o bbfile name of a file to log bad blocks numbers (def=""),
	     -r reverse direction copy (def=forward),
	     -t truncate output file (def=no),
	     -d/D use O_DIRECT for input/output (def=no),
	     -w abort on Write errors (def=no),
	     -a spArse file writing (def=no),
	     -A Always write blocks, zeroed if err (def=no),
	     -i interactive: ask before overwriting data (def=no),
	     -f force: skip some sanity checks (def=no),
	     -p preserve: preserve ownership / perms (def=no),
	     -q quiet operation,
	     -v verbose operation,
	     -V display version and exit,
	     -h display this help and exit.
Note: Sizes may be given in units b(=512), k(=1024), M(=1024^2) or G(1024^3) bytes
This program is useful to rescue data in case of I/O errors, because
 it does not necessarily abort or truncate the output.

Note that there is also a GNU ddrescue with a similar feature set, but with entirely incompatible command-line arguments.

In the simplest of cases, dd_rescue can be used to copy infile (let's say, for example, /dev/sda) to outfile (again, for example, /dev/sdb).

dd_rescue /dev/sda /dev/sdb

In most cases, you'll want a little more control over how dd_rescue behaves though. For example, to clone failing /dev/sda to /dev/sdb:

dd_rescue -d -D -B 4k /dev/sda /dev/sdb

(this keeps the default 64k copy block size) or, for really bad drives, to force only one read attempt:

dd_rescue -d -D -B 4k -b 4k /dev/sda /dev/sdb

Adding the -r option to read backwards also helps sometimes.

Changing block sizes

By default, dd_rescue uses a block size of 64k (overridden with -b). In the event of a read error, it tries to read again in 512-byte chunks (overridden with -B). If a drive is good (or only beginning to fail), a larger block size (usually in the 512kB-1MB range) will give you significantly better performance.

If a drive is failing, forcing the default block size to the same value as the fall-back size will keep dd_rescue from re-reading (and therefore possibly damaging) failed blocks.
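For example, following the same pattern as the commands above, a copy of a still-healthy drive with a 1MB copy block size and a 4kB fallback (device names here are only examples) could look like this:

dd_rescue -d -D -b 1M -B 4k /dev/sda /dev/sdb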

Direct I/O

The -d and -D options turn on direct I/O for the input and output files respectively. Direct I/O turns off all OS caching, both read-ahead and write-behind. This is much more efficient (and safer) when reading from and writing to hard drives, but should generally be avoided when using regular files.

Other useful options

-r        Read backwards. Sometimes works more reliably. (Very handy trick...)

-s num    Start position in input file.

-S num    Start position in output file. (Defaults to the same as -s.)

-e num    Stop after num errors.

-m num    Maximum amount of data to read.

-l file   Write a log to file.
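Putting a few of these together, a run that writes a log and records bad block numbers for a later pass might look like this (the file names are arbitrary examples):

dd_rescue -d -D -B 4k -l sda.log -o sda.badblocks /dev/sda /dev/sdb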

Copying partitions

Let's say you have a drive with an MS-DOS partition table. The drive has two partitions. The first is an NTFS partition that seems to be intact. The second partition is of an unknown type. Rather than copying every block using dd_rescue, you want to copy only the blocks that are in use to a drive that is the same size.

To do this, first copy the boot sector and partition table from /dev/sda to /dev/sdb using dd:

dd if=/dev/sda of=/dev/sdb count=1

The default block size of dd is 512 bytes, which, conveniently, is the size of the boot sector plus partition table at the beginning of the drive.

Note: This trick doesn't quite work on MS-DOS partition tables with extended partitions! In that case, use sfdisk to copy the partition table (after running the above command to pick up the boot sector):

sfdisk -d /dev/sda | sfdisk /dev/sdb

Next, re-read the partition table on /dev/sdb using hdparm:

hdparm -z /dev/sdb

Next we can clone the NTFS filesystem on /dev/sda1 to /dev/sdb1 using the ntfsclone command from ntfsprogs:

ntfsclone --rescue -O /dev/sdb1 /dev/sda1

Finally, we clone /dev/sda2 to /dev/sdb2 using dd_rescue with a 1MB block size (for speed):

dd_rescue -d -D -B 4k -b 1M /dev/sda2 /dev/sdb2

To be continued in part 6.

Wiping Drives - Data Recovery with Open-Source Tools (part 6)

Posted by Steven Pritchard on March 22, 2024 01:05 PM

This is part 6 of a multi-part series.  See part 1 for the beginning of the series.

Wiping drives

To properly wipe a drive so it is effectively unrecoverable, the best solution is to use DBAN. It can be downloaded from https://sourceforge.net/projects/dban/.

Note from 2024: The DBAN project is mostly dead. Currently I would recommend nwipe, which is available in the standard package repositories for a number of Linux distributions, from source at https://github.com/martijnvanbrummelen/nwipe, or on bootable media like SystemRescue.  In fact, SystemRescue has a page in their documentation on this very topic.
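If you just want to point nwipe at a single disk, passing the device on the command line brings up its interface for that drive (substitute your own device name, and check nwipe --help for the non-interactive options available in your version):

nwipe /dev/sdX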

In many cases, it is sufficient to simply zero out the entire drive. This can be done using dd_rescue.

To zero out /dev/sda, you can use the following command:

dd_rescue -D -b 1M -B 4k -m $(( $( blockdev --getsz /dev/sda ) / 2 ))k /dev/zero /dev/sda

This uses a bit of a shell scripting trick to avoid multiple commands and copy & paste, but it is still fairly simple. The output of blockdev --getsz gives us the size of the device in 512-byte blocks, so we divide that number by 2 to get the size in 1kB blocks, which we pass to the -m option (with a trailing k to denote kB) to specify the maximum amount of data to transfer. Using a default block size of 1MB (-b) with a fallback of 4kB (-B, to match the host page size, which is required for direct I/O) should give us decent throughput.

Note that we're using -D to turn on direct I/O to the destination drive (/dev/sda), but we're not using direct I/O (-d) to read /dev/zero since /dev/zero is a character device that does not support direct I/O.

To just clear the MS-DOS partition table (and boot sector) on /dev/sda, you could do the following:

dd if=/dev/zero of=/dev/sda count=1

To be continued in part 7.

Infra & RelEng Update – Week 12 2024

Posted by Fedora Community Blog on March 22, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide you with both an infographic and a text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details, look below the infographic.

Week: 18 March – 22 March 2024

Read more: Infra & RelEng Update – Week 12 2024

[I&R infographic]

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high-quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

  • EPEL community engagement at CentOS booth at SCALE
  • Provided overview of EPEL during CentOS Classroom session at SCALE

List of new releases of apps maintained by CPE

Minor update of Fedora Messaging from 3.4.1 to 3.5.0 on 2024-03-20: https://github.com/fedora-infra/fedora-messaging/releases/tag/v3.5.0

If you have any questions or feedback, please respond to this report or contact us on -cpe channel on matrix.

The post Infra & RelEng Update – Week 12 2024 appeared first on Fedora Community Blog.

Flock 2024 CfP open now until April 21st

Posted by Fedora Magazine on March 21, 2024 08:00 AM

Apply now for the Flock to Fedora 2024 Call for Proposals (CfP) at cfp.fedoraproject.org. This year, Flock is using Pretalx as our CfP system. If you submitted a proposal to DevConf CZ this year, it will feel familiar. The submission deadline for the Flock 2024 CfP is Sunday, April 21st, 2024.

For more details on what the Flock 2024 CfP reviews are looking for this year, read the full announcement on the Fedora Community Blog. You can also read the original Flock 2024 announcement on the Fedora Magazine.

Fedora Ops Architect Weekly – 19th March

Posted by Fedora Community Blog on March 19, 2024 11:29 PM

Lá Fhéile Pádraig sona duit! I hope you all had a great weekend and, if you celebrate with us Irish, that you enjoyed some St Patrick's Day celebrations ☘ This week's report is a little late coming to you; I promise it's not because of a pub-related hangover…entirely…but you will now get to enjoy two reports this week instead, so you must have the luck of the Irish 😉 Read on for important information about our release and upcoming events.

Save the Dates!

Flock to Fedora is returning this year from August 7th – 10th in Rochester, New York, USA, and the call for proposals has officially opened! The deadline is April 21st; check out the blog post for more details on tracks, themes, and the venue.

Open Source Summit Europe has a call for proposals currently open. The deadline is April 30th, and the conference is set for September 14th – 18th in Vienna, Austria.

The CfP deadline for devconf.cz has now passed. Their schedule will go live towards the end of April, and the conference itself will take place from Thursday 13th – Saturday 15th June. The event is free to attend once you register for tickets, so keep an eye on their website for when registration opens.

Fedora Linux 40 Release

Beta Go/No-Go Meeting

The Fedora Linux 40 Beta release is targeting Tuesday 26th March. There is a Go/No-Go meeting scheduled for Thursday 21st March to determine if we have a suitable release candidate or not.

For more information on the Go/No-Go meetings you can visit the wiki page, and for current release targets and other key milestone dates for F40, please refer to the release schedule.

Beta Blockers

There are a few beta blocker bugs active right now. If you can spare some time to try to reproduce a bug and verify it, and/or even try to find a fix, it would be greatly appreciated. A summary report has gone out to the devel list this week, and you can also find all blocker bugs, both proposed and accepted, in the blockerbugs app.

Fedora Linux 41 Release

Fedora Linux 41 Changes

Announced Changes

Changes Awaiting FESCo Votes

Help Wanted

Lots of Test Days! Check them out on the QA calendar in fedocal for component-specific days. Help is always greatly appreciated.

We also have some packages needing new maintainers and others needing reviews. See the links below to adopt and review packages!

The post Fedora Ops Architect Weekly – 19th March appeared first on Fedora Community Blog.

When to Ansible? When to Shell?

Posted by Adam Young on March 19, 2024 06:00 PM

Any new technology requires a mental effort to understand. When trying to automate the boring stuff, one decision I have to make is whether to use straight shell scripting or whether to perform that operation using Ansible. What I want to do is look at a simple Ansible playbook I have written, and then compare what the comparable shell script would look like to determine if it would help my team to use Ansible or not in this situation.

The activity is building a Linux Kernel that comes from a series of topic branches applied on top of a specific upstream version. The majority of the work is done by a pre-existing shell script, so what we mostly need to do is git work.
Here is an annotated playbook. After each play, I note what it would take to do that operation in shell.

---
- name: Build a kernel out of supporting branches
  hosts: servers
  remote_user: root
  vars:
    #defined in an external vars file so we can move ahead
    #kernel_version: 
    #soc_id: 
    test_dir: /root/testing
    linux_dir: "{{ test_dir }}/linux"
    tools_dir: "{{ test_dir }}/packaging"

  tasks:

  - name:  ssh key forwarding for gitlab
    ansible.builtin.copy:
      src: files/ssh.config
      dest: /root/.ssh/config
      owner: root
      group: root
      mode: '0600'
      backup: no
  #scp $SRCDIR/files/ssh.config $SSH_USER@SSH_HOST:/root/.ssh/config
  #ssh $SSH_USER@SSH_HOST chmod 600 /root/.ssh/config
  #ssh $SSH_USER@SSH_HOST chown root:root /root/.ssh/config

  - name: create testing dir
    ansible.builtin.file:
      path: /root/testing
      state: directory
      #ssh $SSH_USER@SSH_HOST mkdir -p /root/testing


  - name: Install Build Tools
    ansible.builtin.yum:
      name: make, gcc, git, dwarves, openssl, grubby, rpm-build, perl
      state: latest
    #ssh $SSH_USER@SSH_HOST yum -y install make, gcc, git, dwarves, openssl, grubby, rpm-build, perl

  - name: git checkout Linux Kernel
    ansible.builtin.git:
      repo: git@gitlab.com:{{ GIT_REPO_URL }}/linux.git
      dest: /root/testing/linux
      version: v6.5
#This one is a bit more complex, as it needs to check if the repo already 
#exists, and, if so, do a pull, otherwise do a clone.
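#A rough shell equivalent (an untested sketch) of the clone-or-pull check this
#module gives us for free would be something like:
#  if ssh $SSH_USER@$SSH_HOST test -d /root/testing/linux/.git ; then
#    ssh $SSH_USER@$SSH_HOST "cd /root/testing/linux && git fetch origin && git checkout v6.5"
#  else
#    ssh $SSH_USER@$SSH_HOST "git clone --branch v6.5 git@gitlab.com:$GIT_REPO_URL/linux.git /root/testing/linux"
#  fi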

  - name: add stable stable Linux Kernel repo
    ansible.builtin.command: git remote add stable https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
    changed_when: false
    args:
      chdir: /root/testing/linux
    ignore_errors: true
#there should be an Ansible git command for adding an additional remote.
#I could not find it, so I resorted to command.
#This is identical to running via ssh

  - name: fetch stable stable Linux Kernel repo
    ansible.builtin.command: git fetch stable
    args:
      chdir: /root/testing/linux
#Same issue as above. This shows that, when an Ansible command is
#well crafted, it can link multiple steps into a single command, reducing the
#need for an additional ssh-based command.

  - name: git checkout gen-early-patches
    ansible.builtin.git:
      repo: git@gitlab.com:{{ GIT_REPO_URL }}/packaging.git
      dest: "{{ tools_dir }}"
      version: main
#same issue as with the clone for  the Linux Kernel repository

  - name: generate early kernel patches
    ansible.builtin.shell:
      cmd: "{{tools_dir }}/git-gen-early-patches.sh {{ tools_dir }}/gen-{{ soc_id }}-{{ kernel_version }}-general.conf"
      chdir: /root/testing/linux
#One benefit to running with Ansible is that it will automatically
#wrap a shell call like this with an error check.  This cuts down
#on boilerplate code and the potential to miss one.

  - name: determine current patch subdir
    ansible.builtin.find:
      paths: /root/testing/linux
      use_regex: true
      #TODO build this pattern from the linux kernel version
      pattern: ".*-v6.7.6-.*"
      file_type: directory
    register: patch_directory
#This would probably be easier to do in shell:
#BUILD_CMD=$( find . -name ".*-v6.7.6-.*/build.sh" | sort | tail -1 )
#

  - ansible.builtin.debug:
      msg: "{{ patch_directory.files | last }}"

  - set_fact:
      patch_dir: "{{ patch_directory.files | last }}"

  - ansible.builtin.debug:
      msg: "{{ patch_dir.path }}/build.sh"


  - name: build kernel
    ansible.builtin.shell:
      cmd: "{{ patch_dir.path }}/build.sh"
      chdir: /root/testing/linux
#Just execute the  value of BUILD_CMD
#ssh $SSH_USER@SSH_HOST /root/testing/linux/$BUILD_CMD

So, should this one be in Ansible or shell? It is a close call. Ansible makes it hard to do shell things, and this needs a bunch of shell things. But Ansible is cleaner for doing things on a remote machine from a known starting machine, which is how this is run: I keep the Ansible playbook on my laptop, connect via VPN, and run the playbook on a newly provisioned machine, or rerun it on a machine while we are in the process of updating the kernel version, etc.

This use case does not make use of one of the things Ansible is best at: running the same thing on a bunch of machines at the same time. Still, it shows that Ansible is at least worth evaluating if you are running a workflow that spans two machines and has to synchronize state between them. Most steps map cleanly to Ansible modules, and falling back to shell is not difficult for the rest.

Cleaning a machine

Posted by Adam Young on March 19, 2024 05:10 PM

After you get something working, you often find you missed a step in documenting how you got it working. You might have installed a package that you didn’t remember. Or maybe you set up a network connection. In my case, I find I have often brute-forced the SSH setup for later provisioning. Since this is done once and then forgotten, often in the push to “just get work done,” I have had to go back and redo this (again, usually manually) when I get to a new machine.

To avoid this, I am documenting what I can do to get a new machine up and running in a state where SSH connections (and forwarding) can be reliably run. This process should be automatable, but at a minimum, it should be understood.

To start with, I want to PXE boot the machine and reinstall the OS. Unless you are using a provisioning system like Ironic or Cobbler, this is probably a manual process. But you can still automate a good bit of it. The first step is to tell the IPMI-based BMC (we are not on Redfish yet) to boot to PXE upon the next reboot.

This is the first place to introduce some automation. All of our ipmitool commands are going to take mostly the same parameters. So we can take the easy step of creating a variable for this command, and use environment variables to fill in the repeated values.

export CMD_IPMITOOL="ipmitool -H $DEV_BMCIP -U $IPMI_USER -I lanplus -P $IPMI_PASS"

One benefit of this is that you can now extract the variables into an environment variable file that you source separately from the functions. That makes the command reusable for other machines.
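As an illustration, such an environment file might look like the following; every value here is a made-up placeholder that you would replace with the details of your own lab:

# dev.env -- source this before using the ipmitool helper functions
export DEV_BMCIP=10.0.0.50        # BMC address (placeholder)
export DEV_SYSTEMIP=10.0.0.51     # host OS address (placeholder)
export IPMI_USER=admin            # placeholder
export IPMI_PASS=changeme         # placeholder
export CMD_IPMITOOL="ipmitool -H $DEV_BMCIP -U $IPMI_USER -I lanplus -P $IPMI_PASS"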

To require PXE booting on the next boot, we also make use of a function that power cycles the system. Note that I include a little bit of feedback in the commands so that the user does not get impatient.

dev_power_on(){
        CMD_="$CMD_IPMITOOL power on"
        echo $CMD_
        $CMD_
}

dev_power_off(){
        CMD_="$CMD_IPMITOOL power off"
        echo $CMD_
        $CMD_
}
dev_power_cycle(){
        dev_power_off
        dev_power_on
        dev_power_status
}

dev_pxe(){
       CMD_="$CMD_IPMITOOL  chassis bootdev pxe" 
        echo $CMD_
        $CMD_
        dev_power_cycle
}
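Note that dev_power_cycle calls a dev_power_status helper that is not shown above; a minimal version following the same pattern (using ipmitool's standard power status subcommand) would look like this:

dev_power_status(){
        CMD_="$CMD_IPMITOOL power status"
        echo $CMD_
        $CMD_
}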

Once this function is executed, the machine will boot to PXE mode. What this looks like is very dependent on your setup. There are two things that tend to vary. One is how you connect to the machine in order to handle the PXE setup. If you are lucky, you have a simple interface. We have a serial console concentrator here, so I can connect to the machine using a telnet command; I get this command from our lab manager. In other stages of life, I have had to use minicom to connect to a physical UART (serial port) to handle PXE boot configuration. I highly recommend the serial concentrator route if you can swing it.

But usually you have an IPMI-based option to open the serial console. Just be aware that this might conflict with, and thus disable, a UART-based way of connecting. For me, I can do this using:

$CMD_IPMITOOL sol activate

The other thing that varies is your PXE setup. We have a PXE menu that allows us to select between many different Linux distributions with various configurations. My usual preference is to do a minimal install, just enough to get the machine up, on the network, and accessible via SSH. This is because I will almost always do an upgrade of all packages (.deb/.rpm) on the system once it is booted. I also try to make sure I don’t have any major restrictions on disk space. Some of the automated provisioning approaches make the root filesystem or the home filesystem arbitrarily small. For development, I need to be able to build a Linux kernel and often a lot of other software. I don’t want to run out of disk space. A partitioning scheme that is logical for a production system may not work for me. My Ops team provides an option that has Fedora 39 Server GA + Updates, Minimal, Big Root. This serves my needs.

I tend to reuse the same machines, and thus have ssh information in the files under ~/.ssh/known_hosts. After a reprovision, this information is no longer accurate and needs to be replaced. In addition, the newly provisioned machine will not have an ssh public key on it that corresponds with my private key. If only they used FreeIPA…but I digress…

If I try to connect to the reprovisioned machine:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:EcPh9oazsjRaC9q8fJqJc8OjHPoF4vtXQljrHJhKDZ8.
Please contact your system administrator.
Add correct host key in /home/ayoung/.ssh/known_hosts to get rid of this message.
Offending ED25519 key in /home/ayoung/.ssh/known_hosts:186
  remove with:
  ssh-keygen -f "/home/ayoung/.ssh/known_hosts" -R "10.76.111.73"
Host key for 10.76.111.73 has changed and you have requested strict checking.
Host key verification failed.

The easiest way to wipe the information is:

ssh-keygen -R $DEV_SYSTEMIP

Coupling this with provisioning the public key makes sense. And, as I wrote in the past, I need to set up ssh-key forwarding for gitlab access. Thus, this is my current ssh prep function:

#Only run this once per provisioning
dev_prep_ssh(){
        ssh-keygen -R $DEV_SYSTEMIP
        ssh-copy-id -o StrictHostKeyChecking=no root@$DEV_SYSTEMIP
        ssh-keyscan gitlab.com 2>&1  | grep -v \# | ssh root@$DEV_SYSTEMIP "cat >> .ssh/known_hosts"
}

The first two steps could be done via Ansible as well. I need to find a better way to do the last step via Ansible (lineinfile seems to be broken by this), or to bash script it so that it is idempotent.
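Until I sort that out, a rough, idempotent bash version of that last step (only a sketch, not battle-tested) could look like the line below; the ansible.builtin.known_hosts module may also be worth evaluating for the same job:

ssh root@$DEV_SYSTEMIP 'grep -qs "^gitlab.com " ~/.ssh/known_hosts || ssh-keyscan gitlab.com 2>/dev/null >> ~/.ssh/known_hosts'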

Firefox 124 supports GNOME titlebar actions

Posted by Martin Stransky on March 19, 2024 02:13 PM

On GNOME, Firefox runs with the system titlebar disabled by default. This saves screen space on wide displays, but it also removes control over the window that is traditionally provided by the window manager and desktop environment.

GNOME lets you set titlebar actions with the gnome-tweaks tool: you can define window actions for a double click with the primary mouse button, a middle click, and a secondary-button click. Firefox does not follow these choices when the system titlebar is off, because Firefox integrates the titlebar with the browser tab strip and performs built-in tasks like opening/closing a new tab or toggling maximize.

However, Firefox 124 improves on this and follows the double-click action defined by GNOME, so you can change it as you wish.

Video: https://www.youtube.com/watch?v=PFsxDEn2psc

You can also define a titlebar action for a middle mouse button click, which opens a new tab by default. Set widget.gtk.titlebar-action-middle-click-enabled in about:config and it should then work.

Video: https://www.youtube.com/watch?v=nP2OkcR1omk

Collecting One Identity Cloud PAM Essentials logs using syslog-ng

Posted by Peter Czanik on March 19, 2024 11:30 AM

One Identity Cloud PAM Essentials is the latest security product by One Identity. It provides asset management as well as secure and monitored remote access for One Identity Cloud users to hosts on their local network. I had a chance to test PAM Essentials while it was still in development, and while doing so, I also integrated it with syslog-ng.

From this blog, you can learn what PAM Essentials is, and how you can collect its logs using syslog-ng. My next blog will show you how to work with the collected log messages and create alerts when somebody connects to a host on your local network using PAM Essentials.

https://www.syslog-ng.com/community/b/blog/posts/collecting-one-identity-cloud-pam-essentials-logs-using-syslog-ng


Flock 2024 CfP open now until April 21st

Posted by Fedora Community Blog on March 19, 2024 08:00 AM

Apply now for the Flock to Fedora 2024 Call for Proposals (CfP) at cfp.fedoraproject.org. This year, Flock is using Pretalx as our CfP system. If you submitted a proposal to DevConf CZ this year, it will feel familiar. The submission deadline for the Flock 2024 CfP is Sunday, April 21st, 2024.

What are Flock 2024 CfP reviewers looking for?

Flock 2024 will continue to have three tracks like last year, and we are also introducing themes as optional descriptions for proposals. These tracks and themes help the Flock 2024 CfP reviewers select diverse programming for the conference schedule.

Flock 2024 tracks

Main Track
The usual main track of Flock. Everything and anything to do with the Fedora contributor community. Presentations, talks, workshops, and more.

Tip: At Flock, interactive sessions that include attendee participation receive extra preference over traditional lecture-style talks.

CentOS and Friends
A dedicated track for the CentOS community and other downstream friends. This focuses more on an Enterprise Linux audience, including topics about EPEL.

Fedora Mentor Summit
(Saturday, August 10th) Half-day event focused on mentoring best practices. Mentor Summit programming focuses on workshops and sessions to promote mentorship best practices and to connect mentors and mentees across the Fedora community.

Flock 2024 themes

New this year, the Flock reviewer committee is introducing themes for submitters to choose from. These themes tie into focus areas of the Fedora Strategy 2028. The strategy was co-created with the community on Fedora Discussion and we are excited to share an overview and first comprehensive look at the Fedora Strategy 2028 this year.

When submitting, you’ll be asked to associate your proposal with one or more of these themes:

  • Accessibility (a11y): Fedora websites and docs use the current best-practices for a11y. Fedora Linux Editions use the best-available open source a11y tech. Our project tooling follows best a11y practices.
  • Reaching the World: Fedora Linux is available pre-installed on more systems from more vendors. Fedora Linux is widely available in cloud providers and CI services. Fedora maintains a strong network of thriving local communities around the world.
  • Community Sustainability: Everyone in Fedora can have a mentor, and everyone in Fedora can be a mentor. Modernize our communications tooling.
  • Technology Innovation and Leadership: Fedora is a popular source for containers and Flatpaks. Atomic Desktops are the majority of Fedora Linux desktops in daily use. We integrate programming language stack ecosystems.
  • Editions, Spins, and Interests: Each Edition has a story for each release. It’s trivial to create and maintain a new Fedora Spin or Remix. More (active) SIGs, fewer images.
  • Ecosystem Connections: Better collaborative workflow with CentOS Stream. Get people working on downstreams directly involved in Fedora as an upstream. Collaborate on tooling, practices, and offerings with peer distros and upstream projects.

Associating your proposal with these themes will help reviewers understand how your session fits into the broader conference topics. If you do not see your work perfectly represented here, don’t be discouraged.

About the Flock 2024 venue

Flock is the Fedora Project’s annual contributor conference, bringing together our global community. The conference provides a venue for face-to-face meetings and conversations. It is also a place to celebrate our community. This year, Flock will be held from Wednesday, August 7th to Saturday, August 10th at the Hyatt Regency Rochester in Rochester, New York.

See the original Flock 2024 announcement and the Flock 2024 website for more details about the venue and the forthcoming hotel reservation block.

Submitting to the Flock 2024 CfP

We are using a new CfP system called Pretalx this year. Visit the Flock 2024 CfP site to create an account and submit. Choose the Flock to Fedora 2024 event in order to make an account. The first reviewing round begins on Sunday, April 21st, so submit early!

We look forward to seeing your submissions for Flock 2024. Reply with comments to this post if you have any questions about the Flock 2024 CfP. We hope to see you in Rochester this August!

The post Flock 2024 CfP open now until April 21st appeared first on Fedora Community Blog.

A Look at 2023: Successes and Challenges of the Python Community Panama

Posted by Arnulfo Reyes on March 19, 2024 05:34 AM

In this post, we’ll explore the achievements, challenges, and exciting events that marked the year 2023 for the entire Python community in Panama. From significant milestones to moments of learning and growth, get ready to dive into an exciting journey through 2023!

[Photo: Vol 35]

Resilient Community

The year 2023 was a period of notable achievements and challenges for the Python Community Panama. As we faced the ongoing evolution of the local and global technological landscape, along with the challenges presented post-COVID-19 pandemic, the community demonstrated exceptional resilience in adapting and thriving in a changing environment.

[Photo: Vol 37]

Growth and Participation

One of the highlights of the year was the impressive growth and participation we experienced. From increased followers on our social media platforms to record attendance at our events, the Python Community Panama attracted a wide range of Python enthusiasts, from beginners to experienced professionals. This growth not only reflects the growing interest in Python in Panama but also the value that our community brings to its members.

[Photo: Vol 38]

Milestones and Achievements

The year 2023 was filled with significant milestones: we successfully organized various events and activities, including our first conference, PyConnect Panama, which brought together Python experts and enthusiasts from across the country, with international friends from all over Latin America participating as speakers, both virtually and in person.


We also offered basic Python and data analysis courses, providing our members with unique opportunities to improve their skills and explore new areas of interest.

[Photos: Python Basic, Análisis de Datos]

Challenges and Opportunities

Despite our successes, we also faced unique challenges, such as adapting to a virtual environment and managing community growth. These challenges also provided us with opportunities to learn and grow together, strengthening the community and preparing us for future challenges.

[Photo: Vol 39]

Looking Towards the Future

With ambitious goals in mind, we’re excited about the future of the Python Community Panama. We will continue to promote Python in Panama, collaborate with other communities and businesses, and create exciting open-source projects. With each new challenge and opportunity, we are ready to face them together and continue advancing on our journey towards excellence in Python.

Thank You for Joining Us Through 2023!

Stay tuned for future updates and events from the Python Community Panama.

Social Media Links:

https://www.instagram.com/pythonpanama

https://www.meetup.com/es-ES/Python-Panama

https://twitter.com/PythonPanama

A Look at the Year 2023: Successes and Challenges of the Python Community Panama

Posted by Arnulfo Reyes on March 19, 2024 05:27 AM

In this post, we will explore the achievements, challenges, and exciting events that marked the year 2023 for the entire Python community in Panama. From significant milestones to moments of learning and growth, get ready to dive into an exciting journey through 2023!

[Photo: Vol 35]

A Resilient Community

The year 2023 was a period of notable achievements and challenges for the Python Community Panama. As we faced the ongoing evolution of the local and global technological landscape, along with the challenges presented after the COVID-19 pandemic, the community demonstrated exceptional resilience in adapting and thriving in a changing environment.

[Photo: Vol 37]

Growth and Participation

One of the highlights of the year was the impressive growth and participation we experienced. From increased followers on our social media platforms to record attendance at our events, the Python Community Panama attracted a wide range of Pythonistas, from beginners to experienced professionals. This growth reflects not only the growing interest in Python in Panama but also the value that our community brings to its members.

[Photo: Vol 38]

Milestones and Achievements

The year 2023 was filled with significant milestones: we successfully organized various events and activities, including our first conference, PyConnect Panama, which brought together Python experts and enthusiasts from across the country, with international friends from all over Latin America participating as speakers, both virtually and in person.

[Photo: PyConnect 2023]

We offered basic Python and data analysis courses, providing our members with unique opportunities to improve their skills and explore new areas of interest.

[Photos: Python Basic, Análisis de Datos]

Challenges and Opportunities

Despite our successes, we also faced unique challenges, such as adapting to a virtual environment and managing the growth of the community. These challenges also gave us opportunities to learn and grow together, strengthening the community and preparing us for future challenges.

[Photo: Vol 39]

Looking Towards the Future

With an ambitious outlook in mind, we are excited about the future of the Python Community Panama. We will continue to promote Python in Panama, collaborate with other communities and businesses, and create exciting open-source projects. With each new challenge and opportunity, we are ready to face them together and continue advancing on our journey towards excellence in Python.

[Photo: Universidad Interamericana de Panamá]

Thank you for joining us through 2023!

Stay tuned for future updates and events from the Python Community Panama.

Social Media

https://www.instagram.com/pythonpanama

https://www.meetup.com/es-ES/Python-Panama

https://twitter.com/PythonPanama

Contribute at the Fedora CoreOS, Podman Desktop, Podman 5, and Toolbx test days

Posted by Fedora Magazine on March 18, 2024 08:41 PM

Fedora test days are events where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are four upcoming test periods in the next two weeks covering four topics:

  • Wednesday 20 March through Tuesday 26 March is to test Podman Desktop.
  • Thursday 21 March through Wednesday 26 March is to test Podman 5.
  • Monday 01 April through Sunday 07 April is to test Fedora CoreOS.
  • Wednesday 27 March is to test Toolbx.

Come and test with us to make Fedora 40 even better. Read more below on how to do it.

Podman Desktop

Podman Desktop is an open source graphical tool enabling you to seamlessly work with containers and Kubernetes from your local environment. This is the first time we will be testing Podman Desktop, and we will be testing it not just for Fedora but also for Windows and Mac OS X. During this test week, from Wednesday 20 March through Tuesday 26 March, one can start learning about containers and interact with the community by allocating merely a few hours. For advanced testers, we are looking forward to reports on what may be hampering your regular container workflow with the advent of the Podman 5 changeset.

This wiki page sums up all the details one needs to know about this developer tool. Results for the test week can be submitted on the test days app.

Podman 5

Podman is a daemon-less, open source, Linux-native tool designed to make it easy to find, run, build, share, and deploy applications using Open Containers Initiative (OCI) containers and container images. Podman provides a command line interface (CLI) familiar to anyone who has used the Docker Container Engine. During this test week, from Thursday 21 March through Wednesday 26 March, the focus will be on testing changes that might break things as Fedora 40 moves ahead with Podman 5. This test week is an opportunity for anyone to learn about and interact with the Podman community and container tools in general.

This wiki page sums up all that one needs to know. The results can be submitted in the test day app.

Fedora CoreOS

The Fedora 40 CoreOS Test Week focuses on testing FCOS based on Fedora 40. The FCOS next stream is already rebased on Fedora 40 content, which is coming soon to testing and stable. To prepare for the content being promoted to other streams the Fedora CoreOS and QA teams are organizing test days from Monday 01 April through Sunday 07 April.

Refer to this wiki page for links to the test cases and materials you’ll need to participate. The FCOS and QA teams will meet and communicate with the community synchronously on a Google Meet at the beginning of the test week and asynchronously over multiple Matrix/Element channels. Stay tuned for more updates!

Toolbx

Recently, Toolbx has been made a release-blocking deliverable and now has release-blocking test criteria. Since Toolbx is very popular and has a wide variety of uses, we would like to run a test day on Wednesday 27 March to ensure nothing is broken. This test day encourages people to use containers and run apps in them across all platforms, i.e. Workstation, KDE, Silverblue, and CoreOS. Toolbx is also affected by the Podman 5 changeset, hence we urge all testers to report anything that might be breaking for them when they test.

The details for testing are available on this wiki page and results can be submitted in the events page.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is available on the wiki pages mentioned above. If you’re available on or around the days of the events, please do some testing and report your results. All the test day pages receive some final touches that are completed about 24 hours before the test day begins. We urge you to be patient about resources that are, in most cases, uploaded hours before the test day starts.

Come and test with us to make the upcoming Fedora Linux 40 even better.

Untitled Post

Posted by Zach Oglesby on March 18, 2024 02:36 PM

I am going to Japan next week and will be posting most of my pictures on https://zachstravels.com so that I don’t flood my normal site with posts. I will still post a few here, but that site will be the home for my travel pictures from now on.

Week 11 in Packit

Posted by Weekly status of Packit Team on March 18, 2024 12:00 AM

Week 11 (March 12th – March 18th)

  • Don't have time to set up Packit? Or, do you want to see what it would look like on your package? Starting now, you can ask the Packit team to prepare a config file for you. (packit-service#2369)
  • A trailing newline is no longer added to spec files without one upon saving. (specfile#353)

Episode 420 – What’s going on at NVD

Posted by Josh Bressers on March 18, 2024 12:00 AM

Josh and Kurt talk about what’s going on at the National Vulnerability Database. NVD suddenly stopped enriching vulnerabilities, and it’s sent shock-waves through the vulnerability management space. While there are many unknowns right now, the one thing we can count on is things won’t go back to the way they were.

Listen: https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_420_Whats_going_on_at_NVD.mp3

Show Notes

SSH access to Copr builders

Posted by Jakub Kadlčík on March 18, 2024 12:00 AM

Sometimes it can be hard to debug failed Copr builds. Maybe they fail only on a specific architecture and you don’t have an s390x mainframe in your spare bedroom, maybe there are Copr-specific conditions in your package, or maybe the Copr builders aren’t beefy enough to build it. To make the debugging process as pain-free as possible, Copr now allows connecting to the builder virtual machines using SSH and running any commands you need.

Wait! What?

It may be hard to believe, but it is true. You can simply click a button to resubmit a build with SSH access to the builder enabled, specify your public SSH key, and then connect as root. No bureaucracy, no special permissions, and no prerequisites. As far as I know, this is an unprecedented feature in the build-system world.

Please let us know your thoughts once you try it.

How it works

Submit a build any way you want and wait until it finishes. It doesn’t matter whether it fails or succeeds. If something went wrong and it requires debugging within the Copr infrastructure, click the Resubmit and allow SSH button.


You will be redirected to the familiar page for resubmitting builds which has been in Copr for years. Upon closer inspection, you will notice some changes. At the top, there are basic instructions on how to interact with the builder, and in the form, you can specify your public SSH keys. Multiple keys are allowed, just separate them with a new line.

If you don’t know what your or your coworker’s keys are, there are a few ways to find out.

If your project provides multiple chroots, ideally submit this build only for one of them. Then wait until the build starts running and the following text appears in the backend.log.

Deployed user SSH key for frostyx
The owner of this build can connect using:
ssh root@44.203.44.242
Unless you connect to the builder and prolong its expiration, it will be shut-down in 2024-03-12 14:40
After connecting, run `copr-builder help' for complete instructions

The instructions are self-explanatory. From your computer, run:

ssh root@44.203.44.XXX

You are greeted with a MOTD, please make sure to read it.

You have been entrusted with access to a Copr builder.
Please be responsible.

This is a private computer system, unauthorized access is strictly
prohibited. It is to be used only for Copr-related purposes,
not as your personal computing system.

Please be aware that the legal restrictions for what you can build
in Copr apply here as well.
https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-build-in-copr

You can display more help on how to use the builder by running copr-builder:
...

If for some reason you can’t see the message, please manually run copr-builder help.

You are now root. Remember, with great power comes great responsibility – Uncle Ben

Limitations

  • For security reasons, once the build finishes, no results other than spec file and logs are fetched to the backend storage and the project repository. The builder is also assigned to a unique sandbox preventing it from being re-used by any other build, even from the same user
  • To avoid wasting resources, only two builders with SSH access can be allocated for one user at the same time
  • Because of the previous two points, Copr cannot automatically enable SSH access when the build fails. The build needs to be manually resubmitted with SSH access enabled
  • The builder machine is automatically terminated after 1 hour unless you prolong its lifespan. The maximum limit is 48 hours
  • Some builders are available only through an IPv6 address and you can’t choose which one you get. If you can’t connect, cancel the build and try again, or use a machine with working IPv6 as a proxy (see the sketch after this list). To check if IPv6 works on your machine, use https://test-ipv6.com
  • It is not possible to resubmit a build that failed during the SRPM build phase. This is only an implementation detail and might change in the future
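For example, assuming you have a shell account on some dual-stack host, OpenSSH's ProxyJump option can hop through it to reach an IPv6-only builder (the host name and address below are placeholders):

ssh -J youruser@dualstack.example.com root@2001:db8::1234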

Future

It is obvious that this feature is still in its infancy and there is a lot of room for improvement. Ideally, your public key should be automatically fetched from FAS, and the form input should support special syntax for allowing keys based on FAS or GitHub usernames, e.g. FAS:msuchy GitHub:praiskup. There is currently no support for this feature in the API, python3-copr, or copr-cli, but it is definitely on the roadmap. We also want to soften some hard edges around finding the builder IP address and integrate it into the user interface.

As always, happy building. Or should I say debugging?

Burn-in Testing for Spinning Disks - Data Recovery with Open-Source Tools (part 4)

Posted by Steven Pritchard on March 15, 2024 06:45 PM

This is part 4 of a multi-part series.  See part 1 for the beginning of the series.

Note that this was written long before solid state drives were common (or possibly before they existed), so when I say "drive", I mean traditional spinning hard drives.  Burn-in testing like this on SSDs makes a lot less sense and will likely only reduce their useful lifespan.

Burn-in testing

A good way to do a burn-in test on a new drive is to use a combination of SMART self-tests and the badblocks utility.  An example of how to do this can be found at https://github.com/silug/drivetest.

This script does the following:

  1. Enables SMART on the drive
  2. Checks for existing SMART health problems
  3. Runs a SMART conveyance or short test if the drive advertises that capability
  4. Uses badblocks to do a non-destructive read/write test of the whole drive
  5. Checks for resulting SMART errors
  6. Runs an extended SMART test

Depending on the size of the drive, this can take many hours, but the result will be a drive that should be past any early failures.
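If you prefer to run the steps by hand instead of using the script, a rough manual equivalent (the device name is a placeholder, and badblocks -n performs a non-destructive read-write test) looks something like this:

smartctl -s on /dev/sdX       # enable SMART
smartctl -H /dev/sdX          # check current health status
smartctl -t short /dev/sdX    # or -t conveyance if the drive supports it
badblocks -nsv /dev/sdX       # non-destructive read/write test of the whole drive
smartctl -l error /dev/sdX    # check for new SMART errors
smartctl -t long /dev/sdX     # extended self-test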

To be continued in part 5.

PipeWire camera handling is now happening!

Posted by Christian F.K. Schaller on March 15, 2024 04:30 PM

We hit a major milestone this week with the long-worked-on adoption of PipeWire camera support finally starting to land!

Not long ago Firefox was released with experimental PipeWire camera support thanks to the great work by Jan Grulich.

Then this week OBS Studio shipped with PipeWire camera support thanks to the great work of Georges Stavracas, who cleaned up the patches and pushed to get them merged, based on earlier work by himself, Wim Taymans and Columbarius. This means we now have two major applications out there that can use PipeWire for camera handling, and thus two applications whose video streams can be interacted with through patchbay applications like Helvum and qpwgraph.
These applications are important and central enough that having them use PipeWire is in itself useful, but they will now also provide two examples of how to do it for application developers looking at how to add PipeWire camera support to their own applications; there is no better documentation than working code.

The PipeWire support is also paired with camera portal support. The use of the portal also means we are getting closer to being able to fully sandbox media applications in Flatpaks, which is an important goal in itself. Which reminds me: to test out the new PipeWire support, be sure to grab the official OBS Studio Flatpak from Flathub.

[Screenshot: PipeWire camera handling with OBS Studio, Firefox and Helvum]


Let me explain what is going on in the screenshot above, as it is a lot. First of all, you see Helvum there on the right showing all the connections made through PipeWire, both the audio and, in yellow, the video. So you can see how my Logitech BRIO camera is feeding a camera video stream into both OBS Studio and Firefox. You also see my Magewell HDMI capture card feeding a video stream into OBS Studio, and finally gnome-shell providing a screen capture feed that is being fed into OBS Studio. On the left, at the top, you see Firefox running their WebRTC test app capturing my video; just below that you see the OBS Studio image with the direct camera feed in the top left corner, the screencast of Firefox just below it, and finally the ‘no signal’ image from my HDMI capture card, since I had no HDMI device connected to it as I was testing this.

For those wondering, work is also underway to bring this into the Chromium and Google Chrome browsers, where Michael Olbrich from Pengutronix has been pushing to get patches written and merged. He did a talk about this work at FOSDEM last year, as you can see from these slides, with this patch being the last step to get this working there too.

The move to PipeWire also prepared us for the new generation of MIPI cameras being rolled out in new laptops and helps push work on supporting those cameras towards libcamera, the new library for dealing with this new generation of complex cameras. This of course ties well into the work that Hans de Goede and Kate Hsuan have been doing recently, along with Bryan O’Donoghue from Linaro, on providing an open source driver for MIPI cameras, and of course the incredible work by Laurent Pinchart and Kieran Bingham from Ideas on Board on libcamera itself.

The PipeWire support is of course fresh, and I am sure we will find bugs and corner cases that need fixing as more people test out the functionality in both Firefox and OBS Studio, and there are some interface annoyances we are working to resolve. For instance, since PipeWire supports both V4L and libcamera as backends, you do at the moment get double entries in your selection dialogs for most of your cameras. Wireplumber has implemented de-duplication code which will ensure only the libcamera listing will show for cameras supported by both V4L and libcamera, but it is only part of the development version of Wireplumber and thus it will land in Fedora Workstation 40, so until that is out you will have to deal with the duplicate options.

[Screenshot: Camera selection dialog]


We are also trying to figure out how to better deal with infrared cameras that are part of many modern webcams. Obviously you usually do not want to use an IR camera for your video calls, so we need to figure out the best way to identify them and ensure they are clearly marked and not used by default.

Another good recent PipeWire tidbit: with the PipeWire 1.0.4 release, PipeWire maintainer Wim Taymans also fixed up the FireWire FFADO support. The FFADO support had been in there for some time, but after seeing Venn Stone do some thorough tests and find issues, we decided it was time to bite the bullet and buy some second-hand FireWire hardware for Wim to be able to test and verify himself.

[Photo: Focusrite FireWire device]

Once the Focusrite device I bought landed at Wim’s house, he got to work, cleaned up the FFADO support, and made it both work and perform well.
For those unaware, FFADO is a way to use FireWire devices without going through ALSA and is popular among pro-audio folks because it gives lower latencies. FireWire is of course a relatively old technology at this point, but the audio equipment is still great and many audio engineers have a lot of these devices, so with this fixed you can plop a FireWire PCI card into your PC and suddenly all those old FireWire devices get a new lease on life on your Linux system. And you can buy these devices on places like eBay or Facebook Marketplace for a fraction of their original cost. In some sense this demonstrates the same strength of PipeWire as the libcamera support: in the libcamera case it gives Linux applications a way to smoothly transition to a new generation of hardware, and in this FireWire case it allows them to keep using older hardware with new applications.

So all in all it’s been a great few weeks for PipeWire and for Linux audio AND video, and if you are an application maintainer, be sure to look at how you can add PipeWire camera support to your application, and of course get that application packaged up as a Flatpak for people using Fedora Workstation and other distributions to consume.

Infra & RelEng Update – Week 11, 2024

Posted by Fedora Community Blog on March 15, 2024 10:00 AM

This is a weekly report from the I&R (Infrastructure & Release Engineering) Team. It also contains updates for the CPE (Community Platform Engineering) Team as the CPE initiatives are in most cases tied to I&R work.

We provide you with both an infographic and a text version of the weekly report. If you just want to quickly look at what we did, just look at the infographic. If you are interested in more in-depth details, look below the infographic.

Week: March 11-15, 2024

[I&R infographic]

Infrastructure & Release Engineering

The purpose of this team is to take care of day-to-day business regarding CentOS and Fedora Infrastructure and Fedora release engineering work.
It’s responsible for services running in Fedora and CentOS infrastructure and preparing things for the new Fedora release (mirrors, mass branching, new namespaces, etc.).
List of planned/in-progress issues

Fedora Infra

CentOS Infra including CentOS CI

Release Engineering

CPE Initiatives

EPEL

Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special Interest Group that creates, maintains, and manages a high quality set of additional packages for Enterprise Linux, including, but not limited to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL) and Oracle Linux (OL).

Updates

  • Work on enabling fedpkg to work with EPEL10 changes.

If you have any questions or feedback, please respond to this report or contact us on the -cpe channel on Matrix.

The post Infra & RelEng Update – Week 11, 2024 appeared first on Fedora Community Blog.

PHP version 8.2.17 and 8.3.4

Posted by Remi Collet on March 15, 2024 06:54 AM

RPMs of PHP version 8.3.4 are available in the remi-modular repository for Fedora ≥ 37 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php83 repository for EL 7.

RPMs of PHP version 8.2.17 are available in the remi-modular repository for Fedora ≥ 37 and Enterprise Linux ≥ 8 (RHEL, Alma, CentOS, Rocky...) and in the remi-php82 repository for EL 7.

The Fedora 39, 40, EL-8 and EL-9 packages (modules and SCL) are available for x86_64 and aarch64.

There is no security fix this month, so no update for version 8.1.27.

PHP version 8.0 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 8.3 installation (simplest):

dnf module switch-to php:remi-8.3/common

or, the old EL-7 way:

yum-config-manager --enable remi-php83
yum update php\*

Parallel installation of version 8.3 as Software Collection

yum install php83

Replacement of default PHP by version 8.2 installation (simplest):

dnf module switch-to php:remi-8.2/common

or, the old EL-7 way:

yum-config-manager --enable remi-php82
yum update

Parallel installation of version 8.2 as Software Collection

yum install php82
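
A quick way to double-check what you ended up with after either route is to ask the binaries themselves; a small sketch, assuming the default php binary and the php83 / php82 wrapper commands that Remi's Software Collections normally provide:

php --version     # default PHP after the module switch, should report 8.3.x or 8.2.x
php83 --version   # Software Collection build, if installed in parallel
php82 --version   # Software Collection build, if installed in parallel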

And soon in the official updates:

To be noted:

  • EL-9 RPMs are built using RHEL-9.3
  • EL-8 RPMs are built using RHEL-8.9
  • EL-7 RPMs are built using RHEL-7.9
  • intl extension now uses libicu73 (version 73.2)
  • mbstring extension (EL builds) now uses oniguruma5php (version 6.9.9, instead of the outdated system library)
  • oci8 extension now uses the RPM of Oracle Instant Client version 21.12 on x86_64, 19.19 on aarch64
  • a lot of extensions are also available, see the PHP extensions RPM status (from PECL and other sources) page

Information:

Base packages (php)

Software Collections (php81 / php82 / php83)

The syslog-ng Insider 2024-03: MacOS; OpenTelemetry;

Posted by Peter Czanik on March 14, 2024 02:37 PM

The March syslog-ng newsletter is now on-line:

  • Native MacOS source in syslog-ng
  • Using OpenTelemetry between syslog-ng instances
  • Collecting even more logs on MacOS using syslog-ng

It is available at https://www.syslog-ng.com/community/b/blog/posts/the-syslog-ng-insider-2024-03-macos-opentelemetry

syslog-ng logo

Digital forgeries are hard

Posted by Matthew Garrett on March 14, 2024 09:11 AM
Closing arguments in the trial between various people and Craig Wright over whether he's Satoshi Nakamoto are wrapping up today, amongst a bewildering array of presented evidence. But one utterly astonishing aspect of this lawsuit is that expert witnesses for both sides agreed that much of the digital evidence provided by Craig Wright was unreliable in one way or another, generally including indications that it wasn't produced at the point in time it claimed to be. And it's fascinating reading through the subtle (and, in some cases, not so subtle) ways that that's revealed.

One of the pieces of evidence entered is screenshots of data from Mind Your Own Business, a business management product that's been around for some time. Craig Wright relied on screenshots of various entries from this product to support his claims around having controlled a meaningful number of bitcoin before he was publicly linked to being Satoshi. If these were authentic then they'd be strong evidence linking him to the mining of coins before Bitcoin's public availability. Unfortunately the screenshots themselves weren't contemporary - the metadata shows them being created in 2020. This wouldn't fundamentally be a problem (it's entirely reasonable to create new screenshots of old material), as long as it's possible to establish that the material shown in the screenshots was created at that point. Sadly, well.

One part of the disclosed information was an email that contained a zip file that contained a raw database in the format used by MYOB. Importing that into the tool allowed an audit record to be extracted - this record showed that the relevant entries had been added to the database in 2020, shortly before the screenshots were created. This was, obviously, not strong evidence that Craig had held Bitcoin in 2009. This evidence was reported, and was responded to with a couple of additional databases that had an audit trail that was consistent with the dates in the records in question. Well, partially. The audit record included session data, showing an administrator logging into the database in 2011 and then, uh, logging out in 2023, which is rather more consistent with someone changing their system clock to 2011 to create an entry, and switching it back to present day before logging out. In addition, the audit log included fields that didn't exist in versions of the product released before 2016, strongly suggesting that the entries dated 2009-2011 were created in software released after 2016. And even worse, the order of insertions into the database didn't line up with calendar time - an entry dated before another entry may appear in the database afterwards, indicating that it was created later. But even more obvious? The database schema used for these old entries corresponded to a version of the software released in 2023.

This is all consistent with the idea that these records were created after the fact and backdated to 2009-2011, and that after this evidence was made available further evidence was created and backdated to obfuscate that. In an unusual turn of events, during the trial Craig Wright introduced further evidence in the form of a chain of emails to his former lawyers that indicated he had provided them with login details to his MYOB instance in 2019 - before the metadata associated with the screenshots. The implication isn't entirely clear, but it suggests that either they had an opportunity to examine this data before the metadata suggests it was created, or that they faked the data? So, well, the obvious thing happened, and his former lawyers were asked whether they received these emails. The chain consisted of three emails, two of which they confirmed they'd received. And they received a third email in the chain, but it was different to the one entered in evidence. And, uh, weirdly, they'd received a copy of the email that was submitted - but they'd received it a few days earlier. In 2024.

And again, the forensic evidence is helpful here! It turns out that the email client used associates a timestamp with any attachments, which in this case included an image in the email footer - and the mysterious time travelling email had a timestamp in 2024, not 2019. This was created by the client, so was consistent with the email having been sent in 2024, not being sent in 2019 and somehow getting stuck somewhere before delivery. The date header indicates 2019, as do encoded timestamps in the MIME headers - consistent with the mail being sent by a computer with the clock set to 2019.

But there's a very weird difference between the copy of the email that was submitted in evidence and the copy that was located afterwards! The first included a header inserted by gmail that included a 2019 timestamp, while the latter had a 2024 timestamp. Is there a way to determine which of these could be the truth? It turns out there is! The format of that header changed in 2022, and the version in the email is the new version. The version with the 2019 timestamp is anachronistic - the format simply doesn't match the header that gmail would have introduced in 2019, suggesting that an email sent in 2022 or later was modified to include a timestamp of 2019.

This is by no means the only indication that Craig Wright's evidence may be misleading (there's the whole argument that the Bitcoin white paper was written in LaTeX when general consensus is that it's written in OpenOffice, given that's what the metadata claims), but it's a lovely example of a more general issue.

Our technology chains are complicated. So many moving parts end up influencing the content of the data we generate, and those parts develop over time. It's fantastically difficult to generate an artifact now that precisely corresponds to how it would look in the past, even if we go to the effort of installing an old OS on an old PC and setting the clock appropriately (are you sure you're going to be able to mimic an entirely period appropriate patch level?). Even the version of the font you use in a document may indicate it's anachronistic. I'm pretty good at computers and I no longer have any belief I could fake an old document.

(References: this Dropbox, under "Expert reports", "Patrick Madden". Initial MYOB data is in "Appendix PM7", further analysis is in "Appendix PM42", email analysis is "Sixth Expert Report of Mr Patrick Madden")


Contribute to Rawhide Test Days – DNF 5

Posted by Fedora Magazine on March 14, 2024 08:00 AM

Fedora Rawhide test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora Linux before, this is a perfect way to get started.

For some time, we have been trying to elevate the quality of Fedora by testing things well ahead of time. The Fedora Changes process lets people submit change proposals well before the release process starts, and many developers try their best to create a timeline for the new changes to land. We in the Quality team figured we should also be able to run test days for these changes, which are crucial for us and will likely help us identify bugs very early. This will ensure a smoother and cleaner release process and also help us stay on track with on-time releases.

To kick off this effort, we would like to start with testing DNF 5, which is a changeset proposed for Fedora Linux 41. Since the brand new dnf5 package has landed in rawhide, we would like to organize a test week to get some initial feedback on it before it becomes the default. We will be testing DNF5 according to its basic acceptance criteria to iron out any rough edges.
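
If you are wondering what trying it out can look like, here is a minimal smoke-test sketch for a Rawhide machine; the package name and subcommands are assumptions based on the current dnf5 packaging, and the test week page has the actual test cases to follow:

# install the new tool alongside classic dnf
sudo dnf install dnf5

# basic sanity checks: query, install and remove a small package
dnf5 --version
dnf5 repoquery hello
sudo dnf5 install hello
sudo dnf5 remove hello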

The test days will be Friday, March 15th through Tuesday, March 19th. The test week page is available here.

Happy testing, and we hope to see you on one of the test days.

Matrix servers database Maint

Posted by Fedora Infrastructure Status on March 14, 2024 07:00 AM

The fedora.im / chat.fedoraproject.org and fedoraproject.org Matrix servers will be down for 30-45 minutes for database maintenance. Messages sent during the outage should arrive after the outage via federation.

Remotely checking out from git using ssh key forwarding.

Posted by Adam Young on March 13, 2024 11:16 PM

Much of my work is done on machines that are only on loan to me, not permanently assigned. Thus, I need to be able to provision them quickly and with a minimum of fuss. One action I routinely need to do is to check code out of a git server, such as gitlab.com. We use ssh keys to authenticate to gitlab. I need a way to do this securely when working on a remote machine. Here's what I have found.

Key Forwarding

While it is possible to create an ssh key for every server I use, that leads to a mess. Just as important, it leads to an insecure situation where my ssh keys are sitting on machines that are likely to be reassigned to another user. To perform operations on git over ssh, I prefer to use key forwarding. That involves setting up, on the development host, a .ssh/config file that looks like this:

# match gitlab.com itself as well as any *.gitlab.com host
Host gitlab.com *.gitlab.com
   ForwardAgent yes

Depending on your setup, you might find it makes sense to just copy this file over as-is, which is what I do. A more flexible scheme that appends these entries only if they are not already present may make sense if you are using Ansible and the ssh_config module or a comparable tool.
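
As a plain-shell sketch of that append-only-if-missing idea (the paths, variables and Host pattern just follow the examples in this post; adapt to your tooling):

# append the forwarding stanza on the dev host, but only if it is not already there
ssh $DEV_USER@$DEV_HOST 'grep -q "^Host gitlab.com" ~/.ssh/config 2>/dev/null ||
  printf "Host gitlab.com *.gitlab.com\n   ForwardAgent yes\n" >> ~/.ssh/config'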

known_hosts seeding

When you first ssh to a development host, there is a likelihood that it will not know about the git server host. In order to make connections without warnings or errors, you need to add the remote host's fingerprints into the ~/.ssh/known_hosts file. This one-liner can do that:

ssh-keyscan gitlab.com 2>&1  | grep -v \# | ssh $DEV_USER@$DEV_HOST  "cat >> .ssh/known_hosts"

ssh-keyscan will produce output like this:

# gitlab.com:22 SSH-2.0-GitLab-SSHD
gitlab.com ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCsj2bNKTBSpIYDEGk9KxsGh3mySTRgMtXL583qmBpzeQ+jqCMRgBqB98u3z++J1sKlXHWfM9dyhSevkMwSbhoR8XIq/U0tCNyokEi/ueaBMCvbcTHhO7FcwzY92WK4Yt0aGROY5qX2UKSeOvuP4D6TPqKF1onrSzH9bx9XUf2lEdWT/ia1NEKjunUqu1xOB/StKDHMoX4/OKyIzuS0q/T1zOATthvasJFoPrAjkohTyaDUz2LN5JoH839hViyEG82yB+MjcFV5MU3N1l1QL3cVUCh93xSaua1N85qivl+siMkPGbO5xR/En4iEY6K2XPASUEMaieWVNTRCtJ4S8H+9
# gitlab.com:22 SSH-2.0-GitLab-SSHD
gitlab.com ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFSMqzJeV9rUzU4kWitGjeR4PWSa29SPqJ1fVkhtj3Hw9xjLVXVYrU9QlYWrOLXBpQ6KWjbjTDTdDkoohFzgbEY=
# gitlab.com:22 SSH-2.0-GitLab-SSHD
gitlab.com ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAfuCHKVTjquxvt6CM6tdG4SLp1Btn/nOeHHE5UOzRdf
# gitlab.com:22 SSH-2.0-GitLab-SSHD
# gitlab.com:22 SSH-2.0-GitLab-SSHD

So I remove the comments and just add the fingerprints.

I tried to get this to work using Ansible and the lineinfile module, but I got an error 127…not sure why.

EDIT: I have corrected it. I should have used with_items, not with_lines, and ssh_keyscan_output.stdout_lines.

---
- name: Set up ssh forwarding for gitlab
  hosts: servers
  remote_user: root

  tasks:

  - name: keyscan gitlab.com
    # ssh-keyscan writes the key lines to stdout and its comment lines to stderr,
    # so stdout_lines below contains only the fingerprints
    command: ssh-keyscan gitlab.com
    register: ssh_keyscan_output

  - name: Save key fingerprints
    ansible.builtin.lineinfile:
      path: /root/.ssh/known_hosts
      line: "{{ item }}"
    with_items: "{{ ssh_keyscan_output.stdout_lines }}"

But something like that should be possible. When I did not first pre-seed the fingerprints and tried to do a git checkout over ssh, I would get this error:

 ssh root@10.76.111.74 "cd testing ;  git clone git@gitlab.com:$REMOTE_REPO "
bash: line 1: cd: testing: No such file or directory
Cloning into 'kernel-tools'...
Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I saw a comparable error over Ansible. The solution was to run the one liner I posted above.

EDIT: One thing I did not make explicit is that you need to enable ssh agent forwarding in your ssh command:

        ssh $DEV_USER@$DEV_SYSTEMIP -A
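
Putting the pieces together, provisioning a fresh development host ends up looking roughly like this. A sketch using the variables from the examples above; note that ssh-keyscan prints its comment lines on stderr, so dropping stderr leaves just the fingerprints:

# seed the git server's host keys into known_hosts on the dev host
ssh-keyscan gitlab.com 2>/dev/null | ssh $DEV_USER@$DEV_HOST "cat >> .ssh/known_hosts"

# clone over the forwarded agent (-A); no private key ever lands on the dev host
ssh -A $DEV_USER@$DEV_HOST "git clone git@gitlab.com:$REMOTE_REPO"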

Fedora Laptop Backpacks Available

Posted by Fedora Magazine on March 13, 2024 08:00 AM

Linux clothes specialist HELLOTUX has been making Fedora shirts and hoodies since 2020. They are happy to announce the new Fedora laptop backpacks with an embroidered Fedora logo – and now with a great offer.

<figure class="aligncenter size-full"><figcaption class="wp-element-caption">Fedora laptop backpack</figcaption></figure>

Many other popular Linux distributions and free software projects are available in the HELLOTUX collection. The partner program allows not only the biggest Linux distributions, like Fedora, to participate, but also smaller free software projects. CentOS, GNOME, KDE, LibreOffice, VLC, GIMP, Inkscape, Perl and Python are some of the biggest free software projects, but there are smaller ones like SourceHut, Taskwarrior and DataLad, too.

Check out the embroidered Fedora collection here, and don’t forget to use the FEDORA5 coupon code for the $5 discount on every Fedora shirt, sweatshirt and laptop backpack.

Special Offer

While our current supply lasts, you get a Fedora laptop backpack as a gift when you order four or more shirts.

Untitled Post

Posted by Zach Oglesby on March 13, 2024 04:40 AM

I can never sleep when I need to and am always tired when I can’t. What punishment!

New badge: SCaLE 21x Attendee !

Posted by Fedora Badges on March 12, 2024 08:43 AM
SCaLE 21x Attendee: You dropped by the Fedora booth at SCaLE 21x!

Enforcing a touchscreen mapping in GNOME

Posted by Peter Hutterer on March 12, 2024 04:33 AM

Touchscreens are quite prevalent by now but one of the not-so-hidden secrets is that they're actually two devices: the monitor and the actual touch input device. Surprisingly, users want the touch input device to work on the underlying monitor which means your desktop environment needs to somehow figure out which of the monitors belongs to which touch input device. Often these two devices come from two different vendors, so mutter needs to use ... */me holds torch under face* .... HEURISTICS! :scary face:

Those heuristics are actually quite simple: same vendor/product ID? same dimensions? is one of the monitors a built-in one? [1] But unfortunately in some cases those heuristics don't produce the correct result. In particular external touchscreens seem to be getting more common again and plugging those into a (non-touch) laptop means you usually get that external screen mapped to the internal display.

Luckily mutter does have a configuration for this, though it is not exposed in GNOME Settings (yet). But you, my $age $jedirank, can access it via a commandline interface to at least work around the immediate issue. But first: we need to know the monitor details, and you need to know about gsettings relocatable schemas.

Finding the right monitor information is relatively trivial: look at $HOME/.config/monitors.xml and get your monitor's vendor, product and serial from there. e.g. in my case this is:

  <monitors version="2">
   <configuration>
    <logicalmonitor>
      <x>0</x>
      <y>0</y>
      <scale>1</scale>
      <monitor>
        <monitorspec>
          <connector>DP-2</connector>
          <vendor>DEL</vendor>              <--- this one
          <product>DELL S2722QC</product>   <--- this one
          <serial>59PKLD3</serial>          <--- and this one
        </monitorspec>
        <mode>
          <width>3840</width>
          <height>2160</height>
          <rate>59.997</rate>
        </mode>
      </monitor>
    </logicalmonitor>
    <logicalmonitor>
      <x>928</x>
      <y>2160</y>
      <scale>1</scale>
      <primary>yes</primary>
      <monitor>
        <monitorspec>
          <connector>eDP-1</connector>
          <vendor>IVO</vendor>
          <product>0x057d</product>
          <serial>0x00000000</serial>
        </monitorspec>
        <mode>
          <width>1920</width>
          <height>1080</height>
          <rate>60.010</rate>
        </mode>
      </monitor>
    </logicalmonitor>
  </configuration>
</monitors>
  
Well, so we know the monitor details we want. Note there are two monitors listed here, in this case I want to map the touchscreen to the external Dell monitor. Let's move on to gsettings.

gsettings is of course the configuration storage wrapper GNOME uses (and the CLI tool with the same name). GSettings follow a specific schema, i.e. a description of a schema name and possible keys and values for each key. You can list all those, set them, look up the available values, etc.:


    $ gsettings list-recursively
    ... lots of output ...
    $ gsettings set org.gnome.desktop.peripherals.touchpad click-method 'areas'
    $ gsettings range org.gnome.desktop.peripherals.touchpad click-method
    enum
    'default'
    'none'
    'areas'
    'fingers'
  
Now, schemas work fine as-is as long as there is only one instance. Where the same schema is used for different devices (like touchscreens) we use a so-called "relocatable schema" and that requires also specifying a path - and this is where it gets tricky. I'm not aware of any functionality to get the specific path for a relocatable schema so often it's down to reading the source. In the case of touchscreens, the path includes the USB vendor and product ID (in lowercase), e.g. in my case the path is:
  /org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
In your case you can get the touchscreen details from lsusb, libinput record, /proc/bus/input/devices, etc. Once you have it, gsettings takes a schema:path argument like this:
  $ gsettings list-recursively org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/
  org.gnome.desktop.peripherals.touchscreen output ['', '', '']
Looks like the touchscreen is bound to no monitor. Let's bind it with the data from above:
 
   $ gsettings set org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output "['DEL', 'DELL S2722QC', '59PKLD3']"
Note the quotes so your shell doesn't misinterpret things.

And that's it. Now I have my internal touchscreen mapped to my external monitor which makes no sense at all but shows that you can map a touchscreen to any screen if you want to.
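
For completeness, here is a small sketch of the two lookups mentioned above: finding the touchscreen's vendor:product ID for the schema path, and resetting the mapping if you want mutter's heuristics back. The 04f3:2d4a ID and the lsusb output line are just my device; yours will differ:

  # grep for your touch device vendor in the USB listing
  $ lsusb | grep -i elan
  Bus 001 Device 003: ID 04f3:2d4a Elan Microelectronics Corp.
  # undo the manual mapping and confirm it is back to the empty default
  $ gsettings reset org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output
  $ gsettings get org.gnome.desktop.peripherals.touchscreen:/org/gnome/desktop/peripherals/touchscreens/04f3:2d4a/ output
  ['', '', '']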

[1] Probably the one that most commonly takes effect since it's the vast vast majority of devices

Happy Chaharshanbe Suri

Posted by Fedora fans on March 11, 2024 07:22 AM
charshanbe-sori

Fire is the symbol of purity and enlightenment,

Of redness, warmth and the nurturing of joy,

A keepsake of our ancient rites,

Of our happy and sweet days of old.

Yes, tonight is the festival of feast and fire,

The festival of the dance of unruly flames.

Happy Chaharshanbe Suri

The post Happy Chaharshanbe Suri first appeared on طرفداران فدورا (Fedora fans).

Episode 419 – Malicious GitHub repositories

Posted by Josh Bressers on March 11, 2024 12:00 AM

Josh and Kurt talk about an attack against GitHub where attackers are creating malicious repositories and then artificially inflating the number of stars and forks. This is really a discussion about how we can try to find signal in all the noise of a massive ecosystem like GitHub.

<audio class="wp-audio-shortcode" controls="controls" id="audio-3336-3" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_419_Malicious_GitHub_repositories.mp3?_=3" type="audio/mpeg">https://traffic.libsyn.com/opensourcesecuritypodcast/Episode_419_Malicious_GitHub_repositories.mp3</audio>

Show Notes

New badge: F42 i18n Test Day Participant !

Posted by Fedora Badges on March 10, 2024 04:05 PM
F42 i18n Test Day Participant: You helped test Fedora 42 i18n features.

New badge: F41 i18n Test Day Participant !

Posted by Fedora Badges on March 10, 2024 04:04 PM
F41 i18n Test Day Participant: You helped test Fedora 41 i18n features.

New badge: F40 i18n Test Day Participant !

Posted by Fedora Badges on March 10, 2024 04:02 PM
F40 i18n Test Day Participant: You helped test Fedora 40 i18n features.