From November 2016
== What is a Focus Document? ==
The Factory 2.0 team produces a confusing number of documents.  The first round was about the Problem Statements we were trying to solve.  Let’s retroactively call them Problem Documents.  The Focus Documents (like this one) focus on some system or some aspect of our solutions that cut across different problems.  The content here doesn’t fit cleanly in one problem statement document, which is why we broke it out.
== Background on the MBS ==
* MBS is a build service.  Unsurprising!
* It has an HTTP frontend where you can POST new module builds and GET status.
* It does not have a visible web UI (for now; we may add one some day. KISS.)
* Originally written by the Modularity team, and is now developed by the Factory 2.0 team.
* It has gone by other names: “Řida” and “FM-Orchestrator”.
* We renamed it to MBS to describe it better.
* It uses a variety of backends to do the heavy-lifting.
* It can be thought of as automation on top of those backends.
Links:
* [https://mbs.fedoraproject.org/ Production instance]
* [https://pagure.io/fm-orchestrator Source code]
* [https://fedoraproject.org/wiki/Changes/ModuleBuildService Fedora Change]
* [https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/thread/YHP4YYG6MJAHWEMCKJTF25UMJAV3OL4Y/#YHP4YYG6MJAHWEMCKJTF25UMJAV3OL4Y Discussion on the Fedora devel list.]
* [https://meetbot.fedoraproject.org/fedora-meeting/2016-11-18/fesco.2016-11-18-16.00.html FESCo meeting minutes.]
== Where do modules even come from? ==
You know how when you build an rpm in Koji or Brew you do so from a .spec file that lives in dist-git?  Well, we carved out namespaces in dist-git to allow us to store not just specfiles, but also docker files.  We will be using that same namespace separation to store modulemd yaml files in dist-git.  For an example, see the base-runtime definition or this testmodule definition for a simpler one.
We’re not there yet, but the Modularity Working Group is planning on working with the Fedora Packaging Committee to develop guidelines, policies, and processes for how modulemd yaml files are written and what kind of reviews they need to pass before they can be included in the distribution.
== How do you submit a build? ==
In short, you do an HTTP POST to the endpoint where it receives new builds.  Take a look at this script and the accompanying JSON file that we use for development on local instances of the MBS to get an idea for how it works.  You tell the MBS the scmurl of a modulemd yaml file that you want it to build, and it takes it from there.
Of course, we don’t expect packagers/engineers to POST to this thing with curl!  Karsten Hopp developed patches to rpkg (which is the library that undergirds the well-known fedpkg and rhpkg commands) which will allow us all to submit builds with fedpkg module-build in the future.  Sound familiar?
Eventually, we plan to reach a place where new commits to dist-git (either to the modulemd definition of a module itself or to the spec files of a component that is included in that module) will automatically trigger new rebuilds of the module.
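Under the hood, a submission like the one fedpkg will eventually perform boils down to a single HTTP POST. Here is a hedged Python sketch of what that looks like; the endpoint path and the payload keys are assumptions based on the description above, so check the MBS source for the exact API:

```python
# Sketch of submitting a module build to the MBS frontend.
# ASSUMPTIONS: the endpoint path and the "scmurl" payload key are
# illustrative; consult the fm-orchestrator source for the real API.
import json
import urllib.request

payload = {
    # scmurl points at the dist-git repo holding the modulemd yaml file,
    # pinned to a specific commit ref so the build is reproducible.
    "scmurl": "https://src.fedoraproject.org/modules/testmodule.git?#deadbeef",
}

req = urllib.request.Request(
    "https://mbs.fedoraproject.org/module-build-service/1/module-builds/",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req) would submit the build; per the walkthrough
# below, a successful submission is answered with HTTP 201.
```

The MBS takes it from there: everything after the POST is automation.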
== What happens when you submit a build? ==
Let’s walk through how a module build works when the MBS is configured to submit to Koji:
* The modulemd definition is validated and enriched by the frontend.
** Your modulemd yaml file will include references to some components like glibc or python-requests.  It will furthermore specify that the module wants a particular git ref of those components.  The validation step here includes checking out those git refs to make sure they actually exist before we waste time submitting stuff to koji, clogging it up unnecessarily.
** Your modulemd yaml file furthermore will not specify its own version string like you usually do in a .spec file.  The MBS will enrich the submitted metadata by assigning it stream and version identifiers that make sense.
* Once the file has been validated and enriched, the MBS publishes a message to the message bus stating that it has recorded this new request in its database.  It marks the module as being in the Wait state, and it returns HTTP 201.
* While the MBS is waiting, another service receives its message bus announcement and sets to work.  The pdc-updater daemon receives the notice and makes a note in PDC.  The primary responsibility here is to assign a new unused Koji tag for this module build, and to do so in a way that isn’t susceptible to the fragility of hard-coded strings in the future.
* Meanwhile, the MBS backend daemon wakes up and considers whether or not to start submitting work to Koji.  Once the new unique Koji tag is available from PDC, it sets to work.  The first step is to initialize the buildroot for this module build:
** It creates two koji tags: one build tag and one resultant tag.
** Link in dependency tags
** Define build and srpm-build groups
** Build a module-build-macros srpm
* After the buildroot is initialized, the MBS starts an iterative process of rebuilding all the components from source.
** The components are grouped into batches by the buildorder defined in the modulemd yaml file.  The first batch is built, then the repo is regenerated; then the second batch is built, then the repo is regenerated, and so on.
* If any component fails in any batch, then the module build fails as a whole.
* If all of the components successfully build, then the module is marked as being in the “done” state (which is not the final state).
* External to the MBS, tests are run (in taskotron and/or jenkins) and reported to resultsdb.  When tests have passed sufficiently, MBS transitions the module to the final “ready” state, meaning that it is ready to be consumed by a pungi compose.
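The batched rebuild loop described above can be sketched in a few lines. This is illustrative only, not actual MBS code; the function names are made up for the example:

```python
# Illustrative sketch of the batched rebuild loop: components are grouped
# by their buildorder value, each batch is built, and the repo is
# regenerated between batches. Any failure fails the whole module build.
from itertools import groupby

def build_in_batches(components, build, regen_repo):
    """components: iterable of (name, buildorder) pairs.
    build(name) -> bool; regen_repo() regenerates the buildroot repo."""
    ordered = sorted(components, key=lambda c: c[1])
    for order, batch in groupby(ordered, key=lambda c: c[1]):
        results = [build(name) for name, _ in batch]
        if not all(results):
            return "failed"   # one failed component fails the module build
        regen_repo()          # batch complete: regenerate the repo
    return "done"             # not final; external tests may promote to "ready"
```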
== What about local builds? ==
We recently have implemented a local mock backend for the MBS.  This means that you can clone the repo, run a local instance, and submit module builds to it all without touching koji or the real build system.  This is indispensable for rapidly working on modules and for working on the MBS itself.
You can submit a local module build by running the following from the git repo:
<code>$ python manage.py build_module_locally file:///path/to/module/repo</code>
Nicer client tooling coming soon!
== Writing additional backends ==
Take a look in builder.py.  You’ll see a GenericBuilder base class with some abstract methods.  All you need is to provide a new class that extends GenericBuilder and implement the missing methods.
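A new backend might look roughly like the following sketch. The GenericBuilder stand-in and its abstract method names here are assumptions for illustration; the real interface lives in builder.py:

```python
# Hypothetical sketch of adding an MBS builder backend. The abstract
# method names below are ASSUMPTIONS; see builder.py for the real ones.
from abc import ABC, abstractmethod

class GenericBuilder(ABC):  # stand-in for the base class in builder.py
    @abstractmethod
    def buildroot_connect(self, groups):
        """Initialize/connect to the buildroot for this module build."""

    @abstractmethod
    def build(self, artifact_name, source):
        """Submit one component build; return a backend task id."""

class EchoBuilder(GenericBuilder):
    """A do-nothing backend, useful only to show the extension pattern."""

    def buildroot_connect(self, groups):
        print("initialized buildroot with groups:", groups)

    def build(self, artifact_name, source):
        print(f"pretending to build {artifact_name} from {source}")
        return 1  # a fake task id
```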
== Shipping Updates to Modules ==
Latest revision as of 19:51, 2 June 2017

A Factory 2.0 Focus Document


== Introduction ==

For Fedora 26 (Boltron), we said from the start that we would only be doing a GA release and that we would release no updates, security fixes or otherwise. For Fedora 27 and onwards that cannot stand. We need a way to ship updates to the modular release.

== Background on Bodhi ==

* Bodhi is the service in Fedora that manages updates to the traditional distro.
* Packagers submit their builds to Bodhi as updates to queue them for distribution.
* Community QA persons provide feedback on Bodhi updates (+1 or -1).
* The output of automated QA tools is visualized in the Bodhi web UI.
* Release Engineers use Bodhi’s backend to make and distribute the updates repos. This is called the “bodhi masher”.

== Multi-type Support in Bodhi ==

Beyond shipping updates to modules, there are requests out there for Bodhi to ship updates to lots of other non-rpm content: containers chief among them. There is a milestone in the upstream issue tracker listing the work needed in general to make Bodhi capable of handling non-rpm content types.

The common requirement on these content types is that their individual builds must be represented as a build in koji. The maxim from the Bodhi maintainers is: “if it can be tagged in Koji, it can be shipped by Bodhi”. There are a thousand different ways we could have designed this -- but this statement helps us narrow in on an approach.

RPMs, of course, meet this requirement. They are built in Koji, and therefore have a uniquely-identifying “build object” in Koji’s database. They can be tagged into Koji tags.

Containers, too, meet this requirement. They are built in a different system -- the OpenShift Build System (OSBS) -- but when OSBS is done, it imports them back into Koji via the Content Generator API. At this point, container builds have a corresponding “build object” in Koji’s database, and they can be tagged into Koji tags.

Modules, at the moment, do not meet this requirement. Module builds are orchestrated by the Module Build Service (MBS). It uses Koji to build all of the components of the module, but koji never knows about the module as a whole.

To solve this, we are going to add an additional step at the end of the MBS’ build process. The MBS will use the Content Generator API to import a “note” about the built modules back into Koji. This top-level build object serves as a marker that Bodhi can move through various koji tags as a part of the standard update workflow.
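To make the “note” concrete, here is a hedged sketch of the kind of metadata the MBS might hand to Koji’s Content Generator API. The field names follow the spirit of the Koji CG metadata format, but the exact keys and value mapping are assumptions:

```python
# Hedged sketch of Content Generator metadata for a module "build object".
# ASSUMPTIONS: the key names and the stream/version -> version/release
# mapping are illustrative; see the Koji CG metadata docs for the format.
module_cg_metadata = {
    "metadata_version": 0,
    "build": {
        "name": "testmodule",          # the module name
        "version": "master",           # module stream, mapped to koji "version"
        "release": "20170601142424",   # MBS-assigned version, as koji "release"
        "source": "https://src.fedoraproject.org/modules/testmodule.git?#deadbeef",
    },
    "buildroots": [],
    "output": [],  # the "note": no real artifacts, just a taggable marker
}
# A koji client session would then import it, e.g.:
# session.CGImport(module_cg_metadata, server_upload_dir)
```

Once imported, the resulting build object can be moved through koji tags just like an rpm or container build.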

== Other fun facts about multi-type/modular support ==

No multi-type updates. On one level, it might make sense to have a single update that contains both a module and a container built from that module. They either ship or don’t ship as a unit. For a variety of reasons, we decided to forbid this. Updates can only be of a single type: either rpm updates, modular updates, or container updates. Eventually, (after F27) Bodhi will grow the ability to link these updates so that they can ship as a group.

No specification of type when submitting new updates. Instead, the server infers the content_type from the NVR given. This makes it so that the same API with the same arguments can be used to submit an rpm update, a modular update, or a container update. This means the API change for multi-type support is minimal. You can read the content-type of an update from the API, but you cannot write it. The Bodhi server decides this for you by asking Koji for details about the NVRs you gave.
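The inference step can be sketched as follows. The type names and the shape of the Koji build info are assumptions made for the example, not Bodhi’s actual code:

```python
# Illustrative sketch of inferring an update's content type from Koji
# build metadata instead of asking the submitter. ASSUMPTIONS: the
# "extra" dict layout shown here is hypothetical.
def infer_content_type(build_info):
    """build_info: dict as returned by a koji getBuild() call."""
    extra = build_info.get("extra") or {}
    typeinfo = extra.get("typeinfo") or {}
    if "module" in typeinfo:
        return "module"        # imported via the Content Generator API
    if "image" in extra or "container_koji_task_id" in extra:
        return "container"     # imported from OSBS
    return "rpm"               # the traditional default
```

Because the server decides, a single update can never mix types by accident.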

== Mashing Modules ==

This is the hard part.

On one level, there is a simple requirement: the mashed updates repositories produced from modules must now additionally contain the concatenated modulemd file describing what modules are present.

There is a much deeper problem to solve of how to mash the repository in the first place.

Bodhi’s backend does this today by invoking a tool called mash. Mash expects to only deal with rpms that come from a common koji tag (like, say, f26-updates-pending).

To mash repos from modules, we have identified two possible approaches:

As a first, high-level approach, we could teach the bodhi masher to mash modules directly. We would mandate that the module NSV (the “NVR” of the module as a whole) appear in a traditional-style tag. We could then either:

* Have Bodhi tag the components from each module into a common tag, and then have mash operate on that common tag.
* Teach mash how to operate on multiple tags at once and to pull content from all of those tags together into a single mash.

In either case, inserting the concatenated modulemd file into the repo metadata will be necessary.

The second high-level approach is to replace the bodhi masher with pungi, since pungi already understands how to put modules together from our F26 work on the “Boltron” release. Bodhi’s backend would “simply” need to write out a pungi config file on the fly and invoke some pungi functions. A major challenge here is that pungi currently only knows how to produce composes from scratch. To produce “updates” repos like Bodhi does, we would want pungi to be able to produce a new “updates” compose as a kind of tweak on a pre-existing compose. It currently has no notion of doing this, so we would need to teach it.

This has other indirect benefits. From Bodhi’s point of view, using pungi would give us the ability to produce other kinds of artifacts as part of the updates process (ostrees, base images, etc.). From the point of view of the GA compose, an incremental compose process could hypothetically cut down on the many-hours-long compose process that we endure today.

Updating pungi to allow for incremental composes and rewiring bodhi to use that looks promising, but it is also a more invasive and risky change.

Further scoping and practical investigation is required before we figure out which way we’re going to go.