Standard Discovery, Packaging, Invocation of Integration Tests via RPM packages

This is a proposal. Feedback is more than welcome; there is a discussion tab above.

First, see the Terminology, Division of Responsibilities, and Requirements.

[Diagram: invoking tests via the standard interface]

Detailed Description

This standard interface describes how to discover, stage and invoke tests. It is important to cleanly separate implementation details of the testing system from the test suite and its framework. It is also important to allow packagers to locally and manually invoke a test suite.

Packaging

The integration tests are packaged and delivered through Fedora as packages.

Each dist-git repo that has integration tests should package those tests in one or more subpackages like %{name}-tests. This is similar to the %{name}-debuginfo or %{name}-docs subpackages we have today.

The spec file for a dist-git repo may install upstream integration tests as files in its %{name}-tests package. The spec file may also include tests directly from files in the tests/ subdirectory of the dist-git repo itself.

The tests package should use Requires: to pull in any other package, testing framework, or dependency necessary to run the tests. For in-situ testing, the tests package directly Requires: the package of the test subject.
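
As a rough illustration (not taken from any actual Fedora package), a -tests subpackage might be declared like this in a spec file; the gzip-style layout, the test script name test-simple, and the exact dependencies are hypothetical:

 %package tests
 Summary:  Integration tests for %{name}
 # Hypothetical dependencies: the test subject itself (in-situ case)
 # plus anything the test executables need at runtime.
 Requires: %{name} = %{version}-%{release}
 Requires: coreutils
 
 %description tests
 Integration tests for %{name}, installed under /usr/tests/%{name}/.
 
 # in %%install (excerpt): put the test executable into the standard path
 install -D -m 0755 tests/test-simple %{buildroot}/usr/tests/%{name}/test-simple
 
 %files tests
 /usr/tests/%{name}/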

Invocation

Before the test suite can be invoked, the test package that contains it is installed. Each test in the suite installs an executable under /usr/tests/sourcepackage/ (namespacing by source package avoids name collisions between packages).

To invoke the test suite, one would:

  1. Create a temporary directory, referred to as: $TESTDIR
  2. Place the test subject(s) being tested in $TESTDIR/subjects/
  3. Execute all executable files in /usr/tests/*/ directories one at a time.
    1. Each executable test is invoked with a working directory of $TESTDIR
    2. Each executable test is invoked as root, and may drop privileges as desired.
    3. Treat the stdout/stderr of the test process as the test log. This is a standard test artifact and is written to $TESTDIR/artifacts/testname.log.
    4. Examine the exit code of each test process. Zero exit code is a successful test result, non-zero is failure.
  4. Tests can put any additional test artifacts like screenshots into $TESTDIR/artifacts/.

This ensures that tests can be run on a production system without accidentally clobbering permanent directories, that tests do not require root privileges (which simplifies test development), and that CI systems have one unique place from which to collect artifacts. It also avoids collecting temporary files such as downloaded container or VM images as artifacts, since artifacts are usually stored for a longer period of time.

These steps would usually be done through a standard test driver tool (particularly for sensible stdout/stderr teeing and log capturing), but its usage is not mandatory for developing and calling tests manually.
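
To make the steps concrete, a minimal driver might look roughly like the sketch below. This is purely illustrative (the mock run-installed-test script described under Examples plays this role); the paths, log naming, and error handling are simplified assumptions:

 #!/bin/sh
 # Illustrative sketch of a test driver implementing steps 1-4 above.
 # Assumes it runs as root and that any test subjects are passed as arguments.
 set -eu
 TESTDIR=$(mktemp -d /tmp/test.XXX)              # 1. temporary working directory
 mkdir -p "$TESTDIR/subjects" "$TESTDIR/artifacts"
 for subject in "$@"; do                         # 2. stage the test subject(s)
     cp "$subject" "$TESTDIR/subjects/"
 done
 failed=0
 for test in /usr/tests/*/*; do                  # 3. run every installed test
     [ -x "$test" ] || continue
     name=$(basename "$test")
     # 3.1-3.3: run with $TESTDIR as working directory, capture stdout/stderr as the log
     if (cd "$TESTDIR" && "$test") > "$TESTDIR/artifacts/$name.log" 2>&1; then
         echo "PASS: $test"                      # 3.4: exit code 0 means success
     else
         echo "FAIL: $test"
         failed=1
     fi
 done
 exit $failed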

Staging

The %{name}-tests package should Requires: all other packages that the test suite executables need in order to run. This includes libraries, frameworks, and subsystems such as libvirt.

Some integration tests may choose to test in-situ, on the system on which the test suite is installed. In these cases the %{name}-tests package should directly depend on the package being tested.

More rigorous integration tests are outside-in. They test an integrated system without affecting its contents. It is the responsibility of the %{name}-tests packages to provision virtual machines or containers necessary to do such testing. In almost all cases this will happen by way of a provisioning framework such as Avocado, Ansible, Module Testing Framework, linch-pin, etc.
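
For the outside-in case, a test executable could itself provision the container or VM it needs. A hypothetical sketch using Docker (the image name and the check performed inside the container are placeholders; the tests package would declare Requires: docker):

 #!/bin/sh
 # Hypothetical outside-in test: provision a throwaway container, install the
 # staged subject inside it, run a check there, and let the container's exit
 # status become the test result.
 set -eu
 docker run --rm -v "$(pwd)/subjects:/subjects:ro" fedora:25 \
     /bin/sh -c 'dnf -y install /subjects/*.rpm && gzip --version'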

Multiple tests packages may be installed as long as their dependencies do not conflict.

Discovery

A testing system needs to be able to efficiently answer the question "does this subject have any tests packages, and if so, what are their names". This should be automatically discoverable to the extent possible.

For any RPM test subject this process requires no additional metadata and can be fully automatic:

  • It is possible to map an RPM to its SRPM source package (<rpm:sourcerpm> in the package index *-primary.xml.gz).
  • One can map an SRPM to all the RPMs that it builds (from the same index), and using the *-filelists.xml.gz index one can mechanically tell which of the RPMs are of this test package kind described here.
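
For illustration, the same mapping can be approximated from the command line with dnf repoquery (a testing system would more likely read the repodata XML directly; glob support for the file query is an assumption here):

 # Map a binary RPM to its source package (reads the <rpm:sourcerpm> field)
 dnf repoquery --qf '%{sourcerpm}' gzip
 # Find packages shipping files under /usr/tests/, i.e. candidate tests packages
 # (this information comes from the *-filelists.xml.gz index)
 dnf repoquery --file '/usr/tests/*'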

TODO: For other types of test subjects, such as Docker images or distribution ISO files, this discovery still needs to be discussed.

  • E.g. a Dockerfile might grow a reference to a test package RPM, or, at least initially, the testing system could maintain a manual map from subject to test package.

Scope

This change requires no initial changes to Fedora infrastructure itself. It only affects the contents of spec files in dist-git repos.

TODO: However, certain key infrastructure changes could improve the usability of this change or mitigate its side effects. In particular, once this grows beyond the experimental phase, these test packages need to be put into a separate repository, similar to -debuginfo.

  • How much effort is that to set up?
  • Does this require any additional tags, keywords, or other explicit declaration in the spec file, other than "this RPM ships something in /usr/tests/*"?

User Experience

A standard way to package tests benefits Fedora stability, and makes Fedora better for users.

Users could also benefit by having tests that they can reproduce on their own systems. They could install them similarly to how they consume %{name}-doc or %{name}-debuginfo subpackages today.

We may choose to avoid having such packages available in the standard repositories. We may choose to only have them in updates-testing or an arrangement similar to debuginfo. These choices will require some markup and/or change to infrastructure.

Upgrade/compatibility impact

Although there may already be packages named %{name}-tests, this is merely a convention, and such packages will not affect the behavior of this proposal.

Comparison with Debian's autopkgtest

Debian/Ubuntu have used CI with packaged tests (called "autopkgtests") for many years, with over 7,000 tests. These are good candidates, or at least starting points, for bringing into Fedora packages. This section compares the structure of autopkgtest with this proposal in order to learn from autopkgtest's experience, take what works, and justify the differences. See the format definition for details.

Packaging

Similarities: Both specifications use an existing test metadata format (RPM spec files with Requires: here, Debian RFC822 control files with Depends: in autopkgtest).

Differences:

  • This specification requires packaging tests as binary RPM packages, whereas autopkgtest opted for keeping the tests in the source package (the equivalent of dist-git) only. The latter avoids the overhead of packaging the tests and having to create a separate archive for them. An important point is also that installing an RPM -tests package requires root privileges, while invoking autopkgtest doesn't.
  • As autopkgtest uses a separate control file (debian/tests/control instead of debian/control which describes the binary packages), it offers a much richer set of test metadata which cannot be expressed with debian/control or RPM spec files.

Invocation

Similarities: The test interface is very similar: in both specifications, a test is an executable (in any script or compiled language), the exit code is the primary indicator of pass/fail, the executable's stdout/stderr is a standard test artifact (the "test log"), and tests can write additional artifacts into the $AUTOPKGTEST_ARTIFACTS dir (like ./artifacts/ here).

Differences:

  • By default, autopkgtest considers a test as failed if it produces anything on stderr, to catch unexpected new warnings. This can be disabled by adding Restrictions: allow-stderr to the test metadata. However, this turned out not to be very useful, and tests which want to intercept warnings are better off doing that themselves.
  • autopkgtest has no concept of passing test subjects to the test. Tests expect that their subjects are already available/installed, i.e. they get called in a testbed of the desired kind and state. It is the responsibility of the autopkgtest command line tool (the "test driver/executor") to install the proposed new package(s) into the testbed (due to its origin of being primarily focused on testing packages). For testing desktop/cloud images, upgrades, or other non-package subjects, it is instead the testing system's responsibility to produce the desired testbed and call autopkgtest on it. As this specification puts staging into the hands of the test instead of the testing system, passing the test subject is a necessary consequence.

Staging

Similarities: Both specifications use standard dpkg/rpm package dependencies (Depends: for dpkg, Requires: for rpm) to pull in test dependencies, and both can opt into doing their own provisioning of containers/VMs etc. for outside-in tests instead of in-situ ones. However, of all the ~7,000 autopkgtests, only a small handful actually do that (known cases are systemd and open-iscsi), as the vast majority of package/upgrade/image tests can (because it is sufficient) and should (because it is orders of magnitude faster) be run in-situ.

Differences: Here the test itself is responsible for installing the test subjects, while in autopkgtest it's the testing system's responsibility (see above).

Discovery

Similarities: The idea is the same in both specifications. Here, as soon as there is a binary package that ships /usr/tests/* it can be discovered through file lists. In autopkgtest, as soon as there is a debian/tests/control, the source package index entry will automatically get a Testsuite: autopkgtest tag. So in both cases the developer does not need to explicitly do anything other than adding the tests.

Differences: None concerning the interface, just technical implementation details due to how rpm/dpkg work.

Examples

What follows are examples of writing and/or packaging existing tests to this standard.

There is a mock test system, which is a simple shell script: run-installed-test. It runs all of /usr/tests/*, can pass arbitrary subjects to the tests, and reports/captures the results and logs. This is purely to study what a CI system would do and whether the standard interface works.

Example: Simple in-situ test

Add simple downstream integration test for gzip:
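
The test executable behind this example might look roughly like the sketch below, reconstructed from the log output further down; the actual script lives in the gzip dist-git tests/ directory and may differ in detail:

 #!/bin/sh -eux
 # Sketch of /usr/tests/gzip/test-simple (illustrative, see the logs below).
 # If an RPM subject was staged and we are running as root, upgrade to it first.
 if [ -n "$(ls subjects/*.rpm 2>/dev/null)" ] && [ -w / ]; then
     rpm --verbose --force -U subjects/*.rpm
 fi
 # Round-trip a small file through gzip/gunzip and verify it is unchanged.
 echo Bla > bla.file
 cp bla.file bla.file.orig
 gzip bla.file
 gunzip bla.file.gz
 cmp bla.file bla.file.orig
 rm bla.file bla.file.orig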

With this you can install the test RPM from the gzip repo above:

 $ sudo rpm -i results_gzip/1.8/2.fc27/gzip-tests-1.8-2.fc25.x86_64.rpm

and run the gzip tests against the already installed package (as a regular user) with

 $ ~/run-installed-test
 Subjects/artifacts directory: /tmp/test.vsR
 -----------------------------------------
 Running /usr/tests/gzip/test-simple
 -----------------------------------------
 ++ ls 'subjects/*.rpm'
 + echo Bla
 + cp bla.file bla.file.orig
 + gzip bla.file
 + gunzip bla.file.gz
 + cmp bla.file bla.file.orig
 + rm bla.file bla.file.orig
 PASS: /usr/tests/gzip/test-simple
 $ ls -l /tmp/test.vsR/artifacts/
 -rw-r--r-- 1 martin martin 156 Mar 28 16:49 test-simple.log

or run them as root (as officially specified) with a subject (locally built gzip RPM):

 $ sudo ~/run-installed-test results_gzip/1.8/2.fc27/gzip-1.8-2.fc25.x86_64.rpm
 Installing subject results_gzip/1.8/2.fc27/gzip-1.8-2.fc25.x86_64.rpm
 Subjects/artifacts directory: /tmp/test.Cck
 -----------------------------------------
 Running /usr/tests/gzip/test-simple
 -----------------------------------------
 ++ ls subjects/gzip-1.8-2.fc25.x86_64.rpm
 + '[' -w / ']'
 + rpm --verbose --force -U subjects/gzip-1.8-2.fc25.x86_64.rpm
 Preparing packages...
 gzip-1.8-2.fc25.x86_64
 + echo Bla
 + cp bla.file bla.file.orig
 + gzip bla.file
 + gunzip bla.file.gz
 + cmp bla.file bla.file.orig
 + rm bla.file bla.file.orig
 PASS: /usr/tests/gzip/test-simple

Example: GNOME style "Installed Tests"

Add a downstream integration test that runs the GNOME-style installed tests.

Example: Tests run in Docker Container

Add an integration test running the glib2 installed tests in a Docker container. This is also an example of two different tests packages being created from the same dist-git repo.

Example: Modularity testing Framework

TODO: Port an example

Example: Ansible with Atomic Host

TODO: Port an existing test

Example: Beakerlib based test

Port a beakerlib test to this standard interface:

  • dist-git: https://github.com/stefwalter/setup
  • Reference: https://www.mankier.com/1/beakerlib#Examples

Example: Cockpit upstream test

Run the upstream integration tests, which use VMs through libvirt, in a Docker container; the entire libvirt/bridge setup is confined to the container, so this can be run without interfering with the host system.

Evaluation

Instructions: Copy the block below, sign your name and fill in each section with your evaluation of that aspect. Add additional bullet points with overall summary or notes.

Full Name -- Signature

  • Summary: ...
  • Staging: ...
  • Invocation: ...
  • Discovery: ...

Stef Walter -- Stefw

  • Summary:
    • Disclaimer: I am one of the owners above.
    • PRO: RPM is used for staging. RPM and YUM-style repositories are a standard part of Fedora. No other technology is involved in the standard.
    • PRO: Simple Unix invocation mechanism: executable + stdin/stdout + environment variables.
    • CON: RPM has a learning curve. Although a dist-git maintainer is required to already know about this.
    • CON: /usr/tests is a new FHS directory, should probably be /usr/libexec/tests.
    • CON: Only a partial way to describe whether tests are compatible with or conflict with a specific NVR of test subjects.
    • CON: The *-tests packages may require special handling if the distro does not want to have users able to install/run tests.
  • Staging:
    • Requires rpm and yum/dnf as well known staging dependencies.
    • The *-tests suffix is implied by the standard, not required. Is this confusing?
  • Invocation:
    • The standard describes how multiple test suites can be staged together and executed in one shot.
  • Discovery:
    • An NVR is the unique identifier for a test suite.
    • This uses capabilities of how YUM repositories work, but requires no additional technology.


Pierre-Yves Chibon -- pingou

  • Summary:
    • PRO: RPM is used and well-known by all packagers.
    • CON: It would require buy-in by the Fedora Packaging Committee and documentation in the Fedora Package Guidelines
    • CON: Complexifies spec files, some of which are already quite complex/unreadable
    • CON: What about auto-generated spec files? (Think TeX Live)
    • CON: Requires local tooling to run the tests (or rpm -ql <foo>-test first to find the executables)
    • CON: /usr/tests needs to be changed
    • CON: How are test subjects supported?
    • CON: Will take up space on the mirror
    • CON: Packagers and QA/testers are working on the same file all the time: high chance that PRs will conflict, higher chance of disagreement among contributors
  • Staging:
    • PRO: Requires rpm and yum/dnf as well known staging dependencies.
    • CON: How is test subject supported?
  • Invocation:
    • CON: The standard describes how multiple test suites can be staged together and executed, but not in one shot: first install, then run. -> May need a wrapper tool to do both in one go.
  • Discovery:
    • PRO: An NVR is the unique identifier for a test suite.
      • This also means that the test suite may change NVR while its content has not changed, or we would need to define an EVR just for the -test sub-package
    • PRO: This uses capabilities of how RPM repositories work, but requires no additional technology.
    • CON: This implies tracking the dependencies at two distinct places, the main package then the -test sub-package


Tim Flink -- Tflink

  • Summary:
    • PRO: No significantly new technology, no huge requirements for additional software development
    • PRO: Paradigm works really well for package-specific tests
    • CON: All involved folks need to learn RPM packaging
    • CON: RPM packaging overhead once folks learn RPM packaging
    • CON: Not sure how well the paradigm works for non-package testing (containers, images, etc.)
  • Staging:
    • Nothing to add here, same concerns about convention over having a standard.
    • Not clear on how the test subject is found or passed into the framework? How are tests modified during development? How does one find the correct NVR for the test rpm at staging time?
  • Invocation:
    • What kinds of features would we eventually want to see in the run-installed-test script?
    • How do we differentiate between in-situ and outside-in tests? Is this needed?
  • Discovery:
    • Are there existing tools to do the RPM to SRPM mapping?

Dennis Gilmore -- Ausil

  • Summary:
    • PRO: works well for singular tests
    • PRO: with tests in package branches it is easy to map tests to package NVRs
    • CON: difficult to map tests to package groups
    • CON: high cost of entry to people not familiar with packaging.
    • CON: easy for people to do non-standard things, making it hard to validate changes to tests.
    • CON: proper rpm/yum/dnf support may be difficult to implement
  • Staging:
    • the opt-in use of containers/VMs for staging seems wildly open to interpretation and prone to people doing incompatible things
  • Invocation:
    • Nothing extra to add here that's not already covered
  • Discovery:
    • Seems like a sane way to do it; would need changes to all of the compose tools in order to make test repos like the debug repos for debuginfo; unknown how invasive it would be to do in rpm.


Micah Abbott -- miabbott

  • Summary:
    • Disclaimer: I was pulled into this evaluation later in the game and may be missing some context/pieces of the larger effort.
    • PRO: well suited for unit tests/simple integration tests
    • PRO: RPM spec files well defined and understood
    • CON: doesn't feel well suited for outside-in integration testing
    • CON: learning RPM packaging could be cumbersome
  • Staging:
    • The in-situ test example makes sense for this approach; the outside-in case seems like it could lead to multiple, conflicting provisioning solutions.
  • Invocation:
    • The steps outlined are a great template. I think they could be used by either approach.
  • Discovery:
    • The TODO about other artifacts is worrisome, especially if we intend to run integration tests on an ostree compose.

Dusty Mabe -- dustymabe

  • Summary:
    • While rpms are well defined and an extension of something we already know well, I don't really like the overhead of creating rpms out of the tests, and I especially don't like trying to figure out how to define tests to fit inside test rpms for higher-level artifacts that are not delivered as rpms.
    • PRO: dependencies (at least rpm deps) are clearly defined via well known methods
    • CON: overhead of creating rpms for test content
  • Staging:
    • CON: the outside-in tests would be a bit awkward I think. having a "tests" rpm that requires docker or libvirt to run would not be an ideal dependency relationship
    • CON: it may be a foregone conclusion, but learning rpm to contribute a new test might be a roadblock for some new contributors
  • Invocation:
    • PRO: well defined
    • CON: simple shell script is nice because it is simple, but could have limitations
  • Discovery:
    • PRO: discoverability of tests because they are in a yum repo is nice
    • CON: how do test rpms work for testing higher-level artifacts, like qcow images?


Nick Coghlan -- Ncoghlan

  • Summary:
    • Ansible offers a lot more flexibility than RPM in managing complex test resources (VMs, users, etc), as well as installing test dependencies that aren't themselves packaged as RPMs
    • However, it's likely to be overkill for simple projects that just need to re-run their standard tests on a fully installed system
    • Regardless of which option is chosen, a standard shim should be provided to bootstrap the other (so if using packaged tests, have a boilerplate *-tests subpackage definition that bootstraps an Ansible based test)
    • With RPM, two boilerplate templates could be provided: one for running a shell script from the source package, one for running an Ansible playbook from dist-git
  • Staging:
    • PRO: Running a shell script on the current system is as simple as it can get for non-intrusive tests
    • PRO: It seems easier to use RPM to consistently bootstrap an Ansible(/Vagrant/Docker)-based test than vice-versa
    • CON: relying on a boilerplate snippet in spec files to bootstrap Ansible based tests is a recipe for those snippets bitrotting over time. This could be mitigated with a helper package, though.
  • Invocation:
    • PRO: The spec file inherently has a lot of access to information about the component being tested
    • CON: Some virtual packages and package groups may need a dedicated SRPM just to define a test package, or else there would need to be a way to define module level test RPMs
  • Discovery:
    • PRO: Having the test discovery metadata integrated into the RPM database should provide some benefits in sharing tests across distributions even if they don't share a dist-git instance
    • CON: As with invocation, some virtual packages and package groups may need a dedicated SRPM just to define a test package, or else there would need to be a way to define module level test RPMs