(Note that this proposal was chosen.)
 
= Ansible: Standard Discovery, Staging, Invocation of Integration Tests =


{{admon/warning|This proposal was selected and is kept here for historical reasons, including evaluation info below.
There's a ''discussion'' tab above.}}


== Summary ==
'''First see the [https://fedoraproject.org/wiki/Changes/InvokingTests#Terminology Terminology], the [https://fedoraproject.org/wiki/Changes/InvokingTests#Terminology division of Responsibilities] and the [https://fedoraproject.org/wiki/Changes/InvokingTests#Requirements Requirements]'''
 
Let's define a clear delineation between a ''test suite'' (including its framework) and the CI system that runs the test suite. This is the standard interface.


[[File:Invoking-tests-standard-interface.png|800px]]


What follows is a standard way to discover, package and invoke integration tests for a package stored in a Fedora dist-git repo.
== Detailed Description ==


Many Fedora packages have unit tests. These tests are typically run during a <code>%check</code> RPM build step and run in a build root. Integration testing, on the other hand, should happen against a composed system. Upstream projects have integration tests; both Fedora QA and the Atomic Host team would like to create more integration tests; and Red Hat would like to bring integration tests upstream.
This standard interface describes how to discover, stage and invoke tests. It is important to cleanly separate implementation details of the ''testing system'' from the ''test suite'' and its framework. It is also important to allow Fedora packagers to locally and manually invoke a ''test suite''.


'''First see the [https://fedoraproject.org/wiki/Changes/InvokingTests#Terminology Terminology], the [https://fedoraproject.org/wiki/Changes/InvokingTests#Terminology division of Responsibilities] and the [https://fedoraproject.org/wiki/Changes/InvokingTests#Requirements Requirements]'''

=== Staging ===

Test files will be added into the <code>tests/</code> folder of a dist-git repository branch. The structure of the files and folders is left to the liberty of the packagers, but there are one or more playbooks in the <code>tests/</code> folder that can be invoked to run the test suites.

# The ''testing system'' SHOULD stage the tests on a Fedora operating system appropriate for the branch name of the dist-git repository containing the tests.
# The ''testing system'' SHOULD stage a clean system for each set of tests it runs.
# The ''testing system'' MUST stage the following packages:
## <code>ansible python2-dnf libselinux-python</code>
# The ''testing system'' MUST clone the dist-git repository for the test and check out the appropriate branch.
# The contents of <code>/etc/yum.repos.d</code> on the staged system SHOULD be replaced with repository information that reflects the known good Fedora packages corresponding to the branch of the dist-git repository.
## The ''testing system'' MAY use multiple repositories, including ''updates'' or ''updates-testing'', to ensure this.
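The staging steps above can be sketched as a short script. This is an illustrative sketch only: the commands are echoed rather than executed so it runs anywhere, and the repository URL and branch name are assumptions, not part of the proposal.

```shell
# Sketch of a testing system's staging sequence (commands echoed, not run).
branch="f26"                                        # assumed branch under test
repo="https://src.fedoraproject.org/rpms/sed.git"   # illustrative dist-git repo

# MUST: stage the well-known packages
echo "dnf install -y ansible python2-dnf libselinux-python"
# MUST: clone the dist-git repository and check out the matching branch
echo "git clone -b $branch $repo checkout"
```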


=== Invocation ===

The ''testing system'' MUST select a playbook in the <code>tests/</code> folder depending on the type of ''test subject'' it would like to test. The filename of each of these playbooks starts with the <code>test_</code> prefix and ends with a <code>.yml</code> extension. The following well-known playbooks correspond to common ''test subjects''. Additional playbooks will be added to this list as additional ''test subjects'' become common:


{|
! Playbook invoked !! Test subject
|-
| test_rpm.yml    || A string containing a space separated list of rpm filenames
|-
| test_repo.yml    || A string containing a space separated list of repo filenames appropriate for <code>/etc/yum.repos.d</code>
|-
| test_cloud.yml  || A string containing the filename of one virtual machine disk image bootable with cloud-init
|-
| test_oci.yml || A string containing the filename of one OCI container image filesystem bundle
|-
| test_local.yml  || An empty string. No test subject or installation.
|}


If a playbook for a given ''test subject'' is not present in a dist-git repository, the ''testing system'' SHOULD treat the test as having been "skipped". That is, the invocation SHOULD neither pass nor fail.


The <code>test_local.yml</code> playbook SHOULD test a booted system where the test suite, its framework, and the test subject are already installed. This playbook is usually invoked by the other playbooks. Additional playbooks may be present in the <code>tests/</code> folder, and these MAY represent multiple test suites. The testing system is not expected to be aware of these additional playbooks.


To invoke the selected playbook, the ''testing system'':


# MUST execute the playbook locally with <code>ansible_connection=local</code> and host <code>localhost</code>
# MUST execute the playbook with the following variables.
## <code>subjects</code>: The ''test subjects'' string as described above
## <code>artifacts</code>: The full path of an empty folder for ''test artifacts''
# MUST execute the playbook as root.
# MUST examine the exit code of the playbook. A zero exit code is successful ''test result'', non-zero is failure.
# MUST treat the file <code>test.log</code> in the <code>artifacts</code> folder as the main readable output of the test.
# SHOULD place the textual stdout/stderr of the <code>ansible-playbook</code> command in the <code>ansible.log</code> file in the <code>artifacts</code> folder.
# SHOULD treat the contents of the <code>artifacts</code> folder as the ''test artifacts''.
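A minimal sketch of how a testing system might apply the invocation rules above. The <code>ansible-playbook</code> call is replaced by the shell builtin <code>true</code> so the sketch runs without Ansible installed; paths are illustrative.

```shell
# Run the selected playbook as root with 'subjects' and 'artifacts' set,
# keep the command's own stdout/stderr in ansible.log, and map the exit
# code to a test result. 'true' stands in for the real command:
#   sudo ansible-playbook tests/test_rpm.yml -e subjects=... -e artifacts=...
artifacts="$PWD/artifacts"
mkdir -p "$artifacts"
if true > "$artifacts/ansible.log" 2>&1; then
    result="pass"   # zero exit code: successful test result
else
    result="fail"   # non-zero exit code: failure
fi
echo "$result"
```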


The playbook and its ''test suite'' or ''test framework'':


# SHOULD drop privileges appropriately if the ''test suite'' should be run as non-root.
# MUST install any requirements of its ''test suite'' or ''test framework'' and MUST fail if this is not possible.
# MUST provision the ''test subject'' listed in the <code>subjects</code> variable appropriately for its playbook name (described above) and MUST fail if this is not possible.
# MUST place the main readable output of the ''test suite'' into a <code>test.log</code> file in the folder given by the <code>artifacts</code> variable. This MUST happen even if some of the test suites fail.
# SHOULD place additional ''test artifacts'' in the folder defined in the <code>artifacts</code> variable.
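As an illustration, a minimal playbook honoring this contract might look like the following. This is a sketch, not an official template: the <code>test-simple</code> script is a hypothetical test, and a real playbook would also declare and install its own framework dependencies.

```yaml
---
# Hypothetical tests/test_rpm.yml following the invocation contract.
- hosts: localhost
  tasks:
    - name: Install the test subjects handed over by the testing system
      shell: dnf install -y {{ subjects }}

    - name: Run the test suite, keeping the main readable output in test.log
      shell: ./test-simple > {{ artifacts }}/test.log 2>&1
```

The testing system would then invoke this playbook as root on <code>localhost</code>, passing <code>subjects</code> and <code>artifacts</code> with <code>-e</code>.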


=== Discovery ===


A testing system needs to be able to efficiently answer the question "does this subject have any test packages, and if so, what are their names?". This should be automatically discoverable to the extent possible.


Use repoquery, basically I propose we rely on the dependency chain of the
[...]
"foo" requires which we currently have)
and we should be able to build a list of dependencies.
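For instance, the dependency-chain query for a package could be as simple as the following (echoed here so the sketch runs without a configured package database; <code>foo</code> is the placeholder package name from the text above):

```shell
# Build the repoquery command that lists what requires package 'foo';
# a testing system would run this against the appropriate Fedora repos.
pkg="foo"
cmd="repoquery --whatrequires $pkg"
echo "$cmd"
```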
== Test Output Collection ==
This will enable us to collect full, consistent output to report with the Ansible invocation, regardless of the test's own output format:
https://github.com/openstack/ara


In addition, a ''test suite'' can be uniquely identified using the git hash of the commit of the git repo.

[...]


Only the testing system will need to be taught to install the requirements and run the playbooks.
== Benefit to Fedora ==
Developers benefit by having a consistent target for how to describe tests, while also being able to execute them locally while debugging issues or iterating on tests.

By staging and invoking tests consistently in Fedora we create an ecosystem for the tests that allows varied test frameworks as well as CI system infrastructure to interoperate. The integration tests outlast the implementation details of either the frameworks they're written in or the CI systems running them.


== User Experience ==

[...]




== Examples ==

What follows are examples of writing and/or packaging existing tests to this standard. This is how to run the various examples:

* <code>test_rpm.yml</code>
  $ fedpkg local
  $ mkdir -p ./artifacts
  $ sudo ansible-playbook tests/test_rpm.yml -e artifacts=$PWD/artifacts -e subjects=$PWD/x86_64/sed-4*.x86_64.rpm

* <code>test_local.yml</code>
  $ mkdir -p ./artifacts
  $ sudo ansible-playbook tests/test_local.yml -e artifacts=$PWD/artifacts -e subjects=

* <code>test_cloud.yml</code>
  $ mkdir -p ./artifacts
  $ curl -o cloud.qcow2 https://s3.amazonaws.com/fedora-atomic-s3/Fedora-26-20170331.n.0/Fedora-Atomic-26-20170331.n.0.x86_64.qcow2
  $ sudo ansible-playbook tests/test_cloud.yml -e artifacts=$PWD/artifacts -e subjects=$PWD/cloud.qcow2

* <code>test_oci.yml</code>
** No examples here yet

* <code>test_repo.yml</code>
  $ mkdir -p ./artifacts
  ... get a repo file ...
  $ sudo ansible-playbook tests/test_repo.yml -e artifacts=$PWD/artifacts -e subjects=$PWD/haproxy.repo

=== Example: Simple in-situ test ===

Copy of Debian 'gzip' test:

* Package: '''gzip'''
* dist-git: https://github.com/stefwalter/gzip-dist-git/commits/ansible-test
* Reference: https://patches.ubuntu.com/g/gzip/

=== Example: GNOME style "Installed Tests" ===

Upstream glib2-tests being executed according to this standard interface:

* Package: '''glib2'''
* dist-git repo: https://github.com/stefwalter/glib2-dist-git/tree/ansible-test
* Reference: https://wiki.gnome.org/Initiatives/GnomeGoals/InstalledTests

=== Example: Tests run in Docker Container ===

'''WARNING''': Not yet migrated to above spec changes.

An integration test running tests in a docker container can be found at: https://pagure.io/ansible_based_tests/blob/master/f/tests/glib2

Full example structure: https://pagure.io/ansible_based_tests/blob/master/f/tests/glib2/playbooks


=== Example: Modularity testing Framework ===

Module testing framework tests wrapped in this standard interface:
 
* Module: '''haproxy'''
* dist-git repo: https://github.com/stefwalter/haproxy-dist-git/tree/ansible-test
* Example repo file:
 
  [haproxy-repo-test-subject]
  name=Example haproxy repo test subject
  baseurl=http://kojipkgs.fedoraproject.org/repos/module-8e83a5f6f6ed55ca/latest/x86_64/
  gpgcheck=0
  enabled=1


=== Example: Ansible with Atomic Host ===

'''TODO:''' Port [https://github.com/projectatomic/atomic-host-tests an existing test]


=== Example: Beakerlib based test ===

Beakerlib tests of 'sed' package:
 
* Package: '''sed'''
* dist-git: https://github.com/stefwalter/sed-dist-git/commits/ansible-test
* Reference: Ported upstream
 
Beakerlib test of 'setup' package:
 
* Package: '''setup'''
* dist-git: https://github.com/stefwalter/setup-dist-git/commits/ansible-test
* Reference: https://www.mankier.com/1/beakerlib#Examples
 
Beakerlib test of 'coreutils' package:
 
* Package: '''coreutils'''
* dist-git: https://github.com/stefwalter/coreutils-dist-git/commits/ansible-test
* Reference: https://www.mankier.com/1/beakerlib#Examples
 
=== Example: Full Structure ===
 
  .
  └── tests
    └── test-case
    └── config
    └── group_vars
    └── roles
    │  └── configure
    │  │  └── defaults
    │  │  └── files
    │  │  └── handlers
    │  │  └── meta
    │  │  └── tasks
    │  │  └── templates
    │  │  └── vars
    │  └── rpm
    │  │  └── defaults
    │  │  └── files
    │  │  └── handlers
    │  │  └── meta
    │  │  └── tasks
    │  │  └── templates
    │  │  └── vars
    └── test_rpm.yml
    └── test_local.yml
 
Tests will live under the <code>tests</code> directory in a dist-git repo. The <code>roles</code> directory defines the roles for configuration and execution of the tests.
The <code>test_rpm.yml</code> playbook calls the necessary roles; dependencies on other roles can be defined there or in the meta of another role (this is well documented for writing Ansible playbooks).
The <code>config</code> directory is a placeholder for configuration files needed for the tests or for provisioning (thinking of linch-pin https://github.com/CentOS-PaaS-SIG/linch-pin).
''Note: This does not mean all these role sub-directories are required; this just shows a full example case.''
 
'''Note:''' The common Ansible roles that can be shared between tests have been consolidated into a <code>standard-test-roles</code> [https://pagure.io/standard-test-roles Pagure repository] and [https://admin.fedoraproject.org/pkgdb/package/rpms/standard-test-roles/ RPM package].


== Evaluation ==

[...]
** CON: If tests become a core Fedora concept (which we hope), Ansible becomes a core technology that Fedora requires and is built upon.
** CON: Most Ansible modules require Python 2.x while the distro is trying to move to Python 3.x
*** Python 3 is supported for most common modules since 2.2 --[[User:Misc|Misc]] ([[User talk:Misc|talk]]) 12:32, 19 April 2017 (UTC)
** CON: No standard mechanism for passing a test subject to a test suite implementing the standard test interface
** CON: No standard mechanism for reporting test log, or test artifacts from standard interface
** CON: No way to describe whether tests are compatible with or conflict with specific NVR of test subjects.
* ''Staging:''
** No mechanism for passing a test subject (eg: a built package, a module, or a container) to the test suite to operate on.
** No guidance on what Ansible modules should be used to install test dependencies
** No mechanism for a test system to control which repo of known-good packages to pull test or test suite dependencies from.
** Requires sudo, dnf, git, ansible, python2-dnf, libselinux-python as well known staging dependencies
* ''Invocation:''
** Seems that zero exit code from sudo means success, non-zero exit code means failure? Not described explicitly in standard.
[...]
** Mechanism is simple, but no concrete description of how exactly this works. How does a testing system find tests given a test subject such as an RPM or NVR?
** MDAPI link is broken: https://apps.fedoraproject.org/mdapi/
*** This has been fixed --[[User:Pingou|Pingou]] ([[User talk:Pingou|talk]]) 08:03, 12 April 2017 (UTC)
'''Martin Pitt''' -- mpitt
* ''Summary:''
** I agree to what Stef said above, so I just add my "delta" review.
** PRO: I prefer keeping tests in the sources (like in this proposal) over packaging tests, as it's much less overhead for the packager and avoids having to create a new kind of package archive.
** CON: My main concern is that the Ansible format/tool might be replaced with something else in a few years, but the test format should be stable for a long time to avoid having to port hundreds/thousands of tests.
** CON: The ansible format is relatively verbose and too procedural for my taste; I prefer a purely declarative syntax and avoiding boilerplate for installing test deps and invoking the tests.
* ''Staging:''
** Not supporting test subjects is a major gap in the prototype - this is one of the core requirements here!
** Installing the actual tests is unnecessary overhead in the playbook, and clutters the host system with files in <code>/usr</code> that don't belong to a package; this can be rectified though with dropping the "Create folder"/"Install" tasks and replacing the run part with
  <pre>
- name: Execute the tests
  script: files/test-simple</pre>
* ''Invocation:''
** Getting live logs from the test and also saving it as an artifact is crucial, this is a major  gap in the prototype. Can ansible do this somehow?
* ''Discovery:''
** Checking out and inspecting hundreds/thousands of dist-gits whether they contain tests does not meet "able to efficiently answer the question..."; this needs a new service which regularly indexes all dist-gits and creates list of source packages that have tests.
'''Pierre-Yves Chibon''' -- pingou
* Disclaimer: I am one of the owners above.
* ''Summary:''
** PRO: Ansible is a well-known technology for sysadmins, making it easier for them to contribute tests
** CON: While well-known to some people, it will be new to others
** PRO: Very flexible: it gives the packagers all the flexibility to install/configure/run their tests as they wish
** PRO: We could use --tag to allow running just a part of the test suite at certain times (''-t PR'' to run on pull-requests, ''-t updates'' to run on bodhi updates...)
** CON: We may need to "regulate" the flexibility to suggest a set of standard/gold practices to be used in the test system (using different tags or playbooks if we want)
* ''Staging'':
** PRO: its flexibility makes it easy to test anything
** CON: we will need to write policies/guidelines on how to test the different subject (RPM, container, images...)
* ''Invocation:''
** PRO: easy to run locally
** PRO: easy to run as root and switch to a local user or vice-versa
** PRO: easy to couple with something like vagrant to allow running locally destructive tests
** CON: May require policy to set expectations and document how to move from one to the other
** CON: Inter-package dependencies are a challenge. This can be overcome with a custom Ansible module allowing us to git clone other dist-git repos while blocking other network accesses (to avoid downloading random things from the internet that may be gone tomorrow and thus kill the reproducibility aspect).
* ''Discovery:''
** Git hash uniquely identifies a test suite
*** Meaning the identifier may change while the test suite itself hasn't
** PRO: Relies on the same dependency chain as the artefacts themselves
** QUESTION: What is the aim here? Do we really want to run all the tests of every perl module for every change made to the perl package? If so, good luck, otherwise ''repoquery --whatrequires <pkg>'' should do what we want.
*** MartinPitt: That's what Debian/Ubuntu do, and indeed that triggers thousands of tests (times 5 architectures). This allows landing new Perl versions with confidence and points out modules that need to be adjusted (and believe me, pretty much every new Perl version breaks some module or two!). That said, it should be ''possible'' to discover tests for that reason - I don't expect our infra to be scalable and fast enough right from the start to actually do testing at that depth.
'''Tim Flink''' -- Tflink
* Disclaimer: I am one of the owners of this proposal
* ''Summary:''
** PRO: Storing tests in this way decouples them from the build process
** PRO: Ansible has better docs and more examples than Fedora packages or RPM do
** PRO: non-packager testers don't have to learn RPM syntax
** PRO: Able to provide a lot more in the way of convenience functions to the test author - galaxy, roles/modules that we provide
** PRO: easy to change tests during devel, does not require a dedicated path in the filesystem
** PRO/CON: More easily extendable
** CON: Adds ansible et al. as a dependency for the test process - what happens if ansible changes or if it becomes unattractive 5-10 years from now?
** CON: Adds additional thing that packagers have to learn
** CON: We would have no control over when/how ansible changes
** It's not incredibly clear what all would be distributed (ansible modules, plugins) or how those would be distributed (galaxy-ish, package, etc.)
* ''Staging:''
** There is no obvious way to say what NVR is under test other than looking at what's installed or what's locally available pre-build
* ''Invocation:''
** Not sure sudo is required, it would likely be easier to have a plugin (if required) that ran things in a temp dir kind of how we do with libtaskotron today
* ''Discovery:''
** While arguably more complex than the <code>-tests</code> package proposal, the additional code that would need to be written doesn't seem to be much more complex
** There are systems already doing some parts of this discovery and could likely be re-used to a certain extent (Taskotron's trigger)
'''Dennis Gilmore''' -- Ausil
* ''Summary:''
** PRO: we could have unique git repos for collections, gnome-desktop, KDE, Atomic Host, Server, etc
** PRO: Docs are good as is support for the format across platforms
** PRO: Branching could be separate from package branching, simplifying workflows
*** I believe the idea is to store the tests in dist-git next to the spec files and patches, so branching would be at the same time --[[User:Pingou|Pingou]] ([[User talk:Pingou|talk]]) 08:03, 12 April 2017 (UTC)
** PRO: should be simple to write validation testing of tests, making sure that people are in compliance.
** CON: Not clear how we should store tests for same package with different git namespaces. for example Cockpit rpm and cockpit container
*** If they are stored in dist-git the tests for the rpm would be stored next to the spec file and the tests for the container next to the Dockerfile or equivalent --[[User:Pingou|Pingou]] ([[User talk:Pingou|talk]]) 08:03, 12 April 2017 (UTC)
** CON: getting started with Ansible for those who do not know it is a steep learning curve
** CON: cannot reuse tools like rpmlint, rpmdiff etc.
*** Could you expand on why? I don't see anything preventing using these tools. --[[User:Pingou|Pingou]] ([[User talk:Pingou|talk]]) 08:03, 12 April 2017 (UTC)
** PRO: seems like we should be able to easily setup a template for a tests repo
** PRO: We should be able to easily put a web interface for adding and editing tests for people not familiar with git
* ''Staging:''
** Using VM's and containers seems to have a much clearer path than the <code>-tests</code> package proposal
* ''Invocation:''
** use of sudo seems very suboptimal.
* ''Discovery:''
** indexing, searching and mapping of tests seems uncovered. Likely we will need to write some tooling to make it useful and easy to find and get for people.
'''Micah Abbott''' -- miabbott
* ''Summary:''
** Disclaimer: I was pulled into this evaluation later in the game and may be missing some context/pieces of the larger effort.
** PRO: Ansible feels easier to read/understand/learn
** PRO: Ansible appears to give more flexibility and options to packagers
** CON: New requirement on Ansible; not a standard install option like rpm/yum/dnf
** CON: Easy to do bad things with Ansible + root user
* ''Staging:''
** Using Ansible here seems to better support the in-situ and outside-in test approaches.  There may still be the issue of multiple, conflicting provisioning solutions.
* ''Invocation:''
** Using root has risks, although widely used when running Ansible playbooks.
* ''Discovery:''
** Using reqoquery seems reasonable enough, although I'd like to see a more concrete example of the whole process.
'''Dusty Mabe''' -- dustymabe
* ''Summary:'' ...
** I think ansible gives a balance of simple & sophisticated tooling to enable us to write simple tests or write complex tests. If a user is not familiar with ansible then they can use an example yaml file to just execute a shell script. More advanced users can ramp up to ansible's potential.
* ''Staging:''
** PRO: storing tests in git and not needing to repacking them into an RPM.
* ''Invocation:''
** PRO: simple: can invoke test by cloning repo and running run_tests.yml
* ''Discovery:''
** CON: not quite sure how this discovery is going to work. are we baking in the rpm some meta about where the tests live?
'''Nick Coghlan''' -- Ncoghlan
* ''Summary:''
** Ansible offers a lot more flexibility than RPM in managing complex test resources (VMs, users, etc), as well as installing test dependencies that aren't themselves packaged as RPMs
** However, it's likely to be overkill for simple projects that just need to re-run their standard tests on a fully installed system
** Regardless of which option is chosen, a standard shim should be provided to bootstrap the other (so if using packaged tests, have a boilerplate `*-tests` subpackage definition that bootstraps an Ansible based test)
** With RPM, two boilerplate templates could be provided: one for running a shell script from the source package, one for running an Ansible playbook from dist-git
* ''Staging:''
** CON: An Ansible-only approach introduces additional complexity in running non-intrusive test suites directly on the current system
* ''Invocation:''
** CON: Spec file helpers can assist in defining test package definitions, but they'd need to be dist-git aware to help define out-of-band test cases
* ''Discovery:''
* CON: Requiring additional metadata outside the RPM database for integration test discovery makes it more difficult to share tests across distributions
'''Michael Scherer''' -- Misc
* ''Summary:''
** PRO: Ansible is well know among Fedora community (and RH sponsored ones), as well as RH QA, from what I see
** CON: Ansible tend to still break too often after each major upgrade, and dependency on it is already a issue for a fe Centos SIG, due to reliance on unspecified trick. For example, Ceph deployment was stuck for a long time on 1.9, there is various issue with ansible-openshift, etc Thus this might requires more resources than expected, and might prove to be a issue
** CON: Lack of metadata to express requirements for tests. I can imagine a need to tests some packages on more than 1 server, or have some tests that are more destructive than others. So we need more than just ansible playbook for that.
** PRO: written in yaml, thus permitting some form of static analysis
[[Category:FedoraAtomicCi]]
[[Category:FedoraCi]]

Latest revision as of 16:30, 18 June 2017

Ansible: Standard Discovery, Staging, Invocation of Integration Tests

Warning: This proposal was selected and is kept here for historical reasons, including evaluation info below. There's a discussion tab above.

First see the Terminology, division of Responsibilities, and Requirements sections of the Invoking Tests change proposal.

[Figure: Invoking-tests-standard-interface.png, a diagram of the standard interface between the testing system and a test suite]

Detailed Description

This standard interface describes how to discover, stage and invoke tests. It is important to cleanly separate implementation details of the testing system from the test suite and its framework. It is also important to allow Fedora packagers to locally and manually invoke a test suite.

First see the Terminology, division of Responsibilities, and Requirements sections of the Invoking Tests change proposal.

Staging

Test files are added to the tests/ folder of a dist-git repository branch. The structure of the files and folders within it is left to the discretion of the packagers, but there must be one or more playbooks in the tests/ folder that can be invoked to run the test suites.

  1. The testing system SHOULD stage the tests on a Fedora operating system appropriate for the branch name of the dist-git repository containing the tests.
  2. The testing system SHOULD stage a clean system for each set of tests it runs.
  3. The testing system MUST stage the following packages:
    1. ansible python2-dnf libselinux-python
  4. The testing system MUST clone the dist-git repository for the test and check out the appropriate branch.
  5. The contents of /etc/yum.repos.d on the staged system SHOULD be replaced with repository information that reflects the known-good Fedora packages corresponding to the branch of the dist-git repository.
    1. The testing system MAY use multiple repositories, including updates or updates-testing, to ensure this.
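The staging steps above can be sketched as a small shell helper. This is a hedged sketch, not part of the standard: the dist-git URL pattern is an assumption, and the PKG_INSTALL/GIT indirection exists only so the sequence can be traced without touching a real system.

```shell
# Sketch of the staging sequence: install the required packages, then
# clone the dist-git repository and check out the appropriate branch.
# PKG_INSTALL and GIT are overridable stubs for illustration.
stage() {
  local pkg="$1" branch="$2"
  ${PKG_INSTALL:-dnf install -y} ansible python2-dnf libselinux-python || return 1
  ${GIT:-git} clone --branch "$branch" "https://src.fedoraproject.org/rpms/$pkg.git" || return 1
  # A real testing system would also replace /etc/yum.repos.d here with
  # known-good repositories for the branch.
}
```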

Invocation

The testing system MUST select a playbook in the tests/ folder depending on the type of test subject it would like to test. The filename of each of these playbooks starts with the test_ prefix and ends with a .yml extension. The following well-known playbooks correspond to common test subjects; additional playbooks will be added to this list as further test subjects become common:

  • test_rpm.yml: a string containing a space-separated list of RPM filenames
  • test_repo.yml: a string containing a space-separated list of repo filenames appropriate for /etc/yum.repos.d
  • test_cloud.yml: a string containing the filename of one virtual machine disk image bootable with cloud-init
  • test_oci.yml: a string containing the filename of one OCI container image filesystem bundle
  • test_local.yml: an empty string; no test subject or installation

If a playbook for a given test subject is not present in a dist-git repository, the testing system SHOULD treat the test as having been "skipped". That is, the invocation SHOULD neither pass nor fail.

The test_local.yml playbook SHOULD test a booted system on which the test suite, its framework, and the test subject are already installed. This playbook is usually invoked by the other playbooks. Additional playbooks may be present in the tests/ folder, and these MAY represent multiple test suites. The testing system is not expected to be aware of these additional playbooks.
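The selection and skip rules above can be expressed as a small helper. This is an illustrative sketch; playbook_for and run_or_skip are hypothetical names, not part of the standard.

```shell
# Map a test-subject type to its well-known playbook, then decide whether
# to run or skip: a missing playbook means the test is "skipped", neither
# passing nor failing.
playbook_for() {
  case "$1" in
    rpm)   echo test_rpm.yml ;;
    repo)  echo test_repo.yml ;;
    cloud) echo test_cloud.yml ;;
    oci)   echo test_oci.yml ;;
    local) echo test_local.yml ;;
    *)     return 1 ;;
  esac
}

run_or_skip() {
  local playbook
  playbook=$(playbook_for "$1") || return 1
  if [ -f "tests/$playbook" ]; then
    echo "run tests/$playbook"
  else
    echo "skip"  # neither pass nor fail
  fi
}
```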

To invoke the selected playbook, the testing system:

  1. MUST execute the playbook locally, with ansible_connection=local and the host localhost
  2. MUST execute the playbook with the following variables:
    1. subjects: the test subjects string as described above
    2. artifacts: the full path of an empty folder for test artifacts
  3. MUST execute the playbook as root.
  4. MUST examine the exit code of the playbook: a zero exit code is a successful test result, a non-zero exit code is a failure.
  5. MUST treat the file test.log in the artifacts folder as the main readable output of the test.
  6. SHOULD place the textual stdout/stderr of the ansible-playbook command in the ansible.log file in the artifacts folder.
  7. SHOULD treat the contents of the artifacts folder as the test artifacts.
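Putting the invocation rules together, a testing system's wrapper might look like the following sketch. The ANSIBLE_PLAYBOOK indirection is an illustrative assumption so the exit-code handling can be exercised without a real Ansible install.

```shell
# Invoke a selected playbook per the rules above: run locally, pass the
# "subjects" and "artifacts" variables, capture stdout/stderr in
# ansible.log, and map the exit code to PASS/FAIL.
invoke_playbook() {
  local playbook="$1" subjects="$2" artifacts="$3"
  mkdir -p "$artifacts"
  if ${ANSIBLE_PLAYBOOK:-ansible-playbook} "$playbook" \
       --inventory localhost, --connection local \
       -e "subjects=$subjects" -e "artifacts=$artifacts" \
       > "$artifacts/ansible.log" 2>&1; then
    echo PASS  # the main readable output is now in $artifacts/test.log
  else
    echo FAIL
  fi
}
```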

The playbook and its test suite or test framework:

  1. SHOULD drop privileges appropriately if the test suite should be run as a non-root user.
  2. MUST install any requirements of its test suite or test framework, and MUST fail if this is not possible.
  3. MUST provision the test subject listed in the subjects variable appropriately for its playbook name (described above), and MUST fail if this is not possible.
  4. MUST place the main readable output of the test suite into a test.log file in the folder given by the artifacts variable. This MUST happen even if some of the test suites fail.
  5. SHOULD place additional test artifacts in the folder given by the artifacts variable.
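A minimal playbook satisfying these obligations might look like the following sketch. The run-tests.sh script, the gcc requirement, and the nobody user are illustrative assumptions, not part of the standard:

```yaml
---
- hosts: localhost
  tasks:
    # Install any requirements of the test suite; the play fails if this
    # is not possible, as required above.
    - name: Install test suite requirements
      package:
        name: gcc
        state: present

    # Drop privileges and run the suite, placing the main readable output
    # into test.log under the artifacts folder.
    - name: Run the test suite
      become: yes
      become_user: nobody
      shell: ./run-tests.sh > {{ artifacts }}/test.log 2>&1
      args:
        chdir: "{{ playbook_dir }}"
```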

Discovery

A testing system needs to be able to efficiently answer the question "does this test subject have any test suites, and if so, what are their names?" This should be automatically discoverable to the extent possible.

Use repoquery: rely on the dependency chain of the RPMs themselves instead of trying to replicate it separately.

With repoquery --whatrequires, or an equivalent relying on mdapi (https://apps.fedoraproject.org/mdapi/, which needs to be adjusted to support walking the dependency chain backwards, i.e. finding which packages require "foo" instead of which packages "foo" requires, as it currently does), we should be able to build the list of dependencies.

In addition, a test suite can be uniquely identified by the git hash of the commit in its dist-git repository.
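The repoquery-based discovery could be sketched as follows. tests_to_run is a hypothetical helper, and the REPOQUERY indirection exists purely so the logic can be traced without a configured dnf:

```shell
# Walk the RPM dependency chain backwards: every package that requires the
# changed package is a candidate whose dist-git tests/ folder should be
# checked for playbooks, in addition to the package itself.
reverse_deps() {
  ${REPOQUERY:-dnf repoquery} --whatrequires "$1" --qf '%{name}'
}

tests_to_run() {
  { echo "$1"; reverse_deps "$1"; } | sort -u
}
```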

Scope

Since the tests are added in a sub-folder of the dist-git repo, no changes are required to the Fedora infrastructure, and there is no impact on the packagers' workflow and tooling.

Only the testing system will need to be taught to install the requirements and run the playbooks.

User Experience

A standard way to package, store and run tests benefits Fedora stability, and makes Fedora better for users.

  • This structure makes it easy to run the tests locally, potentially reproducing an error triggered on the test system.
  • Ansible is becoming more and more popular, making it easier for people to contribute new tests.
  • Since Ansible is used by many sysadmins, it could help them bring test cases to packagers and developers for situations where something failed for them.

Upgrade/compatibility impact

There is no real upgrade or compatibility impact. The tests will be branched per release, just as spec files in dist-git are branched now.


Examples

What follows are examples of writing and/or packaging existing tests to this standard. This is how to run the various examples:


  • test_rpm.yml
 $ fedpkg local
 $ mkdir -p ./artifacts
 $ sudo ansible-playbook tests/test_rpm.yml -e artifacts=$PWD/artifacts -e subjects=$PWD/x86_64/sed-4*.x86_64.rpm
  • test_local.yml
 $ mkdir -p ./artifacts
 $ sudo ansible-playbook tests/test_local.yml -e artifacts=$PWD/artifacts -e subjects=
  • test_cloud.yml
 $ mkdir -p ./artifacts
 $ curl -o cloud.qcow2 https://s3.amazonaws.com/fedora-atomic-s3/Fedora-26-20170331.n.0/Fedora-Atomic-26-20170331.n.0.x86_64.qcow2
 $ sudo ansible-playbook tests/test_cloud.yml -e artifacts=$PWD/artifacts -e subjects=$PWD/cloud.qcow2
  • test_oci.yml
    • No examples here yet
  • test_repo.yml
 $ mkdir -p ./artifacts
 ... get a repo file ...
 $ sudo ansible-playbook tests/test_repo.yml -e artifacts=$PWD/artifacts -e subjects=$PWD/haproxy.repo

Example: Simple in-situ test

Copy of Debian 'gzip' test:

Example: GNOME style "Installed Tests"

Upstream glib2-tests being executed according to this standard interface:

Example: Tests run in Docker Container

WARNING: Not yet migrated to above spec changes.

An example integration test that runs its tests in a docker container can be found at https://pagure.io/ansible_based_tests/blob/master/f/tests/glib2; the full example structure is at https://pagure.io/ansible_based_tests/blob/master/f/tests/glib2/playbooks.

Example: Modularity testing Framework

Module testing framework tests wrapped in this standard interface:

 [haproxy-repo-test-subject]
 name=Example haproxy repo test subject
 baseurl=http://kojipkgs.fedoraproject.org/repos/module-8e83a5f6f6ed55ca/latest/x86_64/
 gpgcheck=0
 enabled=1

Example: Ansible with Atomic Host

TODO: Port an existing test

Example: Beakerlib based test

Beakerlib tests of sed package:

Beakerlib test of 'setup' package:

Beakerlib test of 'coreutils' package:

Example: Full Structure

 .
 └── tests
     ├── test-case
     ├── config
     ├── group_vars
     ├── roles
     │   ├── configure
     │   │   ├── defaults
     │   │   ├── files
     │   │   ├── handlers
     │   │   ├── meta
     │   │   ├── tasks
     │   │   ├── templates
     │   │   └── vars
     │   └── rpm
     │       ├── defaults
     │       ├── files
     │       ├── handlers
     │       ├── meta
     │       ├── tasks
     │       ├── templates
     │       └── vars
     ├── test_rpm.yml
     └── test_local.yml

Tests live under the tests directory in a dist-git repo. The roles directory defines the roles for configuration and execution of the tests. The test_rpm.yml playbook calls the necessary roles; dependencies on other roles can be declared there or in the meta of another role (this is well documented in the Ansible documentation on writing playbooks). The config directory is a placeholder for configuration files needed for provisioning (thinking of linch-pin, https://github.com/CentOS-PaaS-SIG/linch-pin). Note: this does not mean all these role sub-directories are required; this just shows a full example case.
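As a sketch, a test_rpm.yml for a structure like the one above might simply apply the example roles. The role names are taken from the example tree; the comments describe one plausible division of labor, not a prescribed one:

```yaml
---
- hosts: localhost
  roles:
    # Role dependencies can also be declared in each role's meta/ folder.
    - role: rpm        # e.g. install the RPMs passed via the subjects variable
    - role: configure  # e.g. prepare the system and run the test cases
```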

Note: The common Ansible roles that can be shared between tests have been consolidated into a standard-test-roles Pagure repository and RPM package.

Evaluation

Instructions: Copy the block below, sign your name and fill in each section with your evaluation of that aspect. Add additional bullet points with overall summary or notes.

Full Name -- Signature

  • Summary: ...
  • Staging: ...
  • Invocation: ...
  • Discovery: ...

Stef Walter -- Stefw

  • Summary:
    • PRO: Ansible is readable and approachable
    • PRO: Tests are stored in the same dist-git repo as the packaging
    • PRO: Inclusion of upstream tests does not seem to require packaging them as RPMs.
    • CON: Ansible is another technology (in addition to RPM spec files, etc.) that packager is required to learn in order to maintain a package in dist-git.
    • CON: If tests become a core Fedora concept (which we hope), Ansible becomes a core technology that Fedora requires and is built upon.
    • CON: Most Ansible modules require Python 2.x while the distro is trying to move to Python 3.x
      • Python 3 is supported for most common modules since 2.2 --Misc (talk) 12:32, 19 April 2017 (UTC)
    • CON: No standard mechanism for passing a test subject to a test suite implementing the standard test interface
    • CON: No standard mechanism for reporting test log, or test artifacts from standard interface
    • CON: No way to describe whether tests are compatible with or conflict with specific NVR of test subjects.
  • Staging:
    • No mechanism for passing a test subject (eg: a built package, a module, or a container) to the test suite to operate on.
    • No guidance on what Ansible modules should be used to install test dependencies
    • No mechanism for a test system to control which repo of known-good packages to pull test or test suite dependencies from.
    • Requires sudo, dnf, git, ansible, python2-dnf, libselinux-python as well known staging dependencies
  • Invocation:
    • Seems that zero exit code from sudo means success, non-zero exit code means failure? Not described explicitly in standard.
    • The use of sudo seems to imply invocation should happen as a non-root user. Is this correct?
    • Does the standard assume sudo is guaranteed to work? Should the sudo part just be dropped and require invocation as root?
    • No mechanism for reporting logs, or test artifacts has been described.
  • Discovery:
    • Mechanism is simple, but no concrete description of how exactly this works. How does a testing system find tests given a test subject such as an RPM or NVR?
    • MDAPI link is broken: https://apps.fedoraproject.org/mdapi/
      • This has been fixed --Pingou (talk) 08:03, 12 April 2017 (UTC)

Martin Pitt -- mpitt

  • Summary:
    • I agree to what Stef said above, so I just add my "delta" review.
    • PRO: I prefer keeping tests in the sources (like in this proposal) over packaging tests, as it's much less overhead for the packager and avoids having to create a new kind of package archive.
    • CON: My main concern is that the Ansible format/tool might be replaced with something else in a few years, but the test format should be stable for a long time to avoid having to port hundreds/thousands of tests.
    • CON: The ansible format is relatively verbose and too procedural for my taste; I prefer a purely declarative syntax and avoiding boilerplate for installing test deps and invoking the tests.
  • Staging:
    • Not supporting test subjects is a major gap in the prototype - this is one of the core requirements here!
    • Installing the actual tests is unnecessary overhead in the playbook, and clutters the host system with files in /usr that don't belong to a package; this can be rectified though with dropping the "Create folder"/"Install" tasks and replacing the run part with
- name: Execute the tests
  script: files/test-simple
  • Invocation:
    • Getting live logs from the test and also saving it as an artifact is crucial, this is a major gap in the prototype. Can ansible do this somehow?
  • Discovery:
    • Checking out and inspecting hundreds or thousands of dist-gits for whether they contain tests does not meet "able to efficiently answer the question..."; this needs a new service that regularly indexes all dist-gits and creates a list of source packages that have tests.


Pierre-Yves Chibon -- pingou

  • Disclaimer: I am one of the owners above.
  • Summary:
    • PRO: Ansible is a well-known technology among sysadmins, making it easier for them to contribute tests
    • CON: While well-known to some people, it will be new for others
    • PRO: Very flexible: it gives the packagers all the flexibility to install/configure/run their tests as they wish
    • PRO: We could use --tag to allow running just a part of the test suite at certain times (-t PR to run on pull-requests, -t updates to run on bodhi updates...)
    • CON: We may need to "regulate" the flexibility and suggest a set of standard/gold practices to be used in the test system (using different tags or playbooks if we want)
  • Staging:
    • PRO: its flexibility makes it easy to test anything
    • CON: we will need to write policies/guidelines on how to test the different subjects (RPM, container, images...)
  • Invocation:
    • PRO: easy to run locally
    • PRO: easy to run as root and switch to a local user or vice-versa
    • PRO: easy to couple with something like vagrant to allow running locally destructive tests
    • CON: May require policy to set expectations and document how to move from one to the other
    • CON: Inter-package dependencies are a challenge, but one that can be overcome with a custom Ansible module allowing a git clone of other dist-git repos while letting us block other network access (to avoid downloading random things from the internet that may be gone tomorrow and would thus kill reproducibility).
  • Discovery:
    • Git hash uniquely identifies a test suite
      • Meaning the identifier may change while the test suite itself hasn't
    • PRO: Relies on the same dependency chain as the artefacts themselves
    • QUESTION: What is the aim here? Do we really want to run all the tests of every perl module for every change made to the perl package? If so, good luck, otherwise repoquery --whatrequires <pkg> should do what we want.
      • MartinPitt: That's what Debian/Ubuntu do, and indeed that triggers thousands of tests (times 5 architectures). This allows landing new Perl versions with confidence and points out modules that need to be adjusted (and believe me, pretty much every new Perl version breaks some module or two!). That said, it should be possible to discover tests for that reason - I don't expect our infra to be scalable and fast enough right from the start to actually do testing at that depth.


Tim Flink -- Tflink

  • Disclaimer: I am one of the owners of this proposal
  • Summary:
    • PRO: Storing tests in this way decouples them from the build process
    • PRO: Ansible has better docs and more examples than Fedora packages or RPM do
    • PRO: non-packager testers don't have to learn RPM syntax
    • PRO: Able to provide a lot more in the way of convenience functions to the test author - galaxy, roles/modules that we provide
    • PRO: easy to change tests during devel, does not require a dedicated path in the filesystem
    • PRO/CON: More easily extendable
    • CON: Adds Ansible et al. as a dependency for the test process: what happens if Ansible changes, or if it becomes unattractive 5-10 years from now?
    • CON: Adds additional thing that packagers have to learn
    • CON: We would have no control over when/how ansible changes
    • It's not incredibly clear what all would be distributed (ansible modules, plugins) or how those would be distributed (galaxy-ish, package, etc.)
  • Staging:
    • There is no obvious way to say what NVR is under test other than looking at what's installed or what's locally available pre-build
  • Invocation:
    • Not sure sudo is required, it would likely be easier to have a plugin (if required) that ran things in a temp dir kind of how we do with libtaskotron today
  • Discovery:
    • While arguably more complex than the -tests package proposal, the additional code that would need to be written does not seem much more complex
    • There are systems already doing some parts of this discovery and could likely be re-used to a certain extent (Taskotron's trigger)

Dennis Gilmore -- Ausil

  • Summary:
    • PRO: we could have unique git repos for collections, gnome-desktop, KDE, Atomic Host, Server, etc
    • PRO: Docs are good, as is support for the format across platforms
    • PRO: Branching could be separate from package branching, simplifying workflows
      • I believe the idea is to store the tests in dist-git next to the spec files and patches, so branching would be at the same time --Pingou (talk) 08:03, 12 April 2017 (UTC)
    • PRO: should be simple to write validation testing of tests, making sure that people are in compliance.
    • CON: Not clear how we should store tests for same package with different git namespaces. for example Cockpit rpm and cockpit container
      • If they are stored in dist-git the tests for the rpm would be stored next to the spec file and the tests for the container next to the Dockerfile or equivalent --Pingou (talk) 08:03, 12 April 2017 (UTC)
    • CON: getting started with Ansible is a steep learning curve for those who do not know it
    • CON: cannot reuse tools like rpmlint, rpmdiff, etc.
      • Could you expand on why? I don't see anything preventing using these tools. --Pingou (talk) 08:03, 12 April 2017 (UTC)
    • PRO: seems like we should be able to easily set up a template for a tests repo
    • PRO: We should be able to easily put a web interface for adding and editing tests for people not familiar with git
  • Staging:
    • Using VM's and containers seems to have a much clearer path than the -tests package proposal
  • Invocation:
    • use of sudo seems very suboptimal.
  • Discovery:
    • Indexing, searching, and mapping of tests seem uncovered. We will likely need to write some tooling to make tests easy to find and fetch.


Micah Abbott -- miabbott

  • Summary:
    • Disclaimer: I was pulled into this evaluation later in the game and may be missing some context/pieces of the larger effort.
    • PRO: Ansible feels easier to read/understand/learn
    • PRO: Ansible appears to give more flexibility and options to packagers
    • CON: New requirement on Ansible; not a standard install option like rpm/yum/dnf
    • CON: Easy to do bad things with Ansible + root user
  • Staging:
    • Using Ansible here seems to better support the in-situ and outside-in test approaches. There may still be the issue of multiple, conflicting provisioning solutions.
  • Invocation:
    • Using root has risks, although widely used when running Ansible playbooks.
  • Discovery:
    • Using repoquery seems reasonable enough, although I'd like to see a more concrete example of the whole process.


Dusty Mabe -- dustymabe

  • Summary: ...
    • I think Ansible strikes a balance of simple and sophisticated tooling, enabling us to write both simple and complex tests. A user who is not familiar with Ansible can use an example YAML file to just execute a shell script; more advanced users can ramp up to Ansible's full potential.
  • Staging:
    • PRO: tests are stored in git, with no need to repackage them into an RPM.
  • Invocation:
    • PRO: simple: can invoke test by cloning repo and running run_tests.yml
  • Discovery:
    • CON: not quite sure how this discovery is going to work. Are we baking into the RPM some metadata about where the tests live?


Nick Coghlan -- Ncoghlan

  • Summary:
    • Ansible offers a lot more flexibility than RPM in managing complex test resources (VMs, users, etc), as well as installing test dependencies that aren't themselves packaged as RPMs
    • However, it's likely to be overkill for simple projects that just need to re-run their standard tests on a fully installed system
    • Regardless of which option is chosen, a standard shim should be provided to bootstrap the other (so if using packaged tests, have a boilerplate *-tests subpackage definition that bootstraps an Ansible based test)
    • With RPM, two boilerplate templates could be provided: one for running a shell script from the source package, one for running an Ansible playbook from dist-git
  • Staging:
    • CON: An Ansible-only approach introduces additional complexity in running non-intrusive test suites directly on the current system
  • Invocation:
    • CON: Spec file helpers can assist in defining test package definitions, but they'd need to be dist-git aware to help define out-of-band test cases
  • Discovery:
    • CON: Requiring additional metadata outside the RPM database for integration test discovery makes it more difficult to share tests across distributions

Michael Scherer -- Misc

  • Summary:
    • PRO: Ansible is well known among the Fedora community (and RH-sponsored ones), as well as RH QA, from what I see
    • CON: Ansible still tends to break too often after each major upgrade, and a dependency on it is already an issue for a few CentOS SIGs due to reliance on unspecified tricks. For example, Ceph deployment was stuck for a long time on 1.9, and there are various issues with ansible-openshift. Thus this might require more resources than expected and might prove to be an issue.
    • CON: Lack of metadata to express requirements for tests. I can imagine a need to test some packages on more than one server, or to have some tests that are more destructive than others, so we need more than just an Ansible playbook for that.
    • PRO: Written in YAML, thus permitting some form of static analysis