Introduction and Purpose

Several frameworks have been proposed for use in Fedora QA. This page documents an evaluation of the various possible solutions for our use. Some of them would be part of the taskbot proposal; others would largely replace it.

Tools We're Potentially Interested In

Buildbot

Homepage: http://buildbot.net/

While buildbot was designed as a continuous integration system, it is flexible enough that it can be used as a more generic task scheduling and execution system. It is currently being used in taskbot's proof-of-concept implementation and has been functioning well in that capacity.

Advantages to using buildbot

  • Written in Python, which is where most of our experience lies and what most of our existing tools are written in
  • Configuration is 100% Python code rather than being squirrelled away in the application and its database; everything can be stored in git and easily reviewed for changes if something breaks (see the sketch after this list)
  • Very flexible: by design, new modules can be written in the configuration file without dirty hacks
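
A minimal sketch of what this looks like in practice, assuming a buildbot 0.8.x-era master.cfg; the builder name, slave name, and command are placeholders and are not taken from the taskbot proof of concept:

  # master.cfg is ordinary Python, read by the buildbot master at startup
  from buildbot.buildslave import BuildSlave
  from buildbot.config import BuilderConfig
  from buildbot.process.factory import BuildFactory
  from buildbot.steps.shell import ShellCommand
  from buildbot.schedulers.forcesched import ForceScheduler

  c = BuildmasterConfig = {}

  c['slaves'] = [BuildSlave('client01', 'password')]
  c['slavePortnum'] = 9989

  # Because this is plain Python, helper functions and custom build steps
  # can live right here (or in a module imported next to this file).
  factory = BuildFactory()
  factory.addStep(ShellCommand(command=['echo', 'running an example task'],
                               description='run task'))

  c['builders'] = [BuilderConfig(name='example-task',
                                 slavenames=['client01'],
                                 factory=factory)]
  c['schedulers'] = [ForceScheduler(name='force',
                                    builderNames=['example-task'])]

Everything above can be committed to git and diffed like any other code, which is the point of the advantage listed above.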

Disadvantages/Roadblocks to using buildbot

  • It's not what the cool kids are using
  • Some of buildbot's complexities aren't hidden as well as they are in Jenkins. Note that this isn't always a bad thing


Autotest

Homepage: http://autotest.github.io/

Fedora QA has been using autotest for some time now as part of AutoQA. While autotest was originally designed as a tool for kernel testing, its capabilities are broader than that and continue to expand. A minimal client test is sketched below.
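
The sketch below shows roughly what an autotest client test looks like, assuming the post-0.14 import paths (older releases used the autotest_lib prefix); the test name and the check it performs are made up for illustration:

  # client/tests/mytest/control -- the control file the scheduler runs
  AUTHOR = 'Fedora QA'
  DOC = 'hypothetical example test'
  TIME = 'SHORT'
  TEST_TYPE = 'client'

  job.run_test('mytest')

  # client/tests/mytest/mytest.py -- the test itself
  import subprocess

  from autotest.client import test
  from autotest.client.shared import error


  class mytest(test.test):
      version = 1

      def run_once(self):
          # nothing kernel-specific here; failures are reported by
          # raising TestFail
          if subprocess.call(['rpm', '-V', 'bash']) != 0:
              raise error.TestFail('rpm verification of bash failed')

Note the path convention in the comments: this is the "specific path on the master" layout mentioned under the disadvantages below.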

Advantages to using autotest

  • The codebase strikes a decent balance between simplicity and features: it doesn't try to do everything, but it does solve some difficult problems
  • We're already using it in Fedora QA
  • Upstream is friendly and interested in many of the directions we want to go in (i.e. good patches are very likely to be accepted upstream)

Disadvantages/Roadblocks to using autotest

  • Frontend: it's written in GWT, which is not packageable in Fedora and has few methods for auth/authz. The upstream devs want to port it to Django, which should clear up both issues, but it's something to keep in mind.
  • Scaling: we're already having issues with database performance on our production instances. This will likely be mitigated by recent changes to autotest and some work on tuning the database, but it is a concern.
  • Disposable clients: at the moment there isn't much support for disposable clients in autotest. It's something upstream wants to do, but the code hasn't been written yet.
  • Tests in git: the current autotest structure expects all tests to be stored in a specific path on the master instead of in git repos or something else that works better for distributed test maintenance. The maintainers are interested in changing this but, again, the code hasn't been written.


Beaker

Beaker is a full-stack lab automation system developed by Red Hat for use in testing Red Hat Enterprise Linux. It has many listed capabilities (oVirt/OpenStack integration, bare metal and VM provisioning support, etc.).

Advantages to using beaker

  • Red Hat is using it internally, so there is a possibility of reusing some of their tests without having to write everything on our own
  • Has dedicated development resources
  • Likely has fewer scaling issues than autotest
  • Solves some complicated problems like system provisioning and multi-host tests

Disadvantages/Roadblocks to using beaker

  • The execution model doesn't fit with where we are or where we want to go right now. This will get better with OpenStack integration, but that isn't done yet and doesn't address the limitations of recipes (written to test either a single package or something fully generic, nothing in between, and with very little ability to pass data or parameters into jobs; see the sketch after this list)
  • It's huge and has many moving parts. While Beaker is very powerful, it is also rather complex
  • Beah: tasks as RPMs with a bash-ish syntax. It works, but it isn't an ideal solution if you're not already using it. Autotest runner support could help with this, but we will have to wait and see the implementation details
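
A rough sketch of what parameter passing looks like today, assuming standard Beaker job XML and the bkr command line client; the task name, parameter names, and distro requirement are invented for illustration and are not taken from any existing Fedora QA job:

  import subprocess
  import tempfile

  # <param> entries inside <task> are essentially the only hook for
  # feeding external data into a recipe.
  JOB_XML = """\
  <job>
    <whiteboard>hypothetical parameterized task</whiteboard>
    <recipeSet>
      <recipe>
        <distroRequires>
          <distro_name op="=" value="Fedora-18"/>
        </distroRequires>
        <hostRequires/>
        <task name="/fedora/qa/example-task" role="STANDALONE">
          <params>
            <param name="KOJI_TAG" value="f18-updates-testing"/>
          </params>
        </task>
      </recipe>
    </recipeSet>
  </job>
  """

  with tempfile.NamedTemporaryFile(mode='w', suffix='.xml') as jobfile:
      jobfile.write(JOB_XML)
      jobfile.flush()
      subprocess.check_call(['bkr', 'job-submit', jobfile.name])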

Jenkins

Jenkins is a CI system written in Java. It is used in many places and we would be remiss if we didn't include it for evaluation.

Advantages to using Jenkins

  • All the cool kids use Jenkins
  • It is said to be very flexible

Disadvantages/Roadblocks to using Jenkins

  • Java: we don't have much Java experience as a team, and those of us who do aren't all that excited by the prospect of going back to it
  • Configuration is stored within the application and its database
  • Any functionality extensions need to be written as self-contained plugins


Igor

Homepage: https://gitorious.org/ovirt/igord

Evaluation Details: User:Tflink/AutomationFrameworkEvaluation/igor

Igor was developed as a method for running tests on oVirt Node. It is based on a client/server architecture with communication over HTTP. In contrast to both Autotest and Beaker, it is not a full-stack solution but instead focuses on running tests and gathering status (execution results, logs).

Criteria

To compare the various frameworks, I'm listing a set of criteria which I think are important to look at. Not all of the criteria will be relevant to every solution, but that's not necessarily a bad thing.

Communication and Community

  • how many projects are using it
  • how old is it
  • how many active devs from how many orgs
  • quality of docs
  • how much mailing list traffic is there?
  • what is the bug tracker?
  • what is the patch process?
  • what is the RFE process?

High level stuff

  • how tightly integrated are the components
  • what license is the project released under
  • how much is already packaged in fedora

API

  • what mechanism does the api use (xmlrpc, json-rpc, restful-ish etc.)
  • can you schedule jobs through the api (see the sketch after this list)
  • what scheduling params are available through the api
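
To make the question concrete, the sketch below schedules a job over XML-RPC; the server URL, method names, and parameters are entirely made up and do not belong to any of the frameworks evaluated on this page:

  # hypothetical only: none of these names come from a real framework
  import xmlrpclib  # xmlrpc.client on Python 3

  server = xmlrpclib.ServerProxy('http://taskmaster.example.org/xmlrpc/')

  # which scheduling parameters (target client, priority, test arguments,
  # ...) are exposed here is exactly what the criteria above ask about
  job_id = server.create_job('depcheck',
                             {'koji_tag': 'f19-updates-testing',
                              'arch': 'x86_64'})
  print(server.get_job_status(job_id))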

Results

  • how flexible is the schema for the built in results store
  • what data is stored in the default result (see the sketch after this list)
  • is there a difference between failed execution and status based on result analysis
  • what kinds of analysis are supported
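
As a purely hypothetical illustration of these questions (this record is not taken from any of the frameworks above), a stored result might look something like:

  # hypothetical result record, just to make the criteria concrete
  result = {
      'job_id': 1234,
      'item': 'foo-1.0-1.fc19',           # what was tested
      'execution': 'COMPLETED',           # the test ran to the end...
      'outcome': 'FAILED',                # ...but its analysis failed
      'logs': ['http://example.org/logs/1234/depcheck.log'],
      'details': {'broken_deps': ['bar >= 2.0']},
  }

The execution/outcome split is the "failed execution vs. status based on result analysis" distinction above; how much of the rest a framework stores by default, and whether the schema can be extended, is what we want to find out.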

VM management

  • does it work with any external systems (ovirt, openstack etc.)
  • does it support rapid cloning
  • how are vms configured post-spawn
  • control over vm configuration (vnc/spice, storage type etc.)
  • ephemeral client support?

Test harness

  • base language
  • how tightly integrated is it with the system as a whole
  • are any non-primary harnesses supported

Test execution

  • how are tests stored
  • support for storing tests in vcs
  • method for passing data into test for execution
  • how are parameters stored for post-failure analysis
  • support for replaying a test
  • can tests be executed locally in a dev env with MINIMAL setup
  • external log shipping?
  • how tightly integrated is result reporting
  • what kind of latency is there between tests?