From Fedora Project Wiki


Revision as of 03:26, 6 June 2013 by Tflink (talk | contribs) (→‎Beaker: added link to beaker details)

Introduction and Purpose

There have been several frameworks proposed for use in Fedora QA. This page documents an evaluation of the various possible solutions for our use. Some of these would form part of the taskbot proposal; others would largely replace it.

Tools We're Interested In

Buildbot

Homepage: http://buildbot.net/

While Buildbot was designed as a continuous integration system, it is flexible enough to be used as a more generic task scheduling and execution system. It is currently used in taskbot's proof-of-concept implementation and has been functioning well in that capacity.
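To illustrate the "generic task scheduling" angle, the master.cfg fragment below defines a builder whose only step is an arbitrary shell command rather than a source build. It is a sketch in the Buildbot 0.8.x configuration style; the builder name, slave name, and `run_task` command are made up for illustration.

```python
# master.cfg sketch: Buildbot driving a generic task, not a CI build.
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.schedulers.forcesched import ForceScheduler
from buildbot.steps.shell import ShellCommand

c = BuildmasterConfig = {}

# The "build" is just an arbitrary command; any task runner could go here.
factory = BuildFactory()
factory.addStep(ShellCommand(command=["run_task", "--item", "some-build"]))

c['builders'] = [
    BuilderConfig(name="generic-task", slavenames=["slave1"], factory=factory),
]
# A force scheduler lets jobs be triggered on demand instead of by commits.
c['schedulers'] = [ForceScheduler(name="force", builderNames=["generic-task"])]
```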

Autotest

Homepage: http://autotest.github.io/

Fedora QA has been using Autotest for some time now as part of AutoQA. While Autotest was originally designed as a tool for kernel testing, its capabilities are broader than that and continue to expand.

Beaker

Homepage: https://beaker-project.org/

Beaker is a full-stack lab automation system developed by Red Hat for use in testing Red Hat Enterprise Linux. It has many listed capabilities (oVirt/OpenStack integration, bare metal and VM provisioning support, etc.).

Igor

Homepage: https://gitorious.org/ovirt/igord

Igor was developed as a method for running tests on oVirt Node. It is based on a client/server architecture that communicates over HTTP. In contrast to both Autotest and Beaker, it is not a full-stack solution; instead, it focuses on running tests and gathering status (execution results, logs).
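A client in this style reports status back to the server with plain HTTP requests. The sketch below shows the general shape using only the Python standard library; the `/status` endpoint and the payload fields are hypothetical, not Igor's actual wire format.

```python
import json
import urllib.request


def build_status_report(session_id, result, logs):
    """Assemble a status payload for an HTTP-based test client.

    The field names here are illustrative, not Igor's real schema.
    """
    return json.dumps({
        "session": session_id,
        "result": result,   # e.g. "passed" / "failed"
        "logs": logs,       # log file names or inline excerpts
    }).encode("utf-8")


def report_status(server_url, payload):
    """POST the payload to a (hypothetical) server status endpoint."""
    req = urllib.request.Request(
        server_url + "/status",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # network call; a server must exist


payload = build_status_report("job-42", "passed", ["igor.log"])
```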

Criteria

In an attempt to compare the various frameworks, I'm listing a set of criteria that I think are important to look at. Not every criterion will be relevant to every solution, but that's not necessarily a bad thing.

Communication and Community

  • how many projects are using it
  • how old is it
  • how many active devs from how many orgs
  • quality of docs
  • how much mailing list traffic is there?
  • what is the bug tracker?
  • what is the patch process?
  • what is the RFE process?

High level stuff

  • how tightly integrated are the components
  • what license is the project released under
  • how much is already packaged in fedora

API

  • what mechanism does the api use (xmlrpc, json-rpc, restful-ish etc.)
  • can you schedule jobs through the api
  • what scheduling params are available through the api
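As a concrete illustration of what "schedule jobs through the API" can look like for an XML-RPC-based framework, the sketch below marshals a job-submission call with Python's standard library, without sending it anywhere. The method name `scheduler.create_job` and its parameters are hypothetical and do not correspond to any one framework's API.

```python
import xmlrpc.client

# Marshal a hypothetical job-scheduling call into an XML-RPC request body.
# A real client would instead call the method via xmlrpc.client.ServerProxy
# pointed at the framework's endpoint.
request_xml = xmlrpc.client.dumps(
    ({"test": "rpmlint", "arch": "x86_64", "priority": 5},),
    methodname="scheduler.create_job",
)

# The payload round-trips, which is handy for inspecting what goes on the wire.
params, method = xmlrpc.client.loads(request_xml)
```

Frameworks exposing JSON-RPC or REST-ish APIs differ in transport, but the scheduling parameters they accept are the interesting comparison point.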

Results

  • how flexible is the schema for the built in results store
  • what data is stored in the default result
  • is there a difference between failed execution and status based on result analysis
  • what kinds of analysis are supported
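One way to make the "failed execution vs. status from result analysis" distinction concrete is to model them as separate fields, so a crashed harness is never conflated with a test that ran to completion and failed. This is an illustrative sketch, not any framework's actual result schema.

```python
from dataclasses import dataclass
from enum import Enum


class Execution(Enum):
    COMPLETED = "completed"  # the harness ran the test to the end
    CRASHED = "crashed"      # infrastructure/harness error; no verdict possible


class Verdict(Enum):
    PASSED = "passed"
    FAILED = "failed"
    UNKNOWN = "unknown"      # no verdict, e.g. because execution crashed


@dataclass
class Result:
    execution: Execution
    verdict: Verdict


def analyze(exit_code, harness_error):
    """Toy analysis step: keep 'did it run' separate from 'did it pass'."""
    if harness_error:
        return Result(Execution.CRASHED, Verdict.UNKNOWN)
    verdict = Verdict.PASSED if exit_code == 0 else Verdict.FAILED
    return Result(Execution.COMPLETED, verdict)
```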

VM management

  • does it work with any external systems (ovirt, openstack etc.)
  • does it support rapid cloning
  • how are vms configured post-spawn
  • control over vm configuration (vnc/spice, storage type etc.)
  • ephemeral client support?

Test harness

  • base language
  • how tightly integrated is it with the system as a whole
  • are any non-primary harnesses supported

Test execution

  • how are tests stored
  • support for storing tests in vcs
  • method for passing data into test for execution
  • how are parameters stored for post-failure analysis
  • support for replaying a test
  • can tests be executed locally in a dev env with MINIMAL setup
  • external log shipping?
  • how tightly integrated is result reporting
  • what kind of latency is there between tests?
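Several of the points above (parameter storage, post-failure analysis, replay, minimal local setup) boil down to recording a test's inputs in a reproducible form. A minimal sketch, with a made-up record layout and a `depcheck`-style task name used purely as an example:

```python
import json
from pathlib import Path


def record_run(path, test_name, params):
    """Persist the exact parameters of a run so it can be replayed later."""
    Path(path).write_text(json.dumps({"test": test_name, "params": params}))


def replay_run(path, runner):
    """Re-execute a recorded run by feeding the stored params to a runner."""
    record = json.loads(Path(path).read_text())
    return runner(record["test"], **record["params"])


# Local dev usage: record once, then replay with identical inputs.
record_run("run.json", "depcheck", {"koji_build": "foo-1.0-1.fc19"})
outcome = replay_run("run.json", lambda name, **kw: (name, kw))
```

A framework that stores parameters this way makes both replay and local execution cheap; one that buries them in scheduler state makes both hard.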