

This page is a draft only
It is still under construction and content may change. Do not rely on the information on this page.

Introduction

Here's some info on writing tests for AutoQA. There are four parts to a test: the test code, the test object, the Autotest control file, and the AutoQA control file. Typically they all live in a single directory, located in the tests/ dir of the autoqa source tree.

Start with a test
Before considering integrating a test into AutoQA or Autotest, create a working test. Creating a working test should not require knowledge of autotest or autoqa. This page outlines the process of integrating an existing test into AutoQA.

Write test code first

I'll say it again: Write the test first. The tests don't require anything from autotest or autoqa. You should have a working test before you even start thinking about AutoQA.

You can package up pre-existing tests or you can write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to (though returning one is definitely better). You'll handle parsing the output and returning a useful result in the test object.

If you are writing a brand new test, there are some python libraries that have been developed for use in existing AutoQA tests. More information about this will be available once these libraries are packaged correctly, but they are not necessary to write your own tests. You can choose to use whatever language and libraries you want.

Test directory

Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match that. Choose a name that doesn't use spaces, dashes, or dots. Underscores are fine.

Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.

Next, from the autoqa/doc/ directory copy template files control.template, control.autoqa.template and test_class.py.template into your test dir. Rename them to control, control.autoqa and [testname].py.

'control' file

The control file defines some metadata for this test - who wrote it, what kind of a test it is, what test arguments it uses from AutoQA, and so on. Here's an example control file:

control file for conflicts test

AUTHOR = "Will Woods <wwoods@redhat.com>"
TIME = "SHORT"
NAME = 'conflicts'
DOC = """
This test runs potential_conflict from yum-utils to check for possible
file / package conflicts.
"""
TEST_TYPE = 'CLIENT'
TEST_CLASS = 'General'
TEST_CATEGORY = 'Functional'

job.run_test('conflicts', config=autoqa_conf, **autoqa_args)

The autoqa_conf variable contains the contents of the autoqa.conf file (usually located at /etc/autoqa/autoqa.conf) as a string. Note, though, that some of the values in autoqa_conf are changed by the autoqa harness while scheduling the test run.

autoqa_args is a dictionary containing all the hook-specific variables (e.g. kojitag for the post-koji-build hook). These are documented in the hooks/[hookname]/README files. Some additional variables may also be present, as described in the template file.
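
For illustration, a post-koji-build test run might end up being launched like this (the concrete values are hypothetical; the authoritative list of keys is in hooks/post-koji-build/README):

# autoqa_args as it might look for the post-koji-build hook
autoqa_args = {'kojitag': 'dist-f12-updates-candidate', 'name': 'espeak'}
job.run_test('conflicts', config=autoqa_conf, **autoqa_args)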


Required data

The following control file items are required for valid AutoQA tests. The first three are the most important; the rest are less important but still required.

  • NAME: The name of the test. Should match the test directory name, the test object name, etc.
  • AUTHOR: Your name and email address.
  • DOC: A verbose description of the test - its purpose, the logs and data it will generate, and so on.
  • TIME: either 'SHORT', 'MEDIUM', or 'LONG'. This defines the expected runtime of the test: less than 15 minutes, less than 4 hours, or more than 4 hours, respectively.
  • TEST_TYPE: either 'CLIENT' or 'SERVER'. Use 'CLIENT' unless your test requires multiple machines (e.g. a client and server for network-based testing).
  • TEST_CLASS: This is used to group tests in the UI. 'General' is fine. We may use this field to refer to the test hook in the future.
  • TEST_CATEGORY: This defines the category your test is a part of - usually this describes the general type of test it is. Examples include Functional, Stress, Performance, and Regression.

Optional data

DEPENDENCIES = 'POWER, CONSOLE'
SYNC_COUNT = 1
  • DEPENDENCIES: Comma-separated list of hardware requirements for the test. Currently unsupported.
  • SYNC_COUNT: The number of hosts to set up and synchronize for this test. Only relevant for SERVER-type tests that need to run on multiple machines.

Launching the test object

Most tests will have a line in the control file like this:

job.run_test('conflicts', config=autoqa_conf, **autoqa_args)

This will create a 'conflicts' test object (see below) and pass along the given variables.

Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.

'control.autoqa' file

The control.autoqa file allows a test to define its scheduling, declare requirements, and alter the test's input arguments. This file decides whether to run the test at all, on which architectures/distributions it should run, and so on. It is evaluated on the AutoQA server before the test itself is scheduled and run on an AutoQA client.

Here is an example control.autoqa file:

# this test can be run just once and on any architecture,
# override the default set of architectures
archs = ['noarch']

# this test may be destructive, let's require a virtual machine for it
labels = ['virt']

# we want to run this test just for post-koji-build hook
if hook not in ['post-koji-build']:
    execute = False

All the variables available in control.autoqa are documented in doc/control.autoqa.template. You can override them to customize your test's scheduling. Basically you can influence:

  • For which event (i.e. hook) your test runs and under which conditions.
  • On what device your test runs (architecture and what autotest labels must be present on that hardware).
  • Data passed from the hook to the test object.

The control.autoqa file (like the control file) is a Python script, so you can use conditional expressions, loops, or virtually any other Python statements there. However, it is strongly recommended to keep this file as simple as possible and put all the logic in the test object.

Test Object

The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc), running the test code, and sending results to Autotest (and other places).

Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the conflicts test contains a file named conflicts.py, which defines a conflicts class, as follows:

import autoqa.util
from autoqa.test import AutoQATest
from autoqa.decorators import ExceptionCatcher
from autotest_lib.client.bin import utils

class conflicts(AutoQATest):
    ...

The name of the class must match the name given in the run_test() line of the control file, and test classes must be subclasses of the AutoQATest class. But don't worry too much about how this works - the test_class.py.template contains the skeleton of an appropriate test object. Just change the name of the file (and class!) to something appropriate for your test.
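
Putting the pieces together, a minimal test object might look like the following sketch. This is only an outline of the structure described in the rest of this page, with hypothetical commands; the authoritative skeleton is test_class.py.template.

import os
from autoqa.test import AutoQATest
from autoqa.decorators import ExceptionCatcher
from autotest_lib.client.bin import utils

class mytest(AutoQATest):

    def setup(self):
        # optional: install packages, compile sources, start services
        utils.system('yum -y install rpmlint')  # hypothetical dependency

    @ExceptionCatcher("self.run_once_failed")
    def run_once(self, **kwargs):
        os.chdir(self.bindir)
        # run the actual test code (a hypothetical script shipped in the test dir)
        result = utils.run('./mytest.sh', ignore_status=True,
                           stdout_tee=utils.TEE_TO_LOGS)
        self.outputs = result.stdout.splitlines()
        self.summary = 'mytest finished'
        if result.exit_status == 0:
            self.result = 'PASSED'
        else:
            self.result = 'FAILED'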

AutoQATest base class

This class contains the functionality common to all the tests - i.e. it initializes the variables used for storing results in its __init__ function. The default initialize method then parses the config string passed in the control file into self.config, and prepares self.autotest_url - a URL pointing to the Autotest storage area where all the logs will be available once the test finishes.

It also contains a postprocess_iteration method, which uses self.{result, summary, highlights, outputs} to send a nicely formatted email to the autoqa-results mailing list.

Internal result variables
Please make sure that you DO use self.{result, summary, highlights, outputs} when writing tests. These variables make the 'central dispatch' of test results possible, and will be used for future enhancements like ResultsDB.

There are two more methods defined: initialize_failed and run_once_failed. These are used by the ExceptionCatcher decorator when an exception happens in initialize or run_once, respectively.

ExceptionCatcher decorator

When an exception is raised during initialize or run_once, the test immediately ends without calling the postprocess_iteration method, which is supposed to send all the gathered data to the mailing list.

This behaviour is, of course, not what one would really want, and that is where the ExceptionCatcher decorator comes in. When an exception is raised, it calls the method whose name was passed as an argument to the decorator.

@ExceptionCatcher("self.run_once_failed")
def run_once(self, **kwargs):
    ...

That is, if any exception is thrown in run_once, self.run_once_failed is called. This method sets the result and summary variables (if unset) and calls postprocess_iteration (initialize_failed does the same). Once run_once_failed finishes, the exception is re-raised and the test ends.

**kwargs parameter
Because of some nasty Autotest magic, the decorated function is required to have a **kwargs argument. Autotest cannot determine the correct subset of arguments from **autoqa_args to pass, so it passes them all - which causes an error if your function does not accept them all.

Test stages

setup()

This is an optional method of the test class. This is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:

    def setup(self):
        utils.system('yum -y install httpd')
        if utils.system('service httpd status', ignore_status=True) != 0:
            utils.system('service httpd start')


initialize()

This does any pre-test initialization that needs to happen. AutoQA tests typically use this method to parse the autoqa config data passed from the server or to create empty result structures. This method is optional - you can omit it if you don't need to initialize anything.

All basic initialization is done in the AutoQATest class, so check it out before you re-define this method.

Call AutoQATest.initialize
If you re-implement the initialize method, make sure that you call super(CLASSNAME, self).initialize(config) inside it, so that all the required initialization is executed.
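
A minimal sketch of an overridden initialize, assuming a test class named mytest (hypothetical):

    def initialize(self, config, **kwargs):
        # base class initialization: parses config into self.config and
        # prepares self.autotest_url and the result variables
        super(mytest, self).initialize(config)
        # test-specific initialization goes here
        self.max_speed = None  # hypothetical attribute used later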

run_once()

This is where the test code actually gets run. It's the only required method for your test object.

In short, this method should build the argument list and run the test binary, like so:

    @ExceptionCatcher("self.run_once_failed")
    def run_once(self, baseurl, parents, reponame, **kwargs):
        os.chdir(self.bindir)
        cmd = "./sanity.py --scratchdir %s --logdir %s" % (self.tmpdir, self.resultsdir)
        cmd += " %s" % baseurl
        retval = utils.system(cmd, ignore_status=True)
        if retval != 0:
            raise error.TestFail

This runs the command and stores its exit code in the retval variable (ignore_status=True keeps utils.system from raising an exception itself on a non-zero exit code).

If you want to capture the output of the command, you can use the system_output function:

from autotest_lib.client.common_lib import error

    ...
    try:  
        output = utils.system_output(cmd, retain_output = True)
    except error.CmdError, e:
        output = e.result_obj.stdout
    ...

Or if you want both the exit code and the command output, try this:

    ...
    result = utils.run(cmd, ignore_status = True, stdout_tee = utils.TEE_TO_LOGS)
    output = result.stdout
    retval = result.exit_status
    ...

See the section on test object attributes for information about self.bindir, self.tmpdir, etc. Also see Getting proper test results for more information about getting results from your tests.

postprocess_iteration()

This method is implemented in the AutoQATest base class, and it sends the data gathered in self.{result, summary, highlights, outputs} to the autoqa-results mailing list.

You can, of course, reimplement this method if you want to (for instance) gather some extra data or prepare the data gathered in the test before storing it, but please be sure to call AutoQATest.postprocess_iteration() afterwards. In general, you should not need to reimplement this method at all.

    def postprocess_iteration(self):
        keyval = dict()
        for line in self.outputs:
            if line.startswith('Max transfer speed: '):
                (dummy, max_speed) = line.split('speed: ')
                keyval['max_speed'] = max_speed
        self.write_test_keyval(keyval)

        super(CLASSNAME, self).postprocess_iteration()

(See Returning extra data for details about write_test_keyval.)

This method will be run after each iteration of run_once(), but note that it gets no arguments passed in. Any data you want from the test run needs to be saved into the test object - hence the use of self.outputs.

Useful test object attributes

Test objects have the following attributes available[1]:

outputdir       eg. results/<job>/<testname.tag>
resultsdir      eg. results/<job>/<testname.tag>/results
profdir         eg. results/<job>/<testname.tag>/profiling
debugdir        eg. results/<job>/<testname.tag>/debug
bindir          eg. tests/<test>
src             eg. tests/<test>/src
tmpdir          eg. tmp/<tempname>_<testname.tag>

Getting test results

The AutoQATest class provides a set of variables (self.{result, summary, highlights, outputs}) to be used for storing test results. The point of these is to have a single implementation of the results harness, in the AutoQATest class. At the moment the results are sent to the autoqa-results mailing list, but in the near future we'll be using database-backed storage, which will give us a better way of reviewing the results. Proper usage of the above-mentioned variables is crucial for a seamless transition to this tool.

Overall result (self.result)

The overall result should be stored in the self.result variable. Set it in run_once() according to the result of your test. You can choose from these values:

  • 'PASSED'
  • 'FAILED'
  • 'ABORTED'
  • 'CRASHED'
  • 'NEEDS_INSPECTION'

If you don't set self.result, it is automatically set to 'NEEDS_INSPECTION' in postprocess_iteration().

If an exception happens and is caught by the ExceptionCatcher decorator, self.result is set to 'CRASHED' (if it was not already set to some value inside run_once()).

TIP
The best approach to setting self.result is to set 'PASSED' as the last line (or one of the last lines) of the test, once you are really sure everything went OK. The other results ('FAILED', 'CRASHED', ...) are best set at the moment you determine that they are the proper outcome.
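
A sketch of this pattern inside run_once (the command and the failure condition are hypothetical):

    @ExceptionCatcher("self.run_once_failed")
    def run_once(self, **kwargs):
        result = utils.run('./my_check.sh', ignore_status=True)  # hypothetical
        self.outputs = result.stdout.splitlines()
        if result.exit_status != 0:
            # we know the proper outcome right now, so record it immediately
            self.result = 'FAILED'
            return
        # claim success only at the very end, when everything went OK
        self.result = 'PASSED'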

Summary

The self.summary variable is used as the subject line for the autoqa-results mailing list. It is meant to contain a short summary of the test run - e.g. for the conflicts test it could be "69 packages with file conflicts in rawhide-i386". Basically, it should be a short string describing the outcome of the test.

postprocess_iteration() then adds the name of the test and self.result, so the whole summary (as used for the autoqa-results mailing list) would be "Conflicts: FAILED; 69 packages with file conflicts in rawhide-i386".

Highlights

The self.highlights variable should contain a digest of the stdout/stderr generated by your test - selecting the important warnings/errors is usually a good idea.

This digest will be at the beginning of the report in the autoqa-results mailing list.

Commands output

It is usually a good idea to log the stdout/stderr of the commands you run in your run_once(). Store these in the self.outputs variable if you want them kept for further use; a sketch follows the note below.

Note
At the moment, all the logs (stderr/stdout, ...) are automatically harvested and stored by Autotest, so you don't really need to worry about this. It is still a good idea to also store the output in self.outputs, as this variable's value will be stored in ResultsDB once it's up and running.
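
For example, capturing a command's output and deriving the highlights from it might look like this (the 'error' filter is just an illustration):

        result = utils.run(cmd, ignore_status=True, stdout_tee=utils.TEE_TO_LOGS)
        self.outputs = result.stdout.splitlines()
        # keep only the interesting lines for the report digest
        self.highlights = [line for line in self.outputs if 'error' in line.lower()]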


Log files and scratch data

Any files written to self.resultsdir will be saved at the end of the test. Anything written to self.tmpdir will be discarded.
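
For instance (the file names are hypothetical):

        # written to self.resultsdir, so it is saved with the test results
        log = open(os.path.join(self.resultsdir, 'mytest.log'), 'w')
        log.write('\n'.join(self.outputs))
        log.close()
        # written to self.tmpdir, so it is discarded after the test
        scratch_path = os.path.join(self.tmpdir, 'scratch.dat')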

Returning extra data

Further test-level info can be returned by using test.write_test_keyval(dict):

extrainfo = dict()
for line in self.outputs:
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[2]
    ...
self.write_test_keyval(extrainfo)
  • For per-iteration data (performance numbers, etc) there are three methods:
    • Just attr: test.write_attr_keyval(attr_dict)
      • Test attributes are limited to 100 characters.[2]
    • Just perf: test.write_perf_keyval(perf_dict)
      • Performance values must be floating-point numbers.
    • Both: test.write_iteration_keyval(attr_dict, perf_dict) (see the sketch below)
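
A hypothetical example of recording per-iteration data, following the constraints above:

        # attribute strings are limited to 100 characters;
        # perf values must be floating-point numbers
        attr = {'kernelver': '2.6.31'}   # hypothetical attribute
        perf = {'max_speed': 117.3}      # hypothetical measurement
        self.write_iteration_keyval(attr, perf)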

How to run AutoQA tests

Install AutoQA from GIT

First of all, you'll need to check out a version from git. You can either use master or some tagged release.

To check out the master branch:

git clone git://git.fedorahosted.org/autoqa.git autoqa
cd autoqa

To check out a tagged release:

git clone git://git.fedorahosted.org/autoqa.git autoqa
cd autoqa
git tag -l
# now you'll get a list of tags; at the time of writing, the latest tag was v0.3.5-1
git checkout -b v0.3.5-1 tags/v0.3.5-1

Add your test

The best way to add your test to the directory structure is to create a new branch, copy your test in, and run make install:

git checkout -b my_new_awesome_test
cp -r /path/to/directory/with/your/test ./tests
make clean install
Dependencies
It is possible that make install will fail due to missing Python modules (e.g. turbogears2); in that case, install them using yum.

Run your test

This depends on the hook your test is supposed to run under. Let's assume it is post-koji-build.

/usr/share/autoqa/post-koji-build/watch-koji-builds.py --dry-run

This command will show you the current koji builds, e.g.:

No previous run - checking builds in the past 3 hours
autoqa post-koji-build --name espeak --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12
autoqa post-koji-build --name kdemultimedia --kojitag dist-f11-updates-candidate --arch x86_64 kdemultimedia-4.3.4-1.fc11
autoqa post-koji-build --name kdeplasma-addons --kojitag dist-f11-updates-candidate --arch x86_64 kdeplasma-addons-4.3.4-1.fc11
autoqa post-koji-build --name cryptopp --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 cryptopp-5.6.1-0.1.svn479.fc12
autoqa post-koji-build --name drupal --kojitag dist-f12-updates-candidate --arch x86_64 drupal-6.15-1.fc12
autoqa post-koji-build --name seamonkey --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 seamonkey-2.0.1-1.fc12
... output trimmed ...

So to run your test, just select one of the lines and add the parameters --test name_of_your_test --local, which will execute the test you just wrote locally. If you wanted to run rpmlint, for example, the command would be:

autoqa post-koji-build --name espeak --kojitag dist-f12-updates-candidate --arch x86_64 --arch i686 espeak-1.42.04-1.fc12 --test rpmlint --local
--local
It is important to add the --local parameter. If you don't, the test will fail to run, since you don't have an Autotest server present.

References

Links

Autotest documentation