
This page is a draft only
It is still under construction and content may change. Do not rely on the information on this page.

Introduction

Here's some info on writing tests for AutoQA. There are three parts to a test: the test code, the control file, and the test object.

Test code

In short, you should have a working test before you even start thinking about AutoQA. You can package up a pre-existing test or write a new one in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to; you'll handle parsing the output and returning a useful result in the test object.
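
For example - and this is purely illustrative, not an existing AutoQA test - a standalone test script might simply print what it finds and let the test object interpret that output later:

import os
import stat

# Hypothetical check: complain about world-writable files under /etc.
for root, dirs, files in os.walk('/etc'):
    for name in files:
        path = os.path.join(root, name)
        try:
            mode = os.stat(path).st_mode
        except OSError:
            continue
        if mode & stat.S_IWOTH:
            print('FAILED: %s is world-writable' % path)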

If you are writing a brand-new test, there are some Python libraries that have been developed for use in existing AutoQA tests. More information about them will be available once these libraries are packaged correctly.

Control Files

The control file defines the metadata for the test - who wrote it, what kind of test it is, which test arguments it takes from AutoQA, and so on. Here's an example template:

# control file for the conflicts test
TIME = "SHORT"
AUTHOR = "Will Woods <wwoods@redhat.com>"
DOC = """
This test runs potential_conflict from yum-utils to check for possible
file / package conflicts.
"""
NAME = 'conflict'
TEST_TYPE = 'CLIENT'
TEST_CLASS = 'General'
TEST_CATEGORY = 'Functional'

job.run_test('conflicts', baseurl=url,
                          parents=parents,
                          reponame=reponame,
                          config=autoqa_conf)

Each hook should contain a file called control.template, which you can use as the starting point for your new test's control file.

Required items

TODO

Control files are Python scripts

The control file is actually interpreted as a Python script, so you can do any of the normal Pythonic things you might want to do. In general, though, it's best to keep the control file as simple as possible and put all the complicated bits into the test object or the test itself.

Before it reads the control file, Autotest imports all the symbols from the autotest_lib.client.bin.util module.[1] This means the control files can use any function defined in common_lib.utils or bin.base_utils[2]. This lets you do things like:

arch = get_arch()
baseurl = '%s/development/%s/os/' % (mirror_baseurl, arch)
job.run_test('some_rawhide_test', arch=arch, baseurl=baseurl)

since get_arch is defined in common_lib.utils.

Test Objects

The test object is a Python file that defines a class representing your test. It handles the setup for the test (installing packages, modifying services, etc.), running the test, and sending results to Autotest (and other places).

Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the conflicts test contains a file named conflicts.py, which defines a conflicts class, as follows:

from autotest_lib.client.bin import test, utils
from autotest_lib.client.bin.test_config import config_loader

class conflicts(test.test):
    ...

All test objects must be subclasses of the Autotest test.test class. But don't worry too much about how this works - each hook should contain a test_class_template.py with the skeleton of an appropriate test object for that hook, complete with the usual setup code used by AutoQA tests.
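
To give a rough idea of what such an object looks like, here is an illustrative sketch only - not the real template shipped with any hook. The run_once() keyword arguments mirror the control file example above; the command line and the output check are invented for the example:

from autotest_lib.client.bin import test, utils
from autotest_lib.client.common_lib import error

class conflicts(test.test):
    version = 1   # bump this to make Autotest re-run setup()

    def initialize(self):
        # Per-run preparation; runs every time the test executes.
        pass

    def setup(self):
        # One-time setup, e.g. installing packages the test depends on.
        pass

    def run_once(self, baseurl=None, parents=None, reponame=None, config=None):
        # Run the real test program and turn its output into a result.
        # 'potential_conflict' and the 'CONFLICT' marker are assumptions
        # made for this sketch; use your test's actual command and output.
        cmd = 'potential_conflict'
        self.results = utils.system_output(cmd, retain_output=True)
        if 'CONFLICT' in self.results:
            raise error.TestFail('possible file conflicts detected')

The initialize(), setup(), and run_once() methods are covered in the following sections.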

initialize()

TODO

setup()

TODO

run_once()

TODO

Getting test results

First, the basic rule for test results: if your run_once() method does not raise an exception, the test result is PASS. If it raises error.TestFail or error.TestWarn, the test result is FAIL or WARN, respectively. Any other exception yields an ERROR result.

For simple tests you can just run the test binary like this:

self.results = utils.system_output(cmd, retain_output=True)

If cmd is successful (i.e. it returns an exit status of 0) then utils.system_output() will return the output of the command. Otherwise it will raise error.CmdError, which will immediately end the test.
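
Because of that, a non-zero exit status normally shows up as an ERROR rather than a FAIL. If you would rather count it as a failure, one option (a stylistic choice, not something AutoQA requires) is to catch the exception yourself; this assumes the error module is imported at the top of the test object:

try:
    self.results = utils.system_output(cmd, retain_output=True)
except error.CmdError:
    # the command exited non-zero; report FAIL instead of ERROR
    raise error.TestFail('%s exited with a non-zero status' % cmd)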

Some tests always exit successfully, so you'll need to inspect their output to decide whether they passed or failed. That would look more like this:

output = utils.system_output(cmd, retain_output=True)
if 'FAILED' in output:
    raise error.TestFail
elif 'WARNING' in output:
    raise error.TestWarn

Saving log files

TODO

Returning extra data

Further test-level info can be returned by using test.write_test_keyval(dict):

extrainfo = dict()
# self.results holds the command output collected in run_once()
for line in self.results.splitlines():
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[2]
    ...
self.write_test_keyval(extrainfo)

For per-iteration data (performance numbers, etc.) there are three methods; a short example follows the list:

  • Just attributes: test.write_attr_keyval(attr_dict)
  • Just performance numbers: test.write_perf_keyval(perf_dict)
  • Both at once: test.write_iteration_keyval(attr_dict, perf_dict)
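
For instance, a run_once() that times an operation might record it like this (the command and the keyval names are made up for illustration):

import time

start = time.time()
utils.system('yum --quiet makecache')   # hypothetical operation being timed
elapsed = time.time() - start

# 'repo' and 'makecache_seconds' are placeholder keyval names
self.write_iteration_keyval({'repo': 'rawhide'},
                            {'makecache_seconds': elapsed})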

Attributes for directories

Test objects have the following directory attributes available[3]:

outputdir       eg. results/<job>/<testname.tag>
resultsdir      eg. results/<job>/<testname.tag>/results
profdir         eg. results/<job>/<testname.tag>/profiling
debugdir        eg. results/<job>/<testname.tag>/debug
bindir          eg. tests/<test>
src             eg. tests/<test>/src
tmpdir          eg. tmp/<tempname>_<testname.tag>
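
For example, a test that wants to keep the raw output of its command alongside the normal Autotest results could write it into self.resultsdir (the filename here is just an example):

import os

# self.results holds the command output collected earlier in run_once()
logfile = os.path.join(self.resultsdir, 'conflicts-output.log')
with open(logfile, 'w') as f:
    f.write(self.results)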

References

TODO links to autotest wiki