Warning: This page is a draft only. It is still under construction and content may change. Do not rely on the information on this page.

Introduction

Here's some info on writing tests for AutoQA. There are four parts to a test: the test code, the test object, the Autotest control file, and the AutoQA control.autoqa file. Typically they all live in a single directory, located in the tests/ dir of the autoqa source tree.

Start with a test: Before considering integrating a test into AutoQA or Autotest, create a working test. Creating a working test should not require knowledge of autotest or autoqa. This page outlines the process of integrating an existing test into AutoQA.

Write test code first

I'll say it again: Write the test first. The tests don't require anything from autotest or autoqa. You should have a working test before you even start thinking about AutoQA.

You can package up pre-existing tests or you can write a new test in whatever language you're comfortable with. It doesn't even need to return a meaningful exit code if you don't want it to (though a meaningful exit code is definitely better). You'll handle parsing the output and returning a useful result in the test object.

If you are writing a brand new test, there are some python libraries that have been developed for use in existing AutoQA tests. More information about this will be available once these libraries are packaged correctly, but they are not necessary to write your own tests. You can choose to use whatever language and libraries you want.

The test directory

Create a new directory to hold your test. The directory name will be used as the test name, and the test object name should match that. Choose a name that doesn't use spaces, dashes, or dots. Underscores are fine.

Drop your test code into the directory - it can be a bunch of scripts, a tarball of sources that may need compiling, whatever.

Next, copy the template files control.template, test_class.py.template, and control.autoqa.template from the autoqa/doc directory into your test dir. Rename them to control, [testname].py, and control.autoqa respectively.

The control file

The control file defines some metadata for this test - who wrote it, what kind of a test it is, what test arguments it uses from AutoQA, and so on. Here's an example control file:

control file for conflicts test

AUTHOR = "Will Woods <wwoods@redhat.com>"
TIME="SHORT"
NAME = 'conflict'
DOC = """
This test runs potential_conflict from yum-utils to check for possible
file / package conflicts.
"""
TEST_TYPE = 'CLIENT'
TEST_CLASS = 'General'
TEST_CATEGORY = 'Functional'

job.run_test('conflicts', config=autoqa_conf, **autoqa_args)

autoqa_conf variable contains string with autoqa.conf file. Note, though, that some of the values in autoqa_conf are changed by the autoqa harness while scheduling the testrun.

autoqa_args is a dictionary, containing all the hook-specific variables (e.g. kojitag for post-koji-build hook). Documentation on these is to be found in hooks/[hookname]/README files.
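
As a purely hypothetical illustration (the real keys depend on the hook and are documented in those README files), the arguments for a post-koji-build run might look something like this:

# Hypothetical values - consult hooks/post-koji-build/README for the
# actual argument names used by that hook.
autoqa_args = {'kojitag': 'dist-f14-updates-candidate'}
job.run_test('conflicts', config=autoqa_conf, **autoqa_args)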

FIXME: Append some real control file to show 'how it looks'

Required data

The following control file items are required for valid AutoQA tests:

  • AUTHOR: Your name and email address.
  • TIME: either 'SHORT', 'MEDIUM', or 'LONG'. This defines the expected runtime of the test - under 15 minutes, less than 4 hours, or more than 4 hours, respectively.
  • NAME: The name of the test. Should match the test directory name, the test object name, etc.
  • DOC: A verbose description of the test - its purpose, the logs and data it will generate, and so on.
  • TEST_TYPE: either 'CLIENT' or 'SERVER'. Use 'CLIENT' unless your test requires multiple machines (e.g. a client and server for network-based testing).
  • TEST_CLASS: This is used to group tests in the UI. 'General' is fine. We may use this field to refer to the test hook in the future.
  • TEST_CATEGORY: This defines the category your test is a part of - usually this describes the general type of test it is. Examples include Functional, Stress, Performance, and Regression.

Optional data

DEPENDENCIES = 'POWER, CONSOLE'
SYNC_COUNT = 1
  • DEPENDENCIES: Comma-separated list of hardware requirements for the test. Currently unsupported.
  • SYNC_COUNT: The number of hosts to set up and synchronize for this test. Only relevant for SERVER-type tests that need to run on multiple machines.

Launching the test object

Most tests will have a line in the control file like this:

job.run_test('conflicts', config=autoqa_conf, **autoqa_args)

This will create a 'conflicts' test object (see below) and pass along the given variables.

Those variables will be inserted into the control file by the autoqa test harness when it's time to schedule the test.

Control files are python scripts

FIXME: Deprecated?

The control file is actually interpreted as a Python script. So you can do any of the normal pythonic things you might want to do, but in general it's best to keep the control file as simple as possible and put all the complicated bits into the test object or the test itself.

Before it reads the control file, Autotest imports all the symbols from the autotest_lib.client.bin.util module.[1] This means the control files can use any function defined in common_lib.utils or bin.base_utils[2]. This lets you do things like:

arch = get_arch()
baseurl = '%s/development/%s/os/' % (mirror_baseurl, arch)
job.run_test('some_rawhide_test', arch=arch, baseurl=baseurl)

since get_arch is defined in common_lib.utils.

Control.autoqa files

FIXME: insert some description - kparal?

Test Objects

The test object is a python file that defines an object that represents your test. It handles the setup for the test (installing packages, modifying services, etc), running the test code, and sending results to Autotest (and other places).

Convention holds that the test object file - and the object itself - should have the same name as the test. For example, the conflicts test contains a file named conflicts.py, which defines a conflicts class, as follows:


FIXME: Adjust to current state

from autotest_lib.client.bin import test, utils
from autotest_lib.client.bin.test_config import config_loader

class conflicts(test.test):
    ...

The name of the class must match the name given in the run_test() line of the control file, and test classes must be subclasses of the autotest test.test class. But don't worry too much about how this works - each hook should contain a test_class_template.py that contains the skeleton of an appropriate test object for that hook, complete with the usual setup code used by AutoQA tests. Just change the name of the file (and class!) to something appropriate for your test.


AutoQATest base class

ExceptionCatcher decorator

Test stages

initialize()

This is an optional method of the test class. It does any pre-test initialization that needs to happen. AutoQA tests typically use this method to parse the autoqa config data passed from the server:

    def initialize(self, config):
        self.config = config_loader(config, self.tmpdir)

Check out autoqa.conf to see what data this variable would hold.

setup()

This is another optional method of the test class. This is where you make sure that any required packages are installed, services are started, your test code is compiled, and so on. For example:

    def setup(self):
        utils.system('yum -y install httpd')
        # ignore_status=True keeps a non-zero exit status from raising an
        # exception, so the status check below actually gets to run.
        if utils.system('service httpd status', ignore_status=True) != 0:
            utils.system('service httpd start')

run_once()

This is where the test code actually gets run. It's the only required method for your test object.

In short, this method should build the argument list and run the test binary, like so:

    def run_once(self, baseurl, parents, reponame):
        os.chdir(self.bindir)
        cmd = "./sanity.py --scratchdir %s --logdir %s" % (self.tmpdir, self.resultsdir)
        cmd += " %s" % baseurl
        # ignore_status=True stops utils.system() from raising an exception on
        # a non-zero exit status, so we can decide the test result ourselves.
        retval = utils.system(cmd, ignore_status=True)
        if retval != 0:
            raise error.TestFail('sanity.py exited with status %d' % retval)
FIXME: Add - how to run a command which returns a non-zero exit code without causing an Autotest exception
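
Another option, used under Getting proper test results below, is to catch the error.CmdError that utils.system() and utils.system_output() raise by default on a non-zero exit status. A minimal sketch (error comes from autotest_lib.client.common_lib):

    def run_once(self, baseurl):
        cmd = "./sanity.py %s 2>&1" % baseurl   # 2>&1 folds stderr into the captured output
        try:
            self.output = utils.system_output(cmd, retain_output=True)
        except error.CmdError, e:
            # Non-zero exit status: keep the output and turn it into a FAIL
            # instead of letting the exception produce an ERROR result.
            self.output = e.result_obj.stdout
            raise error.TestFail('sanity.py exited with a non-zero status')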

See the section on test object attributes for information about self.bindir, self.tmpdir, etc. Also see Getting proper test results for more information about getting results from your tests.

postprocess_iteration()

This method can be used to gather extra data from the test output - detailed failure info, performance numbers, and so on. For example:

    def run_once(self, testtype):
        cmd = './transfer-test --some --flags --testtype=%s' % testtype
        self.output = utils.system_output(cmd, retain_output=True)

    def postprocess_iteration(self):
        keyval = {}
        for line in self.output.splitlines():
            if line.startswith('Max transfer speed: '):
                (dummy, max_speed) = line.split('speed: ')
                keyval['max_speed'] = max_speed
        self.write_test_keyval(keyval)

(See Returning extra data for details about write_test_keyval.)

This method will be run after each iteration of run_once(), but note that it gets no arguments passed in. Any data you want from the test run needs to be saved into the test object - hence the use of self.output to hold the output of the command.

Useful test object attributes

test objects have the following attributes available[3]:

outputdir       eg. results/<job>/<testname.tag>
resultsdir      eg. results/<job>/<testname.tag>/results
profdir         eg. results/<job>/<testname.tag>/profiling
debugdir        eg. results/<job>/<testname.tag>/debug
bindir          eg. tests/<test>
src             eg. tests/<test>/src
tmpdir          eg. tmp/<tempname>_<testname.tag>
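
For example, a test that ships its sources as a tarball will typically unpack and build them in setup() using these paths. A sketch of the usual Autotest pattern (mytool-1.0.tar.gz is a made-up name; the attribute corresponding to tests/<test>/src is self.srcdir):

    def setup(self, tarball='mytool-1.0.tar.gz'):
        # unmap_url() resolves the tarball relative to self.bindir and
        # extract_tarball_to_dir() unpacks it into self.srcdir.
        tarball = utils.unmap_url(self.bindir, tarball, self.tmpdir)
        utils.extract_tarball_to_dir(tarball, self.srcdir)
        os.chdir(self.srcdir)
        utils.system('make')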

Getting proper test results

FIXME: Talk about the self.[result, summary, ...] variables & how to use them.
FIXME: Emphasise that using them is crucial for a simple transition to resultsdb and all the other automated stuff.

First, the basic rule for test results: If your run_once() method does not raise an exception, the test result will be PASS. If it raises error.TestFail or error.TestWarn the test result is FAIL or WARN. Any other exception yields an ERROR result.

For simple tests you can just run the test binary like this:

self.results = utils.system_output(cmd, retain_output=True)

If cmd is successful (i.e. it returns an exit status of 0) then utils.system_output() will return the output of the command. Otherwise it will raise error.CmdError, which will immediately end the test with an ERROR result. If you want to FAIL the test instead, try this:

testfail = False
try:
    # Add "2>&1" to cmd to include stderr in output
    out = utils.system_output(cmd + " 2>&1", retain_output=True)
except error.CmdError, e:
    testfail = True
    out = e.result_obj.stdout

# Do other post-testing stuff here, and then...
if testfail:
    raise error.TestFail

Some tests don't return a useful exit status - they always return 0 - so you'll need to inspect their output to decide whether they passed or failed. That would look more like this:

output = utils.system_output(cmd, retain_output=True)
if 'FAILED' in output:
    raise error.TestFail
elif 'WARNING' in output:
    raise error.TestWarn

Log files and scratch data

Any files written to self.resultsdir will be saved at the end of the test. Anything written to self.tmpdir will be discarded.
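
For instance, a test can copy a tool's log into the results directory so that it survives the run, while doing its scratch work under self.tmpdir. A minimal sketch using the standard os and shutil modules (mytool.log is a made-up file name):

        # Files under self.resultsdir are kept with the job results;
        # everything under self.tmpdir is thrown away after the test.
        shutil.copy(os.path.join(self.tmpdir, 'mytool.log'),
                    os.path.join(self.resultsdir, 'mytool.log'))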

Returning extra data

Further test-level info can be returned by using test.write_test_keyval(dict):

extrainfo = dict()
for line in self.results.splitlines():
    if line.startswith("kernel version "):
        extrainfo['kernelver'] = line.split()[3]
    ...
self.write_test_keyval(extrainfo)
  • For per-iteration data (performance numbers, etc) there are three methods:
    • Just attr: test.write_attr_keyval(attr_dict)
      • Test attributes are limited to 100 characters.[4]
    • Just perf: test.write_perf_keyval(perf_dict)
      • Performance values must be floating-point numbers.
    • Both: test.write_iteration_keyval(attr_dict, perf_dict)
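
For example, a postprocess_iteration() that records a per-iteration transfer speed could use write_iteration_keyval() like this (a sketch; self.max_speed is assumed to have been parsed from the test output earlier, as in the example above):

    def postprocess_iteration(self):
        attrs = {'testtype': 'transfer'}              # descriptive attributes (at most 100 characters)
        perf = {'max_speed': float(self.max_speed)}   # performance values must be floats
        self.write_iteration_keyval(attrs, perf)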

References

Links

Autotest documentation