
Introduction

This page is intended to be a comprehensive list of all the ways people interact with AutoQA. Each item outlines the steps required to accomplish a specific task. The page details activities for test developers, administrators, release engineers and package maintainers, and follows a design similar to the Fedora_Talk_Admin_Cases and related pages.

  • If the use case works today, there should be a link to a wiki page explaining how to do it.
  • If the use case does not exist yet, there should be a link to an AutoQA Ticket.

Test Developer Use Cases

The following use cases are aimed at Roger. Roger is active in the QA community and would like to help the QA team catch as many problems as possible before they reach Fedora users. He has decided to write a new test and integrate it into the AutoQA framework to increase the test coverage.

Decide what to test

  1. Roger examines the list of available tests to see which tests are already executed. For each test he looks into the control file to read a short description of that test.
  2. Roger examines the list of available events to see which events are monitored by the AutoQA watchers. For each event he reads the README file for a short description of the event, and the testlist file to see which tests are executed when the corresponding event happens (a hypothetical testlist is sketched after this list).
  3. Roger finds a problem area that is not currently covered by tests and that he would like to cover. He considers whether the new test would be triggered by an existing watcher (easier) or whether a new watcher/event would have to be written (harder). He also considers whether the new test could use existing standalone tools and scripts (like rpmlint, repoclosure, etc.) or whether new tools would have to be written to achieve the goal.
  4. Roger now has a clear idea of what he wants to test.
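
For illustration only, a testlist file is typically just a plain list of the test names to run when the event fires. The file below is hypothetical and the test names are examples:

      # hypothetical testlist for one event's directory -- one test name per line
      rpmlint
      conflicts
      mytest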

Collect prerequisites for the test

  1. Roger needs a tool that gets the required work done (e.g. traverse the repository, validate the packages, etc.).
  2. The tool either already exists (e.g. repoclosure, rpmlint) or has yet to be written.
  3. If the tool doesn't already exist, Roger creates one. It is a standalone tool, written in an arbitrary programming language and using arbitrary libraries. The only important thing is that it makes reasonable use of standard input, standard output and command-line arguments, so it can easily be run in an automated fashion (a minimal sketch follows this list).
  4. Roger now has all the standalone tools needed to get his task done.
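
As an illustration only, a minimal standalone tool might look like the sketch below. The tool name (mytool) and the check itself are made up; the important shape is that input comes in as command-line arguments, the report goes to standard output and the exit status signals pass/fail:

      #!/usr/bin/env python
      # Minimal sketch of a standalone check tool (hypothetical).
      import sys

      def check(path):
          # Real validation logic (e.g. calling rpmlint or walking a
          # repository) would go here; return a list of problem strings.
          return []

      def main(args):
          if not args:
              sys.stderr.write("usage: mytool PATH...\n")
              return 2
          failed = False
          for path in args:
              for problem in check(path):
                  print("%s: %s" % (path, problem))
                  failed = True
          return 1 if failed else 0

      if __name__ == "__main__":
          sys.exit(main(sys.argv[1:]))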

Determine when the test should run

  1. Roger examines the list of available events to see which events are monitored. For each event he reads the README file for a short description of the event.
  2. Roger examines the autoqa.cron file to see how often the different watchers are run to check for new events (an illustrative excerpt follows this list).
  3. If Roger needs more detailed information, he examines the watchers (the watcher.py files in the watchers' directories) to see the current implementation. The source code can reveal important details.
  4. Roger now knows which event is most suitable to trigger his new test.
  5. If no existing event suits Roger's requirements, a new event has to be created. Roger creates one according to Writing AutoQA Events and Watchers. Because that is not an easy task, he consults the AutoQA team on the qa-devel mailing list and receives important advice or even help with doing that.
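
For orientation, the watcher schedule could look roughly like the excerpt below. This is illustrative only -- the real schedule is the autoqa.cron file shipped with AutoQA, and the paths and intervals here are assumptions (post-koji-build and post-repo-update are existing AutoQA event names):

      # Illustrative excerpt only; see the shipped autoqa.cron for the
      # real schedule. Paths and intervals are assumptions.
      */10 * * * * root /usr/share/autoqa/post-koji-build/watcher.py
      0 * * * *    root /usr/share/autoqa/post-repo-update/watcher.py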

Integrate the test into AutoQA

  1. Roger reads through AutoQA architecture to get a basic overview of how AutoQA works.
  2. Roger installs autotest and AutoQA - FIXME. Roger chooses to install autotest-client+autoqa if his test can be run locally on his own machine and he doesn't want to spend time configuring autotest-server. Roger chooses to install autotest-server+autoqa if his test is more sophisticated or he is willing to configure autotest-server and its clients.
  3. Roger creates a control file and a test object according to Writing AutoQA Tests (both are sketched after this list).
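
The fragment below is a schematic sketch of those two pieces using generic autotest conventions; Writing AutoQA Tests documents the actual base class, helper API and file layout. All names here (mytest, mytool) are made up:

      # --- control (hypothetical) ---
      AUTHOR = "Roger <roger@example.com>"
      NAME = "mytest"
      DOC = """Checks every package in the repository for <some problem>."""
      TIME = "SHORT"
      TEST_TYPE = "CLIENT"

      job.run_test('mytest')

      # --- mytest.py (hypothetical test object) ---
      from autotest_lib.client.bin import test, utils

      class mytest(test.test):
          version = 1

          def run_once(self):
              # Run the standalone tool; a non-zero exit status raises an
              # exception here and marks the test as failed.
              self.results = utils.system_output('mytool /path/to/repo')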

Verify the test works

  1. Roger has created a new AutoQA test. He is sure that the standalone tools he uses inside the test work well, but he wonders whether he has written all the AutoQA control files correctly and whether the test as a whole will function properly. He needs to verify that.
  2. Roger verifies his new test according to Verifying AutoQA tests (a quick smoke test is shown after this list).
  3. Roger has now verified that his new test is working properly in the AutoQA environment.
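
As a quick smoke test, the control file can usually be run directly through the autotest client. The paths below are assumptions and depend on how autotest was installed:

      # Paths are assumptions -- adjust to your autotest installation.
      $ /usr/local/autotest/client/bin/autotest /path/to/mytest/control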

Contribute the test

  1. Roger has a new test which seems to be working well. He wants to publish it upstream in the AutoQA project.
  2. Roger creates a new ticket in AutoQA Trac and provides a patch or a link to his git branch there (an example workflow follows this list). AutoQA Patch Process can help him with creating patches.
  3. Roger can also start a discussion on the qa-devel mailing list about his work.
  4. Roger waits until his new work is accepted.
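
A typical patch workflow might look like this; it is illustrative only (the branch, path and commit message are made up), and AutoQA Patch Process remains the authoritative guide:

      $ git checkout -b mytest
      $ git add tests/mytest
      $ git commit -m "Add mytest: check <some problem> on repo updates"
      $ git format-patch origin/master
      0001-Add-mytest-check-some-problem-on-repo-updates.patch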


Administrator Use Cases

These use cases are aimed at Nancy. Nancy is a member of the Fedora Infrastructure team and has been asked to help the AutoQA project with sysadmin tasks that require access to infrastructure systems and tools.

Create an AutoQA system from scratch

  1. Her first task is to prepare a new Fedora system.
  2. Once the system is prepared, Nancy reads and follows the instructions for how to Install_and_configure_autotest.
  3. Now Nancy must install the AutoQA packages - [1] (an illustrative command follows this list).
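
Assuming the packages are available in the configured repositories, installation is a single yum transaction. The package name below is an assumption -- the linked instructions give the actual repository and package set:

      # Package name is an assumption; follow the linked instructions.
      $ yum install autoqa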

Add a new test system to AutoQA

  1. Nancy first starts by setting up an AutoQA system from scratch (see #Create an AutoQA system from scratch).
  2. Next, Nancy wants to add one or more systems that autotest will use as test systems. She does so by following the instructions at How_to_add_autotest_clients.

Recover a failed test system

Remove a test system

Update puppet configuration

Package Maintainer Use Cases

Ned is the maintainer of several packages in Fedora. After dealing with several recurring bugs in the last round of updates to his packages, Ned would like to write some tests to help catch the failures before they happen again.

View existing test coverage

Write a test

  1. See #Write_a_test perhaps?
  2. Where do they store the tests? In CVSDist?

Run the test(s) manually

Test for proper integration of the test(s)

Subscribe for notifications to a selected test (or test/package combination)

Receive notification of test failure ... need more details?

  1. FIXME