From Fedora Project Wiki

This SOP describes the process for creating a test plan for a specific package. Following the procedure outlined on this page is important as it ensures we use a similar process for testing all packages in Fedora, and makes it simpler to integrate these tests into the update release process.


In Fedora, a test plan is a coordinated set of test cases that together test the functionality of a particular piece of software or process.


In most cases, the best way to create a test plan for a particular package is to consider the tasks or functions the package is capable of performing. For instance, if the package you wished to create a test plan for was yum, you might identify the following functions:

  • install a package
  • remove a package
  • provide information on a package
  • search for a package
  • update the system
  • set up a repository
  • disable a repository
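As an illustration, the functions above map roughly onto yum command lines as in the following sketch. The package name and repository id are placeholders chosen for the example, and each command is only echoed (via the `run` helper, an invention of this sketch) so the script can execute without modifying a real system:

```shell
#!/bin/sh
# Rough mapping of the identified functions to yum commands.
# PKG and REPO are placeholders; 'run' only echoes each command so
# this sketch runs without touching the system.
PKG=zsh
REPO=updates-testing
run() { echo "+ $*"; }   # replace the echo with real execution to test

run yum -y install "$PKG"                  # install a package
run yum -y remove "$PKG"                   # remove a package
run yum info "$PKG"                        # provide information on a package
run yum search "$PKG"                      # search for a package
run yum -y update                          # update the system
run yum-config-manager --enable "$REPO"    # set up / enable a repository
run yum-config-manager --disable "$REPO"   # disable a repository
```

Each echoed line corresponds to one function from the list, which is exactly the granularity at which test cases are written in the next section.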

Identify which of these functions, if any, constitute part of the Fedora critical path. For our yum example, updating the system would be a critical path function. You may also want to split the functions into 'core' and 'extended' groups: core functions are so basic that the package would be considered severely broken if they failed, while extended functions are less important. Of course, if you are creating a test plan for a very complex package, it may be impractical to create test cases for every possible function, or even to identify them all. In that case, simply concentrate on the most vital functions first, particularly Fedora critical path functions. Even one test case, or a very small number, is better than none at all!

Test case creation

Once you have identified the functions you wish to create test cases for, create a test case for each function, according to the SOP for creating test cases.


To link the cases together and ensure they can easily be found both manually and automatically (for potential use by tools such as Bodhi or Fedora Easy Karma), use categories. Always create or use the appropriate category when creating a package-specific test case, no matter what the immediate purpose of the test case is, and even if there is only one test case: the category is still necessary to help people locate that test case.

Simple (required) categorization

Add each test case for your package to the category named Category:Package_(packagename)_test_cases, where (packagename) is the name of the package. Use the source (.src.rpm) package name, not a binary package name. For our yum example, the category would be named Category:Package_yum_test_cases. Make this category a sub-category of Category:Test_Cases, using a sort key so that it appears under the appropriate letter. For our yum example, we would add [[Category:Test_Cases|Y]] to the Category:Package_yum_test_cases page, making it appear under 'Y' rather than 'P' on the Category:Test_Cases page. If you identified any critical path functions, also add the test case(s) for those function(s) to the category named Category:Critical_path_test_cases.

For our yum example, consider two test cases:

  • QA:Testcase_yum_upgrade - to test upgrading the system
  • QA:Testcase_yum_history - to test yum's history function

Both must be in the category Category:Package_yum_test_cases, and that category should be a sub-category of Category:Test_Cases. However, QA:Testcase_yum_upgrade must also be in the category Category:Critical_path_test_cases.
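In wiki markup, the bottom of each test case page would then carry these category links (a sketch of just the category footers, not the full pages):

```wikitext
<!-- At the bottom of QA:Testcase_yum_upgrade -->
[[Category:Package_yum_test_cases]]
[[Category:Critical_path_test_cases]]

<!-- At the bottom of QA:Testcase_yum_history -->
[[Category:Package_yum_test_cases]]
```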

Advanced (optional) categorization

If you want to split the test cases into groups, you can add sub-categories of the main package category, such as Category:Package_(packagename)_core_test_cases and Category:Package_(packagename)_extended_test_cases. This split can be helpful for testers, if there are many test cases for a package; it could also be expressed by tools such as Fedora Easy Karma.

For our yum example, consider a third test case alongside our previous two:

  • QA:Testcase_yum_install - to test installing a package
  • QA:Testcase_yum_upgrade - to test upgrading the system
  • QA:Testcase_yum_history - to test yum's history function

As before, all three must be in the category Category:Package_yum_test_cases, and QA:Testcase_yum_upgrade must be in the category Category:Critical_path_test_cases. Additionally, to separate core and extended test cases for yum, you could place QA:Testcase_yum_install and QA:Testcase_yum_upgrade in a category Category:Package_yum_core_test_cases and QA:Testcase_yum_history in a category Category:Package_yum_extended_test_cases, and make both of those categories sub-categories of Category:Package_yum_test_cases.
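A sketch of the category markup for this core/extended split, added alongside each test case's existing Category:Package_yum_test_cases membership:

```wikitext
<!-- On both Category:Package_yum_core_test_cases and
     Category:Package_yum_extended_test_cases -->
[[Category:Package_yum_test_cases]]

<!-- Added to QA:Testcase_yum_install and QA:Testcase_yum_upgrade -->
[[Category:Package_yum_core_test_cases]]

<!-- Added to QA:Testcase_yum_history -->
[[Category:Package_yum_extended_test_cases]]
```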

Test plan instructions

You do not have to write explicit test plan instructions if it is not necessary for the package in question; the existence of the categories will serve to make the tests discoverable by testers. You can, of course, link to the category page or the individual tests from anywhere else you choose, as well.

If it would be useful for the package in question, you can add instructions or hints for running the complete test plan to the Category:Package_(packagename)_test_cases page; remember, category pages can be edited and used just like normal content pages in the wiki system. You can then link to the category page from wherever else would be appropriate.
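For instance, a category page carrying test plan instructions might look like the following sketch (the instructional text here is invented purely for illustration; only the category link with its sort key is required):

```wikitext
Run [[QA:Testcase_yum_upgrade]] first, as it brings the system fully
up to date before the other cases are attempted. Please report your
results as feedback on the relevant update in Bodhi.

[[Category:Test_Cases|Y]]
```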

You could also include instructions for running some or all of the tests in some other page, on the Wiki or outside it, and then link either to the individual tests or to the category page.

You may also want to write a test plan specific to a particular event, such as a Test Day, in which case it would make most sense to include it in or link to it from the page for that event.

As you can see, there are many possible scenarios for test plan instructions, so we leave this to the discretion of those creating the test plans, test cases and test events.


Automated testing

If any of the test cases you create would be amenable to automated testing - that is, they could be run without the need for manual hand-holding - you may want to look into integrating those test cases with the AutoQA system. If you contact the AutoQA team, they will be able to help and advise you with writing a script to perform the testing, and with hooking it into the AutoQA system so that these tests can be run automatically whenever you change the package.
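For a rough sense of what "amenable to automated testing" means, a minimal automatable check might look like the sketch below. The `run_check` helper is a hypothetical illustration, not part of AutoQA's actual interface, and stand-in commands replace real yum invocations so the sketch can execute anywhere:

```shell
#!/bin/sh
# Minimal sketch of an automatable pass/fail check.
# run_check is a hypothetical helper (not an AutoQA API): it runs a
# command and reports PASS or FAIL based on its exit status.
run_check() {
  desc=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS: $desc"
  else
    echo "FAIL: $desc"
  fi
}

# A real test would exercise yum, e.g.:
#   run_check "install a package" yum -y install zsh
# Stand-in commands keep this sketch runnable anywhere:
run_check "command that succeeds" true
run_check "command that fails" false
```

A check structured this way needs no human judgment to interpret, which is the property that makes a test case a candidate for automation.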