The kernel testing initiative aims to increase the quality of the Fedora kernel through both manual and automated testing. We plan to build a framework that will:

  • Allow users to easily run a regression test suite on bare hardware to catch driver issues
  • Provide automated kernel regression testing against every build
  • Provide automated performance testing of specified new kernel builds

As this effort is in its early stages, details will be fleshed out as they are settled. You can always check our progress at the bottom of this page.

Regression Testing

The goal is a simple regression test suite that any user can run against their running kernel. The tests should be fast and non-destructive. Any tests with destructive potential should be marked separately so that they do not run in the common case, but can still be run as part of an extended run when the user does not fear data loss. These tests will be run as part of the automated testing process, but should be easy for anyone to run without having to set up autotest.

A number of single-check regression tests should be created to test for common regressions in the kernel. These can be individual executable tests; the most important criterion is that each test use a common reporting format, as explained in KernelRegressionTestGuidelines. Because the results of potentially dozens of tests scroll by quickly, it is important to be able to identify failures at a glance. The tests will be called by a master test script from a kernel-testing subpackage, or possibly a makefile within the Fedora kernel git tree. As many of the individual tests will be driver related, the control file can check whether a specific driver is loaded and simply skip the test if it is not, or if the hardware is not available. A minimal sketch of such a test follows.
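
For illustration, here is a minimal sketch of a single-check test in Python. The test name, the PASS/FAIL/SKIP labels, and the module check are assumptions made for this example; the actual reporting format is the one defined in KernelRegressionTestGuidelines.

  #!/usr/bin/env python
  # Hypothetical single-check regression test. The PASS/FAIL/SKIP labels
  # are placeholders; the real format is in KernelRegressionTestGuidelines.
  import os
  import sys

  TEST_NAME = "proc-sysctl-read"    # hypothetical test name
  REQUIRED_MODULE = None            # e.g. "e1000e" for a driver-specific test

  def module_loaded(name):
      # A loaded module appears as the first field of a /proc/modules line.
      with open("/proc/modules") as f:
          return any(line.split()[0] == name for line in f)

  def report(result):
      # One grep-friendly line per test, so failures stand out in long runs.
      print("%s ... %s" % (TEST_NAME, result))
      sys.exit(0 if result in ("PASS", "SKIP") else 1)

  if REQUIRED_MODULE and not module_loaded(REQUIRED_MODULE):
      report("SKIP")                # driver absent: skip rather than fail

  try:
      os.listdir("/proc/sys/kernel")   # the actual, trivially fast check
      report("PASS")
  except OSError:
      report("FAIL")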

Automated Testing

The autotest framework will allow for automated testing of kernel builds. This will fit into the autoqa framework, or it can be run standalone on a user's system to test kernel releases without needing to reboot the running machine. Phase 1 of the autotest implementation will include instructions for users to download, install, and configure autotest to run the basic kernel boot and regression tests against a koji build or a local build. The goal is to have people up and running quickly without having to delve into a full-blown autotest configuration. The initial instructions should be similar to:

  • git clone autotest
  • edit the configuration file for the appropriate local test directories
  • run autotest against a "setup" control file to install a guest for autotest's use
  • run autotest against the kernel control file as needed to test new releases (a sketch of such a control file follows this list)
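
For orientation, autotest control files are plain Python executed by the autotest client, with a job object that autotest supplies. The sketch below follows the standard upstream control-file examples, so treat the calls as approximate; the kernel and config paths are placeholders, and sleeptest is a trivial test that ships with autotest. In this workflow the control file runs inside the guest, so the reboot affects only the guest, not the running machine.

  # Sketch of an autotest client control file (plain Python; the 'job'
  # object is provided by the autotest client at run time). Paths are
  # placeholders, not the project's real files.
  testkernel = job.kernel('/tmp/linux-3.3.1.tar.bz2')
  testkernel.config('/tmp/config-3.3.1')   # placeholder .config
  testkernel.build()
  testkernel.boot()            # reboots (the guest) into the built kernel

  job.run_test('sleeptest')    # trivial test shipped with autotest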

It seems simple, and it should be. Our goal is to make it so. A few pieces still need to be worked on to make that possible. All autotest enhancements will be submitted upstream. This includes the ability to automatically install koji builds into the guest, and the ability to use a centralized config file for all of the basic parameters so that users do not have to modify multiple test cases to fit into their environment.
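
To make the centralized config idea concrete, here is a hypothetical example; the file name, section, and keys are all invented for illustration.

  # Hypothetical centralized config, e.g. ~/.autotest-kernel.cfg, letting
  # users set local parameters once instead of editing every test case:
  #
  #   [kernel-testing]
  #   test_dir = /usr/local/autotest/tests
  #   guest    = f16-test-guest
  #   koji_tag = f16-updates-candidate
  #
  # Reading it from a test or control file with the Python stdlib:
  import os
  try:                                   # module name differs by Python version
      from configparser import ConfigParser
  except ImportError:                    # Python 2, contemporary with autotest
      from ConfigParser import SafeConfigParser as ConfigParser

  cfg = ConfigParser()
  cfg.read(os.path.expanduser('~/.autotest-kernel.cfg'))
  test_dir = cfg.get('kernel-testing', 'test_dir')   # shared by all tests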

The next step will be to tie this into koji and autoqa so that the basic regression tests are automatically run every time a kernel build completes.

Finally, we want to be able to tie in automated performance workloads for testing on specific builds. This will allow us to catch performance regressions more easily, but as these will be heavier-weight tests, we do not want to waste cycles running them on debug kernels or minor updates. A system will need to be devised to determine which kernels are automatically performance tested; one possible heuristic is sketched below.
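
Purely as an assumption (no selection policy has been defined), one such heuristic might look like this:

  # Hypothetical selection heuristic; the "debug" variant check and the
  # version-bump rule are assumptions, not settled project policy.
  def should_perf_test(variant, version, last_perf_version):
      """Return True if this koji build warrants the heavyweight perf run."""
      if variant == 'debug':
          return False              # debug kernels skew performance numbers
      if version == last_perf_version:
          return False              # minor update of an already-tested version
      return True                   # new upstream version: run the full suite

  # Examples:
  #   should_perf_test('debug', '3.3.1', '3.3.0')  -> False
  #   should_perf_test('',      '3.3.1', '3.3.1')  -> False
  #   should_perf_test('',      '3.4',   '3.3.1')  -> True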

Performance Testing

More in-depth performance testing should be tied into autotest. As this is the last phase of the project, many of the requirements have not been set. There are a few key elements that we do know:

  • Testing should use a common platform, with the kernel being the only change, so that results are meaningful
  • Testing should be limited to release kernels; there is no benefit to performance testing debug kernels
  • Comparative results should be graphed for quick verification (a sketch follows this list)
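
As a minimal sketch of the graphing idea, assuming matplotlib as the plotting tool (no tool has been chosen) and made-up numbers:

  # Minimal comparative-results graph; matplotlib is an assumption, and the
  # kernel versions and scores are made-up placeholders.
  import matplotlib.pyplot as plt

  kernels = ['3.3.0-4.fc16', '3.3.1-5.fc16', '3.3.2-1.fc16']  # placeholders
  scores  = [1000, 1010, 940]          # placeholder benchmark throughput

  xs = range(len(kernels))
  plt.plot(xs, scores, marker='o')
  plt.xticks(xs, kernels, rotation=20)
  plt.ylabel('throughput (higher is better)')
  plt.title('Kernel-to-kernel comparison (hypothetical data)')
  plt.savefig('kernel-perf.png')       # a dip flags a possible regression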

Status

  • Regression testing: See KernelRegressionTests. Currently coming up with a list of test cases to be written.
  • Automated testing: Working with upstream to get the guest koji builds module included. Documentation is in progress. Config file integration is still to do.
  • Performance testing: Not yet started