Latest revision as of 22:58, 15 March 2018

The kernel testing initiative aims to increase the quality of the Fedora kernel through both manual and automated testing. We plan to build a framework that will:


  • Allow users to easily run a regression test suite on bare hardware to catch driver issues
  • Provide automated kernel regression testing against every build
  • Provide automated performance testing of specific new kernel builds

As we are in the early stages of getting this into place, more details will be fleshed out as we get them. You can always check our progress on the bottom of this page.

Regression Testing

The goal of this is to have a simple regression test suite that any user can run against their running kernel. The tests should be fast and non-destructive. If there are certain tests with destructive potential, they should be marked separately so that they will not run in the common case, but can be run as part of an extended run where the user doesn't fear data loss. These tests will be run as part of the automated testing process, but should be easy for anyone to run without having to set up autotest.

A number of single-check regression tests should be created to test for common regressions in the kernel. These can be individual executable tests; the most important criterion is that each test use the common reporting format explained in KernelRegressionTestGuidelines. As the results of potentially dozens of tests go by quickly, it is important to be able to identify failures at a glance. The tests will be called by a master test script from a kernel-testing subpackage, or possibly a makefile within the Fedora kernel git tree. As many of the individual tests will be driver related, the control file can check that a specific driver is loaded, and simply skip the test if it is not or if the hardware is not available.
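To illustrate the skip-and-report behavior described above, here is a minimal sketch, in Python rather than the suite's actual code: each test emits one line in a uniform PASS/FAIL/SKIP format, and a test whose driver is absent is skipped rather than failed. The function names and the exact result strings are illustrative assumptions, not the suite's real interface.

```python
# Hypothetical sketch: skip a driver test when the driver is not loaded,
# and report every outcome in one common, easy-to-scan format.

def driver_loaded(driver, proc_modules_text):
    """Return True if `driver` appears in the given /proc/modules contents."""
    return any(line.split()[0] == driver
               for line in proc_modules_text.splitlines() if line.strip())

def run_test(name, driver, proc_modules_text, test_fn):
    """Run one test and emit a single uniform result line."""
    if not driver_loaded(driver, proc_modules_text):
        result = "SKIP"          # driver/hardware not present: not a failure
    else:
        result = "PASS" if test_fn() else "FAIL"
    print("%-30s %s" % (name, result))
    return result
```

With this shape, a run over dozens of tests produces one aligned line per test, so a FAIL stands out even when results scroll by quickly.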

If you are simply interested in running the regression tests, instructions can be found on the QA:Testcase_kernel_regression page.

If you would be interested in helping out and writing a test, please add it to the KernelRegressionTests page.

The regression test suite will be packaged for Fedora, making it easy for end users to install and run. It will also have an optional tie in to the fedmsg bus, which will allow result reporting and badges to be awarded.

The current results are visible on The Kerneltest Web App. We encourage you to submit your results.

Automated Testing

The Fedora Message Bus shares information on completed builds. Using a client sitting on the fedmsg bus, we can get instant notification of a completed build. This allows our KVM host to launch the appropriate guests and begin testing immediately. Currently, all builds are being tested with the regression test suite.
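The decision the bus client makes can be sketched as follows. The topic name and message fields below follow Koji's buildsys announcements as published on fedmsg, but treat them, and the helper names, as assumptions rather than the harness's actual code.

```python
# Sketch of the fedmsg-side filter: react only to messages that announce
# a *completed* build of the kernel package, then launch test guests.
# Topic/field names are assumptions based on Koji buildsys messages.

KOJI_COMPLETE = 1  # Koji build state "COMPLETE" (assumed constant)

def is_completed_kernel_build(topic, msg):
    """True when a bus message announces a finished kernel build."""
    return (topic.endswith("buildsys.build.state.change")
            and msg.get("name") == "kernel"
            and msg.get("new") == KOJI_COMPLETE)

def handle_message(topic, msg, launch_guests):
    """Hand a completed kernel build (by NVR) to the KVM host, else ignore."""
    if is_completed_kernel_build(topic, msg):
        nvr = "%s-%s-%s" % (msg["name"], msg["version"], msg["release"])
        launch_guests(nvr)   # e.g. boot libvirt guests against this build
        return nvr
    return None
```

In practice the harness would sit in a loop consuming bus messages and call something like handle_message for each one; everything that is not a completed kernel build falls through untouched.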

Finally, we want to be able to tie in automated performance workloads for testing on specific builds. This will allow us to catch performance regressions more easily, but as these will be more heavyweight tests, we do not want to waste cycles running them on debug kernels or minor updates. An automated performance regression test framework will be in place soon to allow performance testing of non-debug Rawhide kernels, with fairly strict controls to ensure kernels are fairly compared.

Performance Testing

More in-depth performance testing should be tied into autotest. As this is the last phase of the project, many of the requirements have not been set. There are a few key elements that we do know:

  • Testing should use a common platform, with the kernel being the only change, to provide meaningful results.
  • Testing should be limited to release kernels; there is no benefit to performance testing debug kernels.
  • Comparative results should be graphed for quick verification.
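The comparison step implied by these requirements can be sketched as: given timings for the same workload on two kernels run on identical hardware, flag a regression only when the slowdown exceeds a noise threshold. The function names and the 5% threshold are illustrative assumptions, not part of any existing framework.

```python
# Hypothetical comparison of one workload's runtimes (lower is better)
# on a baseline kernel vs. a candidate kernel on the same hardware.

def mean(xs):
    return sum(xs) / float(len(xs))

def compare_runs(baseline_secs, candidate_secs, threshold=0.05):
    """Return (fractional_change, regressed) for two sets of runtimes.

    A change of +0.10 means the candidate kernel is 10% slower; anything
    above `threshold` is flagged, smaller differences are treated as noise.
    """
    base, cand = mean(baseline_secs), mean(candidate_secs)
    change = (cand - base) / base
    return change, change > threshold
```

The per-kernel means produced this way are also the natural input for the graphs mentioned above.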

Status

  • Regression testing: See KernelRegressionTests. Some test cases have been written; more are always appreciated. The framework with tests is on Pagure. Packaging and fedmsg/badges integration are coming soon.
  • Automated Testing: A working test harness that interacts with the fedmsg bus and libvirt is in place on dedicated hardware. Documentation is in progress. The harness git repository is publicly hosted on GitHub.
  • The front end is available for viewing test results, as well as submitting your own: Kernel Tests
  • Performance Testing: Hardware is in place. Testing framework is not started.