'''This page is a draft only.''' It is still under construction and content may change. Do not rely on the information on this page.

Task: [kparal + jskladan] - Define personas and write use cases

Example: [[No_frozen_rawhide_announce_plan#Use_Cases]]

== Use cases for the AutoQA resultsdb ==


'''Some of these use cases are not specific to resultsdb; they need to be moved to [[AutoQA Use Cases]].'''
 
=== Package maintainer ===
 
* Michael wants to push a new update for his package.
*# Michael simply submits his package update to the Fedora Update System and it takes care of the rest. Michael is informed if there is a problem with the update.
 
* Michael wants to know if there is something he could do to improve the quality of his package.
*# Michael reviews the AutoQA results for recent updates of his package in the web front-end. He goes through the output of the advisory tests and also through the output of other tests marked as "attention recommended".
 
* Michael was warned that his package update failed automated QA testing and wants to see the cause.
*# Michael inspects the AutoQA results for the latest update of his package in the web front-end. He goes through the output of the mandatory and introspection tests and reads the summary. If additional information is needed, he opens the full test logs.
 
* Michael wants to waive errors related to his package update, because he is sure they are false positives.
*# Michael logs in to the AutoQA web front-end with his Fedora account to prove he has the privileges to waive errors on a particular package. He inspects the errors on his package update and waives those that are false positives.
 
* Michael wants to report a false error from one of the AutoQA test cases and ask the AutoQA maintainers for a fix.
*# Michael follows the instructions on the AutoQA wiki page (or web front-end) <https://fedoraproject.org/wiki/AutoQA> and uses the mailing list or bug tracker to report his problem.
 
=== Tester ===
 
* Stanley
 
=== QA member ===
 
* Caroline knows that some recent tests failed because of hardware problems with the test clients. She wants to re-run those tests.
*# Caroline logs in to the AutoQA web front-end, selects the desired tests and instructs AutoQA to run them again.
 
* Caroline declares war on LSB-noncompliant initscripts. She wants all packages to be tested for this.
*# Caroline uses the CLI to query the AutoQA Results DB for the results of the 'Initscripts check' test for all recently updated packages (see the sketch below). If she is not satisfied, she instructs AutoQA to run this test on all available packages. '''<<FIXME - how?>>'''
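
The Results DB query interface is not designed yet, so the following is only a minimal sketch of what such a CLI query could look like. It assumes a REST-style JSON API at a placeholder URL, an 'initscripts' test case name and made-up field names; none of these are final.

<pre>
#!/usr/bin/python
# Hypothetical sketch only -- the Results DB API does not exist yet.
# RESULTSDB_URL, the test case name and the field names are placeholders.
import json
import urllib
import urllib2

RESULTSDB_URL = "https://autoqa.fedoraproject.org/resultsdb/api"  # placeholder

def query_results(testcase, since):
    """Return result records for one test case since the given date."""
    params = urllib.urlencode({"testcase": testcase, "since": since})
    response = urllib2.urlopen("%s/results?%s" % (RESULTSDB_URL, params))
    return json.load(response)

# Print recently updated packages whose initscripts check did not pass.
for result in query_results("initscripts", "2010-03-01"):
    if result["outcome"] != "PASSED":
        print "%s: %s" % (result["package"], result["outcome"])
</pre>

A real CLI tool could wrap a query like this and take the test case name and date range as options.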
 
=== Developer ===
 
* Nathan is curious about possible ways to make his software better.
*# Nathan checks the AutoQA web front-end and displays a list of tests that were performed on his software recently. He checks the test descriptions and chooses the tests that relate to the software itself, not to packaging. He inspects the results of these tests and looks for errors, warnings or other comments on the software's functionality. Serious errors have probably already been reported to Nathan by the corresponding package maintainer, but there may be other, less serious issues that were not.
 
* Kate is heavily interested in Rawhide composes. She wants to be notified every day whether the Rawhide compose succeeded or failed, and if it failed, why.
*# Kate creates a script that remotely queries the AutoQA Results DB for the current Rawhide compose testing status and runs it every day (a sketch follows below). If the Rawhide compose fails, the script notifies her and gives her a link to the AutoQA web front-end with more detailed results of the whole test.
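
Again, the query API does not exist yet; the sketch below only illustrates the idea, assuming a placeholder REST-style endpoint, a 'rawhide-compose' test case name and made-up e-mail addresses. Kate would run it daily from cron.

<pre>
#!/usr/bin/python
# Hypothetical sketch only -- endpoint, test case name, field names and
# e-mail addresses are placeholders; the real Results DB API is not designed yet.
import json
import smtplib
import urllib2
from email.mime.text import MIMEText

RESULTSDB_URL = "https://autoqa.fedoraproject.org/resultsdb/api"  # placeholder
FRONTEND_URL = "https://autoqa.fedoraproject.org/results"         # placeholder

# Fetch the most recent result of the Rawhide compose test.
response = urllib2.urlopen(RESULTSDB_URL + "/results?testcase=rawhide-compose&latest=1")
result = json.load(response)[0]

if result["outcome"] != "PASSED":
    # Notify Kate by e-mail with a link to the detailed results.
    body = "Rawhide compose failed.\nDetails: %s/%s" % (FRONTEND_URL, result["id"])
    msg = MIMEText(body)
    msg["Subject"] = "Rawhide compose failed"
    msg["From"] = "autoqa-watch@localhost"
    msg["To"] = "kate@localhost"
    server = smtplib.SMTP("localhost")
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    server.quit()
</pre>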
 
=== AutoQA test developer ===
 
* Brian
 
=== Fedora tools ===
 
 
=== General public ===
 
* Karen
