
QualityAssurance

In this section, we cover the activities of the QA team[1]. For more information on the work of the QA team and how you can get involved, see the Joining page[2].

Contributing Writer: Adam Williamson

AutoQA initscript testing

Josef Skladanka updated[1] the status of the automated initscripts test effort[2]. He explained that 30% of the initscripts had now been reviewed, and again asked for volunteers to help complete the process.
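The reviews assess whether each initscript behaves consistently enough to be checked automatically. As a rough illustration of what such an automated check involves, the Python sketch below exercises an initscript's basic actions and compares the exit codes with the LSB conventions; the structure and the service name used here are illustrative assumptions, not AutoQA's actual test code.

<pre>
#!/usr/bin/python
# Minimal sketch of an automated initscript check (must run as root):
# exercise the basic actions and compare exit codes with the LSB
# conventions. The "crond" service and this structure are illustrative,
# not AutoQA's actual code.
import subprocess

def run_action(service, action):
    """Run '/etc/init.d/<service> <action>' and return its exit code."""
    return subprocess.call(["/etc/init.d/" + service, action])

def check_initscript(service):
    results = []
    # 'start' and 'stop' should exit 0 on a conforming initscript.
    results.append(("start", run_action(service, "start") == 0))
    # LSB: 'status' exits 0 while the service is running...
    results.append(("status (running)", run_action(service, "status") == 0))
    results.append(("stop", run_action(service, "stop") == 0))
    # ...and 3 once the service has been stopped.
    results.append(("status (stopped)", run_action(service, "status") == 3))
    return results

if __name__ == "__main__":
    for action, passed in check_initscript("crond"):
        print("%s: %s" % (action, "PASS" if passed else "FAIL"))
</pre>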

Fedora 14 QA schedule

John Poelstra posted[1] the QA group schedule for Fedora 14[2], including all the significant dates for the team in the run-up to the next Fedora release.

Virtualized testing

Bob Lightfoot asked[1] whether there was a consensus on the use of virtual machines as opposed to real systems in testing, and whether it is acceptable to run tests of the install media in virtual machines. Richard Ryniker's well-considered response[2] pointed out that "just as an error observed on 'real' hardware might be attributed to a quirk or fault in that platform, so too an error in a VM might be the result of some bug in the implementation of the VM," and that "errors observed in a VM environment...should be subjected to the same triage process that might elevate them to 'critical' status because they seriously impact operation on many (real or virtual) platforms, or reduce them to 'future consideration' status because they have little impact, they occur only on platforms rare enough to suggest a quirk or platform fault is their cause". Adam Williamson said[3] that, in both cases, testing in virtual machines is valuable but testing on real hardware is also necessary.

NSS dependency issue

During the QA weekly meeting of 2010-06-07[1], Adam Williamson brought up the broken dependency in the nss-softokn package which had caused update failures for many users of the 64-bit edition of Fedora 13. The group concluded that there had been no failure in the QA process, but agreed that it would be a good idea to make sure the AutoQA dependency checks will be able to catch this particular type of problem when they go live. Adam promised to send Will Woods a summary of the issue for this purpose.
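The core of such a dependency check is closure: every capability a package requires must be provided by some package in the repository. The Python sketch below shows the idea with hypothetical package data; AutoQA's real check operates on RPM metadata and is considerably more involved.

<pre>
# Sketch of a dependency-closure check of the general kind discussed:
# every capability a package Requires must be Provided by something in
# the repository. The package data is hypothetical; a real check reads
# RPM metadata and handles versioned dependencies, architectures, etc.
def find_broken_deps(packages):
    """packages: dict of name -> {'provides': [...], 'requires': [...]}"""
    provided = set()
    for pkg in packages.values():
        provided.update(pkg["provides"])
    broken = []
    for name, pkg in sorted(packages.items()):
        for req in pkg["requires"]:
            if req not in provided:
                broken.append((name, req))
    return broken

# Hypothetical repository: pkg-b requires 'libmissing', which no
# package provides, so the check should flag it.
repo = {
    "pkg-a": {"provides": ["pkg-a", "libfoo"], "requires": []},
    "pkg-b": {"provides": ["pkg-b"], "requires": ["libfoo", "libmissing"]},
}
for name, req in find_broken_deps(repo):
    print("%s has unresolvable dependency: %s" % (name, req))
</pre>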

Triage metrics

During the Bugzappers weekly meeting of 2010-06-08[1], Adam Williamson recapped previous efforts to produce a system for monitoring the triage process and providing metrics on triage work, and proposed a simpler alternative: a set of basic Bugzilla queries that could provide useful information in the short term without a lot of complex work. Jeff Raber volunteered to attempt this.
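As an example of the sort of simple query envisaged, the Python sketch below builds a Red Hat Bugzilla buglist URL for Fedora bugs still in NEW state and lacking the Triaged keyword, one rough proxy for "awaiting triage". The exact fields and values the Bugzappers' queries would use are assumptions here, not the queries Jeff produced.

<pre>
# Sketch: build a Bugzilla buglist URL listing Fedora bugs in NEW state
# that do not carry the 'Triaged' keyword. The parameter choices are
# illustrative assumptions, not the Bugzappers' actual queries.
from urllib.parse import urlencode

BASE = "https://bugzilla.redhat.com/buglist.cgi"

params = {
    "product": "Fedora",
    "bug_status": "NEW",
    "keywords": "Triaged",
    "keywords_type": "nowords",  # match bugs WITHOUT the keyword
}

print(BASE + "?" + urlencode(params))
</pre>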