
Summary

This is a high-level description of the different subprojects that would make up the future QA automation system, along with some of the decisions that need to be made. This is a living document and will likely change during the initial phases.


Scheduling

We're currently using a very primitive script that piggybacks on fedmsg-hub to trigger generic package-level checks. The biggest concern right now is whether fedmsg is reliable enough to use for scheduling checks. It's not that fedmsg is unreliable; rather, if we're gating builds or updates on automation results, the consequences of failing to schedule a check are bad enough that even the rare case of dropped messages may be worth worrying about.
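
As a point of reference, a fedmsg-hub consumer of the kind described above looks roughly like the following. This is a minimal sketch; the schedule_check() helper is a hypothetical placeholder, not our actual trigger code.

```python
# Minimal sketch of a fedmsg-hub consumer that triggers a check when a
# koji build completes. schedule_check() is a hypothetical placeholder.
import fedmsg.consumers


def schedule_check(nvr):
    """Hypothetical helper that queues package-level checks for a build."""
    print('scheduling checks for %s' % nvr)


class BuildCompleteConsumer(fedmsg.consumers.FedmsgConsumer):
    # Only listen for koji build state changes.
    topic = 'org.fedoraproject.prod.buildsys.build.state.change'
    config_key = 'taskbot.consumer.enabled'

    def consume(self, msg):
        body = msg['body']['msg']
        if body.get('new') == 1:  # koji state 1 == COMPLETE
            schedule_check('%(name)s-%(version)s-%(release)s' % body)
```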

The first step is figuring out whether dropped messages are even worth worrying about - some investigation will be needed. If it turns out that they are, that raises the question of whether we want to supplement fedmsg with regular queries against koji and/or bodhi, or shift to doing all scheduling from regular queries instead of relying on fedmsg reception.
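
For comparison, the polling alternative would look something like the sketch below, using koji's Python client. The tag, the poll interval, and the reuse of the hypothetical schedule_check() helper are all illustrative assumptions.

```python
# Rough sketch of scheduling via regular koji queries instead of fedmsg.
# The tag, interval and schedule_check() helper are illustrative.
import time

import koji

seen = set()
session = koji.ClientSession('https://koji.fedoraproject.org/kojihub')

while True:
    # A real scheduler would query by completion time; grabbing the
    # latest builds in a tag keeps the example short.
    for build in session.getLatestBuilds('f20'):
        if build['nvr'] not in seen:
            seen.add(build['nvr'])
            schedule_check(build['nvr'])  # hypothetical helper
    time.sleep(300)  # poll every five minutes
```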

Assuming that there are no major issues, scheduling is likely something we can avoid working on much for now. As we add more checks, and especially as we evolve to support more complicated scheduling, we will likely need to revisit the scheduling mechanism.


Task Execution

This is going to need a decent amount of work; we're not even clear on the best way to start moving forward with execution yet, much less how long it will take. The first step is an investigation into possible methods. The ones we're currently considering are:

Static Executable Names

This is a rather simplistic way of executing tasks and implies that everything would work from the command line. While it is simple, it's also rather restrictive and may not work as a long-term solution.

It's worth noting that we've had issues in AutoQA with command length when executing tests. This may require some form of command storage in a central system, with only a job ID passed to the actual executable.
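
The job-ID indirection could look roughly like the following; the job store URL and JSON layout here are made-up placeholders, not a design decision.

```python
# Sketch of the "pass only a job ID" idea: the executable fetches the
# full command from a central store. URL and payload shape are made up.
import json
import subprocess
import sys
import urllib2

JOB_STORE = 'http://taskbot.example.com/jobs/%s'  # hypothetical service


def run_job(jobid):
    job = json.load(urllib2.urlopen(JOB_STORE % jobid))
    # e.g. {"command": ["rpmlint", "foo-1.0-1.fc20.noarch.rpm"]}
    return subprocess.call(job['command'])


if __name__ == '__main__':
    sys.exit(run_job(sys.argv[1]))
```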

Job Generation

Similar (in concept, at least) to what OpenStack is doing with [jjb]; a toy sketch follows the list below.

  • has the potential to be more language- and execution-environment-agnostic
  • requires a method for translating YAML into an executable job
  • would likely allow changes to utility methods without input from job maintainers
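
To make the idea concrete, here is a toy YAML-to-command translation, loosely modelled on what Jenkins Job Builder does. The job schema is invented purely for illustration.

```python
# Toy illustration of translating a declarative YAML job into shell
# commands. The schema (name/builders/shell) is invented.
import subprocess

import yaml

JOB_YAML = """
name: rpmlint-check
builders:
  - shell: "rpmlint {package}"
"""


def build_commands(definition, params):
    job = yaml.safe_load(definition)
    return [b['shell'].format(**params) for b in job['builders']]


for cmd in build_commands(JOB_YAML, {'package': 'foo-1.0-1.fc20.noarch.rpm'}):
    subprocess.call(cmd, shell=True)
```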

Base Job in Python

This is conceptually similar to the autotest harness and runner system (assuming that a Python job is kicked off from the control file), where a job is built from a base class and various methods are overridden as needed to achieve the desired result. A rough sketch follows the list below.

  • may allow for reuse of the autotest harness/runner
  • would require some knowledge of python for job maintainers
  • may allow for better integration with the execution framework, since the code is somewhat standardized without any extra generation step
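
A rough sketch of the base-class idea; the class and method names are illustrative, not an existing taskbot or autotest API.

```python
# Sketch of the base-job approach: common setup/reporting lives in the
# base class and individual jobs override only what they need.
class BaseJob(object):

    def setup(self):
        """Prepare the environment (fetch packages, create workdir, ...)."""

    def run(self):
        raise NotImplementedError('jobs must implement run()')

    def report(self, outcome):
        """Send the outcome to result storage (stubbed out here)."""
        print('result: %s' % outcome)

    def execute(self):
        self.setup()
        self.report(self.run())


class RpmlintJob(BaseJob):

    def run(self):
        # A real job would run rpmlint and parse its output.
        return 'PASSED'


RpmlintJob().execute()
```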


Refactoring AutoQA Libraries

While we are aiming to replace AutoQA, we'd be crazy to just throw out all of the code and lessons learned there. The libraries will need refactoring and reworking for use in taskbot, but we aim to reuse as much of that code as possible.

Task Execution Coordination

For now, this role is being played by buildbot. I'm not sure it's the best choice for this role, but I haven't really seen any compelling reasons either to avoid it or to use anything different. Unless something changes in the near future, I don't see us replacing it with anything different soon.

As taskbot evolves, we'll probably want to start either writing custom components for buildbot or contributing upstream. Beyond the packaging issues that exist for buildbot on EL6 right now, there are some reasonably large changes coming in the next version that may help us out - namely the shift from a generated HTML GUI to a more client/server API approach, with the default interface using Angular.

I don't see this needing an incredible amount of work right away - what's already present in buildbot should work fine for now.
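
In practice, wiring a generic check into buildbot today is not much more than a master.cfg fragment like the one below, where c is the standard BuildmasterConfig dict; the builder, slave and runner names are placeholders.

```python
# master.cfg fragment: run a task runner as a buildbot builder.
# Builder/slave names and the 'runtask' command are placeholders.
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

factory = BuildFactory()
factory.addStep(ShellCommand(
    command=['runtask', '--jobid', '12345'],  # hypothetical runner
    description='running check'))

c['builders'].append(BuilderConfig(
    name='generic-package-check',
    slavenames=['qa-slave01'],
    factory=factory))
```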


Checks

At some point we'll want to start including additional checks, but I think many of them can wait a little while until things stabilize more. The immediate issue is depcheck - it currently does not work on anything later than F18 and relies heavily on yum's internal API, which will be a problem as Fedora switches over to dnf.

Once enough support systems are implemented (most, but not all, of them are listed on this page), it will be possible to do any number of checks that we currently cannot support. Some that have come up in conversation are:

Potential Future Package Checks

  • ABI change
  • ABI breakage within a release
  • static analysis
  • installability (to check scriptlets and other quirks that can get into stable updates)

Potential Future System Checks

  • graphical installation checks with one or more of [infinity], [Xpresser] or [openqa]
  • multi-host server checks (httpd, other servers)
  • cloud image checks
  • GNOME self-checks
  • simple desktop environment (DE) functionality checks


Support Tools

At some point, we're likely to want the ability to spin up test ISOs and/or test trees. We don't have a preference on whether this happens within our own infrastructure and tools or whether we request them from releng.


Beaker Integration

We have the opportunity to pick up some checks currently being run inside Red Hat, but those tests are written for Beaker. If we want to go in this direction, we'd need some integration with Beaker from taskbot, and likely some sort of automation for updating the tasks in Beaker when those jobs change.
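
The simplest bridge would probably be driving the standard beaker-client from taskbot, roughly as sketched here; the XML path and the push-on-change trigger are assumptions, only the bkr job-submit command itself is the stock client interface.

```python
# Hedged sketch of a taskbot->Beaker bridge: keep the Beaker job XML in
# taskbot's repos and (re)submit it with the stock bkr client whenever
# the task changes. The path and trigger are assumptions.
import subprocess


def submit_beaker_job(job_xml_path):
    # 'bkr job-submit' is beaker-client's command for submitting job XML.
    subprocess.check_call(['bkr', 'job-submit', job_xml_path])


submit_beaker_job('jobs/depcheck-beaker.xml')
```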

One remaining question is whether or not we want to commit the resources to running a Beaker infrastructure. The Beaker devs are interested in seeing this happen and may be willing to help with maintenance, but we still need to figure out whether this is a direction we want to go.


Fedora Integration

The biggest item here is what to do about integration with bodhi. While we could replicate what AutoQA is currently doing with respect to comments, I don't see that as a particularly good way to report results.

This will depend on the timetables for bodhi 2.0 and on requirements for how/if we want to start gating builds/updates based on check results.
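
For context, replicating the AutoQA approach would amount to something like the following, assuming the BodhiClient interface from python-fedora that AutoQA uses; the update title and comment text are placeholders.

```python
# Sketch of AutoQA-style result reporting: post a comment on the update.
# Update title and comment text are placeholders.
from fedora.client import BodhiClient

bodhi = BodhiClient(username='taskbot')
bodhi.comment('foo-1.0-1.fc20',
              'depcheck PASSED for foo-1.0-1.fc20',
              karma=0)
```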


Result Storage

Work on a new, [simplified version of resultsdb] has already been started and [deployed to the fedora cloud]. The new resultsdb is written in Flask and uses a RESTful JSON interface instead of the XML-RPC interface used by the current production resultsdb.
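
Reporting a result over that kind of interface would look roughly like this; the endpoint path and field names here are assumptions for illustration, not the new resultsdb's documented API.

```python
# Sketch of posting a result over a RESTful JSON interface. The URL and
# field names are illustrative assumptions.
import json
import urllib2

result = {
    'testcase': 'depcheck',
    'item': 'foo-1.0-1.fc20',
    'outcome': 'PASSED',
}
req = urllib2.Request('http://resultsdb.example.com/api/results',  # hypothetical
                      json.dumps(result),
                      {'Content-Type': 'application/json'})
urllib2.urlopen(req)
```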

This will work for at least the short term, but we have some longer-term ideas about integration with a TCMS and alternate methods of result storage. For now, we want to keep things **simple**. The amount of work required here will likely depend on how we're integrating with bodhi and the rest of Fedora.


Package maintainer interaction/notification

AutoQA doesn't have a lot of interaction with package maintainers right now beyond bodhi comments. If we're going to start doing things like gating updates based on check results, we're going to need to revisit some of this. Some initial thoughts are:

  • PM on IRC
  • ping on IRC in some channel
  • email
  • some kind of fedmsg client? (see the sketch after this list)
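
The fedmsg option would mean emitting results on the bus so that any maintainer-facing tooling could consume them. A minimal sketch, where the topic and message shape are illustrative assumptions:

```python
# Sketch of publishing a check result on fedmsg for maintainer tooling
# to consume. Topic and message fields are illustrative.
import fedmsg

fedmsg.publish(
    topic='check.complete',  # hypothetical taskbot topic
    modname='taskbot',
    msg={
        'item': 'foo-1.0-1.fc20',
        'check': 'depcheck',
        'outcome': 'FAILED',
        'maintainer': 'someuser',
    })
```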

I'd like to do some sort of poll/survey among package maintainers to see how they want to receive automated check results - this will more than likely require some sort of per-user configuration at some point.


Automation Infrastructure

This has been an ongoing topic at a couple of different levels. There are various reasons why things are the way they are, but there are costs associated with any of these choices, and we need to have some discussion around what we need, what we want, and what we're willing to pay for (mostly in terms of people to maintain stuff).

Infrastructure Maintenance

We have mostly been deploying and maintaining our own infrastructure, separate from the rest of the Fedora infra. Is this something we want to continue, or do we want to start integrating more with their tools and processes?

Possible parts of this include:

  • lockbox clone
  • CI system
  • openstack
  • database(s)
  • backups
  • monitoring

For the record, I really don't want to get into the business of infrastructure maintenance. Unless there is a good reason not to, we should default to existing services/processes already maintained by Fedora infra.

Devel Tools

This is something that may need to strike a balance between what would be ideal and what's practical given our current constraints and resources:

  • gerrit/reviewboard
  • phabricator
  • bug tracking

Another question here is whether we want to host code on github/bitbucket or on fedorahosted. There are already a lot of users on systems like github, but the question of whether we want to rely on partially closed tools remains.


Self Check

One thing that's always been missing in AutoQA is any sort of self-checks, unit tests, or ability to be somewhat certain that things won't explode when deployed to production. Some of this is unavoidable due to the nature of what we're doing, but that's no excuse for having almost no self-checks.

I want to have unit tests and some functional/integration tests, enforced from the very beginning. Code reviews will be required, as will CI and sane automation for pushing out updates.
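
As a flavor of what "enforced from the beginning" means in practice, even the toy YAML translation sketched earlier would ship with a unit test like the one below, runnable in CI; the function is repeated here so the test is self-contained.

```python
# Tiny example of an enforced self-check: a pytest-style unit test for
# the toy YAML-to-command translation sketched earlier.
import yaml


def build_commands(definition, params):
    job = yaml.safe_load(definition)
    return [b['shell'].format(**params) for b in job['builders']]


def test_build_commands_substitutes_package():
    yaml_def = 'builders:\n  - shell: "rpmlint {package}"\n'
    assert build_commands(yaml_def, {'package': 'foo.rpm'}) == \
        ['rpmlint foo.rpm']
```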