From Fedora Project Wiki


Revision as of 17:02, 5 August 2015 by Ralph

Raw notes from the ComposeDB brainstorming session (still to be formatted), August 5th, 16:00 UTC: https://lists.fedoraproject.org/pipermail/rel-eng/2015-August/020562.html

We should post these notes on the wiki afterwards for posterity.

The idea is to have something that knows what goes into every compose and what comes out of it: the atomic repos, the live CDs, etc., and what's in each of them; what's in Cloud, Server, Workstation, and so on. That gives us a place where we can go and ask what changed between this compose and that compose, so we can easily visualize what's different between, say, primary this and s390 that.
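To make the "what changed between composes" question concrete, here is a minimal sketch in Python. Everything in it is hypothetical (no such ComposeDB code exists yet): it assumes each compose can be represented as a map from an artifact name to the set of package NVRs inside it.

```python
def diff_composes(old, new):
    """Compare two compose manifests. Each manifest is a dict mapping
    an artifact name (e.g. a live CD or atomic repo) to a set of
    package NVR strings. Returns added/removed packages per artifact."""
    changes = {}
    for artifact in set(old) | set(new):
        before = old.get(artifact, set())
        after = new.get(artifact, set())
        added = sorted(after - before)
        removed = sorted(before - after)
        if added or removed:
            changes[artifact] = {"added": added, "removed": removed}
    return changes


# Toy usage with made-up NVRs:
old = {"Workstation-live": {"bash-4.3-1", "xfce4-panel-4.12-1"}}
new = {"Workstation-live": {"bash-4.3-2", "xfce4-panel-4.12-1"}}
# diff_composes(old, new) reports bash-4.3-2 added and bash-4.3-1 removed.
```

The same structure would answer the primary-vs-s390 question by diffing two manifests from different arches instead of two points in time.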

Once upon a time we were thinking about keeping track of what's in development, what's EOL, and when the last working nightly compose was.

It would also give you a way to visualize when the last updates push was done.

We should really look at PDC, since sharing tools is great.

It would be cool if, when we're doing the rawhide compose, we could see that nothing has changed in Xfce and skip rebuilding it, while still rebuilding the other things where something actually changed.

With that we could do composes as they're needed, instead of once a night or only when release time happens.
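A sketch of that skip-unchanged logic, assuming (hypothetically) that we can map each artifact to the set of packages that feed into it:

```python
def artifacts_to_rebuild(inputs_by_artifact, changed_packages):
    """inputs_by_artifact maps an artifact (live CD, atomic repo, ...)
    to the set of packages that go into it. Only artifacts whose
    inputs intersect the changed set need a rebuild; everything else
    (e.g. an untouched Xfce spin) is skipped."""
    return {artifact
            for artifact, inputs in inputs_by_artifact.items()
            if inputs & set(changed_packages)}


# Toy usage: only the cloud image pulls in cloud-init, so only it rebuilds.
inputs = {
    "Xfce-live": {"xfce4-panel", "bash"},
    "Cloud-image": {"cloud-init", "bash"},
}
# artifacts_to_rebuild(inputs, {"cloud-init"}) -> {"Cloud-image"}
```

Run continuously, this is what would let composes happen "as needed" rather than nightly.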

For CI on RPM dependencies, what about Koschei?

 Well, it's CI at the RPM level, but not at the compose level.

It would be cool to produce reporting on the different editions over time:

 - show how the RPM size of Workstation is growing (so we can fix it)
 - show how the RPM size of the cloud image is shrinking (so we can cheer it on)
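The size reporting is just a delta over the compose history. A minimal sketch, assuming each compose's total RPM size has been recorded (the function name and tuple shape are made up for illustration):

```python
def size_trend(history):
    """history is a chronological list of (compose_id, total_rpm_bytes).
    Returns (compose_id, delta_bytes) pairs, so growth (Workstation)
    or shrinkage (the cloud image) shows up at a glance."""
    return [(cid, size - prev_size)
            for (_, prev_size), (cid, size) in zip(history, history[1:])]


# Toy usage with made-up compose IDs and byte counts:
history = [("20150803", 100), ("20150804", 110), ("20150805", 105)]
# size_trend(history) -> [("20150804", 10), ("20150805", -5)]
```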

Supporting the rings stuff properly:

 we need a way to say what's in the different rings (so they can have different policies and processes)
 (anyway, there are lots of things that could come from having this information that we can't do today)

Let's open-source PDC; they want to do it and we want to do it.

Beyond that, let's think about a system that runs continuously to rebuild things as needed. Internally, it seems unlikely that anything like this exists already; they have a mostly manual process now involving sign-off, etc.

Maybe: build things to be as complete as they can be, but require human sign-off to make them public.

Leverage Taskotron to create side tags to rebuild stuff (e.g. on soname bumps), and also to auto-gate things and keep them from reaching the next step in the process. Say stuff in ring 0 and ring 1 requires tests X, Y, and Z, but ring 2 requires less. We could make sure that "rawhide is never broken".
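The per-ring gating policy could be as simple as a table of required tests. A sketch, using the placeholder test names X, Y, Z straight from the discussion (these are not real Taskotron checks, and the ring assignments are illustrative):

```python
# Rings 0 and 1 require tests X, Y, and Z; ring 2 requires less.
REQUIRED_TESTS = {
    0: {"X", "Y", "Z"},
    1: {"X", "Y", "Z"},
    2: {"X"},
}


def gate(ring, passed_tests):
    """A build may advance to the next step only if every test its
    ring requires is in the set of tests that passed."""
    return REQUIRED_TESTS[ring] <= set(passed_tests)


# gate(2, ["X"]) passes; gate(0, ["X"]) is blocked until Y and Z pass.
```

Keeping "rawhide is never broken" then amounts to refusing to publish anything for which gate() is false.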

Publish fedmsg messages about failures, etc.
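A sketch of what a failure message's payload might contain. The topic name and field names here are assumptions, not an existing fedmsg topic; in real code the dict would be handed to fedmsg's publish call rather than just returned:

```python
def gate_failure_payload(compose_id, artifact, failed_tests):
    """Build the message body for a gating failure. A consumer would
    publish it with something like
        fedmsg.publish(topic="compose.gate.failed", msg=payload)
    (hypothetical topic; fields below are illustrative too)."""
    return {
        "compose": compose_id,
        "artifact": artifact,
        "failed": sorted(failed_tests),
    }
```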

Have all actual build processes running in Koji; the other options (Jenkins, Taskotron?, Tunir?) are less secure or less well supported.

Outhouse could be the place where the policy glue (gating) comes into play, by figuring out what goes into artifacts and what "ring" things are in. Then we could block things appropriately if such-and-such an input doesn't pass depcheck (for instance).

https://twitter.com/TheMaxamillion/status/608040785829871616



Requirements


- Solve all the problems

Design/implementation notes


- Written in Python