OpenShift in Fedora Infrastructure
We want to stand up at least a proof-of-concept OpenShift instance in Fedora infrastructure. There are a number of reasons for this, including (but not limited to):
- A chance to stay on the leading edge of tech
- Ability to deploy things that are expecting to be deployed in OpenShift
- Allow us to change/improve our deployment workflow -- shorten fix -> delivery time
- Save resources by moving applications into pods instead of VMs.
- Help dogfood software our primary sponsor makes and give them feedback.
- Reduce required sysadmin/ops cycles for application deployment/updating.
1. OpenShift Origin or OpenShift Container Platform?
- OpenShift Container Platform (OCP). Given the scale and complexity of the system and the constant churn of the upstream project, OCP gives us bugfixes and security updates without the rapid upstream lifecycle.
2. Install to vms or bare metal?
- Virtual machines - having read Red Hat reference architectures and spoken with the OpenShift Online Operations Team (openshift.com), virtual machines are perfectly fine.
3. Atomic host or Normal?
- RHEL Atomic Host
4. RHEL or Fedora?
- RHEL (mostly as a side effect of the choice for OCP vs Origin, OCP is only officially supported on RHEL)
5. How many: controllers, nodes, routers?
- 3 Masters, 3 Nodes, 3 Infra Nodes (Routers and Registries on Infra Nodes)
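The 3/3/3 layout above could be expressed in an openshift-ansible inventory along these lines. This is only a sketch: the hostnames are placeholders, not real Fedora Infrastructure hosts, and a production inventory would carry many more variables.

```ini
# Hypothetical inventory sketch for 3 masters, 3 infra nodes, 3 app nodes.
[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_user=root
openshift_deployment_type=openshift-enterprise

[masters]
os-master0[1:3].example.com

[etcd]
os-master0[1:3].example.com

[nodes]
os-master0[1:3].example.com
os-infra0[1:3].example.com openshift_node_labels="{'region': 'infra'}"
os-node0[1:3].example.com openshift_node_labels="{'region': 'primary'}"
```

Labeling the infra nodes with `region: infra` is what lets the routers and registries be scheduled onto them.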
6. Storage? NFS? gluster?
- NFS - We've discussed this with other groups running OpenShift on "traditional VMs" rather than IaaS clouds (internal Red Hatters as well as the CentOS Infra team), and NFS seems to be the de facto recommendation at this time. As a point of note, Gluster is the direction people seem to be moving as that solution becomes more mature.
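Consuming NFS from the cluster is just a matter of defining PersistentVolumes that apps claim. A minimal sketch, with a placeholder server and export path:

```yaml
# Sketch of an NFS-backed PersistentVolume; server/path/size are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: nfs01.example.com
    path: /exports/app-data
```

An application then requests storage with a PersistentVolumeClaim and never needs to know it is backed by NFS.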
7. Databases: in or outside?
- Depends on load and performance needs. Default to databases inside the OpenShift cluster; move them external if/when necessary.
- Something to consider: https://www.crunchydata.com/products/crunchy-postgresql-container-suite
8. Set things up with Ansible? What level? Just OpenShift itself? Any/all apps? Changing things like replica counts?
- Ansible literally everything; the OpenShift Online Operations Team does this and recommended it.
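"Ansible literally everything" could look like the following hypothetical playbook fragment, where an app's OpenShift objects are templated and applied from Ansible rather than edited by hand. The host group, paths, and app name are all placeholders.

```yaml
# Hypothetical playbook fragment: app objects live in Ansible templates,
# and deployment is just rendering them and applying with `oc`.
- hosts: os_masters
  tasks:
    - name: Render OpenShift objects for the app
      template:
        src: templates/myapp-objects.yml.j2
        dest: /etc/openshift-apps/myapp.yml

    - name: Apply the objects to the cluster
      command: oc apply -f /etc/openshift-apps/myapp.yml
```

This keeps the cluster state reproducible from the Ansible repo, the same property we rely on for the rest of the infrastructure.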
9. Keep the proxy network (haproxy -> OpenShift apps)?
- Use the internal OpenShift router for ingress traffic to apps, in HA configuration.
App Policy Questions
1. All rpms?
- Default to all RPMs, simple pre-built image deploy (should be built in FLIBS)
- Allow for non-RPM built applications, this will require a license audit by $INFRA_TEAM (and $LEGAL?)
2. Trigger builds from git? Just some branches?
- Per-application decision to be made; this can be configured from the OpenShift client (`oc`).
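Restricting builds to particular branches is done on the BuildConfig itself. A sketch, with a placeholder repository, branch, builder image, and webhook secret:

```yaml
# Sketch of a BuildConfig that only builds one branch; all names are placeholders.
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp
spec:
  source:
    type: Git
    git:
      uri: https://pagure.io/example/myapp.git
      ref: production            # only commits on this branch are built
  strategy:
    type: Source
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: python:latest      # assumed builder image
  output:
    to:
      kind: ImageStreamTag
      name: myapp:latest
  triggers:
    - type: Generic
      generic:
        secret: <webhook-secret>
```

The generic webhook trigger lets the git forge kick off builds on push; pushes to other branches are ignored because of the `ref`.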
3. Does prod vs stg vs dev distinction exist anymore?
- Yes, it does. (At least for now)
4. Do freezes exist anymore?
- Yes, until we have full gating tests on infra apps. This can change in the future once confidence in tests is high.
5. What's the workflow?
- Depends on app
- Example: upstream dev -> release -> rpm package -> Container layered image build -> deploy
- Example: git commit upstream -> build -> test -> deploy (slowly) ?
- Example: git commits upstream on release branch -> build -> test -> deploy (slowly)?
6. CI? when and where?
- App-specific. This should be done with OpenShift's built-in pipelines: these are infra-specific apps, not things shipping into the distro, so they are not inherently appropriate for Fedora CI.
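OpenShift's built-in pipelines are expressed as a BuildConfig with the JenkinsPipeline strategy. A minimal sketch, with placeholder stage contents:

```yaml
# Sketch of an in-cluster pipeline; the Jenkinsfile stages are placeholders.
apiVersion: v1
kind: BuildConfig
metadata:
  name: myapp-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      jenkinsfile: |-
        node {
          stage('build') {
            openshiftBuild(buildConfig: 'myapp')
          }
          stage('deploy') {
            openshiftDeploy(deploymentConfig: 'myapp')
          }
        }
```

Creating this object makes OpenShift spin up a Jenkins instance in the project and run the pipeline there, so the CI lives next to the app.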
7. "alive tests" ?
- Required to exist in prod, strongly recommended in stage, and is the responsibility of the application developer or owner.
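In OpenShift terms, "alive tests" map naturally onto liveness and readiness probes in the container spec. A sketch, with app-specific placeholder paths and port:

```yaml
# Sketch of probes in a container spec; path and port are placeholders
# that each application owner would set appropriately.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 30
  timeoutSeconds: 3
readinessProbe:
  httpGet:
    path: /healthz/ready
    port: 8080
  initialDelaySeconds: 5
```

A failing liveness probe restarts the pod; a failing readiness probe just takes it out of the router's rotation, which covers most of what we'd want an alive test to do.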
8. What monitoring do we do on instances/apps?
9. What initial apps would be good to move to it?
- I'd be interested in looking at blockerbugs or taskotron as candidates to move to OpenShift (tflink)