Latest revision as of 12:18, 4 April 2021

Warning: This page is obsolete and should be removed by 2021-06. There is currently no Fedora Infrastructure Cloud and no plans to bring it back; Fedora Infrastructure currently relies on donated cloud capacity from Amazon instead.

Background

Fedora Infrastructure is running a private cloud infrastructure for various infrastructure and community projects. This infrastructure is currently running RHOSP 5 (Red Hat OpenStack Platform 5).

Two Cloudlets

We have things set up in two cloudlets, which lets us serve existing cloud needs while still having the ability to test new software and technologies. From time to time we may migrate workloads from one cloudlet to the other when a newer version or kind of setup proves to meet our production needs more closely.

Current primary setup

  • fed-cloud09 is the main controller node.
  • fed-cloud03 through fed-cloud08, fed-cloud10 through fed-cloud15, and fed-cloud-ppc02 are compute nodes.

Setup / deployment

This hardware is set up on the 'edge' of the network and is not connected to the rest of Fedora Infrastructure except via external networks. This allows us to use external IPs and ensures the cloud instances don't have access to anything in the regular Fedora Infrastructure. Storage is on the local servers.

We have 17 physical servers in total. Currently 14 of them are in production, and 3 are being used for testing by the infrastructure team.

Storage is provided by two Dell EqualLogic boxes (one with ~20TB of space, the other with ~10TB).

The current setup has only one controller node, so outages can and will occur when upgrades are being done and the like.

Nodes in this cloud use the 'fedorainfracloud.org' domain in most cases (with a few exceptions).

The current cloud provides x86_64, ppc64, and ppc64le instances.

Upcoming plans

As many of the existing hardware boxes are reaching end of life/support, we ordered new hardware in Q2 of 2017. We plan to set up RHOSP 10 (or later) on this new hardware, recreate instances in the new cloud, and retire the old one. This is planned for mid/late 2017. The new install should allow us to set up two head nodes with HA, so we can do upgrades and the like without much in the way of outages. Additionally, we hope to add armv7 and aarch64 support via OpenStack Ironic.

Policies

Users or groups that need a rare one-off image can simply request one via an infrastructure ticket.

Users or groups that often need instances may be granted accounts to spin up and down their own images.

Instances may be rebooted at any time. Save your data off often.

Persistent storage may be available as separate volumes. Data retention policies and quotas may be imposed on this data.

Instances are meant to assist in furthering work related to the Fedora Project. Please don't use them for unrelated activities.

We reserve the right to shut down, delete, or revoke access to any instance at any time for any reason.

Images

We will provide Fedora, CentOS, and RHEL images.

If you need to add images, please name them the same as their filename, e.g. "Fedora 22 Beta TC 2" is fine; please don't use 'test image', as we would have no idea what it might be.
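As a sketch of that convention, a hypothetical helper could derive the image name from the uploaded file's name. The extension list and separator handling here are illustrative assumptions, not an official Fedora tool:

```python
import os

# Image file extensions we might expect to strip; this list is an assumption.
IMAGE_EXTENSIONS = (".qcow2", ".raw", ".img", ".iso")

def image_name_from_filename(path):
    """Derive a cloud image name matching the file it was uploaded from,
    so vague names like 'test image' never appear."""
    base = os.path.basename(path)
    root, ext = os.path.splitext(base)
    if ext in IMAGE_EXTENSIONS:
        base = root
    # Hyphens in Fedora image file names stand in for spaces.
    return base.replace("-", " ")

print(image_name_from_filename("Fedora-22-Beta-TC-2.qcow2"))
# -> Fedora 22 Beta TC 2
```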

Major users

  • The Copr buildsystem is housed entirely in the Fedora Infrastructure private cloud.
  • Jenkins: Fedora Infrastructure provides a Jenkins instance to run tests for some open source projects.
  • Many Infrastructure development instances are housed in the Fedora Infrastructure private cloud.
  • Fedora Magazine and community blogs are hosted here.
  • The Twisted project runs some buildbot tests.

Hardware access

SSH access to the bare nodes will be limited to sysadmin-cloud and possibly fi-apprentice (with no sudo).
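A rough sketch of what such access could look like in a member's ~/.ssh/config; the host pattern, user, and key path are illustrative assumptions, not the actual Fedora Infrastructure values:

```
# Illustrative only: actual hostnames, users, and proxy settings may differ.
Host fed-cloud*
    User your-fas-username
    IdentityFile ~/.ssh/id_rsa
```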

Maintenance windows

We reserve the right to update and reboot the cloud as needed. We will schedule these outages as we do any other outage, and after the outage is over we will spin back up any persistent cloud instances we have in our ansible inventory. It's up to the owners of any other instances to spin up new versions of them after the outage and make sure all updates are applied.
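The "spin back up from the ansible inventory" step can be pictured as a small play like the following. The group name, cloud name, and module parameters are assumptions for illustration, not Fedora Infrastructure's actual playbooks:

```yaml
# Hypothetical sketch: group/cloud names and parameters are assumed.
- name: Bring persistent cloud instances back up after an outage
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Ensure each persistent instance exists and is running
      os_server:
        name: "{{ item }}"
        state: present
        cloud: fedorainfracloud
      loop: "{{ groups['persistent_cloud'] }}"
```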

Contact / more info

Please contact the #fedora-admin channel or the Fedora infrastructure mailing list with any issues or questions about our private cloud.