From Fedora Project Wiki
Revision as of 00:16, 11 April 2018

Description

Install Kubernetes on Fedora Atomic Host using kubeadm.

Setup

  • Install one or more Fedora Atomic Hosts.

How to test

NOTE: the kubelet is currently broken on F28 due to https://bugzilla.redhat.com/show_bug.cgi?id=1558425 -- I'm not aware of a workaround at the moment.


  • Use package layering to install kubeadm on each host:
 rpm-ostree install kubernetes-kubeadm ethtool ebtables -r
  • Unfortunately, as of 1.7.3, SELinux again needs to be in permissive mode for kubeadm to work:
# setenforce 0
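Note that <code>setenforce 0</code> only lasts until the next reboot; to keep SELinux permissive across reboots you would also change the <code>SELINUX=</code> line in <code>/etc/selinux/config</code>. A minimal sketch of that edit, operating on a temporary copy of the file so it is safe to run anywhere (on a real host you would point <code>sed</code> at <code>/etc/selinux/config</code> itself):

```shell
# Sketch only: flip SELINUX=enforcing to permissive in a throwaway copy
# of /etc/selinux/config. The file contents here are illustrative.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' "$cfg"
mode=$(grep '^SELINUX=' "$cfg")
echo "$mode"    # SELINUX=permissive
rm -f "$cfg"
```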

  • kubernetes wants to create a flex volume driver dir at /usr/libexec/kubernetes, but this is a read-only location on atomic hosts. Modify /etc/systemd/system/kubelet.service.d/kubeadm.conf to match the following line, and then run systemctl daemon-reload to pick up the change:
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd --volume-plugin-dir=/etc/kubernetes/volumeplugins"
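The edit above can be scripted. This sketch rewrites the <code>KUBELET_EXTRA_ARGS</code> line in a temporary copy of the drop-in (the starting contents shown are an assumption about what kubeadm.conf contains); on a real host you would run the <code>sed</code> against <code>/etc/systemd/system/kubelet.service.d/kubeadm.conf</code> and then <code>systemctl daemon-reload</code>:

```shell
# Sketch only: replace the whole KUBELET_EXTRA_ARGS Environment= line so
# the kubelet uses a writable volume-plugin dir on Atomic Host.
conf=$(mktemp)
cat > "$conf" <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"
EOF
sed -i 's|^Environment="KUBELET_EXTRA_ARGS=.*|Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd --volume-plugin-dir=/etc/kubernetes/volumeplugins"|' "$conf"
extra_args=$(grep '^Environment=' "$conf")
echo "$extra_args"
rm -f "$conf"
```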
 
  • Start the kubelet and initialize the kubernetes cluster. We specify a pod-network-cidr because flannel, which we'll use in this test, requires it, and we skip preflight checks because kubeadm looks in the wrong place (https://github.com/kubernetes/kubernetes/pull/49410) for kernel config.
# systemctl enable --now kubelet

# kubeadm init --pod-network-cidr=10.244.0.0/16 --skip-preflight-checks

  • Follow the directions in the resulting output to configure kubectl:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
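A quick sanity check on the copied config is to confirm it points at your API server. The kubeconfig fragment below is illustrative (the real <code>admin.conf</code> contains more fields, and the server name will match your master's hostname); on a real host you would grep <code>$HOME/.kube/config</code> directly:

```shell
# Sketch only: extract the API server URL from a sample kubeconfig.
kubeconfig=$(mktemp)
cat > "$kubeconfig" <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://atomic1:6443
EOF
server=$(grep -o 'https://[^ ]*' "$kubeconfig")
echo "$server"    # https://atomic1:6443
rm -f "$kubeconfig"
```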

  • Deploy the flannel network plugin:
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  • By default, your cluster will not schedule pods on the master for security reasons. If you want to be able to schedule pods on the master, e.g. for a single-machine Kubernetes cluster, run:
# kubectl taint nodes --all node-role.kubernetes.io/master-
  • If desired, join additional nodes to the master using the kubeadm join command provided in the kubeadm init output. For instance:
# kubeadm join --token 2a247c.f357bc09c56b12c8 atomic1:6443
  • Check on the install:
# kubectl get nodes
NAME      STATUS    AGE       VERSION
atomic1   Ready     6m        v1.7.3

# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   etcd-atomic1                      1/1       Running   0          5m
kube-system   kube-apiserver-atomic1            1/1       Running   0          6m
kube-system   kube-controller-manager-atomic1   1/1       Running   0          5m
kube-system   kube-dns-2425271678-lpqlt         3/3       Running   0          6m
kube-system   kube-flannel-ds-fcmbb             1/1       Running   0          4m
kube-system   kube-proxy-mrdf4                  1/1       Running   0          6m
kube-system   kube-scheduler-atomic1            1/1       Running   0          6m


  • Run some test apps
# kubectl run nginx --image=nginx --port=80 --replicas=3
deployment "nginx" created

# kubectl get pods -o wide
NAME                    READY     STATUS    RESTARTS   AGE       IP            NODE
nginx-158599303-dbkjw   1/1       Running   0          19s       10.244.0.3    atomic1
nginx-158599303-g4q7c   1/1       Running   0          19s       10.244.0.4    atomic1
nginx-158599303-n0mwm   1/1       Running   0          19s       10.244.0.5    atomic1

# kubectl expose deployment nginx --type NodePort
service "nginx" exposed

# kubectl get svc
NAME         CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   10.254.0.1      <none>        443/TCP        40m
nginx        10.254.52.120   <nodes>       80:32681/TCP   14s
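The NodePort (32681 above) is assigned randomly, so a script can't hard-code it. One way to pull it out of the <code>kubectl get svc</code> output is to split the PORT(S) column; the sample line below is taken from the transcript above, and on a live cluster you would pipe <code>kubectl get svc nginx --no-headers</code> instead:

```shell
# Sketch only: parse the node port out of a `kubectl get svc` line.
# PORT(S) is field 4 ("80:32681/TCP"); split it on ":" and "/".
svc_line='nginx        10.254.52.120   <nodes>       80:32681/TCP   14s'
node_port=$(echo "$svc_line" | awk '{split($4, p, /[:\/]/); print p[2]}')
echo "$node_port"    # 32681
```

The extracted port can then be used to build the curl URL, e.g. <code>curl http://atomic1:$node_port</code>.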

# curl http://atomic1:32681
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Expected Results

  1. kubeadm runs without error.
  2. You're able to run Kubernetes apps using the cluster.