Monday, January 6, 2020

Kubernetes with vSphere CSI and CPI Part 1

About a year ago I wrote an article outlining the steps to follow to get vSphere and Kubernetes working together. At that time I mentioned that cloud providers within Kubernetes had been deprecated but the replacements weren't ready yet. Well, that's changed, so I thought I'd write an updated article outlining the new steps.
Unlike the last time I did this, there's decent documentation out there, and I'd encourage you to have a read through it, but a couple of things bothered me.
  • The document uses outdated APIs. Kubernetes still has a long way to go (in my opinion) and the pace of change is remarkable, so I guess this is to be expected
  • It isn't explained why some of the operations need to be done, so I'm going to try to explain the why with the minimum required steps

Documentation Links


Kubernetes Installation

I'm actually going to assume you have at least two Linux machines available to install Kubernetes on, one master and one worker. If you follow the guide above you should be in a pretty good position to deploy Kubernetes, so to start, we'll need a configuration file. This is required because you can't specify a cloud provider from the command line, so everything has to go into a config file instead. What I'd really like to do is just add a "--cloud-provider" flag and be done, but that isn't an option, at least not for now.

The vSphere guide includes a lot of things I don't like: it pins a specific etcd version, a specific CoreDNS version, and a specific Kubernetes version, none of which I wanted. Here is a minimal configuration. It uses the current apiVersion, v1beta2, although you can check whether a later one is available in the kubeadm API package documentation.
ubuntu@k8s-master:~$ cat kubeadminit.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  podSubnet: "192.168.0.0/16"
We've actually got two configuration documents in this file: an InitConfiguration, which simply tells the kubelet to use an external cloud provider, and a ClusterConfiguration, which sets the 192.168.0.0/16 pod subnet because I want to use Calico. The official guide also specifies a bootstrap token, which you can set if you like (we'll need it later), but I let kubeadm generate one that we can use when joining nodes. It doesn't matter where this file goes; your home folder is just fine.
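If you're curious what kubeadm would fill in for everything this minimal file omits, you can print the full set of defaults and compare. This is standard kubeadm behaviour, nothing vSphere-specific:

```
ubuntu@k8s-master:~$ kubeadm config print init-defaults
ubuntu@k8s-master:~$ kubeadm config print join-defaults
```

Anything you don't override in your config file should come from those defaults, which is why we can get away with such a short file.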

We then use this config file to initialize the cluster
ubuntu@k8s-master:~$ sudo kubeadm init --config kubeadminit.yaml
You do need to run this as root (sudo in this case), and you need to pay attention to a couple of things in the output:
  • Set up kubectl, which will look like this. Do this now; you'll need kubectl for all the remaining commands:
  • mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
  • The join command, which has our token:
  • kubeadm join 10.2.44.53:6443 --token fa5p9m.j4qygsv5t601ug62 \
        --discovery-token-ca-cert-hash sha256:c1653ee75b86dcff36cd006730d5989048ab54e29c30290e8826aeaa752b3428 
    
Note the token that was generated for you; if you specified one, it will be listed here instead.
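One thing worth knowing: bootstrap tokens expire (24 hours by default), so if you come back to join a worker later and the token no longer works, you don't need to re-initialize anything. You can list the existing tokens or mint a fresh one on the master:

```
ubuntu@k8s-master:~$ kubeadm token list
ubuntu@k8s-master:~$ sudo kubeadm token create --print-join-command
```

The second command prints a complete join command, including a new token and the CA cert hash, which you can use in the join configuration we build below.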

Normally, we'd just run the cluster join command on all the workers, but because we need to tell them to use an external cloud provider we have a chicken-and-egg problem, as outlined in the Kubernetes Cloud Controller Manager link above. To get around this, we need to export discovery information from the master, which includes address and certificate information, with this command:
ubuntu@k8s-master:~$ kubectl -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}' > discovery.yaml
This will produce a file that looks something like this:
ubuntu@k8s-master:~$ cat discovery.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <long_cert_will_be_here>
    server: https://10.2.44.53:6443
  name: ""
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
Now you'll need to scp that file to all worker nodes. Again, it doesn't matter where it goes. It can also technically be served from a web server over HTTPS if you'd rather.
ubuntu@k8s-master:~$ scp discovery.yaml ubuntu@k8s-worker1:/home/ubuntu/

Joining Worker Nodes

Like the master, we need to specify an external cloud provider, and because there isn't a command line option, we need a new configuration file. There are three important parts to this file:
  • A path to our discovery file
  • The TLS bootstrap token from when we initialized the cluster
  • Tell the worker to use an external cloud provider (the point of all of this)
To do that we'll have a file like this:
ubuntu@k8s-master:~$ cat kubeadminitworker.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  file:
    kubeConfigPath: /home/ubuntu/discovery.yaml
  tlsBootstrapToken: fa5p9m.j4qygsv5t601ug62
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
And then it's a simple command to join:
ubuntu@k8s-worker1:~$ sudo kubeadm join --config /home/ubuntu/kubeadminitworker.yaml
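As an aside, if you chose to serve discovery.yaml over HTTPS instead of copying it around with scp, my reading of the v1beta2 API is that the same kubeConfigPath field accepts a URL in place of a local path. A sketch, with a hypothetical hostname:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  file:
    # hypothetical internal web server hosting the exported discovery file
    kubeConfigPath: https://files.example.internal/discovery.yaml
  tlsBootstrapToken: fa5p9m.j4qygsv5t601ug62
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
```

The certificate data embedded in discovery.yaml is what lets the worker trust the API server, so the file itself still needs to be delivered over a trusted channel.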

Node Verification

Back on the master, make sure the new nodes show up and that each worker has the uninitialized taint applied:
ubuntu@k8s-master:~$ kubectl describe nodes | egrep "Taints:|Name:"
Name:               k8s-master
Taints:             node-role.kubernetes.io/master:NoSchedule
Name:               k8s-worker1
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
Name:               k8s-worker2
Taints:             node.cloudprovider.kubernetes.io/uninitialized=true:NoSchedule
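That uninitialized taint is the external cloud provider machinery working as intended: it blocks ordinary pods from scheduling until the cloud controller manager has initialized the node, at which point it removes the taint. You can also check overall node status; I'd expect the nodes to report NotReady at this stage, since we haven't installed Calico yet either:

```
ubuntu@k8s-master:~$ kubectl get nodes -o wide
```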
The cluster isn't quite in a running state yet, but we'll finish things off in part 2 of this post.
