Friday, May 24, 2019

Kubernetes Logging With Fluent Bit

Centralized logging is a requirement in any Kubernetes installation. With the defaults in place, logs are kept on each node and don't persist as pods come and go. That's a big problem, because you need your logs most when things go wrong, not when they're working well, so I set out to establish a central log. All of the examples I could find referenced keeping logs in /var/log/containers/*.log, which is great, but not what happens on a systemd system. Since pretty much every Linux distribution uses systemd these days, this is my attempt to provide a logging configuration that supports my base install.

There are several ways to log, which can seem confusing; to me, most of these methods are far too complicated and error prone. Kubernetes.io has this to say:
"Because the logging agent must run on every node, it’s common to implement it as either a DaemonSet replica, a manifest pod, or a dedicated native process on the node. However the latter two approaches are deprecated and highly discouraged."
So, using a DaemonSet is the way to go. With what? Well, Fluent Bit is the easiest way I've found, once you have a working configuration file for your setup. Originally I wanted to use Graylog as my collection and presentation layer, but I found that the fluent pieces just aren't mature enough to deal with GELF correctly, so I eventually settled on an ELK stack; much better.

There are some excellent tutorials on the web covering how to install Elasticsearch, Logstash, and Kibana. If you're using the repositories it couldn't be much easier, so please follow one of those; https://computingforgeeks.com/how-to-install-elk-stack-on-centos-fedora/ is a good example.
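Before going much further, it's worth a quick check that the stack is reachable from your cluster nodes. This is only a sketch; I'm assuming Elasticsearch listens on its default port 9200 and I'm reusing the host name that appears later in my output config, so substitute your own:
$ curl -s http://elasticsearch.prod.int.com:9200/_cluster/health?pretty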

Namespace And Role Setup

I like namespaces and I like roles, so the first thing to do is set those up. Because we're going to run a DaemonSet that reads node and pod activity across the cluster, we need a ClusterRole and a corresponding ClusterRoleBinding, like this:
$ cat fluent-bit-role.yaml 
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluent-bit
  namespace: logging
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-read
rules:
- apiGroups: [""]
  resources:
  - namespaces
  - pods
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluent-bit-read
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: fluent-bit-read
subjects:
- kind: ServiceAccount
  name: fluent-bit
  namespace: logging

Fluent Bit Configuration

A ConfigMap seems to be the most popular way to manage Fluent Bit's configuration. This is by no means exhaustive but it should provide a pretty good template. A couple of general notes:
  • The Name of an input or filter determines its type. It feels like just a string, but it isn't; you're actually selecting the plugin with this field
  • Documentation is pretty good once you figure out which plugin you're dealing with, for example, the Systemd input documentation
  • There's no magical way to get the logs: the DaemonSet needs access to the node's logs, including systemd. On CentOS 7 the journal is kept under /run/log/journal/<UUID>/*.journal. We'll talk about this again when building the DaemonSet itself
I wanted to "enrich" my node logs with Kubernetes information, because without it you'll be missing key details like namespace, container name, and other app-specific labels. To do this you need a kubernetes filter (remember, the Name is a type, not just a label) whose Match entry aligns with the Tag of the input you'd like to pair it with. In this example there's only one input and one filter, so hopefully it makes sense. The Kube_URL entry is whatever URL lets you reach the Kubernetes API; it's queried to fill in the missing pieces, which are merged into each log entry. It should be visible with 'kubectl get svc', as shown below.
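For example, on a default install it looks something like this (your cluster IP and age will differ):
$ kubectl get svc kubernetes
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   30d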
$ cat fluent-bit-configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
  namespace: logging
  labels:
    k8s-app: fluent-bit
data:
  # Configuration files: server, input, filters and output
  # ======================================================
  fluent-bit.conf: |
    [SERVICE]
        Flush              1
        Log_Level          info
        Daemon             off
        HTTP_Server        On
        HTTP_Listen        0.0.0.0
        HTTP_Port          2020

    @INCLUDE input-kubernetes.conf
    @INCLUDE filter-kubernetes.conf
    @INCLUDE output-elasticsearch.conf

  input-kubernetes.conf: |
    [INPUT]
        Name                systemd
        Tag                 kube_systemd.*
        Path                /run/log/journal
        DB                  /var/log/flb_kube_systemd.db
        Systemd_Filter      _SYSTEMD_UNIT=docker.service
        Read_From_Tail      On
        Strip_Underscores   On

  filter-kubernetes.conf: |
    [FILTER]
        Name                kubernetes
        Match               kube_systemd.*
        Kube_URL            https://kubernetes.default.svc:443
        Annotations         On
        Labels              On
        Merge_Log           On
        K8S-Logging.Parser  On
        Use_Journal         On
    
  output-elasticsearch.conf: |
    [OUTPUT]
        Name                es
        Match               *
        Host                elasticsearch.prod.int.com
        Port                9200
        Index               k8s-lab
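The [SERVICE] section also turns on Fluent Bit's built-in HTTP server on port 2020, which is handy for confirming a pod is alive once the DaemonSet below is deployed. A rough check using a port-forward (the pod name here is just a placeholder):
$ kubectl -n logging port-forward fluent-bit-xxxxx 2020:2020 &
$ curl -s http://127.0.0.1:2020/api/v1/metrics/prometheus | head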

Fluent Bit DaemonSet

At last we'll configure Fluent Bit as a DaemonSet. Some general notes here:
  • I found it's possible to debug quickly by running a -debug variant of the image, but anything newer than 1.0.4-debug lacked the ability to run /bin/sh, replacing it with busybox instead, which seemed more complicated than I needed.
    • To run a debug version you'd use an image line like this: image: fluent/fluent-bit:1.0.4-debug
  • As mentioned in the ConfigMap section, you need to mount the node's systemd journal location; in my case, /run/log
$ cat fluent-bit-daemon-set.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluent-bit
  namespace: logging
  labels:
    k8s-app: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      name: fluent-bit
  template:
    metadata:
      labels:
        name: fluent-bit
        k8s-app: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "2020"
        prometheus.io/path: /api/v1/metrics/prometheus
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:1.1.1
        imagePullPolicy: Always
        ports:
          - containerPort: 2020
        volumeMounts:
        - name: systemdlog
          mountPath: /run/log
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: systemdlog
        hostPath:
          path: /run/log
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule

Deploying

Now comes the easy part, deploying all the yaml files:
$ kubectl create -f fluent-bit-role.yaml
$ kubectl create -f fluent-bit-configmap.yaml
$ kubectl create -f fluent-bit-daemon-set.yaml
And that's it. You should start to see logs arriving in your Elasticsearch instance and, consequently, in Kibana, where you can start to visualize them and build dashboards.
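If nothing shows up, here are a few quick checks I'd run, assuming the namespace, labels, and index name from the files above:
$ kubectl get pods -n logging -o wide
$ kubectl logs -n logging -l k8s-app=fluent-bit-logging --tail=20
$ curl -s 'http://elasticsearch.prod.int.com:9200/_cat/indices/k8s-lab*?v'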

Clean Up

If you need to remove everything for testing or other purposes, namespaces make this really easy. The only pieces that live outside the namespace are the ClusterRole and ClusterRoleBinding, so everything can be removed like this:
$ kubectl delete namespace logging
$ kubectl delete clusterrolebinding fluent-bit-read
$ kubectl delete clusterrole fluent-bit-read

Wednesday, May 15, 2019

Adding A New Kubectl Client

In the previous article I outlined how to create a new user and assign it one of two roles: either a global or a namespace admin. In this post I'll show you how to set up a new client from scratch to make use of that new user.

Setting Up A New Client

Assuming the client machine hasn't connected to this cluster before, you'll need to set up the configuration from scratch. You really only need kubectl, which you can download and install from https://kubernetes.io/docs/tasks/tools/install-kubectl/. It's just a single binary; in my example I've downloaded it onto a Windows host and added that folder to my $env:PATH variable.
PS C:\Users\mengland> kubectl config set-cluster kubernetes --server=https://10.9.176.25:6443 --certificate-authority=fake-ca-file
This should create a .kube folder and a config file with the basics if they didn't already exist. After that command the file should look like this:
PS C:\Users\mengland\.kube> Get-Content config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://10.9.176.25:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users: []
I've grabbed the crt and key files created in the previous post. Let's add our user credentials:
PS C:\Users\mengland.EACANADA\.kube> kubectl config set-credentials mengland@kubernetes --client-certificate="$HOME\kubernetes\mengland.crt" --client-key="$HOME\kubernetes\mengland.key" --embed-certs=true
User "mengland@kubernetes" set.
PS C:\Users\mengland.EACANADA\.kube> Get-Content config
apiVersion: v1
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://10.9.176.25:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users:
- name: mengland@kubernetes
  user:
    client-certificate-data: ...<long_encoded_cert>...
    client-key-data: ...<long_encoded_cert>...
You can see how this has updated the user section of the configuration file. Now we've got one last problem: the fake-ca-file certificate entry. You can point it at the actual cert file, which is available on the master server under /etc/kubernetes/pki/ca.crt, but I like to keep the cert within the config file. To do that we need to base64 encode the cert and place the contents in our config:
[root@k8s-master ~]# cat /etc/kubernetes/pki/ca.crt | base64 -w 0
...<long_encoded_cert>...
PS C:\Users\mengland> notepad .kube/config
Modify the file with the following:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...<long_encoded_cert>...
    server: https://10.9.176.25:6443
  name: kubernetes
contexts: []
current-context: ""
kind: Config
preferences: {}
users:
- name: mengland@kubernetes
  user:
    client-certificate-data: ...<long_encoded_cert>...
    client-key-data: ...<long_encoded_cert>...
Note the addition of -data to the key names, which allows us to use the base64 encoded values. You can find out more at https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/
Our last step is to set a context for the cluster and use it:
PS C:\Users\mengland.EACANADA> kubectl config set-context mengland@kubernetes --cluster=kubernetes --user=mengland@kubernetes
PS C:\Users\mengland.EACANADA> kubectl config use-context mengland@kubernetes
Switched to context "mengland@kubernetes"
The final result is a file that looks like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: ...<long_encoded_cert>...
    server: https://10.9.176.25:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: mengland@kubernetes
  name: mengland@kubernetes
current-context: mengland@kubernetes
kind: Config
preferences: {}
users:
- name: mengland@kubernetes
  user:
    client-certificate-data: ...<long_encoded_cert>...
    client-key-data: ...<long_encoded_cert>...
You should now be able to perform operations against the cluster. A quick kubectl get pods or kubectl get pods -n monitoring, depending on the role you were assigned, should succeed.
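If you'd rather not poke at real resources, kubectl can also ask the API server what you're allowed to do. A couple of examples run as the new user:
PS C:\Users\mengland.EACANADA> kubectl auth can-i list pods --all-namespaces
PS C:\Users\mengland.EACANADA> kubectl auth can-i create deployments -n monitoring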

Sunday, May 12, 2019

Kubernetes RBAC

Role-based access control (RBAC) is an important part of any multi-user or multi-tenant system, and Kubernetes is no different. Once you get things out of the lab and into the real world you'll need the ability to control portions of the system, so this guide creates a couple of the basics: a global administrator and a namespace-specific administrator. There are two key elements to achieving this:
  • Role - connects resources (the items) to verbs (the operations that can be performed against those items)
  • RoleBinding - connects subjects (users, groups, or service accounts) to a role

Identifying A User

Kubernetes doesn't actually handle user identity itself. The options can be long and confusing, but Kubernetes relies on external mechanisms to do this, such as certificates, tokens, OAuth2, or others (https://kubernetes.io/docs/reference/access-authn-authz/authentication/). Hopefully better, free tools will emerge to manage this ecosystem over time, but for now we'll be using certificate-based authentication. This can be a little complicated at first, but basically we're generating a certificate signing request and signing it with the cluster's root certificate. The user is identified by the common name (CN) and the group by the organization (O). The process looks like this:
[root@k8s-master ~]# openssl genrsa -out mengland.key 2048
Generating RSA private key, 2048 bit long modulus
...+++
...................................+++
e is 65537 (0x10001)
[root@k8s-master ~]# openssl req -new -key mengland.key -out mengland.csr -subj "/CN=mengland/O=cluster-admins"
[root@k8s-master ~]# openssl x509 -req -in mengland.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out mengland.crt -days 730
Signature ok
subject=/CN=mengland/O=cluster-admins
Getting CA Private Key
The above creates a certificate, valid for two years, for the user mengland who belongs to the group cluster-admins. We can then distribute these files to the user, who can add them to their Kubernetes client configuration. Two files should have been created which need to be transferred to the client: in this case, mengland.crt and mengland.key.
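How you move the two files is up to you; as a sketch, from a Windows client with OpenSSH installed, something like this would work (the destination folder matches the path used when setting up the client in the next post):
PS C:\Users\mengland> scp root@k8s-master:/root/mengland.crt $HOME\kubernetes\
PS C:\Users\mengland> scp root@k8s-master:/root/mengland.key $HOME\kubernetes\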

The last thing we'll need is the certificate for the cluster itself. There are a couple of ways to handle this. I prefer keeping the certificate within the kubectl config file, as is done by default when you install a new cluster, but you can also reference a file, which will be covered in a future blog post. The file is located at /etc/kubernetes/pki/ca.crt.

Global Admin Role

Because I want my user to be a global admin, I'll be using a ClusterRole and ClusterRoleBinding rather than a plain Role and RoleBinding. My two yaml files look like this and are applied to the cluster:
[root@k8s-master ~]# cat global-admin.yaml 
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: global-admin
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
[root@k8s-master ~]# cat global-admin-binding.yaml 
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: global-admin-binding
subjects:
- kind: User
  name: mengland
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: global-admin
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master ~]# kubectl create -f global-admin.yaml
[root@k8s-master ~]# kubectl create -f global-admin-binding.yaml
If you wanted multiple users in the global admin role, you'd add multiple - kind: User entries to the subjects list in global-admin-binding.yaml, as sketched below, and apply the changes with kubectl apply -f global-admin-binding.yaml. A lot of examples have apiGroups set to [""], which indicates the core API group, but that doesn't give the user access to the whole cluster.
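As a sketch, the subjects list simply grows; the second user here is made up:
subjects:
- kind: User
  name: mengland
  apiGroup: rbac.authorization.k8s.io
- kind: User
  name: jsmith
  apiGroup: rbac.authorization.k8s.io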

Namespace Admin Role

Creating a role and binding that give users access to everything within a namespace is very similar to the global admin above; however, we're no longer at the cluster level, so we'll use a plain Role and RoleBinding. We also need to identify the namespace, like this:
[root@k8s-master ~]# cat monitoring-admin.yaml 
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: monitoring-admin
  namespace: monitoring
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
[root@k8s-master ~]# cat monitoring-admin-binding.yaml 
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: monitoring-admin-binding
  namespace: monitoring
subjects:
- kind: User
  name: mengland
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: monitoring-admin
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master ~]# kubectl create namespace monitoring
namespace/monitoring created
[root@k8s-master ~]# kubectl create -f monitoring-admin.yaml 
role.rbac.authorization.k8s.io/monitoring-admin created
[root@k8s-master ~]# kubectl create -f monitoring-admin-binding.yaml 
rolebinding.rbac.authorization.k8s.io/monitoring-admin-binding created
If users are admins of a specific namespace only, they'll need to specify it on the command line with -n or --namespace; a quick server-side check is shown below. In the next post I'll show you how to configure the user's kubectl client to make use of this user and role.
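Before handing out the credentials, kubectl's impersonation support makes it easy to confirm the binding from the master; this should answer yes once the role and binding above are applied:
[root@k8s-master ~]# kubectl auth can-i create pods -n monitoring --as mengland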

Tuesday, May 7, 2019

On Premise Kubernetes With CentOS 7 And vSphere

There are a few guides to getting Kubernetes running within a vSphere environment; however, I had trouble putting everything together, so I thought I'd write a guide here in case others are having problems.
CentOS 7 is popular, at least within the enterprise in North America, so that was chosen as the base OS. Calico is used as the CNI (Container Network Interface) provider, although you could substitute another provider if you feel it necessary. VMware is the underlying infrastructure for this setup, and with that, I wanted automatic provisioning of persistent volumes, so setting up vSphere as a cloud provider is the final step.

Node Installation

I've got a total of 3 VMs, but you can add worker nodes as required. When you're creating the VMs, make sure the name in vCenter matches the guest name exactly. In my case I'll have k8s-master, k8s-n1, and k8s-n2. To start, install a minimal installation of CentOS 7.

Node Setup

The official documentation says to enable disk by UUID; however, you shouldn't need to do this anymore. The way to check is to see whether the guest has a /dev/disk/by-uuid folder; if it doesn't, follow the steps below (I'll leave them in for reference). Once you've got the OS running, power the VM down and enable disk UUID. VMware documentation is available from https://kb.vmware.com/s/article/52815 but the steps are pretty easy. As mentioned, make sure the VM is powered off or you won't be able to add a custom VM option. For those of you not on at least vSphere 6.7, you'll need to use the flash version of the GUI. The GUI steps are listed below, with a scripted alternative after the list.

  • vSphere > VM > Edit Settings > VM Options (a tab) > Advanced > Edit Configuration...
  • Name: disk.EnableUUID
  • Value: TRUE
  • Click Add and then OK. Repeat this for all nodes.
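If you have a lot of nodes, the same setting can be applied from the command line with VMware's govc tool rather than the GUI. This is only a sketch; it assumes govc is installed wherever you run it and that the GOVC_URL, GOVC_USERNAME, GOVC_PASSWORD (and, for a lab, GOVC_INSECURE) environment variables point at your vCenter. The VM still needs to be powered off:
$ govc vm.power -off k8s-n1
$ govc vm.change -vm k8s-n1 -e disk.enableUUID=TRUE
$ govc vm.power -on k8s-n1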

Disable SELinux and FirewallD
[root@k8s-master ~]# sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=permissive/g' /etc/sysconfig/selinux
[root@k8s-master ~]# systemctl disable firewalld
Comment out swap space from fstab if you have it
[root@k8s-master ~]# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
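If you don't want to wait for the reboot at the end of this section, swap can also be switched off immediately:
[root@k8s-master ~]# swapoff -a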
Having bridged traffic traverse iptables rules should already be the default, but it's listed as a requirement by the Kubernetes network plugin documentation, so I've set it explicitly anyway:
[root@k8s-master ~]# bash -c 'cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF'
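These settings are picked up on reboot, but you can try loading them right away too; if the bridge keys aren't available yet, they'll simply apply after the reboot:
[root@k8s-master ~]# sysctl --system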
Add Kubernetes Repository
[root@k8s-master ~]# bash -c 'cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF'
Install and enable Kubernetes
[root@k8s-master ~]# yum install kubeadm kubelet kubectl docker -y
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl enable kubelet
Reboot, as this will disable swap, SELinux, and firewalld, and start the docker and kubelet services. Repeat this process for all nodes.

Setup Kubernetes

Getting Kubernetes up and running should be fairly straightforward. Start by initializing the cluster on the master node. If you're running a Windows cluster you need to use the default 10.244.0.0/16 network; for me, that range has the potential to conflict with addresses within the organization, so I'll be using 192.168.0.0/16 as shown below:
[root@k8s-master ~]# kubeadm init --pod-network-cidr=192.168.0.0/16
This is going to return a lot of information. You should see a few key pieces of information:
  1. How to set up kubectl
    • This is the primary mechanism for talking to your cluster. You can do this on your laptop or just on the master node for now
      [root@k8s-master ~]# mkdir ~/.kube
      [root@k8s-master ~]# cp /etc/kubernetes/admin.conf ~/.kube/config
      [root@k8s-master ~]# chown $(id -u):$(id -g) ~/.kube/config
      
  2. How to join worker nodes
    • Copy and paste this to a safe place for now. If you lose it you can regenerate a token (they only last for a few hours anyway) with the following command:
      [root@k8s-master ~]# kubeadm token create --print-join-command 
      
You should join any worker nodes now with the command provided.
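For reference, the join command takes roughly this form, with the token and hash coming from your own kubeadm init output:
[root@k8s-n1 ~]# kubeadm join 10.9.176.25:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>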

If you check the status of your nodes now, you'll notice that they're not ready. And if you look at the pods, coredns will be in a pending state. This is because we haven't installed Calico yet.
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES    AGE     VERSION
k8s-master   NotReady   master   6m47s   v1.14.1
k8s-n1       NotReady   <none>   78s     v1.14.1
k8s-n2       NotReady   <none>   41s     v1.14.1
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-2fd9b              0/1     Pending   0          64m
coredns-fb8b8dccf-ddn42              0/1     Pending   0          64m
etcd-k8s-master                      1/1     Running   0          63m
kube-apiserver-k8s-master            1/1     Running   0          63m
kube-controller-manager-k8s-master   1/1     Running   0          64m
kube-proxy-lbptw                     1/1     Running   0          64m
kube-proxy-mx4gl                     1/1     Running   0          59m
kube-proxy-mzbmw                     1/1     Running   0          59m
kube-proxy-wxctq                     1/1     Running   0          59m
kube-scheduler-k8s-master            1/1     Running   0          63m
Calico is installed as a pod under the kube-system namespace with a simple command:
[root@k8s-master ~]# kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
All nodes should report ready and coredns should be in a running state. Congratulations!

vSphere Integration

vSphere is one of the built-in cloud providers, with documentation, however confusing and limited, available at the vSphere Cloud Provider site. It should be noted that all in-tree cloud providers have been deprecated within Kubernetes, but the cloud controller replacement seems to be in either an alpha or beta state. You can find out more and track component progress at the sites listed in the Links section at the end of this post.

vSphere Configuration

On the master node, you'll need a configuration file which tells kubernetes where to find your vCenter server. To do this you'll need some information from your specific installation.
  • The vCenter IP address or DNS name and the name of the data centre hosting your ESX cluster
    • For my setup this is 10.9.178.236
    • My data centre is named Vancouver; it's listed above your cluster in the vSphere Client
  • The name of a datastore to host your kubernetes volumes (which will host VMDK files)
  • The network name you're using for the kubernetes VMs
    • You might be using a default network which is usually called "VM Network" but I've got a specific one that we name after the VLAN number
  • A resource pool
    • If you don't have a resource pool or would like to use the default, that's OK (that's what I've used here); otherwise you'll need the resource pool path.
If you have a more complicated setup, like multiple vCenter servers, you can find additional options under Creating the vSphere cloud config file.
[root@k8s-master ~]# cat /etc/kubernetes/vsphere.conf
[Global]
port = "443"
insecure-flag = "1"
datacenters = "Vancouver"
secret-name = "vsphere-credentials"
secret-namespace = "kube-system"

[VirtualCenter "10.9.178.236"]

[Workspace]
server = "10.9.178.236"
datacenter = "Vancouver"
default-datastore="esx_labcluster2_ds02"
resourcepool-path="<cluster_name>/Resources"
folder = "kubernetes"

[Disk]
scsicontrollertype = pvscsi

[Network]
public-network = "VNET1805"
A special note about the folder entry: this has nothing to do with storage but rather the VM folder your Kubernetes nodes are in within vCenter. If you don't have one, now is a good time to create one. From your vSphere Client, select the VMs and Templates icon or tab, right click on your data centre, and select New Folder > New VM and Templates Folder... In the file above I used a folder named kubernetes. Create it and drag your master and all nodes into this folder.

If you're trying to provision a persistent volume and keep getting an error like the following, the VM folder is the likely problem:
I0718 21:39:08.761621       1 vsphere.go:1311] Datastore validation succeeded
E0718 21:39:09.104965       1 datacenter.go:231] Failed to get the folder reference for kubernetes. err: folder 'kubernetes' not found
E0718 21:39:09.104997       1 vsphere.go:1332] Failed to set VM options required to create a vsphere volume. err: folder 'kubernetes' not found
If you have datastore clusters or a storage folder, be sure to check the documentation referenced above for what that format might look like.

Secrets File

In order to provide authentication from Kubernetes to vSphere you'll need a secrets file. Technically you can put the credentials in your vsphere.conf file, but that has a couple of problems:
  1. Your passwords are stored in plain text
  2. If you have special characters in the username or password, like a domain account with a backslash in it, you'll have problems
Creating a secrets file is pretty easy. First, we'll encode (not encrypt) our username and password, and second, we'll store them in a Secret within our Kubernetes cluster.
[root@k8s-master ~]# echo -n 'my_username' | base64
[root@k8s-master ~]# echo -n 'my_super_password' | base64
[root@k8s-master ~]# bash -c 'cat <<EOF > ~/vsphere-credentials.yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-credentials
type: Opaque
data:
  10.9.178.236.username: <encoded_user>
  10.9.178.236.password: <encoded_password>
EOF'
[root@k8s-master ~]# kubectl apply -f ~/vsphere-credentials.yaml --namespace=kube-system
[root@k8s-master ~]# kubectl get secrets -n kube-system

Enabling vSphere

The key components deployed on the master node, the API server and the controller manager, now need to be told about vSphere. We'll start with the controller manager manifest file; the additions are the two --cloud-* flags at the end of the command list plus the vsphere.conf volume and volume mount:
[root@k8s-master ~]# cat /etc/kubernetes/manifests/kube-controller-manager.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --allocate-node-cidrs=true
    - --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf
    - --bind-address=127.0.0.1
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --cluster-cidr=192.168.0.0/16
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --controllers=*,bootstrapsigner,tokencleaner
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --leader-elect=true
    - --node-cidr-mask-size=24
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    - --use-service-account-credentials=true
    - --cloud-provider=vsphere
    - --cloud-config=/etc/kubernetes/vsphere.conf
    image: k8s.gcr.io/kube-controller-manager:v1.14.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/kubernetes/vsphere.conf
      name: vsphere-config
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
      type: FileOrCreate
    name: kubeconfig
  - hostPath:
      path: /etc/kubernetes/vsphere.conf
      type: FileOrCreate
    name: vsphere-config
status: {}
Next we need to modify the API server manifest file, again adding the --cloud-provider and --cloud-config flags along with the vsphere.conf volume and volume mount:
[root@k8s-master ~]# cat /etc/kubernetes/manifests/kube-apiserver.yaml 
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.9.176.25
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.crt
    - --etcd-certfile=/etc/kubernetes/pki/apiserver-etcd-client.crt
    - --etcd-keyfile=/etc/kubernetes/pki/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --cloud-provider=vsphere
    - --cloud-config=/etc/kubernetes/vsphere.conf
    image: k8s.gcr.io/kube-apiserver:v1.14.1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.9.176.25
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/kubernetes/vsphere.conf
      name: vsphere-config
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /etc/kubernetes/vsphere.conf
      type: FileOrCreate
    name: vsphere-config
status: {}
On the worker nodes you'll need to tell the kubelet to use vsphere as the cloud provider. There's no need to worry about a vsphere.conf file here, as that's handled by the controller manager. This is one of the poorest and most conflicted pieces of documentation, but the following works.
[root@k8s-n1 ~]# cat /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml --cloud-provider=vsphere"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
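Since the dropin above already sources /etc/sysconfig/kubelet, an alternative sketch is to leave 10-kubeadm.conf untouched and set the flag there instead (I went with the dropin edit myself):
[root@k8s-n1 ~]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--cloud-provider=vsphere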

Reload Configuration

To get all of this configuration loaded you can either reboot all nodes, or restart the kubelet service, which will in turn reload the API server and controller manager pods on the master and just the kubelet on the worker nodes:
[root@k8s-n1 ~]# systemctl daemon-reload
[root@k8s-n1 ~]# systemctl restart kubelet
[root@k8s-master ~]# reboot

UUID Problem

One of the issues I had was getting Kubernetes to locate the UUID of my master node. This prevented vSphere from finding the node in vCenter, which stopped everything from working. The error looks something like this:
[root@k8s-master ~]# kubectl -n kube-system logs kube-controller-manager-k8s-master | grep -B 2 "failed to add node"
E0507 16:55:20.983794       1 datacenter.go:78] Unable to find VM by UUID. VM UUID: 
E0507 16:55:20.983842       1 vsphere.go:1408] failed to add node &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-master,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-master,UID:f02a92f5-70e3-11e9-bd37-005056bc31df,ResourceVersion:3895,Generation:0,CreationTimestamp:2019-05-07 16:19:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-master,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,projectcalico.org/IPv4Address: 10.9.176.25/23,projectcalico.org/IPv4IPIPTunnelAddr: 192.168.235.192,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:192.168.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master  NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{38216605696 0} {}  BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8201801728 0} {} 8009572Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{2 0} {} 2 DecimalSI},ephemeral-storage: {{34394945070 0} {} 34394945070 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{8096944128 0} {} 7907172Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-05-07 16:55:09 +0000 UTC 2019-05-07 16:55:09 +0000 UTC CalicoIsUp Calico is running on this node} {MemoryPressure False 2019-05-07 16:54:57 +0000 UTC 2019-05-07 16:19:44 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-07 16:54:57 +0000 UTC 2019-05-07 16:19:44 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-07 16:54:57 +0000 UTC 2019-05-07 16:19:44 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-07 16:54:57 +0000 UTC 2019-05-07 16:52:05 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 10.9.176.25} {Hostname k8s-master}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:d772fbc266f3456387d01f1914e2a33c,SystemUUID:7B283C42-6E10-4736-8616-A6D505FB3D73,BootID:cd4f0ad4-bb18-4ca9-b3e9-d0f64465514c,KernelVersion:3.10.0-957.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.14.1,KubeProxyVersion:v1.14.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:bb3e3264bf74cc6929ec05b494d95b7aed9ee1e5c1a5c8e0693b0f89e2e7288e k8s.gcr.io/kube-apiserver:v1.14.1] 209878057} {[k8s.gcr.io/kube-controller-manager@sha256:5279e0030094c0ef2ba183bd9627e91e74987477218396bd97a5e070df241df5 k8s.gcr.io/kube-controller-manager:v1.14.1] 157903081} {[docker.io/calico/node@sha256:04806ef1d6a72f527d7ade9ed1f2fb78f7cb0a66f749f0f2d88a840e679e0d4a docker.io/calico/node:v3.7.1] 154794504} {[docker.io/calico/cni@sha256:cc72dab4d80a599913bda0479cf01c69d37a7a45a31334dbc1c9c7c6cead1893 
docker.io/calico/cni:v3.7.1] 135366007} {[k8s.gcr.io/kube-proxy@sha256:44af2833c6cbd9a7fc2e9d2f5244a39dfd2e31ad91bf9d4b7d810678db738ee9 k8s.gcr.io/kube-proxy:v1.14.1] 82108455} {[k8s.gcr.io/kube-scheduler@sha256:11af0ae34bc63cdc78b8bd3256dff1ba96bf2eee4849912047dee3e420b52f8f k8s.gcr.io/kube-scheduler:v1.14.1] 81581961} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}: No VM found
To fix that I had to modify the node object, as it was missing the providerID. To get the UUID of a node and update it in Kubernetes you can run the following:
[root@k8s-master ~]# cat /sys/class/dmi/id/product_serial | sed -e 's/^VMware-//' -e 's/-/ /' | awk '{ print toupper($1$2$3$4 "-" $5$6 "-" $7$8 "-" $9$10 "-" $11$12$13$14$15$16) }'
[root@k8s-master ~]# kubectl patch node <Node name> -p '{"spec":{"providerID":"vsphere://<vm uuid>"}}'
Make sure you do this right the first time, as setting it is easy but changing it can be problematic. It's likely you'll need to do this for all nodes; a combined sketch is below. You can find out more from a bug report on GitHub: https://github.com/kubernetes/kubernetes/issues/65933
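Putting the two commands together, here's the sort of per-node loop I'd sketch out, assuming password-less SSH from the master to each node (adjust the node names to yours):
[root@k8s-master ~]# for n in k8s-master k8s-n1 k8s-n2; do \
    uuid=$(ssh $n cat /sys/class/dmi/id/product_serial | sed -e 's/^VMware-//' -e 's/-/ /' | awk '{ print toupper($1$2$3$4 "-" $5$6 "-" $7$8 "-" $9$10 "-" $11$12$13$14$15$16) }'); \
    kubectl patch node $n -p "{\"spec\":{\"providerID\":\"vsphere://$uuid\"}}"; \
  done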

Logging

In the event you have problems, the logs are poor and hard to decipher, but here are a few helpful hints that worked for me. To see these logs you'll need to find the docker container ID for your controller manager; ignore the /pause container.
[root@k8s-master ~]# docker container ls | grep controller-manager
4796191d16d0        efb3887b411d           "kube-controller-m..."   5 hours ago         Up 5 hours                              k8s_kube-controller-manager_kube-controller-manager-k8s-master_kube-system_2a375eee0a4d25cea1a1eb0509d792c7_1
5a4de92a7ff2        k8s.gcr.io/pause:3.1   "/pause"                 5 hours ago         Up 5 hours                              k8s_POD_kube-controller-manager-k8s-master_kube-system_2a375eee0a4d25cea1a1eb0509d792c7_1
And to view the log output similar to tail -f you can run this:
[root@k8s-master ~]# docker logs -f 4796191d16d0
You're looking for lines like the following:
I0506 18:31:39.573627       1 vsphere.go:392] Initializing vc server 10.9.178.236
I0506 18:31:39.574195       1 vsphere.go:276] Setting up node informers for vSphere Cloud Provider
I0506 18:31:39.574238       1 vsphere.go:282] Node informers in vSphere cloud provider initialized
I0506 18:31:39.581044       1 vsphere.go:1406] Node added: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{<A lot of node information>}
I'd also recommend increasing the log level for the controller manager by adding the following flag to /etc/kubernetes/manifests/kube-controller-manager.yaml and restarting the machine or the container. Without it, success goes silently unnoticed, which is fine once you're up and running but makes things tough while you're trying to set things up.
  - --v=4
Success looks like this with one entry for each node:
I0507 17:17:29.380235       1 nodemanager.go:394] Invalid credentials. Cannot connect to server "10.9.178.236". Fetching credentials from secrets.
I0507 17:17:30.393408       1 connection.go:138] SessionManager.Login with username "itlab\\mengland"
I0507 17:17:30.563076       1 connection.go:209] New session ID for 'ITLAB\mengland' = 523502d1-7796-7de1-5d03-99733a3390c1
I0507 17:17:30.570269       1 nodemanager.go:166] Finding node k8s-master in vc=10.9.178.236 and datacenter=Vancouver
I0507 17:17:30.575057       1 nodemanager.go:194] Found node k8s-master as vm=VirtualMachine:vm-364036 in vc=10.9.178.236 and datacenter=Vancouver
I0507 17:17:30.575150       1 vsphere.go:1406] Node added: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:k8s-n1,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/k8s-n1,UID:8adb895b-70e4-11e9-bd37-005056bc31df,ResourceVersion:5719,Generation:0,CreationTimestamp:2019-05-07 16:24:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: k8s-n1,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,projectcalico.org/IPv4Address: 10.9.176.32/23,projectcalico.org/IPv4IPIPTunnelAddr: 192.168.215.64,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:192.168.1.0/24,DoNotUse_ExternalID:,ProviderID:vsphere://423CDDFE-1970-4958-56B7-415BEE001167,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{38216605696 0} {}  BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{16657203200 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{34394945070 0} {} 34394945070 DecimalSI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{16552345600 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-05-07 16:25:16 +0000 UTC 2019-05-07 16:25:16 +0000 UTC CalicoIsUp Calico is running on this node} {MemoryPressure False 2019-05-07 17:16:30 +0000 UTC 2019-05-07 16:24:09 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2019-05-07 17:16:30 +0000 UTC 2019-05-07 16:24:09 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2019-05-07 17:16:30 +0000 UTC 2019-05-07 16:24:09 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2019-05-07 17:16:30 +0000 UTC 2019-05-07 16:48:45 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{ExternalIP 10.9.176.32} {InternalIP 10.9.176.32} {Hostname k8s-n1}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:ce61f472a8d74d9ba195df280e104030,SystemUUID:FEDD3C42-7019-5849-56B7-415BEE001167,BootID:54c28221-7d74-41c0-ac97-2f5e95fa3bf8,KernelVersion:3.10.0-957.el7.x86_64,OSImage:CentOS Linux 7 (Core),ContainerRuntimeVersion:docker://1.13.1,KubeletVersion:v1.14.1,KubeProxyVersion:v1.14.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[docker.io/calico/node@sha256:04806ef1d6a72f527d7ade9ed1f2fb78f7cb0a66f749f0f2d88a840e679e0d4a docker.io/calico/node:v3.7.1] 154794504} {[docker.io/calico/cni@sha256:cc72dab4d80a599913bda0479cf01c69d37a7a45a31334dbc1c9c7c6cead1893 docker.io/calico/cni:v3.7.1] 135366007} {[k8s.gcr.io/kube-proxy@sha256:44af2833c6cbd9a7fc2e9d2f5244a39dfd2e31ad91bf9d4b7d810678db738ee9 k8s.gcr.io/kube-proxy:v1.14.1] 82108455} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}

Add Storage Class

You thought we were done, didn't you? Nope, we need to add a StorageClass so the storage can be used by pods. Some of our defaults were defined in the vsphere.conf file, like the destination datastore, but you can also specify that here. You can call the class anything you want, and you can have more than one; for example, if you've got a datastore on SSD and one on SATA, you could create two different classes. Here's a sample class and how to deploy it to the cluster:
[root@k8s-master ~]# cat vmware-storage.yaml 
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-ssd
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
parameters:
  diskformat: thin
  datastore: esx_labcluster2_ds02
provisioner: kubernetes.io/vsphere-volume
[root@k8s-master ~]# kubectl apply -f vmware-storage.yaml
If you're using block-based storage (e.g. fibre channel or iSCSI) you probably want to use a thick provisioned disk format, as initial write performance can be terrible on thin provisioned block storage, but that's a discussion for another time.
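As an example, a second class using a thick format against another datastore might look like the following; the class name and datastore here are made up, and zeroedthick (or eagerzeroedthick) are the documented thick options for this provisioner:
[root@k8s-master ~]# cat vmware-storage-thick.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: vsphere-sata-thick
parameters:
  diskformat: zeroedthick
  datastore: esx_labcluster2_ds03
provisioner: kubernetes.io/vsphere-volume
[root@k8s-master ~]# kubectl apply -f vmware-storage-thick.yaml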

All of your VMDK files will be placed in a folder kubevols at the root of your datastore.

Using Storage

Last step! Let's actually make use of our storage and test things out. This is just a quick persistent volume claim, which should allocate a persistent volume through vSphere.
[root@k8s-master ~]# cat pvc_test.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
[root@k8s-master ~]# kubectl create -f pvc_test.yaml
It might take a few seconds, but you should end up with a persistent volume:
[root@k8s-master ~]# kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test   Bound    pvc-9cc548db-a9d3-11e9-a5fe-005056bc8d93   5Gi        RWO            vsphere-ssd    59m
[root@k8s-master ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
pvc-9cc548db-a9d3-11e9-a5fe-005056bc8d93   5Gi        RWO            Delete           Bound    default/test   vsphere-ssd             27m
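To prove the volume actually attaches to a VM, a throwaway pod that mounts the claim works well; this is just a sketch using a busybox image:
[root@k8s-master ~]# cat pvc_pod_test.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pvc-test
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test
[root@k8s-master ~]# kubectl create -f pvc_pod_test.yaml
[root@k8s-master ~]# kubectl exec -it pvc-test -- df -h /data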

Cleaning Up

If you make mistakes or have problems, here are some helpful commands to clean up:
[root@k8s-master ~]# kubectl delete pvc test
To reset kubernetes back to scratch you can run these commands:
[root@k8s-master ~]# kubectl delete node --all
[root@k8s-master ~]# kubeadm reset
[root@k8s-master ~]# rm -rf /var/lib/etcd/*
[root@k8s-master ~]# rm -rf /etc/kubernetes
[root@k8s-master ~]# rm -rf ~/.kube
[root@k8s-master ~]# docker image rm `docker images -q`
If you'd like to cleanup a node:
[root@k8s-master ~]# kubectl delete node <node_name>
[root@k8s-n1 ~]# rm -rf /var/lib/kubelet
[root@k8s-n1 ~]# rm -rf /etc/kubernetes
Either reboot, or kill the kubelet process if it's running.

Links

Kevin Tijssen's Blog
- One of the better written guides I've seen. It really, really helped getting Kubernetes running quickly and easily
- http://blog.kevintijssen.eu/how-to-install-kubernetes-on-centos-7/
vSphere Cloud Provider
- Not my favourite but it's what's available
- https://vmware.github.io/vsphere-storage-for-kubernetes/documentation/overview.html
Kubernetes Cloud Controller (the future of cloud providers)
- https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/
vSphere Container Storage Interface
- https://github.com/kubernetes-sigs/vsphere-csi-driver
Cloud Controller Manager for vSphere (future of kubernetes and vSphere)
- https://github.com/kubernetes/cloud-provider-vsphere