Wednesday, February 5, 2020

Elasticsearch ECK Snapshot To S3

Elastic has published a Kubernetes operator, Elastic Cloud on Kubernetes or ECK, which I think is really, really great. It simplifies a lot, especially for those of us looking to set up relatively simple clusters. I did, however, have quite a bit of trouble following the documentation to get an S3 bucket connected for snapshot purposes, so I thought I'd document my solution in a quick write-up.

The guide is written around GCS (Google Cloud Storage), and while there is information on the S3 plugin, I just couldn't get things talking correctly. I wanted to use a specific access key ID and secret key, as that was easier for me to control in our environment. To do that, the documentation has you inject both the plugin and a secrets file that looks something like this:
ubuntu@k8s-master:~$ cat s3.client.default.credentials_file 
{
  "s3.client.default.access_key": "HLQA2HMA2FG3ABK4L2FV",
  "s3.client.default.secret_key": "3zmdKM2KEy/oPOGZfZpWJR3T46TxwtyMxZRpQQgF"
}
ubuntu@k8s-master:~$ kubectl create secret generic s3-credentials --from-file=s3.client.default.credentials_file
I kept getting errors like this:
unknown secure setting s3.client.default.credentials_file
If you look at what the above command created, it set up a secret where the key is s3.client.default.credentials_file and the value is the JSON from the file. What it's actually supposed to look like is a key of s3.client.default.access_key with the value being the access key itself (and likewise for the secret key). So I came up with this; please pay attention to the namespace:
ubuntu@k8s-master:~$ cat s3-credentials.yaml
apiVersion: v1
kind: Secret
metadata:
  name: s3-credentials
  namespace: elastic-dev
type: Opaque
data:
  s3.client.default.access_key: SExRQTJITUEyRkczQUJLNEwyRlY=
  s3.client.default.secret_key: M3ptZEtNMktFeS9vUE9HWmZacFdKUjNUNDZUeHd0eU14WlJwUVFnRg==
ubuntu@k8s-master:~$ kubectl apply -f s3-credentials.yaml
You'll also notice that the access key and secret don't match the earlier file. That's because a Kubernetes secret requires the values to be base64 encoded, which is awesome as it gets around all the special-character escaping issues. That's simple enough:
ubuntu@k8s-master:~$ echo -n "HLQA2HMA2FG3ABK4L2FV" | base64
ubuntu@k8s-master:~$ echo -n "3zmdKM2KEy/oPOGZfZpWJR3T46TxwtyMxZRpQQgF" | base64
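One pitfall worth calling out: a bare echo appends a trailing newline, and that newline gets base64-encoded right along with the credential, leaving an invisible newline inside the value Elasticsearch reads as your secret key. The -n flag keeps the encoding to just the credential itself. You can see the difference with the access key from above:

```shell
# A bare echo encodes a trailing newline into the value (note the differing output)
echo "HLQA2HMA2FG3ABK4L2FV" | base64
# SExRQTJITUEyRkczQUJLNEwyRlYK

# With -n, only the credential itself is encoded
echo -n "HLQA2HMA2FG3ABK4L2FV" | base64
# SExRQTJITUEyRkczQUJLNEwyRlY=
```

If you'd rather skip hand-encoding entirely, kubectl can also build the same secret directly with --from-literal=s3.client.default.access_key=... and --from-literal=s3.client.default.secret_key=..., handling the base64 for you.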
The last piece is the Elasticsearch manifest itself. There are two notable pieces: the secureSettings entry and the installation of the S3 plugin. I've also elected to disable TLS, as I'm terminating my encryption at an ingress point. This is a very basic configuration, but hopefully it gets the basics going for further customization.
ubuntu@k8s-master:~$ cat elasticsearch.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elastic-dev
  namespace: elastic-dev
spec:
  version: 7.5.2
  secureSettings:
  - secretName: s3-credentials
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 3Gi
    podTemplate:
      spec:
        initContainers:
        - name: install-plugins
          command:
          - sh
          - -c
          - |
            bin/elasticsearch-plugin install --batch repository-s3
        containers:
        - name: elasticsearch
          # specify resource limits and requests
          resources:
            limits:
              memory: 1Gi
              cpu: 0.5
          env:
          - name: ES_JAVA_OPTS
            value: "-Xms512m -Xmx512m"
  http:
    tls:
      selfSignedCertificate:
        disabled: true
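Once the cluster comes up, the bucket still has to be registered as a snapshot repository through the snapshot API. Here's a sketch of that last step; the repository name my-s3-repo and bucket my-snapshot-bucket are placeholders for your own, while elastic-dev-es-http and elastic-dev-es-elastic-user are the service and secret names ECK derives from the cluster name above:

```shell
# Grab the auto-generated password for the built-in elastic user
PASSWORD=$(kubectl get secret elastic-dev-es-elastic-user -n elastic-dev \
  -o jsonpath='{.data.elastic}' | base64 -d)

# Expose the cluster locally (plain http, since TLS is disabled above)
kubectl port-forward -n elastic-dev service/elastic-dev-es-http 9200 &

# Register the S3 repository; bucket and repository names are placeholders
curl -u "elastic:$PASSWORD" -X PUT "http://localhost:9200/_snapshot/my-s3-repo" \
  -H 'Content-Type: application/json' -d '
{
  "type": "s3",
  "settings": { "bucket": "my-snapshot-bucket" }
}'

# Take a snapshot to confirm everything is talking to S3
curl -u "elastic:$PASSWORD" \
  -X PUT "http://localhost:9200/_snapshot/my-s3-repo/test-snapshot?wait_for_completion=true"
```

If the secure settings didn't land correctly, the repository registration is where it shows up, so this doubles as a quick sanity check of the whole chain.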