Installing and configuring operators

In this exercise you will learn about the Etcd Operator.

Go to the console page "[Installed Operators](%console_url%/ns/%project_namespace%/clusterserviceversions)", where the Etcd Operator should be listed. If it is not visible, the OpenShift Platform Administrator needs to subscribe to it and make it available to all namespaces on the cluster.
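
For reference, that subscription is itself just another Kubernetes resource. Below is a minimal sketch of what a cluster administrator might create; the channel, source, and namespace values are assumptions typical of a community operator catalog and will vary between clusters:

oc create -f - << END
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: openshift-operators    # a namespace with a cluster-wide OperatorGroup (assumption)
spec:
  name: etcd                        # package name in the catalog (assumption)
  channel: clusterwide-alpha        # channel that ships the cluster-wide operator (assumption)
  source: community-operators       # catalog source name (assumption)
  sourceNamespace: openshift-marketplace
END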

First, check what’s running in your project:

oc get po

<!-- Clean up the project:

oc delete all --all

-->

With the following command, we can observe the pods of the Etcd cluster in the lower terminal (the grep filters hide completed pods and deployer pods):

watch "oc get pods | grep -v -e ' Completed ' -e \-deploy"

Leave this command running for the duration of this exercise.

Instantiate an Etcd Cluster by creating the EtcdCluster custom resource:

oc create -f - << END
apiVersion: etcd.database.coreos.com/v1beta2
kind: EtcdCluster
metadata:
  name: example
  annotations:
    etcd.database.coreos.com/scope: clusterwide
spec:
  size: 3
  version: 3.2.13
  pod:
    persistentVolumeClaimSpec:
      storageClassName: gp2
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
END
  • Note: If you see the error message no matches for kind "EtcdCluster", the EtcdCluster custom resource is unknown to the cluster, which most likely means the Etcd Operator has not been configured yet.
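
To verify, you can check whether the corresponding custom resource definition is registered; the CRD name below is derived from the kind and API group used in the manifest above:

oc get crd etcdclusters.etcd.database.coreos.com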

Note that etcd version 3.2.13 will be deployed as a cluster of 3 instances (one per pod), and each pod will be provisioned with its own persistent volume using the gp2 storage class (adjust storageClassName above if your cluster uses a different one).
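
You can also list the persistent volume claims the operator requests, one per member (exact names will vary):

oc get pvc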

As the Etcd cluster is being created, observe the steps taken in the upper terminal with this command:

oc get events -w | grep /example
  • Note: During the provisioning process you may see some warnings about volume provisioning. This is normal: OpenShift is waiting for all three volumes to be created.

You should be able to observe the steps taken to create the Etcd cluster, e.g. "New member example-xxxyyyzzz added to cluster".

After all three pods of the Etcd cluster have been created (see them in the lower terminal), stop the command in the upper terminal:

<ctrl+c>

Now, view the Custom Resource:

oc get EtcdCluster
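
Custom resources support the usual output options; for example, to print just the desired cluster size:

oc get EtcdCluster example -o jsonpath='{.spec.size}'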

View the details about the Custom Resource:

oc describe EtcdCluster example
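
Before connecting to the cluster, note that the operator has also created services in front of it; the example-client service is the one etcdctl will target below:

oc get svc | grep example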

To access the Etcd Cluster, launch a separate pod containing the etcdctl command:

oc run --rm -it testclient --image quay.io/coreos/etcd --restart=Never -- /bin/sh

A $ command prompt should appear.

Inside the pod, run the following commands:

Select version 3 of the etcdctl API:

export ETCDCTL_API=3

Using the etcdctl command, add a value to the Etcd cluster:

etcdctl --endpoints http://example-client:2379 put foo bar

Read a value from the Etcd cluster:

etcdctl --endpoints http://example-client:2379 get foo

Delete a value from the Etcd cluster:

etcdctl --endpoints http://example-client:2379 del foo

Try to read a value that does not exist from the Etcd cluster:

etcdctl --endpoints http://example-client:2379 get foo

Exit from the etcdctl pod:

exit

Delete one instance of the Etcd cluster:

oc delete pod `oc get pod | grep example | awk '{print $1}' | tail -1`

In the lower terminal, watch the Etcd Operator repair the Etcd cluster. This is similar to what a Kubernetes deployment controller would do, but there is a lot more to it: the Etcd Operator needs to create a new etcd member, add it back into the cluster, and initialize it with (re-distribute) the data that already exists in the cluster.

You will see that the Etcd Operator restored the Etcd Cluster back to how it was.
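
If you would like to inspect the repaired membership directly, one option (a sketch reusing the image and client service from earlier; the pod name is arbitrary) is to list the members from a throwaway pod:

oc run --rm -it memberlist --image quay.io/coreos/etcd --restart=Never --env ETCDCTL_API=3 -- etcdctl --endpoints http://example-client:2379 member list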

It is also possible to expand the Etcd Cluster by increasing the size:

oc patch EtcdCluster example --type merge -p '{"spec":{"size":5}}'

Observe how the Etcd Cluster is scaled out.
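
Scaling back in works the same way; patch the size back down and the operator will remove members:

oc patch EtcdCluster example --type merge -p '{"spec":{"size":3}}'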

Now, to remove the Etcd cluster, all that’s needed is to remove the custom resource:

oc delete EtcdCluster example
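
Depending on the operator version, the persistent volume claims may not be removed automatically; check for leftovers and delete them manually if necessary:

oc get pvc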

Stop the watch command:

<ctrl+c>

In this exercise you were able to deploy an etcd cluster, connect to it, scale it and watch it self-heal.


That’s the end of this exercise.