This article describes how to upgrade the Kubernetes version of an EKS cluster that has OpenEBS running on it. To learn how to set up an EKS cluster, install OpenEBS, and configure eksctl and the AWS CLI, refer to this article. This upgrade requires eksctl and the AWS CLI to be pre-configured.
This article uses a 3-node EKS cluster deployed through eksctl, running in the AWS region ap-south-1: the first node is deployed in ap-south-1a, the second in ap-south-1b, and the third in ap-south-1c. One EBS volume from the respective availability zone is attached to each node. A Percona application is running on a cStor volume in this cluster.
Prerequisites to update a cluster
Compare the Kubernetes version of your cluster's control plane to the Kubernetes version of your worker nodes.
To get the Kubernetes version of your cluster's control plane, execute the following command:
kubectl version --short
Sample Output:
Client Version: v1.10.0
Server Version: v1.13.12-eks-eb1860
To get the Kubernetes version of your worker nodes, execute the following command:
kubectl get nodes
Sample output:
NAME STATUS ROLES AGE VERSION
ip-192-168-3-230.ap-south-1.compute.internal Ready <none> 19m v1.13.12
ip-192-168-62-243.ap-south-1.compute.internal Ready <none> 19m v1.13.12
ip-192-168-75-200.ap-south-1.compute.internal Ready <none> 19m v1.13.12
Save the output of the following commands and keep it somewhere safe; it will be needed later in the upgrade.
kubectl get bd -n openebs -o yaml
kubectl get spc -o yaml
kubectl get csp -o yaml
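One simple way to keep these outputs safe is to redirect them to local files; the file names below are arbitrary:
kubectl get bd -n openebs -o yaml > bd-backup.yaml
kubectl get spc -o yaml > spc-backup.yaml
kubectl get csp -o yaml > csp-backup.yaml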
Update Control Plane Version
Control plane version updates must be done one minor version at a time.
To update the control plane to the next available version, run:
eksctl update cluster --name=<clusterName> --approve -r <region>
Sample command:
eksctl update cluster --name=article-eks --approve -r ap-south-1
Sample output:
[ℹ] eksctl version 0.13.0
[ℹ] using region ap-south-1
[ℹ] will upgrade cluster "final-eks" control plane from current version "1.13" to "1.14"
[✔] cluster "final-eks" control plane has been upgraded to version "1.14"
[ℹ] you will need to follow the upgrade procedure for all of nodegroups and add-ons
[ℹ] re-building cluster stack "eksctl-final-eks-cluster"
[ℹ] updating stack to add new resources [IngressDefaultClusterToNodeSG IngressNodeToDefaultClusterSG] and outputs [ClusterSecurityGroupId]
[ℹ] checking security group configuration for all nodegroups
[ℹ] all nodegroups have up-to-date configuration
Now, to verify the Kubernetes version, execute:
kubectl version --short
Sample Output:
Client Version: v1.10.0
Server Version: v1.14.9-eks-c0eccc
Update nodegroups
Next, you have to update your worker nodes.
Note: You should update the nodegroup only after you have run eksctl update cluster.
The worker nodes can be updated in 2 ways.
- Migrating to a New Worker Node Group
- Updating an Existing Worker Node Group
However, migrating to a new worker node group is recommended, as it is more graceful than simply updating the AMI ID in an existing AWS CloudFormation stack: the migration process taints the old node group as NoSchedule and drains its nodes once the new stack is ready to accept the existing pod workload.
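While the migration is in progress, the NoSchedule taint on the old nodes can be observed with a command such as the one below (the custom-columns output format is just one way to display taints):
kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints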
To get the name of the old nodegroup, execute:
eksctl get nodegroups --cluster=<clusterName>
Sample command:
eksctl get nodegroups --cluster=article-eks
Sample output:
CLUSTER NODEGROUP CREATED MIN SIZE MAX SIZE DESIRED CAPACITY INSTANCE TYPE IMAGE ID
article-eks open-workers 2020-02-15T18:58:49Z 3 3 3 t3.medium ami-08673c021b556d32e
Next, you need to create a new nodegroup.
To create a new nodegroup, execute:
eksctl create nodegroup \
--cluster <cluster name> \
--version <updated kubernetes version> \
--name <name of the node group> \
--node-type <node type> \
--nodes <number of nodes> \
--nodes-min <number of min nodes> \
--nodes-max <number of max nodes> \
--node-ami <ami image id of the kubernetes version to be updated>
Sample command:
eksctl create nodegroup \
--cluster article-eks \
--version 1.14 \
--name k8s-cluster \
--node-type t3.medium \
--nodes 3 \
--nodes-min 3 \
--nodes-max 3 \
--node-ami ami-09128d2180521816f
Note: The AMI IDs for the different Kubernetes versions can be found here.
Once the new nodegroup is deployed, verify it using the following command.
eksctl get nodegroups --cluster=<cluster name>
Sample command:
eksctl get nodegroups --cluster=upgrade-eks
Sample output:
CLUSTER NODEGROUP CREATED MIN SIZE MAX SIZE DESIRED CAPACITY INSTANCE TYPE IMAGE ID
upgrade-eks chandan-workers 2020-02-15T16:33:25Z 3 3 3 t3.medium ami-08673c021b556d32e
upgrade-eks new-cluster 2020-02-15T17:45:35Z 3 3 3 t3.medium ami-09128d2180521816f
Once the new worker nodegroup comes up, check the status of all the OpenEBS pods and application pods. All the pods should be in the Running state, and some new pods will come up for the new worker nodes.
To know the status of the pods, execute:
kubectl get pods -n openebs
Sample Output:
NAME READY STATUS RESTARTS AGE
cstor-eks-pool-0n8k-68ddc5558-gwqws 3/3 Running 0 45m
cstor-eks-pool-fpe2-6848797cd5-qxcjw 3/3 Running 0 45m
cstor-eks-pool-m7an-788c7cdbd5-vhg4z 3/3 Running 0 45m
maya-apiserver-58c68c9f8f-xfwdz 1/1 Running 2 55m
openebs-admission-server-6b77fd668b-kvfcj 1/1 Running 0 55m
openebs-localpv-provisioner-869fbc885f-qb475 1/1 Running 1 55m
openebs-ndm-6xg4z 1/1 Running 0 55m
openebs-ndm-7v6pc 1/1 Running 0 82s
openebs-ndm-9lf6t 1/1 Running 0 55m
openebs-ndm-jp4pm 1/1 Running 0 51s
openebs-ndm-kz2cn 1/1 Running 0 55m
openebs-ndm-operator-7c6cc67c49-ccj47 1/1 Running 1 55m
openebs-ndm-qq6wc 1/1 Running 0 81s
openebs-provisioner-744bbc9496-s52t7 1/1 Running 2 55m
openebs-snapshot-operator-58cb57bd5b-2cckj 2/2 Running 1 55m
openebs-ubuntu-init-44dp6 1/1 Running 0 53m
openebs-ubuntu-init-68vk5 1/1 Running 0 51s
openebs-ubuntu-init-6xwh7 1/1 Running 0 53m
openebs-ubuntu-init-b52kv 1/1 Running 0 82s
openebs-ubuntu-init-wsjdx 1/1 Running 0 53m
openebs-ubuntu-init-zjh54 1/1 Running 0 81s
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-target-cb869d444-98jj2 3/3 Running 0 42m
To know the status of the application pod, execute:
kubectl get pods
Sample Output:
NAME READY STATUS RESTARTS AGE
percona-66db7d9b88-k8c6h 1/1 Running 0 46m
Now, delete the old worker nodegroup with the command below.
eksctl delete nodegroup --cluster=<cluster name> --name=<nodegroup name>
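Sample command, using the example cluster and old nodegroup names from earlier:
eksctl delete nodegroup --cluster=article-eks --name=open-workers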
Once the old worker nodegroup is deleted, all the OpenEBS pods will be scheduled on the new worker nodes. However, the cStor pool pods will still be in a Pending state and the application pod will be in a ContainerCreating state, because the blockdevices are still attached to the old worker nodes. Since the old worker nodes have been deleted, the user has to reattach the EBS volumes to the new worker nodes.
To know the status of the pods, execute:
kubectl get pods -n openebs
NAME READY STATUS RESTARTS AGE
cstor-eks-pool-0n8k-68ddc5558-472v2 0/3 Pending 0 3m23s
cstor-eks-pool-fpe2-6848797cd5-bdc8x 0/3 Pending 0 3m23s
cstor-eks-pool-m7an-788c7cdbd5-crhfx 0/3 Pending 0 3m22s
maya-apiserver-58c68c9f8f-z772s 1/1 Running 0 3m23s
openebs-admission-server-6b77fd668b-zrpkn 1/1 Running 0 3m22s
openebs-localpv-provisioner-869fbc885f-mkgxm 1/1 Running 0 3m23s
openebs-ndm-7v6pc 1/1 Running 0 21m
openebs-ndm-jp4pm 1/1 Running 0 20m
openebs-ndm-operator-7c6cc67c49-gcq7x 1/1 Running 0 3m22s
openebs-ndm-qq6wc 1/1 Running 0 21m
openebs-provisioner-744bbc9496-7jh5m 1/1 Running 0 3m23s
openebs-snapshot-operator-58cb57bd5b-cz6tk 2/2 Running 0 3m23s
openebs-ubuntu-init-68vk5 1/1 Running 0 20m
openebs-ubuntu-init-b52kv 1/1 Running 0 21m
openebs-ubuntu-init-zjh54 1/1 Running 0 21m
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-target-cb869d444-s98bp 3/3 Running 0 3m22s
To know the status of the application pod, execute:
kubectl get pods
Sample Output:
NAME READY STATUS RESTARTS AGE
percona-66db7d9b88-wgp8v 0/1 ContainerCreating 0 3m37s
Note: This procedure only works if the new worker nodes come up in the same availability zones as the old worker nodes.
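To confirm that the new worker nodes came up in the same availability zones, the zone label on each node can be listed; on Kubernetes 1.14 the zone is usually exposed through the failure-domain.beta.kubernetes.io/zone label (newer releases use topology.kubernetes.io/zone):
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone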
Users can attach the volumes to the nodes using the command below.
aws ec2 attach-volume --volume-id <volume-id> --instance-id <instance-id> --device <device name>
Note: Each volume has to be attached to the worker node in its respective availability zone. The instance-id and volume-id details can be found in the AWS console, along with a device name (for example, /dev/sdh or xvdh).
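A sample invocation is shown below; the volume and instance IDs here are placeholders and must be replaced with the values from your AWS console:
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sdf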
Import the cStor pool
From the above output it can be seen that the cStor pool pods are in a Pending state. This is because they still reference the old blockdevices, which are in an Unknown state since they belong to the old worker nodes that have been deleted. The user now has to update the OpenEBS resources with the new blockdevices that come up after attaching the volumes to the new worker nodes.
Add the annotation reconcile.openebs.io/disable: "true" on the SPC. This can be done using the command below.
kubectl edit spc <spc_name>
Example:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
annotations:
cas.openebs.io/config: |
- name: PoolResourceRequests
value: |-
memory: 2Gi
- name: PoolResourceLimits
value: |-
memory: 4Gi
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"openebs.io/v1alpha1","kind":"StoragePoolClaim","metadata":{"annotations":{"cas.openebs.io/config":"- name: PoolResourceRequests\n value: |-\n memory: 2Gi\n- name: PoolResourceLimits\n value: |-\n memory: 4Gi\n"},"name":"cstor-eks-pool"},"spec":{"blockDevices":{"blockDeviceList":["blockdevice-20e99af10f332336e6ac5d1bf7a7c704","blockdevice-a764c47f99771e1470f8e862e6ad2f17","blockdevice-c3518649d70b6692f3b683c3a7c89e60"]},"name":"cstor-eks-pool","poolSpec":{"poolType":"striped"},"type":"disk"}}
openebs.io/spc-lease: '{"holder":"","leaderTransition":2}'
reconcile.openebs.io/disable: "true"
creationTimestamp: "2020-02-17T08:27:03Z"
finalizers:
- storagepoolclaim.openebs.io/finalizer
generation: 4
name: cstor-eks-pool
resourceVersion: "5730"
selfLink: /apis/openebs.io/v1alpha1/storagepoolclaims/cstor-eks-pool
uid: 46419b5d-515f-11ea-842f-02954e80f65c
spec:
blockDevices:
blockDeviceList:
- blockdevice-20e99af10f332336e6ac5d1bf7a7c704
- blockdevice-a764c47f99771e1470f8e862e6ad2f17
- blockdevice-c3518649d70b6692f3b683c3a7c89e60
maxPools: null
minPools: 0
name: cstor-eks-pool
poolSpec:
cacheFile: ""
overProvisioning: false
poolType: striped
type: disk
status:
phase: Online
versionDetails:
autoUpgrade: false
desired: 1.6.0
status:
current: 1.6.0
dependentsUpgraded: true
lastUpdateTime: null
state: ""
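The annotation can also be applied without opening an editor; for example:
kubectl annotate spc cstor-eks-pool reconcile.openebs.io/disable="true"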
The same can be verified from the SPC description. To describe the SPC, execute:
kubectl describe spc cstor-eks-pool
Sample Output:
Name: cstor-eks-pool
Namespace:
Labels: <none>
Annotations: cas.openebs.io/config:
- name: PoolResourceRequests
value: |-
memory: 2Gi
- name: PoolResourceLimits
value: |-
memory: 4Gi
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"openebs.io/v1alpha1","kind":"StoragePoolClaim","metadata":{"annotations":{"cas.openebs.io/config":"- name: PoolResourceRequ...
openebs.io/spc-lease: {"holder":"","leaderTransition":2}
reconcile.openebs.io/disable: true
API Version: openebs.io/v1alpha1
Kind: StoragePoolClaim
Metadata:
Creation Timestamp: 2020-02-17T08:27:03Z
Finalizers:
storagepoolclaim.openebs.io/finalizer
Generation: 4
Resource Version: 23690
Self Link: /apis/openebs.io/v1alpha1/storagepoolclaims/cstor-eks-pool
UID: 46419b5d-515f-11ea-842f-02954e80f65c
Spec:
Block Devices:
Block Device List:
blockdevice-20e99af10f332336e6ac5d1bf7a7c704
blockdevice-a764c47f99771e1470f8e862e6ad2f17
blockdevice-c3518649d70b6692f3b683c3a7c89e60
Max Pools: <nil>
Min Pools: 0
Name: cstor-eks-pool
Pool Spec:
Cache File:
Over Provisioning: false
Pool Type: striped
Type: disk
Status:
Phase: Online
Version Details:
Auto Upgrade: false
Desired: 1.6.0
Status:
Current: 1.6.0
Dependents Upgraded: true
Last Update Time: <nil>
State:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Update 5s (x2 over 18s) spc-controller reconcile is disabled via "reconcile.openebs.io/disable" annotation
Now the user has to manually create a BlockDeviceClaim (BDC) for each new blockdevice.
First, check the status of the blockdevices using the following command.
kubectl get bd -n openebs
Example output:
NAME NODENAME SIZE CLAIMSTATE STATUS AGE
blockdevice-03199d821af214edbef04d314de51961 ip-192-168-91-218.ap-south-1.compute.internal 85899345920 Unclaimed Active 81s
blockdevice-0b4616bcf19d81ab2f99807e32923d05 ip-192-168-47-245.ap-south-1.compute.internal 85899345920 Unclaimed Active 4s
blockdevice-20e99af10f332336e6ac5d1bf7a7c704 ip-192-168-75-200.ap-south-1.compute.internal 85899345920 Claimed Unknown 114m
blockdevice-9d0654ff34269384270c813aee7c0208 ip-192-168-8-137.ap-south-1.compute.internal 85899345920 Unclaimed Active 38s
blockdevice-a764c47f99771e1470f8e862e6ad2f17 ip-192-168-62-243.ap-south-1.compute.internal 85899345920 Claimed Unknown 114m
blockdevice-c3518649d70b6692f3b683c3a7c89e60 ip-192-168-3-230.ap-south-1.compute.internal 85899345920 Claimed Unknown 114m
Now, a manual BDC has to be created for each of the Active and Unclaimed blockdevices that are going to be used in the cStor pool. The user can use the below YAML template for this.
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
labels:
openebs.io/storage-pool-claim: <SPC_Name>
name: <bdc-uid_corresponding_bd>
namespace: <openebs_namespace>
ownerReferences:
- apiVersion: openebs.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: StoragePoolClaim
name: <SPC_Name>
uid: <SPC_UID>
finalizers:
- storagepoolclaim.openebs.io/finalizer
spec:
blockDeviceName: <blockdevice_name>
blockDeviceNodeAttributes:
hostName: <host_name>
deviceClaimDetails: {}
deviceType: ""
hostName: ""
resources:
requests:
storage: <size_of_blockdevice>
Details like <SPC_Name> and <SPC_UID> can be found using the following command.
kubectl get spc <SPC_Name> -o yaml
Sample command:
kubectl get spc cstor-eks-pool -o yaml
Sample Output:
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
annotations:
cas.openebs.io/config: |
- name: PoolResourceRequests
value: |-
memory: 2Gi
- name: PoolResourceLimits
value: |-
memory: 4Gi
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"openebs.io/v1alpha1","kind":"StoragePoolClaim","metadata":{"annotations":{"cas.openebs.io/config":"- name: PoolResourceRequests\n value: |-\n memory: 2Gi\n- name: PoolResourceLimits\n value: |-\n memory: 4Gi\n"},"name":"cstor-eks-pool"},"spec":{"blockDevices":{"blockDeviceList":["blockdevice-20e99af10f332336e6ac5d1bf7a7c704","blockdevice-a764c47f99771e1470f8e862e6ad2f17","blockdevice-c3518649d70b6692f3b683c3a7c89e60"]},"name":"cstor-eks-pool","poolSpec":{"poolType":"striped"},"type":"disk"}}
openebs.io/spc-lease: '{"holder":"","leaderTransition":2}'
reconcile.openebs.io/disable: "true"
creationTimestamp: "2020-02-17T08:27:03Z"
finalizers:
- storagepoolclaim.openebs.io/finalizer
generation: 4
name: cstor-eks-pool
resourceVersion: "23690"
selfLink: /apis/openebs.io/v1alpha1/storagepoolclaims/cstor-eks-pool
uid: 46419b5d-515f-11ea-842f-02954e80f65c
spec:
blockDevices:
blockDeviceList:
- blockdevice-20e99af10f332336e6ac5d1bf7a7c704
- blockdevice-a764c47f99771e1470f8e862e6ad2f17
- blockdevice-c3518649d70b6692f3b683c3a7c89e60
maxPools: null
minPools: 0
name: cstor-eks-pool
poolSpec:
cacheFile: ""
overProvisioning: false
poolType: striped
type: disk
status:
phase: Online
versionDetails:
autoUpgrade: false
desired: 1.6.0
status:
current: 1.6.0
dependentsUpgraded: true
lastUpdateTime: null
state: ""
Details like <bdc-uid_corresponding_bd>, <blockdevice_name>, and <host_name> can be found using:
kubectl get bd <one of the new blockdevice name> -n openebs -o yaml
Sample command:
kubectl get bd blockdevice-03199d821af214edbef04d314de51961 -n openebs -o yaml
Sample Output:
apiVersion: openebs.io/v1alpha1
kind: BlockDevice
metadata:
creationTimestamp: "2020-02-17T10:10:59Z"
generation: 1
labels:
kubernetes.io/hostname: ip-192-168-91-218
ndm.io/blockdevice-type: blockdevice
ndm.io/managed: "true"
name: blockdevice-03199d821af214edbef04d314de51961
namespace: openebs
resourceVersion: "27834"
selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/blockdevices/blockdevice-03199d821af214edbef04d314de51961
uid: cb46cead-516d-11ea-b713-0ab77224a61a
spec:
capacity:
logicalSectorSize: 512
physicalSectorSize: 0
storage: 85899345920
details:
compliance: ""
deviceType: ""
firmwareRevision: ""
model: Amazon Elastic Block Store
serial: vol08fe6fc12da266eb8
vendor: ""
devlinks:
- kind: by-id
links:
- /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol08fe6fc12da266eb8
- /dev/disk/by-id/nvme-nvme.1d0f-766f6c3038666536666331326461323636656238-416d617a6f6e20456c617374696320426c6f636b2053746f7265-00000001
- kind: by-path
links:
- /dev/disk/by-path/pci-0000:00:1f.0-nvme-1
filesystem: {}
nodeAttributes:
nodeName: ip-192-168-91-218.ap-south-1.compute.internal
partitioned: "No"
path: /dev/nvme1n1
status:
claimState: Unclaimed
state: Active
From the above output, note down the volume ID (the serial field, vol08fe6fc12da266eb8 in this example) for later use.
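Putting these values together, a filled-in claim for this example blockdevice could look like the following. The BDC name here follows the bdc-<blockdevice uid> convention, which is an assumption; the hostname is taken from the blockdevice's kubernetes.io/hostname label and the other values come from the SPC and blockdevice outputs above.
apiVersion: openebs.io/v1alpha1
kind: BlockDeviceClaim
metadata:
  labels:
    openebs.io/storage-pool-claim: cstor-eks-pool
  name: bdc-cb46cead-516d-11ea-b713-0ab77224a61a
  namespace: openebs
  ownerReferences:
  - apiVersion: openebs.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: StoragePoolClaim
    name: cstor-eks-pool
    uid: 46419b5d-515f-11ea-842f-02954e80f65c
  finalizers:
  - storagepoolclaim.openebs.io/finalizer
spec:
  blockDeviceName: blockdevice-03199d821af214edbef04d314de51961
  blockDeviceNodeAttributes:
    hostName: ip-192-168-91-218
  deviceClaimDetails: {}
  deviceType: ""
  hostName: ""
  resources:
    requests:
      storage: 85899345920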
Once the BDC YAML is ready, apply it and the blockdevice will be claimed.
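For example, if the claim was saved to a file named bdc.yaml (a hypothetical file name):
kubectl apply -f bdc.yaml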
Verify the claim using the command below:
kubectl get bd -n openebs
Example Output:
NAME NODENAME SIZE CLAIMSTATE STATUS AGE
blockdevice-03199d821af214edbef04d314de51961 ip-192-168-91-218.ap-south-1.compute.internal 85899345920 Claimed Active 43m
blockdevice-0b4616bcf19d81ab2f99807e32923d05 ip-192-168-47-245.ap-south-1.compute.internal 85899345920 Unclaimed Active 42m
blockdevice-20e99af10f332336e6ac5d1bf7a7c704 ip-192-168-75-200.ap-south-1.compute.internal 85899345920 Claimed Unknown 157m
blockdevice-9d0654ff34269384270c813aee7c0208 ip-192-168-8-137.ap-south-1.compute.internal 85899345920 Unclaimed Active 42m
blockdevice-a764c47f99771e1470f8e862e6ad2f17 ip-192-168-62-243.ap-south-1.compute.internal 85899345920 Claimed Unknown 157m
blockdevice-c3518649d70b6692f3b683c3a7c89e60 ip-192-168-3-230.ap-south-1.compute.internal 85899345920 Claimed Unknown 157m
Now, search for the volume ID that was noted down from the blockdevice in the "kubectl get csp -o yaml" output that was kept safe during the prerequisites, and find which CSP uses this volume. Alternatively, the user can get the -o yaml output of each CSP individually and figure out which one uses the volume ID. The CSP name can be found in that output.
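One quick way to locate the CSP is to grep the live CSP objects (or the saved output from the prerequisites) for the volume ID and look at the name field printed just above the match; the context size of 20 lines is an arbitrary choice:
kubectl get csp -o yaml | grep -B 20 vol08fe6fc12da266eb8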
For example:
- apiVersion: openebs.io/v1alpha1
kind: CStorPool
metadata:
annotations:
openebs.io/csp-lease: '{"holder":"openebs/cstor-eks-pool-m7an-788c7cdbd5-vhg4z","leaderTransition":1}'
creationTimestamp: "2020-02-17T08:27:03Z"
generation: 12
labels:
kubernetes.io/hostname: ip-192-168-75-200
openebs.io/cas-template-name: cstor-pool-create-default-1.6.0
openebs.io/cas-type: cstor
openebs.io/storage-pool-claim: cstor-eks-pool
openebs.io/version: 1.6.0
name: cstor-eks-pool-m7an
ownerReferences:
- apiVersion: openebs.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: StoragePoolClaim
name: cstor-eks-pool
uid: 46419b5d-515f-11ea-842f-02954e80f65c
resourceVersion: "7077"
selfLink: /apis/openebs.io/v1alpha1/cstorpools/cstor-eks-pool-m7an
uid: 465dde34-515f-11ea-842f-02954e80f65c
spec:
group:
- blockDevice:
- deviceID: /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol08fe6fc12da266eb8
inUseByPool: true
name: blockdevice-20e99af10f332336e6ac5d1bf7a7c704
poolSpec:
cacheFile: ""
overProvisioning: false
poolType: striped
status:
capacity:
free: 79.5G
total: 79.5G
used: 15.6M
lastTransitionTime: "2020-02-17T08:27:52Z"
lastUpdateTime: "2020-02-17T08:32:51Z"
phase: Healthy
versionDetails:
autoUpgrade: false
desired: 1.6.0
status:
current: 1.6.0
dependentsUpgraded: true
lastUpdateTime: null
state: ""
Now, edit the same CSP and update it with the new blockdevice name and the new hostname that the user obtained from the new blockdevice. Use the command below.
kubectl edit csp <csp_name>
For example:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: openebs.io/v1alpha1
kind: CStorPool
metadata:
annotations:
openebs.io/csp-lease: '{"holder":"openebs/cstor-eks-pool-m7an-788c7cdbd5-vhg4z","leaderTransition":1}'
creationTimestamp: "2020-02-17T08:27:03Z"
generation: 94
labels:
kubernetes.io/hostname: ip-192-168-91-218
openebs.io/cas-template-name: cstor-pool-create-default-1.6.0
openebs.io/cas-type: cstor
openebs.io/storage-pool-claim: cstor-eks-pool
openebs.io/version: 1.6.0
name: cstor-eks-pool-m7an
ownerReferences:
- apiVersion: openebs.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: StoragePoolClaim
name: cstor-eks-pool
uid: 46419b5d-515f-11ea-842f-02954e80f65c
resourceVersion: "44278"
selfLink: /apis/openebs.io/v1alpha1/cstorpools/cstor-eks-pool-m7an
uid: 465dde34-515f-11ea-842f-02954e80f65c
spec:
group:
- blockDevice:
- deviceID: /dev/disk/by-id/nvme-Amazon_Elastic_Block_Store_vol08fe6fc12da266eb8
inUseByPool: true
name: blockdevice-03199d821af214edbef04d314de51961
poolSpec:
cacheFile: ""
overProvisioning: false
poolType: striped
status:
capacity:
free: 79.5G
total: 79.5G
used: 15.6M
lastTransitionTime: "2020-02-17T08:27:52Z"
lastUpdateTime: "2020-02-17T09:29:14Z"
phase: Healthy
versionDetails:
autoUpgrade: false
desired: 1.6.0
status:
current: 1.6.0
dependentsUpgraded: true
lastUpdateTime: null
state: ""
Now, get the corresponding deployment of the CSP. It can be identified by the CSP name.
kubectl get deploy -n openebs
Example output:
NAME READY UP-TO-DATE AVAILABLE AGE
cstor-eks-pool-0n8k 0/1 1 0 3h10m
cstor-eks-pool-fpe2 0/1 1 0 3h10m
cstor-eks-pool-m7an 0/1 1 0 3h10m
maya-apiserver 1/1 1 1 3h20m
openebs-admission-server 1/1 1 1 3h20m
openebs-localpv-provisioner 1/1 1 1 3h20m
openebs-ndm-operator 1/1 1 1 3h20m
openebs-provisioner 1/1 1 1 3h20m
openebs-snapshot-operator 1/1 1 1 3h20m
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-target 1/1 1 1 3h7m
Now edit the deployment and update the node selector with the new hostname obtained from the new blockdevice. Use the following command.
kubectl edit deploy <deployment_name> -n openebs
Example:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "2"
openebs.io/monitoring: pool_exporter_prometheus
creationTimestamp: "2020-02-17T08:27:03Z"
generation: 2
labels:
app: cstor-pool
openebs.io/cas-template-name: cstor-pool-create-default-1.6.0
openebs.io/cstor-pool: cstor-eks-pool-m7an
openebs.io/storage-pool-claim: cstor-eks-pool
openebs.io/version: 1.6.0
name: cstor-eks-pool-m7an
namespace: openebs
ownerReferences:
- apiVersion: openebs.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: CStorPool
name: cstor-eks-pool-m7an
uid: 465dde34-515f-11ea-842f-02954e80f65c
resourceVersion: "46232"
selfLink: /apis/extensions/v1beta1/namespaces/openebs/deployments/cstor-eks-pool-m7an
uid: 4661790f-515f-11ea-842f-02954e80f65c
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: cstor-pool
strategy:
type: Recreate
template:
metadata:
annotations:
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
openebs.io/monitoring: pool_exporter_prometheus
prometheus.io/path: /metrics
prometheus.io/port: "9500"
prometheus.io/scrape: "true"
creationTimestamp: null
labels:
app: cstor-pool
openebs.io/cstor-pool: cstor-eks-pool-m7an
openebs.io/storage-pool-claim: cstor-eks-pool
openebs.io/version: 1.6.0
spec:
containers:
- env:
- name: OPENEBS_IO_CSTOR_ID
value: 465dde34-515f-11ea-842f-02954e80f65c
image: quay.io/openebs/cstor-pool:1.6.0
imagePullPolicy: IfNotPresent
lifecycle:
postStart:
exec:
command:
- /bin/sh
- -c
- sleep 2
livenessProbe:
exec:
command:
- /bin/sh
- -c
- timeout 120 zfs set io.openebs:livenesstimestamp="$(date +%s)" cstor-$OPENEBS_IO_CSTOR_ID
failureThreshold: 3
initialDelaySeconds: 300
periodSeconds: 60
successThreshold: 1
timeoutSeconds: 150
name: cstor-pool
ports:
- containerPort: 12000
protocol: TCP
- containerPort: 3233
protocol: TCP
- containerPort: 3232
protocol: TCP
resources:
limits:
memory: 4Gi
requests:
memory: 2Gi
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /dev
name: device
- mountPath: /tmp
name: tmp
- mountPath: /var/openebs/sparse
name: sparse
- mountPath: /run/udev
name: udev
- env:
- name: OPENEBS_IO_CSTOR_ID
value: 465dde34-515f-11ea-842f-02954e80f65c
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: RESYNC_INTERVAL
value: "30"
image: quay.io/openebs/cstor-pool-mgmt:1.6.0
imagePullPolicy: IfNotPresent
name: cstor-pool-mgmt
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /dev
name: device
- mountPath: /tmp
name: tmp
- mountPath: /var/openebs/sparse
name: sparse
- mountPath: /run/udev
name: udev
- args:
- -e=pool
command:
- maya-exporter
image: quay.io/openebs/m-exporter:1.6.0
imagePullPolicy: IfNotPresent
name: maya-exporter
ports:
- containerPort: 9500
protocol: TCP
resources: {}
securityContext:
privileged: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /dev
name: device
- mountPath: /tmp
name: tmp
- mountPath: /var/openebs/sparse
name: sparse
- mountPath: /run/udev
name: udev
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/hostname: ip-192-168-91-218
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: openebs-maya-operator
serviceAccountName: openebs-maya-operator
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /dev
type: Directory
name: device
- hostPath:
path: /var/openebs/sparse/shared-cstor-eks-pool
type: DirectoryOrCreate
name: tmp
- hostPath:
path: /var/openebs/sparse
type: DirectoryOrCreate
name: sparse
- hostPath:
path: /run/udev
type: Directory
name: udev
status:
conditions:
- lastTransitionTime: "2020-02-17T09:29:18Z"
lastUpdateTime: "2020-02-17T09:29:18Z"
message: Deployment does not have minimum availability.
reason: MinimumReplicasUnavailable
status: "False"
type: Available
- lastTransitionTime: "2020-02-17T08:27:03Z"
lastUpdateTime: "2020-02-17T11:43:51Z"
message: ReplicaSet "cstor-eks-pool-m7an-7c5b9d48c5" is progressing.
reason: ReplicaSetUpdated
status: "True"
type: Progressing
observedGeneration: 2
replicas: 1
unavailableReplicas: 1
updatedReplicas: 1
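If editing interactively is inconvenient, the node selector can also be updated with a merge patch; a sketch using the example deployment name and hostname from above:
kubectl patch deploy cstor-eks-pool-m7an -n openebs --type merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"ip-192-168-91-218"}}}}}'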
Now, get the corresponding CVR of the CSP. It can be identified by the CSP name. Use the command below.
kubectl get cvr -n openebs
Example output:
NAME USED ALLOCATED STATUS AGE
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-0n8k 53.6M 15.2M Healthy 3h30m
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-fpe2 53.6M 15.2M Healthy 3h30m
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-m7an 53.7M 15.2M Degraded 3h30m
Next, edit the corresponding CVR and update the hostname annotation. Use the following command to edit the CVR.
kubectl edit cvr <cvr_name> -n openebs
Example:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: openebs.io/v1alpha1
kind: CStorVolumeReplica
metadata:
annotations:
cstorpool.openebs.io/hostname: ip-192-168-91-218
isRestoreVol: "false"
openebs.io/storage-class-ref: |
name: openebs-eks-sc
resourceVersion: 5806
creationTimestamp: "2020-02-17T08:29:38Z"
finalizers:
- cstorvolumereplica.openebs.io/finalizer
generation: 160
labels:
cstorpool.openebs.io/name: cstor-eks-pool-m7an
cstorpool.openebs.io/uid: 465dde34-515f-11ea-842f-02954e80f65c
cstorvolume.openebs.io/name: pvc-a1a74277-515f-11ea-ba4a-06f17475aa56
openebs.io/cas-template-name: cstor-volume-create-default-1.6.0
openebs.io/persistent-volume: pvc-a1a74277-515f-11ea-ba4a-06f17475aa56
openebs.io/version: 1.6.0
name: pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-m7an
namespace: openebs
resourceVersion: "53068"
selfLink: /apis/openebs.io/v1alpha1/namespaces/openebs/cstorvolumereplicas/pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-m7an
uid: a3228e3a-515f-11ea-842f-02954e80f65c
spec:
capacity: 5G
replicaid: 28E489716E4E744871E8A3D89A044D64
targetIP: 10.100.224.33
zvolWorkers: ""
status:
capacity:
totalAllocated: 15.2M
used: 53.7M
lastTransitionTime: "2020-02-17T11:44:13Z"
lastUpdateTime: "2020-02-17T12:17:38Z"
phase: Degraded
versionDetails:
autoUpgrade: false
desired: 1.6.0
status:
current: 1.6.0
dependentsUpgraded: true
lastUpdateTime: null
state: ""
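The hostname annotation on the CVR can likewise be set non-interactively; for example, using the CVR name and hostname from above:
kubectl annotate cvr pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-m7an -n openebs cstorpool.openebs.io/hostname=ip-192-168-91-218 --overwrite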
Now repeat the same steps for the other two blockdevices and their corresponding CSP, deployment, and CVR. Once all of them are updated, all the cStor pool pods will be in the Running state and the CVRs will be in a Healthy state. The application pod will also start running.
To check the status of the pods running in the openebs namespace, execute:
kubectl get pods -n openebs
Output:
NAME READY STATUS RESTARTS AGE
cstor-eks-pool-0n8k-7cd67b9f4d-c28t6 3/3 Running 0 9m34s
cstor-eks-pool-fpe2-69f85dcb8c-b2xrg 3/3 Running 0 75s
cstor-eks-pool-m7an-7c5b9d48c5-j95vg 3/3 Running 0 58m
maya-apiserver-58c68c9f8f-z772s 1/1 Running 0 3h13m
openebs-admission-server-6b77fd668b-zrpkn 1/1 Running 0 3h13m
openebs-localpv-provisioner-869fbc885f-mkgxm 1/1 Running 0 3h13m
openebs-ndm-7v6pc 1/1 Running 0 3h31m
openebs-ndm-jp4pm 1/1 Running 0 3h30m
openebs-ndm-operator-7c6cc67c49-gcq7x 1/1 Running 0 3h13m
openebs-ndm-qq6wc 1/1 Running 0 3h31m
openebs-provisioner-744bbc9496-7jh5m 1/1 Running 0 3h13m
openebs-snapshot-operator-58cb57bd5b-cz6tk 2/2 Running 0 3h13m
openebs-ubuntu-init-68vk5 1/1 Running 0 3h30m
openebs-ubuntu-init-b52kv 1/1 Running 0 3h31m
openebs-ubuntu-init-zjh54 1/1 Running 0 3h31m
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-target-cb869d444-s98bp 3/3 Running 0 3h13m
To check the status of CVR, execute:
kubectl get cvr -n openebs
Output:
NAME USED ALLOCATED STATUS AGE
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-0n8k 52.5M 14.7M Healthy 4h13m
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-fpe2 52.5M 14.7M Healthy 4h13m
pvc-a1a74277-515f-11ea-ba4a-06f17475aa56-cstor-eks-pool-m7an 52.5M 14.7M Healthy 4h13m
Now, to verify that the application pod is running, execute:
kubectl get pods
Sample Output:
NAME READY STATUS RESTARTS AGE
percona-66db7d9b88-wgp8v 1/1 Running 0 3h13m