This article discusses setting up a Microsoft Azure Kubernetes Service (AKS) managed Kubernetes cluster with OpenEBS-based cross-zone high-availability storage.
Why cross-zone?
Microsoft's Azure Kubernetes Service (AKS) is one of the better-known names in the world of cloud-based Kubernetes service providers. As of today, AKS users can choose from over 50 availability regions (read more), and 10 of those regions offer multiple 'Availability Zones' within the same region. Kubernetes cluster nodes in different availability zones are geographically set apart, so in the event of a power outage or any other failure in one of the data centers, cross-zone clusters remain live with negligible downtime. Although such failures are rare, multi-zone clusters are the next step in achieving high availability when they do occur. Here are some of the benefits of using one:
- With pod-replication practices in place, there is minimal downtime in the event of a node failure. Your service remains live, hosted in a remote location away from the failure zone.
- With node failure there is always a risk of data loss for stateful applications, databases, etc. This can be mitigated if your application supports data replication. CAS solutions like OpenEBS use live replica containers spread across zones for your high-availability storage needs.
Relevant links for getting started with a multi-zone AKS cluster
- Create an Azure Kubernetes Service (AKS) cluster that uses Availability Zones
- Azure Portal
- Sign in with Azure CLI
- Azure Cloud Shell
- Overview of Azure Cloud Shell
- Introduction to Azure managed disks
- What disk types are available in Azure?
- Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes for maintenance or troubleshooting
Note: This tutorial is based entirely on the Azure Cloud Shell. You may want to use the Azure Portal for monitoring purposes: it acts as an intuitive dashboard while you execute the functional tasks in the Cloud Shell console. The Cloud Shell is also accessible from within the Azure Portal; read more here.
Getting started with OpenEBS
Follow the steps below to install OpenEBS on your cluster.
STEP 1
Execute the following command to install the latest version of OpenEBS:
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml
STEP 2
To verify the creation of the openebs namespace, execute:
kubectl get ns
Sample Output:
NAME STATUS AGE
default Active 15m
kube-node-lease Active 15m
kube-public Active 15m
kube-system Active 15m
openebs Active 14s
To verify the creation and status of the OpenEBS pods, execute the following command:
kubectl get pods -n openebs
Sample Output:
NAME READY STATUS RESTARTS AGE
maya-apiserver-6ffbfbb8b5-clf8c 1/1 Running 2 2m13s
openebs-admission-server-77d5c698fb-pmmnl 1/1 Running 0 2m8s
openebs-localpv-provisioner-7f66548fc-x7mcv 1/1 Running 0 2m7s
openebs-ndm-hzj7w 1/1 Running 0 2m11s
openebs-ndm-operator-7ff846d998-zkqdq 1/1 Running 1 2m10s
openebs-ndm-xtd8s 1/1 Running 0 2m11s
openebs-ndm-zw62j 1/1 Running 0 2m11s
openebs-provisioner-65958fbc48-6w8f9 1/1 Running 0 2m12s
openebs-snapshot-operator-6bc88fcb49-q7nh7 2/2 Running 0 2m12s
An easier way to monitor the pods and other components used in the following steps is with OpenEBS Director's online and on-premise solutions. To connect Director to your cluster, click here.
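The operator also creates a set of default StorageClasses. As a quick sanity check (the exact names can vary slightly between OpenEBS versions), you can list them with:
kubectl get sc | grep openebs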
Provisioning OpenEBS volumes
OpenEBS can provision three types of volumes, backed by different storage engines: cStor, Jiva, and Local PV. Read more about them in the OpenEBS docs using the links given below before proceeding with any one of them:
Prerequisites
cStor requires one or more unmounted disks, with no filesystem, attached to a node. If you already have such a disk (or disks) ready, you may skip the following section; otherwise, proceed with the following steps to create and attach a disk.
Quickstart guide to disk creation and attaching
Note: cStor requires an unmounted disk with no filesystem on it. All AKS Linux VMs come with a data disk mounted at /mnt and formatted in ext4. If this disk meets your capacity requirements, you can use it to deploy a cStor volume after unmounting it and removing its filesystem. Follow the steps below to unmount the default data disk and remove its filesystem.
You can enquire about the capacity and other details of the disks attached to a node using the following commands, substituting:
- the name of your resource group for <resource-group-name>
- the name of your AKS cluster for <cluster-name>
- the instance ID of the particular VM instance you want to check for disk information (usually numbered 0, 1, 2, ...) for <instance-id>
STEP 1
Use the following commands to create the CLUSTER_RESOURCE_GROUP and SCALE_SET_NAME variables. These variables will be used in the steps that follow.
CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group <resource-group-name> --name <cluster-name> --query nodeResourceGroup -o tsv)
SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
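As a quick sanity check (not part of the original steps), you can confirm that both variables were populated before moving on:
echo "Resource group: $CLUSTER_RESOURCE_GROUP"
echo "Scale set: $SCALE_SET_NAME"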
STEP 2
Use the following command to run lsblk on your VM instance. The lsblk command lists all block devices attached to the node.
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL" --query value -o table
Sample Output:
Code Level DisplayStatus Message
--------------------------- ------- ---------------------- ----------------------------------------------
ProvisioningState/succeeded Info Provisioning succeeded Enable succeeded:
[stdout]
NAME FSTYPE SIZE MOUNTPOINT LABEL
sdb 16G
└─sdb1 ext4 16G /mnt
sr0 694K
sda 40G
└─sda1 ext4 40G / cloudimg-rootfs
[stderr]
Here, there is a disk with an ext4 partition at /dev/sdb1.
STEP 3
Once you find the data disk, run the umount command to unmount its filesystem:
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "sudo umount /mnt" --query value -o table
STEP 4
Next, run the wipefs command to wipe the device's filesystem:
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "sudo wipefs -af /dev/sdb" --query value -o table
STEP 5
To verify, we run lsblk on the instance again. The disk device should no longer have a mountpoint, partitions, or a filesystem:
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL" --query value -o table
Sample Output:
Code Level DisplayStatus Message
--------------------------- ------- ---------------------- ----------------------------------------------
ProvisioningState/succeeded Info Provisioning succeeded Enable succeeded:
[stdout]
NAME FSTYPE SIZE MOUNTPOINT LABEL
sdb 16G
sr0 694K
sda 40G
└─sda1 ext4 40G / cloudimg-rootfs
[stderr]
The device /dev/sdb can now be used to provision a cStor volume.
Note:
It is recommended that you:
- create disks using the az disk create command and attach them to a VM instance using the az vmss disk attach command, in Azure Cloud Shell.
It is not recommended that you:
- use the 'Storage' setting under the 'Virtual Machine Scale Set' settings in the Azure Portal to create and attach disks, as it may sometimes work unexpectedly.
- use az vmss disk attach to directly create and attach disks, instead of using az disk create to create the disk, as it may sometimes work unexpectedly.
Note: If you want to use a partitioned disk device (it must not have a filesystem), then you may follow the detailed guidelines here.
We create the CLUSTER_RESOURCE_GROUP and SCALE_SET_NAME variables. They will be required in the step that follows.
CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group <resource-group-name> --name <cluster-name> --query nodeResourceGroup -o tsv)
SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
Use the following command to find the path to the disk. Make a note of the Instance ID and the path to the disk device, as they will be required for identifying the blockdevice resources for the cStor pool.
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL" --query value -o table
Deploying cStor volume
Provisioning a cStor volume involves the following 3 steps:
- Creating and deploying a StoragePool spec.
- Creating and deploying a StorageClass spec.
- Creating and deploying a PersistentVolumeClaim spec.
1. Creating and deploying a StoragePool spec
Follow the steps below to create a StoragePool for your cStor deployment.
STEP 1
We will recreate the SCALE_SET_NAME variable. We will use it in the next step. Execute:
SCALE_SET_NAME=$(az vmss list --resource-group $(az aks show --resource-group <resource-group-name> --name <cluster-name> --query nodeResourceGroup -o tsv) --query [0].name -o tsv)
STEP 2
Execute the following command to list the Active and Unclaimed blockdevices in the openebs namespace:
kubectl get blockdevice -n openebs | grep -P '^(?=.*Unclaimed)(?=.*Active)'
If there is no output, then there are no available active and unclaimed blockdevices, and you cannot provision a cStor volume at the moment.
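In that case, a quick way to investigate (a general troubleshooting sketch, not from the original guide) is to list all blockdevices regardless of state and inspect any that are Inactive or Claimed:
kubectl get blockdevice -n openebs
kubectl describe blockdevice <blockdevice-name> -n openebs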
If you see devices listed, execute the following command to see the Node Names of the VMs they are attached to and their /dev paths. The last few digits in the 'Node Name' indicate the instance ID of the VM. Make a note of the 'Name' fields; you will have to copy these names into the StoragePool spec file (further instructions below).
kubectl get blockdevice -n openebs | grep -P '^(?=.*Unclaimed)(?=.*Active)' | awk '{print $1;}' | xargs kubectl describe blockdevice -n openebs | grep -e "\<Name\>" -e 'Node Name' -e 'Path'| sed "0~3 s/$/\n\n/g"
The following is an example of a StoragePool spec. You will have to remove the sample blockdevice names and replace them with the names of your blockdevices. You may make the following changes as per your preference:
- metadata.name
- spec.name
- spec.poolSpec.poolType (Read more at: OpenEBS docs)
Create a file called cstor-pool-config.yaml. The contents of the file should be as follows:
apiVersion: openebs.io/v1alpha1
kind: StoragePoolClaim
metadata:
  name: cstor-disk-pool
  annotations:
    cas.openebs.io/config: |
      - name: PoolResourceRequests
        value: |-
          memory: 2Gi
      - name: PoolResourceLimits
        value: |-
          memory: 4Gi
spec:
  name: cstor-disk-pool
  type: disk
  poolSpec:
    poolType: striped
  blockDevices:
    blockDeviceList:
    ## Replace the following with actual blockDevice CRs from your cluster
    - blockdevice-66a74896b61c60dcdaf7c7a76fde0ebb
    - blockdevice-b34b3f97840872da9aa0bac1edc9578a
    - blockdevice-ce41f8f5fa22acb79ec56292441dc207
STEP 3
Deploy the StoragePool spec by executing the command:
kubectl apply -f cstor-pool-config.yaml
STEP 4
Verify the creation of the StoragePool by executing the following command:
kubectl get spc
Sample Output:
NAME              AGE
cstor-disk-pool   4m53s
You may also verify csp status. Execute:
kubectl get csp
Sample Output:
NAME                   ALLOCATED   FREE    CAPACITY   STATUS    TYPE      AGE
cstor-disk-pool-9pup   155K        15.9G   15.9G      Healthy   striped   8m51s
cstor-disk-pool-m2wx   186K        15.9G   15.9G      Healthy   striped   8m51s
cstor-disk-pool-t4in   197K        15.9G   15.9G      Healthy   striped   8m51s
You can confirm that the pool pods are running by executing the following command:
kubectl get pods -n openebs | grep <poolname>
Sample command:
kubectl get pods -n openebs | grep cstor-disk-pool
Sample Output:
cstor-disk-pool-9pup-7644964899-2shbf   3/3   Running   0   9m54s
cstor-disk-pool-m2wx-668fcfb9f6-hz9mp   3/3   Running   0   9m54s
cstor-disk-pool-t4in-7bcb668686-zxl7q   3/3   Running   0   9m54s
2. Creating and deploying a StorageClass spec
The next step is to create and deploy a StorageClass. Follow the steps below.
STEP 1
To create a StorageClass spec, create a file called cstor-sc.yaml. The following YAML file works well if you have used the sample StoragePool spec given in the previous section. You may use different values for the following fields in the spec given below:
- metadata.name
- the ReplicaCount value "<replica-count>" under metadata.annotations.cas.openebs.io/config (this is the number of volume replicas you want to create)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-sc-statefulset
  annotations:
    openebs.io/cas-type: cstor
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-disk-pool"
      - name: ReplicaCount
        value: "3"   ## ReplicaCount value should always be less than or equal to the number of pool instances
provisioner: openebs.io/provisioner-iscsi
STEP 2
Deploy the StorageClass spec by using the following command:
kubectl apply -f cstor-sc.yaml
STEP 3
To verify, execute:
kubectl get sc
Sample Output:
NAME                        PROVISIONER                                                 AGE
default (default)           kubernetes.io/azure-disk                                    118m
managed-premium             kubernetes.io/azure-disk                                    118m
openebs-device              openebs.io/local                                            60m
openebs-hostpath            openebs.io/local                                            60m
openebs-jiva-default        openebs.io/provisioner-iscsi                                60m
openebs-sc-statefulset      openebs.io/provisioner-iscsi                                10s
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   60m
3. Creating and deploying a PersistentVolumeClaim spec
We need not apply the PVC separately. We can include it in the application YAML spec file. In our example deployment we will deploy the PVC YAML in the spec file for the application. A sample PVC is shown here to outline the fields involved in the YAML file. You may change the following fields to meet your preferences:
- metadata.name
- spec.resources.requests.storage (this is the size of the volume)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cstor-pvc-mysql-large
spec:
  storageClassName: openebs-sc-statefulset
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
You can deploy applications in your cluster and provision storage for them using the above PVC. The link below describes the process for deploying and provisioning with some sample applications:
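As a minimal sketch of what that looks like (not from the original guide; the pod name, image, and mount path below are illustrative assumptions), a pod can consume the PVC defined above like this:
apiVersion: v1
kind: Pod
metadata:
  name: cstor-pvc-demo-pod            # hypothetical name for illustration
spec:
  containers:
  - name: app
    image: busybox                    # placeholder image; use your actual application image
    command: ["sh", "-c", "echo 'hello from a cStor volume' > /data/hello.txt && sleep 3600"]
    volumeMounts:
    - name: demo-storage
      mountPath: /data                # assumed mount path inside the container
  volumes:
  - name: demo-storage
    persistentVolumeClaim:
      claimName: cstor-pvc-mysql-large   # the PVC from the sample spec above
Once the PVC is bound and the pod is running, the pod's /data directory is backed by the replicated cStor pool.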
Prerequisites
Jiva requires a disk mounted to a node and formatted with an ext4 or xfs filesystem. If you already have such a disk ready, you may skip this section; otherwise, proceed with the following steps to create and mount a disk.
Quickstart guide to disk creation and mounting
Note: Jiva requires a formatted disk with a filesystem. All AKS Linux VMs come with a data disk mounted at /mnt and formatted in ext4. You can use this disk to provision Jiva storage if it meets your requirements; in that case, you do not need to create a separate disk device as described in the following steps.
You can enquire about the capacity and other details of the disks attached to a node using the following commands, substituting:
- the name of your resource group for <resource-group-name>
- the name of your AKS cluster for <cluster-name>
- the instance ID of the particular VM you want to check for disk information (usually numbered 0, 1, 2, ...) for <instance-id>
STEP 1
Use the following commands to create the CLUSTER_RESOURCE_GROUP and SCALE_SET_NAME variables. These variables will be used in the steps that follow.
CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group <resource-group-name> --name <cluster-name> --query nodeResourceGroup -o tsv)
SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
STEP 2
Use the following command to run lsblk on your VM instance. The lsblk command lists all block devices attached to the node.
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL" --query value -o table
Sample Output:
Code Level DisplayStatus Message
--------------------------- ------- ---------------------- ----------------------------------------------
ProvisioningState/succeeded Info Provisioning succeeded Enable succeeded:
[stdout]
NAME FSTYPE SIZE MOUNTPOINT LABEL
sdb 16G
└─sdb1 ext4 16G /mnt
sr0 694K
sda 40G
└─sda1 ext4 40G / cloudimg-rootfs
[stderr]
Here we can use /dev/sdb1 to provision Jiva storage.
Note:
It is recommended that you:
- create disks using the az disk create command and attach them to a VM instance using the az vmss disk attach command, in Azure Cloud Shell.
It is not recommended that you:
- use the 'Storage' setting under the 'Virtual Machine Scale Set' settings in the Azure Portal to create and attach disks, as it may sometimes work unexpectedly.
- use az vmss disk attach to directly create and attach disks, instead of using az disk create to create the disk, as it may sometimes work unexpectedly.
Creating filesystem and mounting it
STEP 1
Use the following commands to create the CLUSTER_RESOURCE_GROUP and SCALE_SET_NAME variables. These variables will be used in the step that follows.
CLUSTER_RESOURCE_GROUP=$(az aks show --resource-group <resource-group-name> --name <cluster-name> --query nodeResourceGroup -o tsv)
SCALE_SET_NAME=$(az vmss list --resource-group $CLUSTER_RESOURCE_GROUP --query [0].name -o tsv)
STEP 2
Use the following command to find the path to the disk. Make a note of the path to the disk device, as it will be required in the following steps when we format and mount it.
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL" --query value -o table
Sample Output:
Code Level DisplayStatus Message
--------------------------- ------- ---------------------- ----------------------------------------------
ProvisioningState/succeeded Info Provisioning succeeded Enable succeeded:
[stdout]
NAME FSTYPE SIZE MOUNTPOINT LABEL
sdb 16G
└─sdb1 ext4 16G /mnt
sr0 694K
sdc 32G
sda 40G
└─sda1 ext4 40G / cloudimg-rootfs
[stderr]
Here, /dev/sdc is the disk we are going to format and mount.
STEP 3
Execute the following command to format the disk in ext4, substituting the path to the disk device for <disk-path>.
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "sudo mkfs.ext4 <disk-path>" --query value -o table
This is optional: execute the following command to create a new directory on which to mount the disk, substituting the name of the new directory for <new-dir>:
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "sudo mkdir <new-dir>" --query value -o table
STEP 4
Execute the following command to mount the disk to the desired mountpoint <mount-pt> (you may use the <new-dir> created in the previous step):
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "sudo mount <disk-path> <mount-pt>" --query value -o table
Working example:
We are creating a new directory at /home/openebs-gpd and using it to mount our disk at /dev/sdc.
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id 0 --scripts "sudo mkfs.ext4 /dev/sdc" --query value -o table
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id 0 --scripts "sudo mkdir /home/openebs-gpd" --query value -o table
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id 0 --scripts "sudo mount /dev/sdc /home/openebs-gpd" --query value -o table
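Note that a mount created this way does not persist across a node reboot. If you need it to survive reboots, one option (a sketch, assuming you are comfortable editing /etc/fstab on the node; using the filesystem UUID from blkid is generally preferable to the raw device path) is to add an fstab entry:
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id 0 --scripts "echo '/dev/sdc /home/openebs-gpd ext4 defaults 0 0' | sudo tee -a /etc/fstab" --query value -o table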
STEP 5
Use the following command to verify:
az vmss run-command invoke --resource-group $CLUSTER_RESOURCE_GROUP --name $SCALE_SET_NAME --command-id RunShellScript --instance-id <instance-id> --scripts "df -h --output=source,fstype,size,used,avail,pcent,target -x tmpfs -x devtmpfs" --query value -o table
Sample Output:
Code Level DisplayStatus Message
--------------------------- ------- ---------------------- ----------------------------------------------------------------------------------------------------------------------------------------------
ProvisioningState/succeeded Info Provisioning succeeded Enable succeeded:
[stdout]
Filesystem Type Size Used Avail Use% Mounted on
/dev/sda1 ext4 39G 11G 28G 28% /
/dev/sdb1 ext4 16G 44M 15G 1% /mnt
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/6f97a65ad35a29f2d3954b28e22b5b912871a20d01ce89ebb9b8eebeda427d79/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/784dacc44ece519abb6be59112e05bad01e8966e61205f50656aaf0d0726e63b/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/cbf2445fda54dc44e2d52f81cef0da08bd86e9545c206ee4b0df27116e9d4585/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/d7a1fd4c1504ff6162d62adc9d70bb8921b02c7defb87b4cab18f807a1670829/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/fb7739232a4ba579d006002c107ce42d95fbeee97feda21f6f48a71c88c666f3/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/0c8831434a0d9e9402b992288383a20754b7c1b360033c252b657e29bf8142bf/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/b61e766aa6bd8169087251c7d92db90d97625b6ad91951004df57a314318cf09/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/3dcb6beb562173ab1bc2198a39c4dfe3f611292f9af48de324c038e7339cc975/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/49b78379dc1807c8b2061449c0f7849ee274a2496f39c709f21779652eb444b1/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/047727c10c5ff6b2b709dd6dc51df710586f458b024b98eb5a948238dde8c36e/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/0104efb0cac6e45c4c0a555b9b5d246f7434820d36b50b6b23ef13ecff9d4348/merged
overlay overlay 39G 11G 28G 28% /var/lib/docker/overlay2/725d72b5f819bc4c48078ec0251727f2e747161eeac9e78cedae482392694b34/merged
/dev/sdc ext4 32G 48M 30G 1% /home/openebs-gpd
[stderr]
Deploying Jiva volume
Provisioning a Jiva volume involves the following 3 steps:
- Creating and deploying a StoragePool spec.
- Creating and deploying a StorageClass spec.
- Creating and deploying a PersistentVolumeClaim spec.
1. Creating and deploying a StoragePool spec
Follow the steps below to create a StoragePool for your Jiva deployment. You will need to know the path to the mount point of your disk device, <mount-pt>.
STEP 1
The following is an example of a StoragePool spec. It works well as long as your target disk device is mounted at /home/openebs-gpd and you use the same StoragePool name throughout the process. You may make the following changes as per your preference:
- metadata.name
- spec.path (you may use the <mount-pt> you created in the disk mounting steps)
Create a file called jiva-gpd-pool.yaml, using your <mount-pt> as the value of spec.path. The contents of the file should be as follows:
apiVersion: openebs.io/v1alpha1
kind: StoragePool
metadata:
  name: gpdpool
  type: hostdir
spec:
  path: "/home/openebs-gpd"
STEP 2
Deploy the StoragePool spec by executing the command:
kubectl apply -f jiva-gpd-pool.yaml
STEP 3
Verify the creation of the StoragePool using the following command:
kubectl get storagepool
Sample Output:
NAME      AGE
default   32m
gpdpool   3m17s
2. Creating and deploying a StorageClass spec
The next step is to create and deploy a StorageClass. Follow the steps below.
STEP 1
To create a StorageClass spec, create a file called jiva-gpd-3repl-sc.yaml. The following YAML file works well if you have used the sample StoragePool spec given in the previous section. You may use different values for the following fields in the spec given below:
- metadata.name
- the ReplicaCount value "<replica-count>" under metadata.annotations.cas.openebs.io/config (this is the number of volume replicas you want to create; it should be less than or equal to the number of pool instances)
The ReplicaAntiAffinityTopoKey value must be set to failure-domain.beta.kubernetes.io/zone to prevent replicas from being created in the same availability zone.
For more information on setting up Jiva policies, click here.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-jiva-gpd-3repl
  annotations:
    openebs.io/cas-type: jiva
    cas.openebs.io/config: |
      - name: ReplicaCount
        value: "3"   ## ReplicaCount value should always be less than or equal to the number of pool instances
      - name: StoragePool
        value: gpdpool
      - name: ReplicaAntiAffinityTopoKey
        value: failure-domain.beta.kubernetes.io/zone
provisioner: openebs.io/provisioner-iscsi
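Before applying the class, you may want to confirm that your nodes actually carry the zone label referenced above (a quick check, not part of the original steps; on newer Kubernetes versions the equivalent label is topology.kubernetes.io/zone):
kubectl get nodes -L failure-domain.beta.kubernetes.io/zone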
Deploy the YAML using the following command:
kubectl apply -f jiva-gpd-3repl-sc.yaml
STEP 2
To verify, execute:
kubectl get sc
Sample Output:
NAME                        PROVISIONER                                                 AGE
default (default)           kubernetes.io/azure-disk                                    45m
managed-premium             kubernetes.io/azure-disk                                    45m
openebs-device              openebs.io/local                                            36m
openebs-hostpath            openebs.io/local                                            36m
openebs-jiva-default        openebs.io/provisioner-iscsi                                36m
openebs-jiva-gpd-3repl      openebs.io/provisioner-iscsi                                69s
openebs-snapshot-promoter   volumesnapshot.external-storage.k8s.io/snapshot-promoter   36m
3. Creating and deploying a PersistentVolumeClaim spec
We need not apply the PVC separately. We can include it in the application YAML spec file. In our example deployment we will deploy the PVC YAML in the spec file for the application. A sample PVC is shown here to outline the fields involved in the YAML file. You may change the following fields to meet your preferences:
- metadata.name
- spec.resources.requests.storage (this is the size of the volume)
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol1-claim
spec:
  storageClassName: openebs-jiva-gpd-3repl
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4G
Provisioning an Application
You can deploy applications in your cluster and provision storage for them using the above PVC. The link below describes the process for deploying and provisioning with some sample applications:
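Once an application has claimed the volume, you can check where the Jiva target and replica pods were scheduled; with ReplicaAntiAffinityTopoKey set, the replicas should land on nodes in different zones. A rough way to check (a sketch, not from the original guide; the namespace the Jiva pods run in depends on your OpenEBS version and policies):
# Find the PersistentVolume bound to the sample claim
kubectl get pvc demo-vol1-claim -o jsonpath='{.spec.volumeName}'
# List the Jiva controller and replica pods for that PV, along with the nodes they run on
kubectl get pods --all-namespaces -o wide | grep <pv-name>
You can then compare the node names against the zone labels shown by the kubectl get nodes command used earlier.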
Provisioning Local PV volumes
Local PV volumes can be provisioned on the OS disk itself or on a separate disk device. There are two types of storage that can be provisioned: hostpath and device. Local PV storage is not provisioned until an application is deployed, but it is then provisioned dynamically and automatically as long as the volumeBindingMode field in the StorageClass spec is set to WaitForFirstConsumer (default StorageClasses such as openebs-hostpath and openebs-device are already created by the OpenEBS operator; a custom StorageClass sketch is shown after the sample PVC specs below).
A sample hostpath-based PersistentVolumeClaim spec file will look like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol1-claim
spec:
  storageClassName: openebs-hostpath
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
A sample device-based PersistentVolumeClaim spec file will look like this:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: demo-vol1-claim
spec:
  storageClassName: openebs-device
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5G
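If you prefer a custom Local PV class instead of the defaults, here is a minimal StorageClass sketch (the class name and BasePath below are illustrative assumptions) showing the volumeBindingMode setting discussed above:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-hostpath-custom         # hypothetical name for illustration
  annotations:
    openebs.io/cas-type: local
    cas.openebs.io/config: |
      - name: StorageType
        value: hostpath
      - name: BasePath
        value: /var/openebs/local     # assumed base path on the node; adjust to your layout
provisioner: openebs.io/local
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
A PVC that references this class will stay Pending until a pod that uses it is scheduled, at which point the volume is created on that pod's node.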
We will deploy the PVC spec along with the application spec file. Read more about deploying sample applications in the links below: