Kubera OnPrem is included in the Kubera Enterprise subscription. It refers to the on-premises deployment of Kubera components and services. Enterprise users who are bound by network restriction policies might encounter problems while connecting their Kubernetes cluster to Kubera. In such cases, they can download Kubera OnPrem Director and deploy it within their own environment.
Prerequisites
1. Kubernetes 1.12.0 or above.
2. Allocating 4 vCPU and 15GB RAM for Kubera OnPrem components is recommended.
3. Ensure the iSCSI client is set up and the iscsid service is running on all worker nodes (a prerequisite for provisioning cStor and Jiva volumes).
NOTE: If you are deploying Kubera OnPrem version 2.1.0 or later on any of the following platforms: Ubuntu, CentOS, RHEL, or Amazon Linux, Kubera sets up the iSCSI client for you.
If you are using a platform other than the ones mentioned above, or deploying an OnPrem version earlier than 2.1.0, ensure the iSCSI client is set up manually. To know more about it, click here.
4. Ensure that port 80 of the node on which Kubera OnPrem is deployed is open and accessible.
5. Pod CIDR should be one of these values:
- 10.0.0.0/8
- 100.64.0.0/10
- 127.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
6. Helm should be installed.
- Helm must be installed on the master node.
- This document covers both Helm 2 and Helm 3; however, Helm 3 is the recommended version.
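Before proceeding, it can help to verify the prerequisites from the command line. The commands below are a sketch assuming systemd-based nodes with kubectl and helm on the PATH; adjust the service check for your distribution:

```shell
# Kubernetes version (needs 1.12.0 or above)
kubectl version --short

# iSCSI client: iscsid should be active on every worker node
systemctl is-active iscsid

# Helm version (Helm 3 recommended)
helm version --short
```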
Installing Kubera OnPrem:
To get started with Kubera OnPrem, follow the steps below:
STEP 1
Register and Download Kubera OnPrem:
If you are an existing user, log in to MayaData with your credentials; otherwise, sign up to MayaData.
To know the detailed sign-up steps, click here.
Once you are logged in, go to the MayaData User Portal and click the Download OnPrem button on the dashboard, under the Director section (encircled in the image).
You will receive an email from MayaData Inc containing Docker repository credentials. Keep a note of these credentials, as they will be needed in later steps of the installation.
STEP 2
Install Kubera OnPrem
All the steps below assume that Kubera is to be deployed in the "kubera" namespace.
Installing Kubera requires the following preparatory steps:
1. Creation of a secret
To create "kubera" namespace, execute:
kubectl create ns kubera
Next, create a Docker registry secret using the credentials you received earlier.
kubectl create secret docker-registry directoronprem-registry-secret --docker-server=registry.mayadata.io --docker-username=<username> --docker-password=<password> -n kubera
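To confirm the namespace and secret were created correctly before moving on, you can run:

```shell
# Both commands should return a single matching resource
kubectl get ns kubera
kubectl get secret directoronprem-registry-secret -n kubera
```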
2. Expose the IP of the node where Kubera is to be installed.
When installing Kubera OnPrem, consider how you want to access your installation. The recommended approach is to use an external load balancer, with DNS pointing to the hostname you use for server.url.
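To find the IP to use for server.url, you can list the node addresses; note the EXTERNAL-IP column (or INTERNAL-IP on bare-metal clusters without external addresses):

```shell
# Wide output includes INTERNAL-IP and EXTERNAL-IP columns per node
kubectl get nodes -o wide
```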
3. Install Kubera OnPrem
To add the Kubera repo to your local machine, execute:
helm repo add kubera https://charts.mayadata.io/
helm repo update
NOTE:
1. By default, the Kubera chart installs OpenEBS into your cluster. If your setup already has OpenEBS deployed, add the flag --set type.installOpenebs=false to the installation command below.
2. Kubera uses openebs-hostpath as the default Storage Class. To configure a different one, add the following flags to the installation command: --set mysql.storageClass=<NameOfStorageClass> --set elasticSearch.storageClass=<NameOfStorageClass> --set cassandra.storageClass=<NameOfStorageClass> --set mayaStore.storageClass=<NameOfStorageClass> --set grafana.storageClass=<NameOfStorageClass>
Ensure that Kubera components, OpenEBS components (if installed manually), and all other related components are deployed in the same namespace.
Next, install Kubera OnPrem by executing the command below, replacing Node_IP with the exposed IP of the node where Kubera is to be deployed.
For helm version 3, execute:
helm install kubera kubera/kubera-charts --set server.url=http://<Node_IP> -n kubera
For helm version 2, execute:
helm install kubera/kubera-charts --name kubera --set server.url=http://<Node_IP> --namespace kubera
To view the installed version, execute:
helm ls -n kubera
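As an illustration, a Helm 3 installation that applies the optional flags from the NOTE above, skipping the bundled OpenEBS and pointing every component at an existing Storage Class, might look like the following. The IP and Storage Class name here are placeholders, not values from your environment:

```shell
helm install kubera kubera/kubera-charts \
  --set server.url=http://203.0.113.10 \
  --set type.installOpenebs=false \
  --set mysql.storageClass=my-storage-class \
  --set elasticSearch.storageClass=my-storage-class \
  --set cassandra.storageClass=my-storage-class \
  --set mayaStore.storageClass=my-storage-class \
  --set grafana.storageClass=my-storage-class \
  -n kubera
```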
STEP 3:
Verify the Kubera OnPrem pods using the following command.
kubectl get pods -n kubera
Output:
NAME READY STATUS RESTARTS AGE
alertmanager-68f4cc6cb8-kvwtr 1/1 Running 0 5m34s
alertstore-76c5cf7db7-kz2hm 1/1 Running 0 5m34s
alertstore-tablemanager-7bb8d95c8b-bbcbp 1/1 Running 0 5m34s
cassandra-0 1/1 Running 0 5m34s
chat-server-85f458dd5b-2zqbc 1/1 Running 0 5m33s
cloud-agent-6b9f5f4b98-c4fjr 1/1 Running 0 5m33s
configs-6597cc95b7-c2kk9 1/1 Running 0 5m32s
configs-db-57bf44bbfd-4z7tz 1/1 Running 0 5m33s
consul-c6ffdf59b-vxv7p 1/1 Running 0 5m35s
distributor-68587d6c48-lxkrg 1/1 Running 0 5m35s
elastalert-c58bcfbfd-9r75t 1/1 Running 0 5m34s
ingester-74f85f9bcc-nrksg 1/1 Running 0 5m34s
kubera-kubera-charts-admission-server-d4cb87664-njq7n 1/1 Running 0 5m32s
kubera-kubera-charts-apiserver-6984fc9d9d-cknsq 1/1 Running 2 5m34s
kubera-kubera-charts-localpv-provisioner-64cb455886-dphnl 1/1 Running 0 5m32s
kubera-kubera-charts-ndm-9dl5z 1/1 Running 0 5m35s
kubera-kubera-charts-ndm-operator-56487bb96-v6hsq 1/1 Running 1 5m35s
kubera-kubera-charts-provisioner-86c4696784-hd99j 1/1 Running 0 5m32s
kubera-kubera-charts-snapshot-operator-84f76b5c56-pr6r4 2/2 Running 0 5m33s
kubera-nginx-ingress-controller-nc47r 1/1 Running 0 5m35s
kubera-nginx-ingress-default-backend-7b69db9f7b-lwxpq 1/1 Running 0 5m35s
kubera-nginx-ingress-default-backend-7b69db9f7b-tvpwb 1/1 Running 0 5m34s
maya-grafana-559496cddb-vdltj 2/2 Running 0 5m33s
maya-io-7f455dcb5b-tkpth 1/1 Running 0 5m32s
maya-ui-6d5fc8dd8b-x5hxk 1/1 Running 0 5m35s
memcached-78844679fc-9j7jc 1/1 Running 0 5m32s
mysql-0 2/2 Running 0 5m34s
od-elasticsearch-logging-0 1/1 Running 0 5m33s
od-kibana-logging-96bc75658-qmzgk 1/1 Running 0 5m34s
querier-668b78b688-89b89 1/1 Running 0 5m31s
ruler-679bd4777b-tp4qw 1/1 Running 0 5m34s
table-manager-767dfd9b94-ht7pp 1/1 Running 0 5m31s
The pods may take some time to reach the Running state, depending on the underlying storage. In case a pod does not come up, or you face any other problem, refer to the troubleshooting section. If the problem persists, feel free to contact our customer success team.
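Rather than polling the pod list manually, you can block until every pod in the namespace reports Ready (the 10-minute timeout is an arbitrary choice, adjust as needed):

```shell
# Waits until all pods in the kubera namespace are Ready, or times out
kubectl wait --for=condition=Ready pods --all -n kubera --timeout=600s
```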
Once all the pods are up and running, you can access the OnPrem portal from a browser using the worker node IP provided in Step 2, in the format http://NodeExternalIP
Example:
http://35.232.101.174
NOTE: By default, the username is set to "Administrator" and the password to "password".
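To quickly confirm the portal is reachable from your machine without opening a browser, you can check that it answers on port 80:

```shell
# Replace <NodeExternalIP> with your node's IP; an HTTP response
# (any status code) indicates the portal is serving traffic
curl -I http://<NodeExternalIP>
```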
Troubleshooting:
- Unable to install Director OnPrem on GKE
- Configs-db pod in CrashLoopBackOff
- Ruler pod stuck in Init state
For more information feel free to contact our customer success team.