Director OnPrem is now Kubera OnPrem.
Prerequisites
The `type` for all the Kubera-related services should be set to ClusterIP.
To list all the services, execute:
kubectl get svc -n <kubera_namespace>
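To pick out just the services whose TYPE column is NodePort, the tabular listing can be filtered with awk. This is a sketch: the sample listing below is illustrative, not taken from a live cluster, where you would pipe the output of `kubectl get svc -n <kubera_namespace>` instead.

```shell
# Sketch: filter NodePort services from a `kubectl get svc` listing.
# The sample file below stands in for a live cluster's output.
cat <<'EOF' > /tmp/svc-sample.txt
NAME      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
maya-io   NodePort    10.100.72.218   <none>        8080:31021/TCP   5d
maya-ui   ClusterIP   10.100.12.34    <none>        80/TCP           5d
EOF
# Skip the header row (NR > 1) and print only names of NodePort services.
awk 'NR > 1 && $2 == "NodePort" { print $1 }' /tmp/svc-sample.txt
```

Each name printed by the filter is a service that still needs to be edited.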
Next, edit the services whose service type is set to NodePort. To edit, execute:
kubectl edit svc <service_name> -n <kubera_namespace>
As an example, consider the maya-io service. In the YAML below, the lines to change are marked with comments: remove the `nodePort` line and set `type` to ClusterIP.
apiVersion: v1
kind: Service
metadata:
  annotations:
    meta.helm.sh/release-name: dop
    meta.helm.sh/release-namespace: director
  creationTimestamp: "2020-06-17T16:23:11Z"
  labels:
    app: maya-io
    app.kubernetes.io/managed-by: Helm
  name: maya-io
  namespace: director
  resourceVersion: "2805"
  selfLink: /api/v1/namespaces/director/services/maya-io
  uid: cfd17a62-c0c0-44f2-a1d5-d1cf4b5869aa
spec:
  clusterIP: 10.100.72.218
  externalTrafficPolicy: Cluster
  ports:
  - name: mayaport
    # nodePort: 31021    # remove this line
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: maya-io
  sessionAffinity: None
  type: ClusterIP        # changed from NodePort
status:
  loadBalancer: {}
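Instead of editing the manifest interactively, the same change can be applied non-interactively with `kubectl patch`. The snippet below is a sketch; `<service_name>` and `<kubera_namespace>` are placeholders, and the live `kubectl` call is left commented out since it requires a cluster. When `type` is patched to ClusterIP, Kubernetes releases the allocated node port, so the `nodePort` field does not need to be removed by hand.

```shell
# Sketch: switch a service's type to ClusterIP with a merge patch.
# Placeholders: <service_name>, <kubera_namespace>.
PATCH='{"spec":{"type":"ClusterIP"}}'
# On a live cluster, apply it with:
# kubectl patch svc <service_name> -n <kubera_namespace> --type merge -p "$PATCH"
# Sanity-check the patch payload locally before applying it:
echo "$PATCH" | python3 -m json.tool > /dev/null && echo "patch payload OK"
```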
STEP 1
To avoid inconsistencies with the previous version, fetch the values.yaml that was used earlier. To do so, execute:
helm get values <release_name> -n <namespace> > values.yaml
Next, add the following snippet to values.yaml, editing these fields:
installOpenebs: If OpenEBS is already deployed on your cluster, set this to false; otherwise, set it to true.
storageClass: Set this to the name of the storage class being used.
installDirector: true
installOpenebs: /*set value*/
mysql:
  storageClass: /*storage class name*/
  storageCapacity: 50Gi
elasticSearch:
  storageClass: /*storage class name*/
  storageCapacity: 50Gi
  replicas: 1
cassandra:
  storageClass: /*storage class name*/
  storageCapacity: 50Gi
  replicas: 1
mayaStore:
  storageClass: /*storage class name*/
  storageCapacity: 10Gi
grafana:
  storageClass: /*storage class name*/
  storageCapacity: 50Gi
STEP 2
To update the repo with new changes, execute:
helm repo add kubera https://charts.mayadata.io/
helm repo update
Output:
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "kubera" chart repository
...Successfully got an update from the "directoronprem" chart repository
Update Complete. ⎈ Happy Helming!⎈
STEP 3
Next, you need to upgrade the installed version of Kubera OnPrem.
Command:
helm upgrade --namespace <namespace> <release_name> --set server.url=http://<NODE_IP> kubera/kubera-charts -f values.yaml
Output:
Release "dop1" has been upgraded. Happy Helming!
NAME: dop1
LAST DEPLOYED: Sat Jun 13 19:09:44 2020
NAMESPACE: director
STATUS: deployed
REVISION: 3
TEST SUITE: None
To verify that the upgrade was successful, execute:
helm ls -n <namespace>
The output must display the new version.
Output:
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
dop1 director 3 2020-06-13 19:09:44.046668014 +0530 IST deployed kubera-charts-1.10.0 1.10.0
Also, execute:
kubectl get pods -n <namespace>
All the pods must reach the Running state within a few minutes.
Output:
NAME READY STATUS RESTARTS AGE
alertmanager-6cd9894dc9-7dh4k 1/1 Running 0 98m
alertstore-7dccdcdf9b-khnrs 1/1 Running 0 8m
alertstore-tablemanager-855647cb67-n79fw 1/1 Running 0 8m
cassandra-0 1/1 Running 0 11m
chat-server-645bcc4fbc-pmg42 1/1 Running 0 9m
cloud-agent-649d77f6d9-gl7ds 1/1 Running 0 9m
configs-6c48f7666b-lnthw 1/1 Running 0 9m
configs-db-57bf44bbfd-ls5lm 1/1 Running 0 11m
consul-c6ffdf59b-xk4jh 1/1 Running 0 11m
distributor-68587d6c48-2n9j4 1/1 Running 0 11m
dop-nginx-ingress-controller-zqc7s 1/1 Running 0 11m
dop-nginx-ingress-default-backend-cf9c64c-5rg6n 1/1 Running 0 11m
dop-nginx-ingress-default-backend-cf9c64c-drz5m 1/1 Running 0 11m
elastalert-6bddb657b7-txv24 1/1 Running 0 7m
elasticsearch-curator-1589979600-hw2c4 0/1 Completed 0 7m
ingester-74f85f9bcc-rjw7x 1/1 Running 0 11m
maya-grafana-6d6cf955db-jtghh 2/2 Running 0 7m
maya-io-5cdb55d597-w95zc 1/1 Running 0 7m
maya-ui-6b9fb88696-x5f4s 1/1 Running 0 7m
memcached-78844679fc-sn5cr 1/1 Running 0 11m
mysql-0 2/2 Running 0 11m
od-elasticsearch-logging-0 1/1 Running 0 11m
od-kibana-logging-76ff4d6b7f-d74tf 1/1 Running 0 7m
querier-668b78b688-2zbpc 1/1 Running 0 11m
ruler-679bd4777b-jndcz 1/1 Running 0 11m
table-manager-767dfd9b94-mbfmp 1/1 Running 0 11m
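Rather than eyeballing the list, the same readiness check can be scripted. This is a sketch: the sample listing below is illustrative, and on a live cluster you would pipe the output of `kubectl get pods -n <namespace>` instead. A count of 0 means every pod is either Running or Completed.

```shell
# Sketch: count pods that are neither Running nor Completed.
# The sample file below stands in for a live cluster's output.
cat <<'EOF' > /tmp/pods-sample.txt
NAME                                   READY   STATUS      RESTARTS   AGE
cassandra-0                            1/1     Running     0          11m
elasticsearch-curator-1589979600-hw2c4 0/1     Completed   0          7m
maya-io-5cdb55d597-w95zc               1/1     Running     0          7m
EOF
# Skip the header row; anything not Running or Completed counts as "bad".
awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { bad++ } END { print bad + 0 }' /tmp/pods-sample.txt
```

Alternatively, `kubectl wait --for=condition=Ready pods --all -n <namespace> --timeout=600s` blocks until every pod reports Ready; note that Completed job pods (such as the curator above) never become Ready, so that form may time out on them.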
NOTE: In case the pod(s) do not come up or you face any other problem, refer to the troubleshooting section. If the problem persists, feel free to contact our support team.