Deployment of Litmus Edge Manager on Azure Kubernetes Service (AKS)
You can deploy Litmus Edge Manager on Azure Kubernetes Service (AKS).

Before you begin

Before you begin, make sure that you have:

- A valid Microsoft Azure subscription with permissions to create and manage Kubernetes clusters. See the Azure Kubernetes Service (AKS) documentation for more details.
- Familiarity with the Azure portal and the Azure CLI.
- The Azure CLI installed on your local machine.
- The Helm CLI and kubectl installed on your local machine.
- The Litmus Google registry credential key file available (for example, lem-pull-key-file.json).

To manually deploy Litmus Edge Manager on Azure Kubernetes Service (AKS), do the following:

- Step 1: Set up your AKS cluster
- Step 2: Connect to your cluster
- Step 3: Execute kubectl and helm commands
- Step 4: Access the Litmus Edge Manager UI using the obtained IP address

Step 1: Set up your AKS cluster

In your Azure portal account:

1. Log in to your Azure account.
2. Open Azure Kubernetes Service and select your subscription and plan: Azure Kubernetes Service (AKS).
3. Click Create.
4. In Create Kubernetes cluster > Basics, set the following:
   - Subscription: your subscription is already selected.
   - Resource group: create a new resource group to contain the cluster resources.
   - Cluster preset configuration: select Dev/Test.
   - Kubernetes cluster name: set any desired name for the cluster.
   - Region: set any, or (US) East US.
   - Kubernetes version: set any, or 1.32.6 (default).
   - Authentication and Authorization: Local accounts with Kubernetes RBAC.
   - Other options: leave the default values or modify as required.
5. In the Node pools tab, confirm that the agentpool node pool is already created for you. Make sure the node pool is using Ubuntu Linux and the minimum node count is 2. Optionally, create a custom node pool by clicking Add node pool.
   Note: Pay attention to the maximum node count. The deployment will fail if it is set to 2 and the max pods per node is set to 40 or below. In this case, the cluster would need an additional node (3) to distribute all the pods.
6. In the Networking tab, set the following:
   - Enable private cluster: leave this unselected.
   - Set authorized IP ranges: leave this unselected.
   - Network configuration: Azure CNI Overlay.
   - DNS name prefix: set your custom DNS name prefix.
   - Network policy: None, or modify as needed.
7. In the Integrations tab, use the default values.
8. In the Monitoring tab:
   - Select Enable Prometheus metrics. Azure Monitor workspace: your workspace is already selected.
   - Select Enable recommended alert rules. Alert rules: review the details and confirm the email for alerts.
9. Use the default values in the Security and Advanced tabs.
10. In the Tags tab, add your tags as desired and add the tags to all resources.
11. Click Review + create.
    Note: Double-check the minimum VM requirements:
    - Nodes are using Ubuntu Linux.
    - OS architecture is AMD64.
    - VM: 4 vCPU, 16 GB memory, 100 GB local storage.
12. In Review + create, review your settings and make sure the validation passed.
    Note: If you see "Validation failed. Required information is missing or not valid.", return to that tab and add or correct the required information.
13. Click Create.
    Note: Creating a cluster takes a few minutes, depending on the settings you choose. Check the status or notifications for the latest information about your cluster.

Step 2: Connect to your cluster

Open your terminal and log in with the Azure CLI:

1. In the terminal window, type in:
   az login
2. In the auth UI dialog, select the Microsoft Azure account that has access to the AKS service and enter your credentials.
3. In the terminal window, select a subscription and tenant. Make sure you select the subscription where your AKS cluster is created.

Next, connect to your cluster:

1. Open the Azure portal and navigate to Kubernetes services.
2. In Kubernetes services, select your cluster.
3. On the cluster page, click Connect.
4. In the Cloud Shell tab, copy the second command:
   az aks get-credentials --resource-group <my-resource-group> --name <my-dev-cluster> --overwrite-existing
5. In the terminal window, paste the command with updated values for <my-resource-group> and <my-dev-cluster>, then submit it. You should see similar output in your terminal.

The cluster setup is now complete.
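As an optional check that is not part of the procedure above, you can confirm that kubectl now points at the new AKS cluster and that the worker nodes are ready before installing anything:

   # Show the kubectl context written by az aks get-credentials;
   # it should match the cluster name you chose in Step 1.
   kubectl config current-context

   # List the worker nodes; with the settings above you should see
   # at least 2 Ubuntu Linux nodes in the Ready state.
   kubectl get nodes -o wide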
Step 3: Execute kubectl and helm commands

In your terminal:

1. Create a namespace lem:
   kubectl create ns lem
2. Create a pull secret. Change the command accordingly, based on the location of your pull key file on the local machine:
   kubectl create secret docker-registry lem-helm-secret \
     --docker-server=us-east1-docker.pkg.dev \
     --docker-username=_json_key \
     --docker-password="$(cat lem-pull-key-file.json)" -n lem
3. Install Litmus Edge Manager using the following command, where <lem-version> is the LEM version that you are installing, for example version 2.29.0:
   helm install lem oci://us-east1-docker.pkg.dev/litmus-public/lem-chart-ga/lem --version <lem-version> -n lem
   Note: The command initiates the pull of the required container images. This process may take several minutes. Make sure the secret name is lem-helm-secret and that you don't override it with --set "imagePullSecrets[0].name".
4. After the deployment is completed, wait approximately five minutes before applying the Litmus Edge Manager (LEM) URLs and credentials. Instructions for accessing these URLs and credentials are printed for you in the console.

To get URLs and credentials for Litmus Edge Manager:

- In case you need the URLs and credentials after the deployment, execute the following:
  helm get notes lem -n lem
- To get Litmus Edge Manager's external IP, run the following, where <release-name>-nginx is your service name:
  kubectl get svc lem-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}' -n lem
- To get the LEM username and password, run:
  # username
  kubectl get secret --namespace lem lem-secret -o jsonpath="{.data.lem_user}" | base64 --decode
  # password
  kubectl get secret --namespace lem lem-secret -o jsonpath="{.data.lem_password}" | base64 --decode

Step 4: Access the Litmus Edge Manager UI using the obtained IP address

See Access to Litmus Edge Manager for details.

Note: Get the site license key from your Litmus account executive to activate your license.

Litmus Edge Manager is now deployed and operational on your AKS cluster. You can now access the manager's UI using the external IP address and perform any additional configuration through the admin console, such as license activation.
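As an optional convenience, the retrieval commands from Step 3 can be combined into a small shell script. The sketch below is not part of the official procedure; it assumes the resource names used in this guide (namespace lem, service lem-nginx, secret lem-secret with the lem_user and lem_password keys) and simply polls until the load balancer has been assigned an external IP before printing the login details:

   #!/usr/bin/env bash
   # Sketch: wait for the LEM external IP, then print it with the credentials.
   # Resource names are assumed to match this guide; adjust them if your
   # release or namespace differs.
   set -euo pipefail

   NS=lem

   # Poll every 10 seconds until the LoadBalancer service has an external IP.
   IP=""
   until [ -n "$IP" ]; do
     IP=$(kubectl get svc lem-nginx -n "$NS" \
       -o jsonpath='{.status.loadBalancer.ingress[0].ip}' 2>/dev/null || true)
     [ -n "$IP" ] || sleep 10
   done

   # Read the login credentials from the secret created by the chart.
   LEM_USER=$(kubectl get secret lem-secret -n "$NS" \
     -o jsonpath='{.data.lem_user}' | base64 --decode)
   LEM_PASS=$(kubectl get secret lem-secret -n "$NS" \
     -o jsonpath='{.data.lem_password}' | base64 --decode)

   echo "Litmus Edge Manager external IP: $IP"
   echo "Username: $LEM_USER"
   echo "Password: $LEM_PASS"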