Tutorial: Kubernetes for Orchestrating IoT Edge Deployments
As Kubernetes evolves into a universal scheduler, it is increasingly used to orchestrate deployments across a diverse set of workloads. From virtual machines to containers to edge computing modules, Kubernetes is becoming the preferred platform for managing deployments at scale.
Microsoft’s Virtual Kubelet project takes advantage of the extensibility features of Kubernetes to orchestrate deployments targeting external environments. Virtual Kubelet acts as a bridge between the Kubernetes control plane and a third-party resource scheduler. In its current form, it works with Microsoft’s serverless container platform Azure Container Instances, with Azure IoT Edge, and even with AWS Fargate. Given the power and simplicity of Virtual Kubelet, we can expect to see more integrations over time.
One of the most interesting integrations of Virtual Kubelet is with Azure IoT Edge, where Kubernetes talks to Azure IoT Hub to deploy containers on remote edge devices. The advantage of this integration is the ability to use standard Kubernetes tools and deployment mechanisms to orchestrate edge modules at scale.
It makes complete sense to use Kubernetes for orchestrating edge computing modules. Azure IoT Edge relies heavily on a container runtime to perform local processing: each IoT Edge device may run dozens of containers that work in tandem to handle data processing and business logic. These containers are pushed through an IoT Hub with which multiple edge devices are registered. By using tags, containers may be deployed to more than one edge device at a time.
If we observe closely, Azure IoT Hub acts like a typical Kubernetes worker node. Once the Azure IoT control plane receives an instruction to run an IoT Edge module as a container on a target device, it packages the container image as a module and hands it over to the remote edge device. Since each edge device may run more than one container, an edge device is comparable to a Kubernetes pod.
When an IoT Hub is registered with Kubernetes through Virtual Kubelet, the control plane treats the IoT Hub as just another node. When a deployment targets that node, the Kubernetes control plane simply hands the scheduling over to IoT Hub. Developers and operators can use familiar YAML manifests pushed through the kubectl CLI.
In this tutorial, we are going to extend the Azure IoT use case discussed in the previous part to Kubernetes. We will perform blue/green deployments and even roll back and roll forward Azure IoT Edge modules from kubectl.
Before proceeding further, make sure you have completed the previous part of the tutorial. You also need access to a Kubernetes cluster; Minikube on your local development machine works fine. Install and test Helm, as we will deploy Virtual Kubelet as a chart.
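If you are starting from scratch, the setup might look like the sketch below. This assumes a local Minikube installation and Helm 2, whose helm init command installs the Tiller server-side component that the chart installation later in this tutorial requires.

$ minikube start          # start a local single-node cluster
$ helm init               # install Tiller (Helm 2 only; not needed with Helm 3)
$ kubectl cluster-info    # confirm the cluster is reachable
$ helm version            # confirm both Helm client and server respond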
Let’s start by cloning the Virtual Kubelet provider repo from GitHub.
$ git clone https://github.com/Azure/iot-edge-virtual-kubelet-provider.git
Grab the Azure IoT Hub owner connection string from the portal to create a secret in Kubernetes. You may also run the commands below to retrieve the connection string and create the secret.
$ az iot hub show-connection-string --resource-group TNSIoT --hub-name TNSIoTHub

$ kubectl create secret generic my-secrets --from-literal=hub0-cs='HostName=TNSIoTHub.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=oNVfrvc1bsfmrXaofsVQAhvq74xQ/rHiRzClqPOsFgc='
Install the Virtual Kubelet Helm chart in the Kubernetes cluster. Make sure you set the RBAC flag to true.
$ cd iot-edge-virtual-kubelet-provider/

$ helm install -n hub0 --set rbac.install=true src/charts/iot-edge-connector
Verify that IoT Hub is showing up as a node in our Kubernetes cluster.
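A quick check with kubectl should list the IoT Hub as a virtual node alongside any regular worker nodes. The exact node name depends on the Helm release, hub0 in our case:

$ kubectl get nodes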
We are now ready to deploy an IoT Edge module from kubectl. Before that, let’s create a YAML file, matrix.yaml, with the deployment manifest.
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: matrix
spec:
  selector:
    matchLabels:
      app: matrix
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0%
      maxUnavailable: 100%
  template:
    metadata:
      labels:
        app: matrix
      annotations:
        isEdgeDeployment: "true"
        targetCondition: "tags.type='bulb'"
        priority: "150"
        loggingOptions: ""
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - matrix
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: matrix
        image: "janakiramm/matrix:v1"
      nodeSelector:
        type: virtual-kubelet
      tolerations:
      - key: azure.com/iotedge
        effect: NoSchedule
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: matrix
data:
  status: running
  restartPolicy: always
  version: "1.0"
  createOptions: |
    {
      "HostConfig": {
        "Privileged": true,
        "network": "host"
      }
    }
The above file contains the definition of a standard Kubernetes deployment and a config map. The deployment has a few annotations that are forwarded to IoT Hub by the Virtual Kubelet.
The target condition matches the tags defined in the device twin of the edge device. If you recall, in the previous tutorial, we added a couple of tags to our device with the command below.
$ az iot hub device-twin update --device-id Pi1 --hub-name TNSIoTHub --set tags='{"device":"pi1","type":"bulb"}'
When Kubernetes pushes the deployment through IoT Hub, the control plane will apply the configuration to all the devices with matching tags.
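Target conditions are queries over the device twin tags and can be more specific than a single tag. As a hypothetical example, a condition that narrows the deployment down to one particular device could combine both of the tags we set earlier:

targetCondition: "tags.type='bulb' AND tags.device='pi1'"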
The config map holds the module configuration used by the Azure IoT Edge runtime when creating the containers. In our case, we need to run the container in privileged mode, which is defined in the createOptions of the config map.
We can now create this deployment through the kubectl CLI. Notice that we are adding the --record flag to enable rolling back and rolling forward deployments.
$ kubectl apply -f matrix.yaml --record
Check that the pod is created.
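Even though the container runs on the edge device rather than in the cluster, kubectl still tracks the pod scheduled on the virtual node:

$ kubectl get pods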
Since the image, janakiramm/matrix:v1, turns the LED matrix blue, your Raspberry Pi should light up with all blue LEDs.
Visiting the IoT Edge deployment section of the Azure Portal confirms that the module is deployed on the device that matches the tag type='bulb'.
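If you prefer the command line, the deployments can also be listed from the Azure CLI. This assumes the Azure IoT CLI extension is installed, and the exact command may vary across extension versions:

$ az iot edge deployment list --hub-name TNSIoTHub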
Now, let’s update the image to v2 the Kubernetes way. We can directly update the image defined in the deployment with the command below. We are recording this change as well.
$ kubectl set image deployment/matrix matrix=janakiramm/matrix:v2 --record
The second version of the image sets the color of the LED matrix to green. You should be able to see that change in just a few seconds.
Since we are using Kubernetes deployments, we can perform PaaS-style operations on the modules.
The command kubectl rollout history shows all the changes made to the deployment.
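Because we passed --record with each change, both revisions should show up along with the commands that produced them:

$ kubectl rollout history deployment/matrix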
Let’s roll back to the very first revision in the history. This should take us back to the previous version of the module, which will change the color of the LED matrix back to blue.
$ kubectl rollout undo deployment/matrix --to-revision=1
All the deployment changes initiated from Kubernetes are recorded in the Azure IoT Edge Deployment history.
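Rolling forward works the same way. Undoing to the later recorded revision reapplies the v2 image, turning the LEDs green again:

$ kubectl rollout undo deployment/matrix --to-revision=2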
Congratulations! You just did a blue/green (literally) deployment on the edge device! Once a specific version of a module has been tested on a set of devices, it can be rolled out to all the other devices.
This scenario is just one example of Kubernetes extensibility. We can expect to see many more workloads moving to Kubernetes for its advanced scheduling capabilities.