Contributed"> Deploy Kubernetes Behind Firewalls Using These Techniques - The New Stack
TNS
VOXPOP
As a JavaScript developer, what non-React tools do you use most often?
Angular
0%
Astro
0%
Svelte
0%
Vue.js
0%
Other
0%
I only use React
0%
I don't use JavaScript
0%
Kubernetes

Deploy Kubernetes Behind Firewalls Using These Techniques

Actionable Strategies for Overcoming the Challenges of Deploying and Managing Kubernetes in Firewalled Environments
Oct 11th, 2024 10:00am
Photo by Growtika on Unsplash

As Kubernetes and cloud native systems become the de facto standard for deploying and managing modern applications, their expansion into restricted or firewalled environments brings unique challenges. These environments are often driven by regulatory compliance, security concerns, or organizational policies, which present architectural, operational, and security-related hurdles. This article delves into the intricacies of deploying Kubernetes clusters behind firewalls, offering solutions and strategies to overcome these obstacles.

A firewalled or restricted environment limits external internet access to ensure data security and protect systems from unauthorized intrusions. These environments are typical in industries with stringent regulatory requirements, such as finance, healthcare, and government. In such environments, only specific types of traffic are permitted, often with strict oversight. While these controls enhance security, they create significant challenges for modern cloud native infrastructures like Kubernetes, which rely on internet access for features such as cluster management, image pulling, and external API communications.

Challenges of Deploying Kubernetes in Firewalled Environments

  1. Image Management and Distribution: Kubernetes applications require container images to be served from container registries such as Docker Hub, gcr.io, or quay.io. In firewalled environments, accessing these registries is often restricted or completely blocked. This can prevent image pulling, hindering the ability to deploy and upgrade applications.

Solution: To address this, enterprises can use registries that have repository replication or pull-through caching capabilities to host container images locally within the firewall. These registries can either replicate or pull images from external registries in a controlled manner, ensuring that the necessary container images are available without constant internet access. Registries like Harbor provide secure, internal image repositories for such environments. Further, utilizing image promotion workflows ensures that only vetted images from external sources make it into the secure registry.
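
As a rough illustration of the pull-through caching option, the open source Distribution registry (the registry:2 image) can be run as a cache in front of an upstream registry. The sketch below assumes Docker Hub as the upstream; paths and ports are illustrative.

```
# Minimal pull-through cache sketch using the Distribution registry image.
# The proxy.remoteurl setting tells the registry to fetch and cache images
# from the upstream on demand (paths and ports are illustrative).
cat > /opt/registry/config.yml <<'EOF'
version: 0.1
storage:
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
EOF

docker run -d --name pullthrough-cache -p 5000:5000 \
  -v /opt/registry/config.yml:/etc/docker/registry/config.yml \
  registry:2
```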

Another approach I’ve used is to copy the images via a gateway or proxy server that has connectivity to both the source and destination registries. This approach works well when the capabilities of the source and destination registries are unknown. Tools like imgpkg, crane, or skopeo can copy images between registries across firewall boundaries. For example, the imgpkg packaging format bundles an application’s Helm chart and its container images as a single unit. An imgpkg bundle can be exported from the source registry as a tar archive on the proxy server’s local filesystem. That archive can then be pushed to the registry running behind the firewall, and imgpkg ensures that the registry references in the application’s Helm chart inside the bundle are automatically updated to point to the destination registry.
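
A hedged sketch of that flow with imgpkg, using illustrative registry names and tags, looks like this:

```
# On the proxy/gateway host: export the bundle (Helm chart + images) to a tar archive.
imgpkg copy -b public-registry.example.com/myapp/bundle:v1.0.0 --to-tar myapp-bundle.tar

# Still on the proxy host: push the archive into the registry behind the firewall.
# imgpkg updates the bundle's recorded image references to point at the destination repo.
imgpkg copy --tar myapp-bundle.tar --to-repo registry.internal.example.com/myapp/bundle
```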

  2. Cluster Management and Control Plane Access: Kubernetes’ control plane (API server, etc.) must communicate with the worker nodes and external cloud APIs to manage the cluster. However, in firewalled environments, external access to these APIs or control plane components is often blocked or limited, posing significant challenges for monitoring, scaling, and controlling the cluster.

Solution: Organizations can establish reverse proxying and VPN tunneling techniques to overcome this. A reverse proxy deployed in a demilitarized zone (DMZ) can handle API requests from within the firewall while providing a secure entry point. Additionally, bastion hosts and VPN gateways can allow controlled, secure access to the Kubernetes control plane. These hosts reside outside the internal network but act as a bridge between the restricted environment and external services, allowing administrators to interact with the cluster without violating firewall policies.
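
One common pattern, sketched below with illustrative hostnames, is an SSH tunnel through the bastion host that exposes the API server on a local port:

```
# Forward the Kubernetes API server through the bastion host (hostnames are illustrative).
ssh -N -L 6443:kube-api.internal.example.com:6443 admin@bastion.example.com &

# Point kubectl at the local end of the tunnel; --tls-server-name keeps certificate
# verification working even though the address is now 127.0.0.1.
kubectl --server=https://127.0.0.1:6443 \
  --tls-server-name=kube-api.internal.example.com get nodes
```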

For example, Azure allows the creation of “private” AKS clusters that are deployed in an enterprise’s private network. Access to the control plane of private AKS clusters is restricted by default for security reasons. But Azure also provides solutions like Azure Bastion, which provides secure access to a private cluster from the outside world. The user connects to Azure Bastion via RDP or SSH from their local computer and can access the private cluster by proxy. Bastion takes care of securing traffic to the private cluster.
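
A rough sketch of that workflow with the Azure CLI (resource names are illustrative, and the Bastion CLI extension must be installed) might be:

```
# Open a tunnel through Azure Bastion to a jumpbox VM inside the private AKS network.
az network bastion tunnel --name my-bastion --resource-group my-rg \
  --target-resource-id "$(az vm show -g my-rg -n jumpbox --query id -o tsv)" \
  --resource-port 22 --port 2222

# SSH to the jumpbox over the tunnel, then run kubectl against the private API server from there.
ssh -p 2222 azureuser@127.0.0.1
```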

  3. External Dependencies and DNS Resolution: Sometimes an application running on an air-gapped Kubernetes cluster needs to pull an external dependency, which requires resolving a hostname outside the firewall. Public DNS resolvers like Google DNS or Cloudflare DNS may not be reachable from inside the pod, so the application cannot fetch the dependency and fails to start. This forces the organization or the application developer to provide the dependency within the firewall, which may not always be feasible.

Solution: Use DNS forwarding in CoreDNS. CoreDNS is the default DNS resolver in Kubernetes clusters and can be configured to resolve external DNS queries from within the firewall. It can forward queries for specific hostnames (like www.example.com) to external resolvers while resolving all other queries within the firewall. This is done with the “forward” CoreDNS plugin: forward the query for www.example.com to Google or Cloudflare DNS, and forward everything else (represented by a ‘.’) to the local resolver by pointing it at /etc/resolv.conf. This ensures that critical DNS resolution is not blocked by firewall policies and lets the firewall administrator keep the network secure by allowing only specific external queries, as in the sketch below.
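
A minimal sketch of that Corefile change, assuming 8.8.8.8 and 1.1.1.1 are the only resolvers allowed out through the firewall, adds an extra server block alongside the default “.:53” block in the coredns ConfigMap in kube-system:

```
# Add an extra server block to the CoreDNS Corefile, then roll the deployment.
kubectl -n kube-system edit configmap coredns
#
#   www.example.com:53 {
#       errors
#       cache 30
#       forward . 8.8.8.8 1.1.1.1   # only this name is resolved by external DNS
#   }
#
# The default ".:53" block keeps "forward . /etc/resolv.conf" for everything else.
kubectl -n kube-system rollout restart deployment coredns
```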

  4. Updates, Patches, and Kubernetes Components: Regular updates and patches to Kubernetes components are essential for maintaining security, compliance, and performance. However, automated updates may be blocked in firewalled environments, leaving clusters vulnerable to security risks.

Solution: Use local mirrors and internal container registries to update the cluster. Kubernetes installation tools like Kubespray support cluster management in offline environments. Installing and patching Kubernetes via Kubespray requires access to static files such as the kubectl and kubeadm binaries, OS packages, and a few container images for the core Kubernetes components. The static files can be served by an nginx or HAProxy server inside the firewall, the OS packages by a local mirror of a yum or Debian repository, and the container images by a local registry instance (for example, a kind-hosted or Docker registry) pre-populated with the required images.
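
A hedged sketch of standing up those local services on a host inside the firewall (paths, ports, hostnames, and image versions are illustrative) could be:

```
# Serve static files (kubectl, kubeadm, CNI archives, etc.) over HTTP with nginx.
docker run -d --name file-mirror -p 8080:80 \
  -v /srv/k8s-files:/usr/share/nginx/html:ro nginx

# Run a plain registry for the core Kubernetes images Kubespray needs.
docker run -d --name local-registry -p 5000:5000 registry:2

# Pre-seed the registry from a host that still has outbound access (or via tar archives).
skopeo copy docker://registry.k8s.io/kube-apiserver:v1.30.0 \
  docker://mirror.internal.example.com:5000/kube-apiserver:v1.30.0 --dest-tls-verify=false
```

Kubespray’s offline deployment variables can then be pointed at these mirrors instead of the public endpoints.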

Additionally, companies can use continuous integration/continuous delivery (CI/CD) pipelines to handle updates in a controlled manner, with local testing and validation on staging clusters before rolling out changes to production clusters. GitOps is a subcategory of CI/CD in which changes are automatically deployed to a target environment when commits land in a Git repository. Staging and production clusters can be mapped to different Git branches, and upgrades and patches can be rolled out strategically by committing changes to the staging branch first, testing them thoroughly, and only then committing the same change to the production branch. This keeps the cluster up to date with the latest security patches despite the lack of automatic updates.
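
As one possible GitOps setup (Argo CD is used here purely as an example; repository, branch, and namespace names are illustrative), the staging cluster could track a staging branch like this, with the production cluster running a twin pinned to the production branch:

```
# An Argo CD Application that deploys whatever is committed to the "staging" branch.
kubectl apply -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: platform-staging
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.internal.example.com/platform/manifests.git
    targetRevision: staging
    path: clusters/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: platform
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
EOF
```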

  5. Third-Party Integrations and Monitoring: Modern Kubernetes applications often rely on third-party integrations like Datadog and external storage solutions like AWS S3 or Google Cloud Storage. In a firewalled environment, outbound traffic is restricted, preventing direct communication with these cloud-hosted services.

Solution: Organizations can deploy self-hosted alternatives within their firewalled environment to maintain observability and monitoring. For example, Prometheus and Grafana can be deployed internally to handle metrics and visualization, while distributed storage solutions like Ceph or MinIO can replace external cloud storage. These tools can replicate the functionality of external services while ensuring that all data remains securely within the firewall. Container images and helm charts for self-hosted alternatives can be pulled into the air-gapped environment using the image management and distribution technique outlined earlier.
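
For example, once the charts and images have been mirrored internally, the installs are ordinary Helm commands against the internal registry (hostnames, chart locations, and values files are illustrative):

```
# Install self-hosted monitoring and S3-compatible storage from internally mirrored OCI charts.
helm install monitoring oci://registry.internal.example.com/charts/kube-prometheus-stack \
  --namespace monitoring --create-namespace \
  --values internal-values.yaml   # overrides image repositories to the internal registry

helm install object-store oci://registry.internal.example.com/charts/minio \
  --namespace storage --create-namespace
```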

  6. Security Policies and Compliance: Security and compliance concerns are often the primary reason for deploying Kubernetes in firewalled environments. Industries like healthcare and finance require strict adherence to regulations like HIPAA and PCI-DSS, which mandate the use of secure environments with restricted access to sensitive data.

Solution: Kubernetes’ native features, such as Pod Security Admission (the successor to the now-removed Pod Security Policies), Role-Based Access Control (RBAC), and Network Policies, can be leveraged to enhance the security of the Kubernetes cluster within a firewalled environment. Additionally, deploying service meshes like Istio or Linkerd can provide fine-grained traffic control and security, ensuring that only authorized services communicate. These meshes also offer mutual TLS (mTLS) for encrypting traffic between microservices, further enhancing security and compliance.
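
A minimal sketch of the Network Policy side, assuming a sensitive namespace named payments, denies all egress except in-cluster DNS:

```
# Default-deny egress for the namespace, with a narrow exception for DNS in kube-system.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: payments
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: UDP
      port: 53
EOF
```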

  7. Ingress Control and Load Balancing: In firewalled environments, external load balancing services (like AWS ELB or GCP Load Balancers) may not be available, making it difficult to route traffic to services running within the Kubernetes cluster. Kubernetes’ built-in NodePort-type services are not secure, as they require a non-standard port to be opened on all the Kubernetes nodes. Each service that needs to be exposed outside the cluster requires a separate NodePort service, complicating firewall administration.

Solution: To expose services outside the cluster, an ingress gateway such as Istio or Contour can serve as a proxy that routes traffic to those services. It secures access to the internal services because it can terminate TLS traffic and serve as the single entry point for all services that need to be exposed.

Private load balancing solutions like MetalLB can be deployed to provide high availability of the IP/hostname for the ingress gateway. Using a combination of MetalLB and an ingress gateway improves security. There would be just one IP address/hostname to protect, and all network traffic to all exposed services would be encrypted.
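
A brief sketch of the MetalLB side, assuming an address range that is routable inside the firewall, could look like this; the ingress gateway’s LoadBalancer Service then receives a stable IP from the pool:

```
# Define the address pool and advertise it in layer 2 mode (range is illustrative).
kubectl apply -f - <<'EOF'
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
  - 10.20.30.240-10.20.30.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-l2
  namespace: metallb-system
spec:
  ipAddressPools:
  - ingress-pool
EOF
```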

Deploying and managing Kubernetes in firewalled environments introduces unique challenges, from image management and control plane access to DNS resolution and third-party integrations. However, with the right strategies and tools, organizations can harness the power of Kubernetes while maintaining the security, compliance, and operational stability required by their firewalled infrastructure. Techniques such as container registry image replication, DNS forwarding for specific queries, VPN tunnels, ingress gateways, and self-hosted monitoring tools ensure that Kubernetes remains a viable solution even in the most restricted environments.

Organizations aiming to adopt cloud native technologies behind firewalls must design their infrastructure thoughtfully, ensuring that security requirements are met without sacrificing the scalability and flexibility that Kubernetes offers. By leveraging the above solutions, Kubernetes clusters can operate effectively, even in highly restricted environments.
