Google Kubernetes Networking options explained & demonstrated

This blog post explores the different network modes available in Google Kubernetes Engine (GKE), including the differences between them and the advantages of each when creating a new GKE cluster. It will help guide you in choosing the most appropriate network mode, or if using an older network mode, decide whether it’s worth the trouble of switching. Additionally, you’ll learn how enabling network policies affects networking.

Network Modes Explained

GKE currently supports two network modes: routes-based and VPC-native. The network mode defines how traffic is routed between Kubernetes pods on the same node, between pods on different nodes of the same cluster, and between pods and other network-enabled resources in the same VPC, such as virtual machines (VMs).

It is extremely important to note that the network mode must be selected when creating the cluster and cannot be changed for existing clusters. You can, of course, create a new GKE cluster with a different network mode at any point in time, but the workloads then need to be migrated to it, which can be a big undertaking.

Routes-Based Network Mode

The routes-based mode is the original GKE network mode. The name “routes-based” comes from the fact that it uses Google Cloud Routes to route traffic between nodes. Outside of GKE, Google Cloud Routes are also used, for example, to route traffic to different subnets, to the public internet, and for peering between networks.

While each Kubernetes node has a single IP address from the subnet it is launched in, each pod also gets its own IP. This IP, however, is not registered in the VPC itself. So how does it work? First, each node reserves a /24 IP address range that is unique within the VPC and not in use by any subnet. GKE then automatically creates a new route, using the node's assigned /24 as the destination IP range and the node's instance as the next hop.

Figure 1: Nodes, instances, and custom static routes of GKE Cluster

Figure 1 shows a cluster with five nodes, which in turn correspond to five Compute Engine instances, each with an internal IP in the VPC. The appropriate custom static routes have also been generated automatically. Any pod created will get an IP in the /24 range of the node it's scheduled on. Note that even though a /24 range has 254 usable IP addresses, each GKE node can only run a maximum of 110 pods.
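If you want to see these routes for yourself, a quick way is to list them with the gcloud CLI. This is a minimal sketch, assuming gcloud is authenticated against the cluster's project and relying on GKE's `gke-` route naming convention:

```bash
# List the custom static routes GKE created for a routes-based cluster.
# Each route maps a node's /24 pod range (destRange) to the node's
# Compute Engine instance as the next hop.
gcloud compute routes list \
  --filter="name~^gke-" \
  --format="table(name, destRange, nextHopInstance)"
```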

Figure 2: Networking between GKE Nodes and the underlying VPC

How each pod connects to the underlying network of a node is determined by the Container Network Interface (CNI) being used in the Kubernetes cluster. In GKE, when network policies are disabled, there is no CNI plugin in use, and instead, Kubernetes’ default kubenet is used.

Figure 3: Networking inside a single GKE Node

Figure 3 illustrates that each pod's network is fully isolated from the node it runs on, using Linux network namespaces. A pod sees only its own network interface: eth0. The node connects to each pod's eth0 interface using a virtual Ethernet (veth) device, which is always created as a pair spanning two network namespaces (the pod's and the node's).

All veth interfaces on the node are connected together by a Layer 2 software bridge, cbr0. Linux kernel routing is set up so that any traffic destined for another pod on the same node goes through the cbr0 bridge, while traffic destined for another node is routed out of the node's eth0 interface. The kernel makes this decision based on whether or not the destination IP belongs to the /24 reserved for pods on the same node.
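If you have SSH access to a GKE node, you can inspect this setup directly. The commands below are a rough sketch for a routes-based (kubenet) node; interface names may differ slightly between GKE versions:

```bash
# Run on a routes-based (kubenet) GKE node.

# The cbr0 bridge that connects all pod veth interfaces on this node:
ip addr show cbr0

# One veth interface per running pod, attached to the bridge:
ip link show type veth

# The routing table: the node's pod /24 points at cbr0, while everything
# else leaves through the node's primary interface (eth0):
ip route
```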

VPC-Native Network Mode

The VPC-native network mode is newer and is recommended by Google Cloud for any new clusters. It is currently the default when using the Google Cloud Console, but not when using the REST API or most versions of the gcloud CLI. It is therefore important to check the selected mode carefully when creating a new cluster (or even better, make it explicit!).
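With the gcloud CLI, for example, you can make the choice explicit rather than relying on the default. The cluster name and zone below are placeholders:

```bash
# Explicitly create a VPC-native cluster (placeholder name and zone).
# --enable-ip-alias selects VPC-native mode; --no-enable-ip-alias would
# create a routes-based cluster instead.
gcloud container clusters create my-cluster \
  --zone europe-west1-b \
  --enable-ip-alias
```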

This network mode uses a nifty feature called alias IP ranges. Traditionally, a VM has had a single primary network interface with its own internal address inside the VPC, determined by the subnet range it lives in. Today, however, it is also possible to have one or more alias IP ranges assigned to that same network interface. These supplementary addresses can then be assigned to applications or containers running inside the VM without requiring additional network interfaces.

To keep the network organized and easy to comprehend, alias IP ranges do not have to come from the primary CIDR range of the subnet: it is possible to add one or more secondary CIDR ranges to a subnet and use subsets of them as alias IP ranges. GKE takes advantage of this feature and uses separate secondary address ranges for pods and for Kubernetes services.
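As a sketch of what this looks like in practice (the network, subnet, range names, and CIDRs below are made-up example values), you can create a subnet with two secondary ranges and point the cluster at them:

```bash
# Create a subnet with two secondary ranges: one for pods, one for services.
gcloud compute networks subnets create my-gke-subnet \
  --network my-vpc \
  --region europe-west1 \
  --range 10.0.0.0/20 \
  --secondary-range pods=10.4.0.0/14,services=10.8.0.0/20

# Create a VPC-native cluster that uses those secondary ranges.
gcloud container clusters create my-cluster \
  --zone europe-west1-b \
  --enable-ip-alias \
  --network my-vpc \
  --subnetwork my-gke-subnet \
  --cluster-secondary-range-name pods \
  --services-secondary-range-name services
```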

Figure 4: VPC-native networking between GKE Nodes and a VPC subnet, using Alias IP ranges and a secondary CIDR range

Each node still reserves a larger IP range for pods than strictly necessary, but the size can now be adjusted based on the maximum number of pods per node. For the absolute maximum of 110 pods per node, a /24 is still required, but for eight pods per node, for example, a /28 is sufficient.
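If you know your pod density up front, you can tell GKE at cluster creation time. The value below is just an example:

```bash
# Lower the per-node pod limit so GKE allocates a smaller pod range per
# node (with 8 pods per node, a /28 is enough instead of a /24).
gcloud container clusters create my-cluster \
  --zone europe-west1-b \
  --enable-ip-alias \
  --default-max-pods-per-node 8
```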

Other than the fact that a range smaller than a /24 can be allocated to the node for pods, there is no difference in how IPs are allocated inside a node. The same goes for networking inside a node: it is precisely the same as for clusters using the routes-based network mode (i.e., kubenet is used).

But because each VM now has its alias IP range defined on its network interface, there is no need for the custom static routes that routes-based clusters create for each GKE node and which are subject to GCP quotas. While this seems like a small difference between the two modes, it is highly advantageous.
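You can verify this on any node of a VPC-native cluster by inspecting the alias IP range on the underlying instance's network interface (the instance name and zone below are placeholders):

```bash
# Show the alias IP range assigned to a VPC-native GKE node's primary
# network interface (placeholder instance name and zone).
gcloud compute instances describe gke-my-cluster-default-pool-node-1 \
  --zone europe-west1-b \
  --format="value(networkInterfaces[0].aliasIpRanges)"
```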

Benefits of VPC-Native Clusters

There are absolutely no drawbacks to choosing a VPC-native cluster, but there are several benefits, an important one being security. Because pod addresses are now native to the VPC, firewall rules can be applied to individual pods, whereas for routes-based clusters, the finest level of granularity is an entire node. In this mode, the VPC network is also able to perform anti-spoofing checks.

There are other advantages as well, listed in the GKE documentation section on the benefits of creating a VPC-native cluster. Lastly, container-native load balancing is only available for VPC-native clusters.

Network Policies

By default, pods in Kubernetes accept traffic from any source inside the same network. Even if you're using a VPC-native cluster and can apply VPC firewall rules to pods, that is typically not practical enough to serve as a security solution at scale. With network policies, however, you can apply ingress and egress rules to pods based on selectors at the pod or Kubernetes namespace level. Generally speaking, you can think of network policies as a Kubernetes-native firewall.
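As a minimal sketch (the labels and policy name are hypothetical), the policy below only allows pods labelled app=frontend to send traffic to pods labelled app=backend in the same namespace:

```bash
# A minimal ingress policy (hypothetical labels): only pods labelled
# app=frontend may reach pods labelled app=backend in this namespace.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
EOF
```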

In GKE, when you enable network policies, you get a world-class networking security solution powered by Project Calico, without having to set it up or manage its components on your own. Not only does Calico implement all of Kubernetes' network policy features, but it also extends them with its own Calico network policies, which provide even more advanced network security features.
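Enabling it is a single flag at cluster creation time (it can also be enabled on an existing cluster, but that involves recreating the nodes):

```bash
# Create a cluster with network policy enforcement (Calico) enabled.
gcloud container clusters create my-cluster \
  --zone europe-west1-b \
  --enable-ip-alias \
  --enable-network-policy
```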

To enforce these policies, Calico replaces kubenet with its own CNI plugin. As previously mentioned, the CNI plugin in use determines how a pod is connected to the underlying network of the node it runs on. So regardless of which network mode you have selected, enabling network policies in GKE changes the networking inside the node.

Figure 5: Calico does not use a bridge, and instead uses L3 routing between pods.

With Calico, there is no L2 network bridge in the node, and instead, L3 routing is used for all traffic between pods, so that it can be secured using iptables and the Linux routing table. A Calico daemon runs on each node, which automatically configures routing and iptables rules based on the network policies and pods. While the veth pairs still exist between the pod and the node’s network namespace, if you were to inspect them directly on a node, you’d notice that their names start with cali instead of veth.
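Again, if you have SSH access to a node of such a cluster, you can see the difference for yourself. This is a sketch; the exact interface and chain names depend on the Calico version in use:

```bash
# Run on a GKE node of a cluster with network policies (Calico) enabled.

# One cali* interface per pod, instead of the veth*/cbr0 setup:
ip link | grep cali

# Per-pod /32 routes pointing at those interfaces (no bridge involved):
ip route | grep cali

# The iptables chains Calico programs to enforce network policies:
sudo iptables -L -n | grep -i cali | head
```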

Conclusion

Understanding how different network modes operate in GKE and how enabling network policies affects the networking inside a node will help you make better decisions and help you troubleshoot Kubernetes/GKE networking problems.

If you’re launching a new GKE Cluster today, it makes sense to use the VPC-native network mode. If you have an existing cluster in routes-based network mode and would benefit from the advantages provided by VPC-native clusters, it might be time to plan a migration.

Need help handling your cloud environment so you can focus on your core business? Contact CloudZone today.