Category Archives: Kubernetes

KUBERNETES WITH FLANNEL

Kubernetes with Flannel

Kubernetes is an excellent tool for handling containerized applications at scale. But as you may know, working with Kubernetes is not an easy road, especially the backend networking implementation. Many developers have run into networking problems, and it can cost a lot of time to figure out how everything works.

In this article, we want to use the simplest implementation as an example to explain how Kubernetes networking works. So, let's dive deep!

KUBERNETES NETWORKING MODEL

Kubernetes manages a cluster of Linux machines. On each host machine, Kubernetes runs any number of Pods, and each Pod can hold any number of containers. A user's application runs inside one of those containers.

For Kubernetes, the Pod is the smallest unit of management, and all containers inside one Pod share the same network namespace. This means they have the same network interface and can reach each other via localhost.
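
As a quick illustration, here is a minimal Pod manifest with two containers (the names and images are made up for this sketch); because they share one network namespace, the sidecar can reach the web server on localhost:

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo            # hypothetical name
spec:
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox
    # this wget talks to the nginx container over the shared loopback interface
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 >/dev/null; sleep 5; done"]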

KUBERNETES NETWORKING MODEL REQUIREMENTS

All containers can communicate with all other containers without NAT.

All nodes can communicate with all containers without NAT.

The user can substitute Pods for containers in the above requirements, since containers share their Pod's network.

Basically, this means all Pods should be able to communicate easily with any other Pod in the cluster, even when they live on different hosts, and they address each other with their own IP addresses, just as if the underlying hosts did not exist. The host should also be able to connect to any Pod by its own IP address, without any address translation.
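
One way to see this model in action on a running cluster (the pod names and the IP below are hypothetical) is to look up one pod's IP and reach it directly from another pod on a different host:

# look up the overlay IP Kubernetes assigned to pod-b
$ kubectl get pod pod-b -o wide
NAME    READY   STATUS    RESTARTS   AGE   IP           NODE
pod-b   1/1     Running   0          5m    100.96.2.3   node-2

# from pod-a on another node, ping pod-b by its own IP, with no NAT in between
$ kubectl exec pod-a -- ping -c 1 100.96.2.3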

THE OVERLAY NETWORK

Flannel was created by CoreOS for Kubernetes networking, but it can also be used as a general software-defined networking solution for other purposes.

To meet the Kubernetes networking requirements, Flannel creates a flat network which runs above the host network; this is called an overlay network. Every container (Pod) is assigned one IP address in the overlay network, and containers communicate with each other by calling each other's IP address directly.
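
Flannel reads its network configuration from a small JSON document, typically stored in etcd under /coreos.com/network/config. A sketch matching the addressing used in this article might look like this (8285 is flannel's default UDP port):

{
  "Network": "100.96.0.0/16",
  "SubnetLen": 24,
  "Backend": {
    "Type": "udp",
    "Port": 8285
  }
}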

[Figure: the Flannel overlay network in a Kubernetes cluster]

In the above cluster there are three networks:

AWS VPC network: all instances are in one VPC subnet, 172.20.32.0/19. They have been allocated IP addresses in this range, and all hosts can connect to each other because they are on the same LAN.

Flannel overlay network: Flannel has created another network, 100.96.0.0/16. It is a bigger network which can hold 2^16 (65,536) addresses, it spans all Kubernetes nodes, and each pod is assigned one address in this range.

In-host Docker network: inside each host, Flannel assigns a 100.96.x.0/24 subnet to the pods on that host; it can hold 2^8 (256) addresses. The Docker bridge interface docker0 uses this network to create new containers.

With this design, each container has its own IP address, and all of them fall into the overlay subnet 100.96.0.0/16. Containers inside the same host reach each other through the Docker bridge docker0. To connect to containers on other hosts in the overlay network, Flannel uses the kernel route table and UDP encapsulation.
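
To make that concrete, the route table on one node might look roughly like the following sketch (the exact addresses and interface names depend on your setup):

$ ip route
default via 172.20.32.1 dev eth0
100.96.0.0/16 dev flannel0  proto kernel  scope link  src 100.96.1.0
100.96.1.0/24 dev docker0  proto kernel  scope link  src 100.96.1.1
172.20.32.0/19 dev eth0  proto kernel  scope link  src 172.20.33.102

A packet for a pod on another host misses the local /24 route, matches the 100.96.0.0/16 route, and is handed to the flannel0 TUN device, where the flanneld daemon wraps it in a UDP packet addressed to the destination host.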

PACKET COPY AND PERFORMANCE

Newer versions of flannel do not recommend using UDP encapsulation in production; it should only be used for debugging and testing. One reason is performance.

Though the flannel0 TUN device provides a simple way to get packets into and out of the kernel, it has a performance penalty: each packet has to be copied back and forth between user space and kernel space.
[Figure: packet copies between user space and kernel space]
As the figure shows, from the moment the original container process sends a packet, it is copied three times between user space and kernel space, which increases network overhead significantly.

THE VERDICT

Flannel is one of the simplest implementations of the Kubernetes network model. It uses the existing Docker bridge network plus an extra TUN device, with a daemon process that performs the UDP encapsulation. We hope this article helps you understand the fundamentals of Kubernetes networking; with this information you can start exploring the more interesting realm of Kubernetes networking.

AUTOSCALING WITH KUBERNETES

Autoscaling with Kubernetes

Customers using Kubernetes respond to end-user requests swiftly and ship software faster than ever before. But what happens when you build a service that is even more popular than you planned for, and you run out of compute? Since Kubernetes 1.3 there is a solution: autoscaling. On Google Compute Engine (GCE), Google Container Engine (GKE), and AWS, Kubernetes automatically scales the cluster up as soon as the user needs it, and scales it back down to save money when the user doesn't. Let us explore this article and learn more about autoscaling in Kubernetes.

WHAT TO SCALE?

In the context of a Kubernetes cluster, there are typically two things a user wants to scale:

Pods: for a given application, say you are running X replicas. If more requests come in than the group of X pods can handle, it is a good idea to scale beyond X replicas for that application. For this to work faultlessly, the nodes should have enough available resources so that those extra pods can be scheduled and executed successfully.

Nodes: the capacity of all nodes put together represents the cluster's capacity. If the workload demand goes beyond this capacity, the user has to add nodes to the cluster so that the workload can be scheduled and executed effectively. If the pods keep scaling up, at some point the nodes will run out of available resources, and the user will have to add more nodes to increase the overall resources available at the cluster level.

WHEN TO SCALE?

The decision of when to scale has two parts: measuring a certain metric continuously, and then, when the metric crosses a threshold value, acting on it by scaling a certain resource. For instance, the user might measure the average CPU consumption of their pods and trigger a scale operation when the CPU consumption crosses 80%. But one metric does not fit all use cases, and the right metric may vary for different kinds of applications.

So far we have only considered the scale-up part, but when workload usage drops, there should be a way to scale down gracefully, without affecting the existing requests being processed.

HOW TO SCALE?

In the case of pods, simply changing the number of replicas in the replication controller is enough. In the case of nodes, there must be a way to call the cloud provider's API, create a new instance, and make it part of the cluster, which is a relatively non-trivial operation and may take comparatively more time.
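
For pods, that change can be a single command; the resource names here are hypothetical:

# scale a replication controller to 5 replicas
$ kubectl scale rc helloworld --replicas=5

# the same works for a deployment
$ kubectl scale deployment helloworld --replicas=5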

KUBERNETES AUTOSCALING

With this understanding of autoscaling, let's discuss the detailed implementation and technical details of Kubernetes autoscaling.

CLUSTER AUTOSCALER

The cluster autoscaler is used in Kubernetes to scale the cluster, specifically the nodes, dynamically. It watches pods continuously, and if it finds a pod that cannot be scheduled, then, based on the PodCondition, it chooses to scale up. This is far more effective than looking at the CPU percentage of nodes in aggregate. Since creating a node can take a minute or more depending on the cloud provider and other factors, it may take some time until the pod can be scheduled. Within a cluster, the user might have multiple node pools. Also, the nodes can be spread across availability zones in a region, and how the user scales may vary based on topology. The cluster autoscaler provides various flags and ways to tune the node scaling behaviour.
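
As a sketch of those flags, on AWS the cluster autoscaler is typically pointed at one or more auto scaling groups with minimum and maximum node counts (the group name below is made up for this example):

$ cluster-autoscaler \
    --cloud-provider=aws \
    --nodes=1:10:k8s-worker-asg-1 \
    --balance-similar-node-groups \
    --scale-down-utilization-threshold=0.5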

For scaling down, it looks at the average utilization of a node, but other factors also come into play. For instance, if a pod with a pod disruption budget is running on a node and cannot be rescheduled elsewhere, then the node cannot be removed from the cluster. The cluster autoscaler provides a way to terminate nodes gracefully and gives pods up to 10 minutes to be rescheduled.
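
A pod disruption budget is itself just a small object. A minimal sketch, assuming a hypothetical "web" application and the policy/v1beta1 API of that era, might look like this:

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb                  # hypothetical name
spec:
  minAvailable: 2                # keep at least 2 replicas running during node drains
  selector:
    matchLabels:
      app: web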

HORIZONTAL POD AUTOSCALER

The horizontal pod autoscaler (HPA) is a control loop that watches and scales the pods in a deployment. This is done by creating an HPA object that refers to a deployment controller. The user can also define the threshold, as well as the minimum and maximum scale to which the deployment should grow or shrink. The original version of HPA, which is GA (autoscaling/v1), only supports CPU as a monitored metric. The current version of HPA, which is in beta, supports memory and other custom metrics. Once the user creates an HPA object and it is able to query the metrics for that pod, the user can see it reporting the details:

$ kubectl get hpa
NAME                 REFERENCE                       TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
helloetst-ownay28d   Deployment/helloetst-ownay28d   8% / 60%   1         4         1          23h
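
The HPA behind output like this can be created imperatively with kubectl autoscale, or declaratively. A minimal autoscaling/v1 sketch matching the numbers above (the deployment name is taken from the output; everything else is illustrative) would be:

$ kubectl autoscale deployment helloetst-ownay28d --min=1 --max=4 --cpu-percent=60

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: helloetst-ownay28d
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: helloetst-ownay28d
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 60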

WHAT IS A NAMESPACE IN KUBERNETES?

What Is A Namespace In Kubernetes?

Namespaces are intended for use in environments with many users spread across multiple teams or projects. Namespaces are a way to divide cluster resources between multiple users (via resource quota). In future versions of Kubernetes, objects in the same namespace will have the same access control policies by default.
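
As a quick sketch (the namespace name and the limits are hypothetical), a namespace is created and then capped with a ResourceQuota:

$ kubectl create namespace team-a

# quota.yaml - caps what the team-a namespace may consume
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "20"
    requests.cpu: "8"
    requests.memory: 16Gi

$ kubectl apply -f quota.yaml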

CAN WE USE RKT WITH KUBERNETES (AKA “RKTNETES”)?

Can we use rkt with Kubernetes (aka “rktnetes”)?

Kubernetes is a system for managing containerized applications across a cluster of machines. Kubernetes runs all applications in containers. In the default setup, this is done using the Docker engine, but Kubernetes also supports using rkt as its container runtime backend. This allows a Kubernetes cluster to leverage some of rkt's security features and native pod support.
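
In the rktnetes era, switching the runtime was a matter of kubelet flags; a hedged sketch (the rkt binary path depends on the installation, and other required kubelet flags are omitted) would be:

$ kubelet --container-runtime=rkt --rkt-path=/usr/bin/rkt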

KEY CONCEPTS OF KUBERNETES

Key concepts of Kubernetes

At a very high level, there are three key concepts:

Pods are the smallest deployable units that can be created, scheduled, and managed. A Pod is a logical collection of containers that belong to an application.

Master is the central control point that provides a unified view of the cluster. There is a single master node that controls multiple minions.

Minion is a worker node that runs tasks as delegated by the master. Minions can run one or more pods. Each provides an application-specific "virtual host" in a containerized environment.

WHAT IS KUBERNETES? EXPLAIN

What is Kubernetes? Explain

It is a massively scalable tool for managing containers, made by Google. It is used internally on huge deployments, and because of that it is perhaps the best option for production use of containers. It supports self-healing by restarting non-responsive containers, it packs containers in a way that makes them use fewer resources, and it has many other great features.
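
The self-healing mentioned above is typically driven by a liveness probe. A minimal sketch (the pod name, image, and endpoint are only illustrative) looks like this:

apiVersion: v1
kind: Pod
metadata:
  name: self-healing-demo
spec:
  containers:
  - name: app
    image: nginx
    livenessProbe:
      httpGet:                   # kubelet restarts the container if this check keeps failing
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10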