Category Archives: Kubernetes

AUTOSCALING WITH KUBERNETES

Customers using Kubernetes respond to end-user requests swiftly and ship software faster than ever before. But what happens when a service becomes even more popular than planned for and runs out of compute? Kubernetes 1.3 brings a solution: autoscaling. On Google Compute Engine (GCE), Google Container Engine (GKE), and AWS, Kubernetes will automatically scale the cluster up as soon as it is needed, and scale it back down to save money when it is not. Let us explore autoscaling in Kubernetes in more detail.

WHAT TO SCALE?

In the context of a Kubernetes cluster, there are typically two things a user wants to scale:

Pods: For a given application, say you are running X replicas. If more requests arrive than the group of X pods can handle, it is a good idea to scale beyond X replicas for that application. For this to work seamlessly, the nodes should have enough available resources so that the extra pods can be scheduled and executed successfully.

Nodes: The capacity of all nodes put together defines the cluster’s capacity. If workload demand grows beyond this capacity, the user has to add nodes to the cluster so that the workload can be scheduled and executed effectively. If the pods keep scaling, at some point the resources the nodes have available will run out, and the user will have to add more nodes to increase the overall resources available at the cluster level.

WHEN TO SCALE?

The choice of when to scale has two parts: continuously measuring a certain metric, and acting on it by scaling a certain resource when the metric crosses a threshold value. For instance, the user might measure the average CPU consumption of their pods and trigger a scale operation when the CPU consumption crosses 80%. But one metric does not fit all use cases; for different kinds of applications, the metric might vary.

So far we have only considered the scale-up part, but when workload usage drops, there should be a way to scale down gracefully, without affecting existing requests being processed.
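As a concrete sketch of such a threshold rule, assuming a hypothetical deployment named my-app, the built-in kubectl autoscale command expresses the 80% CPU example above in a single line:

# Keep average pod CPU around 80%, with 3 to 10 replicas (my-app is hypothetical)
$ kubectl autoscale deployment my-app --cpu-percent=80 --min=3 --max=10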

HOW TO SCALE?

In the case of pods, simply changing the number of replicas in the replication controller is enough. In the case of nodes, there has to be a way to call the cloud provider’s API, create a new instance, and make it a part of the cluster, which is a relatively non-trivial operation and may take comparatively more time.
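For pods, that change is a one-line operation; a minimal sketch, assuming a replication controller named my-rc (a hypothetical name):

# Scale an existing replication controller to 5 replicas
$ kubectl scale rc my-rc --replicas=5

The same command works against a deployment (kubectl scale deployment my-app --replicas=5).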

KUBERNETES AUTOSCALING

With this understanding of autoscaling, let’s look at the implementation and technical details of Kubernetes autoscaling.

CLUSTER AUTOSCALER

The cluster autoscaler is used in Kubernetes to scale the cluster, specifically its nodes, dynamically. It watches pods continuously, and if it finds that a pod cannot be scheduled, then based on the PodCondition it chooses to scale up. This is far more effective than looking at the CPU percentage of nodes in aggregate. Since node creation can take a minute or more, depending on the cloud provider and other factors, it may take some time until the pod can be scheduled. Within a cluster, the user might have multiple node pools. The nodes can also be spread across availability zones in a region, and how the user scales might vary based on topology. The cluster autoscaler provides various flags and ways to tune the node scaling behaviour.
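As an illustrative sketch of that tuning, here is how the autoscaler is commonly invoked with a few of its flags (the cloud provider, node group name, and values are assumptions, not from the original):

# --nodes=1:10:my-node-group        min:max size bounds for a node group
# --balance-similar-node-groups     spread scale-ups across similar groups / zones
# --scale-down-unneeded-time=10m    how long a node must be underutilized before removal
cluster-autoscaler \
  --cloud-provider=aws \
  --nodes=1:10:my-node-group \
  --balance-similar-node-groups \
  --scale-down-unneeded-time=10m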

For scaling down, it looks at the average utilization of a node, but other factors come into play. For instance, if a pod with a pod disruption budget is running on a node and cannot be rescheduled elsewhere, then the node cannot be removed from the cluster. The cluster autoscaler provides a way to terminate nodes gracefully, and gives pods up to 10 minutes to relocate.
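A pod disruption budget is itself a small API object; a minimal sketch (the name, label, and the policy/v1beta1 API version of that era are assumptions):

apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2          # keep at least 2 matching pods up during voluntary disruptions
  selector:
    matchLabels:
      app: web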

HORIZONTAL POD AUTOSCALER

The horizontal pod autoscaler (HPA) is a control loop which watches and scales the pods in a deployment. This is done by creating an HPA object that refers to a deployment controller. The user can also define the threshold, and the minimum and maximum scale to which the deployment should grow or shrink. The original version of HPA, which is GA (autoscaling/v1), only supports CPU as a metric that can be monitored. The current version of HPA, which is in beta, supports memory and other custom metrics. Once the user creates an HPA object and it is able to query the metrics for those pods, the user can see it reporting the details:

$ kubectl get hpa
NAME                 REFERENCE                       TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
helloetst-ownay28d   Deployment/helloetst-ownay28d   8% / 60%   1         4         1          23h
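For reference, an autoscaling/v1 manifest that would produce roughly the output above might look like the following sketch (the deployment itself is assumed to already exist):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: helloetst-ownay28d
spec:
  scaleTargetRef:
    apiVersion: apps/v1    # API group of the target; older clusters used extensions/v1beta1
    kind: Deployment
    name: helloetst-ownay28d
  minReplicas: 1
  maxReplicas: 4
  targetCPUUtilizationPercentage: 60   # matches the 60% target shown above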

WHAT IS A NAMESPACE IN KUBERNETES?

Namespaces are intended for use in environments with many users spread across multiple teams or projects. Namespaces are a way to divide cluster resources between multiple users (via resource quota). In future versions of Kubernetes, objects in the same namespace will have the same access control policies by default.
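A short sketch of this, assuming a hypothetical team-a namespace with a quota attached to it (apply with kubectl apply -f namespace-quota.yaml):

apiVersion: v1
kind: Namespace
metadata:
  name: team-a
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    pods: "10"             # at most 10 pods in this namespace
    requests.cpu: "4"      # total CPU requests capped at 4 cores
    requests.memory: 8Gi   # total memory requests capped at 8Gi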

CAN WE USE RKT WITH KUBERNETES (AKA “RKTNETES”)?

Kubernetes is a system for managing containerized applications across a cluster of machines. Kubernetes runs all applications in containers. In the default setup, this is performed using the Docker engine, but Kubernetes also features support for using rkt as its container runtime backend. This allows a Kubernetes cluster to leverage some of rkt’s security features and native pod support.
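At the time, switching the runtime was done through kubelet flags; a rough sketch under that assumption (the rkt binary path is hypothetical):

# Run the kubelet with rkt instead of the Docker engine
kubelet \
  --container-runtime=rkt \
  --rkt-path=/usr/local/bin/rkt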

KEY CONCEPTS OF KUBERNETES

At a very high level, there are three key concepts:

Pods are the smallest deployable units that can be created, scheduled, and managed. A pod is a logical collection of containers that belong to an application (a minimal example follows this list).

The master is the central control point that provides a unified view of the cluster. There is a single master node that controls multiple minions.

A minion is a worker node that runs tasks as delegated by the master. Minions can run one or more pods, and each provides an application-specific “virtual host” in a containerized environment.
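As a minimal sketch of the pod concept (the name and image are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: web-pod
  labels:
    app: web
spec:
  containers:
  - name: web              # a pod can hold several tightly coupled containers
    image: nginx:1.21
    ports:
    - containerPort: 80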

WHAT IS KUBERNETES? EXPLAIN

It is a massively scalable tool for managing containers, made by Google. It is used internally on huge deployments, and because of that it is perhaps the best option for production use of containers. It supports self-healing by restarting non-responsive containers, it packs containers so that they use fewer resources, and it has many other great features.
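The self-healing behaviour, for example, is driven by probes declared on a container; a minimal sketch of a pod-spec fragment (the path and port are hypothetical):

# Restart the container whenever the health check fails
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5   # wait before the first check
  periodSeconds: 10        # check every 10 seconds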