Application container technology – popularized by tools such as Docker and orchestrators such as Kubernetes – is revolutionizing application development, bringing previously unimagined flexibility and efficiency to the software development process.
Application containers such as Docker containers are lightweight and provision rapidly (we’re talking milliseconds), offering an alternative to virtual machines, which consume far more system resources and take much longer to boot.
Containers allow companies to operate at unprecedented scale and maximize the number of applications running on a minimum number of servers. The result is timely, efficient responses to users even as demand fluctuates across different parts of an application.
What Does Docker Have to Do With Kubernetes?
Containers are portable and lightweight alternatives to virtual machines, and Docker is a containerization platform. Docker has become the most popular container technology in the world. However, Docker technology alone is not enough for managing containerized applications. Kubernetes, among other platforms, is used in tandem with Docker to address these container management and orchestration challenges.
Kubernetes (or “k8s”) is an open source platform that automates container operations. It is one of the most popular container management and orchestration methods, and for good reason.
Kubernetes eases the burden of configuring, deploying, managing, and monitoring even the largest containerized applications. It helps manage container lifecycles and related application lifecycles and issues, including high availability and load balancing.
Kubernetes makes it easy and efficient to manage clusters: groups of hosts (dedicated servers or virtual machines) that run the Kubernetes control plane (historically called the ‘master node’) and the Kubernetes worker nodes, which run the containers. From version 1.14, Kubernetes supports Windows-based worker nodes running Windows containers alongside Linux-based worker nodes running Linux containers.
A Kubernetes node is typically a host serving either control-plane (master) or worker functionality. The control-plane node runs components such as the Kubernetes API server (which kubectl, the native command-line interface for Kubernetes, talks to). The worker nodes have everything necessary to run the application containers, including the container runtime.
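As a sketch, once a cluster is up and kubectl is pointed at it via a configured kubeconfig, the nodes and their roles can be inspected from the command line (the node name is a placeholder):

```shell
# List all nodes in the cluster with their roles, versions, and status
kubectl get nodes -o wide

# Show detailed information for one node: capacity, conditions,
# and the container runtime it uses
kubectl describe node <node-name>
```

Control-plane nodes appear with the `control-plane` role in the output, while worker nodes typically show no role label by default.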
A Kubernetes pod is one or more containers running together. Kubernetes gives each pod its own IP address, and a set of pods can be reached under a single DNS name.
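As a minimal sketch, a pod with a single container can be declared in YAML (the names and image here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-pod            # illustrative pod name
  labels:
    app: web               # labels let services select this pod later
spec:
  containers:
    - name: web
      image: nginx:1.25    # illustrative container image
      ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f pod.yaml` creates the pod, and `kubectl get pods -o wide` shows the IP address Kubernetes assigned to it.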
A Kubernetes service is a way to expose an application running on a set of pods as a network service. Pods come and go, and can therefore have short lifespans. Services give other pods a stable way to discover and track which pod IP addresses they should connect to.
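A minimal sketch of a service that selects the pods by label (the names match the illustrative `app: web` label above; any label selector works):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # illustrative service name
spec:
  selector:
    app: web               # routes traffic to any pod labeled app=web
  ports:
    - port: 80             # port the service exposes inside the cluster
      targetPort: 80       # port the selected pods listen on
```

The service receives a stable cluster IP and DNS name, so clients connect to `web-service` rather than to individual pod IPs that may change.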
Operators are clients of the Kubernetes API that control custom resources and enable automation of tasks such as deployments, backups, and upgrades by watching events, without editing Kubernetes code. The key attribute of an operator is the active, ongoing management of the application, including failover, backups, upgrades, and autoscaling. Operators offer a self-managing experience, with operational knowledge from experts baked in.
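As an illustrative sketch, an operator typically defines a custom resource; users declare the desired state in that resource, and the operator watches it and reconciles the cluster to match. The `PostgresCluster` kind, API group, and fields below are hypothetical:

```yaml
apiVersion: example.com/v1      # hypothetical API group defined by the operator
kind: PostgresCluster           # hypothetical custom resource the operator watches
metadata:
  name: orders-db
spec:
  replicas: 3                   # the operator manages failover across replicas
  version: "15"                 # the operator automates upgrades between versions
  backup:
    schedule: "0 2 * * *"       # the operator runs scheduled backups
```

The user never scripts the failover, upgrade, or backup steps; the operator encodes that expertise and applies it whenever the declared state or the cluster changes.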
A Kubernetes secret is a Kubernetes object that stores sensitive information, such as an OAuth token or SSH key. Keeping this data in a secret means it is only accessible when necessary, rather than being embedded in a pod specification or container image.
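A minimal sketch of a secret manifest (the key name is illustrative; values under `data` must be base64-encoded):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-token-secret       # illustrative secret name
type: Opaque
data:
  # base64-encoded value, e.g. produced by: echo -n 'my-token' | base64
  token: bXktdG9rZW4=
```

Pods can then consume the secret as an environment variable or a mounted file, so the plaintext value never appears in the pod specification itself.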
Leaseweb customers are already self-installing and managing their Kubernetes nodes on either bare metal dedicated servers or virtual machines. Installing a Kubernetes cluster is made easier by deployment tools like kubeadm. Beyond installation, the Kubernetes.io website also walks through management best practices; see the Kubernetes setup page for more information.
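As a sketch of a kubeadm-based install (assuming a container runtime and the kubeadm packages are already installed on each host; the CIDR, addresses, and tokens are placeholders and environment-specific):

```shell
# On the control-plane host: initialize the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Point kubectl at the new cluster for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each worker host: join the cluster using the token and hash
# printed at the end of 'kubeadm init'
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

After joining, a pod network add-on still needs to be installed before the nodes report as Ready; the Kubernetes setup documentation lists the available options.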