Kubernetes is an open source system for managing applications in a container environment. It automates the manual processes of deploying and scaling containerized applications, and can manage pools of containerized applications spanning public, private, and hybrid clouds. The name Kubernetes comes from the Greek word for "helmsman", the person who pilots a ship, an analogy with steering a ship full of containers. Kubernetes is also called "kube" and "k8s", a numeronym formed from the first letter, the last letter, and the count of the eight letters between them.
Kubernetes history
Beyond the etymology, Kubernetes was originally created by Google, growing out of the internal Borg project, whose containers powered Google's own infrastructure. Google claims more than fifteen years of experience with containers and says it "runs billions of containers per week", experience that fed directly into the software. In 2015, Google donated Kubernetes to the Linux Foundation as the seed technology for the newly formed Cloud Native Computing Foundation (CNCF). Although Kubernetes is an open source project, it is officially supported by major cloud providers, including Microsoft Azure and Google Cloud. Kubernetes gained acceptance among early adopters, grew steadily, and now occupies a leading position in the container management software space.

Today it is common for a production application to run in multiple containers distributed across multiple servers. Kubernetes allows these containers to be deployed and scaled across those servers to match the workload, including scheduling containers within a cluster, and helps manage the health of the containers it runs.

Kubernetes deployment
Kubernetes is deployed on a group of machines called a cluster. One machine in the cluster is designated as the cluster master, which runs the Kubernetes control plane processes. The other machines are nodes: the workers that report to the cluster master, which acts as the cluster's unified endpoint. The cluster master has full control of its nodes and manages their life cycle, including health assessment, upgrades, and repair. The cluster may also run special per-node agents with specific functions, for example log collection or network connectivity within the cluster.

A default node has one virtual CPU and 3.75 GB of RAM, the standard Compute Engine machine type. For more compute-intensive tasks, a higher baseline CPU platform can be chosen. Note that not all of a node's resources are available to the applications it is meant to run; some are required by the Kubernetes engine itself. A node's allocatable resources, the amount available to run applications, is the difference between its total resources and the amount reserved for the Kubernetes engine. For example, on a node with 4 GB of RAM, 25% is reserved for the Kubernetes engine and the remaining 75% can run applications; on a node with 8 GB of RAM, only 20% of the next 4 GB is additionally reserved. The Kubernetes engine uses far fewer CPU resources, reserving only 6% of the node's first core and just 1% of its second.

The cluster master runs the Kubernetes API server, which handles Kubernetes API calls and functions as the "hub" for the entire container cluster.

Kubernetes features
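The reservation rule described above can be sketched as a small calculation. This is only an illustration of the percentages quoted in this article (25% of the first 4 GB of RAM, 20% of the next 4 GB; 6% of the first CPU core, 1% of the second); the function names are made up for this example, and real reservation schedules vary by platform and version.

```python
def allocatable_memory_gb(total_gb: float) -> float:
    """Memory left for applications after the engine reservation.

    Illustrative only: reserves 25% of the first 4 GB and 20% of the
    next 4 GB, per the percentages quoted in the article.
    """
    reserved = 0.25 * min(total_gb, 4)
    if total_gb > 4:
        reserved += 0.20 * min(total_gb - 4, 4)
    return total_gb - reserved

def allocatable_cpu_cores(total_cores: float) -> float:
    """CPU left for applications: 6% of the first core, 1% of the second."""
    reserved = 0.06 * min(total_cores, 1)
    if total_cores > 1:
        reserved += 0.01 * min(total_cores - 1, 1)
    return total_cores - reserved

# A 4 GB node keeps 3.0 GB for applications; an 8 GB node keeps 6.2 GB.
print(round(allocatable_memory_gb(4), 2))
print(round(allocatable_memory_gb(8), 2))
# A 2-core node keeps 1.93 cores for applications.
print(round(allocatable_cpu_cores(2), 2))
```

The point of the sketch is that reservations are tiered, so doubling a node's RAM more than doubles the memory actually available to applications.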
Its robust feature set contributes to the popularity of Kubernetes. These features include:

- Automatic bin packing: automates the placement of containers based on the most efficient use of resources.
- Horizontal scaling: applications can be scaled up or down with a single command, or automatically to match CPU usage.
- Automated rollouts and rollbacks: Kubernetes rolls out application updates in stages rather than all at once, monitoring for health issues; if any are found, it rolls back to a stable version to preserve availability.
- Storage orchestration: works with a variety of storage solutions for added flexibility, from local storage to public cloud providers.
- Self-healing: restarts containers that fail, and kills and replaces containers that freeze or fail their health checks.
- Service discovery and load balancing: Kubernetes can assign each container its own IP address, with a DNS name, and the ability to distribute the load between them.
- Secret and configuration management: application configuration and secrets can be updated without rebuilding container images.
- Batch execution: manages batch and continuous integration workloads.