Introduction
Kubernetes is an open-source container orchestration system that helps automate the deployment, scaling, and management of containerized applications. It was originally developed by Google and is now maintained by the Cloud Native Computing Foundation (CNCF).
One of the key features of Kubernetes is its ability to abstract away the underlying infrastructure, allowing developers to focus on building and deploying their applications rather than worrying about the infrastructure itself. This is achieved through a declarative configuration model, which allows developers to specify the desired state of their applications and let Kubernetes handle the rest.
Kubernetes is also highly scalable, with the ability to easily add or remove resources as needed to meet the demands of the applications it is running. It can also automatically recover from failures, restarting or rescheduling workloads so that applications keep running with minimal disruption.
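To make the declarative model concrete, here is a minimal sketch of a Deployment manifest (the names and image are hypothetical placeholders). It declares a desired state of three replicas; Kubernetes continuously reconciles the cluster toward that state, so if a pod crashes or a node fails, a replacement pod is created automatically.

```yaml
# deployment.yaml -- declares the desired state; Kubernetes reconciles toward it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # hypothetical name
spec:
  replicas: 3                # desired state: three running copies
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25    # any container image works here
```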
Kubernetes (K8s) Architecture

The architecture of Kubernetes consists of the following main components (a few commands for inspecting them on a live cluster follow the list):
- Master node: The master node (in newer versions called the control plane node) is the main control plane of the Kubernetes cluster. It is responsible for maintaining the desired state of the cluster, such as which applications are running and on which nodes, and it exposes a REST API that tools and libraries use to interact with the cluster.
- Worker nodes: Worker nodes are the machines (physical or virtual) that run the applications and services. They communicate with the master node to receive instructions and report their status.
- etcd: etcd is a distributed key-value store used by the master node to hold the cluster's configuration data: the desired state of the cluster, its current state, and other metadata.
- kubelet: The kubelet is an agent that runs on each worker node and makes sure the containers described in pod specifications are running and healthy. It communicates with the master node to receive instructions and reports the status of the containers running on the node.
- kube-proxy: kube-proxy is a network proxy that runs on each worker node and maintains the network rules that route traffic to Services and the containers behind them.
- Kubernetes API server: The Kubernetes API server is the component of the master node that exposes the REST API for interacting with the cluster. It is responsible for validating and processing API requests and for reading data from and writing data to the etcd store.
- Scheduler: The scheduler is a component of the master node responsible for assigning work to the worker nodes. It decides where to place each newly created pod based on the pod's resource requirements and the available capacity of the nodes.
- Controller manager: The controller manager is a component of the master node that runs various controllers to keep the cluster at its desired state. These include the replication controller, which ensures that the desired number of pod replicas is running, and the node controller, which monitors node health and availability.
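As a quick illustration, these components can be observed on a live cluster with a few standard kubectl commands (output varies by cluster; on many distributions the control plane components themselves run as pods in the kube-system namespace):

```bash
# Show the API server endpoint the client is connected to
kubectl cluster-info

# List the master and worker nodes with their status and versions
kubectl get nodes -o wide

# On many clusters, etcd, the scheduler, the controller manager,
# and kube-proxy run as pods in the kube-system namespace
kubectl get pods -n kube-system
```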
In addition to the core components of the master node and worker nodes, Kubernetes also includes several other tools and services.
kubectl is a command-line tool for interacting with the Kubernetes API. It enables users to perform the following tasks from the command line (representative commands are sketched after this list):
- Deploying applications: kubectl can be used to create and deploy new applications to a Kubernetes cluster. It supports various types of applications, including containerized applications, batch jobs, and custom resource definitions (CRDs). To deploy an application, users can create a configuration file (usually in YAML format) that describes the desired state of the application, such as the number of replicas and the resources it requires. The configuration file can then be passed to kubectl to create and deploy the application.
- Managing applications: kubectl can be used to view and manage the applications running in a Kubernetes cluster. Users can use kubectl to view the status of their applications, such as the number of replicas and their resource usage. They can also scale their applications up or down, update their configurations, and perform rolling deployments to update the application without downtime.
- Troubleshooting applications: kubectl can be used to troubleshoot issues with applications in a Kubernetes cluster. It provides commands to view container logs, inspect the events generated by the cluster, and open shells inside containers to debug issues.
- Interacting with the API server: kubectl communicates with the Kubernetes API server to perform its operations. It makes use of the API server’s REST API to create, update, and delete resources in the cluster. kubectl also supports various authentication methods to secure communication with the API server.
- Using contexts: kubectl supports multiple contexts, which allow users to switch between different clusters and namespaces. A context ties together a cluster, a user, and a namespace, and users can switch between contexts using the kubectl config use-context command. This is useful when working with multiple clusters or when working with different environments (e.g., staging and production) within the same cluster.
- Customizing output: kubectl allows users to customize the output of its commands using flags such as -o json, -o yaml, and -o wide. Users can also use the --template flag (together with -o go-template) to supply a Go template that formats the output.
- Extending kubectl: kubectl can be extended with plugins, which are standalone programs that provide additional functionality. A plugin can be written in any language; it is an executable named kubectl-<name> on the user's PATH and is invoked as kubectl <name>. Users can also write their own plugins to add custom functionality to kubectl.
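The commands below sketch each of these tasks in practice; names such as my-app and the staging context are hypothetical placeholders:

```bash
# Deploying: apply a manifest that describes the desired state
kubectl apply -f deployment.yaml

# Managing: inspect, scale, and roll out updates without downtime
kubectl get deployments
kubectl scale deployment my-app --replicas=5
kubectl set image deployment/my-app my-app=myuser/my-app:1.1
kubectl rollout status deployment/my-app

# Troubleshooting: logs, cluster events, and an interactive shell
kubectl logs deployment/my-app
kubectl get events
kubectl exec -it deploy/my-app -- /bin/sh

# Contexts: switch between clusters and namespaces
kubectl config get-contexts
kubectl config use-context staging

# Customizing output
kubectl get pods -o wide
kubectl get pods -o jsonpath='{.items[*].metadata.name}'
```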
How to deploy applications on K8s
To deploy code on Kubernetes, you package your code into a container image and then deploy the image to the cluster. Here is a high-level overview of the process (a minimal end-to-end sketch follows the steps):
- Create a container image: The first step is to create a container image that includes your code and any dependencies it requires. A container image is a lightweight, standalone, executable package that contains everything needed to run an application, including the application code, libraries, dependencies, and runtime. You can create a container image using a tool like Docker.
- Push the image to a registry: Once you have created the container image, you need to push it to a container registry, such as Docker Hub or Google Container Registry. A container registry is a service that stores container images and makes them available for deployment.
- Create a Kubernetes deployment: Next, you need to create a Kubernetes Deployment resource that manages the deployment of your containerized application. A Deployment resource defines the desired state of your application, such as the number of replicas and the resources it requires. You can create a Deployment resource using a manifest file, which is a YAML- or JSON-formatted file that describes the configuration of the Deployment.
- Apply the deployment to the cluster: Once you have created the Deployment resource, you can apply it to your Kubernetes cluster using the kubectl apply command. This will create the necessary resources in the cluster, such as Pods and ReplicaSets, to run the specified number of replicas of your application.
- Expose the application: To access the application from outside the cluster, you need to expose it using a Kubernetes Service resource. A Service resource defines a logical set of pods and a policy to access them. You can create a Service resource using a manifest file and apply it to the cluster using the kubectl apply command.
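As a concrete sketch of the first two steps, assuming Docker is installed and myuser is a hypothetical Docker Hub account:

```bash
# Build an image from the Dockerfile in the current directory
docker build -t myuser/my-app:1.0 .

# Push the image to a registry so the cluster can pull it
docker push myuser/my-app:1.0
```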
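The remaining steps can then look like the following, using the image pushed above (all names and ports are hypothetical assumptions):

```yaml
# app.yaml -- a Deployment and a Service that exposes it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: myuser/my-app:1.0   # image pushed in the previous step
        ports:
        - containerPort: 8080      # assumes the app listens on port 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # use NodePort on clusters without a cloud load balancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

Applying the file creates both resources, and kubectl can then show the resulting Pods and the Service's external address:

```bash
kubectl apply -f app.yaml
kubectl get pods -l app=my-app    # the two replicas
kubectl get service my-app        # shows the external IP once provisioned
```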