Understanding Kubernetes Architecture: A Comprehensive Overview
Introduction to Kubernetes
Kubernetes, often abbreviated as K8s, is an open-source platform designed to automate deploying, scaling, and operating application containers. Originally developed by Google and now maintained by the Cloud Native Computing Foundation, Kubernetes has become the de facto standard for container orchestration. It handles the deployment and management of containerized applications across a cluster of machines, providing tools to deploy applications, scale them as needed, roll out changes to existing containerized applications, and optimize the use of the hardware underlying your containers.
Key Components of Kubernetes Architecture
Kubernetes follows a control-plane/worker-node architecture. Here’s an overview of the main components involved:
1. Control Plane (Master Node)
The control plane's components make global decisions about the cluster (for example, scheduling), as well as detecting and responding to cluster events (for example, starting up a new pod when a deployment's replicas field is unsatisfied).
- API Server (kube-apiserver): Acts as the front end to the control plane. The API server is the only Kubernetes component that connects with the cluster's shared state (etcd), allowing users, management tools, and other components to interact with the cluster.
- etcd: A consistent and highly-available key value store used as Kubernetes’ backing store for all cluster data.
- Scheduler (kube-scheduler): Watches for newly created pods with no assigned node, and selects a node for them to run on.
- Controller Manager (kube-controller-manager): Runs controller processes which regulate the state of the cluster, managing the lifecycle of different resources like nodes, namespaces, and persistent storage.
- Cloud Controller Manager (cloud-controller-manager): Lets you link your cluster into your cloud provider’s API, and separates out the components that interact with that cloud platform from components that just interact with your cluster.
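All of these control-plane components share one pattern: watch the desired state, compare it with the observed state, and act on the difference. The following is a minimal, illustrative Python sketch of that reconciliation loop; the function and pod names are invented for this article and are not real Kubernetes code:

```python
# Illustrative sketch of the reconciliation pattern used by Kubernetes
# controllers: compare desired state with observed state and return the
# actions needed to converge them. Purely pedagogical, not real K8s code.

def reconcile(desired_replicas: int, running_pods: list[str]) -> list[str]:
    """Return the actions a controller would take to reach the desired state."""
    actions = []
    diff = desired_replicas - len(running_pods)
    if diff > 0:
        # Too few pods: create the missing ones.
        actions += [f"create pod-{i}"
                    for i in range(len(running_pods), desired_replicas)]
    elif diff < 0:
        # Too many pods: delete the surplus.
        actions += [f"delete {name}" for name in running_pods[desired_replicas:]]
    return actions

print(reconcile(3, ["pod-0"]))           # scale up: two create actions
print(reconcile(1, ["pod-0", "pod-1"]))  # scale down: one delete action
```

In the real system, controllers run this loop continuously against the API server, so the cluster keeps converging toward the declared state even after failures.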
2. Node (Worker Node)
Nodes are the worker machines that run your applications. Each node includes the services necessary to run pods and is managed by the control plane. These services include:
- Kubelet: An agent that runs on each node in the cluster. It ensures that containers are running in a pod.
- Kube-proxy: Maintains network rules on nodes. These network rules allow network communication to your pods from network sessions inside or outside of your cluster.
- Container Runtime: The software responsible for running containers. Kubernetes supports containerd, CRI-O, and any other implementation of the Kubernetes CRI (Container Runtime Interface); built-in support for Docker Engine via dockershim was removed in Kubernetes 1.24.
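To make the kube-proxy idea concrete, here is a toy Python sketch of the effect its network rules achieve: traffic addressed to a Service gets spread across that Service's backend Pods. The class name and round-robin policy are invented for illustration; the real kube-proxy programs kernel-level rules (iptables/IPVS) rather than proxying in application code:

```python
# Toy model of what kube-proxy's rules accomplish: each new connection
# to a Service is forwarded to one of its backend pod IPs.
# Invented names; not the actual kube-proxy mechanism.
import itertools

class ToyServiceProxy:
    def __init__(self, backend_ips: list[str]):
        # Cycle through backends round-robin for simplicity.
        self._cycle = itertools.cycle(backend_ips)

    def route(self) -> str:
        """Pick the backend pod IP for the next connection."""
        return next(self._cycle)

proxy = ToyServiceProxy(["10.0.0.4", "10.0.0.5"])
print([proxy.route() for _ in range(4)])  # alternates between the two pods
```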
3. Add-ons
These are components and services that support additional features and functionalities in Kubernetes:
- DNS: Every Kubernetes cluster should run cluster DNS, since Kubernetes Services are discovered through their DNS names.
- Web UI (Dashboard): Kubernetes Dashboard is a general-purpose, web-based UI for Kubernetes clusters.
- Container Resource Monitoring: Provides central storage for containers' time-series metrics and provides data for Kubernetes built-in horizontal pod auto-scaling.
- Cluster-Level Logging: Mechanisms for saving logs from containers, which can help in debugging or security compliance monitoring.
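As an illustration of the DNS add-on, a Service is resolvable inside the cluster at a predictable name of the form <service>.<namespace>.svc.<cluster-domain>. The small helper below just assembles that name; note that the default cluster domain, cluster.local, is configurable per cluster:

```python
# Build the in-cluster DNS name for a Kubernetes Service, following the
# standard <service>.<namespace>.svc.<cluster-domain> convention.

def service_dns_name(service: str, namespace: str = "default",
                     cluster_domain: str = "cluster.local") -> str:
    """Return the DNS name at which a Service is reachable inside the cluster."""
    return f"{service}.{namespace}.svc.{cluster_domain}"

print(service_dns_name("my-svc", "my-ns"))  # my-svc.my-ns.svc.cluster.local
```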
How Kubernetes Works
Here’s a simplified workflow in Kubernetes:
- Deployment: When you deploy applications on Kubernetes, you tell the cluster to start the desired containers. The API server takes your commands and communicates them to the appropriate components to get the containers started.
- Scheduling: The scheduler watches for requests from the API server for new containers and assigns them to nodes.
- Execution: Each node has a Kubelet, which instructs the node’s container runtime to start the container(s).
- Service Discovery and Load Balancing: Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance traffic across them.
- Storage Orchestration: Kubernetes allows you to automatically mount a storage system of your choice, whether from local storage, a public cloud provider, or a network storage system.
- Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don't respond to your user-defined health check, and doesn't advertise them to clients until they are ready to serve.
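The scheduling step above can be sketched as the two phases kube-scheduler actually uses: first filter out nodes that cannot run the pod, then score the remaining feasible nodes and pick the best one. The Python below is a deliberately simplified, hypothetical version that uses a single criterion (free CPU) in place of the scheduler's many real predicates and priorities:

```python
# Simplified sketch of kube-scheduler's filter-then-score approach.
# One invented criterion (free CPU cores); real scheduling considers
# many more factors (affinity, taints, resource requests, etc.).
from typing import Optional

def schedule(pod_cpu: float, nodes: dict[str, float]) -> Optional[str]:
    """Return the best node for the pod, or None if no node fits."""
    # Filter phase: keep only nodes with enough free CPU.
    feasible = {name: free for name, free in nodes.items() if free >= pod_cpu}
    if not feasible:
        return None  # pod stays Pending
    # Score phase: prefer the node with the most free CPU.
    return max(feasible, key=feasible.get)

print(schedule(0.5, {"node-a": 0.2, "node-b": 1.5, "node-c": 0.8}))  # node-b
print(schedule(2.0, {"node-a": 1.0}))                                # None
```

A pod that no node can accept simply stays Pending until the cluster changes, which is exactly the behavior you see from the real scheduler.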