If you're preparing for technical interviews or customer conversations about Kubernetes, you'll get asked about these concepts. Here's what you actually need to know.

Pods vs Nodes vs Clusters: The Hierarchy

Cluster = Your entire Kubernetes environment

  • All the machines working together
  • Has master nodes (control plane) and worker nodes
  • Think: the whole apartment complex

Node = A single machine (physical or virtual)

  • Has CPU, RAM, disk
  • Runs the kubelet agent
  • Can host multiple pods
  • Think: one apartment building

Pod = Smallest deployable unit

  • One or more containers grouped together
  • Shares network and storage
  • Runs on a single node
  • Ephemeral — can be killed and recreated
  • Think: one apartment unit

The relationship: A cluster contains multiple nodes. Each node runs multiple pods. Each pod contains one or more containers.
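To make the hierarchy concrete, here's a minimal Pod manifest — names and image are illustrative, not from any real deployment. You declare the pod; the scheduler decides which node it lands on:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web              # illustrative name
spec:
  containers:
    - name: web
      image: nginx:1.27  # placeholder image
      ports:
        - containerPort: 80
```

Apply it with kubectl apply -f pod.yaml, then kubectl get pods -o wide shows which node it was scheduled onto.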

If a node dies, all its pods die. Controllers will recreate those pods on healthy nodes automatically.


Master vs Worker Nodes: What Makes Them Different

Master Node (Control Plane) runs the brain:

  • kube-apiserver: The front door — all commands go through here
  • etcd: The database storing all cluster state (which pods exist, where they're running, desired state)
  • kube-scheduler: Decides which worker node should run new pods
  • kube-controller-manager: Runs controllers that watch state and make decisions ("we need 3 replicas but only have 2, create another")

Worker Node runs the actual workloads:

  • kubelet: The agent that takes instructions from the control plane ("start this container") and keeps containers running
  • container runtime: Docker, containerd, or CRI-O — actually runs containers
  • kube-proxy: Maintains network rules so traffic to Services reaches the right pods

What determines if a node is master or worker?

Which components are installed on it. A master node runs the control plane components. A worker node runs the kubelet and a container runtime. Some clusters have dedicated masters; others run control plane components on worker nodes too.

Where are cluster values stored?

Everything lives in etcd — a distributed key-value store on the master node:

  • Which pods should exist
  • Current state of all resources
  • Configuration data
  • Secrets

When you run kubectl get pods, the API server reads from etcd. When a pod crashes, the kubelet reports to the API server, which updates etcd. Controllers watch the API server for mismatches between desired and actual state.
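That desired-vs-actual split is visible in every API object: spec holds what you asked for, status holds what was observed. An abbreviated, illustrative sketch of what kubectl get pod web -o yaml returns (field values are examples, not real output):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:                   # desired state — what you declared
  containers:
    - name: web
      image: nginx:1.27
status:                 # observed state — what the kubelet reports back
  phase: Running
  hostIP: 10.0.0.5      # illustrative: the worker node this pod landed on
```

Controllers reconcile by comparing these two halves and acting on any gap.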

Why etcd matters: If etcd goes down, the API server can't read or write cluster state; if etcd's data is lost, the cluster's state is gone. That's why production clusters run multiple etcd instances — typically three or five, so a quorum survives a failure.

Essential kubectl Commands

View what's running:

kubectl get pods              # List all pods
kubectl get nodes             # List all nodes
kubectl get pods -o wide      # Show which node each pod is on
kubectl describe pod <name>   # Detailed info about a pod

Debugging:

kubectl logs <pod-name>                    # View pod logs
kubectl logs <pod-name> -f                 # Follow logs live
kubectl exec -it <pod-name> -- /bin/bash   # Shell into container (use /bin/sh if bash isn't in the image)

Scaling & Management:

kubectl scale deployment <name> --replicas=5   # Change number of pods
kubectl delete pod <pod-name>                  # Delete a pod

Resource Requests & Limits: Why They Matter

When you deploy a pod, you can specify two things:

Requests = Minimum resources guaranteed

  • "I need at least 256MB RAM and 0.5 CPU to function"
  • Kubernetes uses this for scheduling — won't place pod on a node without enough resources
  • Your pod gets this much reserved

Limits = Maximum resources allowed

  • "Don't let me use more than 512MB RAM and 1 CPU"
  • Prevents one pod from consuming entire node
  • Pod gets throttled (CPU) or killed (memory) if it exceeds limits

Example:

resources:
  requests:
    memory: "256Mi"   # Mi = mebibytes
    cpu: "500m"       # 500m = 500 millicores = half a CPU core
  limits:
    memory: "512Mi"
    cpu: "1000m"      # 1000m = one full CPU core

Why this matters:

Without requests: The scheduler doesn't know how much capacity your pod needs, so it may pack too many pods onto one node and exhaust its resources.

Without limits: One misbehaving pod can starve its neighbors of CPU and memory, destabilizing the whole node.

Interview answer: "Requests help Kubernetes make smart scheduling decisions. Limits prevent resource contention. Together, they ensure reliable performance across all pods in the cluster."
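Putting the pieces together, here's a sketch of a Deployment that runs three replicas with the requests and limits above. The names, labels, and image are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                    # controller keeps 3 pods running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27      # placeholder image
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1000m"
```

If any of the three pods dies, the Deployment's ReplicaSet notices the mismatch and creates a replacement — the controller loop in action.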

The One Thing Recruiters Are Testing

When they ask "what's the difference between a pod and a node?" they're checking if you understand Kubernetes is about managing applications (pods) across infrastructure (nodes).

Pods are what you deploy. Nodes are where they run. The cluster orchestrates the whole thing.

Questions Recruiters Actually Ask

"What happens if a pod crashes?" The kubelet detects it, reports to the API server. If it's managed by a controller (Deployment, ReplicaSet), the controller creates a replacement pod automatically.

"What happens if a node fails?" All pods on that node die. Controllers notice the pods are gone and recreate them on healthy nodes. This is why you run multiple replicas.

"Can a pod run on multiple nodes?" No. A pod always runs on exactly one node. If you need redundancy, you run multiple pods (replicas) across different nodes.

"What's the difference between Docker and Kubernetes?" Docker runs containers on a single machine. Kubernetes orchestrates containers across multiple machines — handling scheduling, scaling, and recovery.

"Why use Kubernetes instead of Docker Swarm?" Kubernetes is more complex but offers advanced scheduling, a massive ecosystem, multi-cloud portability, and sophisticated deployment strategies. Docker Swarm is simpler but hits limitations faster at scale.

Know these answers and you can speak credibly about Kubernetes in any technical conversation.
