Advanced · 25 min read · 2025-06-08
Kubernetes Cluster Management
Professional Kubernetes orchestration: deployments, services, ingress, and scaling strategies for container workloads in the cloud.
Tags: Kubernetes, Container Orchestration, Microservices, Cloud Native
⚓ Kubernetes Basics
Kubernetes (K8s) is an open-source platform for automating the deployment, scaling, and management of containerized applications. It abstracts the underlying infrastructure and provides a unified API.
🎯 Orchestration
Automated container management
🔄 Auto-Scaling
Dynamic resource adjustment
🛡️ Self-Healing
Automatic recovery
🏗️ Cluster Architecture
Control Plane
- API Server – central API endpoint for all components
- etcd – distributed key-value store for cluster state
- Scheduler – assigns pods to nodes
- Controller Manager – reconciles actual and desired cluster state
Worker Nodes
- kubelet – node agent for pod management
- kube-proxy – network proxy & load balancer
- Container Runtime – Docker, containerd, CRI-O
🚀 Cluster Setup
Local Development with kind
# Install kind (Kubernetes in Docker)
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
# Create cluster
kind create cluster --name dev-cluster
# Configure kubectl
kubectl cluster-info --context kind-dev-cluster
# Check cluster status
kubectl get nodes
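By default, kind creates a single-node cluster. For experiments that need scheduling across several nodes, a cluster config file can be passed in; the file name and node layout below are an assumed example:

```yaml
# kind-config.yaml — sketch of a multi-node cluster (file name and node count are assumptions)
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```

Create the cluster with `kind create cluster --name dev-cluster --config kind-config.yaml`; `kubectl get nodes` should then list three nodes.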
Production Cluster with kubeadm
# Initialize master node
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
# Configure kubectl for user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Install pod network (Flannel)
kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# Join worker nodes
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash <hash>
📦 Deployments
Deployment Manifest
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.21
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "250m"
            limits:
              memory: "128Mi"
              cpu: "500m"
          livenessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 30
            periodSeconds: 10
          readinessProbe:
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 5
Deployment Management
# Create deployment
kubectl apply -f deployment.yaml
# List deployments
kubectl get deployments
# Check pod status
kubectl get pods -l app=nginx
# Scale deployment
kubectl scale deployment nginx-deployment --replicas=5
# Rolling update
kubectl set image deployment/nginx-deployment nginx=nginx:1.22
# Rollback
kubectl rollout undo deployment/nginx-deployment
# Rollout status
kubectl rollout status deployment/nginx-deployment
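The rolling update above can be tuned via the deployment's update strategy. The fragment below belongs inside the Deployment `spec`; the values are illustrative and trade one surge pod for zero unavailability during updates:

```yaml
# Fragment of the Deployment spec — strategy values are illustrative
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # allow at most one extra pod during the update
      maxUnavailable: 0  # never drop below the desired replica count
```

With `maxUnavailable: 0`, old pods are only terminated after their replacements pass the readiness probe, which is what makes zero-downtime deployments possible.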
🌐 Services & Networking
Service Types
ClusterIP
Internal service-to-service communication
NodePort
External access via a static port on each node
LoadBalancer
Provisions the cloud provider's load balancer
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
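NodePort is described above but not shown as a manifest; a sketch for the same nginx pods follows (the service name and port 30080 are assumed values — NodePorts must fall in the default 30000–32767 range):

```yaml
# Sketch: NodePort service for the nginx pods (name and nodePort are assumptions)
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080  # must be in the default 30000-32767 range
```

The service is then reachable on every node at `http://<node-ip>:30080`.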
🚪 Ingress Controller
NGINX Ingress Setup
# Install NGINX Ingress Controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/cloud/deploy.yaml
# Create Ingress Resource
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx  # replaces the deprecated kubernetes.io/ingress.class annotation
  tls:
    - hosts:
        - app.example.com
      secretName: app-tls
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-service
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 3000
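The `cert-manager.io/cluster-issuer` annotation refers to a ClusterIssuer named `letsencrypt-prod`, which is not defined above. Assuming cert-manager is installed, a matching ACME issuer could look like this (the email address is a placeholder):

```yaml
# Sketch: ClusterIssuer for the annotation above (requires cert-manager; email is a placeholder)
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-key  # secret holding the ACME account key
    solvers:
      - http01:
          ingress:
            class: nginx  # solve challenges via the NGINX ingress
```

With this in place, cert-manager issues the certificate and stores it in the `app-tls` secret referenced by the ingress.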
📈 Auto-Scaling
Horizontal Pod Autoscaler (HPA)
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Percent
          value: 10
          periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 0
      policies:
        - type: Percent
          value: 100
          periodSeconds: 15
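The HPA derives its target from the documented formula desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A quick sanity check with hypothetical load numbers against the 70% CPU target above:

```shell
# Hypothetical load: 3 replicas averaging 90% CPU against the 70% target
current_replicas=3
current_util=90
target_util=70

# ceil(a/b) in integer arithmetic: (a + b - 1) / b
desired=$(( (current_replicas * current_util + target_util - 1) / target_util ))
echo "desired replicas: $desired"   # ceil(3 * 90 / 70) = 4
```

So sustained 90% utilization would scale the deployment from 3 to 4 replicas, subject to the `behavior` policies above.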
Vertical Pod Autoscaler (VPA)
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: nginx-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  updatePolicy:
    updateMode: "Auto"
  resourcePolicy:
    containerPolicies:
      - containerName: nginx
        maxAllowed:
          cpu: 1
          memory: 500Mi
        minAllowed:
          cpu: 100m
          memory: 50Mi
🔒 Security & RBAC
Role-Based Access Control
# Create Service Account
apiVersion: v1
kind: ServiceAccount
metadata:
  name: deploy-user
  namespace: production
---
# Define Role
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production
  name: deployment-manager
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list"]
---
# Create RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deploy-user-binding
  namespace: production
subjects:
  - kind: ServiceAccount
    name: deploy-user
    namespace: production
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
🛡️ Security Best Practices
- Least Privilege Principle – grant only the minimal required permissions
- Network Policies – micro-segmentation of the cluster network
- Pod Security Standards – restricted pod security contexts
- Image Security – vulnerability scanning and image signing
- Secrets Management – integrate external secret stores
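The network-policy point can be made concrete with a common starting pattern: a default-deny policy per namespace. The namespace follows the RBAC example above; note that enforcing NetworkPolicies requires a CNI that supports them (e.g. Calico or Cilium) — the plain Flannel setup shown earlier does not enforce them on its own:

```yaml
# Sketch: deny all ingress to pods in "production" unless another policy allows it
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # empty selector matches all pods in the namespace
  policyTypes:
    - Ingress        # no ingress rules listed, so all ingress traffic is denied
```

Allow-rules for specific traffic (e.g. from the ingress controller) are then layered on top as additional, more specific policies.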
🎯 Summary
Kubernetes provides a powerful platform for container orchestration in the cloud. With the right strategies, you achieve:
Achieved Goals
- ✅ Automated container orchestration
- ✅ Self-healing systems
- ✅ Efficient resource usage
- ✅ Zero-downtime deployments
Next Steps
- 🔍 Service Mesh (Istio/Linkerd)
- 📊 Advanced monitoring setup
- 🔄 GitOps workflows
- ⚡ Performance tuning