Kubernetes - Interview Questions - Part 2
1. Horizontal Pod Autoscaler (HPA):
The Horizontal Pod Autoscaler automatically adjusts the number of replica pods in a Deployment or ReplicaSet based on observed CPU utilization or other application-provided metrics.
Horizontal scaling refers to adding or removing instances (pods) to meet the demand, while vertical scaling involves adjusting the resources (CPU, memory) allocated to each instance.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50
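For quick experiments, an equivalent autoscaler can also be created imperatively with kubectl (a minimal sketch targeting the same myapp-deployment used above):
kubectl autoscale deployment myapp-deployment --cpu-percent=50 --min=2 --max=10
kubectl get hpa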
2. Vertical Scaling:
Vertical scaling typically involves modifying the resource requests and limits in your pod’s YAML file. For example:
apiVersion: v1
kind: Pod
metadata:
  name: myapp-pod
spec:
  containers:
  - name: myapp-container
    image: myapp-image
    resources:
      requests:
        cpu: "200m"
        memory: "512Mi"
      limits:
        cpu: "1"
        memory: "1Gi"
You can apply these configurations using kubectl apply -f filename.yaml.
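To judge whether the requests and limits you picked are realistic, you can compare them against actual usage (this assumes the metrics-server add-on is installed in the cluster):
kubectl top pod myapp-pod          # current CPU and memory consumption
kubectl describe pod myapp-pod     # shows the configured requests and limits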
3. Scaling Nodes:
Scaling nodes in a Kubernetes cluster typically involves adding or removing nodes from the cluster.
Kubernetes itself does not provide automated node scaling out of the box the way it does for pods; it relies on add-ons such as the Cluster Autoscaler or on cloud-provider tooling.
However, cloud providers often offer managed Kubernetes services with built-in features for scaling the underlying infrastructure, including nodes.
AWS offers a managed Kubernetes service called Amazon Elastic Kubernetes Service (EKS).
You can use AWS Auto Scaling groups to automatically adjust the number of nodes in your EKS cluster based on workload demand.
Here’s a simplified example of how you can create an Auto Scaling Group for EKS:
aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name my-node-group \
--launch-template LaunchTemplateName=my-launch-template \
--min-size 2 \
--max-size 5 \
--desired-capacity 3 \
--vpc-zone-identifier subnet-12345678
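If you manage node groups with eksctl instead of raw Auto Scaling Groups, a roughly equivalent sketch looks like this (my-cluster and my-node-group are placeholder names):
eksctl create nodegroup --cluster=my-cluster --name=my-node-group --nodes=3 --nodes-min=2 --nodes-max=5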
4. Step-by-step guide to upgrading an EKS cluster:
- Prepare: Review the release notes and back up critical data.
- Update Tools: Ensure eksctl and kubectl are up to date.
- Plan: Choose the target EKS version and check compatibility.
- Upgrade Control Plane: Use eksctl or the AWS CLI to upgrade (see the example commands after this list).
- Monitor: Keep an eye on the upgrade progress.
- Upgrade Worker Nodes: Update the node group AMI, then drain and replace nodes.
- Verify: Ensure the cluster and applications are functioning correctly.
- Clean Up: Remove temporary resources.
- Monitor and Maintain: Set up monitoring and perform regular maintenance.
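As a rough illustration of the control-plane and worker-node steps, the eksctl commands usually look like this (cluster name, node group name, and target version are placeholders; check the eksctl documentation for your version):
eksctl upgrade cluster --name=my-cluster --version=1.29 --approve
eksctl upgrade nodegroup --cluster=my-cluster --name=my-node-group --kubernetes-version=1.29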
5. Upgrade an EKS cluster's node image version:
Check the current version:
aws eks describe-cluster --name CLUSTER_NAME --query "cluster.version"
Create a new node group with the updated AMI:
eksctl create nodegroup --cluster=CLUSTER_NAME --region=REGION --name=NEW_NODE_GROUP_NAME --node-ami=NEW_AMI_ID --node-type=INSTANCE_TYPE --nodes=DESIRED_NODES --nodes-min=MIN_NODES --nodes-max=MAX_NODES
Drain the old nodes (optional but recommended):
kubectl drain NODE_NAME --ignore-daemonsets
Delete the old node group:
eksctl delete nodegroup --cluster=CLUSTER_NAME --region=REGION --name=OLD_NODE_GROUP_NAME --approve
Verify the upgrade:
kubectl get nodes
This sequence of commands will help you upgrade the node image version of your EKS cluster.
6. How to troubleshoot kube-proxy:
Troubleshooting kube-proxy involves identifying and resolving issues related to network connectivity, service discovery, and load balancing within a Kubernetes cluster.
Here are some steps you can take to troubleshoot kube-proxy:
1. Check kube-proxy Logs:
Start by checking the logs of kube-proxy pods to see if there are any error messages or warnings:
kubectl logs -n kube-system <kube-proxy-pod-name>
2. Verify kube-proxy Status:
Ensure that the kube-proxy pods are running and in the Running state:
kubectl get pods -n kube-system | grep kube-proxy
3. Check Node Connectivity:
Verify that each node in the cluster can communicate with other nodes and with the Kubernetes API server:
kubectl get nodes
4. Service Discovery:
Ensure that DNS resolution for services is working correctly. You can test service discovery using nslookup or dig from within a pod:
kubectl exec -it <pod-name> -- nslookup <service-name>
5. Check iptables Rules:
kube-proxy uses iptables rules to manage service traffic. Check if the iptables rules are correctly configured:
iptables-save | grep KUBE
6. Restart kube-proxy:
If you suspect an issue with kube-proxy, you can try restarting the kube-proxy pods:
kubectl delete pods -n kube-system -l k8s-app=kube-proxy
7. Verify Cluster DNS Configuration:
Check if CoreDNS (or kube-dns) pods are running and if the DNS configuration is correct:
kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl logs -n kube-system <coredns-pod-name>
8. Check Node Resources:
Ensure that nodes have enough resources (CPU, memory) available to run kube-proxy and other necessary components.
9. Review Kubernetes Events:
Look for any relevant events in the Kubernetes event logs that might indicate issues with kube-proxy or networking:
kubectl get events --all-namespaces
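It can also help to confirm which mode (iptables or IPVS) kube-proxy is running in. On many clusters kube-proxy reads its configuration from a ConfigMap in kube-system (the ConfigMap name kube-proxy is typical for kubeadm-based clusters; managed offerings may name it differently):
kubectl -n kube-system get configmap kube-proxy -o yaml | grep -i mode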
7. How to make a highly available application:
- Redundancy and Load Balancing: Deploy multiple instances of your application across different zones or regions and use load balancers to distribute traffic evenly (see the example manifest after this list).
- Stateless Architecture: Design your application to be stateless, storing data externally. This enhances scalability and fault tolerance.
- Automated Scaling: Set up auto-scaling to adjust instance count based on demand, ensuring your app can handle traffic spikes.
- Health Checks and Monitoring: Implement health checks and monitoring tools to track instance health and performance metrics.
- Fault Tolerance and Resilience: Design with retry mechanisms, circuit breakers, and distributed tracing for graceful failure handling.
- Database Replication and Sharding: Use database replication and sharding for data distribution, improving performance and fault tolerance.
- Geographic Distribution: Consider deploying across multiple regions for resilience and latency reduction.
- Backup and Disaster Recovery: Regularly back up data and test disaster recovery procedures to ensure readiness for catastrophic failures.
- Security Best Practices: Implement encryption, strong authentication, and regular patching to protect against security threats.
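A minimal sketch tying the redundancy point above to concrete Kubernetes objects: spread replicas across availability zones with topologySpreadConstraints and limit voluntary disruptions with a PodDisruptionBudget (all names, images, and counts here are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      # Spread replicas evenly across availability zones
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: topology.kubernetes.io/zone
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: web
        image: web-image:latest
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2             # keep at least 2 replicas up during voluntary disruptions
  selector:
    matchLabels:
      app: web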
8. What are “readiness” and “liveness” probes?
Readiness Probe:
- Determines if a container is ready to serve traffic.
- Checked before adding a pod to the service endpoints.
- Ensures that only fully initialized containers receive traffic.
- Failing a readiness probe removes the pod’s IP address from service endpoints.
- Verifies if the application has completed initialization tasks.
- Supports HTTP, TCP, and Command probes.
Liveness Probe:
- Determines if a container is alive and healthy.
- Checked periodically after the pod is running.
- Ensures that containers remain responsive and functional during their lifecycle.
- Failing a liveness probe triggers a container restart.
- Verifies if the application remains responsive.
- Supports HTTP, TCP, and Command probes.
These probes are crucial for maintaining the health and stability of applications running within a Kubernetes cluster, ensuring that only healthy containers receive traffic and promptly restarting any containers that become unresponsive.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - name: example-container
        image: example-image:latest
        ports:
        - containerPort: 8080
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 15
          periodSeconds: 20
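Because the probes also support TCP and command checks, the same liveness check could instead be written with tcpSocket or exec (a sketch; the port and file path are placeholders):
livenessProbe:
  tcpSocket:
    port: 8080
  initialDelaySeconds: 15
  periodSeconds: 20
# or, as a command probe:
livenessProbe:
  exec:
    command: ["cat", "/tmp/healthy"]
  initialDelaySeconds: 15
  periodSeconds: 20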
9. How can we secure clusters in Kubernetes?
- Use Role-Based Access Control (RBAC): Restrict access to cluster resources based on user roles.
- Enable Network Policies: Control network traffic between pods and services to limit access (see the example policy after this list).
- Secure Cluster Communications: Encrypt communication between components and nodes using TLS.
- Update Kubernetes Components: Keep components and nodes updated with security patches.
- Harden Worker Node Security: Apply security measures to worker nodes like disabling unnecessary services and using secure runtimes.
- Enforce Pod Security Standards: Apply pod-level security policies (Pod Security Admission replaces the deprecated PodSecurityPolicy) to limit what pods are allowed to do.
- Monitor Cluster Activity: Use logging and monitoring tools to detect and respond to security incidents.
- Implement Image Security: Scan container images for vulnerabilities and enforce image signing.
- Limit Access to Sensitive Information: Store sensitive data securely and avoid exposing it unnecessarily.
- Regularly Review and Audit Configuration: Periodically review and audit cluster configuration to identify and address security issues.
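As one concrete example of the network-policy point above, a default-deny ingress policy for a namespace can look like this (the namespace and policy name are placeholders, and a CNI plugin that enforces NetworkPolicy is assumed):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: default
spec:
  podSelector: {}      # selects every pod in the namespace
  policyTypes:
  - Ingress            # no ingress rules listed, so all inbound traffic is denied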
10. What is RBAC and how to implement it?
RBAC (Role-Based Access Control) in Kubernetes allows you to control access to resources based on predefined roles and bindings:
- Define Roles: Create roles specifying permissions (e.g., read, write) for specific resources.
- Create Role Bindings: Associate roles with users, groups, or service accounts.
- Optional: Define cluster-wide roles (ClusterRole) and bindings (ClusterRoleBinding) for broader access control.
- Apply Configuration: Use YAML manifests to define roles and bindings, then apply them to the cluster with kubectl apply.
# Define a role named "pod-reader" that allows read-only access to pods in the "default" namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
---
# Define a role binding that assigns the "pod-reader" role to a user named "user1"
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: user1
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
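After applying both manifests, you can verify the binding with kubectl auth can-i, impersonating the user from the example above:
kubectl auth can-i list pods --namespace default --as user1     # expected: yes
kubectl auth can-i delete pods --namespace default --as user1   # expected: no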
11. Perform maintenance on a K8s node:
1. Cordon the Node (mark it unschedulable):
kubectl cordon <node-name>
2. Drain the Node (evict its workloads; draining also cordons the node if you skipped step 1):
kubectl drain <node-name> --ignore-daemonsets
3. Perform the Maintenance Tasks.
4. Reboot (if required).
5. Uncordon the Node:
kubectl uncordon <node-name>
6. Verify Node Status:
kubectl get nodes
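Before taking the node down, it can be worth confirming that only DaemonSet pods are still scheduled on it (standard kubectl; replace <node-name> as above):
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>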
12. What is a DaemonSet?
A DaemonSet is a Kubernetes resource that ensures a copy of a specified pod, typically a system-level service or agent, runs on every node (or a selected subset of nodes) in the cluster.
It is commonly used for deploying components such as monitoring agents, log collectors, or network proxies, ensuring that these critical pieces run on every node to provide cluster-wide functionality and observability.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: logging-agent
spec:
  selector:
    matchLabels:
      app: logging-agent
  template:
    metadata:
      labels:
        app: logging-agent
    spec:
      containers:
      - name: logging-agent
        image: logging-agent-image:latest
        # Add additional configuration as needed
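Once applied, you can confirm that one pod came up per node and watch the rollout (the DaemonSet name matches the manifest above):
kubectl get daemonset logging-agent -o wide
kubectl rollout status daemonset/logging-agent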
13. Purpose of Operators:
Operators in Kubernetes automate the management of complex applications by extending the Kubernetes API. They come in various types:
- Custom Resource Definition (CRD) Controllers: Extend Kubernetes API with custom resources and controllers to manage specific applications.
- Built-in Kubernetes Operators: Managed by Kubernetes itself, such as Deployment, StatefulSet, and DaemonSet controllers.
- Third-party Operators: Developed by third-party vendors or the community for managing specific applications or services.
- Operator Frameworks: Toolkits like Operator SDK and Kubebuilder simplify operator development.
Benefits include automation, self-healing, consistency, and extensibility, making them essential for efficiently managing stateful workloads in Kubernetes.
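As a rough illustration of the CRD approach, a custom resource type that an operator might watch can be declared like this (the group example.com and kind Backup are made-up names; the controller that reconciles these objects is not shown):
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: backups.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              schedule:
                type: string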
14. StatefulSets vs. Deployments:
Deployments:
- Deployments are primarily used for stateless applications.
- They manage replicated pods, ensuring a specified number of replicas are running at any given time.
- Deployments provide features such as rolling updates, rollback, and scaling.
- Pods managed by Deployments are typically interchangeable, and there’s no expectation of stable identity or persistent storage.
StatefulSets:
- StatefulSets are designed for stateful applications that require stable, unique identities and persistent storage.
- They maintain a sticky identity for each pod, allowing them to maintain state across restarts or rescheduling.
- StatefulSets guarantee the ordering and uniqueness of pod creation and termination, which is essential for applications like databases.
- StatefulSets support features such as persistent volumes, ordered pod creation, and stable network identities.
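A minimal StatefulSet sketch showing the two pieces a Deployment does not have: a serviceName pointing at a headless Service for stable network identities, and volumeClaimTemplates for per-pod persistent storage (all names, images, and sizes are illustrative):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db-headless      # headless Service that gives pods stable DNS names (db-0, db-1, ...)
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: db-image:latest
        volumeMounts:
        - name: data
          mountPath: /var/lib/data
  volumeClaimTemplates:         # one PersistentVolumeClaim per pod, retained across restarts
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi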
If you found this guide helpful, do click the 👏 button.
Follow for more learning content like this 😊
If there’s a specific topic you’re curious about, feel free to drop a personal note or comment. I’m here to help you explore whatever interests you!