Implementation of an Ingress Controller using NGINX

Nidhi Ashtikar
May 3, 2024


1. Deployment of NGINX Ingress Controller with Load Balancing:

You need to deploy the NGINX Ingress Controller in your Kubernetes cluster.

# nginx-ingress-controller.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
spec:
  replicas: 3 # Adjust as needed
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - name: nginx-ingress-controller
          image: nginx/nginx-ingress:latest
          ports:
            - containerPort: 80
            - containerPort: 443

Apply this YAML using kubectl apply -f nginx-ingress-controller.yaml.
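
As written, the Deployment runs under the namespace's default service account, which has no permission to read Ingress objects. A real controller pod needs a ServiceAccount bound to RBAC rules so it can watch Ingresses, Services, Endpoints, Secrets, and ConfigMaps (the pod spec above would then set serviceAccountName: nginx-ingress). The sketch below only illustrates the shape, with assumed names; the manifests published with the NGINX Ingress Controller contain the complete, authoritative permission set.

# nginx-ingress-rbac.yaml (sketch)

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress
  namespace: nginx-ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "secrets", "configmaps", "pods"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["networking.k8s.io"]
    resources: ["ingresses/status"]
    verbs: ["update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress
subjects:
  - kind: ServiceAccount
    name: nginx-ingress
    namespace: nginx-ingress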

Default Backend Deployment YAML (Optional):

# default-backend-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-backend
  namespace: nginx-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-backend
  template:
    metadata:
      labels:
        app: default-backend
    spec:
      containers:
        - name: default-backend
          image: nginx:alpine
          ports:
            - containerPort: 80 # nginx:alpine listens on port 80 by default

Namespace YAML for NGINX Ingress Controller (create this first, since the Deployment and Service above reference the nginx-ingress namespace):

# nginx-ingress-namespace.yaml

apiVersion: v1
kind: Namespace
metadata:
  name: nginx-ingress

Apply this YAML using kubectl apply -f nginx-ingress-namespace.yaml.

2. Service for NGINX Ingress Controller:

You also need to expose the NGINX Ingress Controller deployment as a Service within your cluster.

# nginx-ingress-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
spec:
  selector:
    app: nginx-ingress
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
    - protocol: TCP
      port: 443
      targetPort: 443
  type: LoadBalancer

Apply this YAML using kubectl apply -f nginx-ingress-service.yaml.
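
Once the cloud provider has provisioned the load balancer, kubectl get svc -n nginx-ingress shows the external IP or hostname assigned to the nginx-ingress-controller service; that is the address example.com should ultimately resolve to.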

Default Backend Service YAML (Optional):

# default-backend.yaml

apiVersion: v1
kind: Service
metadata:
  name: default-backend
  namespace: nginx-ingress
spec:
  selector:
    app: default-backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80 # nginx:alpine serves on port 80

3. Ingress Resource:

Here’s an example of what the application code for /app1 and /app2 might look like:

# app1.py

from flask import Flask

app = Flask(__name__)

@app.route('/app1')
def hello_app1():
    return 'Hello from App 1!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)

# app2.py

from flask import Flask

app = Flask(__name__)

@app.route('/app2')
def hello_app2():
    return 'Hello from App 2!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
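
The Services below select Pods labeled app: app1 and app: app2 listening on port 8080, so each app also needs a Deployment, which this walkthrough doesn't show. A minimal sketch for app1, assuming you have built and pushed a container image for app1.py (the image name app1:latest is a placeholder); app2 mirrors this with its own labels and image:

# app1-deployment.yaml (sketch)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1
  namespace: default
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app1 # must match the selector in app1-service.yaml
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - name: app1
          image: app1:latest # placeholder; build your own image from app1.py
          ports:
            - containerPort: 8080 # the Flask app listens on 8080

Apply it with kubectl apply -f app1-deployment.yaml (and the equivalent for app2).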

For app1-service.yaml:

# app1-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: app1-service
  namespace: default # Specify the namespace if not using the default namespace
spec:
  selector:
    app: app1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080 # Assuming your app runs on port 8080

For app2-service.yaml:

# app2-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: app2-service
  namespace: default # Specify the namespace if not using the default namespace
spec:
  selector:
    app: app2
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080 # Assuming your app runs on port 8080

You would apply these service definitions using kubectl apply -f app1-service.yaml and kubectl apply -f app2-service.yaml.

Define Ingress resources to configure how traffic should be routed.

# example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx # assumes an IngressClass named "nginx" exists in the cluster
  rules:
    - host: example.com
      http:
        paths:
          - path: /app1
            pathType: Prefix
            backend:
              service:
                name: app1-service
                port:
                  number: 80
          - path: /app2
            pathType: Prefix
            backend:
              service:
                name: app2-service
                port:
                  number: 80

Apply this YAML using kubectl apply -f example-ingress.yaml.
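
If you deployed the optional default backend from step 1, one way to use it (a sketch, not part of the original manifests) is the spec.defaultBackend field of the Ingress, which catches requests that match none of the rules. Keep in mind that an Ingress can only reference Services in its own namespace, so the default-backend Service would have to exist in the same namespace as this Ingress:

# Added under spec: in example-ingress.yaml (sketch)
  defaultBackend:
    service:
      name: default-backend # must be a Service in the same namespace as the Ingress
      port:
        number: 80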

4. Traffic Routing:

Once the Ingress resource is applied, the NGINX Ingress Controller watches for changes and configures itself accordingly.

Incoming traffic to example.com/app1 or example.com/app2 will be routed to the respective services (app1-service and app2-service).
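
To verify this end to end, point a DNS record for example.com at the load balancer address from step 2, or pass the host header explicitly for a quick check, e.g. curl -H "Host: example.com" http://<load-balancer-address>/app1, which should return the App 1 response (and /app2 the App 2 response).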

5. Load Balancing:

The NGINX Ingress Controller service is now configured as a LoadBalancer type, which allows cloud providers to provision a load balancer (e.g., AWS ELB, GCP LoadBalancer) to distribute traffic across the NGINX Ingress Controller replicas.
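
A cloud load balancer is only provisioned when the cluster runs on a provider that supports it (or has a bare-metal implementation such as MetalLB installed); otherwise the service's external IP stays pending. As an alternative sketch, with arbitrary ports chosen from the NodePort range, the controller could instead be exposed as a NodePort service:

# nginx-ingress-service-nodeport.yaml (alternative sketch)

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: nginx-ingress
spec:
  type: NodePort
  selector:
    app: nginx-ingress
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 30080 # arbitrary value in the 30000-32767 NodePort range
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
      nodePort: 30443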

6. Proxying and Response Handling:

Requests are proxied to the appropriate services based on the Ingress rules, and responses are forwarded back to the original requester.

This is how you deploy an NGINX Ingress Controller with load balancing capabilities to manage traffic routing using Ingress resources in a Kubernetes cluster.

If you found this guide helpful, do click the 👏 button.

Follow for more learning like this 😊

If there’s a specific topic you’re curious about, feel free to drop a personal note or comment. I’m here to help you explore whatever interests you!

Thanks for spending your valuable time learning to enhance your knowledge!
