AWS Load Balancer - Network Load Balancer

Nidhi Ashtikar
4 min read · Apr 11, 2024


This article explains the Network Load Balancer and its key components.

A Network Load Balancer (NLB) is a type of load balancer used to distribute incoming network traffic across multiple servers. It operates at the network layer (Layer 4) of the OSI model, meaning it forwards traffic based on network-level information such as IP addresses and TCP/UDP ports.

In simple words, a Network Load Balancer (NLB) is like a traffic cop for the internet. Imagine you have a bunch of servers serving a website or an app. When someone wants to visit your site or use your app, their request gets sent to the NLB first.

The NLB then decides which server should handle the request, based on factors like how busy each server is. This helps make sure that no single server gets overwhelmed with too much traffic, keeping everything running smoothly for your users.

So basically, NLBs help spread the load evenly across all your servers, keeping your website or app up and running smoothly for everyone.

Key features of Network Load Balancers typically include:

  1. High Throughput: NLBs are designed to handle high volumes of network traffic efficiently.
  2. Low Latency: They aim to minimize the delay in processing and forwarding incoming requests.
  3. Scalability: NLBs can scale horizontally by adding more servers to the backend pool as the demand for resources increases.
  4. Health Checking: They regularly monitor the health of the servers in the backend pool and route traffic away from any unhealthy servers.
  5. Session Persistence: Some NLBs support session persistence, ensuring that requests from the same client are sent to the same backend server (see the sketch after this list).
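
As an illustration of point 5, here is a minimal boto3 sketch that enables source-IP stickiness on an NLB target group. The target group ARN is a placeholder, and the snippet assumes your AWS credentials and region are already configured.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN -- replace with your own target group.
TARGET_GROUP_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/my-nlb-targets/abcdef1234567890"
)

# Enable source-IP stickiness so repeat connections from the same
# client IP keep landing on the same backend target.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TARGET_GROUP_ARN,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "source_ip"},
    ],
)
```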

Listener:

A listener is a network endpoint that receives incoming traffic destined for a specific port or range of ports. NLBs can have multiple listeners, each configured to handle traffic for different services or applications.
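
As a rough illustration, the boto3 sketch below adds a TCP listener on port 80 to an existing NLB and forwards its traffic to a target group. Both ARNs are placeholders, and the code assumes the load balancer and target group (covered in the next sections) already exist.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARNs -- substitute your own resources.
NLB_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "loadbalancer/net/my-nlb/0123456789abcdef"
)
TG_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/my-nlb-targets/abcdef1234567890"
)

# Create a TCP listener on port 80 that forwards to the target group.
elbv2.create_listener(
    LoadBalancerArn=NLB_ARN,
    Protocol="TCP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": TG_ARN}],
)
```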

Target Group:

A target group is a logical grouping of backend servers or resources that receive traffic from the NLB. When configuring an NLB, you specify one or more target groups to route incoming traffic to. Target groups can be based on various criteria such as instance IDs, IP addresses, or AWS resource tags.
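
Here is a minimal boto3 sketch of creating a TCP target group for an NLB. The VPC ID is a placeholder, and the health-check settings are just one reasonable configuration that you can tune.

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create a TCP target group in a (placeholder) VPC; instances
# registered into it will receive traffic forwarded by the NLB.
response = elbv2.create_target_group(
    Name="my-nlb-targets",
    Protocol="TCP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",   # placeholder VPC ID
    TargetType="instance",
    HealthCheckProtocol="TCP",
    HealthCheckIntervalSeconds=30,
    HealthyThresholdCount=3,
    UnhealthyThresholdCount=3,
)

target_group_arn = response["TargetGroups"][0]["TargetGroupArn"]
print(target_group_arn)
```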

Backend Servers:

The backend servers serve as the final destinations for traffic forwarded by the NLB. They are responsible for hosting the services or applications that are being balanced. NLBs distribute incoming traffic among these backend servers using the specified routing algorithm.
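
Backend servers start receiving traffic once they are registered with a target group. A small boto3 sketch, using placeholder EC2 instance IDs and a placeholder target group ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Placeholder ARN and instance IDs -- replace with your own.
TG_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/my-nlb-targets/abcdef1234567890"
)

# Register two EC2 instances as backend targets on port 80.
elbv2.register_targets(
    TargetGroupArn=TG_ARN,
    Targets=[
        {"Id": "i-0123456789abcdef0", "Port": 80},
        {"Id": "i-0fedcba9876543210", "Port": 80},
    ],
)
```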

Routing Algorithm:

The routing algorithm determines how incoming traffic is distributed among the backend servers within a target group. Common routing algorithms include round-robin, least connections, and IP hash. The choice of algorithm depends on factors such as the type of workload and desired traffic distribution strategy.
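
To make the idea concrete, here is a plain-Python sketch of two of the strategies mentioned above, round-robin and least connections. This is purely illustrative; inside an AWS NLB the selection is handled for you and is not something you implement yourself.

```python
import itertools

servers = ["10.0.1.10", "10.0.2.11", "10.0.3.12"]

# Round-robin: cycle through the servers in a fixed order.
round_robin = itertools.cycle(servers)

def pick_round_robin():
    return next(round_robin)

# Least connections: track open connections per server and pick the
# server currently handling the fewest.
open_connections = {s: 0 for s in servers}

def pick_least_connections():
    server = min(open_connections, key=open_connections.get)
    open_connections[server] += 1
    return server

if __name__ == "__main__":
    print([pick_round_robin() for _ in range(4)])
    print([pick_least_connections() for _ in range(4)])
```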

To learn more about load balancer algorithms, check my blog: HERE

Health Checks:

NLBs continuously monitor the health of backend servers by sending health checks at regular intervals. Health checks determine whether a server is healthy and capable of handling traffic. Unhealthy servers are automatically removed from the rotation until they recover.
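
The health-check interval and thresholds are configured on the target group (as in the earlier create_target_group sketch), and the current health of each registered target can be inspected with describe_target_health. A brief boto3 sketch with a placeholder ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

TG_ARN = (
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
    "targetgroup/my-nlb-targets/abcdef1234567890"
)

# Print the health state (healthy, unhealthy, initial, draining, ...)
# reported for every registered target.
health = elbv2.describe_target_health(TargetGroupArn=TG_ARN)
for desc in health["TargetHealthDescriptions"]:
    target = desc["Target"]["Id"]
    state = desc["TargetHealth"]["State"]
    print(f"{target}: {state}")
```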

Security Groups:

Security groups define the inbound and outbound traffic rules for the NLB and its backend servers. They help control access to the NLB and enforce network security policies.
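
As a simple example, the sketch below opens TCP port 80 on a (placeholder) security group attached to the backend instances so they can accept traffic forwarded by the NLB. Depending on your setup, you may also attach a security group to the NLB itself.

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound TCP 80 on a placeholder security group; in practice
# you would usually restrict the source CIDR rather than use 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [
                {"CidrIp": "0.0.0.0/0", "Description": "HTTP to backends"}
            ],
        }
    ],
)
```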

Subnets and Availability Zones:

NLBs are typically deployed across multiple subnets and availability zones within a region to enhance availability and fault tolerance. Distributing NLB resources across multiple availability zones helps ensure that traffic can be routed to healthy servers even if an entire availability zone becomes unavailable.
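
Multi-AZ deployment comes down to passing subnets from different Availability Zones when the NLB is created. A minimal boto3 sketch with placeholder subnet IDs:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an internet-facing NLB spanning subnets in two different
# Availability Zones (placeholder subnet IDs).
response = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",
    Scheme="internet-facing",
    IpAddressType="ipv4",
    Subnets=["subnet-0123456789abcdef0", "subnet-0fedcba9876543210"],
)

print(response["LoadBalancers"][0]["DNSName"])
```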

Thanks for spending your valuable time learning to enhance your knowledge!
