
What Is Load Balancing? Strategies, Algorithms & Configuration

F. Çağrı Bilgehan · February 9, 2026 · 10 min read
load balancing · nginx · scalability · architecture

What Is Load Balancing? A Complete Guide

Can a single server really handle all your traffic? And does everything go down when that one server crashes? Load balancing addresses both problems by distributing traffic across multiple servers for high availability and performance.

What Is Load Balancing?

Load balancing distributes incoming network traffic across multiple servers, preventing any single server from becoming a bottleneck and ensuring high availability.

                    ┌─ Server 1 ──→ App
Client ──→ Load     ├─ Server 2 ──→ App
           Balancer ├─ Server 3 ──→ App
                    └─ Server 4 ──→ App

Load Balancing Algorithms

1. Round Robin

Distributes requests sequentially. The simplest method.
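In Nginx, round robin is the default whenever an upstream block names no other algorithm; the addresses below are just placeholders:

upstream app_servers {
    # no balancing directive: requests go to each server in turn
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}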

2. Weighted Round Robin

Like round robin, but each server receives requests in proportion to its assigned weight, so more powerful servers take a larger share of the traffic.
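In Nginx this is simply the weight parameter on each server line (placeholder addresses again):

upstream app_servers {
    server 10.0.0.1:3000 weight=3;  # gets roughly 3 of every 5 requests
    server 10.0.0.2:3000 weight=1;
    server 10.0.0.3:3000 weight=1;
}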

3. Least Connections

Routes to the server with the fewest active connections.
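Nginx enables this with the least_conn directive inside the upstream block:

upstream app_servers {
    least_conn;  # new requests go to the server with the fewest active connections
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}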

4. IP Hash

Routes the same IP to the same server (sticky sessions).
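In Nginx, ip_hash gives this behavior: the client IP is hashed and keeps mapping to the same server as long as the server set doesn't change:

upstream app_servers {
    ip_hash;  # same client IP → same backend server
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}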

5. Least Response Time

Routes to the fastest-responding server.
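Open-source Nginx does not include this algorithm; the least_time directive sketched below is an NGINX Plus feature, so treat it as illustrative:

upstream app_servers {
    least_time header;  # NGINX Plus only: prefer the lowest average response time (and fewest connections)
    server 10.0.0.1:3000;
    server 10.0.0.2:3000;
}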

Nginx Load Balancing

upstream backend {
    least_conn;                      # pick the server with the fewest active connections
    server 10.0.0.1:3000 weight=3;   # receives roughly 3x the traffic of a weight=1 server
    server 10.0.0.2:3000 weight=2;
    server 10.0.0.3:3000;            # default weight is 1
    server 10.0.0.4:3000 backup;     # used only when the other servers are unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;                # forward requests to the upstream group above
        proxy_set_header Host $host;              # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client IP to the backend
    }
}

Health Checks

Take unhealthy servers out of rotation automatically. Open-source Nginx does this passively with max_fails and fail_timeout:

upstream backend {
    server 10.0.0.1:3000 max_fails=3 fail_timeout=30s;  # after 3 failed attempts, take the server out for 30s
    server 10.0.0.2:3000 max_fails=3 fail_timeout=30s;
}

L4 vs L7 Load Balancing

| Layer | Level | Routing Basis |
|-------|-------|---------------|
| L4 | Transport (TCP) | IP + port, very fast |
| L7 | Application (HTTP) | URL, headers, cookies |
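Because an L7 balancer sees the full HTTP request, it can route on content such as the URL path. A minimal sketch with Nginx, using hypothetical api_servers and web_servers upstream groups:

upstream api_servers { server 10.0.0.10:3000; }
upstream web_servers { server 10.0.0.20:3000; }

server {
    listen 80;

    # L7 decision: the request path selects the upstream group
    location /api/ {
        proxy_pass http://api_servers;
    }

    location / {
        proxy_pass http://web_servers;
    }
}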

Load Balancer Tools

| Tool | Type | Strength |
|------|------|----------|
| Nginx | L4/L7 | Performance, widespread |
| HAProxy | L4/L7 | TCP support, reliability |
| AWS ALB | L7 | AWS integration |
| GCP LB | L4/L7 | Global, anycast |
| Traefik | L7 | Container-native |

Best Practices

  1. Health checks required — Auto-remove unhealthy servers
  2. Graceful shutdown — Complete in-flight requests before stopping
  3. Shared sessions — Use Redis instead of sticky sessions
  4. SSL termination — Terminate TLS at the load balancer (see the sketch after this list)
  5. Monitor everything — Response times and error rates per server
  6. Auto-scaling — Automatically add servers on traffic spikes
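As an illustration of SSL termination (item 4), TLS can end at the load balancer so the backends only speak plain HTTP. A minimal sketch, reusing the backend upstream from earlier, with placeholder domain and certificate paths:

server {
    listen 443 ssl;
    server_name example.com;                                # placeholder domain

    ssl_certificate     /etc/nginx/certs/example.com.crt;   # placeholder certificate paths
    ssl_certificate_key /etc/nginx/certs/example.com.key;

    location / {
        proxy_pass http://backend;                  # traffic to the backends stays plain HTTP
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;   # tell the app the original request was HTTPS
    }
}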

Conclusion

Load balancing is fundamental to scalable, highly available systems. With the right algorithm and configuration, your application can handle thousands of concurrent requests seamlessly.

Learn load balancing and distributed systems on LabLudus.

Related Posts

What Is a Message Queue? Async Communication with RabbitMQ & Kafka

Message queues explained: RabbitMQ, Apache Kafka, async architecture, pub/sub patterns, and event-driven design for scalable systems.

What Is Software Architecture? A Comprehensive Guide

What is software architecture, why does it matter, and how do you learn it? A deep dive into architectural patterns, quality attributes, and the architect's career path.