
What Is API Rate Limiting? Protect Your App from Overload

F. Çağrı Bilgehan · February 3, 2026 · 10 min read
rate limiting · api · security · performance


Is a user sending thousands of requests per second? Bot traffic crashing your server? Rate limiting caps the number of requests to your API, protecting both security and performance.

What Is Rate Limiting?

Rate limiting restricts the number of requests a client can make within a time window. When exceeded, the API returns HTTP 429 (Too Many Requests).

Why Rate Limit?

  1. DDoS protection — Block malicious traffic
  2. Resource protection — Prevent server/database overload
  3. Fair usage — Equal service for all users
  4. Cost control — Limit third-party API expenses

Rate Limiting Algorithms

1. Fixed Window

Count requests in fixed time windows. Simple to implement, but traffic can spike at window boundaries: a client that clusters requests at the end of one window and the start of the next can briefly send up to twice the limit.
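
For illustration, a minimal in-memory fixed-window counter might look like the following sketch (allowFixedWindow, WINDOW_MS, and LIMIT are hypothetical names, not part of any library):

// Hypothetical in-memory fixed-window counter: each client key gets a
// counter that resets whenever a new window starts.
const WINDOW_MS = 60_000;   // 1-minute window
const LIMIT = 100;          // max requests per window
const counters = new Map(); // key -> { windowStart, count }

function allowFixedWindow(key, now = Date.now()) {
  const windowStart = Math.floor(now / WINDOW_MS) * WINDOW_MS;
  const entry = counters.get(key);
  if (!entry || entry.windowStart !== windowStart) {
    counters.set(key, { windowStart, count: 1 }); // new window, first request
    return true;
  }
  entry.count += 1;
  return entry.count <= LIMIT;
}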

2. Sliding Window

Counts requests over a rolling time window (commonly tracked with a Redis sorted set), which smooths out the boundary spikes of the fixed window.
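
A rough sketch of this approach with a Redis sorted set, assuming the ioredis client (allowSlidingWindow is a hypothetical helper):

import Redis from 'ioredis';

const redis = new Redis();
const WINDOW_MS = 60_000; // 1-minute rolling window
const LIMIT = 100;        // max requests inside the window

async function allowSlidingWindow(key, now = Date.now()) {
  const zkey = `ratelimit:${key}`;
  await redis.zremrangebyscore(zkey, 0, now - WINDOW_MS); // drop entries older than the window
  const count = await redis.zcard(zkey);                  // requests still inside the window
  if (count >= LIMIT) return false;
  await redis.zadd(zkey, now, `${now}:${Math.random()}`); // record this request, scored by timestamp
  await redis.pexpire(zkey, WINDOW_MS);                   // let idle keys expire on their own
  return true;
}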

3. Token Bucket

Tokens refill at a constant rate. Each request consumes a token. Allows bursts.
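
A minimal in-memory sketch of the idea (hypothetical names; capacity and refill rate are illustrative):

// Hypothetical token bucket: refills continuously up to CAPACITY, and each
// request spends one token. A full bucket absorbs a short burst.
const CAPACITY = 10;       // maximum burst size
const REFILL_PER_SEC = 5;  // tokens added per second
const buckets = new Map(); // key -> { tokens, last }

function allowTokenBucket(key, now = Date.now()) {
  const bucket = buckets.get(key) ?? { tokens: CAPACITY, last: now };
  const elapsedSec = (now - bucket.last) / 1000;
  bucket.tokens = Math.min(CAPACITY, bucket.tokens + elapsedSec * REFILL_PER_SEC);
  bucket.last = now;
  buckets.set(key, bucket);
  if (bucket.tokens < 1) return false; // bucket empty: reject
  bucket.tokens -= 1;
  return true;
}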

4. Leaky Bucket

Requests processed at a constant rate. Excess queued or rejected.
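
The same idea as a sketch (hypothetical names): a bounded queue drained at a fixed rate, with overflow rejected up front:

// Hypothetical leaky bucket: requests wait in a bounded queue and are
// processed ("leaked") at a constant rate; overflow is rejected immediately.
const QUEUE_CAPACITY = 20; // bucket size
const DRAIN_MS = 100;      // one request processed every 100 ms
const queue = [];

function enqueueRequest(handler) {
  if (queue.length >= QUEUE_CAPACITY) return false; // bucket overflow
  queue.push(handler);
  return true;
}

setInterval(() => {
  const handler = queue.shift(); // constant leak rate
  if (handler) handler();
}, DRAIN_MS);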

Express Middleware

import rateLimit from 'express-rate-limit';

const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15-minute window
  max: 100,                 // max 100 requests per client per window
  standardHeaders: true,    // send RateLimit-* headers in every response
  message: { error: 'Too many requests. Try again in 15 minutes.' }
});

app.use('/api/', limiter);

Nginx Rate Limiting

# Shared zone keyed by client IP: 10 MB of counter state, 10 requests/second (defined in the http context).
limit_req_zone $binary_remote_addr zone=api:10m rate=10r/s;

location /api/ {
    limit_req zone=api burst=20 nodelay;  # allow bursts of up to 20 extra requests, served without delay
    limit_req_status 429;                 # respond with 429 instead of the default 503
}

Response Headers

HTTP/1.1 429 Too Many Requests
RateLimit-Limit: 100
RateLimit-Remaining: 0
Retry-After: 45
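
On the client side, these headers tell callers when to back off. A sketch of a fetch wrapper that respects Retry-After (fetchWithRetry is a hypothetical helper):

// Hypothetical client helper: on 429, wait for the Retry-After hint
// before trying again instead of hammering the API.
async function fetchWithRetry(url, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    const res = await fetch(url);
    if (res.status !== 429) return res;
    const retryAfterSec = Number(res.headers.get('Retry-After') ?? '1');
    await new Promise((resolve) => setTimeout(resolve, retryAfterSec * 1000));
  }
  throw new Error('Rate limited: retries exhausted');
}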

Tiered Limits

| Level | Scope | Example |
|-------|-------|---------|
| Global | Entire API | 10,000 req/min |
| User | Per user | 100 req/min |
| IP | Per IP | 50 req/min |
| Endpoint | Per route | /login: 5 req/min |
| Plan | By subscription | Free: 100, Pro: 1000 |
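
With express-rate-limit, endpoint-specific limits can be layered on top of the global limiter from the middleware example above, for instance a tighter budget for /login (loginHandler is a hypothetical route handler):

// Sketch: a stricter limiter just for the login route, stacked with the
// global limiter defined in the Express example.
const loginLimiter = rateLimit({
  windowMs: 60 * 1000, // 1-minute window
  max: 5,              // only 5 login attempts per IP per minute
  message: { error: 'Too many login attempts. Try again in a minute.' }
});

app.post('/login', loginLimiter, loginHandler); // loginHandler: hypothetical
app.use('/api/', limiter);                      // global limit from the middleware example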

Best Practices

  1. Informative responses — Include Retry-After header
  2. Layered limits — Global + user + endpoint-specific
  3. Whitelist — Exempt trusted internal services
  4. Graceful degradation — Degrade quality before hard rejecting
  5. Monitor — Track rate limit triggers and set up alerts
  6. Distributed counting — Use Redis for centralized counters across servers (see the sketch after this list)
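
As an illustration of distributed counting, the Redis-backed allowSlidingWindow helper sketched in the sliding window section could back a shared middleware, so every app server consults the same counter (hypothetical wiring):

// Hypothetical middleware reusing the Redis-backed sliding-window helper,
// so all app servers behind a load balancer share one counter.
app.use('/api/', async (req, res, next) => {
  if (await allowSlidingWindow(req.ip)) return next();
  res.set('Retry-After', '60');                          // hint when to retry
  res.status(429).json({ error: 'Too many requests' });  // standard 429 response
});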

Conclusion

Rate limiting is fundamental to API security and stability. The right algorithm protects against abuse while ensuring fair access for legitimate users.

Learn API security and rate limiting on LabLudus.
