
Simple rate-limiting with NGINX

Configuring NGINX to protect a login endpoint with rate limiting to mitigate brute-force attacks.

During a recent penetration test of our application, one vulnerability identified was the absence of rate limiting on the login endpoint, leaving our application susceptible to brute-force attacks.

Our application’s framework lacked built-in support for rate limiting, and we wanted to avoid writing code for this functionality due to the potential for introducing new bugs and the additional maintenance burden. Implementing rate limiting at the infrastructure layer is (for our purposes) an acceptable solution, and as it can function independently of the application, we could enhance our security without altering the existing codebase.
NGINX, already part of our stack, offered a simple and reliable way to implement this security feature.

Configuring Rate Limiting in NGINX

Define a Rate Limiting Zone: First, you need to define a rate limiting zone in your NGINX configuration. This zone will keep track of the number of requests from each client IP address.

http {
    # Define a limit_req_zone to track requests
    limit_req_zone $binary_remote_addr zone=login_limit:10m rate=5r/m;
}

In this configuration:

$binary_remote_addr tracks requests by client IP address. If NGINX sits behind a load balancer, you might need to change this to something like $http_x_forwarded_for.
zone=login_limit:10m creates a zone named login_limit with 10 MB of storage for tracking requests, which is enough to store about 160,000 IP addresses.
rate=5r/m limits each client to 5 requests per minute.
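For the load-balancer case mentioned above, a hedged sketch of keying the zone on the forwarded client address instead (this assumes your balancer sets X-Forwarded-For and that you trust it, since clients can spoof the header when it isn't overwritten upstream):

```nginx
http {
    # Key the zone on the forwarded client IP rather than the direct peer.
    # Only safe when X-Forwarded-For is set by a trusted load balancer.
    limit_req_zone $http_x_forwarded_for zone=login_limit:10m rate=5r/m;
}
```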

Apply the Rate Limit to a Specific Location:

Next, apply this rate limit to your login endpoint. This is done in the server block of your NGINX configuration where your application is defined.

server {
    location /login {
        limit_req zone=login_limit burst=10 nodelay;
        proxy_pass http://your_backend;
    }
}

In this location block:

limit_req zone=login_limit burst=10 nodelay; applies the login_limit zone. The burst parameter lets a client momentarily exceed the rate limit (by up to 10 requests) without being rejected; anything beyond the burst is rejected outright. With nodelay, requests within the burst are processed immediately rather than being queued and delayed to match the configured rate.
proxy_pass http://your_backend; is a placeholder; replace it with whatever configuration passes requests to your backend.
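To make the rate/burst interplay concrete, here is a simplified Python sketch of the accounting NGINX's limit_req performs (this is an illustration of the leaky-bucket idea, not NGINX's actual implementation; the class and method names are ours):

```python
class LeakyBucket:
    """Sketch of limit_req-style accounting with rate + burst + nodelay.

    'excess' measures how far ahead of the allowed rate a client is;
    a request is rejected once the excess exceeds the burst size.
    """

    def __init__(self, rate_per_minute: float, burst: int):
        self.rate = rate_per_minute / 60.0  # excess drained per second
        self.burst = burst
        self.excess = 0.0   # requests ahead of the allowed rate
        self.last = None    # timestamp of the previous request

    def allow(self, now: float) -> bool:
        if self.last is not None:
            # Excess drains continuously at the configured rate.
            self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess > self.burst:
            return False  # NGINX would reject the request here
        self.excess += 1
        return True


# With rate=5r/m and burst=10, a sudden flood gets 11 requests through
# (1 at the base rate + 10 burst), then rejections until the bucket drains.
bucket = LeakyBucket(rate_per_minute=5, burst=10)
flood = [bucket.allow(0.0) for _ in range(12)]
```

Since nodelay is modeled, every request within the burst is answered immediately; without it, NGINX would instead queue bursting requests and release them at the configured rate.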

Conclusion

This setup will add simple rate limiting. Note that by default NGINX rejects rate-limited requests with a 503; with limit_req_status 429; configured, they instead receive the more appropriate:

HTTP/1.1 429 Too Many Requests
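NGINX's default rejection status for limit_req is actually 503; to return 429 as shown, the status needs to be set explicitly in the location block (a small addition to the configuration above):

```nginx
location /login {
    limit_req zone=login_limit burst=10 nodelay;
    limit_req_status 429;  # default is 503 Service Unavailable
    proxy_pass http://your_backend;
}
```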

While it’s a straightforward, easy-to-implement solution, it lacks more advanced capabilities, such as blocking attempts against a specific account coming from multiple IP addresses, and it’s less useful when scaling out, as each host maintains its own record of login attempts.
A more thorough solution would be to use rate limiting within the application itself, or a web application firewall (WAF); for many use cases, however, this method will suffice.

James Babington

About James Babington

A cloud architect and engineer with a wealth of experience across AWS, web development, and security, James enjoys writing about the technical challenges and solutions he's encountered, but most of all he loves it when a plan comes together and it all just works.
