How to rate limit punks with nginx

I do some ops work for the Green Web Foundation, and over the last few weeks we've been seeing nasty spikes of traffic hammering one of our APIs. Here's how we used nginx's rate limiting to take the sting out of them, starting with a single line of configuration:

limit_req_zone $binary_remote_addr zone=nopunks:10m rate=10r/s;

What does this mean? We start by calling limit_req_zone, to tell nginx we want to set up a zone where we rate limit requests on our server, telling it to use $binary_remote_addr, the binary representation of a connecting client's IP address, to tell one requesting client from another. We want to be able to refer to this rate limiting zone later, so we give it a name, nopunks, with zone=nopunks:10m, and we set aside 10 megabytes of space to keep track of all the possible IP addresses connecting.

This means we can keep track of how much our poor API is being hammered, from around 160,000 different IP addresses (nginx's docs say a one megabyte zone stores roughly 16,000 of these IP address states) - useful!

Finally, we set a rate of requests that seems fair with rate=10r/s. This means we want an upper limit of 10 requests per second to apply to this zone - and because nginx tracks requests at millisecond granularity, in practice that means one request every 100 milliseconds.

So, after writing this, we have a special zone, nopunks, that we can apply to any vhost or server we want with nginx.
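One gotcha worth knowing about: limit_req_zone is only valid in the http context of your nginx configuration, not inside a server or location block. As a minimal sketch, assuming your main config lives at the conventional /etc/nginx/nginx.conf path (yours may differ), it would sit somewhere like this:

# /etc/nginx/nginx.conf (assumed, conventional path)
http {
      # define the shared zone once, at the http level -
      # limit_req_zone is only valid in this context
      limit_req_zone $binary_remote_addr zone=nopunks:10m rate=10r/s;

      # vhosts that apply the zone are typically pulled in here
      include /etc/nginx/conf.d/*.conf;
}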

Adding our zone to a site or API we want to protect

Now we have that, let's apply this handy new nopunks zone to a route in nginx.

location / {
      # apply the nopunks
      # allow a burst of up to 20 requests
      # in one go, with no delay
      limit_req zone=nopunks burst=20 nodelay;
      # tell the offending client they are being
      # rate limited - it's polite!
      limit_req_status 429;
      # try to serve file directly, fallback to index.php
      try_files $uri /index.php$is_args$args;
}

What we're doing here is applying the nopunks zone, and passing in a couple of extra incantations to avoid a page loading too slowly. We use burst=20 to say:

we are cool with a burst of up to 20 requests in one go, before we stop accepting requests

The thing is, this leaves us with a backlog of 20 requests, each served 0.1 seconds after the last, so the whole set will take 2 seconds to clear. That's a pretty poor user experience. So, we can pass in nodelay - this adjusts our rate limiting to say this instead:

okay, you can send up to 20 requests, and we'll even let you send them as fast as you like, but if you send any more than that, we'll rate limit you
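To make the difference concrete, here are the two variants side by side - the only change between them is the nodelay flag:

# without nodelay: excess requests queue up and are drip-fed
# at 10r/s, so the 20th queued request waits around 2 seconds
limit_req zone=nopunks burst=20;

# with nodelay: up to 20 requests are served immediately, and
# only requests beyond the burst are rejected
limit_req zone=nopunks burst=20 nodelay;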

Finally, by default, when a site is rate limited, nginx serves a rather dramatic 503 error, as if something very wrong had happened. Instead, limit_req_status 429 tells nginx to send the connecting client a 429 Too Many Requests status, so ideally the person programming the offending HTTP client gets the message, and stops hammering your API so hard.
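Putting it all together, a complete sketch might look like the one below. The domain name and file paths are placeholders of my own, and the optional limit_req_log_level line (which just logs rejections at warn rather than the default error level) is an extra I've added, so adjust all of these for your own setup:

# in the http context, e.g. /etc/nginx/nginx.conf
limit_req_zone $binary_remote_addr zone=nopunks:10m rate=10r/s;

# in a vhost file, e.g. /etc/nginx/conf.d/api.conf (assumed path)
server {
      listen 80;
      server_name api.example.com;  # placeholder domain

      location / {
            # apply the zone, allowing a burst of 20 with no delay
            limit_req zone=nopunks burst=20 nodelay;
            # be polite about rejections
            limit_req_status 429;
            # optional: log rejected requests at warn instead of error
            limit_req_log_level warn;
            try_files $uri /index.php$is_args$args;
      }
}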

So there you have it

This is a message mainly to my future self, next time I am looking after a server under attack. But with some luck, it'll make being DoS'd (intentionally or not) a less stressful experience for another soul on the internet.

Further reading

The nginx documentation is pretty clear if you need to do this, with lots of helpful examples, and the guide on rate limiting on the nginx website was also a godsend when I had to do this today.


