21

Say we have a load balancer that also does rate limiting. Rate limiting seems pretty straightforward for logged-in users - just look at the JWT and maybe use an in-memory data store to count how many requests that user has made in the last 10 seconds.

However, what about non-logged-in (unauthenticated) users? We don't know for sure who they are or exactly where the request is coming from, so we can't easily rate-limit those requests... or can we?

Are there built-in solutions to this on AWS and other hosting platforms, or is it something we need to worry about ourselves? It seems like we need to handle the rate-limiting logic for logged-in users manually, but what about non-logged-in users?

My guess/hope is that there might be some built-in mechanism for rate-limiting unauthenticated requests on hosting platforms - please inform us all.

4 Answers

16

However, what about non-logged-in (unauthenticated) users? We don't know for sure who they are or exactly where the request is coming from, so we can't easily rate-limit those requests... or can we?

There are a couple of approaches you can take. One is to use a reasonably reliable origin identifier, for example the IP address. You can rate limit by IP address, so that an attack from a single compromised machine is limited. This is a pretty simple approach, but it has a drawback: large network providers may hide a very large number of users behind a single outgoing IP address using NAT.
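To make this concrete, here is a minimal sketch (not from the original answer; the class name and thresholds are made up) of an in-memory, per-IP sliding-window limiter. In a real deployment the counters would usually live in a shared store such as Redis so every load-balancer node sees the same state.

```python
import time
from collections import defaultdict, deque

class IpRateLimiter:
    """Sliding-window limiter keyed by client IP (illustrative only)."""

    def __init__(self, max_requests: int = 100, window_seconds: float = 10.0):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._hits: dict[str, deque] = defaultdict(deque)

    def allow(self, client_ip: str) -> bool:
        now = time.monotonic()
        hits = self._hits[client_ip]
        # Drop timestamps that have fallen out of the window.
        while hits and now - hits[0] > self.window_seconds:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False  # over budget: reject the request
        hits.append(now)
        return True

limiter = IpRateLimiter(max_requests=100, window_seconds=10)
print(limiter.allow("203.0.113.7"))  # True until this IP exhausts its budget
```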

Another approach you can take is to require a proof of work for any unauthenticated request. Your server issues a challenge code, and any client making an unauthenticated request (e.g. a login request) has to compute a resource-intensive response before the request is processed. A common implementation of this idea requires the client to compute a partial hash inversion.
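As a rough illustration of this hashcash-style scheme (the function names and difficulty are hypothetical, not taken from the answer): the server hands out a random challenge, the client searches for a nonce whose hash has enough leading zero bits, and the server verifies the result with a single hash.

```python
import hashlib
import secrets

DIFFICULTY_BITS = 20  # tune so solving costs the client a noticeable amount of CPU

def issue_challenge() -> str:
    # Server side: a fresh random challenge per unauthenticated request.
    return secrets.token_hex(16)

def leading_zero_bits(digest: bytes) -> int:
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: str) -> int:
    # Client side: brute-force search, intentionally expensive.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= DIFFICULTY_BITS:
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int) -> bool:
    # Server side: verification costs a single hash.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= DIFFICULTY_BITS

challenge = issue_challenge()
nonce = solve(challenge)
assert verify(challenge, nonce)
```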

Lie Ryan
  • 12,496
6

To know whether a request is from an authenticated user or from an anonymous user, you necessarily have to process the request (albeit quickly). This still means your application is vulnerable to a denial-of-service attack.

You should be checking overall requests per second, and if a certain number is exceeded, simply ignore the rest. That number should be high enough not to cause problems during normal operation, but low enough to protect against such attacks.
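For example, a global token bucket can enforce such an overall cap; this is just a sketch with made-up numbers, not something from the answer.

```python
import threading
import time

class GlobalThrottle:
    """Caps overall requests per second, regardless of who sends them."""

    def __init__(self, rate_per_second: float, burst: float):
        self.rate = rate_per_second
        self.capacity = burst
        self.tokens = burst
        self.updated = time.monotonic()
        self._lock = threading.Lock()

    def allow(self) -> bool:
        with self._lock:
            now = time.monotonic()
            # Refill tokens for the elapsed time, up to the burst capacity.
            self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
            self.updated = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False  # over the global budget: ignore the request

throttle = GlobalThrottle(rate_per_second=1000, burst=200)
```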

Also, as a general rule, you should probably not assume that an attack would not come from an authenticated user, at least where DoS attacks are concerned. A weak password could easily allow someone to assume the identity of an existing user. And even if you could make such a distinction, your (human) users should never need to perform requests at such rates, regardless of how many individual users you have.

Neil
  • 22,848
2

In AWS, there are two related services: AWS Shield and AWS WAF. They are primarily intended for preventing DDoS attacks, but they also offer support for rate limiting based on IP addresses.

In WAF, the concept is called Rate-Based Rules. Preventing brute-force login attempts is mentioned as a use case in the original announcement:

This new rule type protects customer websites and APIs from threats such as web-layer DDoS attacks, brute force login attempts and bad bots. Rate Based Rules are automatically triggered when web requests from a client exceed a certain configurable threshold.

Other cloud providers should have similar offerings. Here is a tabular comparison: Google Cloud Armor vs. AWS WAF vs. Cloudflare WAF.

As you are already using Nginx, the built-in IP-based rate limiting might also be a simple option. The module is called ngx_http_limit_req_module. This blog post describes how it can be used.
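A minimal configuration sketch (the zone name, rate, path, and upstream are placeholders, not from the answer) could look like this:

```nginx
# In the http block: track clients by IP, 10 MB of state, 10 requests/second each.
limit_req_zone $binary_remote_addr zone=per_ip:10m rate=10r/s;

server {
    location /api/ {
        # Allow short bursts, reject the excess immediately with 429.
        limit_req zone=per_ip burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://backend;
    }
}
```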

Please note that IP-based rate limiting is a relatively simple concept, but it is not perfect:

  • IP addresses might be shared (people working in the same office) leading to false positives
  • An attacker might have easy access to multiple IP addresses and use them to bypass the limits (distributed brute-force login attack)

In general, IP addresses are a good start. But if you need stronger protection, your best choices will depend on your threat model (which kinds of attacks you want to prevent).

1

One of Cloudflare's main offerings is protection against denial-of-service attacks, provided through an intelligent proxy in front of your API/web server. The basic service is free; they make money off related services such as CDN and load balancing. They also provide more sophisticated and controllable Rate Limiting services, currently priced at US $0.05 per 10k good requests (no charge for rejected requests), but you have to upgrade to a paid plan to get more than one global rule.

You can use Cloudflare's service with AWS or any other platform so long as you have control over your domain's name servers (as in, you can change the name servers registered for your domain).

You can apply separate rate limits to anonymous and logged-in users by directing logged-in users to different URLs. For example, you might simply prefix your anonymously available URL paths with '/u' to create a parallel set of endpoints that always require authentication and carry a different rate limit.

Note that Cloudflare's rate limiting (like all commercial rate limiting for anonymous users I am aware of) identifies a client by its IP address. This can cause problems for people using commercial VPNs or Tor, since those services tend to hide a large number of clients behind one IP address for added privacy.

Old Pro
  • 813