
I'm hosting some relatively large files on my website, and it struck me that it would be trivial for someone to write a script that downloads them over and over, chewing through my data transfer allowance and costing me money. A typical home connection is fast enough to cause me trouble if left downloading 24/7, never mind a fast fibre connection or a proper remote server.

So I'm wondering whether there is anything available for Apache, or even Ubuntu system-wide, that will impose restrictions per IP address - for example, 10 GB of transfer per 24 hours. When the limit is reached, the client would be served a lightweight "forbidden - quota reached" page or simply refused connection. I've looked around and haven't found anything other than various throttling solutions, which might help but wouldn't solve the problem.


2 Answers


There is the iptables quota module documented here: http://ipset.netfilter.org/iptables-extensions.man.html#lbBT

quota
    Implements network quotas by decrementing a byte counter with each
    packet. The condition matches until the byte counter reaches zero.
    Behavior is reversed with negation (i.e. the condition does not match
    until the byte counter reaches zero).

    [!] --quota bytes
           The quota in bytes.
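
For illustration, a pair of rules built on that match might look like the following. The address and the 10 GB figure are made up for this sketch; since you are serving downloads, the relevant direction is outbound traffic *to* the client:

```shell
# Hypothetical example: keep serving 203.0.113.7 only while its
# 10 GB byte counter lasts, then drop further packets to it.
sudo iptables --append OUTPUT --destination 203.0.113.7 \
     --match quota --quota 10737418240 --jump ACCEPT
sudo iptables --append OUTPUT --destination 203.0.113.7 --jump DROP
```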

I don't think this will help you though, as with this method you would probably have to create individual quota rules for every possible source IP address that might hit your server (which is practically impossible). You would also need some kind of shell script automation to reset these quotas every 24 hours (or whatever period), as mentioned here: IPtables Traffic Quota - up and down
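
As a sketch of why this gets unwieldy: you end up generating one pair of rules per client address and rebuilding the whole chain on a schedule. Everything below (the chain name, the addresses, the cron schedule) is illustrative, not a recommendation:

```shell
#!/bin/sh
# Illustrative only: rebuild a per-IP quota chain.  Running this from
# cron once a day doubles as the 24-hour quota "reset", since flushing
# the chain also zeroes the byte counters.
QUOTA=10737418240   # 10 GB in bytes

sudo iptables --flush DAILY-QUOTA 2>/dev/null \
    || sudo iptables --new-chain DAILY-QUOTA

# One pair of rules per known client IP -- this list is the problem:
# you cannot enumerate every address on the internet in advance.
for ip in 203.0.113.7 198.51.100.23; do
    sudo iptables --append DAILY-QUOTA --destination "$ip" \
         --match quota --quota "$QUOTA" --jump ACCEPT
    sudo iptables --append DAILY-QUOTA --destination "$ip" --jump DROP
done
```

The reset would then be a crontab entry along the lines of `0 4 * * * /usr/local/sbin/reset-quotas.sh`.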

Many years ago now, I toyed with an Apache module that was able to do this at the application level per IP address. This might be a better approach (I'm not sure what modern modules are available for Apache/Nginx that would do this - further research required).
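
On the stock-module front, the closest thing I'm aware of today is Apache's mod_ratelimit, which throttles each connection rather than enforcing a per-IP byte quota - so it mitigates the problem rather than solving it. A minimal sketch (the path and the 512 KiB/s figure are arbitrary):

```apache
# vhost config -- requires mod_ratelimit to be loaded.
# Throttles each connection under /downloads to roughly 512 KiB/s.
<Location "/downloads">
    SetOutputFilter RATE_LIMIT
    SetEnv rate-limit 512
</Location>
```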

The only other alternative I know of that would work better (in my opinion) is using the iptables hashlimit module in conjunction with tc (traffic shaping/policing): http://ipset.netfilter.org/iptables-extensions.man.html#lbAY - with this approach, traffic is only slowed, never blocked. The reason it is more feasible is that the hashlimit module tracks clients on the fly, recording distinct IP address and/or source/destination port "hash keys" as they come into your server. I have posted an answer on how to achieve this on another related question: https://serverfault.com/a/754116/30506 - but in summary, you create a traffic shaping class (a bandwidth limiter pinned at, say, 5 Mbit/s), and once the packet rate of a client's session reaches a particular threshold, you start funnelling its packets into that class, pacifying its traffic. In your case, though, the pacification would need to be tuned to operate over hours rather than seconds.
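
A compressed sketch of that approach follows; the interface name, class IDs, rates, thresholds and the mark value are all placeholders (see the linked answer for a fuller version):

```shell
# 1. Traffic shaping: pin marked egress traffic on eth0 to ~5 Mbit/s.
sudo tc qdisc add dev eth0 root handle 1: htb
sudo tc class add dev eth0 parent 1: classid 1:10 htb rate 5mbit
sudo tc filter add dev eth0 parent 1: protocol ip handle 10 fw flowid 1:10

# 2. Mark outbound packets to any client pulling more than ~100
#    packets/sec, tracked per destination (client) IP by hashlimit.
sudo iptables --table mangle --append POSTROUTING \
     --match hashlimit --hashlimit-name heavyclients \
     --hashlimit-mode dstip --hashlimit-above 100/sec \
     --jump MARK --set-mark 10
```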

parkamark

Hello and welcome to Server Fault!

Rate limiting can be done using iptables and the limit module. Note that this limits the rate of new connections, not bytes transferred, so it throttles an abuser rather than enforcing a transfer quota.

A very good introduction to this can be found here

A small example from that link to set rate limiting up:

$ sudo iptables --flush  # start again with an empty ruleset
$ sudo iptables --new-chain RATE-LIMIT
$ sudo iptables --append INPUT --match conntrack --ctstate NEW --jump RATE-LIMIT  # send new connections to the chain
$ sudo iptables --append RATE-LIMIT --match limit --limit 50/sec --limit-burst 20 --jump ACCEPT  # accept up to 50 new connections/sec
$ sudo iptables --append RATE-LIMIT --jump DROP  # drop everything over the limit
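
To check that the chain is actually matching traffic, you can inspect its packet and byte counters:

```shell
$ sudo iptables --list RATE-LIMIT --numeric --verbose
```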

Kudos to the authors Will Sewell and Jim Fisher of the mentioned article!