
I am uploading a 26 GB file, but I am getting:

413 Request Entity Too Large

I know this is related to client_max_body_size, so I have set this parameter to 30000M:

  location /supercap {
    root  /media/ss/synology_office/server_Seq-Cap/;
    index index.html;
    proxy_pass  http://api/supercap;
  }

  location /supercap/pipe {
    client_max_body_size 30000M;
    client_body_buffer_size 200000k;
    proxy_pass  http://api/supercap/pipe;
    client_body_temp_path /media/ss/synology_office/server_Seq-Cap/tmp_nginx;
  }

But I still get this error when the whole file has been uploaded.
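One possibility worth checking (an assumption, since the rest of the config is not shown): the upload request may be handled by a location or server block that never sees the directive. Setting it once at the server level makes every location inherit it:

```nginx
server {
    listen 80;
    server_name example.com;              # placeholder

    # Inherited by every location below unless overridden there
    client_max_body_size 30000M;

    location /supercap/pipe {
        proxy_pass http://api/supercap/pipe;
    }
}
```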

user2979409

4 Answers


Modify NGINX Configuration File

sudo nano /etc/nginx/nginx.conf

Search for this variable: client_max_body_size. If you find it, just increase its size to 100M, for example. If it doesn't exist, you can add it inside the http block, at the end:

client_max_body_size 100M;

Test your nginx config changes.

sudo service nginx configtest

Restart nginx to apply the changes.

sudo service nginx restart
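As a sanity check on the numbers: nginx size suffixes are binary (k = 1024, M = 1024², G = 1024³), so a 26 GB file needs a limit of at least 26624M. A small helper sketch (the function name is made up):

```shell
# to_bytes: convert an nginx-style size (plain, k, M, G) to bytes
to_bytes() {
  case "$1" in
    *k) echo $(( ${1%k} * 1024 )) ;;
    *M) echo $(( ${1%M} * 1024 * 1024 )) ;;
    *G) echo $(( ${1%G} * 1024 * 1024 * 1024 )) ;;
    *)  echo "$1" ;;
  esac
}

to_bytes 100M    # 104857600 bytes
to_bytes 26G     # 27917287424 bytes; client_max_body_size must be at least this
```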

Modify PHP.ini File for Upload Limits

It's not needed in all configurations, but you may also have to modify the PHP upload settings to make sure the PHP configuration does not impose its own limit.

If you are using PHP5-FPM, use the following command:

sudo nano /etc/php5/fpm/php.ini

If you are using PHP7.0-FPM, use the following command:

sudo nano /etc/php/7.0/fpm/php.ini

Now find the following directives one by one:

upload_max_filesize
post_max_size

and increase their limits to 100M; by default they are 8M and 2M.

upload_max_filesize = 100M
post_max_size = 100M
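The two edits above can be scripted; here is a sketch that rewrites the limits in place. The php.ini path varies by install, so for illustration it works on a throwaway local copy (the file name is made up):

```shell
# Work on a throwaway copy; point INI at your real php.ini instead
INI=./php.ini.example
printf 'upload_max_filesize = 8M\npost_max_size = 2M\n' > "$INI"

# Raise both limits to 100M (GNU sed in-place edit)
sed -i -e 's/^upload_max_filesize = .*/upload_max_filesize = 100M/' \
       -e 's/^post_max_size = .*/post_max_size = 100M/' "$INI"

grep -E '^(upload_max_filesize|post_max_size)' "$INI"
```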

Finally, save the file and restart PHP.

PHP5-FPM users, use this:

sudo service php5-fpm restart

PHP7.0-FPM users, use this:

sudo service php7.0-fpm restart

It should work fine now.


If you're uploading files of that size you should probably just disable the body size check altogether with:

client_max_body_size 0;
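In context, that could look like this (a sketch based on the location block from the question):

```nginx
location /supercap/pipe {
    client_max_body_size 0;    # 0 disables the body-size check entirely
    proxy_pass http://api/supercap/pipe;
}
```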
devius

With respect, I'm not sure why you're using HTTP to transfer that much data. I tend to do my large transfers over ssh, such as:

tar cjf - /path/to/stuff | ssh user@remote-host "cd /path/to/remote/stuff; tar xjf -"

...which gives me a bzip2-compressed transfer. But if I needed a resumable transfer, I might use sftp, lftp, or even rsync. Any of those (or their derivatives or siblings) is capable of:

  1. employing an encrypted channel if desired,
  2. resuming an interrupted transfer and
  3. compressing the transfer

Only one of those would be an option when uploading over HTTP (namely #1, if you were on HTTPS).

I hope you'll look into any of the above or the several other alternatives.
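For example, a resumable, compressed transfer with rsync might look like this (host and paths are placeholders):

```
# -a archive, -v verbose, -z compress in transit;
# --partial keeps a half-sent file so an interrupted transfer can resume
rsync -avz --partial --progress /path/to/bigfile user@remote-host:/path/to/dest/
```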


Add the following line to the http, server, or location context in nginx.conf to increase the size limit:

client_max_body_size 100M;

Source: https://www.cyberciti.biz/faq/linux-unix-bsd-nginx-413-request-entity-too-large/

The client_max_body_size directive can be added in http, server or location.

Note that the request body is typically about 1.34x larger than the file being uploaded (for example, with base64-encoded uploads). The 100M limit above will therefore only allow roughly a 74M file to be uploaded.
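The 1.34x factor matches base64 encoding, which expands data by 4/3 plus a little framing overhead; a quick check of the arithmetic (integer approximation):

```shell
# A 74M file inflated by ~1.34x on the wire stays just under the 100M limit
echo "$(( 74 * 134 / 100 ))M"    # prints 99M
```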

700 Software