1

I have a high-traffic nginx web server on CentOS that serves large static content. When the number of concurrent connections is low, the server can easily push up to 4 Gb/s of traffic, but when the number of concurrent connections increases, performance drops dramatically to 400 Mb/s with 100% I/O utilization. I have tried SSD caching, mounting the filesystem with noatime, changing the I/O scheduler, increasing server memory to 256 GB, and different nginx configurations such as aio and sendfile, but with no success. Are there any other configuration changes that could improve its performance?

Nidal
  • 187

2 Answers

0

What about creating a ramdisk and putting the content there? You can run rsync to back the data up to a physical disk and prevent data loss.
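A minimal sketch of this idea, assuming a tmpfs ramdisk and example paths (`/mnt/ramdisk`, `/var/www/static` are placeholders, and the size must fit your hot content set); this is an ops/config fragment that needs root:

```shell
# Create a 4 GB tmpfs-backed ramdisk (example size; tune to your content)
mkdir -p /mnt/ramdisk
mount -t tmpfs -o size=4g tmpfs /mnt/ramdisk

# Seed it from the on-disk copy so nginx can serve straight from RAM
rsync -a /var/www/static/ /mnt/ramdisk/

# Periodically (e.g. from cron) sync changes back to physical disk,
# since tmpfs contents are lost on reboot or power failure
rsync -a /mnt/ramdisk/ /var/www/static/
```

Point the nginx `root` directive at the ramdisk path; the trade-off is that the ramdisk competes with the kernel page cache for the same RAM.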

ThoriumBR
  • 5,427
0

I suppose that when you're serving a low number of clients, your server is able to cache most of the needed data in RAM, so it serves it almost exclusively from RAM.

But as more and more clients are served, most of the data no longer fits in your RAM and your server needs to read it from the I/O subsystem. The cache helps much less, as most of the time the needed data just isn't there, and mechanical drives have to seek a lot.

I don't know how much data you have or how your I/O is configured, but I think mechanical drives just won't suffice. Neither will any SSD cache smaller than the data set you actually serve.
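You can roughly sanity-check this theory by comparing the size of the served content with available RAM and watching disk utilization under load. These commands are generic examples (the path is a placeholder, and `iostat` requires the sysstat package):

```shell
# Total size of the served content (example path)
du -sh /var/www/static

# The "buff/cache" column approximates how much the page cache can hold
free -h

# Under load: sustained high %util and await on the data disks
# indicates the workload is seek-bound (needs sysstat installed)
iostat -x 5 3
```

If `du` reports far more data than `free` shows as cache, the low-concurrency numbers were coming from RAM, not from the disks.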

You could use several SSDs with high random-read performance in RAID 1. Or maybe you could "shard" your data: instead of using one huge filesystem, split your files across a large number of small SSD disks, for example based on crc32(filepath) % 8 for 8 SSDs.

Tometzky
  • 2,709