0

I'm talking to a web host that's just starting up. They offer shared hosting and managed VPSes. With shared hosting, I understand that they have a script that checks whether a process goes over its memory limit and, if so, kills the process. Similarly, for their managed VPSes (CentOS 7):

... managed VPS plans are a managed service exactly like our shared hosting plans. The only difference is that you're on a VPS. We don't monitor or limit your memory usage on a VPS, so you're free to use up all of the available system memory on a VPS if you want to. That said, the kernel does have out-of-memory protection so you'd see various processes being killed off by the kernel if you start taking away memory that the kernel needs.
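The kernel's OOM behavior the host describes can be poked at directly: every Linux process exposes its "badness" score to the OOM killer under /proc. A minimal sketch (these paths are standard on modern kernels, including CentOS 7):

```shell
# Each process advertises how attractive it is to the OOM killer:
cat /proc/self/oom_score       # current badness score; higher = killed first
cat /proc/self/oom_score_adj   # tunable adjustment, -1000 (never kill) to +1000
```

Hosts sometimes use oom_score_adj to protect critical daemons, which is part of why "various processes" (rather than the biggest one) may get killed off.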

Wait. What about this thing called virtual memory? Is there a reason a host would want to do this?

Even for shared hosting, isn't there a way they could set ulimit -m and start paging instead of killing off a job?
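For what it's worth, on modern Linux kernels ulimit -m (RLIMIT_RSS) is accepted but not actually enforced; the limit that bites is ulimit -v (RLIMIT_AS, virtual address space). A sketch of what a per-process cap would look like:

```shell
# Sketch: capping a single process's memory with ulimit.
# Note: `ulimit -m` sets RLIMIT_RSS, which modern Linux kernels ignore;
# `ulimit -v` is what actually constrains allocations. The cap applies
# only to this subshell and its children.
(
  ulimit -v 1048576    # 1 GiB of address space, expressed in KiB
  ulimit -v            # confirm the limit took effect
)
```

Note that hitting RLIMIT_AS makes allocations fail (malloc returns NULL) rather than forcing the process to page; paging can only happen if the host has swap configured in the first place.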

Edit: I added an answer based on my own research. I'd still appreciate input.

Diagon
  • 248

2 Answers

0

There are certain parts of the system that cannot be put into virtual memory (swap, as it's called in Linux). Also, it can sometimes be prudent to have limited swap on a server, depending on the disk I/O performance and size. I've seen plenty of systems lock up where there was hard-disk-based swap.
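The swap situation on a given box is easy to check; a quick sketch for inspecting the configured swap and how aggressively the kernel uses it:

```shell
# Sketch: inspecting swap state on a Linux host.
grep -E 'SwapTotal|SwapFree' /proc/meminfo   # configured vs. free swap
cat /proc/sys/vm/swappiness                  # 0-100: how eagerly the kernel swaps
```

A low swappiness plus slow disks is one way operators avoid the lockups described above, at the cost of making the OOM killer more likely to fire.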

davidgo
  • 6,504
0

After some research, I see that this question reflects an issue that has been in people's craw for some time. Partitioning memory for shared hosting seems to be an unsolved problem (if I understand correctly). VPSes, for some reason, often do not have swap. The advice is to "consider a cloud service that appears like a 'normal' server with swap and the like (Amazon EC2 is one such option)" and, furthermore: "Only container-based VPSes like OpenVZ lack swap space. Xen, KVM, VMware, etc., all allow for it, and can actually be used to construct the sort of quality environment you're talking about. OpenVZ really cannot."
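Whether a given VPS can even have swap follows from its virtualization type, which you can usually determine from inside the guest. A sketch (systemd-detect-virt ships with systemd, so it is present on CentOS 7; the guard keeps the snippet from failing on systems without it):

```shell
# Sketch: identifying the virtualization platform and checking for swap.
if command -v systemd-detect-virt >/dev/null 2>&1; then
  systemd-detect-virt || true   # prints e.g. kvm, xen, openvz; exits nonzero for "none"
fi
grep SwapTotal /proc/meminfo    # typically "0 kB" on container-based VPSes
```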

Diagon
  • 248