
The hosting provider of my KVM (Strato) recently upgraded their Virtuozzo installation from version 6 to 7. Since then, a few of my Docker containers fail to start with the following error message:

❯ sudo docker start spring_webapp

Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:297: applying cgroup configuration for process caused "failed to set memory.kmem.limit_in_bytes, because either tasks have already joined this cgroup or it has children"": unknown

Error: failed to start containers: spring_webapp

The only thing the containers that won't start seem to have in common is that they run a Java webapp. Other containers, such as GitLab or a few MariaDB instances, start just fine.

A Google search for the error message turned up some older issues on GitHub, but those appear to have been fixed years ago: https://github.com/opencontainers/runc/issues/1083

I am running Ubuntu 18.04 LTS with Docker 19.03.12.

I already opened a ticket with my hosting provider, but they answered with a canned response saying that since I have a root server they can't help me, which basically means they consider it a problem with my configuration.

Unfortunately, I don't know enough about OpenVZ/Virtuozzo to refute that.

Any hint pointing me in the right direction would be highly appreciated :)

Here's the output of /proc/user_beancounters:

❯ sudo cat /proc/user_beancounters

Version: 2.5
       uid  resource           held       maxheld              barrier                limit  failcnt
    831745: kmemsize      564412416     766132224  9223372036854775807  9223372036854775807        0
            lockedpages           0            16  9223372036854775807  9223372036854775807        0
            privvmpages     5148863       5444666  9223372036854775807  9223372036854775807        0
            shmpages          41758         74651  9223372036854775807  9223372036854775807        0
            dummy                 0             0  9223372036854775807  9223372036854775807        0
            numproc             919           919                 1100                 1100        0
            physpages       1972269       2444334              8388608              8388608        0
            vmguarpages           0             0  9223372036854775807  9223372036854775807        0
            oomguarpages    2104451       2452167                    0                    0        0
            numtcpsock            0             0  9223372036854775807  9223372036854775807        0
            numflock              0             0  9223372036854775807  9223372036854775807        0
            numpty                4             4  9223372036854775807  9223372036854775807        0
            numsiginfo           16           129  9223372036854775807  9223372036854775807        0
            tcpsndbuf             0             0  9223372036854775807  9223372036854775807        0
            tcprcvbuf             0             0  9223372036854775807  9223372036854775807        0
            othersockbuf          0             0  9223372036854775807  9223372036854775807        0
            dgramrcvbuf           0             0  9223372036854775807  9223372036854775807        0
            numothersock          0             0  9223372036854775807  9223372036854775807        0
            dcachesize    353423360     552648704  9223372036854775807  9223372036854775807        0
            numfile            6327         11230  9223372036854775807  9223372036854775807        0
            dummy                 0             0  9223372036854775807  9223372036854775807        0
            dummy                 0             0  9223372036854775807  9223372036854775807        0
            dummy                 0             0  9223372036854775807  9223372036854775807        0
            numiptent           219           222                 2000                 2000        0

fiendie

1 Answer

Yep, that's a Strato issue... it happened after their maintenance on 15/16 July 2020. I have already submitted a ticket; let's see if there is a response.

Needless to say, Strato has had major performance problems over the past three to four months and only recently got those fixed.

Now this new maintenance last week has messed everything up again.

EDIT:

I just deleted the affected container(s) from my server and recreated them. Now everything is working again. Maybe that's a solution for others, too?
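A rough sketch of that workaround as a shell sequence, assuming the container name spring_webapp from the question; the image name my-spring-image:latest is a placeholder, and any ports, volumes, or environment options the container was originally created with have to be repeated on the docker run line (check the saved inspect output for them):

```shell
# Guarded so the sequence is skipped on machines without docker.
if command -v docker >/dev/null 2>&1; then
  # Save the current configuration for reference before deleting anything.
  docker inspect spring_webapp > spring_webapp.json

  docker stop spring_webapp 2>/dev/null
  docker rm spring_webapp          # named volumes survive "docker rm"

  # Recreate the container; repeat your original -p/-v/-e options here.
  docker run -d --name spring_webapp my-spring-image:latest
else
  echo "docker not installed; nothing to do"
fi
```

Note that removing a container discards its writable layer, so anything not stored in a volume or bind mount is lost.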

MSZ