
I'm running CentOS 6.8 with Virtualmin.

My server is from kimsufi.com with 2 TB of disk space.

The root filesystem is 20 GB.

Below is the output of df -h:

[root@server ~]# df -h 
Filesystem      Size  Used Avail Use% Mounted on
rootfs           20G  8,7G  9,7G  48% /
devtmpfs        7,8G  176K  7,8G   1% /dev
tmpfs           7,9G     0  7,9G   0% /dev/shm
/dev/sda2        20G  8,7G  9,7G  48% /
/dev/sda2        20G  8,7G  9,7G  48% /var/named/chroot/etc/named
/dev/sda2        20G  8,7G  9,7G  48% /var/named/chroot/var/named
/dev/sda2        20G  8,7G  9,7G  48% /var/named/chroot/etc/named.conf
/dev/sda2        20G  8,7G  9,7G  48% /var/named/chroot/etc/named.rfc1912.zones
/dev/sda2        20G  8,7G  9,7G  48% /var/named/chroot/etc/rndc.key
/dev/sda2        20G  8,7G  9,7G  48% /var/named/chroot/usr/lib64/bind
/dev/sda2        20G  8,7G  9,7G  48% /var/named/chroot/etc/named.iscdlv.key
/dev/sda2        20G  8,7G  9,7G  48% /var/named/chroot/etc/named.root.key
[root@server ~]# 

With df -i, that 48% shows as 100%: the filesystem has run out of inodes, not disk space. I'm a newbie, but my server had been working fine until about a month ago.
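For reference, inode usage can be checked directly (a minimal illustration; the mount point / is assumed here):

```shell
# df -h reports block usage; df -i reports inode usage.
# A filesystem can hit 100% inodes while blocks are only half used,
# which produces exactly the "No space left on device" symptom above.
df -i /
```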

I tried to clear the yum cache after searching Google, using the following command:

sudo rm -rf /var/cache/yum/x86_64/6/$REPONAME

After running that command I logged in to Webmin; instead of the error in the title I got the Webmin screen, it worked, and local disk space showed 50% in the panel.

After that I tried to restart MySQL via /etc/init.d/mysqld restart, and MySQL failed to start.

Now MySQL is down as well.

The latest error is /usr/bin/mysqlshow: Can't connect to local MySQL server through socket '/var/lib/mysql/mysql.sock' (2)

And finally, Webmin/Virtualmin shows the message Failed to open /etc/webmin/apache/site for writing : No space left on device.

I'm very confused and afraid of losing my databases. If anyone can help me solve this problem, it would be much appreciated.

Edit:

[root@server ~]# ls -l /var/spool/postfix/
total 56
drwx------  2 postfix root     4096 Oct 25 04:46 active
drwx------  2 postfix root     4096 Oct 24 21:45 bounce
drwx------  2 postfix root     4096 Nov 10  2015 corrupt
drwx------  6 postfix root     4096 Oct 10 02:17 defer
drwx------  6 postfix root     4096 Oct 10 02:17 deferred
drwx------  2 postfix root     4096 Nov 10  2015 flush
drwx------  2 postfix root     4096 Nov 10  2015 hold
drwx------  2 postfix root     4096 Oct 25 04:46 incoming
drwx-wx---  2 postfix postdrop 4096 Oct 25 04:46 maildrop
drwxr-xr-x. 2 root    root     4096 Oct 25 08:58 pid
drwx------. 2 postfix root     4096 Oct 25 11:21 private
drwx--x---. 2 postfix postdrop 4096 Oct 25 11:21 public
drwx------  2 postfix root     4096 Nov 10  2015 saved
drwx------  2 postfix root     4096 Nov 10  2015 trace
[root@server ~]# 
Gazi

1 Answer


You probably have some program that creates a lot of very small files; in my experience on RHEL 6 it is often a cron script whose output gets sent by local mail.

Check the output of:

ls -l /var/spool/postfix/

If the number in the fifth column (the directory's own size) is large compared to the other entries, that's the culprit: a directory grows as it accumulates entries, so an unusually large queue directory points at a huge number of queued files.
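Directory size alone is a rough signal; an actual file count per queue directory is more direct. A sketch, assuming the default Postfix spool layout and root access:

```shell
# Count regular files under each Postfix queue directory.
# A huge count in deferred/ or maildrop/ usually means undelivered
# local mail (e.g. cron output) is consuming the inodes.
for d in /var/spool/postfix/*/; do
    printf '%8d %s\n' "$(find "$d" -type f | wc -l)" "$d"
done | sort -rn
```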

Update

From the output of ls -l /var/spool/postfix it seems the cron + postfix combination is not the problem in this case.

At this point, short of reinstalling with the option for more inodes, check whether some directory is filled with files, as per this question:

https://unix.stackexchange.com/questions/117093/find-where-inodes-are-being-used#117094

Try:

find / -xdev -printf '%h\n' | sort | uniq -c | sort -k 1 -n

But beware, it will take ages. The last entry will be the directory with the most inodes inside, which should give some clue.
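Once the heaviest directory is known, the small files can be removed in bulk. A hedged sketch (CULPRIT is a placeholder, not a path from this question; verify the directory manually before deleting anything):

```shell
# Substitute the directory that the find/sort pipeline pointed at.
CULPRIT=/path/to/culprit
# Delete regular files older than 2 days, staying on this filesystem.
find "$CULPRIT" -xdev -type f -mtime +2 -delete
```

For a Postfix queue specifically, postsuper -d ALL deferred is the safer tool, and for the yum cache yum clean all is preferable to removing files by hand.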

Fredi