I'm trying to write a backup script that can be deployed on many different Linux servers, some of which mount network drives as described in /etc/fstab. Before running a backup, I want the script to check whether any network drives have come unmounted, so that their contents don't get skipped. Is there a reliable way to check for this in a bash script?
2 Answers
One method that comes to mind is to iterate over the list of mountpoints and count how many files are present under each one. A count of 1 probably means the filesystem isn't mounted, since find lists the directory itself first (though note an empty but mounted filesystem would look the same). This strategy won't work if the mountpoints are nested, however. By "nested" I mean mountpoints like:
/mnt/server1/share1
/mnt/server1/share1/share2
/mnt/server1/share1/share2/share3
I have also seen this approach fail when someone, or some process, didn't know a mount was absent and copied files to the mountpoint anyway; the files ended up on the underlying filesystem instead of on the filesystem that should have been mounted there (the device-number check sketched after the first example below catches this case).
But if your structure is more "flat" (or if it can be made flat for backup purposes), like:
/mnt/server1/share1
/mnt/server1/share2
/mnt/server2/share3
then:
MNTS="/mnt/server1/share1 /mnt/server1/share2 /mnt/server2/share3"
for DIR in $MNTS; do
    # find prints the directory itself first, so a count of 1 means
    # nothing else is visible beneath the mountpoint; head -10 keeps
    # find from walking the whole tree on a mounted filesystem
    N=$(find "$DIR" | head -10 | wc -l)
    if [ "$N" -eq 1 ]; then
        printf "%s appears to not be mounted\n" "$DIR"
    else
        : # back it up
    fi
done
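A more direct guard against the "files landed on the underlying filesystem" problem described above is to compare device numbers: a directory that really is a mountpoint lives on a different device than its parent directory. A minimal sketch, assuming GNU coreutils stat (the -c format option is not portable to BSD):

MNTS="/mnt/server1/share1 /mnt/server1/share2 /mnt/server2/share3"
for DIR in $MNTS; do
    # a real mountpoint is on a different device than its parent;
    # if the device numbers match, nothing is mounted there
    if [ "$(stat -c %d "$DIR")" = "$(stat -c %d "$DIR/..")" ]; then
        printf "%s appears to not be mounted\n" "$DIR"
    fi
done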
Another method, which might be slightly OS-specific, or at least subject to change if mount's output format changes, is to do a brute-force check of mount's output. Assuming that your mount delimits the mountpoint with spaces and never with tabs, then:
MNTS="/mnt/server1/share1 /mnt/server1/share2 /mnt/server2/share3"
for DIR in $MNTS; do
    # mount prints lines like "host:/export on /mnt/... type nfs (...)",
    # so the mountpoint appears surrounded by spaces; -q suppresses output
    if ! mount | grep -qF " $DIR "; then
        printf "%s appears to not be mounted\n" "$DIR"
    else
        : # back it up
    fi
done
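Finally, if your servers ship util-linux (most modern Linux distributions do, but that is an assumption worth verifying), the mountpoint utility performs essentially the same device-number check for you, with no output parsing:

MNTS="/mnt/server1/share1 /mnt/server1/share2 /mnt/server2/share3"
for DIR in $MNTS; do
    # mountpoint -q is silent and exits 0 only if $DIR is a mountpoint
    if ! mountpoint -q "$DIR"; then
        printf "%s appears to not be mounted\n" "$DIR"
    else
        : # back it up
    fi
done

findmnt "$DIR" (also from util-linux) works similarly: it exits non-zero if nothing is mounted there, and prints the matching entry if something is.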