I have Debian 10 (Buster) installed and have added ZFS from Backports. I have four iSCSI LUNs that I use as disks for ZFS; each LUN holds a separate zpool.

So far the ZFS setup works, but the system does not survive reboots reliably: sometimes all ZFS volumes are restored and mounted correctly after a reboot, sometimes not. I think this happens because ZFS does not wait for iSCSI to finish coming up.

I tried:

$ cat /etc/systemd/system/zfs-import-cache.d/after-open-iscsi.conf
[Unit]
After=open-iscsi.service
BindsTo=open-iscsi.service
$ systemd-analyze critical-chain zfs-import-cache.service
The time after the unit is active or started is printed after the "@" character.
The time the unit takes to start is printed after the "+" character.

zfs-import-cache.service +1.602s
└─open-iscsi.service @2min 1.033s +286ms
  └─iscsid.service @538ms +72ms
    └─network-online.target @536ms
      └─ifup@eth0.service @2min 846ms
        └─apparmor.service @2min 748ms +83ms
          └─local-fs.target @2min 745ms
            └─exports-kanzlei.mount @2min 3.039s
              └─local-fs-pre.target @569ms
                └─keyboard-setup.service @350ms +216ms
                  └─systemd-journald.socket @347ms
                    └─system.slice @297ms
                      └─-.slice @297ms
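
One thing worth checking is whether systemd picks the drop-in up at all; as far as I know it only reads drop-ins from a directory named after the full unit name, i.e. /etc/systemd/system/zfs-import-cache.service.d/ (with the .service suffix):

$ systemctl cat zfs-import-cache.service    # prints the unit file plus every drop-in systemd actually sees
$ systemd-delta --type=extended             # lists all drop-ins currently in effect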

This does not solve my problem. Probably the iSCSI devices are not ready yet even though open-iscsi.service is already considered active by systemd, and therefore ZFS does not find its devices.
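
These are the checks I run after a bad boot to confirm that suspicion (device names are from my setup):

$ systemctl is-active open-iscsi.service      # is the unit already considered active?
$ lsblk                                       # are the iSCSI disks actually visible yet?
$ journalctl -b -u zfs-import-cache.service   # did the import run before the disks appeared?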

Currently my only (very dirty) workaround is to put these commands into /etc/rc.local:

systemctl start zfs-import-cache.service
systemctl start zfs-mount.service
systemctl start zfs-share.service
systemctl start zfs-zed.service

zfs mount -a

This works, but I want a clean solution.
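
(Side note for anyone copying this: /etc/rc.local is only run by rc-local.service if the file is executable and starts with a shebang, so it is worth verifying:)

$ ls -l /etc/rc.local                 # needs the executable bit (chmod +x) and a shebang as its first line
$ systemctl status rc-local.service   # should end up "active (exited)" after boot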

What I really do not understand, and what drives me crazy, is that Debian ships both /etc/init.d/scriptname scripts and systemd unit files. Which one is actually used, sysvinit or systemd? Why are both provided? Which one should I rely on?
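
As far as I understand it, when systemd is PID 1 the native unit file wins and the /etc/init.d/ script is only used, via systemd-sysv-generator, when no unit file of the same name exists. This can at least be checked per service:

$ systemctl status open-iscsi.service   # the "Loaded:" line shows which file systemd actually uses
$ systemctl cat open-iscsi.service      # prints that file; generated wrappers reference the init.d script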

So right now my boot process is simply not stable.

1 Answer

The recommended way of doing this is probably a udev rule, but I don't know udev well enough. This is my present workaround (please tell me how to do this better):

  1. Create a service for the iSCSI disk device you need to wait on (/dev/sdb1, in my case):

    $ sudo EDITOR=vim systemctl edit --force --full dev-sdb1.service

  2. In the service definition, make ZFS depend on it.

  3. In the service definition, make it wait for the device to be available.

Step 3 is the tricky part. What I have is this:

[Unit]
Description="Monitor the existence of /dev/sdb1"
Before=zfs-import-cache.service
# Requires=dev-sdb1.device
# After=dev-sdb1.device
# I thought this would wait for the device to become available.
# It doesn't, and there appears to be no way to actually do so.

[Service]
Type=oneshot
# pathetic, I know
ExecStart=/bin/sh -c 'while [ ! -e /dev/sdb1 ]; do sleep 10; done'

[Install]
WantedBy=multi-user.target
RequiredBy=zfs-import-cache.service
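
After creating or changing the unit it still has to be reloaded and enabled, otherwise the [Install] section has no effect:

$ sudo systemctl daemon-reload
$ sudo systemctl enable dev-sdb1.service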

Obviously it would be better to do this without a shell script polling for the device. Having read the documentation on device units and the answers to a couple of related Stack Exchange questions, and having verified that dev-sdb1.device exists after booting, I expected to be able to make zfs-import-cache.service wait for /dev/sdb1 simply by adding the rules

After=dev-sdb1.device
Requires=dev-sdb1.device

to its definition. That doesn't work: the service fails with result 'dependency', whatever that means. I suppose dev-sdb1.device does not exist yet at that point, systemd does not know it will be created soon, and I cannot find a directive that says "just wait for it".
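
What I looked at while debugging this (nothing authoritative, just the obvious checks):

$ systemctl status dev-sdb1.device                 # does systemd consider the device unit active?
$ systemctl list-units --type=device | grep sdb    # which disk device units exist right now?
$ journalctl -b -u zfs-import-cache.service        # shows the dependency failure and its timing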

Alternative approaches:

  1. Use a path unit instead of a shell script to wait for the device. I haven't tried this; a rough, untested sketch is included after this list.
  2. Add a udev rule matching /dev/sdb or /dev/sdb1 (if that is even possible) to explicitly make its device unit available. I haven't tried this either; I don't see why it would help (the device unit is already being created), and I can't tell whether it would have other side effects on device initialization and break something as a result.
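
A rough, untested sketch of approach 1 (the unit names are made up, and I have not verified that inotify events fire for device nodes appearing in /dev): the .path unit triggers the .service of the same name once /dev/sdb1 exists, and that service simply starts the ZFS units, much like the rc.local workaround in the question.

# /etc/systemd/system/dev-sdb1-watch.path  (hypothetical)
[Unit]
Description=Watch for /dev/sdb1 to appear

[Path]
PathExists=/dev/sdb1

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/dev-sdb1-watch.service  (activated by the .path unit above)
[Unit]
Description=Import and mount ZFS pools once /dev/sdb1 exists

[Service]
Type=oneshot
ExecStart=/bin/systemctl start zfs-import-cache.service zfs-mount.service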