Bug#767893: systemd cannot mount zfs filesystems from fstab

John Holland jholland at vin-dit.org
Wed Nov 12 23:43:49 GMT 2014


I looked at this a little more. I found someone saying that setting
theme=detail in plymouth.conf would help, so I tried that. I also ran
apt-get update and apt-get dist-upgrade.
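For reference, the plymouth change I tried was roughly this (path and
theme name from memory; on Debian the file is usually
/etc/plymouth/plymouthd.conf and the text-style theme is called
"details"):

```
# /etc/plymouth/plymouthd.conf (assumed location)
[Daemon]
Theme=details
```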

Basically, there are two issues. The prompts to enter passwords for
LUKS-encrypted volumes are mixed in with boot messages. However, I was
able to enter the passwords OK in spite of that.

The real issue is that I cannot use zfs volumes in /etc/fstab. If any
such mounts are present at boot time, the boot fails, zfs is not
successfully initialized, and the system is left in a bad state. If I
comment out the entry and reboot, everything is OK. And if, after
that, I re-enable the entry in /etc/fstab (like zpool1/zfshome ->
/home) and issue "mount -a" (but don't reboot), the filesystem is
mounted OK.
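For what it's worth, the fstab entry I'm testing with looks roughly
like this (the dataset name is mine; the options are what I'd expect
for a ZFS legacy mount):

```
# /etc/fstab entry for the dataset mentioned above
zpool1/zfshome  /home  zfs  defaults  0  0
```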

If I just wanted to run zfs filesystems with their native mounting
behavior, where each dataset mounts itself at a path derived from its
name like <pool>/<volume>, I'd be OK. But right now on my real machine
(running wheezy) I have zfs filesystems mounted on /var, /home, etc.
via /etc/fstab. If the system won't boot with the right filesystems
mounted in the first place, this is unworkable.
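As I understand it, for a dataset to be mounted via /etc/fstab rather
than by zfs itself, its mountpoint property has to be set to legacy,
which is how my datasets are configured. Something like:

```
# Tell ZFS not to auto-mount the dataset; fstab/mount(8) takes over.
zfs set mountpoint=legacy zpool1/zfshome

# Verify the property; it should now report "legacy".
zfs get mountpoint zpool1/zfshome
```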


There seems to be an interaction between systemd, /etc/fstab and zfs
volumes. Since zfs is not an official Debian package, this might seem
like something that could be blown off. I hope it is not, because ZFS
has become very useful to me.

BTW, I don't want to sound like an anti-systemd zealot, but I did also
try to replace systemd with sysvinit on this VM to see if that would
make a difference. When I tried to remove the systemd-named packages
and their dependencies, it basically destroyed the machine by removing
so much that it had no network with which to recover.


More information about the Pkg-systemd-maintainers mailing list