Bug#767893: more info

John Holland jholland at vin-dit.org
Fri Nov 14 13:14:20 GMT 2014


I figured I would investigate further -

I have a jessie VM with some virtio disks that I am using for ZFS. I
updated jessie to the latest packages (apt-get update, apt-get
upgrade). I installed zfsonlinux from their .deb for jessie and then
installed their packages to add the DKMS modules and utilities.
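From memory, the install steps were roughly the following; the exact
.deb filename and package names are approximate, so treat this as a
sketch:

  # install the zfsonlinux repository package (their .deb for jessie)
  dpkg -i zfsonlinux_*.deb
  apt-get update
  # metapackage pulling in the DKMS modules and the userland
  # utilities (package name approximate)
  apt-get install debian-zfs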

I created a zpool and some ZFS filesystems. Seems fine. Then I set
one to mountpoint=legacy (a ZFS setting that allows it to be mounted
like any other filesystem) and put it in /etc/fstab to mount on /home.
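Roughly what I did (pool and dataset names here are illustrative):

  zpool create tank /dev/vdb /dev/vdc   # the virtio disks
  zfs create tank/home
  zfs set mountpoint=legacy tank/home

and the corresponding /etc/fstab entry:

  tank/home  /home  zfs  defaults  0  0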

As long as I am booting with sysvinit (I have both sysvinit and
systemd on this VM) it is fine. Booting with systemd (via
init=/lib/systemd/systemd) produces an error and an "enter root
password" prompt.
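To boot with systemd I edit the kernel command line at the GRUB menu
and append the init= parameter, i.e. something like this (kernel path
and root device are illustrative):

  linux /vmlinuz root=/dev/vda1 ro init=/lib/systemd/systemd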


If there is an /etc/fstab entry for the ZFS filesystem on /home, and
/home on the root filesystem is not empty, it gives an error that the
mount point is not empty (not normal behaviour IMHO).
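For comparison, a plain mount(8) over a non-empty directory succeeds
and simply shadows the existing contents, e.g. (dataset name
illustrative):

  mount -t zfs tank/home /home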

If I empty out /home and reboot, I get:


=======================================
Unit home.mount has failed.

...

Dependency failed for Local File Systems.  

Unit home.mount entered failed state.
==========================================


I have taken the encrypted block device part of this ticket out of
the equation, but I still cannot mount a ZFS filesystem via
/etc/fstab under systemd. (When booting via sysvinit it works fine.)

Also, even worse, and this seems to have started with recent updates
(since I first created this bug): booting/restarting with systemd
seems to DESTROY all my ZFS filesystems and pools. This is hard to
test, but it really does seem to happen consistently. Rebooting with
sysvinit does not have this behavior.
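For what it's worth, this is how I check after a systemd boot whether
a pool is actually gone or merely not imported (pool name
illustrative):

  zpool status      # shows currently imported pools
  zpool import      # scans devices for pools that could be imported
  zpool import tank # try to import the pool by name

If zpool import finds nothing on the devices, the pool really is gone
rather than just not imported.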

-- 
John Holland
jholland at vin-dit.org
gpg public key ID 0xEFAF0D15


