Bug#767893: systemd cannot mount zfs filesystems from fstab

Azeem Esmail azeeme234 at gmail.com
Mon Jan 19 23:10:44 GMT 2015


Dear Maintainer,

 

I am also trying to mount ZFS datasets through fstab on Debian Jessie (in
VirtualBox). It appears that mounting /usr through fstab results in systemd
and systemd-remount-fs errors.

 

It seems that this bug report is related to my issue, so I am posting my
information here. Could you kindly look into this matter? If I need to open
a new bug report, let me know.

 

I would also appreciate it if you could advise me on whether I can safely
ignore this error and whether a systemd fix is in the works.

 

Thanks in advance.

 

 

I am booting through UEFI on GPT-partitioned disks.

 

Jessie installer build: 20141222-00:04

systemd version: 215-8

ZFS on Linux version: 0.6.3

 

 

I have found that "/" and "/home" can be mounted either through zfs mount
or through fstab without errors (/var/log/daemon.log; see the note after
this log excerpt):

 

... zfs-mount[1150]: Importing ZFS pools.

... zfs-mount[1150]: Mounting ZFS filesystems not yet mounted.

... zfs-mount[1150]: Mounting volumes registered in fstab.
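
For clarity: by "zfs mount" I mean that the dataset keeps its native
mountpoint property and is mounted by the zfs-mount init script; by "fstab"
I mean that the mountpoint is set to legacy and the dataset is listed in
/etc/fstab. Using /home as an example, the two setups are roughly:

# zfs set mountpoint=/home mpool/ROOT/deb/home    ("zfs mount" case)

# zfs set mountpoint=legacy mpool/ROOT/deb/home   ("fstab" case)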

 

 

Mounting /var through zfs mount results in the following errors:

 

... zfs-mount[1176]: Importing ZFS pools.

... zfs-mount[1176]: Mounting ZFS filesystems not yet mounted.

... zfs-mount[1176]: cannot mount '/var': directory is not empty

... zfs-mount[1176]: failed!

... systemd[1]: zfs-mount.service: control process exited, code=exited
status=1

... systemd[1]: Failed to start LSB: Import and mount ZFS pools, filesystem
and volumes.

... systemd[1]: Unit zfs-mount.service entered failed state.

 

 

Even though /var is not mounted properly, the /var directory still appears
on boot-up, but many files are missing. This is somewhat understandable,
since ZFS currently refuses to mount over non-empty directories; this
should hopefully be fixed in version 0.6.4 through the overlay feature.

 

https://github.com/zfsonlinux/zfs/issues/1827
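
If 0.6.4 does ship the overlay property discussed in that issue, I assume
the eventual workaround would be something along these lines (untested on
my side):

# zfs set overlay=on mpool/ROOT/deb/var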

 

 

Mounting /var through fstab yields:

 

... systemd[1]: var.mount: Directory /var to mount over is not empty,
mounting anyway.

... zfs-mount[1150]: Importing ZFS pools.

... zfs-mount[1150]: Mounting ZFS filesystems not yet mounted.

... zfs-mount[1150]: Mounting volumes registered in fstab.

 

On boot-up the /var directory appears to be mounted correctly with all its
files present. /boot behaves the same as /var.

 

/usr behaves a little differently. When mounting /usr through zfs mount,
you would be lucky if the system boots up at all.

 

 

Mounting /usr through fstab, in addition to /var and /boot, yields:

 

... systemd-remount-fs[571]: filesystem 'mpool/ROOT/deb/usr' can not be
mounted due to error 22

... systemd-remount-fs[571]: /bin/mount for /usr exited with exit status 1.

... systemd[1]: systemd-remount-fs.service: main process exited,
code=exited, status=1/FAILURE

... systemd[1]: Failed to start Remount Root and Kernel File Systems.

... systemd[1]: Unit systemd-remount-fs.service entered failed state.

... systemd[1]: var.mount: Directory /var to mount over is not empty,
mounting anyway.

... systemd[1]: boot.mount: Directory /boot to mount over is not empty,
mounting anyway.

... zfs-mount[1158]: Importing ZFS pools.

... zfs-mount[1158]: Mounting ZFS filesystems not yet mounted.

... zfs-mount[1158]: Mounting volumes registered in fstab.
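
For reference, error 22 is EINVAL. As far as I can tell, systemd-remount-fs
simply re-runs the fstab entries for / and /usr with -o remount, so the
same failure can presumably be reproduced by hand with:

# /bin/mount /usr -o remount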

 

 

On boot up the following is listed:

 

[FAILED] Failed to start Remount Root and Kernel File Systems.

See 'systemctl status systemd-remount-fs.service' for details.

[  OK  ] Reached target Local File Systems (Pre).

         Mounting /usr...

         Mounting /var...

         Mounting /boot...

         Mounting /home...

[  OK  ] Activated swap /dev/disk/by-uuid/...

[  OK  ] Reached target Swap.

[  OK  ] Mounted /var.

         Starting Load/Save Random Seed...

[  OK  ] Mounted /boot.

[  OK  ] Mounted /usr.

         Mounting /boot/efi...

[  OK  ] Mounted /home.

[  OK  ] Started Load/Save Random Seed.

[  OK  ] Mounted /boot/efi.

 

 

Checking errors in systemd-remount-fs:

 

# systemctl status systemd-remount-fs.service

 

Active: failed (Result: exit-code) since Mon 2015-01-19 ...

 

Process: 592 ExecStart=/lib/systemd/systemd-remount-fs (code=exited,
status=1/FAILURE)

Main PID: 592 (code=exited, status=1/FAILURE)

 

... systemd-remount-fs[592]: filesystem 'mpool/ROOT/deb/usr' can not be
mounted due to error 22

... systemd[1]: systemd-remount-fs.service: main process exited,
code=exited, status=1/FAILURE

... systemd[1]: Failed to start Remount Root and Kernel File Systems.

... systemd[1]: Unit systemd-remount-fs.service entered failed state.

 

 

Entries in /etc/fstab:

 

# if mountpoint=legacy is set for / and /home

#

mpool/ROOT/deb  /  zfs  defaults,noatime,rw  0  0

mpool/ROOT/deb/home  /home  zfs  defaults,noatime,nodev,nosuid,noexec  0  0


#

# zfs set mountpoint=legacy for /boot, /var, /usr

#

mpool/ROOT/deb/usr  /usr  zfs  defaults,noatime,nodev,rw  0  0

mpool/ROOT/deb/boot  /boot  zfs  defaults,noatime  0  0

mpool/ROOT/deb/var  /var  zfs  defaults,noatime,nodev,nosuid,noexec  0  0
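
The legacy mountpoints referred to in the comments above were set with
zfs set; the current value can be double-checked with zfs get, for example:

# zfs set mountpoint=legacy mpool/ROOT/deb/usr

# zfs get mountpoint mpool/ROOT/deb/usr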

 

 

For the most part, the partitioning requirements are set up following the
instructions in the Securing Debian HOWTO:

 

4.10 Mounting partitions the right way

https://www.debian.org/doc/manuals/securing-debian-howto/ch4.en.html

and

http://changelog.complete.org/archives/9241-update-on-the-systemd-issue

 

 

The zpool is a mirrored pair (RAID 1) and was created using /dev/disk/by-id
with the following options:

 

pool options:

 

ashift=12

autoexpand=on

autoreplace=on

feature@lz4_compress=enabled

 

dataset options:

 

atime=off

checksum=fletcher4

compression=lz4

xattr=sa

 

Additional options are set through the /etc/fstab entries above.
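
For completeness, a pool with these options would have been created roughly
as follows (a reconstruction, not the exact command I ran; the disk names
are placeholders):

# zpool create -o ashift=12 -o autoexpand=on -o autoreplace=on \
    -o feature@lz4_compress=enabled \
    -O atime=off -O checksum=fletcher4 -O compression=lz4 -O xattr=sa \
    mpool mirror /dev/disk/by-id/<disk1> /dev/disk/by-id/<disk2>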

 

 
