[Pkg-zfsonlinux-devel] Bug#994855: zfs-dkms: Panic when receiving, fixed upstream
Chris
debian at mid-earth.net
Wed Sep 22 02:43:34 BST 2021
Package: zfs-dkms
Version: 2.0.3-9
Severity: grave
Tags: patch upstream
Justification: causes non-serious data loss
With this latest version of the Debian package, I've been getting panics
when receiving datasets. A patch fixing this has already been merged
upstream. The errors are:
VERIFY3(insert_inode_locked(ip) == 0) failed (-16 == 0)
PANIC at zfs_znode.c:616:zfs_znode_alloc()
The patch is at
https://github.com/openzfs/zfs/commit/afa7b3484556d3ae610a34582ce5ebd2c3e27bba
Please cherry-pick this patch and add it soon.
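In case it helps, here is a rough sketch of how the commit could be carried
as a quilt patch in the Debian source package until a new upload lands. The
patch file name is my own, and I'm assuming the usual 3.0 (quilt) layout of
the zfs-linux source; adjust as needed:

  # Fetch the upstream commit as a patch (GitHub serves it with a .patch suffix)
  wget -O debian/patches/do-not-hash-unlinked-inodes.patch \
      https://github.com/openzfs/zfs/commit/afa7b3484556d3ae610a34582ce5ebd2c3e27bba.patch
  # Register it with quilt and rebuild the package
  echo do-not-hash-unlinked-inodes.patch >> debian/patches/series
  dpkg-buildpackage -us -uc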
-- System Information:
Debian Release: 11.0
APT prefers stable
APT policy: (700, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 5.10.0-0.bpo.5-amd64 (SMP w/32 CPU threads)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE, TAINT_UNSIGNED_MODULE
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8) (ignored: LC_ALL set to en_US.UTF-8), LANGUAGE not set
Shell: /bin/sh linked to /bin/dash
Init: sysvinit (via /sbin/init)
LSM: AppArmor: enabled
Versions of packages zfs-dkms depends on:
ii debconf [debconf-2.0] 1.5.77
ii dkms 2.8.4-3
ii file 1:5.39-3
ii libc6-dev [libc-dev] 2.31-13
ii libpython3-stdlib 3.9.2-3
ii lsb-release 11.1.0
ii perl 5.32.1-4
ii python3-distutils 3.9.2-1
Versions of packages zfs-dkms recommends:
ii linux-libc-dev 5.10.46-4
ii zfs-zed 2.0.3-9
ii zfsutils-linux 2.0.3-9
Versions of packages zfs-dkms suggests:
ii debhelper 13.3.4
-- debconf information excluded
-------------- next part --------------
From afa7b3484556d3ae610a34582ce5ebd2c3e27bba Mon Sep 17 00:00:00 2001
From: Paul Zuchowski <31706010+PaulZ-98 at users.noreply.github.com>
Date: Fri, 11 Jun 2021 20:00:33 -0400
Subject: [PATCH] Do not hash unlinked inodes
In zfs_znode_alloc we always hash inodes. If the
znode is unlinked, we do not need to hash it. This
fixes the problem where zfs_suspend_fs is doing zrele
(iput) in an async fashion, and zfs_resume_fs unlinked
drain processing will try to hash an inode that could
still be hashed, resulting in a panic.
Reviewed-by: Brian Behlendorf <behlendorf1 at llnl.gov>
Reviewed-by: Alan Somers <asomers at gmail.com>
Signed-off-by: Paul Zuchowski <pzuchowski at datto.com>
Closes #9741
Closes #11223
Closes #11648
Closes #12210
---
module/os/linux/zfs/zfs_znode.c | 15 +++++++++++----
1 file changed, 11 insertions(+), 4 deletions(-)
Index: zfs-linux-2.0.3/module/os/linux/zfs/zfs_znode.c
===================================================================
--- zfs-linux-2.0.3.orig/module/os/linux/zfs/zfs_znode.c
+++ zfs-linux-2.0.3/module/os/linux/zfs/zfs_znode.c
@@ -610,17 +610,24 @@ zfs_znode_alloc(zfsvfs_t *zfsvfs, dmu_bu
* number is already hashed for this super block. This can never
* happen because the inode numbers map 1:1 with the object numbers.
*
- * The one exception is rolling back a mounted file system, but in
- * this case all the active inode are unhashed during the rollback.
+ * Exceptions include rolling back a mounted file system, either
+ * from the zfs rollback or zfs recv command.
+ *
+ * Active inodes are unhashed during the rollback, but since zrele
+ * can happen asynchronously, we can't guarantee they've been
+ * unhashed. This can cause hash collisions in unlinked drain
+ * processing so do not hash unlinked znodes.
*/
- VERIFY3S(insert_inode_locked(ip), ==, 0);
+ if (links > 0)
+ VERIFY3S(insert_inode_locked(ip), ==, 0);
mutex_enter(&zfsvfs->z_znodes_lock);
list_insert_tail(&zfsvfs->z_all_znodes, zp);
zfsvfs->z_nr_znodes++;
mutex_exit(&zfsvfs->z_znodes_lock);
- unlock_new_inode(ip);
+ if (links > 0)
+ unlock_new_inode(ip);
return (zp);
error:
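For anyone who wants to test the fix locally before the package is updated,
here is a rough sketch against the dkms source tree that zfs-dkms installs
(the version string is taken from the report above; paths and exact dkms
invocations may differ on your system):

  # Apply the upstream patch to the dkms source and rebuild the module
  cd /usr/src/zfs-2.0.3
  patch -p1 < /path/to/do-not-hash-unlinked-inodes.patch
  dkms build zfs/2.0.3 --force
  dkms install zfs/2.0.3 --force
  # Confirm the rebuilt module, then reboot so the patched module gets loaded
  dkms status zfs
  modinfo zfs | head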