From ftpmaster at ftp-master.debian.org Thu Feb 2 16:55:02 2023 From: ftpmaster at ftp-master.debian.org (Debian FTP Masters) Date: Thu, 02 Feb 2023 16:55:02 +0000 Subject: [Pkg-zfsonlinux-devel] Processing of zfs-linux_2.1.9-1~bpo11+1_source.changes Message-ID: zfs-linux_2.1.9-1~bpo11+1_source.changes uploaded successfully to localhost along with the files: zfs-linux_2.1.9-1~bpo11+1.dsc zfs-linux_2.1.9-1~bpo11+1.debian.tar.xz zfs-linux_2.1.9-1~bpo11+1_source.buildinfo Greetings, Your Debian queue daemon (running on host usper.debian.org) From ftpmaster at ftp-master.debian.org Thu Feb 2 17:05:17 2023 From: ftpmaster at ftp-master.debian.org (Debian FTP Masters) Date: Thu, 02 Feb 2023 17:05:17 +0000 Subject: [Pkg-zfsonlinux-devel] zfs-linux_2.1.9-1~bpo11+1_source.changes ACCEPTED into bullseye-backports Message-ID: Thank you for your contribution to Debian. Accepted: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Format: 1.8 Date: Fri, 03 Feb 2023 00:39:42 +0800 Source: zfs-linux Architecture: source Version: 2.1.9-1~bpo11+1 Distribution: bullseye-backports Urgency: medium Maintainer: Debian ZFS on Linux maintainers Changed-By: Aron Xu Changes: zfs-linux (2.1.9-1~bpo11+1) bullseye-backports; urgency=medium . * Rebuild for bullseye-backports. Checksums-Sha1: 9eb4ca626fc1d95cabe018eeece81ad3ee44dd90 3196 zfs-linux_2.1.9-1~bpo11+1.dsc f2ed013822461959b5e6659d61e729a9c77ebaa5 105980 zfs-linux_2.1.9-1~bpo11+1.debian.tar.xz cb3d9661f30e74194d0a3ea4bc75e132cca69ab8 7909 zfs-linux_2.1.9-1~bpo11+1_source.buildinfo Checksums-Sha256: 09a837849066f4416e0abeaa538a01e9b193ac02d77724b9313779983f6769cf 3196 zfs-linux_2.1.9-1~bpo11+1.dsc b25f4a03e9a79ab289570fb8bbe22b379b834c9085b621e18dceefbb226c2aba 105980 zfs-linux_2.1.9-1~bpo11+1.debian.tar.xz 3626678b4b7b6554fc5c1ce53d19dd0f66f2e5ea0ffffe506dfb1192511c086c 7909 zfs-linux_2.1.9-1~bpo11+1_source.buildinfo Files: a57da472e1364fa2f07d7205b763f916 3196 contrib/kernel optional zfs-linux_2.1.9-1~bpo11+1.dsc dcf6abcbff573e622f57f85e3a9eb034 105980 contrib/kernel optional zfs-linux_2.1.9-1~bpo11+1.debian.tar.xz f8cb4e91f054d26737824b42e1fee151 7909 contrib/kernel optional zfs-linux_2.1.9-1~bpo11+1_source.buildinfo -----BEGIN PGP SIGNATURE----- iQEzBAEBCAAdFiEEhhz+aYQl/Bp4OTA7O1LKKgqv2VQFAmPb6KAACgkQO1LKKgqv 2VTWegf/UAeqEyGvGCNM7A2dQa2m/Majfy1nhOMoUntoE7OGF5AvqqjSJVfSG9Ow HnLILrElJ3lp8VA2Es8J8sbHDB6PVpwzH3XqLWT9CHRizT1OJF49y9DncvKRQYDD duldrlcB91cUVSWgNmAnx55HOJ2FIYUMJ9kgcZhVGItDX1BrhTZ53R/rntyv94wu 6jTQXq4ALvXRZ+poWuu4ksPXXiesVAaW32pegiwvaH7YEQNnOqoiRCzUerU4uT9/ 3EttiwWH+/Dh2C5Xdfm4cCGbkKOiTq44JySbs150zF8INlYM+exvEfUjpga57cPr XCAABO1rS8rMOKbjl/8Uv+inKv84/g== =9FHS -----END PGP SIGNATURE----- From debian at scott.scolby.com Thu Feb 2 19:23:02 2023 From: debian at scott.scolby.com (Scott Colby) Date: Thu, 02 Feb 2023 14:23:02 -0500 Subject: [Pkg-zfsonlinux-devel] Bug#1030316: trim script always exits 1 despite not failing Message-ID: <4b98bf03-4dc2-402c-83df-8ed62cefc1e0@app.fastmail.com> Package: zfsutils-linux Version: 2.1.7-1~bpo11+1 When I invoke `sh /usr/lib/zfs-linux/trim`, the script always exits 1 despite operating properly. This is causing me some pain when trying to automate calling this script with a systemd timer, since I can't differentiate some other reason of the script exiting 1 from a successful run. I believe this is caused by the final command of the script being `zpool list ... | while read -r pool do ...; done`. When the output of `zpool list` is exhausted, `read` returns an error, and thus the script exits with that status. 
I have confirmed this by running the script with `sh -x` and seeing that
the last output line is `+ read -r pool`.

I'm not sure what the right solution to this would be, but I think that
it should be addressed.

Thank you,
Scott Colby

From pere at hungry.com Thu Feb 2 19:45:03 2023
From: pere at hungry.com (Petter Reinholdtsen)
Date: Thu, 02 Feb 2023 20:45:03 +0100
Subject: [Pkg-zfsonlinux-devel] Bug#1030316: Bug#1030316: trim script always exits 1 despite not failing
In-Reply-To: <4b98bf03-4dc2-402c-83df-8ed62cefc1e0@app.fastmail.com>
References: <4b98bf03-4dc2-402c-83df-8ed62cefc1e0@app.fastmail.com> <4b98bf03-4dc2-402c-83df-8ed62cefc1e0@app.fastmail.com>
Message-ID: 

[Scott Colby]
> I believe this is caused by the final command of the script being
> `zpool list ... | while read -r pool do ...; done`. When the output
> of `zpool list` is exhausted, `read` returns an error, and thus the
> script exits with that status. I have confirmed this by running the
> script with `sh -x` and seeing that the last output line is
> `+ read -r pool`.

This sound strange. It is not according to my understanding of bourne
shell scripting, and this oneliner describe how I believe it work:

% ((set -x; echo foo|while read a; do a=$a; done); echo $?)
+ echo foo
+ read a
+ a=foo
+ read a
0
%

-- 
Happy hacking
Petter Reinholdtsen

From debian at scott.scolby.com Thu Feb 2 19:54:03 2023
From: debian at scott.scolby.com (Scott Colby)
Date: Thu, 02 Feb 2023 14:54:03 -0500
Subject: [Pkg-zfsonlinux-devel] Bug#1030316: Bug#1030316: trim script always exits 1 despite not failing
In-Reply-To: 
References: <4b98bf03-4dc2-402c-83df-8ed62cefc1e0@app.fastmail.com> <4b98bf03-4dc2-402c-83df-8ed62cefc1e0@app.fastmail.com>
Message-ID: <6c4f8ce4-5a64-4714-9cb5-47e6cec2df04@app.fastmail.com>

On Thu, Feb 2, 2023, at 14:45, Petter Reinholdtsen wrote:
> [Scott Colby]
> > I believe this is caused by the final command of the script being
> > `zpool list ... | while read -r pool do ...; done`. When the output
> > of `zpool list` is exhausted, `read` returns an error, and thus the
> > script exits with that status. I have confirmed this by running the
> > script with `sh -x` and seeing that the last output line is
> > `+ read -r pool`.
>
> This sound strange. It is not according to my understanding of bourne
> shell scripting, and this oneliner describe how I believe it work:
>
> % ((set -x; echo foo|while read a; do a=$a; done); echo $?)
> + echo foo
> + read a
> + a=foo
> + read a
> 0
> %

You are correct. I was continuing to investigate this and I think the
actual case is that the previous call returns 1:

+ lsblk -dnr -o TRAN /dev/sda
+ [ sata = nvme ]  # <-- this is false
+ return
+ read -r pool

As can be demonstrated by:

$ cat test.sh
#!/usr/bin/env sh
printf '1\n2\n3\n' | \
while read -r h
do
    echo "aa$h"
    false
done
$ sh -x test.sh
+ + printf 1\n2\n3\n
read -r h
+ echo aa1
aa1
+ false
+ read -r h
+ echo aa2
aa2
+ false
+ read -r h
+ echo aa3
aa3
+ false
+ read -r h
$ echo $?
1

I still think that this is a bug in the trim script though.
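
(One way to work around this locally, sketched below, is to make sure the
script's final pipeline does not inherit a benign failure from the last
loop iteration. This is only a reduced illustration of the pattern under
discussion, not the actual contents of /usr/lib/zfs-linux/trim, and the
`false` merely stands in for a vdev check that is allowed to fail: a POSIX
while loop exits with the status of the last command executed in its body,
so that status becomes the pipeline's, and here the script's, exit status.)

#!/bin/sh
# Reduced example of the failure mode discussed above.
printf 'pool1\npool2\n' |
while read -r pool
do
    echo "checking $pool"
    false          # stands in for a vdev check that legitimately fails
done || true       # option 1: keep the loop's final status from leaking out

# option 2: alternatively, end the script with an explicit success
exit 0

(Either the `|| true` on the pipeline or the explicit `exit 0` at the end
would be enough on its own.)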
From noreply at release.debian.org Mon Feb 13 04:39:29 2023 From: noreply at release.debian.org (Debian testing autoremoval watch) Date: Mon, 13 Feb 2023 04:39:29 +0000 Subject: [Pkg-zfsonlinux-devel] zfs-linux is marked for autoremoval from testing Message-ID: zfs-linux 2.1.9-1 is marked for autoremoval from testing on 2023-03-20 It (build-)depends on packages with these RC bugs: 1030511: libguestfs: FTBFS on s390x: timeout https://bugs.debian.org/1030511 This mail is generated by: https://salsa.debian.org/release-team/release-tools/-/blob/master/mailer/mail_autoremovals.pl Autoremoval data is generated by: https://salsa.debian.org/qa/udd/-/blob/master/udd/testing_autoremovals_gatherer.pl From happyaron.xu at gmail.com Sat Feb 25 08:17:44 2023 From: happyaron.xu at gmail.com (Aron Xu) Date: Sat, 25 Feb 2023 16:17:44 +0800 Subject: [Pkg-zfsonlinux-devel] zfs-dkms and Intel QAT In-Reply-To: References: Message-ID: Hi, On Tue, Jan 31, 2023 at 7:12?AM Chandler wrote: > > Hello, we have a computer running bullseye with an Intel 8970 (C62x) PCIe card in it. I previously had this working fine with ZFS 0.8.2 I think, which I built and installed from the source. Obviously that was very old, along with all the other software and OS on the machine. I've now upgraded everything. > > Plus I learned from another user I could just export ICP_ROOT to point to the Intel drivers and install your zfs-dkms package, so I have zfs from bullseye-backports installed. This worked, but only until I rebooted the computer. Now I can't for the life of me get ZFS to use the QAT anymore, and he hasn't been able to either for over a year it seems. You might see our lengthy discussions on the ZoL Github pages. > It seems that nobody here has access to Intel's QAT hardware, so I'm afraid it's hard to get help. But in case you find the solution, it's welcomed to contribute back. Regards, Aron From ftpmaster at ftp-master.debian.org Sun Feb 26 05:04:04 2023 From: ftpmaster at ftp-master.debian.org (Debian FTP Masters) Date: Sun, 26 Feb 2023 05:04:04 +0000 Subject: [Pkg-zfsonlinux-devel] Processing of zfs-linux_2.1.9-2_source.changes Message-ID: zfs-linux_2.1.9-2_source.changes uploaded successfully to localhost along with the files: zfs-linux_2.1.9-2.dsc zfs-linux_2.1.9-2.debian.tar.xz zfs-linux_2.1.9-2_source.buildinfo Greetings, Your Debian queue daemon (running on host usper.debian.org) From ftpmaster at ftp-master.debian.org Sun Feb 26 05:20:02 2023 From: ftpmaster at ftp-master.debian.org (Debian FTP Masters) Date: Sun, 26 Feb 2023 05:20:02 +0000 Subject: [Pkg-zfsonlinux-devel] zfs-linux_2.1.9-2_source.changes ACCEPTED into unstable Message-ID: Thank you for your contribution to Debian. Accepted: -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 Format: 1.8 Date: Sun, 26 Feb 2023 12:32:52 +0800 Source: zfs-linux Architecture: source Version: 2.1.9-2 Distribution: unstable Urgency: medium Maintainer: Debian ZFS on Linux maintainers Changed-By: Aron Xu Changes: zfs-linux (2.1.9-2) unstable; urgency=medium . [ Aron Xu ] * d/control: remove obsolete Recommends lsb-base * cherry-pick upstream post 2.1.9 patches . [ Mo Zhou ] * Cherry-pick more patches. * Add a new symbol for libzpool5linux. 
Checksums-Sha1: 6ff0dd3bc1ec23033fd9a198a1201991153a05cd 3164 zfs-linux_2.1.9-2.dsc e0ee2fb20dd9572570b5bcc9f867903741c99a2d 112476 zfs-linux_2.1.9-2.debian.tar.xz 5bda21f725a47cd4dee4f03817c415418698533a 7877 zfs-linux_2.1.9-2_source.buildinfo Checksums-Sha256: d60a5e8469a8f26d8f318ab60e20373bb12b7aa93c74caa8064893789a54e6e2 3164 zfs-linux_2.1.9-2.dsc 897457fdb08f258cde34b3e6a37b1152e62efac89ec3b54161bc42be7f49790e 112476 zfs-linux_2.1.9-2.debian.tar.xz 0067d8d1f702d0fb940df59e3b3f6bf93c8881a65e02f386a498935b4fce1bc9 7877 zfs-linux_2.1.9-2_source.buildinfo Files: 622c5bd763b31a89587c60aeec104891 3164 contrib/kernel optional zfs-linux_2.1.9-2.dsc c0bbd249571892f3025c08a7e8e4771a 112476 contrib/kernel optional zfs-linux_2.1.9-2.debian.tar.xz c503258c94ccb2d0602a90773cb153f8 7877 contrib/kernel optional zfs-linux_2.1.9-2_source.buildinfo -----BEGIN PGP SIGNATURE----- iQEzBAEBCAAdFiEEhhz+aYQl/Bp4OTA7O1LKKgqv2VQFAmP65YEACgkQO1LKKgqv 2VSj2QgAhN3hFQf8Tmi8cToJraqme+vpf9W7DkMQr3B1CH1Lt2nwx683XsFpDAf0 m0hlk5nK6W32zyKy+s93gDW2DqVk56CAd5pbYkXk8A+2z52mjAIX52Z4ijHxyIoV c8M2HDWVTW5kAftWrWkS/24aa69CSeTbHxn/70HhHwQR3ZcEJvAprYBTSnsZ0msT FJ2rgYx5Tetduu2ZMcQm+LKCJLHOCc/MQcyMdQyZmoSSrjUzWpGGvZ4ETUF+09Oq z6WgGG82A6AIw0Er1GBHs73BVVuW/EGyS6feakXoJEz7K6cUaT5F9DtfX+yswKJu EGXZuSL+NVm3zRmpMCrI6aKmu7cFUQ== =ytT8 -----END PGP SIGNATURE----- From admin at genome.arizona.edu Mon Feb 27 13:05:51 2023 From: admin at genome.arizona.edu (Chandler) Date: Mon, 27 Feb 2023 06:05:51 -0700 Subject: [Pkg-zfsonlinux-devel] zfs-dkms and Intel QAT In-Reply-To: References: Message-ID: Aron Xu wrote on 2/25/23 1:17?AM: > It seems that nobody here has access to Intel's QAT hardware, so I'm > afraid it's hard to get help. But in case you find the solution, it's > welcomed to contribute back. Hi Aron, yes that's understandable and what I figured as well. I've asked for help in several places, thanks for following up and reminding me here. Most of my testing and results and conclusions have been posted to various issues on the ZFS Github page https://github.com/openzfs/zfs There is another Debian ZFS QAT user there who was having issues too so we at least could work together. Regarding this particular issue. The problem seems to be that the kernel will automatically load modules as soon as it detects hardware that needs/uses those modules. In this case, the kernel was detecting ZFS partitions on the disks and auto-loading the ZFS module before the QAT had loaded its modules. It wasn't clear to me why this happened, for at least a couple reasons: 1. I thought the ZFS module had a dependency on the QAT module, so if it was being loaded/inserted then it would trigger the QAT module to be inserted first. `lsmod` states that qat_api is used by zfs, and `modinfo` states that zfs depends on qat_api. 2. The kernel scans the PCI bus before it scans the disk partitions, and so it was actually detecting the Intel QAT in the PCI-E slot first. I guess it doesn't have the proper code to recognize that and load the related modules... probably similarly because no one has access to this hardware. So it took me a while to figure out how to overcome this and I asked for help in several places too but in the end I figured it out after many days if not weeks of trial and error. 
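
(A quick way to see the dependency relationship described above is
`modinfo -F depends zfs` together with `lsmod`; on a zfs-dkms build
configured against the Intel QAT sources the depends list includes
qat_api, and once both modules are loaded lsmod shows zfs in qat_api's
"Used by" column. The exact output varies with the local build, so treat
the following as illustrative only:)

modinfo -F depends zfs    # includes qat_api on a --with-qat build
lsmod | grep '^qat_api'   # "Used by" column should list zfs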
I tried so many things in frustration and I probably enumerated them all
on some linux-modules mailing list, but in the end what I had to do was
falsify the ZFS module to the system with this configuration:

# cat /etc/modprobe.d/zfs-falsify.conf
install zfs /usr/bin/false
#

Now, whenever anyone wanted to load the zfs module it would just get
false in return! This of course also affected the systemd
zfs-load-module.service, which is the one I wanted to be loading the
module, so I had to update that service with
`systemctl edit --full zfs-load-module.service` with this modification
to ExecStart:

ExecStart=/sbin/modprobe -iv zfs

The "-i" option is the needed one, to ignore the install command and
load the module normally. Finally ZFS was back in charge of loading its
own module, but that was only part of the battle: the module was still
loading before the QAT module was loaded. In the end I had to add
several "Requires" and "After" to a few of the ZFS systemd services so
they waited for the QAT service to finish starting, which brings up the
QAT engines... although when I check my current setup, I don't have any
of those anymore in /etc/systemd/system, so maybe I messed something up
previously that affected the service start order, so that's good at
least.

I must have rebooted our computer 100 times over the past month. After
it had been up for ~750 days previously, it deserved it! But everything
is good now, so here's to another 750 days!

I put `echo --with-qat=` in /etc/dkms/zfs.conf and it picks it right up,
version 2.1.9-1~bpo11+1 up and running currently.

Also it seems the new zstd compression algorithm is nearly as efficient
as gzip and many times faster, not requiring co-processors for decent
performance. I'll be moving our home directories over soon from a
machine with ZFS+QAT to ZFS+Zstd and will see how it goes. My
preliminary testing indicates there won't be much difference in
performance or storage savings. So Intel QAT may become even less
accessible in the future.

Anyway I think that's it for now, take care!

Best Regards,
Chandler

From admin at genome.arizona.edu Mon Feb 27 20:22:32 2023
From: admin at genome.arizona.edu (Chandler)
Date: Mon, 27 Feb 2023 13:22:32 -0700
Subject: [Pkg-zfsonlinux-devel] [EXT]Re: zfs-dkms and Intel QAT
In-Reply-To: 
References: 
Message-ID: <5404912b-a7f8-41e5-87a9-e07535905e39@genome.arizona.edu>

No, no, I spoke too soon again, of course. I think I keep forgetting to
try rebooting the machine and see if ZFS loads up correctly, but it
doesn't. I will start by adding:

Requires=qat.service
After=qat.service

to zfs-load-module.service. Indeed this works and the module gets loaded
at the correct time, but the rest of the ZFS services get lost for some
reason. For example, even though
/lib/systemd/system/zfs-import-scan.service has
"Requires=zfs-load-module.service" (twice for some reason), it still
fails to start:

● zfs-import-scan.service - Import ZFS pools by device scanning
     Loaded: loaded (/lib/systemd/system/zfs-import-scan.service; enabled; vendor preset: disabled)
     Active: inactive (dead)
  Condition: start condition failed at Mon 2023-02-27 12:30:28 MST; 54s ago
             └─ ConditionPathIsDirectory=/sys/module/zfs was not met

Now if you check the timestamp and compare to the time
zfs-load-module.service started:
● zfs-load-module.service - Install ZFS kernel module
     Loaded: loaded (/etc/systemd/system/zfs-load-module.service; enabled; vendor preset: enabled)
     Active: active (exited) since Mon 2023-02-27 12:30:33 MST; 48s ago

It makes no sense to me why it is even trying to start 5 seconds before
the zfs modules are loaded when it Requires them. So maybe that
duplicate "Requires" should be "After"... I'll try that and reboot
again...

Ok, the importing now starts after the module is loaded, which starts
after the QAT service is started. However, at the beginning of boot, it
has caused this:

[ 17.004060] systemd[1]: local-fs.target: Found ordering cycle on zfs-mount.service/start
[ 17.012172] systemd[1]: local-fs.target: Found dependency on zfs-import.target/start
[ 17.019931] systemd[1]: local-fs.target: Found dependency on zfs-import-scan.service/start
[ 17.028191] systemd[1]: local-fs.target: Found dependency on zfs-load-module.service/start
[ 17.036449] systemd[1]: local-fs.target: Found dependency on qat.service/start
[ 17.043667] systemd[1]: local-fs.target: Found dependency on basic.target/start
[ 17.050975] systemd[1]: local-fs.target: Found dependency on sockets.target/start
[ 17.058460] systemd[1]: local-fs.target: Found dependency on dbus.socket/start
[ 17.065683] systemd[1]: local-fs.target: Found dependency on sysinit.target/start
[ 17.073164] systemd[1]: local-fs.target: Found dependency on systemd-tmpfiles-setup.service/start
[ 17.082029] systemd[1]: local-fs.target: Found dependency on local-fs.target/start
[ 17.089597] systemd[1]: local-fs.target: Job zfs-mount.service/start deleted to break ordering cycle starting with local-fs.target/start
[ SKIP ] Ordering cycle found, skipping Mount ZFS filesystems

So now I will edit zfs-mount.service, removing "Before=local-fs.target",
and see what that does... yes, that helped, we're almost there, the ZFS
is finally auto-mounted with QAT support, but ZED is still failing to
start:

● zfs-zed.service - ZFS Event Daemon (zed)
     Loaded: loaded (/lib/systemd/system/zfs-zed.service; enabled; vendor preset: enabled)
     Active: inactive (dead)
  Condition: start condition failed at Mon 2023-02-27 13:05:45 MST; 1min 4s ago
             └─ ConditionPathIsDirectory=/sys/module/zfs was not met

I'll add "Requires=zfs-load-module.service" and
"After=zfs-load-module.service" to this service, and I'll put it above
"ConditionPathIsDirectory=/sys/module/zfs" just in case since that makes
sense. I don't know the intricacies of systemd.service files though, no
time either... and there we go! All ZFS targets and services green
again.

Best,
Chandler
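
(A note on the ordering fixes above, assuming the qat.service unit name
and behaviour are as described in these messages: Requires= only pulls
the other unit into the same transaction; without a matching After= the
two units are started in parallel, which is why the packaged
Requires=zfs-load-module.service alone did not delay zfs-import-scan or
zed. Since systemd only evaluates Condition* checks at the moment a unit
is about to start, ordering zfs-load-module.service after qat.service
and zed after zfs-load-module.service should be enough for
ConditionPathIsDirectory=/sys/module/zfs to be satisfied. Rather than
copying whole units with `systemctl edit --full` (the copy in
/etc/systemd/system then stops picking up package updates), the extra
lines can live in small drop-in files; `systemctl edit` without --full
creates exactly this layout. The file names below are illustrative
only:)

# /etc/systemd/system/zfs-load-module.service.d/qat-order.conf
[Unit]
Requires=qat.service
After=qat.service

# /etc/systemd/system/zfs-zed.service.d/after-module.conf
[Unit]
Requires=zfs-load-module.service
After=zfs-load-module.service

(After creating or changing drop-ins by hand, run `systemctl
daemon-reload` so the new ordering takes effect on the next boot.)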