<div dir="ltr"><div>Cheers,</div><div><br></div><div> I've thought about it for a while, and to me it does not carry a clear-cut "not acceptable" stamp, in fact.</div><div><br></div><div> I tend to agree with Daniel's take on it (and some others I've seen over time): when humans posted PRs (those purely manual ones made in the pre-AI years), they could also make sloppy mistakes, and when code was copy-pasted and adapted from Stack Overflow or whatever repository or earlier in-memory experience they had, its legally meaningful provenance was always uncertain. Besides having a larger memory than most people, LLMs crawling and "creatively" digesting whatever code they can see do not really differ from what human students and senior developers do.</div><div>
<div><br></div><div> After all, nobody is born with C coding patterns
in their head and can claim that their code is purely their own work alone; everyone
stands on the shoulders of earlier giants, and yet claims that what
they made is theirs to share further, and slaps a permissive license on
it for us to use and merge.</div><div><br></div> For the open-source code the LLMs were trained on, open-source projects may actually be legal beneficiaries: whether a contribution was derived from GPL, Apache, MIT, or Creative Commons (Stack Overflow posts) material, we may absorb it into GPLv2+ NUT as a derived open-source work. Presumably the models were not trained on proprietary code that some corporation did not permit to go out, so we should not be at legal risk here. Even for those, it is the same legal situation as when corporate developers contribute: in my recent memory there were, e.g., authorised new driver contributions from Riello; and when I was at Eaton, we actively sought permission to officially share under GPLv2+ some scripted UPS companion software and packaging, based on NUT, which was essentially open-sourced already but proprietarily licensed (so in fact less headache for them to maintain in-house, more outreach, and the work is not lost to eternity). So if some corporation made their code visible... oh well. They probably meant to.</div><div>
<div><br></div><div>
<div> It may (I think) cut worse the other way around: when someone
wants to keep their proprietary project private, and it suddenly must
become open as a derived work of GPL-licensed code (or they must work on ripping that code out).</div><div><br></div><div></div>
The two currently known AI-augmented NUT PRs also did not
propose any changes that a reasonably proficient human developer
could not have made, nor is there anything apparently lifted from other
projects: one was a clean-up of void pointer casts to satisfy clang-21
warnings (and caught one wrongly sized malloc along the way), and
another is that SNMP subdriver I mentioned before, which is mostly a lot of text mapping the NUT
datapoint names to MIB OIDs, plus some needed scaling numbers. This did need review
(same as with purely human contributions), but I see no problem merging
those just because "AI" was involved at some point.</div><br></div><div> With my forays into coding assistants encouraged at the dayjob, it is questionable whether they improved my productivity (some colleagues saw much better gains), but they did quickly generate a starting point that I could chisel, unrecognisably, into what I wanted to achieve a few days later. In fact, my fingers would probably suffice to count the programs I wrote from scratch over the past decades; I don't even know the boilerplate needed in most languages and ecosystems I deal with. But there were hundreds of scripts and programs and recipes that I picked up
from someone else at a few kilobytes in size and grew into huge monsters. Someone or something (else, other than me) making that first step is good. Getting my own feet wet is hard, and causes long delays before what often ends up being a simple job.</div><div><br></div><div> Coincidentally, GitHub will likely try to improve the situation somehow: <a href="https://www.theregister.com/2026/02/03/github_kill_switch_pull_requests_ai/">https://www.theregister.com/2026/02/03/github_kill_switch_pull_requests_ai/</a> - so we will stay tuned and stock up on popcorn...<br><br></div><div>Respectfully,</div><div>Jim Klimov</div><div><br></div></div><br><div class="gmail_quote gmail_quote_container"><div dir="ltr" class="gmail_attr">On Thu, Feb 5, 2026 at 2:05 PM Greg Troxel via Nut-upsuser <<a href="mailto:nut-upsuser@alioth-lists.debian.net">nut-upsuser@alioth-lists.debian.net</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Thanks for the FOSDEM summary/comments.<br>
<br>
<br>
My take on AI is that LLM output is a derived work of the training<br>
data, and there is no licensing story, so people submitting it are<br>
sending the project code without the ability to make the normal<br>
inbound=outbound license grant.<br>
<br>
Consider LICENSE-DCO:<br>
<br>
Developer Certificate of Origin<br>
Version 1.1<br>
<br>
Copyright (C) 2004, 2006 The Linux Foundation and its contributors.<br>
<br>
Everyone is permitted to copy and distribute verbatim copies of this<br>
license document, but changing it is not allowed.<br>
<br>
<br>
Developer's Certificate of Origin 1.1<br>
<br>
By making a contribution to this project, I certify that:<br>
<br>
(a) The contribution was created in whole or in part by me and I<br>
have the right to submit it under the open source license<br>
indicated in the file; or<br>
<br>
(b) The contribution is based upon previous work that, to the best<br>
of my knowledge, is covered under an appropriate open source<br>
license and I have the right under that license to submit that<br>
work with modifications, whether created in whole or in part<br>
by me, under the same open source license (unless I am<br>
permitted to submit under a different license), as indicated<br>
in the file; or<br>
<br>
(c) The contribution was provided directly to me by some other<br>
person who certified (a), (b) or (c) and I have not modified<br>
it.<br>
<br>
(d) I understand and agree that this project and the contribution<br>
are public and that a record of the contribution (including all<br>
personal information I submit with it, including my sign-off) is<br>
maintained indefinitely and may be redistributed consistent with<br>
this project or the open source license(s) involved.<br>
<br>
<br>
For LLM output:<br>
point a is not true<br>
point b is not true<br>
point c is either not applicable or not true<br>
<br>
Merging LLM output means that the codebase is contaminated and that<br>
there is no longer clear permission to copy under the GPL.<br>
<br>
<br>
<br>
The other issue, totally separate, is that I believe it is outright<br>
unethical to ask humans to review or even read LLM output.<br>
<br>
So yes, we live in a world where improper behavior is common, but that<br>
doesn't mean we have to say it's ok.<br>
<br>
Thus:<br>
<br>
No LLM output may be submitted in a PR, inserted into a ticket, sent<br>
to a mailinglist, or sent privately to any maintainer.<br>
<br>
<br>
_______________________________________________<br>
Nut-upsuser mailing list<br>
<a href="mailto:Nut-upsuser@alioth-lists.debian.net" target="_blank">Nut-upsuser@alioth-lists.debian.net</a><br>
<a href="https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/nut-upsuser" rel="noreferrer" target="_blank">https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/nut-upsuser</a><br>
</blockquote></div>