[Nut-upsuser] FOSDEM 2026

Jim Klimov jimklimov+nut at gmail.com
Thu Feb 5 13:43:02 GMT 2026


Cheers,

  I've thought about it for a while, and to me it does not carry a
clear-cut "not acceptable" stamp, in fact.

  I tend to agree with Daniel's take on it (and some others I've seen over
time): when humans posted PRs in the pre-AI years, purely manually, they
could also make sloppy mistakes, and with code copy-pasted and adapted
from Stack Overflow or whatever repository or earlier in-memory experience
they had, its legally meaningful provenance was always uncertain. Besides
having a larger memory than most people, LLMs crawling and "creatively"
digesting whatever code they can see do not really differ from what human
students and senior developers do.

  After all, nobody is born with C coding patterns in their head, so nobody
can say that their work is purely their own: everyone stands on the
shoulders of earlier giants, and yet claims that what they made is theirs
to share further and to slap a permissive license on for us to use and
merge.

  For the open-source code the LLMs were trained on, open-source projects
may actually be legal beneficiaries: whether a contribution was derived
from GPL, Apache, MIT, or Creative Commons (Stack Overflow posts) code, we
may absorb it into GPLv2+ NUT as a derived open-source work. Presumably
the models were not trained on proprietary code that some corporation did
not permit to go out, so we should not be at legal risk here. Even then,
it is the same legal situation as when corporate developers contribute: in
my recent memory there were, for example, authorised new driver
contributions from Riello; and when I was at Eaton, we actively sought
permission to officially share under GPLv2+ what was already essentially
open-sourced but proprietarily licensed scripted UPS companion software and
packaging based on NUT (so in fact less headache to maintain in-house,
more outreach for them, and the work is not lost to eternity). So if some
corporation made their code visible... oh well. They probably meant to.

  It may, I think, cut worse the other way around: when someone wants to
keep a proprietary project private, and it suddenly must either become
open as a derived work of GPL-licensed code, or they must work on ripping
that code out.

  The two currently known AI-augmented NUT PRs also did not propose any
changes that a reasonably proficient human developer could not make, nor
anything apparently lifted from other projects: one was a clean-up of void
pointer casts to satisfy clang-21 warnings (and caught one wrongly sized
malloc along the way), and the other is that SNMP subdriver I mentioned
before, which is mostly a lot of text mapping NUT datapoint names to MIB
OIDs plus some needed scaling numbers. These did need review (same as
purely human contributions), but I see no problem merging them just
because "AI" was involved at some point.

  With my forays into coding assistants encouraged at the dayjob, it was
questionable whether they improved my productivity (some colleagues saw
much better gains), but they did quickly generate a starting point that I
could chisel, unrecognisably, into what I actually wanted a few days
later. In fact, the programs I have written from scratch over the past
decades might be counted on my fingers; I don't even know the boilerplate
needed in most languages and ecosystems I deal with. But there were
hundreds of scripts and programs and recipes that I picked up from someone
else at a few kilobytes in size and grew into huge monsters. Someone or
something other than me making that first step is good: getting my own
feet wet is hard, and causes long delays before what often turns out to be
a simple job.

  Coincidentally, GitHub looks likely to try to improve the situation
somehow:
https://www.theregister.com/2026/02/03/github_kill_switch_pull_requests_ai/ -
so we will stay tuned and stock up on popcorn...

Respectfully,
Jim Klimov


On Thu, Feb 5, 2026 at 2:05 PM Greg Troxel via Nut-upsuser <
nut-upsuser at alioth-lists.debian.net> wrote:

> Thanks for the FOSDEM summary/comments.
>
>
> My take on AI is that LLM output is a derived work of the training
> data, and there is no licensing story, so people submitting it are
> sending the project code without the ability to make the normal
> inbound=outbound license grant.
>
> Consider LICENSE-DCO:
>
>   Developer Certificate of Origin
>   Version 1.1
>
>   Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
>
>   Everyone is permitted to copy and distribute verbatim copies of this
>   license document, but changing it is not allowed.
>
>
>   Developer's Certificate of Origin 1.1
>
>   By making a contribution to this project, I certify that:
>
>   (a) The contribution was created in whole or in part by me and I
>       have the right to submit it under the open source license
>       indicated in the file; or
>
>   (b) The contribution is based upon previous work that, to the best
>       of my knowledge, is covered under an appropriate open source
>       license and I have the right under that license to submit that
>       work with modifications, whether created in whole or in part
>       by me, under the same open source license (unless I am
>       permitted to submit under a different license), as indicated
>       in the file; or
>
>   (c) The contribution was provided directly to me by some other
>       person who certified (a), (b) or (c) and I have not modified
>       it.
>
>   (d) I understand and agree that this project and the contribution
>       are public and that a record of the contribution (including all
>       personal information I submit with it, including my sign-off) is
>       maintained indefinitely and may be redistributed consistent with
>       this project or the open source license(s) involved.
>
>
> For LLM output:
>   point a is not true
>   point b is not true
>   point c is either not applicable or not true
>
> Merging LLM output means that the codebase is contaminated and that
> there is no longer clear permission to copy under the GPL.
>
>
>
> The other issue, totally separate, is that I believe it is outright
> unethical to ask humans to review or even read LLM output.
>
> So yes, we live in a world where improper behavior is common, but that
> doesn't mean we have to say it's ok.
>
> Thus:
>
>   No LLM output may be submitted in a PR, inserted into a ticket, sent
>   to a mailinglist, or sent privately to any maintainer.
>
>
> _______________________________________________
> Nut-upsuser mailing list
> Nut-upsuser at alioth-lists.debian.net
> https://alioth-lists.debian.net/cgi-bin/mailman/listinfo/nut-upsuser
>

