Some thoughts/predictions about how open source developers will be forced to choose their path with GenAI.
Full disclaimer: my own post, sharing for discussion and to find out if anyone has any brilliant ideas about what else could be done. It looks like self-posts are okay here, but let me know if I’m wrong about that.
The answer is 2.
Cling to known humans who write their own code.
Snake oil salesmen always encourage the public to bet against the experts, with predictable results.
Someday, ethically sourced AI may be used responsibly by trustworthy coders.
But the key is choosing to collaborate with trustworthy coders.
Yep. It does increasingly feel like developers like me, who find it deeply disturbing and problematic for our profession and society, are going to become rarer and rarer. Fewer and fewer people are going to understand how anything actually works.
I think nobody understands exactly how anything works, but enough of us understand our own little corner of tech to make new things and keep the older things going. I’ve been coding for decades, and proudly state I understand about 1% of what I do. That’s higher than most.
AI will make these little gardens of knowledge smaller for most, and yet again we, as the human species, will forever rely on another layer of tech.
with a long tail of grumpy holdouts who adhere to free software principles
Nothing in the core free software principles - namely, the four freedoms - actually concerns the development process and tools used - or copyright. It’s all about what you can do with the software.
The GPL is more of a “hack” that “perverts” copyright to enforce free software principles - because that was the tool available, not because the people who wrote it really liked intellectual property.
This is a good point. I assumed here that FS advocates will basically be opposed to a technology that serves to incorporate their code into software that does not provide the fundamental freedoms to end users, more so than those who license their work permissively. But yes, you could imagine an FS advocate who is quite happy to use the tech themselves and churn out code with the GPL attached.
The fact is, currently, AI can’t write good code. I’m sure that at some point in the future it will - but we’re not there yet, and probably still have some years to go.
Imagine at some point in the future, where an AI can program any piece of software you want for you, and do it well. At that point, the value of code itself will be minimal. If you keep your code proprietary, I’ll just get the AI to re-implement the functionality anew and publish it.
Therefore, all code will be permissive open source. There would be no point in keeping anything proprietary, and also no point in applying copyleft: the copyleft “hack” would simply be unnecessary, so permissive open source would be just as good.
Until then, me not using AI doesn’t in any way prevent others from training AI on my code. So I just don’t see training on my code as a valid reason to avoid it. I don’t use AI currently - but that’s for entirely pragmatic reasons: I’m not yet happy with the code it generates.
I just write in a language few people really know well. It’s not like I expect AI to do a great job of basing anything off my code.