I spent more time refactoring AI drivel in my last job than I did writing my own code.
I’m glad LLMs work for the OP, but unfortunately programming isn’t uniform, and different scopes and contexts can cause LLMs to create more overhead than they’re worth. I suppose it’s the same sense in which throwing junior devs at a problem until it goes away creates overhead.
Sure, it can figure out problem X and make a PR for it, but did it do so in a clean manner? No.
Did it, while working on X, also realise how X ties into problems Y and Z, and that dependency A doesn’t have the extensibility to cover all of these problems in a clean and effective manner, before baking up a weird solution? Also no.
Do I now have to divide my time and attention across multiple areas of the code base, and comprehend, refactor, and commit code that would usually take me 15 minutes to write in the first place? Yes.
They’ve got their strong points, like the OP said, but I’m not insane or crazy for not wanting to use them. My tools work fine without AI.
My AI-enjoying friends don’t exist.
The stolen training data issue alone is enough to make the use of AI in business settings unethical. And until there’s an LLM that is trained on 100% authorized data, selling a product developed with AI is outright theft.
Of course there’s also the energy use issue. Yeah, congrats, you used as much energy as a plane ride to generate something you could have written with your own brain with a fraction of the energy.
From a technical or legal perspective, copyright infringement is not theft. The relationship a copyright holder has with a work is of a completely different character than actual ownership. See Dowling v. United States (1985).
Whether or not “AI” training constitutes copyright infringement is, as far as I know, still up in the air. And, while I believe most of us can agree that actual theft is unethical, the ethics of copyright infringement are as far as I know also very debatable.
Disclaimer - not an uncritical supporter of “AI.”
The energy use argument hasn’t been true for a while now. https://wccftech.com/m3-ultra-chip-handles-deepseek-r1-model-with-671-billion-parameters/
Meanwhile, corps clearly don’t care about IP here and will keep developing this tech regardless of how ethical it is. Seems to me that it’s better if there are open models available and developed by the community than there only being closed models developed by corps who decide how they work and who can use them.
[Linked article] M3 Ultra Runs DeepSeek R1 With 671 Billion Parameters Using 448GB Of Unified Memory, Delivering High Bandwidth Performance At Under 200W Power Consumption, With No Need For A Multi-GPU Setup
Running the AI is not where the power demand comes from; it’s training the AI. If you only trained it once, that wouldn’t be so bad, but obviously every AI vendor will be training all the time to ensure their model stays competitive. That’s when you get into a tragedy-of-the-commons situation where the collective power consumption goes out of control for tiny improvements in the AI model.
Meanwhile, corps clearly don’t care about IP here and will keep developing this tech regardless of how ethical it is.
“It will happen anyway” is not an excuse to not try to stop it. That’s like saying drug dealers will sell drugs regardless of how ethical it is so there’s no point in trying to criminalize drug distribution.
Seems to me that it’s better if there are open models available and developed by the community than there only being closed models developed by corps who decide how they work and who can use them.
Except there are no truly open AI models because they all use stolen training data. Even the “open source” models like Mistral and DeepSeek say nothing about where they get their data from. The only way for there to be an open source AI model is if there was a reputable pool of training data where all the original authors consented to their work being used to train AI.
Even if the model itself is open source and free to run, if there are no restrictions against using the generated data commercially, it’s still complicit in the theft of human-made works.
A lot of people will probably disagree with me, but I don’t think there’s anything inherently wrong with using AI-generated content as long as it’s not for commercial purposes. But if it is, you’re by definition making money off content that you didn’t create, which to me is what makes it unethical. You could have hired the hypothetical person whose work was used to train the AI, but instead you used their work to generate value for yourself while giving them nothing in return.
Training an AI model is mostly a one-time task. AI vendors aren’t training models from scratch all the time; what they’re doing is using approaches like LoRA to tune existing models.
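To make that concrete, here’s a minimal sketch of what LoRA-style tuning looks like, assuming the Hugging Face transformers and peft libraries (the base model name is just a placeholder): only a small set of adapter weights gets trained on top of the frozen base model.

```python
# Minimal sketch of a LoRA tuning setup; assumes the Hugging Face
# transformers and peft libraries, and uses a small placeholder base model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # placeholder; swap in whatever base model you tune
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# LoRA injects small trainable adapter matrices into selected layers,
# so only a tiny fraction of the weights are updated during tuning.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```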
“It will happen anyway” is not an excuse to not try to stop it. That’s like saying drug dealers will sell drugs regardless of how ethical it is so there’s no point in trying to criminalize drug distribution.
You can’t put toothpaste back in the tube. The only question going forward is how AI will be developed and who will control it. It’s funny that you’d bring up the drug analogy because you’re advocating a war on drugs here.
Except there are no truly open AI models because they all use stolen training data. Even the “open source” models like Mistral and DeepSeek say nothing about where they get their data from. The only way for there to be an open source AI model is if there was a reputable pool of training data where all the original authors consented to their work being used to train AI.
Personally, I have absolutely no problem with that if the model is itself open and publicly owned. I’m a communist, I don’t support copyrights and IP laws in principle. The ethical objection to AI training on copyrighted material holds superficial validity, but only within capitalism’s warped logic. Intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists. Disney’s ruthless copyright enforcement, for instance, sharply contrasts with its own history of mining public-domain stories.
Meanwhile, OpenAI scraping data at scale exposes the hypocrisy of a system that privileges corporate IP hoarding over collective cultural wealth. Large corporations can ignore copyright without being held to account while regular people cannot. In practice, copyright helps capitalists far more than it helps individual artists. Attacking AI for “theft” inadvertently legitimizes the very IP regimes that alienate artists from their work. Should a proletarian writer begrudge the use of their words to build a tool that, in better hands, could empower millions? The real question isn’t in AI training methods but in who controls its outputs.
You can’t put toothpaste back in the tube. The only question going forward is how AI will be developed and who will control it.
Fair enough, but even if the model is open source, you still have no control or knowledge of how it was developed or what biases it might have baked in. AI is by definition a black box, even to the people who made it; it can’t even be decompiled like a normal program.
It’s funny that you’d bring up the drug analogy because you’re advocating a war on drugs here.
I mean, China has the death penalty for drug distribution, which is supported by the majority of Chinese citizens. They do seem more tolerant of drug users compared to the US (I’ve never done drugs in either China or the US, so I wouldn’t know), so clearly the decision to have zero tolerance for distributors is a very intentional choice by the Communist party. As far as I know, no socialist country has ever been tolerant of even the distribution of cannabis, let alone hard drugs, and they have made it pretty clear that they never will.
Personally, I have absolutely no problem with that if the model is itself open and publicly owned. I’m a communist, I don’t support copyrights and IP laws in principle. The ethical objection to AI training on copyrighted material holds superficial validity, but only within capitalism’s warped logic. Intellectual property laws exist to concentrate ownership and profit in the hands of corporations, not to protect individual artists.
I never thought of it in terms of copyright infringement, but in terms of reaping the labour of proletarians while giving them nothing in return. I’m admittedly a far less experienced communist than you, but I see AI as the ultimate means of removing workers from their means of production: it scrapes all of humanity’s intellectual labour without consent to create a product that is inferior to humans in every way except how much you have to pay it, and it’s only getting the hype it’s getting because the bourgeoisie see it as a replacement for the very humans it exploited.
For the record, I give absolutely no shits about pirating movies or “stealing” content from any of the big companies, but I personally hold the hobby work of a single person in higher regard. It’s especially unfair to the smallest content creators because they are most likely making literally nothing from their work since the vast majority of personal projects are uploaded for free on the public internet. It’s therefore unjust (at least to me) to appropriate their free work into something whose literal purpose is to get companies out of paying people for content. Imagine working your whole life on open source projects only for no company to want to hire you because they’re using AI trained on your open source work to do what they would have paid you to do. Imagine writing novels your whole life and putting them online for free, only for no publisher to want to pay for your work because they have a million AI monkeys trained on your writing typing out random slop and essentially brute forcing a best seller. Open source models won’t prevent this from happening, in fact it will only make it easier.
AI sounds great in an already communist society, but in a capitalist one, it seems to me like it would be deadly to the working class, because capitalists have made it clear that they intend to use it to eliminate human workers.
Again, I don’t know nearly as much about communism as you so most of this is probably wrong, but I am expressing my opinions as is because I want you to examine them and call me out where I’m wrong.
Fair enough, but even if the model is open source, you still have no control or knowledge of how it was developed or what biases it might have baked in. AI is by definition a black box, even to the people who made it; it can’t even be decompiled like a normal program.
You can tune models for specific outputs actually. There are even projects that are exploring making models adapt and learn over time. https://github.com/babycommando/neuralgraffiti
The fact that it’s a black box is not really a showstopper in any meaningful way. We don’t know the minds of other people, yet we can clearly collaborate effectively to solve problems despite that.
I mean, China has the death penalty for drug distribution, which is supported by the majority of Chinese citizens.
Sure, there are tough laws against drugs in China as well as other countries, but that has not eliminated drug use entirely. Meanwhile, there is no indication that any state would ban the use of AI, and it would be self-defeating to do so because it would make it less competitive against the states that don’t. The reality is that there are huge financial incentives for developing this technology for both private companies and state-level actors. This tech is here to stay, and I don’t think it makes any sense to pretend otherwise. The question is how this tech will evolve going forward and how it will be governed.
I never thought of it in terms of copyright infringement, but in terms of reaping the labour of proletarians while giving them nothing in return.
I don’t see it that way at all. Open-source AI models, when decoupled from profit motives, have the potential to democratize creativity in unprecedented ways. They enable a nurse to visualize a protest poster, a factory worker to draft a union newsletter, or a tenant to simulate rent-strike scenarios. This is no different from fanfiction writers reimagining Star Wars or street artists riffing on Warhol. It’s just collective culture remixing itself, as it always has. The threat arises when corporations monopolize these tools to replace paid labor with automated profit engines. But the paradox here is that boycotting AI in grassroots spaces does nothing to hinder corporate adoption. It only surrenders a potent tool to the enemy. Why deny ourselves the capacity to create, organize, and imagine more freely, while Amazon and Meta invest billions to weaponize that same capacity against us?
And I have a concrete example I can give you here because AI tools like ComfyUI are already being used by artists, and they’re particularly useful for smaller studios. These tools can streamline the workflow, and allow for a faster transition from the initial sketch to a final product. They can also facilitate an iterative and dynamic creative process, encouraging experimentation and leading to unexpected results. Far from replacing artists, AI expands their creative potential, enabling smaller teams to tackle more ambitious projects.
https://www.youtube.com/watch?v=envMzAxCRbw
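To give a sense of how that fits into a scripted pipeline, here’s a rough sketch of queueing a workflow against a local ComfyUI instance. It assumes ComfyUI is running on its default port and that workflow_api.json is a workflow you exported from the UI in API format; the node id shown in the comment is purely a placeholder.

```python
# Rough sketch: queue an exported ComfyUI workflow against a local instance.
# Assumes ComfyUI is running on its default port (8188) and that
# workflow_api.json was exported from the UI in API format.
import json
import urllib.request

with open("workflow_api.json") as f:
    workflow = json.load(f)

# Example of tweaking a node input before queueing; the node id and field
# names depend entirely on the exported workflow, so treat these as placeholders.
# workflow["6"]["inputs"]["text"] = "initial sketch, ink and watercolor"

payload = json.dumps({"prompt": workflow}).encode("utf-8")
req = urllib.request.Request("http://127.0.0.1:8188/prompt", data=payload)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # returns an id for the queued job
```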
Imagine working your whole life on open source projects only for no company to want to hire you because they’re using AI trained on your open source work to do what they would have paid you to do.
Right, I would not like a company to build a proprietary model using my open source work. However, I’d have absolutely no problem with an open model being trained on my open source code. As long as the model is distributed under an open license then anybody can benefit from it, and use it in any way that makes sense to them. I see it exactly the same as open sourcing code.
I do think capitalists will use this technology to harm workers; that’s been the case with every advance in automation. However, I do think it’s going to be a far better scenario if this tech is open and can be used by workers on their own terms. The worst possible outcome is that we have corporations running models as subscription services, and people end up having to use them like serfs. I see open source models as workers owning the means of production.
Honestly, I’ve been doing some recreational thinking about this whole thing, and I find myself agreeing with you. You brought up good points I hadn’t considered, thanks!
O7
The real question isn’t in AI training methods but in who controls its outputs.
You’re not asking any of these questions. You’re just pushing pseudo-science about imaginary “AI” and how it’s great for the environment.
I’m literally asking this question, and I’m not pushing any pseudo-science about AI. This is just you making a straw man because you don’t actually have any coherent counterpoint to make. It’s incredible how any discussion about LLMs inevitably causes the trolls to crawl out of the woodwork.
It is a fact that a full DeepSeek model can now be run using 200 watts. What your IEA link is saying is that there will be a surge in energy use because this tech will be deployed at scale; that has fuck all to do with efficiency.
Yeah I mean all that is basically true – for code. The tools work, if you know how to work them.
Of course, is this going to put programmers out of work? Yes. Is it profitable for the same companies that are causing unemployment in other fields? Yes. So like, it’s not as though there isn’t blood on your hands.
TBH “AI” is going to create more jobs for devs. I tried using “AI” once for coding. It took me more time to debug the code than to google an alternative solution. It might be good for boilerplates, summarizing stack exchange, etc. But in reality you can’t code anything original or worthwhile with statistics.
If you use LLMs, you should use them primarily in ways where it’s easier to confirm the output is valid than it is to create the output yourself. For instance: “what API call in this poorly-documented system performs <some task>?”
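As a toy illustration of that principle, here’s a sketch where the model does the creative work and the calling code only has to check it; ask_llm is a hypothetical stand-in for whatever client you actually use.

```python
# Toy sketch of "easy to verify, hard to produce" LLM use: ask for a regex,
# then validate it against known-good and known-bad examples before using it.
# ask_llm() is a hypothetical stand-in for whatever LLM client you actually use.
import re

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your own LLM client here")

def get_verified_pattern() -> re.Pattern:
    suggestion = ask_llm(
        "Give me a single Python regex (pattern only, no explanation) "
        "matching ISO 8601 dates like 2024-03-17."
    )
    pattern = re.compile(suggestion.strip())

    should_match = ["2024-03-17", "1999-12-31"]
    should_not_match = ["17/03/2024", "2024-3-7", "hello"]

    # Cheap verification: the model did the creative work, we only check it.
    assert all(pattern.fullmatch(s) for s in should_match)
    assert not any(pattern.fullmatch(s) for s in should_not_match)
    return pattern
```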
There is no consistent definition of AI so you might as well drop the quotation marks, lest you be prescriptivist.