

What do you mean they don’t give you a choice? You always have the choice not to use it. DDG gives me AI summaries and I never read them. WhatsApp has an LLM button I’ve never pressed. Twitter has Grok, never tried it. Android probably has Gemini somewhere, and I don’t even know how to access it. As for Proton’s LLM, I hadn’t even heard of it despite paying for their email for a decade. I just don’t see how the mere existence of a feature in a service I already use somehow obligates me to engage with it.
If someone is so deeply anti-LLM that they want to avoid all this on principle, I don’t necessarily have an issue with that. But personally, I genuinely struggle to grasp the logic behind it. People seem to have a strong emotional response to LLMs - your reply makes that pretty clear - and that’s the part that really boggles my mind.
Depends on what job it’s replacing. LLMs are an example of so-called narrow intelligence. They’re built to generate natural-sounding language, so if that’s what the job requires, then even an imperfect LLM might be fit for it. But if the job demands logic, reasoning, and grounding in facts, then it’s the wrong tool. If it were an imperfect AGI that could also talk, maybe - but it’s not.
My unpopular opinion is that LLMs are actually too good. We just wanted something that talks, but by training them on tons of correct information, they also end up answering questions correctly as a by-product. That’s neat, but it happens “by accident” - not because they actually know anything.
It’s kind of like a humanoid robot that looks too much like a person - we struggle to tell the difference. We forget what it really is because of what it seems to be.