Freedom is the right to tell people what they do not want to hear.

  • George Orwell
  • 1 Post
  • 17 Comments
Joined 1 month ago
Cake day: July 17th, 2025

  • Depends on what job it’s replacing. LLMs are what’s called narrow intelligence: they’re built to generate natural-sounding language, so if that’s what the job requires, even an imperfect LLM might be fit for it. But if the job demands logic, reasoning, and grounding in facts, then it’s the wrong tool. An imperfect AGI that could also talk would be a different matter - but that’s not what an LLM is.

    My unpopular opinion is that LLMs are actually too good. We just wanted something that talks, but by training them on tons of correct information, we made them answer questions correctly as a by-product. That’s neat, but it happens “by accident” - not because they actually know anything.

    It’s kind of like a humanoid robot that looks too much like a person: we struggle to tell the difference, and we forget what it really is because of what it seems to be.


  • What do you mean they don’t give you a choice? You always have the choice not to use it. DDG gives me AI summaries and I never read them. WhatsApp has an LLM button I’ve never pressed. Twitter has Grok, never tried it. Android probably has Gemini somewhere, and I don’t even know how to access it. As for Proton’s LLM, I hadn’t even heard of it despite paying for their email for a decade. I just don’t see how something existing as a feature in a service I already use somehow mandates me to engage with it.

    If someone is so deeply anti-LLM that they want to avoid all this on principle, I don’t necessarily have an issue with that. But personally, I genuinely struggle to grasp the logic behind it. People seem to have a strong emotional response to LLMs - your reply makes that pretty clear - and that’s the part that really boggles my mind.


  • Way to move the goalposts.

    If you take that question seriously for a second - AlphaFold doesn’t spew chemicals or drain lakes. It’s a piece of software that runs on GPUs in a data center. The environmental cost is just the electricity it uses during training and prediction.

    Now compare that to the way protein structures were solved before: years of wet lab work with X‑ray crystallography or cryo‑EM, running giant instruments, burning through reagents, and literally consuming tons of chemicals and water in the process. AlphaFold collapses that into a few megawatt‑hours of compute and spits out a 3D structure in hours instead of years.

    So if the concern is environmental footprint, the AI way is dramatically cleaner than the old human‑only way.


  • > Artificial intelligence isn’t designed to maximize human fulfillment. It’s built to minimize human suffering.
    >
    > What it cannot do is answer the fundamental questions that have always defined human existence: Who am I? Why am I here? What should I do with my finite time on Earth?
    >
    > Expecting machines to resolve existential questions is like expecting a calculator to write poetry. We’re demanding the wrong function from the right tool.

    Pretty weird statements. There’s no such thing as just “AI” - they should be more specific. LLMs aren’t designed to maximize human fulfillment or to minimize suffering; they’re designed to generate natural-sounding language. If they’re talking about AGI, then that isn’t designed for any one thing - it’s designed for everything.

    Comparing AGI to a calculator makes no sense. A calculator is built for a single, narrow task. AGI, by definition, can adapt to any task. If a question has an answer, an AGI has a far better chance of figuring it out than a human - and I’d argue that’s true even if the AGI itself isn’t conscious.