Little bit of everything!

Avid Swiftie (come join us at [email protected])

Gaming (Mass Effect, Witcher, and too much Satisfactory)

Sci-fi

I live for 90s TV sitcoms

  • 40 Posts
  • 1.41K Comments
Joined 2 years ago
Cake day: June 2nd, 2023

  • Honestly a good article, and a perfect example of performative security. Security was ever-present, hassled people, wore tactical vests, and made a big show, but still did nothing of actual value.

    Twitch backpedaled and said they’ll add moar security, but they won’t do anything to actually protect creators, like granting them their own security guards.

    To me, it’s obvious. The guy didn’t have a knife and didn’t threaten her; he’s some lonely, creepy dude who doesn’t respect her and is very sick in the head. They didn’t need another row of metal detectors, they needed someone managing the line and access to her so he couldn’t just walk up to her. And if he still made it that close, an immediate boot from the premises, assault charges filed, and a permanent ban from all future events.

    Instead they just… let him walk around and be chill, and it sounds like he was only banned well after the huge backlash.


  • Leopold Aschenbrenner (born 2001 or 2002[1]) is a German artificial intelligence (AI) researcher and investor. He was part of OpenAI’s “Superalignment” team before he was fired in April 2024 over an alleged information leak, which Aschenbrenner disputes. He has published a popular essay called “Situational Awareness” about the emergence of artificial general intelligence and related security risks.[2] He is the founder and chief investment officer (CIO) of Situational Awareness LP, a hedge fund investing in companies involved in the development of AI technology.

    Wikipedia

    So, I’m calling bullshit. I’ve read the papers and kept up with the field. I run AI models myself, and I’ve built my own agents and agentic workflows. It keeps coming back to a few big things that, unless they’ve suddenly had another massive breakthrough, I don’t see happening.

    • LLMs have already been trained on the vast majority of the data out there, and they still hallucinate. The scaling research says it takes exponentially more data to improve them on a linear trajectory, so to get double the performance we’d need something like the current amount of data squared (see the first sketch after this list).
    • LLMs and agentic flows are very cool, and very useful for me. But they’re incredibly unreliable, and that’s just how the models work: they’re a black box. You can say “that didn’t work” and it’ll learn next time that it was a bad option, but the failure rate is never going to be zero. Businesses are learning (see Taco Bell and several others) that AI is not code. It is not true or false; it’s probably true or probably false. That doesn’t work when you’re putting in an order or deciding how much money to spend (the second sketch after this list illustrates the difference).
    • We’ve certainly plateaued with AI for the time being. There will be more things that come out, but until the next major leap, we’re pretty much here. GPT-5 proved that: it was mediocre, it was… the next version. They promised groundbreaking, but there just isn’t any more ground to break with current AI. Like I said, agents were kind of the next thing, and we’re already using them.
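
    To make the first bullet concrete, here’s a toy calculation. It assumes performance grows with the log of the training data; the log form and every number (the 10^13 tokens, the “score”) are my assumptions for illustration, not figures from any paper:

```python
import math

# Toy scaling law: score grows with log10(training tokens).
# The log form and the numbers are assumptions for illustration;
# published scaling laws are power laws, but the takeaway is the
# same: linear gains in performance demand exponential data growth.
def score(tokens: float) -> float:
    return math.log10(tokens)

current_tokens = 1e13                  # assumed: "most of the text that exists"
squared_tokens = current_tokens ** 2   # squaring the training data...

print(score(current_tokens))   # 13.0
print(score(squared_tokens))   # 26.0  ...merely doubles the score
```

    Under that assumption, doubling the score means going from 10^13 to 10^26 tokens, which is the “current amount of data squared” figure above.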
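
    The second bullet is about determinism. Here’s a minimal sketch of the difference, where `fake_llm` stands in for a real model at nonzero temperature; the function and its canned answers are invented for illustration:

```python
import random

# Deterministic business logic: same input, same output, every run.
def order_total(price: float, qty: int) -> float:
    return price * qty

# Stand-in for an LLM at nonzero temperature: same prompt,
# different completions. fake_llm and its answers are hypothetical.
def fake_llm(prompt: str) -> str:
    return random.choice(["$12.50", "$12.00", "twelve fifty", "$125.00"])

assert order_total(2.50, 5) == 12.50   # holds on every run

answers = {fake_llm("Total for 5 items at $2.50?") for _ in range(20)}
print(answers)   # a handful of different "probably right" answers
```

    That variance is tolerable for brainstorming; it’s disqualifying for taking an order or deciding how much money to spend.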