• HankMardukas@lemmy.world · 1 year ago

    Always remember that it will only get better, never worse.

    They said “computers will never do x” and now x is assumed.

    • Poob@lemmy.ca · 1 year ago

      There’s a difference between “this is AI that could be better!” and “this could one day turn into AI.”

      Everyone is calling their algorithms AI because it’s a buzzword that trends well.

      • Fedora@lemmy.haigner.me · 1 year ago

        Shit as dumb as decision trees is considered AI. As long as there’s an if-statement somewhere in the app, they can slap the label AI on it, and it’s technically correct.

        • Batman@lemmy.ca · 1 year ago

          That’s not technically correct unless the thresholds in those if-statements are updated based on what’s learned from the data.
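
          A minimal sketch of that distinction, assuming scikit-learn is available (the toy numbers below are made up purely for illustration): a hard-coded if-statement keeps whatever threshold a human typed in, while even a depth-1 decision tree picks its split threshold from the data it’s fit on.

          ```python
          # Hard-coded rule vs. a threshold learned from data (illustrative toy example).
          from sklearn.tree import DecisionTreeClassifier

          def hardcoded_rule(x):
              # A human chose 0.5; nothing about the data can change it.
              return 1 if x > 0.5 else 0

          # Tiny made-up dataset: one feature per sample, binary labels.
          X = [[0.1], [0.2], [0.35], [0.6], [0.7], [0.9]]
          y = [0, 0, 0, 1, 1, 1]

          # A depth-1 tree is just one if-statement, but its threshold is fit to the data.
          tree = DecisionTreeClassifier(max_depth=1).fit(X, y)
          print(tree.tree_.threshold[0])   # ~0.475, chosen from the data, not by a person
          print(tree.predict([[0.4]]))     # uses the learned split, not the hand-picked 0.5
          ```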

    • MajorHavoc@lemmy.world · 1 year ago

      It usually also gets worse while it gets better.

      But I take your point. This stuff will continue to advance.

      But the important argument today isn’t over what it could become; it’s about clarifying things for people who are confused right now.

      While the current LLMs are an important and exciting step, they’re also largely just a math trick, and they are not a sign that thinking machines are almost here.

      Some people are being fooled into thinking general artificial intelligence has already arrived.

      If we give these unthinking LLMs human rights today, we expand corporate control over us all.

      These LLMs can’t yet take a useful ethical stand, so we shouldn’t rely on them that way if we don’t want things to go really badly.