• chobeat@lemmy.mlOP
    1 year ago

    This paper explains a taxonomy of harms created by LLMs: https://dl.acm.org/doi/pdf/10.1145/3531146.3533088

    OpenAI released ChatGPT without systems to prevent or compensate for these harms, while being fully aware of the consequences, since this kind of research has been going on for several years. Since then they’ve put paper-thin countermeasures on some of these problems, but they are still pretty much a shit-show in terms of accountability. Most likely they will get sued into oblivion before regulators outlaw LLMs with dialogical interfaces. This won’t do much about the harm that open-source LLMs will create, but it will at least limit large-scale harm to the general population.

    • MxM111@kbin.social
      1 year ago

      I can only imagine what would happen if these authors were to write about the internet.

      • chobeat@lemmy.mlOP
        1 year ago

        There are entire fields of research on that. Or do you believe the internet, a technology developed for military purposes, an infrastructure that supports most of the economy, the medium through which billions of people experience most of reality and build connections, is free from ideology and propaganda?

        • MxM111@kbin.social
          1 year ago

          That’s my point: nearly everything in life has good and bad sides, and you have to use it accordingly. Would you believe me if I said that a banal kitchen knife can be used to murder people? Those kitchen knife manufacturers released a product that is a harmful tool! And they knew it!