It feels like we have a new privacy threat that’s emerged in the past few years, and this year especially. I kind of think of the privacy threats over the past few decades as happening in waves of:

  1. First we were concerned about governments spying on us. The way we fought back (and continue to fight back) was through encrypted and secure protocols.
  2. Then we were concerned about corporations (Big Tech) taking our data and selling it to advertisers to target us with ads, or otherwise manipulate us. This is still a hard battle being fought, but we’re fighting it mostly by avoiding Big Tech (“De-Googling”, switching from social media to communities, etc.).
  3. Now we’re in a new wave. Big Tech is now building massive GPTs (ChatGPT, Google Bard, etc.) and it’s all trained on our data. Our reddit posts and Stack Overflow posts and maybe even our Mastodon or Lemmy posts! Unlike with #2, avoiding Big Tech doesn’t help, since they can access our posts no matter where we post them.

So for that third one…what do we do? Anything that’s online is fair game to be used to train the new crop of GPTs. Is this a battle that you personally care a lot about, or are you okay with GPTs being trained on stuff you’ve provided? If you do care, do you think there’s any reasonable way we can fight back? Can we poison their training data somehow?

  • duncesplayed@lemmy.oneOP · 2 years ago

    I have a similar kind of idea. I think if it had been a free/open source/community project that made the headlines I would have been all like “this is so awesome”.

    I guess what I don’t like is the economic system that makes that impractical. In order to build one of those giant GPTs, you need tonnes of hardware (capital), so the community projects are always going to be playing catchup, and quite serious catchup in this arena. So the economic system requires that instead of our posts going to a “collective hive mind” that aids human knowledge, they go to some walled garden owned by OpenAI, which filters and controls it for us, and gives us little bits of access to our own data, as long as we use it only in approved ways (i.e., ways that benefit them).

    • IDe@lemmy.one · 2 years ago

      Most of the data used in training GPT4 has been gathered through open initiatives like Wikipedia and CommonCrawl. Both are freely accessible by anyone. As for building datasets and models, there are many non-profits like LAION and EleutherAI involved that release their models for free for others to iterate on.

      While actually running the larger models at a reasonable scale will always require expensive computational resources, you really only need to do the expensive base model training once. So the cost is not nearly as high as one might first think.

      Any head start OpenAI may have gotten is quickly diminishing, and it’s not like they actually have any super secret sauce behind the scenes. The situation is nowhere near as bleak as you make it sound.

      Fighting against the use of publicly accessible data is ultimately as self-sabotaging a form of Luddism as fighting against encryption.