I was thinking about this after a discussion at work about large language models (LLMs) - the initial scrape of the internet before ChatGPT became publicly available was probably the last truly high-quality scrape of human-made content any model will get. The second ChatGPT went public, the data pool became tainted with people publishing its output. Future language models will have increasingly large percentages of their training data contaminated by AI-generated content, skewing the results away from how humans actually write. To get genuinely human content, they may need to turn to transcriptions of audio recordings or phone calls for training, and even that wouldn't be quite right, because people write differently than they speak.

I also sort of wonder if people will eventually be influenced in how they choose to write by seeing all this AI content. If teachers use AI-generated texts in school lessons, especially at lower levels, will that affect how kids end up writing and formatting their work? It's weird to think about the wider implications of how this AI stuff will ultimately impact society.

What are your predictions? Is there a future where AI can get a clean, human-made scrape? Or are we doomed to start writing like AIs?

  • wet_lettuce@beehaw.org · 1 year ago
    My take is that "L"LMs are already old news. I think targeted or limited data-set language models are going to be the next wave.

    I think this partly because very few players can do LLMs at the scale of Microsoft and Google, so smaller firms and people in their garages are going to set their sights on smaller, targeted datasets with an eye towards factual accuracy.

    And then maybe link them together, daisy-chain style. I hope there ends up being a Unix philosophy for models, where each one does one thing well but you can 'pipe' data from one to another - something like the sketch below.
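
    Just to make the piping idea concrete, here's a rough sketch. The stage functions are made-up stand-ins, not any real model or API; the only point is the shell-pipeline-style composition of small, single-purpose models.

    ```python
    from functools import reduce

    # Stand-ins for small, single-purpose models (entirely hypothetical stubs).
    def extract_claims(text: str) -> str:
        # Imagine a small model fine-tuned to pull factual claims out of prose.
        return f"claims({text})"

    def check_claims(claims: str) -> str:
        # Imagine a narrow model that checks claims against a curated dataset.
        return f"checked({claims})"

    def summarize(checked: str) -> str:
        # Imagine a small summarization model producing the final answer.
        return f"summary({checked})"

    def pipe(*stages):
        """Compose single-purpose models left to right, like `cmd1 | cmd2 | cmd3`."""
        return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

    answer = pipe(extract_claims, check_claims, summarize)("some scraped paragraph")
    print(answer)  # summary(checked(claims(some scraped paragraph)))
    ```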