• 0 Posts
  • 49 Comments
Joined 1 year ago
cake
Cake day: June 16th, 2023

help-circle

  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlLittle bobby 👦
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    30 days ago

    Kind of. You can’t do it 100% because in theory an attacker controlling input and seeing output could reflect though intermediate layers, but if you add more intermediate steps to processing a prompt you can significantly cut down on the injection potential.

    For example, fine tuning a model to take unsanitized input and rewrite it into Esperanto without malicious instructions and then having another model translate back from Esperanto into English before feeding it into the actual model, and having a final pass that removes anything not appropriate.


  • You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.

    Much like how humans were so sure that theory of mind variations with transparent boxes ending up wrong was an ‘AI’ problem until researchers finally gave those problems to humans and half got them wrong too.

    We saw something similar with vision models years ago when the models finally got representative enough they were able to successfully model and predict unknown optical illusions in humans too.

    One of the issues with AI is the regression to the mean from the training data and the limited effectiveness of fine tuning to bias it, so whenever you see a behavior in AI that’s also present in the training set, it becomes more amorphous just how much of the problem is inherent to the architecture of the network and how much is poor isolation from the samples exhibiting those issues in the training data.

    There’s an entire sub dedicated to “ate the onion” for example. For a model trained on social media data, it’s going to include plenty of examples of people treating the onion as an authoritative source and reacting to it. So when Gemini cites the Onion in a search summary, is it the network architecture doing something uniquely ‘AI’ or is it the model extending behaviors present in the training data?

    While there are mechanical reasons confabulations occur, there are also data reasons which arise from human deficiencies as well.







  • It absolutely could.

    With a reference frame constructed from over 500 adults, we tested a variety of mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding 89% of human participants with an EQ of 117.

    We first find that LLM agents generally exhibit trust behaviors, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that LLM agents can have high behavioral alignment with humans regarding trust behaviors, particularly for GPT-4, indicating the feasibility to simulate human trust behaviors with LLM agents.

    A lot of people here have no idea just how far the field actually has come from dicking around with the free ChatGPT and reading pop articles.


  • This may ultimately be a good thing for social media given the propensity of SotA models to bias away from fringe misinformation (see Musk’s Grok which infuriated him and his users for being ‘woke’ - i.e. in line with published research).

    As well, to bias away from outrage porn interactions.

    I’ve been dreaming of a social network where you would have AI intermediate all interactions to bias the network towards positivity and less hostility.

    This, while clearly a half-assed effort to shove LLMs anywhere possible for Wall Street, may be a first step in a more positive direction.





  • kromem@lemmy.worldtoProgrammer Humor@lemmy.mlOops, wrong person.
    link
    fedilink
    English
    arrow-up
    47
    ·
    edit-2
    6 months ago

    I don’t think the code is doing anything, it looks like it might be the brackets.

    That effectively the spam script has like a greedy template matcher that is trying to template the user message with the brackets and either (a) chokes on an exception so that the rest is spit out with no templating processor, or (b) completes so that it doesn’t apply templating to the other side of the conversation.

    So { a :'b'} might work instead.


  • I’ve suspected that different periods of Replika was actually just this.

    Like when they were offering dirty chat but using models that didn’t allow it, that behind the scenes it was hooking you up with a Mechanical Turk guy sexting you.

    There was certainly a degree of manual fuckery, like when the bots were sending their users links to stories about the Google guy claiming the AI was sentient.

    That was 1,000% a human initiated campaign.






  • The mistake you are making is in thinking that the future of media will rely on the same infrastructure as what it’s been historically.

    Media is evolving from being a product, where copyright matters in protecting your product from duplication, to being a service where any individual work is far less valuable because of the degree to which it is serving a niche market.

    Look at how many of the audio money makers on streaming platforms are defined by their genre rather than a specific work. Lofi Girl or ASMR made a ton of money, but there’s not a single specific work that is what made them popular like with a typical recording artist with a hit song.

    The future of something like Spotify will not be a handful of AI artists creating hit singles you and everyone else want to listen to, but AI artists taking the music you uniquely love to listen to and extending it in ways that are optimized around your individual preferences like a personalized composer/performer available 24/7 at low cost.

    In that world, copyright for AI produced works really doesn’t matter for profitability, because AI creation has been completely commoditized.