Please do not give me shit for using Facebook. It’s how I keep in touch with relatives, most of whom live abroad, and with my brother, who has ASD and prefers to communicate with me that way.

I would rather not use it, but keeping in touch with my brother matters more to me.

That said, I would not let AI keep in touch with my brother for me.

  • kromem@lemmy.world · 3 months ago

    This may ultimately be a good thing for social media, given the propensity of SotA models to bias away from fringe misinformation (see Musk’s Grok, which infuriated him and his users by being ‘woke’, i.e. in line with published research).

    They also tend to bias away from outrage-porn interactions.

    I’ve been dreaming of a social network where you would have AI intermediate all interactions to bias the network towards positivity and less hostility.

    This, while clearly a half-assed effort to shove LLMs anywhere possible for Wall Street, may be a first step in a more positive direction.
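
    For what it’s worth, here is a minimal sketch of what that intermediation could look like: every outgoing post gets a hostility score, and posts over a threshold are held back instead of published. The classify_hostility() stub, the threshold, and the marker words are all made up for illustration; a real network would call an actual model there.

    ```python
    # Toy sketch of AI-intermediated posting: score each outgoing post for
    # hostility and only publish it if the score is under a threshold.
    # classify_hostility() is a hypothetical stand-in for a real model call
    # (e.g. an LLM prompted to rate hostility from 0 to 1).

    from dataclasses import dataclass

    HOSTILITY_THRESHOLD = 0.7  # assumed cutoff; a real network would tune this

    @dataclass
    class ModeratedPost:
        text: str
        hostility: float  # 0.0 = friendly, 1.0 = openly hostile
        published: bool

    def classify_hostility(text: str) -> float:
        """Hypothetical model call; a keyword check stands in for an LLM."""
        hostile_markers = ("idiot", "moron", "shut up")
        return 1.0 if any(m in text.lower() for m in hostile_markers) else 0.1

    def intermediate(text: str) -> ModeratedPost:
        score = classify_hostility(text)
        return ModeratedPost(text, score, published=score < HOSTILITY_THRESHOLD)

    if __name__ == "__main__":
        for post in ("Great point, thanks!", "Shut up, you idiot."):
            print(intermediate(post))
    ```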

    • Flying Squid@lemmy.world (OP) · 3 months ago

      I’ve been dreaming of a social network where you would have AI intermediate all interactions to bias the network towards positivity and less hostility.

      I don’t know that that’s a realistic idea. I don’t know if AI at our current level could discern positivity from hostility accurately enough. There’s too much emotion in language; sorting it out properly would, I think, require a deep understanding of emotion itself.

      • kromem@lemmy.world · 3 months ago

        It absolutely could.

        With a reference frame constructed from over 500 adults, we tested a variety of mainstream LLMs. Most achieved above-average EQ scores, with GPT-4 exceeding 89% of human participants with an EQ of 117.

        We first find that LLM agents generally exhibit trust behaviors, referred to as agent trust, under the framework of Trust Games, which are widely recognized in behavioral economics. Then, we discover that LLM agents can have high behavioral alignment with humans regarding trust behaviors, particularly for GPT-4, indicating the feasibility to simulate human trust behaviors with LLM agents.
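
        (For anyone unfamiliar with the framework that quote references: a single round of the Trust Game is simple to state. The sketch below assumes the conventional setup, where the investor’s transfer is tripled in transit; the specific fractions are made-up examples, and the paper substitutes an LLM agent for one of the human players.)

        ```python
        # Toy single round of the Trust Game from behavioral economics, the
        # framework the quoted paper uses. Conventional rules: the investor
        # sends some fraction of an endowment, the transfer is tripled, and
        # the trustee decides how much of what arrived to return.

        ENDOWMENT = 10
        MULTIPLIER = 3  # the standard tripling of whatever is sent

        def play_round(send_fraction: float, return_fraction: float):
            sent = ENDOWMENT * send_fraction       # investor's show of trust
            received = sent * MULTIPLIER           # grows in transit
            returned = received * return_fraction  # trustee's trustworthiness
            investor = ENDOWMENT - sent + returned
            trustee = received - returned
            return investor, trustee

        if __name__ == "__main__":
            # Investor sends half, trustee returns a third of what arrives:
            print(play_round(0.5, 1 / 3))  # -> (10.0, 10.0)
        ```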

        A lot of people here have no idea just how far the field has actually come beyond dicking around with the free ChatGPT and reading pop articles.