The catarrhine yerba mate enjoyer who invented a perpetual motion machine, by dreaming at night and devouring its own dreams through the day.

Who do you think you are, Bellum?

  • 1 Post
  • 109 Comments
Joined 3 years ago
Cake day: April 9th, 2021






  • If you want, you could use Gmail filters to delete those emails automatically. Here’s how:

    1. click the gear button (settings), then “See all settings”, then “Filters and blocked addresses”.
    2. click “Create a new filter”. Add “top of Google search” to the “Has the words” field; leave the other fields blank.
    3. click “Create filter”, then check the “Delete it” box, then click “Create filter” again.
    4. repeat steps 2-3 for other shit that SEO spam is likely to mention.

    Important: never use as a filter anything that legitimate users might reasonably say. Only things that you’re fairly certain come from a spammer. (A programmatic version of the same filter is sketched below.)

    EDIT: I repeated two steps without noticing it. My bad.
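
    For reference, a rough sketch of the same kind of filter created programmatically, assuming the Gmail API and the google-api-python-client package; the function name and the example phrase are placeholders, and OAuth credentials with the gmail.settings.basic scope are assumed to already be in hand. Just one way it could look, not the official recipe:

      # Sketch: create a Gmail filter that sends messages matching a spammy phrase to the trash.
      # Assumes OAuth credentials with the gmail.settings.basic scope were already obtained.
      from googleapiclient.discovery import build

      def create_spam_filter(creds, phrase):
          service = build("gmail", "v1", credentials=creds)
          filter_body = {
              "criteria": {
                  # Same idea as the "Has the words" field in the web UI.
                  "query": f'"{phrase}"',
              },
              "action": {
                  # The API has no literal "Delete it" action;
                  # adding the TRASH label is the equivalent.
                  "addLabelIds": ["TRASH"],
              },
          }
          return (
              service.users()
              .settings()
              .filters()
              .create(userId="me", body=filter_body)
              .execute()
          )

      # Hypothetical usage, mirroring step 2 above:
      # create_spam_filter(creds, "top of Google search")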


  • Lvxferre@lemmy.ml to Technology@lemmy.ml, “Transparent Aluminium”
    97 up · 7 down · edited 6 months ago

    Misleading name, on the same level as calling water “non-explosive hydrogen”. That said, the material looks promising as a glass replacement for some applications (the text mentions a few of them, like armoured windows).

    (It is not a metal; it’s a ceramic, mostly oxygen with bits and bobs of aluminium and nitrogen. Interesting nonetheless, even if I’m picking on the name.)



  • My guesses:

    • Toner’s role is being underplayed by the video. She’s potentially calling Altman out for underrating the dangers of AI.
    • Altman, at least, is lying about something: how far OpenAI is actually getting towards AGI in the short term. Toner might’ve bought the bullshit fully, while Sutskever knows that it’s bullshit.
    • I’m not sure if the board is also lying or not.
    • The boiling point was likely OpenAI potentially receiving some cash grant from some scummy party, one that would be in a moral grey area considering the “non”-profit goals of the company.
    • Everybody will get a bit more free popcorn for a while. 🍿 This mess is far from over.

  • I don’t know (…or care, really) about the USA, so I’ll speak on more general grounds.

    There’s a lot of stuff in social media that makes it a great soapbox for social manipulation:

    • low cost, wide reaching: it’s easy to be heard
    • decontextualisation: it gives more room for assumers¹ to do their shit and make up an incorrect context out of nowhere.
    • virality: it’s easy to start a witch hunt. Cue the pitchfork emporium / Twitter MC of the day.
    • upvote/like-based systems: people don’t upvote your content (increasing its visibility) because you’re right, they do it because you say it confidently.
    • on the Internet, nobody knows that you’re a dog: concern trolling made easy.

    Now look at what @[email protected] said: “Dunno man, seems like it might be the fascists.” IMO that user is spot on: those five things make social media especially easy for fascists² to manipulate. And they’re mostly the ones creating this dichotomisation of society³, because that’s how they’re able to congregate the nutjobs into a political discourse. Suddenly the village idiot doesn’t simply say “they’re hiding aliens from us” (stupid, but morally OK); the discourse becomes “the Jews are hiding aliens from us” (stupid and antisemitic).

    1. By “assumers” I mean individuals who are quick to draw conclusions based on little to no reasoning, evidence, or thought. This plague has existed since the dawn of time; it’s just that decontextualisation gives them more room to assume shit out of nowhere.
    2. Fascists often babble about “virtue signalling”, without realising that they themselves are prone to signalling adherence to their own stupid beliefs. They don’t want to be on the receiving end of their own witch hunts.
    3. By “society” I mean at the very least Western Europe plus the Americas; probably more. It is not exclusive to the USA.

  • Lvxferre@lemmy.ml to Technology@lemmy.ml, *Permanently Deleted*
    8 up · 1 down · edited 8 months ago

    Given that it’s pointing straight to “no”, should I interpret “AI” as “additional irony”?

    …seriously, model-based generation is in its infancy. Currently it outputs mostly trash; you need to spend quite a bit of time to sort something useful out of it. If anyone here actually believes that it’s smart, I have a bridge to sell you.




  • Not even parrots - the birds are actually smart.

    I’m not a lawyer, but I can see a good way for lawyers to use ChatGPT: tell it to list laws that are potentially related to the case, then manually check those laws to see if they apply. This would work nicely in countries with Roman law, and perhaps in countries with tribal law too (the article is from the USA), as long as the model is fed with older cases for precedent.

    And… really, that’s the best use for those bots IMO: asking them to sort, filter and search information from messy and large systems (a rough sketch of this workflow follows below). Letting them write things for you, like those two lawyers did, is worse than laziness: it stinks of stupidity.

    It’s also immoral. The lawyer is a human being, thus someone who can be held responsible for their actions; ChatGPT is not and, as such, it should not be in charge of decisions that affect human lives.
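
    A minimal sketch of that “list candidates, then verify by hand” workflow, assuming the openai Python client; the model name, the prompt and the function are placeholders rather than a recommendation of any specific setup. The whole point is that the bot only proposes leads, and a human still checks every single one:

      # Sketch: ask the bot for *candidate* laws only; a human verifies each one.
      # Assumes the openai package and an OPENAI_API_KEY in the environment.
      from openai import OpenAI

      client = OpenAI()

      def candidate_statutes(case_summary: str) -> str:
          """Return laws/precedents the model thinks may be relevant.

          The output is a starting point for manual research, never a citation source.
          """
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # placeholder model name
              messages=[
                  {
                      "role": "system",
                      "content": (
                          "List statutes or precedents that might be relevant to the case. "
                          "For each, give only its name or number and one line on why it "
                          "might apply. Do not draft arguments or filings."
                      ),
                  },
                  {"role": "user", "content": case_summary},
              ],
          )
          return response.choices[0].message.content

      # Every item returned here still has to be looked up and read by the lawyer.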




  • Even if this wasn’t Elon Musk, the very idea of your boss having control over your finances sounds dumb as a brick.

    [Musk] "And for some reason PayPal, once it became eBay, not only did they not implement the rest of the list, but they actually rolled back a bunch of key features, which is crazy. So PayPal is actually a less complete product than what we came up with in July of 2000, so 23 years ago.”

    “And for some reason not only did they not implement a lot of my stupid ideas, but they also reverted some of my dumbest takes that still went through. And 23 years later I still haven’t learned.”