• 0 Posts
  • 23 Comments
Joined 7 months ago
Cake day: March 3rd, 2024

  • The flaw in the question is that it assumes there is a clear dividing line between species. Evolutionary change is a continuous process. We only draw dividing lines where we see differences between long-dead specimens in the fossil record, or enough differences between living ones. The question has no answer, only a long explanation of why that isn’t how any of this works.


  • I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure, and it then corrected itself (maybe re-examining the word in a different way?). I then asked how many Rs are in “strawberries”, thinking it would either see a new word and give the same incorrect answer, or, since the earlier exchange was still in its context, say something about it also being 3 Rs. Nope. It said 4 Rs! I then said “really?”, and it corrected itself once again.

    LLMs are very useful as long as you know how to maximize their power and you don’t assume whatever they spit out is absolutely right. I’ve had great luck using mine to help with programming (basically as a Google that formats things far better than if I looked the stuff up myself), but I’ve also found some of the simplest errors mixed in with a lot of helpful material. It’s at an assistant level, and you need to remember that an assistant helps you; they don’t do the work for you.


  • Only if it changes the laws of physics. Which I suppose could be in the realm of possibility, since none of us could outthink an ASI. I imagine three outcomes (assuming we get to ASI) - it determines that no, silly humans, the math says you’re too far gone. Or, yes, it can develop X and Y beyond our comprehension to change the state of reality and make things better in some or all ways. And lastly, it says it found the problem and the solution, and the problem is that the Earth is contaminated with humans who consume and pollute too much. And it is deploying the solution now.

    I forgot the fourth, which I’ve seen in a few places (satirically, but it could be true). The ASI analyses what we’ve done, tries to figure out what could be done to help, and then ends itself out of frustration, anger, sadness, etc.


  • Kiosks of any sort can vary, from fast food to grocery to other types. Some work well and make self-service far faster and easier, while others routinely have issues. I’ve never used McD’s, but I have used Sheetz a lot, and it flows very well, both in how it displays the options and in suggestive selling that isn’t in-your-face or disruptive. As for groceries, Publix has always been perfect for me, while some others are not as much. Walmart’s is 50/50 on whether it will work okay or have some issue.

    I wonder if there’s a list of which manufacturer supplies which kiosks, and whether a correlation can be made.

    Outside of the ordering, McD’s has never been the best, but as they’ve dropped in quality to drive profits while still meeting the demand that persists regardless, so have others. My favorite used to be Burger King in the 90s, but I will go to McD’s instead of setting foot in a BK at this point; that’s how bad they are.

    And the stupid thing is, none of them are doing anything much different. The quality of the food and service doesn’t have to be this low. I can only assume the bottom line is greater if they sacrifice everything needed to keep standards up and maintain just enough to keep a minimum demand flowing.



  • If anything, I think the development of actual AGI will come first and give us insight into why some organic mass can do what it does. I’ve seen many AI experts say that one reason they got into the field was to try to figure out the human brain indirectly. I’ve also seen one person (I can’t recall the name) say we already have a rudimentary form of AGI existing now - corporations.


  • Rhaedas@fedia.io to Programmer Humor@programming.dev · “prompt engineering”

    LLMs are just very complex and intricate mirrors of ourselves, because they pull from our past ramblings to produce the best responses to a prompt. They only feel intelligent because we can’t see the inner workings, like the IF/THEN statements of ELIZA (something like the toy sketch below), and yet many people were still convinced it was really talking to them. Humans are wired to anthropomorphize, often to a fault.
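    To unpack the “IF/THEN statements of ELIZA” bit, here’s a minimal sketch of that kind of pattern-and-canned-response loop. The rules are made up for illustration; ELIZA’s real script was bigger and keyword-ranked, but the flavor is the same:

    ```python
    import re

    # Hypothetical rules for illustration only: IF a pattern matches,
    # THEN echo part of the input back in a canned template.
    RULES = [
        (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
        (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
    ]

    def reply(user_input: str) -> str:
        for pattern, template in RULES:
            match = pattern.search(user_input)
            if match:
                return template.format(*match.groups())
        return "Tell me more."  # fallback when no rule fires

    print(reply("I feel like it understands me"))
    # -> "Why do you feel like it understands me?"
    ```

    The machinery is trivially simple, and people still projected understanding onto it.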

    I say that while also believing we may yet develop actual AGI of some sort, which will probably use LLMs as a database to pull from. And what is concerning is that even though LLMs are not “thinking” themselves, the way we’ve dived in head first, ignoring the dangers of misuse and the many flaws they have, shows how we’ll ignore problems in AI development, such as the misalignment problem, which has basically been shelved by AI companies in favor of profits and being first.

    HAL from 2001/2010 was a great lesson - it’s not the AI…the humans were the monsters all along.