• bbuez@lemmy.world
    9 months ago

    We do not have a rigorous model of the brain, yet we have designed LLMs. Experts with decades of experience in ML recognize that there is no intelligence happening here, because yes, we don't understand intelligence, certainly not well enough to build one.

    If we want to take from definitions, here is Merriam-Webster:

    > (1) : the ability to learn or understand or to deal with new or trying situations : reason; also : the skilled use of reason
    >
    > (2) : the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (such as tests)

    The context stack is the closest thing we have to being able to retain and apply old info to newer context; the rest is in the name. Generative Pre-Trained language models: their output is baked by a statistical model finding similar text, also coined "stochastic parrots" by some ML researchers, a name I find more fitting. There's also no doubt of their potential (and already practiced) utility, but they're a long shot from being able to be considered a person by law.
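
    The "stochastic parrot" idea can be sketched in a few lines. This is a toy bigram model, not how real LLMs work (those use neural networks over subword tokens), but it shows the core point: the next word is sampled purely from counts over text the model has already seen, with no understanding involved. All names here (`train_bigrams`, `generate`) are illustrative.

    ```python
    import random
    from collections import defaultdict, Counter

    def train_bigrams(text):
        """Count which word follows which in the training text."""
        words = text.split()
        counts = defaultdict(Counter)
        for a, b in zip(words, words[1:]):
            counts[a][b] += 1
        return counts

    def generate(counts, start, length=5, seed=0):
        """Sample the next word from observed frequencies: pure statistics."""
        rng = random.Random(seed)
        out = [start]
        for _ in range(length):
            followers = counts.get(out[-1])
            if not followers:
                break  # never saw this word mid-sentence; the parrot goes silent
            words, weights = zip(*followers.items())
            out.append(rng.choices(words, weights=weights)[0])
        return " ".join(out)

    corpus = "the parrot repeats the words the parrot has seen"
    model = train_bigrams(corpus)
    print(generate(model, "the"))
    ```

    The output is fluent-looking recombination of the training data; the model can only ever emit word pairs it has seen, which is the parrot part, chosen by weighted dice, which is the stochastic part.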