• lloram239@feddit.de
    1 year ago

    I asked for one that excluded simple calculators but included human beings.

    “Intelligence, at its core, involves the ability to model the world in order to predict and respond effectively to future events.”

      • lloram239@feddit.de
        1 year ago

        The whole argument of the article is just stupid. So ChatGPT ain’t intelligent because it can’t see pictures, has no hands and doesn’t have a body? By that logic blind, paralyzed or amputated humans aren’t intelligent either? What the article fails to realize is that those are all just sensory inputs. The more sensory inputs the AI gets, the more cross-correlations between them it can figure out. Of course ChatGPT won’t be able to do anything clever with sensory inputs it doesn’t have, just like a human trying to listen to radio waves with their ears. But human sensory inputs aren’t special, they are just what evolution figured out was “good enough” for survival. The important part is that the AI can figure out the patterns in the data it does get, and so far AI systems are doing very well at that.

        • @lloram239

          > But human sensory inputs aren’t special

          It’s not about sensory inputs, it’s about having a model of the world and the objects in it, and the ability to make predictions.

          > The important part is that the AI can figure out the pattern in the data it does get and so far AI systems are doing very well.

          GPT cannot “figure” anything out. That’s the point. It only probabilistically generates text. That’s what it does; there is no model of the world behind it, no predictions, no “figuring out”.

          • lloram239@feddit.de
            1 year ago

            > It’s not about sensory inputs, it’s about having a model of the world and the objects in it, and the ability to make predictions.

            And how do you think that model gets built? From processing sensory inputs. And yes, language models do build internal models of the world from exactly that.

            > GPT cannot “figure” anything out.

            That nonsense of a claim doesn’t get any more true by repetition. Seriously, it’s profoundly idiotic given everything ChatGPT can do.

            > It only probabilistically generates text.

            So what? In what way does that limit its ability to reason about the world? Predictions about the world are probabilistic by nature, since the future hasn’t happened yet.
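
            (For concreteness, here is a minimal sketch of the mechanism both commenters are describing by “probabilistically generates text”: an autoregressive model scores every possible next token and samples from the resulting distribution. The tiny vocabulary, the fixed logit values, and the next_token_logits stand-in below are invented for illustration only; a real model like GPT computes those scores with a large neural network conditioned on the context.)

            ```python
            import math
            import random

            # Toy vocabulary, invented for illustration.
            vocab = ["the", "cat", "sat", "on", "mat", "."]

            def next_token_logits(context):
                # Stand-in for a real model: a network would compute these
                # scores from the context. Here we just return fixed toy values.
                return [0.1, 1.2, 0.7, 0.3, 0.9, 0.2]

            def sample_next(context, temperature=1.0):
                logits = next_token_logits(context)
                # Softmax turns raw scores into a probability distribution
                # over the vocabulary (max-subtraction for numerical stability).
                scaled = [l / temperature for l in logits]
                m = max(scaled)
                exps = [math.exp(l - m) for l in scaled]
                total = sum(exps)
                probs = [e / total for e in exps]
                # Sampling from that distribution is the "probabilistic" part.
                return random.choices(vocab, weights=probs, k=1)[0]

            # Generate a few tokens autoregressively: each sample is appended
            # to the context before the next one is drawn.
            context = ["the"]
            for _ in range(5):
                context.append(sample_next(context))
            print(" ".join(context))
            ```

            Whether sampling from a learned distribution can count as “figuring things out” is exactly the disagreement in this thread; the sketch only shows the mechanism, not the answer.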