ChatGPT is, as zooterkin described it, designed to take a prompt and generate a response that is likely to be correct or appropriate based on the pool of training material it was given. It's programmed to frame its responses in polite language because the engineers who built it decided that would make for a better user experience. It is not "modeled on human thought processes", not even vaguely.
My mother programmed me to be polite so people would have a better user experience. And I am being serious.
If someone asks you "Is pizza better hot or cold?", you're going to immediately think about your own experience - your memory of eating pizza, whether you liked it, whether you have ever eaten cold leftover pizza and, if so, how that experience compared to eating it hot - and you're going to communicate an opinion based on that experience.
Or rather, you are going to answer it based on your learned experience, which includes memory.
ChatGPT doesn't "know" what pizza is; it just knows that it's a noun and has a list of words that were associated with it in its training data. It doesn't know what "hot" or "cold" are beyond their being adjectives that can be applied to the noun you gave, and because it can't have experiences it is wholly unaware of any actual difference between hot pizza and cold pizza beyond how the adjectives are spelled and the fact that you entered the binary operator "or", a contextual clue that, whatever hot and cold are, they are mutually exclusive. What ChatGPT is programmed to do is take your string of words, generate a cloud of other, consistently associated words from its training set, select the most common of those, use them to compose a response that respects the rules of English grammar, the contextual clues you provided, and whatever conversational flavor its programmers chose to mandate, and return that result to you. It is nothing like how humans think when asked such a question.
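To make that caricature concrete, here's a deliberately crude toy in Python: it counts which word follows which in a scrap of "training" text and then just keeps appending the most common follower. It is not how ChatGPT actually works (that's a large neural network over tokens, not literal word counts); it's only meant to illustrate the "cloud of associated words, pick the most common" idea, and the sample text and names are made up for the example.

```python
# Deliberately crude toy: count which word follows which in some "training"
# text, then keep appending the most common follower. Not how ChatGPT works,
# just an illustration of association-based word picking.
from collections import defaultdict, Counter

training_text = ("hot pizza is great . cold pizza is fine . "
                 "pizza is better hot . leftover pizza is cold .")

follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def generate(prompt_word, length=6):
    """Repeatedly append the most common word seen after the last one."""
    output = [prompt_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(candidates.most_common(1)[0][0])
    return " ".join(output)

print(generate("pizza"))  # prints something like: pizza is great . cold pizza is
```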
From earlier.
This is what I think the large language models are in fact showing: much of what we call intelligence, which we take as evidence of us being an "I", is beginning to seem like something that doesn't require that.
Plus the article says these generative AIs do seem to be starting to create for themselves "internal" representations of the world.
Sounds awfully like how we think we think.
Sounds to me very much like how we develop on our own hardware... sorry, brain: we start with a ton of inputs, start to react to them, that reaction becomes input (feedback), and so on.
With ChatGPT 3 it is not allowed to remember past experiences and use those as part of its internal feedback. Other researchers creating generative AIs don't impose this wiping of the slate, and as I note above, they (the generative AIs) seem to be creating internal representations of the external world. Add in that the researchers themselves say they don't know exactly how the outputs are generated.
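Here's a rough sketch, again in Python and with everything in it invented for illustration, of the difference being pointed at: a "wiped slate" chat answers every prompt from empty context, while a feedback-style chat appends each exchange to the context and feeds it back in as input. The respond function is just a placeholder, not anybody's real model.

```python
# Sketch of "wiped slate" vs. feedback. `respond` is a stand-in for any
# generative model; the only point is where its context comes from.

def respond(context: str, prompt: str) -> str:
    # Placeholder - a real system would run a language model here.
    return f"(reply to '{prompt}' given {len(context)} chars of prior context)"

def stateless_chat(prompts):
    """Every prompt is answered from a blank slate: no memory carried over."""
    return [respond("", p) for p in prompts]

def feedback_chat(prompts):
    """Each exchange is appended to the context and fed back in as input."""
    context, replies = "", []
    for p in prompts:
        reply = respond(context, p)
        context += f"User: {p}\nModel: {reply}\n"  # the output becomes input
        replies.append(reply)
    return replies

print(stateless_chat(["Is pizza better hot or cold?", "Why?"]))
print(feedback_chat(["Is pizza better hot or cold?", "Why?"]))
```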
With that in mind...
Many of us think our intelligence and our whole "me-ness" is an emergent property of the brain, so we may indeed be mimicking our own intelligence as we develop these systems without meaning to.
Obviously any "me-ness" arising from these emergent properties won't be the same as ours: it hasn't got the same inputs, the hardware is different, and so on.
Fascinating times.