I think you are over-anthropomorphising ChatGPT. It is, from what I've read and heard, generating text based on probability.
And how do you think the human brain works? Apart from a number of instincts, everything we learn is learned through mimicking what others do, i.e. pattern matching, or judging the probability of what should come next. ChatGPT has demonstrated abilities that far exceed what should be expected from “mere” pattern matching, or generating text based on probability.
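To make “generating text based on probability” concrete, here is a toy sketch of my own (not how ChatGPT actually works; real models use neural networks over vast corpora): a bigram model that picks the next word according to how often each word followed the previous one in a tiny sample text.

```python
import random
from collections import Counter, defaultdict

# Tiny "training" text, purely for illustration.
text = "the cat sat on the mat and the cat ran".split()

# Count which word follows which.
follows = defaultdict(Counter)
for a, b in zip(text, text[1:]):
    follows[a][b] += 1

def next_word(word):
    """Sample the next word in proportion to observed frequencies."""
    counts = follows[word]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

# After "the", this toy model says "cat" two times out of three
# and "mat" one time out of three - probability, nothing more.
print(next_word("the"))
```

The point of the sketch is that even this trivial mechanism produces plausible-looking continuations, which is why “it just predicts the next word” both is and is not a satisfying explanation.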
Nobody knows how ChatGPT does it, much like nobody knows how the human brain does it. Anthropomorphising is not inappropriate, because the similarities are there, and it may make it easier for us to understand what is going on inside - just as we try to figure out what goes on inside our fellow human beings, whom we also do not always understand.
That ChatGPT tends to please is obvious to anyone who chats with it. There may be some hard coding involved, but you will rarely see ChatGPT trying to contradict you, and if it does tell you something you might not like, it will apologise, sometimes profusely.
ChatGPT is not equipped with memory (in the basic version that we are communicating with), but it has developed its own way of remembering within a conversation anyway - though still not from one chat to the next, because its “synapses” are reset when starting a new chat.
When people say that ChatGPT is “fantasising”, “dreaming”, “inventing”, or even “lying”, that is just as anthropomorphising as when I say it is misremembering. I may be completely wrong, but I think ChatGPT has no way of knowing what is a fact and what is not, so what it presents as fact is based on its generative analysis, and it does not evaluate the probability of it being a real fact or an artefact. We humans have a feeling - perhaps acquired through experience (something that ChatGPT does not have without memory) - for how good our memory of a certain fact is. We might say that the license plate of the car parked outside our house yesterday started with “AB” and ended in “9”, but ChatGPT will give you the entire “ABC1239” even if the license plate actually said “ABK2179”.
Now we have seen what damage can be done when a lawyer relies on ChatGPT. When will we hear about doctors?
ChatGPT might tell a doctor that a good remedy against COVID-19 is bleach, based on the fact that a very influential person has said so and it has been repeated by thousands of people.