I said above that LLMs mimic human behaviour: I don't believe they "think", and they are not duplicating the "how" of how humans think. But I also believe they should make us rethink a lot of our assumptions about how we "think". It seems to me that human dreams may be a closer analog to LLM hallucinations. To me, dreams have a very Markov-chain feel, with my brain constantly choosing what happens next based on what just happened. No planning, just sequential extrapolation from whatever has happened in the dream so far, whatever is going on in my life at the moment, my mood, and so on. As I understand it,
that sort of sequential extrapolation is how LLMs choose the next word.
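To make the "sequential extrapolation" idea concrete, here's a minimal sketch in Python of a toy first-order Markov chain over words: each next word is picked based only on the word that just came before it. This is purely illustrative and is my own assumed example, not how LLMs actually work (they condition on the whole preceding context through learned weights, not a lookup table), but the word-by-word sampling loop has the same flavour.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that followed it in the text."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Extrapolate sequentially: pick each next word from the words that
    have followed the current word, with no plan beyond that one step."""
    word = start
    output = [word]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break  # dead end: nothing ever followed this word in the corpus
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the dream shifts and the dream repeats and the night goes on"
chain = build_chain(corpus)
print(generate(chain, "the"))
```

Run it a few times and you get different, vaguely plausible, sometimes looping sequences, which is roughly the dreamlike quality I'm gesturing at.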