
Merged Artificial Intelligence

It seems to me that human dreams may be a closer analog to LLM hallucinations. To me, dreams have a very Markov-chain feel to them: my brain is constantly choosing what happens next based on what just happened. No planning, just sequential extrapolation based on what's happened in the dream so far, whatever is going on in my life at the moment, my mood, etc. As I understand it, that sort of sequential extrapolation is how LLMs choose the next word.
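
To make that concrete, here's a toy sketch of the "pick the next word from what came before" idea, written as a word-level Markov chain in Python. The tiny corpus and the generate function are made up for illustration; a real LLM conditions on a long context with a neural network rather than a one-word lookup table, but the sequential next-word loop is the same basic shape.

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: each next word is chosen based only on
# the previous word, the "sequential extrapolation" described above.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Build a table: word -> list of words observed to follow it.
transitions = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev].append(nxt)

def generate(start, length=8):
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = transitions.get(word)
        if not followers:                # no observed continuation
            break
        word = random.choice(followers)  # sample the next word
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the dog sat on the mat and the"
```

Run it a few times and you get fluent-looking fragments that were never in the corpus, which is a crude version of why this style of generation can "hallucinate".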
I said above that LLMs mimic human behaviour, as I don't believe they "think"; they are not duplicating the "how" of how humans think. But I also believe they should make us rethink a lot of our assumptions about what it means to "think".
 
"dysfunction of the sensory apparatus" they aren't, they are what your quoted source says they are. In a visual hallucination my sensory apparatus i.e. eyes aren't malfunctioning.
There's more to the sensory apparatus than just the eyes. The optic nerves and visual cortex are part of the sensory apparatus.

That aside, I would have much preferred it if they had used the word "malfunctioning" for what LLM AIs are doing; using a word like "hallucination" feeds into the idea that these LLMs are doing more than mimicking human behaviour.
But LLMs aren't malfunctioning when they hallucinate. They are functioning as designed.
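
A small illustration of that point, with invented numbers: the sampling step at the end of an LLM always turns the model's scores into a probability distribution and emits some token. There's no built-in "I don't know" path, so a fluent-but-wrong continuation comes out of exactly the same mechanism as a correct one. The prompt and logit values below are purely hypothetical.

```python
import math
import random

# Hypothetical scores (logits) a model might assign to candidate next
# words after the prompt "The capital of Australia is". The numbers
# are invented; note the wrong answer can easily score highest.
logits = {"Sydney": 2.1, "Canberra": 1.9, "Melbourne": 0.4}

# Softmax turns scores into probabilities that always sum to 1, so
# the sampler is guaranteed to emit *some* word, right or wrong.
total = sum(math.exp(v) for v in logits.values())
probs = {w: math.exp(v) / total for w, v in logits.items()}

word = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)  # roughly {'Sydney': 0.50, 'Canberra': 0.41, 'Melbourne': 0.09}
print(word)   # a fluent continuation either way: working as designed
```

Nothing in that loop "breaks" when the output happens to be false; the mechanism ran exactly as intended.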
 
