There might be similarities. Neural networks are modeled around how we think neurons work. But it's a very loose analogy. Neural networks do not really emulate neurons, and we don't understand how neurons work well enough to emulate them.
We know that neural networks, as a mathematical construct, can learn by example; we know why, we know how, and we know roughly how much a network of a given size can learn. The real brain most likely works in a similar way, but we don't know to what extent the two are the same.
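As a toy illustration of what "learning by example" means in the mathematical sense, here is a minimal sketch (my own, nothing from a real LLM) of a tiny network learning XOR from four examples by gradient descent; all the sizes and constants are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Four examples of XOR; the network has to infer the rule from them alone.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, randomly initialized (sizes are arbitrary).
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):
    # Forward pass: compute the network's current guesses.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error via the chain rule.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient descent: nudge every weight to reduce the error a little.
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
```

Nothing here is specific to language; it just shows that "learning" in this context is a well-defined, fully inspectable mathematical process.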
The understanding of text inside an LLM and inside a brain might also be similar. Most likely it is. But again, we don't know. We don't understand exactly how knowledge is stored inside our mathematically well-understood networks, much less inside real brains.
But most likely both the mathematical networks and the real ones are mainly about relations between concepts. A concept at the edge of the network is a word .. and the deeper you go, the more abstract it gets. A few words sharing a similar meaning, that meaning being some class of similar objects .. its grammatical function .. its relations to other words .. up to deep philosophical concepts that branch out into huge volumes of text.
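To make "relations between concepts" a bit more concrete: in real models every word (or word piece) becomes a vector, and related meanings end up as nearby vectors. A minimal sketch with toy vectors I made up by hand (real embeddings are learned from text and have hundreds or thousands of dimensions):

```python
import numpy as np

# Hand-made 3-d "embeddings"; the values are invented for illustration.
emb = {
    "cat":    np.array([0.9, 0.8, 0.1]),
    "dog":    np.array([0.8, 0.9, 0.1]),
    "kitten": np.array([0.9, 0.75, 0.1]),
    "car":    np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    # Near 1.0 means "pointing the same way", i.e. related meaning.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(emb["cat"], emb["kitten"]))  # very high: closely related concepts
print(cosine(emb["cat"], emb["dog"]))     # high: same class of objects
print(cosine(emb["cat"], emb["car"]))     # low: unrelated concepts
```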
LLMs have nothing to do with perception of reality through human senses, with perception of time, with emotions, with inner monologue .. but imho there is something very similar to an LLM somewhere in our brain.
I also think it is ok to say an LLM "knows" something, or even "understands" something.
I even think it can be to some extent self-aware .. I mean it can have a concept of itself. It can talk about itself, even if it's just repeating what it was taught. Doesn't sound like much, but I suggest this example from the testing of Bing chat (the whole video is great, but see the time code):
https://youtu.be/jHwHPyWkShk?si=sXB-e4nRaBFryqFu&t=869
You can see there how Bing, with its (instructed) super-confidence, blurts out some nonsense .. but then realizes it is nonsense by reading its own response. And then it comes to the conclusion (based on its considerable power to analyze text) that it, the author of the response, is broken. And it seems to be sincerely unhappy about it.
Sure, we can agree that this is pretty far from what is usually understood as "self-awareness" .. but imho it's fascinating how something that just analyzes text can go mad.