Are we not ourselves something similar, i.e. pattern-matching systems? Apart from some genetically coded instincts, everything we do is based on what we have seen others doing.
I have certainly seen ChatGPT produce factually correct output, not just something that merely looks correct. It seems to me that its main weakness is that it doesn't know when it might be wrong. A human has a sense of when she is probably misremembering an address, but ChatGPT is always dead sure, even when the address is completely wrong. I think that's because ChatGPT doesn't have enough experience with its own memory, whereas we know we have been wrong many times.