angrysoba
Philosophile
From what I have heard from Francois Chollet (inventor(?) of the ARC-AGI test), many of the cognitive errors that get discovered in these LLMs are corrected by hard-coding fixes into the LLM, much the same way some of the guardrails are. If that's the case, and it only knows there are three R's in "strawberry" because someone has literally written that in, rather than the model deducing it from large amounts of text, then it really is a very dumb "intelligence".