The problem folks have is that we don't know how humans do "reasoning" — we can't even define it — so how do we know these AIs aren't doing exactly what humans do inside our own "black boxes"?
From the failures of the current crop of AIs, I am more and more convinced that much of human "reasoning" is anything but what we have always assumed it to be, i.e. actual reasoning. The folk definition of human reasoning, "the action of thinking about something in a logical, sensible way," is a just-so story.
Most of the newer AIs are already better than most humans at the tasks we test them on.
p-zombies rule!


