I don't know much about them other than that I am sure I wouldn't learn anything useful from their work.
David Chalmers is the "hard problem of consciousness" guy. He asserts that experience cannot be explained by any physical process. He bases this on his misconception that "p-zombies" are conceivable, and must therefore be logically possible.
A p-zombie is a creature that acts in all ways like a human being, but does not have experiences - so-called "qualia". Under materialism, and even more so under strict behaviourism, this is an incoherent notion. All of Chalmers's work amounts to:
If we assume that dualism is true, we can show that materialism is false.
Not much of an accomplishment, really.
John Searle is the bloke who came up with the (in)famous Chinese Room, which I see as a good test for Introductory Philosophy students. If at the end of the semester you can't take Searle's argument apart inside of ten minutes, you fail.
The Chinese Room is a room containing a man and a whole lot of books. Also a pen and a stack of paper.
Slips of paper with symbols on them come through a slot in the wall. Following step-by-step instructions in the books, the man writes another set of symbols on a piece of paper and pushes it back out through the slot.
Now Searle tells us what's going on. The symbols are written Chinese. The slips of paper coming in are questions. The slips of paper going out are answers. The books contain a set of rules allowing you to transform any arbitrary question in Chinese into an appropriate answer, without knowing any Chinese yourself.
The man in the room knows no Chinese. The books are just books. Yet to an outside observer, the room appears to understand Chinese.
Searle argues that since there is no part of the system that understands Chinese, the system cannot understand Chinese, and therefore artificial intelligence is impossible. Never mind that this is clearly the fallacy of composition; never mind that the exact same argument applies to human beings; never mind that this search for a seat of consciousness is straight out of dualism.
Never mind that the system does understand Chinese.
It's particularly ironic that Searle accuses others of dualism: "strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter". That's not dualism at all; that's behaviourism.
To round things out, we can take a look at Mary's Room, posited by Frank Jackson. Jackson has us imagine a scientist named Mary, who knows everything there is to know about the physical properties, processes, and perception of colour, but who has lived her entire life locked in a room that is entirely black and white, and so has never experienced colour herself.
One day Mary awakens to see a red rose in the middle of her room. Does she learn anything new about colour?
Jackson argues that she does: she has, for the first time, experienced colour herself, something she knew nothing about. Daniel Dennett argues that Jackson is a pillock; the premise is that Mary knows all there is to know about the physical perception of colour, and that by definition includes what it is like to experience the colour of a red rose.
Dennett is right, of course; Jackson has hauled qualia into the argument, and as soon as someone starts talking about qualia, you know you can safely ignore everything they have to say. Qualia are, by definition, what is left over when you have taken away everything physical from a mental state or process. Since mental processes (and states) are entirely physical, qualia do not exist.
Jackson thinks he has disproved physicalism with this thought experiment, just as Berkeley thought he had in his day. He is no more right than Berkeley, of course, and has far less excuse.