My take on why the study of consciousness may indeed not be so simple:

They are not. Look at the design of a Turing machine; it's not at all modular.

It's also not a design, it's a thought experiment. What makes the Turing machine work is not specified. Think of a Turing machine as a functional spec, possibly.
 
But Turing machines -- well, finite approximations of Turing machines, anyway; all you need is a longer tape -- have been built to the specifications of Alan Turing. And they're not at all modular. And they're still Turing-complete.
 
I've already said that what is needed is a functional specification.

That's interesting. When I suggested this earlier, you vehemently denied (posts 502 and 666) that functional specifications were at all meaningful.


That seems to be the reverse of what is being attempted with the neural net approach, where we don't fully understand what the brain is doing, but we try to duplicate the general layout.

Actually, we understand quite a bit about what the brain is doing, and there's much more research in duplicating the behavior of individual neurons than there is in duplicating the "general layout" of the brain as a whole, because the brain as a whole has too many neurons to be tractably modelled.

Where do you think the "Hebbian learning rule" came from, if we don't understand what the brain is doing?
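
For what it's worth, the rule itself is simple enough to write down. A minimal sketch in Python; the learning rate and the toy activation values here are made up purely for illustration:

Code:
import numpy as np

# Hebbian learning: "neurons that fire together wire together".
# The weight between two units grows in proportion to the product of
# their activations. eta and the activations below are toy values.
eta = 0.1
pre = np.array([1.0, 0.0, 1.0])    # presynaptic activations
post = np.array([1.0, 1.0, 0.0])   # postsynaptic activations

weights = np.zeros((len(post), len(pre)))
weights += eta * np.outer(post, pre)   # dw[i][j] = eta * post[i] * pre[j]
print(weights)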
 
Circular reasoning.

You: "Computers and brains are different!"
Me: "How so ?"
You: "Because you can't replace brains with computers!"
Me: "Why not ?"
You: "Because there are differences between computers and brains!"

I've noticed this tendency in the consciousness debates to put forward entirely hypothetical arguments as if they'd actually happened. We haven't actually managed to replace any part of the nervous system so far - even the part whose function is (as far as we know) fully understood.

As usual, my arguments have been misrepresented. I've been claiming that there is insufficient evidence to justify the claim that the brain, or components of the brain, could be replaced with some form of computer equipment. In this case, I think that the onus is on the people making such a positive claim to justify it with hard evidence that all the functionality of the brain can be duplicated by computers. As an aside, it's noteworthy that the brain doesn't, in fact, function like a computer. That's merely another problem to be overcome should an effort be made, first, to define all the functions of the brain, and second, to demonstrate how computers could perform all those functions.
 
In this case, I think that the onus is on the people making such a positive claim to justify it with hard evidence that all the functionality of the brain can be duplicated by computers.

Easily done. Artificial neural networks are Turing-equivalent; Turing machines are neural-network equivalent. That's a mathematical theorem, which is about as hard evidence as it gets.

Since artificial neural networks mimic a subset of human neural function, it's therefore provable that human neural function is also Turing equivalent. Since there's no known or hypothesized capacity that is greater than Turing equivalent, there's no known or hypothesized way in which Turing machines can be less powerful than human brains.
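
To give a concrete flavour of that equivalence, here is a toy sketch rather than anything definitive: a single McCulloch-Pitts-style threshold unit computing NAND, with weights I simply picked by hand. Since NAND is functionally complete, networks of such units can compute any boolean function, which is one small ingredient of the equivalence results.

Code:
# A single threshold unit computing NAND; the weights and bias are
# hand-picked for illustration. Since NAND is functionally complete,
# networks of units like this can compute any boolean function.
def nand_unit(x1, x2, w1=-2.0, w2=-2.0, bias=3.0):
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand_unit(a, b))   # prints 1, 1, 1, 0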

Which puts the ball back in your court. If you are hypothesizing that there is something -- anything -- that humans can do that Turing machines cannot, you should be able at a minimum to describe what that something is, since it's your hypothesis.

And that's more or less what Lucas-Penrose tried, using the Gödelian argument. Unfortunately, they demonstrably got it wrong in several regards. Which means we're back to the "there's no known or hypothesized way" in which human brains can be more powerful.

Game-Set-Match.
 
The Chinese Room Argument. I devised my own version once, before I knew who Searle was. I don't think I could do the rebuttals justice. See Replies to the Chinese Room Argument.
Well, not quite the same.

Searle's argument was about understanding, whereas mine is about consciousness.

I think Searle just made a blunder when he said that the Chinese Room does not understand Chinese because the people operating it did not understand Chinese.

That would not seem to be relevant, since they are just part of the machinery in this case.

Also, as far as I recall, Searle did not provide a definition of "understand".

Personally I have no trouble saying the Chinese Room understands Chinese, just so long as it is actually able to carry on a conversation in Chinese - even if in very slow motion.

Similarly, I would have no problem saying that a desk-checked program understands, just so long as it is able to meet the behavioural criteria at whatever speed.

I would baulk, however, at any certainty about either being conscious.

But I could still not put my finger on just what it is that they are not.
 
Exactly.

When we define consciousness as the ideal result of a material process then we end up with a material world defined ideally.

When we define consciousness as the material result of an ideal process then we end up with an ideal world defined materially.

We need to outgrow the limitations of our language.
Personally I would sidestep both positions and avoid ontology altogether, other than to observe trivially that whatever is, is.

I think philosophers have reckoned that their game was to outgrow the limitations of language for some time now.

But I think that sometime in the middle of the last century it was realised that we can never completely achieve that objective.
 
I would. Definitely.
And I cannot put my finger on the precise reason I wouldn't.
My point here is that simple systems already display simple self-awareness. Why would there be anything to prevent complex systems from exhibiting complex self-awareness?
But self-awareness is not the problem. If I build a robot with cameras and touch sensors and program it to build a model of its environment by which it can navigate, then by definition it is self-aware, because it could not navigate its environment if it did not also model itself.
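
Here is a minimal sketch of what I mean; the grid, attribute names and values are all invented for illustration. The point is just that the navigation model has to contain an entry for the robot itself, or it is useless for navigating:

Code:
# Toy navigation model: to decide whether it can move, the robot's
# model of the world must include an entry for the robot itself.
# All names and values here are invented for illustration.
world_model = {
    "obstacles": {(2, 3), (4, 1)},   # the external environment
    "self_pos": (0, 0),              # the robot, as an item in its own model
    "self_heading": (0, 1),          # unit step: currently facing "north"
}

def can_move_forward(model):
    x, y = model["self_pos"]
    dx, dy = model["self_heading"]
    return (x + dx, y + dy) not in model["obstacles"]

print(can_move_forward(world_model))   # True with the toy values above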

But what I am getting at is the "what it is like" component of consciousness.

I might program my robot to classify sensory data according to state A (things that are helpful to survival) and state B (things that are unhelpful to survival).

But however complex the robot might become, I could not think of states A and B as ever being more than data - I could not think of them as being pleasure and pain.
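
To put that in concrete terms - everything below, feature names included, is invented for the example - the classifier sorts readings into A and B, and however elaborate it gets, A and B remain labels attached to data:

Code:
# Hypothetical version of the robot's classifier: readings get sorted
# into state A (helpful to survival) or state B (unhelpful). The
# feature names and the threshold are invented for illustration.
def classify(reading):
    score = reading.get("energy_gain", 0) - reading.get("damage", 0)
    return "A" if score > 0 else "B"

print(classify({"energy_gain": 5, "damage": 1}))   # -> "A"
print(classify({"energy_gain": 0, "damage": 3}))   # -> "B"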
 
Why not?

~~ Paul
I just find it implausible is all.

That the product of billions of pencil marks on paper over a billion years might be a moment of pain, or a moment of pleasure like the brief sound of Satchmo's trumpet.

And yet the program could be run starting at a point which included a lifetime of memories.

I am not saying this is any knock-down argument; it is just putting my doubts into words.
 
Whatever is different between a neuron and a computer. As it is at present entirely impossible to replace a neuron with a computer, I think that's still something of an issue.
Sounds like the invisible angel theory of flight before we understood aerodynamics. Why do you assume something is missing if you don't even know if it exists?

That seems to be the reverse of what is being attempted with the neural net approach, where we don't fully understand what the brain is doing, but we try to duplicate the general layout.
As the Germans did by reverse engineering British radar equipment. We can in fact learn by reverse engineering. It's not at all a new procedure.
 
Circular reasoning.

You: "Computers and brains are different!"
Me: "How so ?"
You: "Because you can't replace brains with computers!"
Me: "Why not ?"
You: "Because there are differences between computers and brains!"
I think Belz is right. You are going to need to explain what the difference is before you can assert that there is a fundamental difference.
 
As usual, my arguments have been misrepresented.
I'm honestly not so sure.

I've been claiming that there is insufficient evidence to justify the claim that the brain, or components of the brain, could be replaced with some form of computer equipment.
Based on what theory? (Here is where I think you will find the problem with your argument.)

In this case, I think that the onus is on the people making such a positive claim to justify it with hard evidence that all the functionality of the brain can be duplicated by computers.
We think it can and we are attempting to falsify the hypothesis. You are simply saying no.

As an aside, it's noteworthy that the brain doesn't, in fact, function like a computer.
This is a broad statement. As an aside, in many ways, the brain DOES, in fact, function like a computer.
 
I would baulk, however, at any certainty about either being conscious.
I think you've found a distinction without any difference. Searle was speaking in the colloquial sense when he said "understand". Otherwise there would have been no point to his thought experiment.
 
I think you've found a distinction without any difference. Searle was speaking in the colloquial sense when he said "understand". Otherwise there would have been no point to his thought experiment.
I am not sure what the colloquial sense of understanding is. But even in a colloquial sense there is a difference between understanding and consciousness.

And I think the point to the thought experiment was about meaning, wasn't it?
 
I am not sure what the colloquial sense of understanding is.
The ability to comprehend concrete and abstract concepts?

But even in a colloquial sense there is a difference between understanding and consciousness.
Not really, no. How does one understand without consciousness?

It's not make or break for the discussion. If you want to disagree that is fine. There's millions of pages that detail Searle's intention.

And I think the point to the thought experiment was about meaning, wasn't it?
Searle is trying to debunk strong AI (a computer that is aware).

Stanford Encyclopedia said:
Chinese room
The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence.
 
The ability to comprehend concrete and abstract concepts?
Isn't "comprehend" just another word for "understand"?
Not really, no. How does one understand without consciousness?
Can I not be conscious of something without understanding it?
It's not make or break for the discussion. If you want to disagree that is fine. There's millions of pages that detail Searle's intention.
The only relevant ones being the ones actually written by Searle.
Searle is trying to debunk strong AI (a computer that is aware).
Yes, that is his overall aim; the particular point he is making in the Chinese Room argument is, as far as I remember, about meaning.

By the way, don't treat the SEP as gospel; I have often pointed out places where it is egregiously inaccurate.
 
