Dancing David:
I assume you mean tell the difference empirically?
P.S. I am still thinking about your last two posts, have not forgotten!
Quote: "I assume you mean tell the difference empirically?"
Any way to tell the difference will do.
They are not. Look at the design of a Turing machine; it's not at all modular.
It's also not a design; it's a thought experiment. What makes the Turing machine work is not specified. Think of a Turing machine as a functional spec, possibly.
I've already said that what is needed is a functional specification.
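To make "functional specification" concrete, here is a minimal sketch (hypothetical Python written for this discussion, not anything from the thread): a Turing machine is nothing but its transition table, and the spec says nothing about the mechanism, silicon, clockwork, or a patient human with paper, that carries it out.

```python
# A Turing machine reduced to its functional spec: a transition table of
# (state, symbol) -> (new_state, new_symbol, head_move). How the table is
# physically realized is deliberately left unspecified.

def run_turing_machine(transitions, tape, state="start", accept="halt"):
    cells = dict(enumerate(tape))  # sparse tape; unwritten cells read as blank
    head = 0
    while state != accept:
        symbol = cells.get(head, " ")
        state, cells[head], move = transitions[(state, symbol)]
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip()

# Hypothetical example machine: append one '1' to a unary number.
INCREMENT = {
    ("start", "1"): ("start", "1", +1),  # scan right across the 1s
    ("start", " "): ("halt", "1", +1),   # first blank: write a 1 and halt
}

print(run_turing_machine(INCREMENT, "111"))  # prints 1111
```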
That seems to be the reverse of what is being attempted with the neural net approach, where we don't fully understand what the brain is doing, but we try to duplicate the general layout.
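As a rough illustration of what "duplicate the general layout" means (a hypothetical sketch, not a model anyone in the thread proposed): a toy neural net copies the brain's arrangement, many simple units connected in layers, without any claim that we understand what the real circuitry is doing.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # One layer of crude "neurons": weighted sums through a nonlinearity.
    # The layout is brain-inspired; the detail is not.
    return np.tanh(inputs @ weights + biases)

# Hypothetical 4-input, 8-hidden-unit, 2-output net with random weights.
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=(1, 4))     # one input pattern
hidden = layer(x, w1, b1)       # the "general layout": units and connections
print(layer(hidden, w2, b2))    # outputs, with no claim about meaning
```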
Circular reasoning.
You: "Computers and brains are different!"
Me: "How so ?"
You: "Because you can't replace brains with computers!"
Me: "Why not ?"
You: "Because there are differences between computers and brains!"
In this case, I think that the onus is on the people making such a positive claim to justify it with hard evidence that all the functionality of the brain can be duplicated by computers.
Quote: "As an aside, it's noteworthy that the brain doesn't, in fact, function like a computer."
In what way does it not function like a computer?
Quote: "In what way does it not function like a computer?"
Well, not quite the same.

The Chinese Room Argument. I devised my own version once before I knew who Searle was. I don't think I could do the rebuttals justice. See Replies to the Chinese Room Argument.
Quote: "Exactly. When we define consciousness as the ideal result of a material process then we end up with a material world defined ideally. When we define consciousness as the material result of an ideal we end up with an ideal world defined materially. We need to outgrow the limitations of our language."
Personally I would sidestep both positions and avoid ontology altogether, other than to observe trivially that whatever is, is.
Quote: "And I cannot put my finger on the precise reason I wouldn't."
I would. Definitely.

But self-awareness is not the problem. If I build a robot with cameras and touch sensors and program it to build a model of its environment by which it can navigate, then by definition it is self-aware, because it could not navigate its environment if it did not also model itself. My point here is that simple systems already display simple self-awareness. Why would there be anything to prevent complex systems from exhibiting complex self-awareness?

Quote: "I just find it implausible is all."
Why not?

~~ Paul
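Paul's robot can be put in code to show the sense in which the navigator's world model must contain the navigator. In this hypothetical toy (invented for illustration, with a made-up grid and a self_pos parameter), path planning literally starts from the model's entry for the robot itself; remove that entry and there is nothing to plan from.

```python
# Toy navigator: the world model = an obstacle map plus the robot's own
# position. The self-entry is what the "simple self-awareness" claim points at.

GRID = [
    "....#",
    ".##.#",
    "....#",
    ".#...",
]

def neighbors(pos):
    r, c = pos
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == ".":
            yield (nr, nc)

def plan(self_pos, goal):
    # Breadth-first search outward from the model's representation of itself.
    frontier, seen = [[self_pos]], {self_pos}
    while frontier:
        path = frontier.pop(0)
        if path[-1] == goal:
            return path
        for nxt in neighbors(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

print(plan(self_pos=(0, 0), goal=(3, 4)))  # a path, found only via self_pos
```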
Quote: "Whatever is different between a neuron and a computer. As it is at present entirely impossible to replace a neuron with a computer, I think that's still something of an issue."
Sounds like the invisible angel theory of flight before we understood aerodynamics. Why do you assume something is missing if you don't even know if it exists?
Quote: "That seems to be the reverse of what is being attempted with the neural net approach, where we don't fully understand what the brain is doing, but we try to duplicate the general layout."
As the Germans did by reverse engineering British radar equipment. We can in fact learn by reverse engineering. It's not at all a new procedure.
Quote:
Circular reasoning.
You: "Computers and brains are different!"
Me: "How so?"
You: "Because you can't replace brains with computers!"
Me: "Why not?"
You: "Because there are differences between computers and brains!"

I think Belz is right. You are going to need to explain what the difference is before you can assert that there is a fundamental difference.
Quote: "As usual, my arguments have been misrepresented."
I'm honestly not so sure.
Quote: "I'm claiming that there is insufficient evidence that justifies the claim that the brain, or components of the brain, could be replaced with some form of computer equipment."
Based on what theory? (Here is where I think you will find the problem with your argument.)
Quote: "In this case, I think that the onus is on the people making such a positive claim to justify it with hard evidence that all the functionality of the brain can be duplicated by computers."
We think it can, and we are attempting to falsify the hypothesis. You are simply saying no.
Quote: "As an aside, it's noteworthy that the brain doesn't, in fact, function like a computer."
This is a broad statement. As an aside, in many ways, the brain DOES, in fact, function like a computer.
Quote: "I would baulk, however, at any certainty about either being conscious."
I think you've found a distinction without any difference. Searle was speaking in the colloquial sense when he said understand. Otherwise there would have been no point to his thought experiment.
Quote: "I think you've found a distinction without any difference. Searle was speaking in the colloquial sense when he said understand. Otherwise there would have been no point to his thought experiment."
I am not sure what the colloquial sense of understanding is. But even in a colloquial sense there is a difference between understanding and consciousness.
Quote: "I am not sure what the colloquial sense of understanding is."
The ability to comprehend concrete and abstract concepts?
Quote: "But even in a colloquial sense there is a difference between understanding and consciousness."
Not really, no. How does one understand without consciousness?
Quote: "And I think the point to the thought experiment was about meaning, wasn't it?"
Searle is trying to debunk strong AI (a computer that is aware).
Stanford Encyclopedia said:
Chinese room
The Chinese Room argument, devised by John Searle, is an argument against the possibility of true artificial intelligence.
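The setup of the argument is easy to state in code. In this hypothetical sketch (the rule book and phrases are invented for illustration), the "room" returns fluent replies by pure symbol lookup; Searle's question is whether anything in the loop understands the symbols it shuffles.

```python
# The "room": a rule book mapping input symbols to output symbols. Whoever
# or whatever applies the rules, a person or a CPU, follows syntax alone.

RULE_BOOK = {
    "你好吗?": "我很好, 谢谢.",        # "How are you?" -> "I am fine, thanks."
    "你叫什么名字?": "我没有名字.",    # "What is your name?" -> "I have no name."
}

def chinese_room(symbols):
    # Return whatever the rule book dictates; there is no model of meaning here.
    return RULE_BOOK.get(symbols, "请再说一遍.")  # "Please say that again."

print(chinese_room("你好吗?"))  # a fluent reply produced by lookup alone
```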
Quote: "The ability to comprehend concrete and abstract concepts?"
Isn't comprehend just another word for understand?
Quote: "Not really, no. How does one understand without consciousness?"
Can I not be conscious of something without understanding it?
Quote: "It's not make or break for the discussion. If you want to disagree, that is fine. There are millions of pages that detail Searle's intention."
The only relevant ones being the ones actually written by Searle.
Quote: "Searle is trying to debunk strong AI (a computer that is aware)."
Yes, that is his overall aim; the particular point he is making in the Chinese room argument is, as far as I remember, about meaning.