My take on why the study of consciousness may not be so simple

I don't think so either. However, a phenomenon like receptive aphasia really turns the question around so that we can look at it from a different direction. Certain parts of the brain can be said to be correlated with the ability to comprehend spoken language, but we're still left with all of the same unanswered questions we had before. The one which intrigues me most is the fact that Miss Ruby has Wernicke's aphasia, and yet she can always clearly understand what she herself is saying. Her words are always purposeful, and this shouldn't be possible.

But it gets better (or worse, or... well, I don't know.) I had a traumatic brain injury in a car accident many years ago. I've had PET, CAT, and MRI scans done. I have copies of all of these, and
the MRI scan has some notes on it. Basically, they said that I had general cerebral atrophy and neuronal damage (analogous to what you'd expect to see in something along the lines of, say, late-stage Korsakoff's Syndrome.) As y'all can clearly see, however, I'm typing complete sentences right now. :rolleyes: There's definitely no way that I should have been able to get a master's degree in social work. Now, there are complex neurological reasons why everything may have worked out this way and I'm not going to explore them all now, but the point is that knowing which abilities will be affected by manipulating corresponding portions of the brain can be a little like knowing that we can nudge two atoms together with a sledgehammer.

This is a problem tackled by the neurologist Kurt Goldstein.
http://en.wikipedia.org/wiki/Kurt_Goldstein

who suggested a non-localised theory of aphasia

http://www.ling.fju.edu.tw/neurolng/goldstein.htm

I have his book "The Organism"

http://www.amazon.com/Organism-Kurt-Goldstein/dp/0942299973

which I need to get around to reading.

Your posts have motivated me to read it sooner rather than later.

This review of the implications of Goldstein's Holism is also interesting.

http://www.natureinstitute.org/pub/ic/ic2/goldstein.htm
 
I can't agree. I think there are some very important differences. However, there are without question some very important parallels. I think you are simply focusing on the differences. Your microscope-for-hammer analogy simply doesn't work IMO.

We can simulate brain processes by computer to a significant degree. This is an objective and substantial advancement.

We can simulate many processes by computer. We can simulate weather systems, for example. Using the computer to gain insight into how things work is a worthwhile endeavour - provided we don't fall into the trap of thinking that a simulation is the thing simulated.
 
Sorry to come in late. I presume someone has pointed out that it is not possible to put the brain in all its possible states merely by reading books. Therefore there are states that Mary cannot put herself in, except perhaps if surgery is allowed. Therefore whether or not Mary learns anything new when she leaves the room is not a comment about physicalism, but about the limitations of human book-learning.

I assume it's also been noted that knowing everything there is to know about the human vision system is tantamount to knowing everything.

~~ Paul

I agree that Mary's Room doesn't disprove physicalism, but it does throw doubt on certain models of how the brain works.
 
The only way around this is to claim that physicalism holds that book learning should be sufficient to convey all possible internal experiences. Does anyone think this?

~~ Paul

I think it's implicit in the Strong AI computational model of consciousness that passing information into a system is equivalent to experience.

If it's not equivalent, then how would a computer program differentiate between different sources of data?
 
We can simulate many processes by computer. We can simulate weather systems, for example. Using the computer to gain insight into how things work is a worthwhile endeavour - provided we don't fall into the trap of thinking that a simulation is the thing simulated.
If the thing being simulated is an informational process, then the simulation is identical to the thing being simulated.

There is no difference in outcome between a program run on a physical computer and a program run on a virtual computer. If there were, no-one would use virtual computers.
 
If the thing being simulated is an informational process, then the simulation is identical to the thing being simulated.

There is no difference in outcome between a program run on a physical computer and a program run on a virtual computer. If there were, no-one would use virtual computers.

That's just a way of saying that if brain function is entirely equivalent to a computer program, then it's entirely equivalent to a computer program.

That's to assign a particular meaning to the phrase "informational process". Everything that happens in the universe is an "informational process".
 
This is a problem tackled by the neurologist Kurt Goldstein.
http://en.wikipedia.org/wiki/Kurt_Goldstein

who suggested a non-localised theory of aphasia

http://www.ling.fju.edu.tw/neurolng/goldstein.htm

I have his book "The Organism"

http://www.amazon.com/Organism-Kurt-Goldstein/dp/0942299973

which I need to get around to reading.

Your posts have motivated me to read it sooner rather than later.

This review of the implications of Goldstein's Holism is also interesting.

http://www.natureinstitute.org/pub/ic/ic2/goldstein.htm


Nothing against Goldstein, but you may want to study more recent research. It's probably an interesting historical foundation for the phenomena of aphasia, but I doubt it is really up to date.

Current theories, models, and testing show a range of phenomena: certain brain events seem to involve several different areas of the brain, while other functions are very localized.

A brief look at Goldstein's material is interesting, but it isn't informed by the last 30+ years of research.

:)
 
It is. That doesn't address the point.

That's precisely the point that has been discussed about Mary's Room - that information passing into a system is not equivalent to experience. No matter how much information Mary has about "red" - state S1 - it will not equate to the experience of red unless the data is received in a particular way.

There are philosophers, apparently, who seem to claim that this is a "red" herring, and that looked at properly, experience apparently disappears as a concept, and we can just concentrate on neurons. That seems absurd to me, but at least it's a way to resolve the issue. But if you accept that experience is real, and can only be obtained by sensory means, then that poses a significant problem for the computational model of consciousness, because a computer program doesn't care how it gets its data - indeed, the architecture of computers is designed to conceal, in multiple layers, the origin of all data inputs.

For example - I've written a program which collects data from telephone exchanges, collates it and writes it to a database. In order to test it, it's possible to use an emulation program to generate the data. It's also possible to dump the data into large files, and use those repeatedly in test cycles.

If the program has an "experience" then it's identical for any of these different means of obtaining information. This is entirely different to Mary, for whom the only way to enter state S2 is to receive visual data.
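To make this concrete, here's a minimal sketch (in Python, with entirely hypothetical names and a made-up record format - it is not the actual program) of how such a collector can be written so that it genuinely cannot tell where its records come from:

Code:
# Minimal sketch of a collector that is indifferent to its data source.
# All names and the record format are hypothetical illustrations.
import io

def collate(records):
    """Group call records by exchange id and sum their durations."""
    totals = {}
    for line in records:
        exchange_id, duration = line.strip().split(",")
        totals[exchange_id] = totals.get(exchange_id, 0) + int(duration)
    return totals

# Source 1: "live" data arriving from an exchange (stubbed here as a generator).
def live_feed():
    yield "EX01,120"
    yield "EX02,45"
    yield "EX01,30"

# Source 2: an emulator generating synthetic test data.
def emulator(n):
    for i in range(n):
        yield "EX%02d,%d" % (i % 2 + 1, 60)

# Source 3: a dump file replayed in a test cycle.
dump_file = io.StringIO("EX01,120\nEX02,45\nEX01,30\n")

# The collator's behaviour depends only on the records it receives,
# never on which of the three sources produced them.
print(collate(live_feed()))   # {'EX01': 150, 'EX02': 45}
print(collate(emulator(4)))   # {'EX01': 120, 'EX02': 120}
print(collate(dump_file))     # {'EX01': 150, 'EX02': 45}

The first and third sources produce identical output; the collator has no way of knowing, or caring, which one fed it.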
 
That's precisely the point that has been discussed about Mary's Room - that information passing into a system is not equivalent to experience.
Yes. Which Mary's Room completely fails to establish for a number of different reasons.

No matter how much information Mary has about "red" - state S1 - it will not equate to the experience of red unless the data is received in a particular way.
That's the assertion. It's not true.

There are philosophers, apparently, who seem to claim that this is a "red" herring, and that looked at properly, experience apparently disappears as a concept, and we can just concentrate on neurons.
That's perfectly valid. I don't necessarily take that route, but experience, if there is to be such a thing, is neural processes.

That seems absurd to me, but at least it's a way to resolve the issue. But if you accept that experience is real, and can only be obtained by sensory means, then that poses a significant problem for the computational model of consciousness, because a computer program doesn't care how it gets its data - indeed, the architecture of computers is designed to conceal, in multiple layers, the origin of all data inputs.
Of course, I don't accept that experience can only be obtained by sensory means, because it's simply not true.

For example - I've written a program which collects data from telephone exchanges, collates it and writes it to a database.
Okay.

In order to test it, it's possible to use an emulation program to generate the data.
Sure.

It's also possible to dump the data into large files, and use those repeatedly in test cycles.
Sure.

If the program has an "experience" then it's identical for any of these different means of obtaining information.
Yes.

This is entirely different to Mary, for whom the only way to enter state S2 is to receive visual data.
No, not even remotely.

The problem with the Mary's Room argument is that it rests on a premise that is physically impossible.

If we accept the premise, then Mary already includes state S2.

RandFan and some others are arguing that the premise is impossible, and that the argument fails to prove anything for that reason. They are correct.

I am arguing that if we grant the impossible premise, the argument then asserts a second, contradictory premise, and fails to prove anything anyway. This is also correct.

We are making different points, but we are not disagreeing. The Mary's Room argument fails no matter what you do.
 
Nothing against Goldstein, but you may want to study more recent research. It's probably an interesting historical foundation for the phenomena of aphasia, but I doubt it is really up to date.

Current theories, models, and testing show a range of phenomena: certain brain events seem to involve several different areas of the brain, while other functions are very localized.

A brief look at Goldstein's material is interesting, but it isn't informed by the last 30+ years of research.

:)

I have to start somewhere. :)

P.S. It's the weekend, so I'm tied up. Will revert later re: your previous post.
 
No.


No.


No.

I keep forgetting why I put the Pixy on ignore. Ah yes, the combination of unfounded assertion and unsupported denial. Occasionally backed up with an irrelevant URL that has little to do with the subject under discussion.
 
westprog said:
That's precisely the point that has been discussed about Mary's Room - that information passing into a system is not equivalent to experience. No matter how much information Mary has about "red" - state S1 - it will not equate to the experience of red unless the data is received in a particular way.
Received or internally generated in particular ways, yes.

There are philosophers, apparently, who seem to claim that this is a "red" herring, and that looked at properly, experience apparently disappears as a concept, and we can just concentrate on neurons. That seems absurd to me, but at least it's a way to resolve the issue. But if you accept that experience is real, and can only be obtained by sensory means, then that poses a significant problem for the computational model of consciousness, because a computer program doesn't care how it gets its data - indeed, the architecture of computers is designed to conceal, in multiple layers, the origin of all data inputs.
I'm not sure what you're saying here. What do you mean by "experience ... can only be obtained by sensory means"? I can generate internal experiences at will.

For example - I've written a program which collects data from telephone exchanges, collates it and writes it to a database. In order to test it, it's possible to use an emulation program to generate the data. It's also possible to dump the data into large files, and use those repeatedly in test cycles.

If the program has an "experience" then it's identical for any of these different means of obtaining information. This is entirely different to Mary, for whom the only way to enter state S2 is to receive visual data.
Which could be obtained from the real world or generated by the vat circuitry of the "brain in a vat."

~~ Paul
 
I keep forgetting why I put the Pixy on ignore. Ah yes, the combination of unfounded assertion and unsupported denial. Occasionally backed up with an irrelevant URL that has little to do with the subject under discussion.
I'm sorry, but repeating an incorrect assertion doesn't make it less wrong.

And if you fail to read the explanation of why you are wrong the first time, there seems little point in repeating it.
 
Which could be obtained from the real world or generated by the vat circuitry of the "brain in a vat."
Indeed. For Mary to acquire the knowledge that she is supposed to have, she would need to be a brain in a vat.

If one day we unplug the vat circuitry, plug in some CCDs, and wheel her outside to look at a rose, she would simply say yep, that's red.
 
Malerin said:
Their brains change when they learn, but scientists do not have to adopt a particular brain state in order to learn about brain states. Yet that is exactly what is being asserted: in order to have complete knowledge of brain state X, one must replicate brain state X in their brain. That does not go on in neurological studies, nor is there any reason to think it will ever become a necessary condition. It's ad hoc.
You're equivocating on the term complete knowledge. I suggest you stop for a moment and tell us whether complete knowledge includes the fact that I now have or once had the internal experience in question.

~~ Paul
 
I don't think he's so much equivocating as completely ignoring the question. He discusses complete knowledge in one sentence, and then just forgets that it was even mentioned in the next. And fails to see that this might be significant.
 
That's precisely the point that has been discussed about Mary's Room - that information passing into a system is not equivalent to experience. No matter how much information Mary has about "red" - state S1 - it will not equate to the experience of red unless the data is received in a particular way.

There are philosophers, apparently, who seem to claim that this is a "red" herring, and that looked at properly, experience apparently disappears as a concept, and we can just concentrate on neurons. That seems absurd to me, but at least it's a way to resolve the issue. But if you accept that experience is real, and can only be obtained by sensory means, then that poses a significant problem for the computational model of consciousness, because a computer program doesn't care how it gets its data - indeed, the architecture of computers is designed to conceal, in multiple layers, the origin of all data inputs.

For example - I've written a program which collects data from telephone exchanges, collates it and writes it to a database. In order to test it, it's possible to use an emulation program to generate the data. It's also possible to dump the data into large files, and use those repeatedly in test cycles.

If the program has an "experience" then it's identical for any of these different means of obtaining information. This is entirely different to Mary, for whom the only way to enter state S2 is to receive visual data.

You literally have no idea what you are talking about.

First, the human brain doesn't care how it gets data either. Whether the photons come from a real red object or merely an image of something red, or even if there are photons at all and the neural stimulation comes from something like pressure (ever pressed on your eyeballs?) is all irrelevant. All that matters is that retinal neurons fire in an equivalent way.

Second, computer programs require a certain structure of input at precisely the right point in the process. Take the program you supposedly wrote (yet seem to be utterly ignorant of how it actually functions) -- you can't just plop random words of data down anywhere and expect the program to behave as you want, as you are claiming. The input has to be both 1) structured in a way the program expects and 2) wherever the program looks for it. At the very least, the words need to be big- or little-endian (and that is just the tip of the iceberg) and need to be written at precise locations in memory or hardware registers.
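A toy sketch (hypothetical, not drawn from the program under discussion) makes the point: the same bytes read with the wrong byte order, or from the wrong offset, no longer mean what the program expects.

Code:
# Toy illustration: input only "works" when it has the structure the
# program expects and sits where the program looks for it.
import struct

# A record the program expects: a 4-byte big-endian integer at offset 0.
buffer = struct.pack(">I", 1000) + b"padding"

value_ok           = struct.unpack_from(">I", buffer, 0)[0]  # as specified
value_wrong_endian = struct.unpack_from("<I", buffer, 0)[0]  # wrong byte order
value_wrong_offset = struct.unpack_from(">I", buffer, 4)[0]  # wrong location

print(value_ok)            # 1000
print(value_wrong_endian)  # 3892510720 - same bytes, nonsense value
print(value_wrong_offset)  # whatever the padding bytes happen to encode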

Or are you claiming I can write "input" to a random memory location and your program will work properly?

Funny, but once the picture is painted correctly, it starts to sound pretty much like a human brain, which needs a certain structure of input (retinal neuron excitation in a pattern equivalent to that which red-wavelength photons produce) at a precise location in its process (the retinal neurons).
 
