
On Consciousness

Is consciousness physical or metaphysical?


I understand that these signals result from an aggregation of neural firings, but I've read in several places that there is not yet a consensus on whether they play a functional role within the brain, which would mean they're not "just noise". Beelzebuddy's post above, which prompted my questions, suggests that they may be something akin to a "system clock".

What I meant by asking what the results of that anesthesiology study would mean in programming terms was this: what would be the programming equivalent of a process that requires different subroutines to use different clock speeds, and whose output differs depending on whether the two clocks are in or out of phase with each other?
The just noise/not noise thing goes back to waaay earlier in the thread. To clarify what I was saying earlier, brain waves are produced by synchronous neural populations, and those populations probably synchronize for a good reason, but the brain waves themselves don't do anything and are a noisy side effect. If they did do anything, we should be able to replicate it by putting someone in an oscillating magnetic field, but we've done that in the past and it doesn't seem to do anything.

As for the second question, I could maybe BS something if you want, but it'd be no more likely to be true than Piggy's hidden dualism. There's lots of possible reasons why you'd run into weird phase effects when you feed weird voltages into a processor or randomly corrupt working memory in a program, actions which are broadly analogous to what's happening here with anesthesia. Without knowing much more about which exact parts of the brain are being affected by propofol and how, it's all just wild guessing.
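
Purely as a programming illustration, though -- not a claim about what anesthesia actually does -- here's a minimal Python sketch of the kind of phase-dependent behavior you're asking about: two routines on mismatched clocks whose joint output depends on whether the clocks happen to line up. The frequencies and the "integration" rule are made up.

```python
# A toy illustration of phase-dependent output -- a programming analogy
# only, with made-up frequencies, not a model of anything neural.

def phase(t, freq):
    """Phase (0.0 to 1.0) of a clock with the given frequency at time t."""
    return (t * freq) % 1.0

def in_phase(t, f1, f2, tolerance=0.1):
    """Two clocks count as 'in phase' when their phases nearly coincide."""
    return abs(phase(t, f1) - phase(t, f2)) < tolerance

def system_output(t, f1, f2):
    # Hypothetical rule: the two subroutines can only exchange data when
    # their clocks line up; otherwise each emits its own partial result.
    return "integrated" if in_phase(t, f1, f2) else "fragmented"

# Sample the system over time with two slightly mismatched clock rates.
for step in range(10):
    t = step * 0.05
    print(f"t={t:.2f}s -> {system_output(t, f1=8.0, f2=9.5)}")
```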
 

Thanks again. Not to be disrespectful, but you and Piggy keep reminding me of a time, around 25 years ago, when I was a student in Daniel Dennett's class, and he would occasionally come into class in a huff because he and John Searle were engaged in some kind of public back-and-forth tiff in which each kept finding new ways to accuse the other of being a dualist. :D Sorry. Carry on.
 
Noise relative to what?
Relative to brain activity, since that's what we're talking about.

Brain activity produces EM fields, which we can detect with an EEG. These fields are pretty weak, though. Extremely weak at the surface of the scalp, and still weak even right at the neuron.

Stand under a fluorescent ceiling light and the EM field from the light, at your scalp or right at the neuron, is significantly stronger than the EM field from your neural activity.

TMS (transcranial magnetic stimulation) works, but the field from a TMS device is on the order of that used for MRI -- way beyond anything the brain itself generates.

I'll ask you what I'm asking others.

If you propose that you can replicate a behavior performed by animal bodies in a machine, what behavior is it that you propose to replicate?
As I've said many times: Information processing. That is, after all, what the brain does, and what computers do.
 
Thanks again. Not to be disrespectful, but you and Piggy keep reminding me of a time, around 25 years ago, when I was a student in Daniel Dennett's class, and he would occasionally come into class in a huff because he and John Searle were engaged in some kind of public back-and-forth tiff in which each kept finding new ways to accuse the other of being a dualist. :D Sorry. Carry on.
That would have been so cool!

I'm not sure that John Searle is a dualist; he asserts that he is not, but he does assert that certain others are dualists when they clearly are not.

He's most definitely wrong, I'm just not sure he's a dualist.
 

John Searle is wrong about what?
 
Why a "magic bean" is necessary for consciousness -- except it's not magic, and it's not a bean

There are basically two camps on these threads -- which I'll call the informationalist camp and the physicalist camp.

The informationalists say that all that is needed for consciousness is for the brain to process information about the world around it. No further physical processes are necessary in the brain above and beyond those required to process information about what's coming in. In the case of dreams, that can be explained by retrieving such information from memory.

The physicalists maintain that this approach is insufficient, that the generation of conscious experience requires its own specialized hardware in addition to the hardware needed to "process information about" (not a physicalist term, but we'll borrow it) the "input" coming into the brain.

Some in the informationalist camp have derisively labeled this additional specialized hardware as -- for some reason -- a "magic bean". (Why they use the term "magic" rather than, say, "superfluous", I can't say.)

But the Achilles' heel of the informationalist position is simply this -- take the simple example of the performance of color in the human brain: color is not "information about" light.

As we've seen, color is not a property or quality of light, nor of the tissues in our brains.

Light does have distinct properties, of course -- speed, wavelength, frequency, amplitude… but none of these are qualia in our phenomenology, which is simply to say that we don't consciously experience any of that, for reasons that have been clearly explained upthread.

What we do experience is color, and brightness (meaning that quality that makes you squint from the glare). And neither of those are properties of either light or brain tissue.

Therefore, you can "process information about" light six ways to Sunday and you'll never get color. You'll never get red, because red is not "information about" light. It's not there to be "processed".

Trace the physics of the neural chain, and you'll never find red. It ain't there. (I'm ready to be proven wrong about this, of course, but I'll need a clear explanation of where the red comes from.)

In other words, the informationalists want something for nothing. So, in fact, it is they who are peddling "magic".

In the real world, you don't get something for nothing. If some new and unique behavior is being performed, then it's being performed by some sort of process involving matter and energy.

We can get a behavioral response to light from the bounce-back system. All you need are wires and chips and such. We can build machines that respond in different ways to different types of light, and in doing so these machines do indeed differentiate between the kinds of light that our human brains respond to by performing different colors.

What you don't get from this system is the performance or production of color itself.

There is no point in this chain reaction which we can point to and say, "This is where the red occurs".
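
To make that concrete, here's a trivial sketch of such a machine -- Python, with rough textbook wavelength thresholds used purely for illustration. It differentiates light just fine, and you can inspect every line of it without finding red anywhere. Only labels.

```python
def classify_wavelength(nm):
    """Map a wavelength in nanometers to a color label.
    Thresholds are rough textbook values, used here only for illustration."""
    if nm < 380 or nm > 750:
        return "outside the visible range"
    if nm >= 620:
        return "red"
    if nm >= 590:
        return "orange"
    if nm >= 570:
        return "yellow"
    if nm >= 495:
        return "green"
    if nm >= 450:
        return "blue"
    return "violet"

# The machine discriminates 650 nm light from 530 nm light perfectly well...
print(classify_wavelength(650))  # -> "red" (a string, not an experience)
print(classify_wavelength(530))  # -> "green"
# ...but at no line of this program does red occur. Only the label does.
```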

Yet in our brains, red does occur.

But not all the time. We can run subliminal experiments in which we expose a human brain to a red square, for instance, in timeframes too brief for the brain to respond with any phenomenology at all -- in other words, the brain of the animal never has any conscious experience of seeing a red square.

And yet, if we do this repeatedly, consistently following the red square with a specific unrelated image, we find that the brain learns something from the process nonetheless. In later testing, for example, we can show the subject the previously subliminal images at consciously perceptible timeframes and ask the subject to guess which image will come next, and they do significantly better than random chance at the prediction task.
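
(For the statistically minded: "significantly better than random chance" in a task like this would typically be checked with something like a binomial test. A sketch, with made-up counts, assuming a two-alternative guessing task and SciPy at hand:)

```python
from scipy.stats import binomtest

# Hypothetical numbers: the subject guesses the next image correctly on
# 60 of 80 trials, against a 50% chance rate for a two-alternative task.
result = binomtest(k=60, n=80, p=0.5, alternative="greater")
print(f"p-value: {result.pvalue:.5f}")  # a small p-value -> above chance
```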

So in some cases, a human brain will perform red in response to having a certain type of light shone into the eye. In other cases it won't.

Now, some have said, well, the brain has "learned" to perform red. But if that's true, why hasn't my dog learned this trick?

No, that won't wash. In fact, performing red is a built-in function of the human brain -- or, at least, most human brains.

And since red isn't a property of light, the brain must be doing something above and beyond "processing information about" light in order to produce the red.

It is the contention of the physicalist camp that the production of red by the brain is not passive, and is not sufficiently or adequately explained by simply passing it off as "processing information about light" or retrieving such information from memory.

All real behavior requires some sort of hardware, and the performance or production of any sort of phenomenology is real behavior. It is behavior which produces something new and unique.

That's why a "rope brain" can't be conscious. Ropes can't perform red… or the smell of lemons… or the sensation of being sick. All of these are somehow actively produced by the brain, and their production is a behavior above and beyond the "bounce-back" system of the neural chain.

And because of the laws of physics, this extra work -- the production of the phenomenology -- must be performed by some sort of hardware. To label this hardware "magic" is not only wrong-headed but, I must say, rather childish.

All the physicalists are saying is that if you want to build a conscious machine, there must be some hardware dedicated to generating the phenogram, the phenomenology, behaviors like color and sound and pain and pleasure.

Merely "processing information about" the input won't cut it, because if it did, we'd always be experiencing some sort of phenomenology, even when dead asleep, but we know that this isn't the case.

The brain is doing something different, something additional, when consciousness cranks up. And in the real world, that "doing something" must be handled by some sort of hardware dedicated to the task.

That ain't magic. And it ain't beans.
 
John Searle is wrong about what?
Most notably, his Chinese Room argument, and his responses to those pointing out its many flaws. The Chinese Room argument is full of holes, and his arguments about those holes are full of holes. He doesn't seem to be very responsive on the subject of the second-order holes.

His other work may be better, but I haven't looked very closely because of the multitude of errors he makes in his one paper that I have studied. (It was on the reading list for my Foundations of the Cognitive Sciences course back in, what, 1985? So it would have been fairly new at the time.)

He's also wrong about his attribution of dualism to those who point out the logical failings of his Chinese Room argument, and that's what I was alluding to.
 

I have to agree with PixyMisa on this point. The Chinese Room thought experiment is entirely specious.
 
Where does the red come from?

Piggy, describe for me the quale red. Say, someone with red/green color blindness seriously wants to know what red looks like, and assume you really care about communicating everything you can about the red quale to him.

How would you describe it? And, while you're crafting your response, pay attention to what you are doing and how you are doing it.
 
Piggy, describe for me the quale red. Say, someone with red/green color blindness seriously wants to know what red looks like, and assume you really care about communicating everything you can about the red quale to him.

How would you describe it? And, while you're crafting your response, pay attention to what you are doing and how you are doing it.

You can't describe it to them.

That's because of the mechanics of communication and meaning.

Experience and imagination use the same real estate in the brain. Because of that fact, for instance, it's possible in some cases to communicate with people suffering from "locked-in syndrome" by asking them to imagine different things and monitoring their brain behavior. So they could respond "no" by imagining that they're playing tennis, and respond "yes" by imagining that they're walking through their house.
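
(Here's a sketch of the decoding logic involved -- the signal names, values, and threshold are all hypothetical, just to show the shape of the technique:)

```python
def decode_answer(motor_activity, spatial_activity, threshold=1.0):
    """Hypothetical decoder for the imagery protocol described above:
    imagining tennis drives motor areas ('no'); imagining walking
    through one's house drives spatial-navigation areas ('yes').
    Activity values and the threshold are made up for illustration."""
    if motor_activity > threshold and motor_activity > spatial_activity:
        return "no"
    if spatial_activity > threshold and spatial_activity > motor_activity:
        return "yes"
    return "no clear answer"

print(decode_answer(motor_activity=2.3, spatial_activity=0.4))  # -> "no"
print(decode_answer(motor_activity=0.3, spatial_activity=1.8))  # -> "yes"
```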

So if I tell you that I just bought a red Camaro, the sound of the words "red Camaro" set off activity in the areas of your brain involved with language, and these in turn trigger activity in the parts of your brain which are involved in actually seeing a red Camaro in real life.

If your brain doesn't perform red, then there's no real estate in your brain to set off any activity in.

It is therefore impossible for you to ever understand what I mean when I say "red", and no amount of explanation on my part will ever change that, because your brain lacks the ability to produce the response which is what "understanding red" is.
 
Most notably, his Chinese Room argument, and his responses to those pointing out its many flaws. The Chinese Room argument is full of holes, and his arguments about those holes are full of holes. He doesn't seem to be very responsive on the subject of the second-order holes.

His other work may be better, but I haven't looked very closely because of the multitude of errors he makes in his one paper that I have studied. (It was on the reading list for my Foundations of the Cognitive Sciences course back in, what, 1985? So it would have been fairly new at the time.)

He's also wrong about his attribution of dualism to those who point out the logical failings of his Chinese Room argument, and that's what I was alluding to.

It's been a few years since I read Searle, but I think the point of the Chinese Room is sometimes misinterpreted. Anyhoo, the problem I see with it is how it implies there's a magic bean of "understanding" which, BTW, dovetails with the Philosophical Zombie, a robot that acts exactly like a person (would pass any Turing Test) yet has a quale-free internal experience.

I don't recall Piggy addressing the Philosopher's Zombie question.
 

P-zombie thought experiments are even more specious than the Chinese room, and therefore don't need addressing. All p-zombie scenarios assume their conclusion.
 
Do we know whether the phenogram is generated in real time or in retrospect? Is it anything like the illusion of free will, where decisions to act are actually executed by unconscious parts of the brain but later remembered as conscious decisions? Do we actually navigate against the phenogram, or is it some sort of data compression used to store memories about all the sense data we've responded to?
 

There's always a time lag with the phenogram. Everything you experience is about a half-second behind reality, which indicates that there's a lot of illusion going on, especially when it comes to decision-making.

One of the big questions is what the function of the thing really is.

On the one hand, you can't drive while unconscious. On the other hand, if something like swerving to avoid a truck required exclusive navigation against the phenogram, you'd be dead.

Perhaps the phenogram is a kind of stabilizer, a reference point that keeps the bounce-back system working not just with reference to distant objects but also with reference to past and future.

Excellent question, and I don't know that there's a clear answer at the moment.
 
Why a "magic bean" is necessary for consciousness -- except it's not magic, and it's not a bean

<snip>
Qualia exists, I tell you! It exists! And I'm going to repeat that entirely baseless assertion for more than two solid pages of text (I copy+pasted to check) until you agree that it's true!
 

Wait, let me get this straight…

You're claiming that your body produces no phenomenology?

And you're expecting me to believe that?
 