On Consciousness

Is consciousness physical or metaphysical?

Yes, and I found your definitions the most convincing I have seen.

Perhaps I should have written that people do not agree on the definition of consciousness ...

You should not find his definition convincing, because it's unworkable. Nobody studying consciousness in the brain uses it, because you can't.

Pixy says it's self-referential information processing, but that goes on all the time in systems that aren't conscious, including non-conscious processes in the brain itself.
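
To illustrate the breadth of that reading, here is a toy sketch (a hypothetical example, not anyone's actual model): a loop that feeds its own state back into its next computation is self-referential information processing in the broad sense, yet nobody would call it conscious.

```python
# Toy sketch: a process whose next step depends on its own previous
# state. Self-referential information processing in the broad sense,
# but presumably not conscious.
state = 0.0
for reading in [1.0, 0.5, -0.2, 0.8]:
    # The update refers to the process's own prior state.
    state = 0.9 * state + 0.1 * reading
    print(round(state, 3))
```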

If you use that definition, you can't do research at all.

The definition is simply the phenomenology, the stuff that starts going on when you wake up or start dreaming.

You might want to look at the "integrated information" model of consciousness, btw.

But Pixy's definition doesn't work and is not used.
 
You should not find his definition convincing, because it's unworkable. Nobody studying consciousness in the brain uses it, because you can't.
Unfortunately, all the assertions you offer in support of this claim are untrue.

Pixy says it's self-referential information processing, but that goes on all the time in systems that aren't conscious, including non-conscious processes in the brain itself.
No it doesn't, by definition. If my definition disagrees with your definition, all that means is that we have two different definitions. If you're so tied up with your definition that you don't realise that, that's your problem.

If you use that definition, you can't do research at all.
Non-sequitur.

You might want to look at the "integrated information" model of consciousness, btw.
I have. I don't think it's worthwhile. It makes no distinction between complex systems with introspection and complex systems without, and that's the entire point. That's precisely how we identify consciousness. My definition is the definition everyone uses day-to-day, all the way back to Descartes.
 
Consciousness is, at its core, the creation of a point of view. And that's achieved through integration.

At least, that's certainly how it appears now.

And the more we discover about the mechanism, the more that's borne out.
It is true that human consciousness is the only one we really know something about.

We can't just leap off into the air.

What we know about consciousness, we know from studying brains which are conscious.

The synchronous pulsing is a signature process, a defining process. And we see this integrated, synchronous nature reflected in how the phenomenology actually manifests.
Now you seem to limit yourself to only considering human consciousness. If synchronous pulsing is a defining process, there cannot be consciousness without synchronous pulsing. To me it seems like claiming that all transportation has to be on legs!

If we stop looking for those key processes which we see in the conscious systems we observe, then there's nothing to look for anymore.
I do not understand that argument. Why should we stop looking for signs of consciousness if we cannot find key processes of human consciousness? Should we discount a priori that ant hills could be conscious?

It might not. But when none of the components are present that we see in brains which we know are conscious, and no analogous structures are present or apparently could be present, then we have no way of concluding that consciousness could occur.
What do you mean by "analogous structures"? Could computer memories and processes form analogous structures to structures of human consciousness?

I certainly do not think that trees, or anthills are conscious, but I do not see why they cannot be conscious by definition. If signs could be found of self-awareness (which I believe is a sign of consciousness), I would not discount the possibility merely because I could not identify the processes behind it.

Consciousness is, in one sense, either/or, and in another it's a matter of degrees.

In the brain stem is the either/or switch. It's either on or off.
I do not think so. I believe that consciousness grows in a human foetus, and that there will not be a stage where the brain stem suddenly turns consciousness on. I could be wrong, of course.

Defining consciousness is not difficult, it's simply the phenomenology.
On these pages, many have tried, but few have succeeded.

The necessary sophistication is determined by observing normal and impaired brains, and in other ways, to see what activity is required for what task.
You are again limiting the scope to human consciousness.

Consciousness appears to be relatively late and complex.
"Late" in what sense?

But who knows, things are often more simple than they first appear.
Self-awareness can certainly be implemented with amazingly simple means. If intelligence is also necessary, then nothing indicates that it is simple.

But I don't know of anyone in the field who thinks that a fly's neural apparatus is capable of performing consciousness.

Do you?
No. I just read articles in Scientific American. I do not know anybody in the field at all. Do you?
 
Inside your head is color. That's not in the light.

Your analogy applies, but you've interpreted it exactly backward.

We are not "aware of colored light" because there is no such thing. You can't give color to light. How would you do it?

Color is our brain's way of making us aware of light.

Colors and sounds and smells are not external events, they are internal ones.

Photons bouncing off things, molecules in air bouncing around, those are external events.
I do not see the point in this exercise. To me it seems like an attempt to make it impossible to have consciousness outside the biological one that we know.

Our eyes register a light wave with wavelength 505 nm. This is stored internally as "green light". If it was stored internally in a computer brain as hex 1F9, or as ASCII '505' it would still be the same: the internal representation of the wavelength 505 nm. For convenience, we call it "green", and by that we mean "one of the wavelengths that are stored internally as green".
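
A minimal sketch of that point (hypothetical names; the "green" range used is just the rough conventional 495-570 nm band): one measured wavelength carried under several interchangeable internal encodings.

```python
# Toy sketch: one measured wavelength, several equivalent internal labels.
def label_wavelength(nm: int) -> dict:
    return {
        "nm": nm,                # the physical measurement
        "hex": format(nm, "X"),  # 505 -> '1F9'
        "ascii": str(nm),        # 505 -> '505'
        "name": "green" if 495 <= nm <= 570 else "other",  # convenience label
    }

print(label_wavelength(505))
# {'nm': 505, 'hex': '1F9', 'ascii': '505', 'name': 'green'}
```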

All of this is obvious, but why point it out all the time?
 
No it doesn't, by definition. If my definition disagrees with your definition, all that means is that we have two different definitions. If you're so tied up with your definition that you don't realise that, that's your problem.

Actually, it's your problem.

There's nothing I can do about your persistence in using an unworkable definition, or about your failure to find out why it's unworkable.

That allows you to say that it's only an "assertion" of mine, because you don't know what the state of the research is.

Of course, you can't cite anyone using that definition.

Plain and simple, nobody in the field uses your definition because it doesn't work.

I know you have your head in the sand about that, but that's a situation you have caused for yourself.
 
I have. I don't think it's worthwhile. It makes no distinction between complex systems with introspection and complex systems without, and that's the entire point. That's precisely how we identify consciousness. My definition is the definition everyone uses day-to-day, all the way back to Descartes.

Introspection and Descartes. Wow.

You are very behind the times, Pixy.

Please, do read some cognitive neuroscience from the past few years. I promise you it will help.
 
Agreed, but if we're not talking the same language we're not going to get anywhere.


Certainly. I have objections to the word quale, of course, but otherwise that is indisputable.


The problem is, the statement is still not true. The colour violet has a physical reality. The experience of the colour violet has a physical reality. The two are not, of course, remotely the same, but the experience is a map of the physical colour.

It's not a perfect map; it's not even a one-to-one map; but it is a map.


Sure. But terminology matters.

No. A wavelength of light is not a color. There's nothing violetish about the violet wavelength.

However, the problem is that the word "color" was invented before details about light and brain function were known. The word is the problem, not the ideas.

If you interpret my argument with a bit more charity, understanding that language is flawed, I think your objections would likely dissolve. The meaning of the word "color" needs to shift to accommodate concepts that didn't exist when the word was coined. Then a productive conversation can commence.

When I look at a violet thing, I know that it's just reflecting a wavelength of light to my eyes. The violetness I impose on it, however, is a creation of my brain. There's nothing violet about the light. Sure, there's a scientific definition of "violet" as a range of wavelengths of light, but there's also the sensory definition of "violetness": the meaning and associations, learned and prewired, that happen only in the brain (called by some a quale) and can happen without the scientifically prescribed wavelengths of physical electromagnetic radiation being present.

What, precisely, is your objection to the word "quale," Pixy? Flesh it out for us if you will.
 
You should not find his definition convincing, because it's unworkable. Nobody studying consciousness in the brain uses it, because you can't.
What do you mean by "use"? What other definition yields an unambiguous answer like this one, and is not limited to consciousness in human brains?

Pixy says it's self-referential information processing, but that goes on all the time in systems that aren't conscious, including non-conscious processes in the brain itself.

If you use that definition, you can't do research at all.
I think you just need more specialised terms for human consciousness. Your point about non-conscious processes in the brain itself is well made, and I think that especially in cases like this, self-referential information processing comes in handy, because it can resolve such contradictions as decisions that are taken before the subject is aware of them. With Pixy's definition, the decision is taken by a conscious process, without the immediate knowledge of another conscious process, the one responsible for the inner dialogue we normally think of as consciousness.

The definition is simply the phenomenology, the stuff that starts going on when you wake up or start dreaming.
And that is so vague that it is useless.

You might want to look at the "integrated information" model of consciousness, btw.
I just did, and I like it, thank you!

I especially like how consciousness is always measured in degrees, through the number Φ.

My only critique of IIT at this stage is that it is not clear to me that everything with a high Φ is highly conscious. As far as I can see, a computer model could have just as high Φ when the program is halted as when running. It is as if something is missing.

The article that I read (from Scientific American, naturally :) ) mentioned other problems, such as the difficulty of actually calculating Φ (which makes it impractical), and the fact that it is not able to solve the problem of the apparently unconscious processes that are such an important part of our consciousness. Again, I think that something along the lines of Pixy's more process-oriented definition could supplement the theory.
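
On the impracticality point, a toy illustration (not the actual IIT algorithm, which also requires perturbing the system and comparing distributions): exact Φ involves a search over partitions of the system, and the number of bipartitions alone grows exponentially with system size.

```python
# Toy sketch: one reason exact Phi is impractical. IIT-style measures
# minimise over partitions; even the count of bipartitions alone
# (splits of n elements into two non-empty parts) is 2**(n-1) - 1.
def bipartition_count(n: int) -> int:
    return 2 ** (n - 1) - 1

for n in (10, 20, 40, 80):
    print(n, bipartition_count(n))
# 10 511
# 20 524287
# 40 549755813887
# 80 604462909807314587353087
```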

But otherwise, it looks fine.
 
Now you seem to limit yourself to only considering human consciousness. If synchronous pulsing is a defining process, there cannot be consciousness without synchronous pulsing. To me it seems like claiming that all transportation has to be on legs!

It seems to you?

Why does it seem that way to you?

Do you have any reason to say this, or are you just pulling ideas out of the air?
 
I do not see the point in this exercise. To me it seems like an attempt to make it impossible to have consciousness outside the biological one that we know.

Our eyes register a light wave with wavelength 505 nm. This is stored internally as "green light". If it was stored internally in a computer brain as hex 1F9, or as ASCII '505' it would still be the same: the internal representation of the wavelength 505 nm. For convenience, we call it "green", and by that we mean "one of the wavelengths that are stored internally as green".

All of this is obvious, but why point it out all the time?

How would this make it impossible to have non-biological consciousness? I don't see any barrier there at all.

And phrases like "stored internally as green light" appear hopelessly vague to me.

The glaring problem with your post here, though, is that you equate something like "ASCII 505" with phenomenology. The two are completely different.

A label for the thing, and the thing itself, aren't the same.

You can make a machine that responds in any number of ways to the light that makes our brains perform green. It could do all sorts of things, whatever.

But it won't perform green.

First, there's no green in the light, so it can't get green from there.

You can't just make it respond to the light and say "There, it sees green" because as we saw earlier, different brains do different things in response to such light (nothing, performing gray, performing green, performing other colors, performing smells etc.) so the "green" label is (a) totally arbitrary, and (b) not inherent in any way in the light.

Which all means, if you want a machine to perform green, you have to BUILD it so that it performs that action.

You cannot simply build a machine that responds to light which our brains react to by performing green, and expect that it will magically also perform green even though it's not designed and built to.

You can't say, "But it saw the green light" because there's no such thing as "green light".
 
However, the problem is that the word "color" was invented before details about light and brain function were known. The word is the problem, not the ideas.

Indeed, and that's a huge problem right now.

All new paradigms have this problem.

Currently, when we talk about, for instance, "fire", we use the same word for the thing as for the phenomenology it causes in our brains.

Makes communication tricky.
 
What do you mean by "use"? What other definition yields an unambiguous answer like this one, and is not limited to consciousness in human brains?

Dude, please stop saying I'm limiting things to human brains. I'm not.

It's just the case right now that we have to use human brains to study consciousness because we KNOW human brains are conscious and we can talk to humans.

Of course, in some experiments, we can use monkeys. There are ways of getting around the language issue.

When I say "use", I mean if you get into the lab and attempt to study a conscious system -- in a human or a monkey or a dog or whatever -- if you try to work with Pixy's definition, you're stuck b/c there are false positives all over the place.

The brain is full of self-referential info processing, and most of it has no effect on consciousness at all.

We also find it in systems that nobody thinks are conscious.

False positives left and right.

That breadth may look like an advantage for Pixy's ideas, but in practice, it fails completely.
 
I think you just need more specialised terms for human consciousness. Your point about non-conscious processes in the brain itself is well made, and I think that especially in cases like this, self-referential information processing comes in handy, because it can resolve such contradictions as decisions that are taken before the subject is aware of them. With Pixy's definition, the decision is taken by a conscious process, without the immediate knowledge of another conscious process, the one responsible for the inner dialogue we normally think of as consciousness.

The only way you can do that is to label things as "conscious" which clearly are not.

It doesn't work.

Nobody does research with that definition because you can't.
 
My only critique of IIT at this stage is that it is not clear to me that everything with a high Φ is highly conscious.

I agree. One example might be a chorus, in which there is more information when the chorus is singing than when each member sings individually.

So I think IIT might be more useful for measuring integrated information in conscious systems than for deciding which systems are conscious.
 
How would this make it impossible to have non-biological consciousness? I don't see any barrier there at all.
It has been used as an argument against artificial consciousness that a machine cannot "experience green". But then on the other hand, a human cannot experience 1F9 …

And phrases like "stored internally as green light" appear hopelessly vague to me.
And it will remain vague as long as we do not know exactly how experiences are stored.

You can make a machine that responds to light which makes our brains perform green in a number of ways. It could do all sorts of things, whatever.

But it won't perform green.
I do not know what it means to "perform green".

First, there's no green in the light, so it can't get green from there.
It can perform a mapping of wavelengths just like a human brain.

You can't just make it respond to the light and say "There, it sees green" because as we saw earlier, different brains do different things in response to such light (nothing, performing gray, performing green, performing other colors, performing smells etc.) so the "green" label is (a) totally arbitrary, and (b) not inherent in any way in the light.
So now different brains are also "performing" different things when seeing such light. Are you sure this leads to something useful?
 
Dude, please stop saying I'm limiting things to human brains. I'm not.
Well, you did say that processes going on in human brains are "defining", so I think I can be excused for that mistake.

When I say "use", I mean if you get into the lab and attempt to study a conscious system -- in a human or a monkey or a dog or whatever -- if you try to work with Pixy's definition, you're stuck b/c there are false positives all over the place.
How do you know these are false positives? I will readily admit that I find that definition too broad, but it is difficult to narrow it without losing its unambiguity. Perhaps IIT could help.

The brain is full of self-referential info processing, and most of it has no effect on consciousness at all.
Or human consciousness consists of more than one internal consciousness, of which we are only aware of one (if we are not schizophrenic).

We also find it in systems that nobody thinks are conscious.

False positives left and right.

That breadth may look like an advantage for Pixy's ideas, but in practice, it fails completely.
I think that this simple concept needs to be complemented with something more complex, such as IIT, or a measure of intelligence.
 
The only way you can do that is to label things as "conscious" which clearly are not.
I do not see it as clearly as you do. I also gave an example of how unconscious decision-making causes problems for concepts of consciousness that do not allow for hitherto "unconscious" processes to be redefined as "conscious".
 
I do not see it as clearly as you do. I also gave an example of how unconscious decision-making causes problems for concepts of consciousness that do not allow for hitherto "unconscious" processes to be redefined as "conscious".

Do you think it's possible that those processes might have a consciousness of their own?
 
I do not see it as clearly as you do.

Well, that's true.

I also gave an example of how unconscious decision-making causes problems for concepts of consciousness that do not allow for hitherto "unconscious" processes to be redefined as "conscious".

No, you didn't.

You only described an effort to shoehorn non-conscious processes into a jury-rigged definition of consciousness for convenience.
 