The Hard Problem of Gravity

No. The "Global Workspace" is a synthesis of those communications. If you re-wire the brain with silicon - or anything else - you still have those communications, and you still get that synthesis.

There is no global access. There's just communication between neurons.

And P-Zombies are always nonsensical. If you think you have an argument involving P-Zombies, it just means you got something wrong somewhere.

My point is that we simply don't know enough yet about actual human phenomenal consciousness. It may be that it is precisely as you define it or it may be simply a different animal altogether. This position is reinforced by Bernard Baars, the creator of GWT, as we've earlier discussed.

What you do is create a rigid definition of consciousness as it's understood in AI and then simply assume that human consciousness is completely qualitatively analogous. You then defend your definition. For me this is not science but more a word game.

When considering GWT it's clear that AI may not be analogous with human consciousness. The global access state may be switched by a self-referencing loop. It's quite possible though we don't know this yet. But it fundamentally is not a self-referencing loop itself. It's simply a means of rapid information transfer.

Nick
 
Well, there are programs that remember information.

Not in the sense that human beings remember information. Computers put data in and put data out. Human beings construct and experience memory, on a continuous basis.
 
My point is that we simply don't know enough yet about actual human phenomenal consciousness.
Yeah, we do.

We know it's produced by the brain. That is sufficient to demonstrate that you are wrong.

What you do is create a rigid definition of consciousness as it's understood in AI and then simply assume that human consciousness is completely qualitatively analogous. For me this is not science but more a word game.
Wrong.

What I (Hofstadter, Dennett, and many others) did was to examine the fundamental difference between systems we call conscious and systems we don't, and then realise that this is our operational definition. You keep piling random baggage on whenever you get an answer you don't like, but our fundamental definition of consciousness is exactly as I have described it.

When considering GWT it's clear that AI may not be analogous with human consciousness.
Wrong. GWT is equally applicable to AI and humans - and not very interesting.

The global access state may be switched by a self-referencing loop.
There is no global access state. That's impossible. All there is is signals passing from neuron to neuron.

It's quite possible though we don't know this yet.
There has to be self-reference. That is what, at its core, consciousness means.

But it fundamentally is not a self-referencing loop itself.
Wrong. It can't not be a self-referencing loop.

It's simply a means of rapid information transfer.
Completely wrong. There is no such process. All there is is signals passing from neuron to neuron. The global workspace is a synthesis - or, if you prefer, a metaphor - for that collective neural activity.
 
Wrong.

What I (Hofstadter, Dennett, and many others) did was to examine the fundamental difference between systems we call conscious and systems we don't, and then realise that this is our operational definition. You keep piling random baggage on whenever you get an answer you don't like, but our fundamental definition of consciousness is exactly as I have described it.

You're not describing consciousness, then. You've redefined the problem to be something it isn't and then claimed to have solved it. You're basically just playing a labeling game, rendering your conclusions a complete non sequitur. In essence, all your argumentation in support of your position is...how shall we say it...? Irrelevant.
 
Not in the sense that human beings remember information. Computers put data in and put data out. Human beings construct and experience memory, on a continuous basis.

I don't need to construct the memory of being taught bond pricing in order to price a bond. Once learned I simply do it automatically. I don't have to think about it.
 
I don't need to construct the memory of being taught bond pricing in order to price a bond. Once learned I simply do it automatically. I don't have to think about it.

That's how human memory works. It makes you a different person. It makes you work differently. If you want to consciously recall being taught bond pricing, you might well not be able to do it, but you have the effects of it memorised nevertheless.

I suppose that 'dodger and Pixy will claim that that is just what program X can do. I don't think it is.
 
True, but a preference that isn't rational doesn't have to have a rationale.

No. Irrational choices can easily have rationales.

Unless you're saying that preferences are truly without any cause at all.
 
That's how human memory works.

Technically it is one aspect of one part of how human memory works.

I suppose that 'dodger and Pixy will claim that that is just what program X can do. I don't think it is.

Claiming is unnecessary when such self-rewriting programs exist.
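For concreteness, here is a minimal Python sketch (a hypothetical toy, not any specific program referenced in the thread) of the point made above about memorised effects: training alters stored state, and later behaviour uses that state directly, without replaying the training episode that produced it.

```python
# A toy "learner" whose stored state accumulates from training and
# then shapes behaviour directly. The memory here is not a replayed
# recording of being taught, but the retained effect of the teaching.

class Learner:
    def __init__(self):
        self.weights = {}  # accumulated effects of past training

    def train(self, key, value):
        # Each training episode alters the stored state...
        self.weights[key] = value

    def act(self, key):
        # ...and later behaviour consults that state, with no record
        # of the episode that produced it.
        return self.weights.get(key, 0)

learner = Learner()
learner.train("bond_price", 98.5)
print(learner.act("bond_price"))  # -> 98.5
```

The sketch mutates state rather than literally rewriting its own code, but the relevant property is the same: the program works differently after learning than before.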
 
That's how human memory works. It makes you a different person. It makes you work differently. If you want to consciously recall being taught bond pricing, you might well not be able to do it, but you have the effects of it memorised nevertheless.
That's how memory works. You remember stuff. Whether you're a pocket calculator or a Princess of Mars.

I suppose that 'dodger and Pixy will claim that that is just what program X can do. I don't think it is.
Try pulling all the memory out of your computer. I promise you, it will be a different person.
 
No. Irrational choices can easily have rationales.
I agree with that. I'm only saying that it doesn't have to have a rationale.
Unless you're saying that preferences are truly without any cause at all.
Only if you equate rationale with cause. I don't.
 
I agree with that. I'm only saying that it doesn't have to have a rationale.

And I'm saying such a thing must be acausal.

Only if you equate rationale with cause. I don't.

They are equivalent things. Having a reason for doing something is much the same as saying the reasons for doing something are why I did something.
 
And I'm saying such a thing must be acausal.



They are equivalent things. Having a reason for doing something is much the same as saying the reasons for doing something are why I did something.

Exactly. People somehow manage to turn 'having a reason for making a choice' into 'having no choice'.

People always have reasons or 'causes' for the actions they take. If you know someone really well, you could probably predict what actions they would take in certain circumstances.

How you can jump from these simple facts to 'we have no choice in our actions' is beyond me.
 
Where do you get "forever"? At present, consciousness (and all those difficult to define words that are involved with humanity) aren't in the realm of science. Maybe one day they will be.

... where by "maybe one day they will be" you mean "If I make the disclaimer that 'present day magic might be future technology' then people can't accuse me of advocating magic."

Guess what -- I am still gonna accuse you. You are advocating magic.

If consciousness can be described mathematically, then it is in the realm of science. If not, it is magic -- forever. You have made your choice. Tell the other wizard's guild members I said hi.
 
Rocket, the rationale isn't "Oh, I'm awake so therefore I must be able to have qualia". The experience of being awake is qualitative; any subjective experience at all is qualia. Its not an additional property of being conscious and awake -- it is consciousness.

I'm literally stunned that you don't seem to be picking up on this in the slightest :confused:

Well, I didn't know your definition of "conscious" was so broad.

We can explain waking behavior. We can explain subjective experience -- it is simply what it is like to be something.

So where is the HPC in all of this? If "qualia" are merely subjective experience then they aren't a mystery at all.

I already gave an example of such in the thought experiment I proposed in post #353. The subject of the thought experiment is conscious, in the physiological sense, but does not have knowledge of anything because they are sensorially cut off from their environment.

Consciousness does not have a tautological relationship with knowledge; it is merely the necessary requisite for it. Just as an object cannot register weight unless it has mass, so an entity cannot have knowledge unless it is conscious. There is absolutely no logical contradiction or circular reasoning in this statement. For the life of me, I cannot understand why you don't see this.

Alright. But if you are going to define consciousness so that such a creature is conscious -- even though it has zero knowledge of itself -- then there isn't anything to pursue. Your definition is equivalent to mere existence, and thus is utterly useless.

That's just the problem. Neither you nor anyone else has an operational definition of qualitative experience [i.e. consciousness]. There are various methods of defining and modeling computational functions but absolutely nothing in the way of describing how such functions translate into conscious thought.

You are dead wrong on both points.

Pixy, for example, has a very simple operational definition of consciousness. You disagree with it. So what. That doesn't mean he doesn't have a definition.

And I am very capable of describing how computational functions translate into conscious thought -- under my definition of consciousness.

So we come back to the HPC, apparently -- Pixy and I are entirely able to describe what we are talking about while you sit there and shake your head and say "No, you are still missing something. What it is, I cannot put my finger on, but it is something."




Okay, so what is the difference between neurons of a conscious brain and an unconscious brain?

The flow of information between them.

What is it about the activity of some neurons that produces qualitative experiences?

Self reference and reasoning.

How do the contributions of all those neurons come together in the unified experience of being conscious?

Any system that references itself and reasons can be said to be conscious, under various definitions of "conscious."
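As a deliberately trivial reading of "references itself and reasons", here is a hypothetical Python sketch of a loop whose next step depends on an inspection of its own recorded past. It is a toy under that minimal definition only, not a claim that the snippet is conscious:

```python
# A toy self-referencing loop: each step inspects the system's own
# recorded history and uses that inspection to decide the next action.

def self_referencing_loop(steps):
    state = {"value": 1, "history": []}
    for _ in range(steps):
        # The system examines its own past behaviour...
        recent = state["history"][-1] if state["history"] else 0
        # ...and "reasons" about it: keep growing until a limit is seen.
        if recent < 10:
            state["value"] += 1
        state["history"].append(state["value"])
    return state

final = self_referencing_loop(5)
print(final["history"])  # -> [2, 3, 4, 5, 6]
```

The point of the sketch is only that self-reference, in this minimal sense, is an ordinary computational structure, not something exotic.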

If you want to know how neurons come together to form human consciousness, be prepared to spend a few years with your head in books -- and that is just to learn what we haven't figured out yet. Talk to Nick227; he seems to be an expert on human consciousness theories.

Is an organism simply having neurons sufficient for generating consciousness?

Is an object simply having carbon atoms sufficient for generating a diamond? If you pour some transistors into a box, will numbers be calculated?

What a silly question.
 
The question about more complex programming systems is: can this multi-processor, multi-level, extremely complicated programming intelligence system be emulated - even in principle - as a series of single instructions in one big program? If it can, then that is what it is, and we have no reason to believe that such a program would develop into something else simply by complicating it.
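The emulation claim above can be illustrated with a hedged Python sketch: treat each "processor" as a generator that yields after every unit of work, and interleave them from a single sequential loop. The names and workloads are invented for illustration; the underlying point, that concurrent execution can be serialised into one instruction stream, is standard.

```python
# Sequential emulation of "parallel" processes: each processor is a
# generator yielding after one instruction's worth of work; a single
# loop interleaves them, producing one big serial program.

def processor(name, n):
    total = 0
    for i in range(n):
        total += i
        yield  # one "instruction" executed; hand control back
    results[name] = total

results = {}
procs = [processor("A", 3), processor("B", 5)]

# Round-robin scheduler: one sequential instruction stream.
while procs:
    for p in list(procs):
        try:
            next(p)
        except StopIteration:
            procs.remove(p)

print(results)  # -> {'A': 3, 'B': 10}
```

The serial program computes exactly what the "parallel" one would; nothing new emerges from the interleaving itself.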

So there are things that can't be described mathematically?

Can you name a few? Because that would be quite a move for your philosophical career, to invalidate all monisms. You might even get a few books out of it.
 
You're not describing consciousness, then. You've redefined the problem to be something it isn't and then claimed to have solved it. You're basically just playing a labeling game, rendering your conclusions a complete non sequitur. In essence, all your argumentation in support of your position is...how shall we say it...? Irrelevant.

You claim we are not describing consciousness, yet you cannot describe where we are incorrect.

What a curious situation...
 
... where by "maybe one day they will be" you mean "If I make the disclaimer that 'present day magic might be future technology' then people can't accuse me of advocating magic."

Guess what -- I am still gonna accuse you. You are advocating magic.

If consciousness can be described mathematically, then it is in the realm of science. If not, it is magic -- forever. You have made your choice. Tell the other wizard's guild members I said hi.

This is absurdity. If I say there is no mathematical model that describes consciousness, then that's the same as believing in magic? So if physicists claim that there's no reconciliation at present between quantum theory and the General Theory of Relativity, they can be classed with Harry Potter and Gandalf?

This is not only nonsense, it's the kind of nonsense that opens the door wide to all kinds of pseudo-science. Why not believe the astrologers and the homeopaths? They have a theory, and that's all that matters, right? Never mind if there is supporting evidence.
 
