The Hard Problem of Gravity

Yes we do.

Yes they do.

That "specific frequency" is the operating speed of the self-referential loop. It's nothing more, and no-one in the field suggests that it is anything more.

Global access does not happen at the neuronal level. It's an outcome of the switching network. Introspection of the "global" state is by self-reference.

Sorry, you're talking nonsense. It's not just that consciousness works by self-reference, we define it as self-reference. We've always defined it as self-reference.

Take a look at Descartes' cogito. He's talking about self-referential information processing.


Wrong. It's not only a proven fact, it's true by definition.

Read Hofstadter.

Pixy,

I find you very adept at creating what looks like a convincing case on the surface. Really, you should work in marketing.

However, if you care to do a word search on the previously linked paper, you will see that the authors do not mention "self-reference" once. Why then do you insist that the authors are talking about self-reference? They are not. Maybe you want to claim that this is what they're talking about, even though they don't use the term specifically? I've read the paper several times, and as far as I can tell this is simply not true.

And it's this standard of accuracy that for me sets the tone for the rest of your posts. You claim that consciousness is always defined as self-reference. Sorry, that's simply not true. Consciousness is frequently defined as phenomenality, and it is clearly this definition to which GWT refers. Dennett, a leading Strong AI theorist, who I guess in principle should agree with you, defines consciousness, I believe, as that to which there is access, in other words phenomenality.

AI theory may well have some value in understanding specifically human consciousness, I don't dispute it. But to say that it has already got there, to me that is just way overstating the case. As far as GWT is concerned, which is the predominating theoretical model, AI runs straight away into problems explaining phenomenality in a system where it's clear that there exist two levels of neuronal processing - conscious and unconscious.

Nick
 
So... you're saying that it's the presence of feedback loops that makes one set of signals conscious and another not?

Nick


No, I'm saying that your argument that self-reference (feedback loops) is not a part of that model -- as I tried to tell you earlier -- is wrong. The nervous system is built on self-referential loops, constantly updating information.

40 Hz event related potentials are self-referential loops.

You said that you wanted to read more about it earlier -- start reading the literature on EEG. I didn't respond earlier because I didn't think you would want to do it. The reading isn't easy and there is a huge literature, but that's what EEGs look at primarily. An EEG measures summated excitatory and inhibitory post-synaptic potentials. If you look at an EEG you will see a graphic representation of the change in potentials over time, and those potentials (in a normal person) change in a largely sinusoidal pattern. That sinusoidal pattern is primarily created by thalamocortical relay loops -- knock out the connection between thalamus and cortex and what you see is very slow waves coming from the cortex itself.

Anytime you talk about EEG activity you are talking about graphical representations of feedback loops. The authors of that paper did not mention it because it is understood that that is what they are discussing -- cortico-cortical, thalamocortical, limbo-cortical, and brainstem-cortical relay loops.

You can also look into the literature on 40 Hz event related potentials. I don't keep up with it myself because it is still at a very early stage and is not particularly powerful as an explanation of anything.
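The relay-loop picture can be illustrated with a toy numerical sketch in Python (not a physiological model: the 40 Hz relay rhythm and 2 Hz intrinsic rhythm are assumed numbers chosen for the example). With the loop intact, the simulated cortical trace oscillates fast; with the loop cut, only a slow rhythm remains.

```python
import math

def cortical_trace(coupled, duration=1.0, dt=0.0005):
    """Toy sketch of a summed cortical potential over `duration` seconds.

    Purely illustrative: the 40 Hz relay rhythm and 2 Hz intrinsic
    rhythm are assumed numbers, not measurements.
    """
    # With the thalamocortical loop intact, the pair (cortex, partner)
    # forms a fast oscillator; with the loop cut, the cortex keeps only
    # a slow intrinsic rhythm (partner then stands in for an internal
    # cortical state variable rather than the thalamus).
    w = 2 * math.pi * (40.0 if coupled else 2.0)
    cortex, partner = 1.0, 0.0
    trace = []
    for _ in range(int(duration / dt)):
        partner += w * cortex * dt   # cortex drives the partner...
        cortex -= w * partner * dt   # ...and the partner feeds back
        trace.append(cortex)
    return trace

def zero_crossings(trace):
    """Count sign changes -- a crude proxy for dominant frequency."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0)
```

Counting sign changes over one simulated second gives roughly 80 for the coupled case (a 40 Hz sinusoid crosses zero twice per cycle) and roughly 4 for the uncoupled case.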

Not-very-interesting historical-biographical information about it -- my wife's grandmother's second husband was the head of the Neuropsychology department at the University of Houston and worked on this stuff. He's actually the guy who talked me out of going to graduate school in psychology and into medical school instead. He died many years ago of mesothelioma, so see where that got him.
 
We are not (directly) conscious of cerebellar function. That does not mean that it's not conscious itself, just that we don't have access.

Fair enough, but we must admit that we cannot know that it is conscious -- except by definition, certainly not by experience -- so I can see where other people stand on it. Even if I don't agree with them.

If we define consciousness as awareness, and if we define awareness as attention directed toward information and the ability to change behavior based on that information, then we must conclude that the neurons in the cerebellum are conscious in some sense.

But it all depends on the definitions used. Some, I think, would say that since we are not aware of that information, then it cannot be conscious, so we must search for a better set of definitions.


It does depend on how the feedback loop operates - it needs to be self-referential, not merely referential.


Right, but I'm a little concerned that the definitions might lack something. But based on what we currently know they're clearly the best thing going. I'm not certain that I could define the process any other way and make sense out of it.

As you well know there are different levels of consciousness with different levels of complexity involved. Perhaps we should repeat that more often?
 
So... you're saying that it's the presence of feedback loops that makes one set of signals conscious and another not?

Nick

I'd like a tight definition of "feedback loop". It seems to me that it implies that an object has its behaviour affected by its environment, and that it in turn affects the environment. If that's the case, then it's a fairly universal arrangement. If it's not that, then what is it?
 
I'd like a tight definition of "feedback loop". It seems to me that it implies that an object has its behaviour affected by its environment, and that it in turn affects the environment. If that's the case, then it's a fairly universal arrangement. If it's not that, then what is it?
No, that's a feedback mechanism. That's the general concept of feedback; we're talking about something more specific.

A feedback loop modifies its behaviour by monitoring its own behaviour.
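That distinction can be made concrete with a toy sketch in Python (purely illustrative: both controllers and their update rules are invented for this example). The first function reacts only to external readings -- feedback in the general cybernetic sense -- while the second also consults a record of its own past outputs, i.e. it monitors its own behaviour.

```python
# Minimal sketch (illustrative only): two controllers adjusting an output.
# The plain feedback mechanism reacts to the environment; the
# self-referential loop also monitors its own previous behaviour.

def environment_feedback(readings):
    """Adjust output based solely on external readings."""
    output = 0.0
    for r in readings:
        output += 0.5 * (r - output)  # react to the environment only
    return output

def self_referential(readings):
    """Adjust output based on readings *and* a record of its own outputs."""
    output = 0.0
    history = []  # the system's record of its own behaviour
    for r in readings:
        output += 0.5 * (r - output)
        history.append(output)
        # self-monitoring: damp the change if our own recent outputs
        # show we are swinging back and forth
        if len(history) >= 3 and \
                (history[-1] - history[-2]) * (history[-2] - history[-3]) < 0:
            output = (history[-1] + history[-2]) / 2
    return output
```

On a steady input the two behave identically; on an oscillating input the self-monitoring version damps its own overshoot, so the results diverge.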
 
I'd like a tight definition of "feedback loop". It seems to me that it implies that an object has its behaviour affected by its environment, and that it in turn affects the environment. If that's the case, then it's a fairly universal arrangement. If it's not that, then what is it?

Yes, it is fairly universal.

So is the arrangement of particles in atoms -- a nucleus of protons and neutrons, with electrons floating around outside.

Yet, despite the fact that their "arrangements" are universal, gold and fluorine behave very differently.

So what is your point?
 
What I mean is that this happens in the brain as well as computers.
Doesn't matter where it happens. Perhaps there's a communications issue.

This is what I'm referring to:
Cybernetics (Wikipedia):
Cybernetics is preeminent when the system under scrutiny is involved in a closed signal loop, where action by the system in an environment causes some change in the environment and that change is manifest to the system via information / feedback that causes the system to adapt to new conditions: the system changes its behaviour.

This sounds pretty close to your definition of feedback loops to me, so I was wondering how close it was.
 
Doesn't matter where it happens. Perhaps there's a communications issue.
Yep, sorry, I missed your meaning.

But no, that's not what I'm talking about. That's feedback, sure, but I'm talking about information processing systems that monitor their own activity directly, not just ones that monitor changes to the state of the environment (even when those changes are the result of the activity of the system).
 
And again the category error. The apple doesn't have mathematical properties. It has physical properties which can be mathematically and scientifically modelled. These include its chemical composition and its shape.

Mathematical properties and physical properties are one and the same.

You clearly don't even understand what a "category" is, so I find it ironic that you would accuse someone else of making a category error.

Properties are just categorization. Suppose I categorize an apple as "round." Is that a mathematical property or a physical property?

Physical objects always behave in certain ways due to their physical properties. They don't have mathematical properties. We use mathematics to approximately model behaviour. Objects in the real world are not doing mathematics.

This is a critical point, because if you think that physical objects are doing mathematics, then you can fall into the trap of believing that a mathematical simulation is the same as the thing being simulated. In the end, you can stop believing in reality altogether.

Wrong.

Mathematics is a way to describe reality. We use mathematics to describe behavior. In fact, it is the only way to fully describe behavior.

That is kind of the whole point of mathematics.

There is no "magic world of mathematics" floating around independent of reality. It is a description of reality and as such it is entirely dependent on reality. No reality, no mathematics.

This is sort of trivial to understand, since if there was no reality, there would be no humans, and if there were no humans, there would be no mathematics.

It's always possible to abstract some behaviour of any system. However, it's important to recognise that this always involves discarding information.

In the case of the generation of consciousness, it's possible to conjecture that the essential element is the digital network. Certainly the brain can be abstracted as a digital network. But we don't know whether this is leaving out an essential element.

If there is neurological research indicating without ambiguity that issues of timing and of biochemical processes have no role in creating consciousness, then that would be interesting. How one would perform such experiments, I don't know, given that brains are fairly sensitive and tend to stop working altogether if subjected to too much interference. But such research would be far more convincing than the traditional AI assertion technique.

Completely irrelevant, and your claim to the contrary betrays how little you know.

Any behavior resulting from timing or biochemical processes can also be exhibited by networks of transistors.

If your tire went flat, and you couldn't find another tire, but some engineer came up and said "here, replace the wheel with this widget, it will behave just like the tire as far as your car is concerned," what would you tell him?

Would you stubbornly insist that the behavior of the tire isn't what allows the car to go? Would you insist that it is instead some magical property of an actual rubber tire that can't be exhibited by any other entity in the universe, and that no matter what his widget does the car won't be "going" at all?

It is obvious that a human brain and a digital electronic computer share certain properties which can be mathematically modelled in the same way. It's also obvious that there are other properties which they do not share.

No, it is not obvious at all. There is no scientific reason why one could not replace a neuron in a brain with a suitably advanced cybernetic device programmed to emulate that neuron and have the brain function exactly as before. None.

If you can think of one, feel free to share it with us.

To assume that two systems which share any property are thereby entirely equivalent in function is plain silly.

Many of the theorems of mathematics and computer science seem silly to those who are not educated in the relevant subjects.

Luckily, "silly" doesn't mean squat in science. Sound mathematical descriptions do.

I find this "I know all about this and you don't" attitude annoying and a little desperate. There are a lot of people thinking about this subject who are smarter than anyone posting on this thread. They manage to disagree on almost every point. There might be a consensus among people researching AI, but it does not extend to everyone else who has relevant knowledge.

If your arguments are good enough (which they clearly aren't) then you won't feel the need to constantly proclaim how smart you are.

I am not proclaiming how smart I am. I am proclaiming how uneducated in this field you are.

If you want to establish that the creation of consciousness (as well as other functions of the brain) is a matter of only one neurological behaviour out of many, then you will need to demonstrate that. Of course, you could fall back to the default procedure of saying "all the people educated in this subject agree". It's not true, but it's at least quick and gives a nice warm glow.

If you want to establish that the substrate of consciousness might be limited to biological neural networks, then you will need to demonstrate that.

It is quite ironic that you are able to imagine how all these different systems in the universe might act as switches -- when it suits your argument -- yet you are completely unable to imagine how anything besides a biological brain might exhibit all the behaviors of consciousness.

Perhaps because the latter doesn't suit your argument?
 
No, I'm saying that your argument that self-reference (feedback loops) is not a part of that model -- as I tried to tell you earlier -- is wrong. The nervous system is built on self-referential loops, constantly updating information.

Hi INW,

I do feel I'm being misunderstood here.

I am not saying that self-reference feedback loops are not part of the model. My contention was that they are not what makes the difference between conscious and unconscious processing. My contention is that consciousness (as in phenomenal awareness) is not inherently related to self-reference.

I could be wrong, but this is my contention.

40 Hz event related potentials are self-referential loops.

You said that you wanted to read more about it earlier -- start reading the literature on EEG. I didn't respond earlier because I didn't think you would want to do it. The reading isn't easy and there is a huge literature, but that's what EEGs look at primarily. An EEG measures summated excitatory and inhibitory post-synaptic potentials. If you look at an EEG you will see a graphic representation of the change in potentials over time, and those potentials (in a normal person) change in a largely sinusoidal pattern. That sinusoidal pattern is primarily created by thalamocortical relay loops -- knock out the connection between thalamus and cortex and what you see is very slow waves coming from the cortex itself.

I can follow that. I used to have one of those things where you stuck sensors on your temples and plugged it into Windows 3.1!

Anytime you talk about EEG activity you are talking about graphical representations of feedback loops. The authors of that paper did not mention it because it is understood that that is what they are discussing -- cortico-cortical, thalamocortical, limbo-cortical, and brainstem-cortical relay loops.

You can also look into the literature on 40 Hz event related potentials. I don't keep up with it myself because it is still at a very early stage and is not particularly powerful as an explanation of anything.

I dare say something very exciting will one day be discovered in this area, connecting phenomenality to neurology. What's so intriguing to me is that, of course, in reality no one is actually doing the experiencing when we investigate this sub-brain arena. Phenomenality must have been some quirk of evolutionary adaptation, quite possibly involving feedback loops, I will happily admit, and the notion of selfhood came along later.

Not-very-interesting historical-biographical information about it -- my wife's grandmother's second husband was the head of the Neuropsychology department at the University of Houston and worked on this stuff. He's actually the guy who talked me out of going to graduate school in psychology and into medical school instead. He died many years ago of mesothelioma, so see where that got him.

Sounds like he still made it to a reasonable age, if my maths is right, unless your wife's grandmother went for a much younger guy.

Nick
 
Pixy,

I find you very adept at creating what looks like a convincing case on the surface. Really, you should work in marketing.

However, if you care to do a word search on the previously linked paper, you will see that the authors do not mention "self-reference" once. Why then do you insist that the authors are talking about self-reference? They are not. Maybe you want to claim that this is what they're talking about, even though they don't use the term specifically? I've read the paper several times, and as far as I can tell this is simply not true.
As Icheneumonwasp pointed out, they don't talk about it in obvious terms, because it's assumed. That's simply how the brain works, so any model built to explain brain function is built on self-referential feedback loops. But if you understand what they are building on, everything they are talking about is self-reference.

And it's this standard of accuracy that for me sets the tone for the rest of your posts. You claim that consciousness is always defined as self-reference. Sorry, that's simply not true.
Sorry, yes it is.

Consciousness is frequently defined as phenomenality, and it is clearly this definition to which GWT refers. Dennett, a leading Strong AI theorist, who I guess in principle should agree with you, defines consciousness, I believe, as that to which there is access, in other words phenomenality.
That's not consciousness. That's awareness. They are not the same thing.

And who, exactly, defines consciousness as "phenomenality", and what do they think it's supposed to mean?

AI theory may well have some value in understanding specifically human consciousness, I don't dispute it. But to say that it has already got there, to me that is just way overstating the case.
In what way is SHRDLU not conscious?

As far as GWT is concerned, which is the predominating theoretical model
Says who? Apart from you, that is.

AI runs straight away into problems explaining phenomenality in a system where it's clear that there exist two levels of neuronal processing - conscious and unconscious.
No it doesn't. Every part of this claim is false. This has been addressed dozens of times, but you don't appear to be paying attention.

Read Hofstadter.
 
So... you're saying that it's the presence of feedback loops that makes one set of signals conscious and another not?
NO, DAMMIT!

Signals aren't conscious. Sets of signals aren't conscious. How can they be? A signal is just a signal.

Consciousness is a process of self-reference. The feedback loops are the structure of the self-reference.

The feedback loop means that the signal is a signal in a conscious system. It's still just a signal. There aren't two types of neural processing, as you keep insisting, only one. But there are different arrangements of subnetworks. Some of those arrangements are feedback loops, allowing for self-reference, and hence consciousness.

But - and here's another point you have failed to grasp - there's not just one such loop, there's lots of them. And each one can be regarded as a conscious entity in itself. You don't have access to their internal states, because you are another conscious entity. (In the case of split-brain patients, you are two other conscious entities.)

But you synthesize the information arriving from all this other processing into a general model of the world - and a model of your mental processes as well. (The latter being rather less accurate than the former.)

There is no global anything at this level. Neurons cannot talk to anything but their adjacent neurons. GWT is a synthesis that sits on top of all this processing. GWT is impossible without this self-reference.

Read Hofstadter. Listen to the lecture series. All of this is covered.
 
As Icheneumonwasp pointed out, they don't talk about it in obvious terms, because it's assumed. That's simply how the brain works, so any model built to explain brain function is built on self-referential feedback loops. But if you understand what they are building on, everything they are talking about is self-reference.

Pixy,

You are missing big chunks out here. If you read the paper, the authors ask themselves why global access should equal consciousness and effectively admit that they don't know, it's just a contention of GWT...

Gaillard said:
Why would this ignited state correspond to a conscious state? The key idea behind the workspace model is that because of its massive interconnectivity, the active coherent assembly of workspace neurons can distribute its contents to a great variety of other brain processors, thus making this information globally available. The global workspace model postulates that this global availability of information is what we subjectively experience as a conscious state.

You are confusing...

(i) the innate presence of feedback loops in vibrating systems implicated in consciousness

with

(ii) the notion that self-reference inevitably leads to consciousness

The scientists know that the waves of coherent neuronal activity are happening but they do not know that this definably is consciousness. If they did they would say so.

This whole thing is not made any clearer by your insistence that consciousness is defined as "self-reference." It's crystal clear, I submit, that even in the paper we're discussing, it is not defined as such. Just read the quote above. They are talking about phenomenality. They are talking about what there is access to.

Dan Dennett, the godfather of Strong AI, defines it like this. AFAICT, you are pretty much alone in defining it as self-reference. And doing so, for me, does you no favours, because at every point where you might jump in awareness and actually realise the nature of the problem being discussed, you simply leap back behind a definition which just doesn't fit the context. And your brain goes back around in its own feedback loop, something it will no doubt continue to do until you can bring yourself to creep out from behind this rigid protective definition and see if actually there is something meaningful being discussed here.

That's not consciousness. That's awareness. They are not the same thing.

Then you are using the terms differently from myself and INW. Read his post here please.

And who, exactly, defines consciousness as "phenomenality", and what do they think it's supposed to mean?

A big chunk of the problem is that there simply is no universally agreed definition specific to this domain of inquiry. But "what we have access to" is probably as good as it gets, and this is basically phenomenality.

In what way is SHRDLU not conscious?

I don't know if SHRDLU is conscious or not. And programming it to say "Yes, I am" doesn't, I'm afraid, convince me.

At some point you are going to have to accept that the reality of human brains leads us into certain problems that Strong AI struggles with...

* There appears to be both conscious and unconscious processing
* Conscious processing does appear to be associated with global access
* The neuronal markers for conscious access do appear to involve a considerable state change. (Even scientists use terms like "ignition" in describing what happens neuronally).

Thus, evidence is mounting to suggest that consciousness may not be quite as simple as AI thinks it is. We don't know yet.

Read Hofstadter.

I'm going to pick up GEB again. I don't have a problem with Hofstadter.

Nick
 
Pixy,

You are missing big chunks out here. If you read the paper, the authors ask themselves why global access should equal consciousness and effectively admit that they don't know, it's just a contention of GWT...
Yeah. And?

You are confusing...

(i) the innate presence of feedback loops in vibrating systems implicated in consciousness
The what?

with

(ii) the notion that self-reference inevitably leads to consciousness
No. One more time, Nick:

Consciousness is self-referential information processing.

The scientists know that the waves of coherent neuronal activity are happening but they do not know that this definably is consciousness. If they did they would say so.
Right. So?

This whole thing is not made any clearer by your insistence that consciousness is defined as "self-reference."
That's what the word means. If that's not what you mean, you're not talking about consciousness.

Again I point you to Descartes' cogito. And Hofstadter.

It's crystal clear, I submit, that even in the paper we're discussing, it is not defined as such. Just read the quote above. They are talking about phenomenality.
They what?

They are talking about what there is access to.
There is no access to. There is only access by.

Dan Dennett, the godfather of Strong AI, defines it like this.
Cite.

AFAICT, you are pretty much alone in defining it as self-reference.
Nope. I direct you again to Descartes and to every right-minded philosopher and neuroscientist since.

And doing so, for me, does you no favours, because at every point where you might jump in awareness and actually realise the nature of the problem being discussed, you simply leap back behind a definition which just doesn't fit the context.
Nick, your problem is that you're not talking about consciousness. You're talking about awareness. They are not the same.

And your brain goes back around in its own feedback loop, something it will no doubt continue to do until you can bring yourself to creep out from behind this rigid protective definition and see if actually there is something meaningful being discussed here.
Read Hofstadter.

Then you are using the terms differently from myself and INW. Read his post here please.
Have. What's your point?

A big chunk of the problem is that there simply is no universally agreed definition specific to this domain of inquiry. But "what we have access to" is probably as good as it gets, and this is basically phenomenality.
You mean awareness. Not consciousness. And certainly not "phenomenality".

I don't know if SHRDLU is conscious or not. And programming it to say "Yes, I am" doesn't, I'm afraid, convince me.
That is not remotely what it does. It not only interprets and follows instructions - working out the details of how to follow them, and guessing what you mean when the instructions are vague - it can also explain its own actions.

So again, is SHRDLU conscious?

At some point you are going to have to accept that the reality of human brains leads us into certain problems that Strong AI struggles with...
No. These problems exist only in your lack of understanding.

* There appears to be both conscious and unconscious processing
Explained by self-reference.

* Conscious processing does appear to be associated with global access
There is no global access. It's a synthesis.

* The neuronal markers for conscious access do appear to involve a considerable state change. (Even scientists use terms like "ignition" in describing what happens neuronally).
No they don't. You've completely misunderstood everything they are talking about.

Thus, evidence is mounting to suggest that consciousness may not be quite as simple as AI thinks it is.
Nope.

We don't know yet.
Yes we do. You're still wrong.

I'm going to pick up GEB again. I don't have a problem with Hofstadter.
Good. Because he will tell you exactly what I have told you. Albeit better, because he has 700 pages to work with.
 
Yeah. And?

They are saying that they don't know why the phenomenon of global access should equate to consciousness. And by consciousness here they are referring to phenomenality.

They're not talking about "what consciousness is" mechanistically (where your definition might or might not be useful). Rather, they are trying to account for the phenomenon of consciousness itself (subjective experience) in material, neuronal terms. They are investigating brain activity to try to understand how certain neuronal behaviour creates phenomenality. And the result thus far is that they have discovered certain brain activity associated with consciousness, but they do not yet understand just why or how this relates to actual phenomenal consciousness. It is not clear yet.

Consciousness is self-referential information processing.

But this definition is not appropriate to the context of the investigation Gaillard, Dehaene et al. are involved in. They are trying to uncover just how certain neuronal activity creates the actual phenomenon of consciousness. You are effectively saying "it just does it," but this is problematic because there does appear to be concurrent unconscious processing.

Again I point you to Decartes' cogito. And Hofstadter.

I point you to Damasio's Descartes' Error. If you're taking the cogito as axiomatic then of course your arguments are going to be circular and unfruitful. Descartes' cogito applies only to the narrative self. Likely Hofstadter too.



You'll have to give me a little more time for that one.


Nope. I direct you again to Descartes and to every right-minded philosopher and neuroscientist since.

Descartes doesn't really seem to be held in such high regard these days, from what I read. More the useful source of a number of intriguing and engaging errors.


Have. What's your point?

He's using the terms "consciousness" and "awareness" as meaning the same, AFAICT. When one is discussing phenomenality and trying to understand phenomenal consciousness this is how it is.


So again, is SHRDLU conscious?

I don't know.

Nick
 
