The Hard Problem of Consciousness

So what you're saying is that it's relative to the observer, right? I guess that's one way to look at it, and I don't think it's necessarily wrong. But, by the same token, I prefer to look at things from a bird's-eye view. If an 'environment', simulated or otherwise, is generated via logical ops then, to me, it counts as being computational.

That is fine, but like I said, you then need another word to distinguish what a puddle does from what a bacterium does.

while I'm convinced that the physical medium of the computation is just as, if not more, important.

Pixy is right -- the Church-Turing thesis shows that substrate is irrelevant in principle.

In practice, though, you are probably correct -- making an android act just like a human is probably going to converge on constructing a bona fide human from scratch because that is by far the easiest way to do it.

Given enough resources, though, one could make a gigantic Gundam act just like a human. Heck, we could make a gigantic Gundam think it was a human, if we were tricky enough. But that would be like making a calculator using buckets and pulleys (or stones).
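
If it helps to see the "irrelevant in principle, painful in practice" point concretely, here's a throwaway Python toy of my own (nothing to do with the formal statement of the thesis): the same addition carried out on the CPU's integer hardware and by shuffling "stones" between buckets. Same computation, wildly different substrates.

Code:
def add_native(a: int, b: int) -> int:
    """Addition on a convenient substrate: the CPU's integer hardware."""
    return a + b

def add_stones(a: int, b: int) -> int:
    """Addition on a clunky substrate: moving stones between buckets,
    one stone at a time. Each list element stands in for one stone."""
    bucket_a = ["stone"] * a
    bucket_b = ["stone"] * b
    result_bucket = []
    while bucket_a:
        result_bucket.append(bucket_a.pop())
    while bucket_b:
        result_bucket.append(bucket_b.pop())
    return len(result_bucket)

# Substrate-independent in principle...
assert add_native(3, 4) == add_stones(3, 4) == 7
# ...but try add_stones(10**9, 10**9) and the "in practice" argument wins.

The point of the toy: any physical arrangement that realises the right logical structure computes the same function, but the cost of realising it can differ by orders of magnitude.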
 
Pixy is right -- the Church-Turing thesis shows that substrate is irrelevant in principle.

In practice, though, you are probably correct -- making an android act just like a human is probably going to converge on constructing a bona fide human from scratch because that is by far the easiest way to do it.

Given enough resources, though, one could make a gigantic Gundam act just like a human. Heck, we could make a gigantic Gundam think it was a human, if we were tricky enough. But that would be like making a calculator using buckets and pulleys (or stones).

I suppose that once human mental functions are understood in finer detail, attempts to simulate them will follow suit. It could be that it's possible to simulate what humans experience as consciousness in abiotic materials. But, with that in mind, such a simulation may be so horribly inefficient that it would be necessary to create a medium suitably analogous to biology in order to support 'real' consciousness.

It all really depends on whether it's necessary for an entity to be 'alive' in order for it to generate consciousness efficiently, or at all. There are significant physical differences between organisms and 'inanimate' systems. I suspect very strongly that such differences will be relevant in creating artificial consciousness. Can't really know for sure, atm.
 
AkuManiMani said:
I have quite a handful of memories from when I was a toddler and even a few vague ones from late infancy.

What years, specifically? I have one memory from my second birthday, now obscured by remembering the memory of the memory of the memory, and a few bits of my third year. From my fourth year onwards, things become suddenly sharper.

I've asked my mom what my age was during the events I could recall. Judging from what she's told me, they go back at least as early as 1-2 years of age. At least one memory was from before I'd learned to walk. It was in my great aunt's living room and I was sitting on my mother's lap. My older cousin walked over and tried to play with me, but something about him repulsed and upset me, so I hit him and plunged my face into my mother's chest. When I brought up the memory to my mother, she laughed and said that she remembered that particular event. According to her, my cousin always had poor oral hygiene and I was hiding from his stinky breath :covereyes

Regardless of my exact age at the time of the memories, the point is that they show that consciousness precedes language.
 
Yes, I believe the behaviour could be mimicked by a computer, for sure. I'm still not convinced about whether the computer sees "in the light" yet, though.
The point is, that is just another behaviour.

It strikes me that if we accept natural selection and that unconscious processing is possible, then there must have been some highly favoured evolutionary event that led to actual phenomenality.
Forget "phenomenality". Forget you ever heard the expression. It's a philosophical dead end.

Until we know more about this I doubt anyone can say whether AI is truly analogous to human consciousness.
What, exactly, do we need to know, and why?

The other alternative is that, qualitatively, there is actually only one form of consciousness (unconsciousness doesn't exist) and that the human is actually fully conscious but our apparent experience of consciousness is acutely limited.
Um, what? I'm having a hard time unpacking that. Much of it appears correct, but what do you mean by "unconsciousness doesn't exist"? If you mean that there is always some conscious processing going on in the brain short of actual death and decay, then yes, that's correct (by my definition of consciousness, at least). But that doesn't mean that the top-level consciousness -- the one we recognise as us -- is always active.
 
Pixy is right -- the Church-Turing thesis shows that substrate is irrelevant in principle.
Yep.

Now, if we were arguing about what is possible in practice (particularly right now), we'd arrive at a different answer. For some of the questions, anyway. SHRDLU is still SHRDLU. :)
 
Forget "phenomenality". Forget you ever heard the experession. It's a philosophical dead end.

But this is, and has been for over a decade, the core issue in consciousness research: the hard problem of consciousness (HPC). There are theoretical models that allow one to overcome it, but research into human consciousness does not currently, as I see it, reinforce them.

For example, what I see when we examine modern research into global workspace theory (GWT) is that the version of Strong AI you offer does not appear compatible. Relatedly, Dan Dennett had to rework his Strong AI model in the face of evidence which either disagreed with it or reinforced the "theatre" model. Guys like Dennett don't do this lightly. He had to climb down in the face of evidence which simply didn't back up his version of Strong AI.

So, what I see is that current research does not appear to be moving in the direction of ratifying Strong AI as the basis of human consciousness. This is my perception and may be based on inadequate data, but this is how it seems to me.

What, exactly, do we need to know, and why?

We need to know the answer to the question Blackmore puts to Baars in her interview book, and which I've quoted many times on this and other threads. What creates the difference between processing apparently going on in the dark and that in the light?

Um, what? I'm having a hard time unpacking that. Much of it appears correct, but what do you mean by "unconsciousness doesn't exist"?

I mean there's no actual qualitative difference between a conscious event and an unconscious one. It simply appears that there is.

Nick
 
But this is, and has been for over a decade, the core issue in consciousness research: the hard problem of consciousness (HPC).
No. Sorry Nick, this is a peculiar delusion of yours and needs to be addressed.

HPC is a sideshow, promoted by immaterialist philosophers who would otherwise be forced to teach undergrad classes. At best, it is utterly void of meaning and content.

There are theoretical models that allow one to overcome it, but research into human consciousness does not currently, as I see it, reinforce them.
Nothing is required to overcome HPC, because HPC is not even logically coherent.

For example, what I see when we examine modern research into global workspace theory (GWT) is that the version of Strong AI you offer does not appear compatible.
Wrong. GWT is impossible without self-referential information processing.

Relatedly, Dan Dennett had to rework his Strong AI model in the face of evidence which either disagreed with it or reinforced the "theatre" model.
Dan Dennett is describing much higher level models, which in turn require self-referential information processing to exist. As does the GWT. As has been pointed out previously.

Guys like Dennett don't do this lightly. He had to climb down in the face of evidence which simply didn't back up his version of Strong AI.
What evidence?

So, what I see is that current research does not appear to be moving in the direction of ratifying Strong AI as the basis of human consciousness.
What is that even supposed to mean, Nick?

This is my perception and may be based on inadequate data, but this is how it seems to me.
Do you also have a theory about the brontosaurus?

We need to know the answer to the question Blackmore puts to Baars in her interview book, and which I've quoted many times on this and other threads. What creates the difference between processing apparently going on in the dark and that in the light?
Self-reference.

I mean there's no actual qualitative difference between a conscious event and an unconscious one. It simply appears that there is.
There are no conscious events.
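
Since "self-reference" is doing all the work in the replies above, here's a bare-bones toy of my own (far below anything brain-like, and purely illustrative) of what "self-referential information processing" could mean at minimum: a process whose input stream includes records of its own prior operations.

Code:
class SelfReferentialProcessor:
    def __init__(self):
        self.log = []  # records of this system's own prior processing

    def process(self, stimulus: str) -> str:
        # First-order processing: respond to the external stimulus.
        response = f"handled {stimulus!r}"
        # Second-order processing: the system also takes its own most
        # recent operation as input, i.e. it processes information
        # about itself, not just about the world.
        if self.log:
            response += f" (noting that I previously {self.log[-1]!r})"
        self.log.append(response)
        return response

p = SelfReferentialProcessor()
print(p.process("red patch"))   # no self-model yet
print(p.process("loud noise"))  # now references its own prior state

Whether looping a system's state back into its own input is sufficient for consciousness is exactly what's in dispute here; the toy only pins down the mechanism the term names.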
 
I've asked my mom what my age was during the events I could recall.

Well, sometimes our moms' memories are somewhat lacking.

Regardless of my exact age at the time of the memories, the point is that they show that consciousness precedes language.

So? You still observe and learn by imitation.

And by the way, I don't know at what age you started talking, but by 12 months I was a conversation machine. So clearly I understood lots about language by then.
 
I've asked my mom what my age was during the events I could recall. Judging from what she's told me, they go back at least as early as 1-2 years of age. At least one memory was from before I'd learned to walk. It was in my great aunt's living room and I was sitting on my mother's lap. My older cousin walked over and tried to play with me, but something about him repulsed and upset me, so I hit him and plunged my face into my mother's chest. When I brought up the memory to my mother, she laughed and said that she remembered that particular event. According to her, my cousin always had poor oral hygiene and I was hiding from his stinky breath :covereyes

Regardless of my exact age at the time of the memories, the point is that they show that consciousness precedes language.

Such events indicate that, it's true -- but you could have a memory from a time when you weren't conscious. Memory is a construction. You might have remembered something accurately, but superimposed your current consciousness upon it.

Sorry, but if we are going to be strict about this, we have to be even-handed.
 
No. Sorry Nick, this is a peculiar delusion of yours and needs to be addressed.

HPC is a sideshow, promoted by immaterialist philosophers who would otherwise be forced to teach undergrad classes. At best, it is utterly void of meaning and content.

Well, I would agree that the HPC is not as solid as it might seem, but for me it's clear that there are still issues. For Blackmore and Baars it seems this is also the case.

Wrong. GWT is impossible without self-referential information processing.

That's not really the point. The point is that self-reference is not what makes the difference between what GWT proposes as conscious and unconscious processing. Thus there is still an explanatory gap here, and whilst this remains, the HPC can creep in. With a computer it would be much simpler, I agree. But with a human it's harder to work out what's going on. We don't know enough about the neural basis of actual phenomenal consciousness yet. What does this 40 Hz activity mean? Why should information in the global workspace be consciously available whilst similar, concurrently processed information is not? We don't know the answers to these questions yet, and this is why Baars says, get back to us in 100 years.

Dan Dennett is describing much higher level models, which in turn require self-referential information processing to exist. As does the GWT. As has been pointed out previously.

All models will require some level of self-referential processing to exist. This does not, however, close the explanatory gap. Dennett's problem is the same as yours. You develop what appears to be a working model that can replicate many of the features of human consciousness in a machine. But, as actual research into brain function continues, elements of the theory become increasingly unsound. The brain developed in a manner quite different to a computer, and we just don't know enough to replicate what it's doing. And because we don't know enough, we're still left with the HPC always lurking in the wings.

I don't personally believe the HPC is valid, but it's clear to me that until we know more about the neural basis of phenomenality, about global availability, we can't discount it completely.

Nick
 
The point is that self-reference is not what makes the difference between what GWT proposes as conscious and unconscious processing.
Nick


I think that is wrong, I'm afraid. It is precisely some form of self-reference that differentiates unconscious and conscious processing in that model.

As far as we can tell, directed attention is specifically tied to a body map, as it must be, since attention directed to a particular part of space requires some grounding -- that grounding is in the map of the body that is constructed in the parietal lobe. That is why directed attention is also "housed" in the parietal lobe.

That is a form of self-reference.

As much as people try to fool themselves into thinking that there is some generalized, free-floating "awareness" that exists independent of grounding in a body map I don't think such an experience truly exists. What does exist as a feeling in mystical experiences is a breakdown in the distinction between body map and external world. But that is not the same thing as a "feeling of awareness" completely free of a body map in the first place. Awareness is always awareness from some perspective to some other "thing" -- it is ordered on the same intentional lines as emotion, feeling, language, etc.
 
I think that is wrong, I'm afraid. It is precisely some form of self-reference that differentiates unconscious and conscious processing in that model.

As far as we can tell, directed attention is specifically tied to a body map, as it must be, since attention directed to a particular part of space requires some grounding -- that grounding is in the map of the body that is constructed in the parietal lobe. That is why directed attention is also "housed" in the parietal lobe.

That is a form of self-reference.

I agree, but the point I'm trying to make is subtly different, I believe. "Self-evaluatory" processes direct our attention constantly, no doubt about it. What we attend to is largely dictated by our notions of selfhood, whether biological or psychological.

However, phenomenality itself is not inherently self-referencing, and this means that the difference between conscious and unconscious processing in the GWT model is not simply that one set of data self-references and the other doesn't.

We know that consciousness/global access is "switched on" by self-evaluatory processes, but we don't know why global access = consciousness in the first place and whether there is a real qualitative difference between conscious and unconscious processing.

Strong AI adherents will maintain that there is no difference and that phenomenality itself tends towards being an erroneous concept. However, this position is not proven AFAIA and, if anything, the evidence seems to me to point back the other way.

Until we understand why consciousness = global access we won't know whether or not an HPC exists.

Nick

ETA: By way of evidence for the last two paragraphs I quote again the paper Lupus linked earlier in this thread...

Dehaene et al said:
....through joint bottom-up propagation and top-down attentional amplification, the ensuing brain-scale neural assembly must “ignite” into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain.

Why would this ignited state correspond to a conscious state? The key idea behind the workspace model is that because of its massive interconnectivity, the active coherent assembly of workspace neurons can distribute its contents to a great variety of other brain processors, thus making this information globally available. The global workspace model postulates that this global availability of information is what we subjectively experience as a conscious state.

In the first paragraph above the authors describe their notion of the neuronal activity that corresponds to actual phenomenal consciousness, viz. "“ignite” into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain." To me the choice of terms indicates that they clearly believe conscious vision to be quite a spectacular phenomenon, and not merely the addition of a bit of self-referencing into data.

It seems to me that "self-evaluatory" systems will constantly monitor visual data, but in order to render information conscious a considerable neural process must be triggered.
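
To pin down which mechanism is actually in dispute, here is a deliberately crude Python toy of my own of the "global availability" idea (Dehaene et al.'s model is a large-scale neuronal simulation; this is nothing like it): specialist processors compete, and the winner's content is broadcast to all of them.

Code:
from dataclasses import dataclass, field

@dataclass
class Processor:
    name: str
    inbox: list = field(default_factory=list)  # broadcast content lands here

def workspace_cycle(processors, signals):
    """signals maps each specialist's name to (content, strength)."""
    # Bottom-up competition: the strongest signal "ignites" the workspace...
    winner = max(signals, key=lambda name: signals[name][1])
    content = signals[winner][0]
    # ...and its content is broadcast, becoming globally available
    # to every other processor.
    for p in processors:
        p.inbox.append((winner, content))
    return winner, content

procs = [Processor("vision"), Processor("audition"), Processor("touch")]
signals = {"vision": ("red apple", 0.9),
           "audition": ("door slam", 0.4),
           "touch": ("warm mug", 0.1)}
print(workspace_cycle(procs, signals))  # ('vision', 'red apple') ignites
print(procs[1].inbox)  # audition now has access to the visual content

The code can't settle the argument, of course: whether this broadcast step simply is what we experience as a conscious state, or leaves an explanatory gap, is precisely the question.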
 
Such events indicate that, it's true -- but you could have a memory from a time when you weren't conscious. Memory is a construction. You might have remembered something accurately, but superimposed your current consciousness upon it.

Sorry, but if we are going to be strict about this, we have to be even-handed.

Well, the only post hoc construction is my verbal understanding of my memory. The memory already exists as qualitative subjective impressions. It's not until I acquired language that I could put words to what I consciously experienced.

I guess another example would be someone who suffers severe brain trauma and loses the capacity for language. They may still be conscious, but they cannot organize their thoughts into a verbal narrative or understand ideas conveyed by others via language.
 
Well, sometimes our moms' memories are somewhat lacking.

My mother's memory is rather good, actually. She often recalls the most inane details about events that happened decades ago.

Even if my mother's memory were spotty, the fact that my own memories accurately coincide with hers lends credence to their accuracy.


So ? You still observe and learn by imitation.

That's not the point. I was illustrating that consciousness is prior to social learning. We consciously experience events before we acquire words to describe them as such.


And by the way, I don't know at what age you started talking, but by 12 months I was a conversation machine. So clearly I understood lots about language by then.

My mother and other relatives tell me that at about two months of age I would laugh at the punchlines of TV comedy routines. I don't recall any of this, but apparently it creeped out my aunt, who was in the room at the time.

I'm pretty sure I had no idea what was being said but apparently I found something about the experience amusing :p
 
Well, the only post hoc construction is my verbal understanding of my memory. The memory already exists as qualitative subjective impressions. It's not until I acquired language that I could put words to what I consciously experienced.

I guess another example would be someone who suffers severe brain trauma and loses the capacity for language. They may still be conscious, but they cannot organize their thoughts into a verbal narrative or understand ideas conveyed by others via language.

It's the unreliability of memory I'm referring to. It's really difficult to recall exactly how one felt. Early memories are especially fragile.
 
That's not the point. I was illustrating that consciousness is prior to social learning. We consciously experience events before we acquire words to describe them as such.

Sure. They're still things you identify by observing others.

My mother and other relatives tell me that at about two months of age I would laugh at the punchlines of TV comedy routines. I don't recall any of this, but apparently it creeped out my aunt, who was in the room at the time.

I'm pretty sure I had no idea what was being said but apparently I found something about the experience amusing :p

That's certainly it. Or you laughed because other people laughed.
 
It's the unreliability of memory I'm referring to. It's really difficult to recall exactly how one felt. Early memories are especially fragile.

I suppose childhood memories have the same epistemological status as fossil evidence. They may be old and fragmented, but they're the best window we have into the past :)
 
I agree, but the point I'm trying to make is subtly different, I believe. "Self-evaluatory" processes direct our attention constantly, no doubt about it. What we attend to is largely dictated by our notions of selfhood, whether biological or psychological.

However, phenomenality itself is not inherently self-referencing, and this means that the difference between conscious and unconscious processing in the GWT model is not simply that one set of data self-references and the other doesn't.

We know that consciousness/global access is "switched on" by self-evaluatory processes, but we don't know why global access = consciousness in the first place and whether there is a real qualitative difference between conscious and unconscious processing.

Strong AI adherents will maintain that there is no difference and that phenomenality itself tends towards being an erroneous concept. However, this position is not proven AFAIA and, if anything, the evidence seems to me to point back the other way.

Until we understand why consciousness = global access we won't know whether or not an HPC exists.

Nick

ETA: By way of evidence for the last two paragraphs I quote again the paper Lupus linked earlier in this thread...



In the first paragraph above the authors describe their notion of the neuronal activity that corresponds to actual phenomenal consciousness, viz. "“ignite” into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain." To me the choice of terms indicates that they clearly believe conscious vision to be quite a spectacular phenomenon, and not merely the addition of a bit of self-referencing into data.

It seems to me that "self-evaluatory" systems will constantly monitor visual data, but in order to render information conscious a considerable neural process must be triggered.

I don't think your evaluation fits what they say in their model. This reverberant state involves frontal and parietal structures. As I tried to mention earlier, they speak of these areas for two reasons. Both parietal and frontal areas are involved in directed attention and the parietal lobe is specifically important for the body map. Frontal lobes are vitally important for the pairing of emotional input/feeling states with motor output -- all of which is a part of consciousness.

They use the term "global", but this cannot mean that the 'reverberant loops' involve the entire brain. If that occurred, there would be no feeling, no experience. It is the localized involvement of parietal and frontal structures in these reverberant loops that is thought to constitute a sense of self.

Once again, this is not the story self that can self-reflect. This is more akin to Dennett's "body self". That is the self-reference that Pixy has been speaking about.
 
