westprog
Of course not. If this is a simulation, then we've come to know a whole lot about it.
We know a whole lot about the simulation, but very little indeed about "reality".
So what you're saying is that it's relative to the observer, right? I guess that's one way to look at it and I don't think it's necessarily wrong. But, by the same token, I prefer to look at things from a bird's-eye view. If an 'environment', simulated or otherwise, is generated via logical ops then, to me, it counts as being computational.
while I'm convinced that the physical medium of the computation is just as, if not more, important.
Pixy is right -- the Church-Turing thesis shows that substrate is irrelevant in principle.
In practice, though, you are probably correct -- making an android act just like a human is probably going to converge on constructing a bona fide human from scratch, because that is by far the easiest way to do it.
Given enough resources, though, one could make a gigantic gundam act just like a human. Heck, we could make a gigantic gundam think it was a human, if we were tricky enough. But that would be like making a calculator using buckets and pulleys (or stones).
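To make the substrate point concrete, here's a toy sketch of my own (an illustration, not anything anyone in this thread actually wrote): the same function computed on the CPU's native adder and "with buckets and stones", i.e. unary pebble-counting. Both realise the same computation; only the efficiency differs wildly.

# Toy illustration: one computation, two substrates.

def add_native(a, b):
    # the "silicon" substrate: the CPU's own adder
    return a + b

def add_pebbles(a, b):
    # the "buckets and stones" substrate: represent each number as a
    # pile of pebbles, pour the piles together, count the result
    pile = ["pebble"] * a + ["pebble"] * b
    return len(pile)

# Same function, regardless of substrate:
assert all(add_native(a, b) == add_pebbles(a, b)
           for a in range(20) for b in range(20))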
AkuManiMani said: I have quite a handful of memories from when I was a toddler and even a few vague ones from late infancy.
What years, specifically? I have one memory from my second birthday, now obscured by remembering the memory of the memory of the memory, and a few bits of my third year. From my fourth year onwards things become suddenly sharper.

Nick said: Yes, I believe the behaviour could be mimicked by a computer, for sure. I'm still not convinced about whether the computer sees "in the light" yet though.
The point is, that is just another behaviour.
Nick said: It strikes me that if we accept natural selection and that unconscious processing is possible then there must have been some highly favoured evolutionary event that led to actual phenomenality.
Forget "phenomenality". Forget you ever heard the expression. It's a philosophical dead end.
Nick said: Until we know more about this I doubt anyone can say whether AI is truly analogous to human consciousness.
What, exactly, do we need to know, and why?
Nick said: The other alternative is that, qualitatively, there is actually only one form of consciousness (unconsciousness doesn't exist) and that the human is actually fully conscious but our apparent experience of consciousness is acutely limited.
Um, what? I'm having a hard time unpacking that. Much of it appears correct, but what do you mean by "unconsciousness doesn't exist"? If you mean that there is always some conscious processing going on in the brain short of actual death and decay, then yes, that's correct (by my definition of consciousness, at least). But that doesn't mean that the top-level consciousness - the one we recognise as us - is always active.
Pixy is right -- the Church-Turing thesis shows that substrate is irrelevant in principle.
Yep.
Forget "phenomenality". Forget you ever heard the experession. It's a philosophical dead end.
What, exactly, do we need to know, and why?
Um, what? I'm having a hard time unpacking that. Much of it appears correct, but what do you mean by "unconsciousness doesn't exist"?
Nick said: But this is, and has been for over a decade, the core issue in consciousness research... the HPC.
No. Sorry Nick, this is a peculiar delusion of yours and needs to be addressed.
Nick said: There are theoretical models that allow one to overcome it, but research into human consciousness does not currently, as I see it, reinforce them.
Nothing is required to overcome HPC, because HPC is not even logically coherent.
Nick said: For example, what I see when we examine modern research into GWT is that the version of Strong AI you offer does not appear compatible.
Wrong. GWT is impossible without self-referential information processing.
Nick said: Relatedly, Dan Dennett had to rework his Strong AI model in the face of evidence which either disagreed with it or reinforced the "theatre" model.
Dan Dennett is describing much higher level models, which in turn require self-referential information processing to exist. As does the GWT. As has been pointed out previously.
Nick said: Guys like Dennett don't do this lightly. He had to climb down in the face of evidence which simply didn't back up his version of Strong AI.
What evidence?
Nick said: So, what I see is that current research does not appear to be moving so much in the direction of ratifying Strong AI as being the basis of human consciousness.
What is that even supposed to mean, Nick?
Nick said: This is my perception and may be based on inadequate data but this is how it seems to me.
Do you also have a theory about the brontosaurus?
Nick said: We need to know the answer to the question Blackmore puts to Baars in her interview book, and which I've quoted many times on this and other threads. What creates the difference between processing apparently going on in the dark and that in the light?
Self-reference.
Nick said: I mean there's no actual qualitative difference between a conscious event and an unconscious one. It simply appears that there is.
There are no conscious events.
I've asked my mom what my age was during the events I could recall. Judging from what she's told me, they go back at least as early as 1-2 years of age. At least one memory was from before I'd learned to walk. It was in my great aunt's living room and I was sitting on my mother's lap. My older cousin walked over and tried to play with me, but something about him repulsed and upset me, so I hit him and plunged my face into my mother's chest. When I brought up the memory to my mother she laughed and said that she remembered that particular event. According to her, my cousin always had poor oral hygiene and I was hiding from his stinky breath.
Regardless of my exact age at the time of the memories, the point is that they show that consciousness precedes language.
Pixy said: No. Sorry Nick, this is a peculiar delusion of yours and needs to be addressed. HPC is a sideshow, promoted by immaterialist philosophers who would otherwise be forced to teach undergrad classes. At best, it is utterly void of meaning and content.
Pixy said: Wrong. GWT is impossible without self-referential information processing.
Pixy said: Dan Dennett is describing much higher level models, which in turn require self-referential information processing to exist. As does the GWT. As has been pointed out previously.
The point is that self-reference is not what makes the difference between what GWT proposes as conscious and unconscious processing.
Nick
I think that is wrong, I'm afraid. It is precisely some form of self-reference that differentiates unconscious and conscious processing in that model.
As far as we can tell, directed attention is specifically tied to a body map, as it must be, since attention directed to a particular part of space requires some grounding -- that grounding is in the map of the body that is constructed in the parietal lobe. That is why directed attention is also "housed" in the parietal lobe.
That is a form of self-reference.
Dehaene et al said: ...through joint bottom-up propagation and top-down attentional amplification, the ensuing brain-scale neural assembly must “ignite” into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain.
Why would this ignited state correspond to a conscious state? The key idea behind the workspace model is that because of its massive interconnectivity, the active coherent assembly of workspace neurons can distribute its contents to a great variety of other brain processors, thus making this information globally available. The global workspace model postulates that this global availability of information is what we subjectively experience as a conscious state.
Such events indicate that, it's true - but you could have a memory from a time when you weren't conscious. Memory is a construction. You might have remembered something accurately, but superimposed your current consciousness upon it.
Sorry, but if we are going to be strict about this, we have to be even-handed.
Well, sometimes our moms' memories are somewhat lacking.
So? You still observe and learn by imitation.
And by the way, I don't know at what age you started talking, but by 12 months I was a conversation machine. So clearly I understood lots about language by then.
Well, the only post hoc construction is my verbal understanding of my memory. The memory already exists as qualitative subjective impressions. It wasn't until I acquired language that I could put words to what I consciously experienced.
I guess another example would be someone who suffers severe brain trauma and loses the capacity for language. They may still be conscious, but they cannot organize their thoughts into a verbal narrative or understand ideas conveyed by others via language.
That's not the point. I was illustrating that consciousness is prior to social learning. We consciously experience events before we acquire words to describe them as such.
My mother and other relatives tell me that at about two months of age I would laugh at the punchlines of TV comedy routines. I don't recall any of this, but apparently it creeped out my aunt, who was in the room at the time.
I'm pretty sure I had no idea what was being said, but apparently I found something about the experience amusing!
It's the unreliability of memory I'm referring to. It's really difficult to recall exactly how one felt. Early memories are especially fragile.
I agree, but the point I'm trying to make is subtly different, I believe. "Self-evaluatory" processes direct our attention constantly, no doubt about it. What we attend to is largely dictated by our notions of selfhood, whether biological or psychological.
However, phenomenality itself is not inherently self-referencing, and this means that the difference between conscious and unconscious processing in the GWT model is not simply that one set of data self-references and the other doesn't.
We know that consciousness/global access is "switched on" by self-evaluatory processes, but we don't know why global access = consciousness in the first place and whether there is a real qualitative difference between conscious and unconscious processing.
Strong AI adherents will maintain that there is no difference and that phenomenality itself tends towards being an erroneous concept. However, this position is not proven AFAIA and, if anything, the evidence seems to me to point back the other way.
Until we understand why consciousness = global access we won't know whether or not an HPC exists.
Nick
ETA: By way of evidence for the last two paragraphs I quote again the paper Lupus linked earlier in this thread...
In the first paragraph above the authors describe their notion of the neuronal activity that corresponds to actual phenomenal consciousness, viz. "ignite into a self-sustained reverberant state of coherent activity that involves many neurons distributed throughout the brain." To me the choice of terms indicates that they clearly believe conscious vision to be quite a spectacular phenomenon, and not merely the addition of a bit of self-referencing into data.
It seems to me that "self-evaluatory" systems will constantly monitor visual data, but in order to render information conscious a considerable neural process must be triggered.
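For what it's worth, the broadcast idea in the Dehaene et al passage can be caricatured in code. This is purely a toy sketch of my own, not anything from the paper: the class names, the numeric "ignition threshold", and the example processors are all made up for illustration. It shows only the bare logic the quote describes: local processing stays private ("in the dark"), while content that ignites is broadcast to every other processor, i.e. made globally available.

# Toy sketch (mine, not Dehaene et al's). The 0.5 threshold is an
# arbitrary stand-in for "joint bottom-up propagation and top-down
# attentional amplification" reaching ignition.

class Processor:
    def __init__(self, name):
        self.name = name
        self.received = []        # globally broadcast content lands here

class Workspace:
    def __init__(self, processors):
        self.processors = processors

    def present(self, source, content, strength):
        if strength < 0.5:        # too weak to ignite: stays local,
            return False          # "processing in the dark"
        for p in self.processors: # ignition: broadcast to every other
            if p is not source:   # processor -- global availability
                p.received.append(content)
        return True

vision = Processor("vision")
speech = Processor("speech")
memory = Processor("memory")
ws = Workspace([vision, speech, memory])

ws.present(vision, "red circle", strength=0.2)   # no broadcast
ws.present(vision, "red circle", strength=0.9)   # broadcast to all
print(speech.received)                           # ['red circle']

Note what the sketch deliberately leaves open, which is exactly the question in dispute above: it says nothing about why the broadcast state should be experienced at all, only which content becomes globally available.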