Resolution of Transporter Problem

If this is so, why, when Blackmore asks Baars precisely this question, does he agree that it's an explanatory gap?

Nick

If I had to guess, I would say because they are educated and trained as psychologists and neuroscientists, respectively, whereas I am educated and trained as a computer scientist -- thus our interpretation of the question is likely vastly different.

Most likely they are referring to the specifics involved in human consciousness whereas I am referring to the generalities of any consciousness.

I would wager good money that if I told them "consciousness arises because a greater amount of reasoning of a certain type is taking place," they would respond with "uh, tell us something we don't know?"
 
If I had to guess, I would say because they are educated and trained as psychologists and neuroscientists, respectively, whereas I am educated and trained as a computer scientist -- thus our interpretation of the question is likely vastly different.

So, you're saying, effectively, that human consciousness and AI are vastly different things? This would be a little strong for me. They may be vastly different things, and that is what this whole discussion is now about. Examining human consciousness from the perspective of information being processed, it appears highly analogous to AI in many aspects. But there are still questions that come up surrounding apparent "experience," and experts in this field accept this.



Most likely they are referring to the specifics involved in human consciousness whereas I am referring to the generalities of any consciousness.

The point is...we don't know for sure if there are "generalities of any consciousness." This is the whole thing here. We don't know for sure if human conscious experience is truly analogous to machine consciousness. In certain aspects it appears to be, in others it's still problematic.

I would wager good money that if I told them "consciousness arises because a greater amount of reasoning of a certain type is taking place," they would respond with "uh, tell us something we don't know?"

Do you mean that human conscious experience involves more reasoning than human unconscious processing? If so, I'm willing to bet they wouldn't agree.

GWT basically asserts that information that is deemed valuable by unconscious processing enters consciousness. You need the consciousness before you get the reasoning. We may of course be using terminology at odds here.
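
To put that claim in concrete terms, here is a minimal toy sketch in Python of the GWT picture as described above. All the names and scores are made up purely for illustration; this is not Baars's actual model.

[code]
# Toy sketch of the GWT claim above: unconscious processors tag their
# outputs with a value score, and the highest-valued content is broadcast
# ("enters consciousness"). All names and numbers here are hypothetical,
# invented for illustration only.

def broadcast(candidates):
    """Return the content the unconscious processors rate most valuable."""
    return max(candidates, key=lambda c: c["value"]) if candidates else None

candidates = [
    {"source": "vision",  "content": "red light ahead",   "value": 0.9},
    {"source": "hearing", "content": "background hum",    "value": 0.2},
    {"source": "touch",   "content": "pressure of chair", "value": 0.1},
]

print(broadcast(candidates))  # the vision content wins and is broadcast
[/code]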

Nick
 
So, you're saying, effectively, that human consciousness and AI are vastly different things?

No, I am saying that the approach taken to understanding human consciousness will be vastly different depending on whether you are a psychologist or a computer scientist.

However, human consciousness and AI are vastly different things in every way besides possibly a very high level abstract view. They have to be, by definition, because A.I. is A.I. and human consciousness is human consciousness.

The point is...we don't know for sure if there are "generalities of any consciousness."

B.S. If you have any definition for "consciousness" then you can derive generalities.

For instance, we know all animal consciousness relies on neurons. We know that neurons, when working together, form neural networks. And we know that neural networks reason.

Hence, all animal consciousness comes from reasoning. It is a simple part of the definition. If you want to disagree, then you should find some examples of animal consciousness that don't somehow involve neural networks.

Do you mean that human conscious experience involves more reasoning than human unconscious processing? If so, I'm willing to bet they wouldn't agree.

No, I mean what I said -- more reasoning of a certain type. Namely, for self consciousness, reasoning about self.

GWT basically asserts that information that is deemed valuable by unconscious processing enters consciousness. You need the consciousness before you get the reasoning. We may of course be using terminology at odds here.

Of course we are using terminology at odds -- I am a computer scientist and you are not.

Reasoning, in computer science, means "using existing facts about the world to derive new facts about the world."

So the operation of even a single neuron is considered reasoning -- it sums up inputs (facts about the world) and once a threshold is passed fires its output (a new derived fact about the world).
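
A minimal sketch of that in Python, just to show the mechanism; the weights and threshold are arbitrary illustrative numbers, not anything from a real network.

[code]
# Minimal threshold neuron matching the description above: sum the weighted
# inputs (existing facts about the world) and fire once a threshold is
# passed (a newly derived fact). Weights and threshold are arbitrary.

def neuron(inputs, weights, threshold=1.0):
    activation = sum(i * w for i, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

print(neuron([1, 0, 1], [0.6, 0.9, 0.5]))  # 1 -- threshold passed, fires
print(neuron([0, 1, 0], [0.6, 0.9, 0.5]))  # 0 -- below threshold, silent
[/code]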
 
No, I am saying that the approach taken to understanding human consciousness will be vastly different depending on whether you are a psychologist or a computer scientist.

Okay

B.S. If you have any definition for "consciousness" then you can derive generalities.

We don't have a decent working definition for the word "consciousness." This fact is well noted by many researchers. It's been remarked upon for years. Some assert that it is purely a subjective phenomenon, others dispute this.

For instance, we know all animal consciousness relies on neurons. We know that neurons, when working together, form neural networks. And we know that neural networks reason.

Hence, all animal consciousness comes from reasoning. It is a simple part of the definition. If you want to disagree, then you should find some examples of animal consciousness that don't somehow involve neural networks.

No, that is not necessary. I can disagree much more easily using logic. Your "hence" is not appropriate here. You're jumping the gun. We don't know if other effects are significant here. As Blackmore points out and Baars agrees, there are still explanatory gaps in the phenomena.


No, I mean what I said -- more reasoning of a certain type. Namely, for self consciousness, reasoning about self.

I haven't read Pixy's book yet, but I'm quite sure that self does not relate a priori to what consciousness is. This issue comes up in debate on memetics also. Blackmore notes that Dennett believes consciousness to be purely memetic and she opposes this contention thus...

Susan Blackmore said:
The result of the memetic process described above is that physical, speaking, human bodies use the word ‘I’ to stand for many different things; a particular physical body; something inhabiting, controlling and owning this body; something that has beliefs, opinions, and desires; something that makes decisions; and a subject of experience. This is, I suggest, a whole concatenation of mistakes resulting in the false idea of a persisting conscious self.

The view proposed here has much in common with James’s (1890) idea of the appropriating self, and with Dennett’s (1991) “centre of narrative gravity”. There are two main differences from Dennett. First, Dennett refers to the self as a “benign user illusion”, whereas I have argued that it is malign; being the cause of much greed, fear, disappointment, and other forms of human suffering (Blackmore 2000). Second (and more relevant here) Dennett says “Human consciousness is itself a huge complex of memes....” (Dennett 1991 p 210).

There is reason to question this. Dennett’s statement implies that if a person were without memes they would not be conscious. We cannot, of course, strip someone of all their memes without destroying their personhood, but we can temporarily quieten the memes’ effects. Meditation and mindfulness can be thought of as meme-weeding techniques, designed to let go of words, logical thoughts, and other memetic constructs and leave only immediate sensory experience. The nature of this experience changes dramatically with practice, and it is common for the sense of a self who is having the experiences to disappear. This same selflessness, or union of self and world, is frequently reported in mystical experiences. But far from consciousness ceasing, it is usually described as enhanced or deepened, and with a loss of duality. If this experience can justifiably be thought of as consciousness without memes, then there is something left when the memes are gone and Dennett is wrong that consciousness is the memes. It might then be better to say that the ordinary human illusion of consciousness is a “complex of memes” but that there are other kinds of consciousness. - Journal of Consciousness Studies, Susan Blackmore



Of course we are using terminology at odds -- I am a computer scientist and you are not..

Reasoning, in computer science, means "using existing facts about the world to derive new facts about the world."

So the operation of even a single neuron is considered reasoning -- it sums up inputs (facts about the world) and once a threshold is passed fires its output (a new derived fact about the world).

I'm not disputing that neurons can be considered as reasoning. I'm disputing that reasoning about self actually creates visual phenomenology. The AI model might simply need to "switch in or out" some form of feedback loop to bring a processing stream into or out of the working memory or attention. As I see it, there are a lot of problems with saying humans do the same.
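
For illustration only, the "switch in or out" idea might look something like this in code. The names (working_memory, attend) are hypothetical, not from any real architecture.

[code]
# Hypothetical sketch of the "switch in or out" idea above: a feedback flag
# decides whether a processing stream's output reaches a working-memory
# buffer. Names are invented for illustration, not from any real system.

working_memory = []

def attend(stream, switched_in):
    """Route a stream into working memory only while its loop is switched in."""
    for item in stream:
        if switched_in:
            working_memory.append(item)
        # when switched out, the stream is still processed but never enters

attend(["edge", "motion", "face"], switched_in=True)   # enters working memory
attend(["hum", "hiss"], switched_in=False)             # processed, not retained

print(working_memory)  # ['edge', 'motion', 'face']
[/code]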

Nick
 
We don't have a decent working definition for the word "consciousness." This fact is well noted by many researchers. It's been remarked upon for years. Some assert that it is purely a subjective phenomenon, others dispute this.

Ah, I see. Approaching this issue with the disclaimer "there is no working definition for what we are talking about" does allow for quite a bit of B.S.

The kind of B.S. that allows useless researchers to keep their useless jobs by convincing the uneducated that they are in fact somehow useful.

And you wonder why I am not interested in reading the "relevant material"? I would likely lose intelligence if I read such crud.

No, that is not necessary. I can disagree much more easily using logic. Your "hence" is not appropriate here. You're jumping the gun. We don't know if other effects are significant here. As Blackmore points out and Baars agrees, there are still explanatory gaps in the phenomena.

There are other effects besides neural communication that might be significant?

Dualism, anyone?

I haven't read Pixy's book yet, but I'm quite sure that self does not relate a priori to what consciousness is. This issue comes up in debate on memetics also. Blackmore notes that Dennett believes consciousness to be purely memetic and she opposes this contention thus...

If you don't know how to reply to a statement of mine, then don't. Frankly I am growing tired of the pattern 'but Blackmore said this: <useless aggregation of words that has little to do with my statement>'

I'm disputing that reasoning about self actually creates visual phenomenology.

I know you are. And you are wrong.

At the very least a human always knows "what I am looking at is me or is not me." Guess what? That requires self.

If you bothered to define what you are talking about this would become instantly obvious.

The AI model might simply need to "switch in or out" some form of feedback loop to bring a processing stream into or out of the working memory or attention. As I see it, there are a lot of problems with saying humans do the same.

Why?
 
I haven't read Pixy's book yet, but I'm quite sure that self does not relate a priori to what consciousness is.
As I pointed out several times previously, this is not only wrong, but definitively wrong, i.e. wrong by definition.

Read Gödel, Escher, Bach. It explains all the underlying concepts that rocketdodger and I are referring to, and it does so with remarkable wit and elegance, and without requiring you to spend four years studying computer science.

This issue comes up in debate on memetics also. Blackmore notes that Dennett believes consciousness to be purely memetic and she opposes this contention thus...
All Blackmore does there is move the goalposts. She notes (correctly) that people use the notion of self to refer to many different things, and then she does it herself.

I'm not disputing that neurons can be considered as reasoning. I'm disputing that reasoning about self actually creates visual phenomenology.
No, reasoning about self doesn't create "visual phenomenology", as you so clumsily put it. It creates consciousness. Rather, it is consciousness.
 
Ah, I see. Approaching this issue with the disclaimer "there is no working definition for what we are talking about" does allow for quite a bit of B.S.

The kind of B.S. that allows useless researchers to keep their useless jobs by convincing the uneducated that they are in fact somehow useful.

I am not using the lack of definition as a "back door" here. There is no generally agreed definition, and this is remarked and agreed upon by many commentators. That is simply how it is, and if you actually have a grounded understanding of the issues in question you will know why... because this is actually what it's about. This is what the hard problem is about - objectivity and subjectivity. How do you create a working objective definition of something that appears to be purely subjective?


And you wonder why I am not interested in reading the "relevant material"? I would likely lose intelligence if I read such crud.

It's more that I assume you struggle when things are not clearly defined.

There are other effects besides neural communication that might be significant?

We don't know. That's the whole point here. Blackmore asks Baars the question - "explain this then?" He agrees that it's an explanatory gap.



I know you are. And you are wrong.

At the very least a human always knows "what I am looking at is me or is not me." Guess what? That requires self.

No. It requires thinking, and mirror neurons. It does not require self.



Well, one reason is that visual phenomenology is so striking that, classically, many commentators have struggled to imagine that the vast apparent qualitative difference between conscious and unconscious processing can be ascribed simply to neurons processing here in the brain rather than there.

Another reason is the apparent existence of a "self" that is experiencing this phenomenology. People like to believe there really is a self, as opposed to a user illusion.

Personally, I'm not so much bothered by the second issue. The first is still problematic when it comes down to hard evidence, as I see it. When we know more about how neurons really function, maybe we'll get somewhere with it. Until then I guess it will be the AI fanatics on one side, insisting that it's the only way, versus the hard problem believers on the other, with the greater mass of reasonable researchers somewhere in between, awaiting actual evidence.

Nick
 
As I pointed out several times previously, this is not only wrong, but definitively wrong, i.e. wrong by definition.

And as I have pointed out several times, you do not need a sense of self present in order to be conscious. This I know personally. I do not need a dictionary.

Read Gödel, Escher, Bach. It explains all the underlying concepts that rocketdodger and I are referring to, and it does so with remarkable wit and elegance, and without requiring you to spend four years studying computer science.

Well, it arrived today. I shall make a start on it.


All Blackmore does there is move the goalposts. She notes (correctly) that people use the notion of self to refer to many different things, and then she does it herself.

Well, I quoted the piece because she is here opposing the notion that self is necessary for consciousness, albeit slightly inadvertently. Dennett asserts that consciousness is purely memetic in nature. Blackmore opposes this contention, as I would, by quoting the much-reported experience that awareness of self is quite unnecessary for consciousness.


No, reasoning about self doesn't create "visual phenomenology", as you so clumsily put it. It creates consciousness. Rather, it is consciousness.

I mean for me this is just amusing. I can sit here and watch the monitor with no sense of self, that's to say no thinking. When thinking recommences so in comes the world of subject-object again, but for me it is just amusing to think that people could believe selfhood to be a priori. I guess we believed the earth was flat too once.

Nick
 
I mean for me this is just amusing. I can sit here and watch the monitor with no sense of self, that's to say no thinking.

How do you know the monitor isn't you?

Or are you seriously claiming that you can sit there and watch the monitor without knowing whether it is you or not?
 
How do you know the monitor isn't you?

Or are you seriously claiming that you can sit there and watch the monitor without knowing whether it is you or not?

If thoughts arise then for sure selfhood (narrative or psychological selfhood) is reconstructed. But without thoughts, in non-threatening situations, there's no sense of subject-object.

Nick
 
If thoughts arise then for sure selfhood (narrative or psychological selfhood) is reconstructed. But without thoughts, in non-threatening situations, there's no sense of subject-object.

Nick

Once again, you are injecting your dualism into a statement that has nothing to do with dualism.

There doesn't need to be any kind of "narrative selfhood" for the organism to be clear about the physical boundaries of self. It is quite clear that worms will curl up when you poke them and not curl up when you poke their neighbor. Is this because worms have a "narrative selfhood?"

But allow me to rephrase the question in such a way that you cannot corrupt it with dualism:

How do you know you are looking at a monitor instead of, for instance, a tree?
 
Well, I haven't taken the time to read this whole thread yet, so this may be redundant, but:

I would only use the transporter if I was completely unconscious at the time of the transportation. Otherwise I would be too worried about the original being destroyed before it could become different from the copy. Even being destroyed a tiny fraction of a second after the copy was made would make it a different person than the copy.

So yes I would do it, if I was unconscious when the copy was made and the original was destroyed while still unconscious.

And also, as far as I understand the word, I am a materialist.
 
And as I have pointed out several times, you do not need a sense of self present in order to be conscious. This I know personally. I do not need a dictionary.
Yes you do, and yes you do.

Well, it arrived today. I shall make a start on it.
Excellent! The dictionary can wait, then. Enjoy!

Well, I quoted the piece because she is here opposing the notion that self is necessary for consciousness, albeit slightly inadvertently. Dennett asserts that consciousness is purely memetic in nature. Blackmore opposes this contention, as I would, by quoting the much-reported experience that awareness of self is quite unnecessary for consciousness.
Blackmore is moving the goalposts. She, like you, needs to pick a definition and stick to it.

I mean for me this is just amusing. I can sit here and watch the monitor with no sense of self, that's to say no thinking.
No you can't.

When thinking recommences so in comes the world of subject-object again, but for me it is just amusing to think that people could believe selfhood to be a priori.
Who ever said it was?

I guess we believed the earth was flat too once.
:rolleyes:
 
Once again, you are injecting your dualism into a statement that has nothing to do with dualism.

There doesn't need to be any kind of "narrative selfhood" for the organism to be clear about the physical boundaries of self. It is quite clear that worms will curl up when you poke them and not curl up when you poke their neighbor. Is this because worms have a "narrative selfhood?"

But allow me to rephrase the question in such a way that you cannot corrupt it with dualism:

How do you know you are looking at a monitor instead of, for instance, a tree?

I'm not talking about dualism or monism or any kind of ism here. I'm relating experience, albeit through the inevitably dualistic medium of subject-object relationships conveyed by language.

Whether it's a monitor or a tree makes no difference. Until thinking arises there is no sense of ownership.

The worm has biological selfhood, yes. It reacts, yes. So do humans. So what?

Nick
 
Yes you do, and yes you do.

I'm beginning to suspect that we will not find agreement here!

Excellent! The dictionary can wait, then. Enjoy!

I've started reading his preface to the 20th anniversary edition. I like how Hofstadter writes, though I'm beginning to suspect that what he's really talking about with "strange loops" is thinking. For me this is quite distinct from visual phenomenology. I am jumping to conclusions here though as I haven't read anywhere near enough.

Blackmore is moving the goalposts. She, like you, needs to pick a definition and stick to it.

But you cannot use the same definitions at every level of examination, Pixy. This is well understood. You can take a Strong AI perspective and just rigidly refuse to look at anything else. That's up to you. I'm fine with it but then I think you will inevitably be accused of dogmatism in some debates. Blackmore comes at it from all sides. In the piece I quoted she's contrasting AI and human consciousness and discussing the issues of selfhood and consciousness, which as she asserts (and I agree) are considerably separated.

No you can't.

Oh yes I can!

Who ever said it was?

Are you agreeing now that visual phenomenology can be present without any sense of self?

Nick
 
I really doubt that you have stopped consciously knowing the monitor is not yourself or a tree even when you are not internally articulating any thoughts to yourself. At least, I haven't yet had the experience of turning into a vegetable just because I've zoned out.
 
Any theory of consciousness has to tackle the issue of subjective experience. It can be put very simply: Pain hurts. It is not just sense-data telling us to avoid something. If a theory denies that pain feels bad, it is doomed to failure.
 
