• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Resolution of Transporter Problem

We do understand consciousness. I build conscious systems on a regular basis. It's not even particularly difficult.

~~~~~~~~~~

What consciousness? Consciousness in general? Sure. That's easy.

Human consciousness? No, that's really complicated. But it's not the consciousness part that's complex, it's the human part.

Could I ask you or RD a couple of questions then, about your work in AI?

1) How do you recreate both conscious and unconscious processing in a computer? The human has both. How is this recreated in one machine in AI?

2) How do you recreate, say, binocular rivalry in AI? That's to say how do you make one of two concurrent input streams conscious and the other unconscious, with the possibility to switch between them?

Nick
 
Qualia are inherently dualistic. I've been told this for at least the last twenty years by everyone from staunch self-proclaimed materialists, to staunch self-proclaimed dualists, to staunch self-proclaimed idealists. How do you spot the dualist in a debate? Watch which one brings in qualia first.

Thanks for clearing that up.

What about Nobel prizewinner, Gerald Edelman, the "neural darwinism" guy? He's a materialist who believes in qualia, rejects dualism, and subscribes to a brain-based theory of mind. I'm surprised luminaries such as yourself, Pixy, and GD, seem unaware of him. [/sarcasm]

BTW, I'm still waiting for some cites or quotes or just any explanation for your comment that my understanding of Ramachandran is laughable. Go on, Z, dazzle me with your grasp of the subject matter. Don't leave me thinking that you're just another pseudo-materialist skeptic moron who can't back up his babble.

Nick
 
What about Nobel prizewinner, Gerald Edelman, the "neural darwinism" guy? He's a materialist who believes in qualia, rejects dualism, and subscribes to a brain-based theory of mind. I'm surprised luminaries such as yourself, Pixy, and GD, seem unaware of him.
Edelman redefines qualia as physical properties or processes. Which removes the problem of incoherence, but renders Searle, Chalmers, and Jackson's arguments incoherent (as opposed to merely absurd).

Also, we don't need another word for experience.
 
Could I ask you or RD a couple of questions then, about your work in AI?

1) How do you recreate both conscious and unconscious processing in a computer? The human has both. How is this recreated in one machine in AI?
Quite easy. Conscious programs are self-referential. Unconscious programs aren't. Parts of one program can likewise be self-referential while other parts aren't.

In computer science terms, this is known as reflection.
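As a minimal illustration of what reflection means here (a Python sketch under my own assumptions, not anyone's actual system): a program can inspect its own functions at runtime and report on them before executing them.

```python
import inspect

def monitor(fn):
    """Wrap a function so it reports on itself when called."""
    def wrapper(*args, **kwargs):
        # The program examines its own code: name, signature, source line.
        sig = inspect.signature(fn)
        print(f"calling {fn.__name__}{sig} defined at line "
              f"{fn.__code__.co_firstlineno}")
        return fn(*args, **kwargs)
    return wrapper

@monitor
def add(a, b):
    return a + b

result = add(2, 3)  # the wrapped call reflects on `add` before running it
```

The point of the sketch is only that the reflective part (`monitor`) and the non-reflective part (`add`) coexist in one program, which is all the quoted claim requires.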

2) How do you recreate, say, binocular rivalry in AI? That's to say how do you make one of two concurrent input streams conscious and the other unconscious, with the possibility to switch between them?
Also easy. You have two sets of data being analysed. You switch the self-referential part of the program from focusing - reflecting - on one stream to the other. What's known as attention.

Programmers do this all the time. Not all of them; the bulk of programming is still what we might call unconscious. But self-referential, self-aware systems are becoming more and more important as complexity grows. Beyond a certain point, we can't manage the system any more; it needs to manage itself.
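That switching can be caricatured in code. A toy sketch (Python chosen arbitrarily, all names invented for illustration): two input streams are both processed continuously, but only the one currently under attention reaches the self-referential, reporting layer.

```python
# Toy model: both streams are always analysed, but only the attended
# stream reaches the reflective (self-reporting) layer.
streams = {
    "left_eye":  [3, 1, 4, 1, 5],
    "right_eye": [9, 2, 6, 5, 3],
}
attended = "left_eye"  # which stream is currently "conscious"

def process(name, data):
    """Unconscious processing: runs on every stream regardless of attention."""
    return sum(data)

def reflect(name, result):
    """Reflective layer: only the attended stream gets reported on."""
    return f"I am aware that {name} sums to {result}"

reports = []
for name, data in streams.items():
    result = process(name, data)   # both streams are analysed
    if name == attended:           # but only one is attended to
        reports.append(reflect(name, result))

attended = "right_eye"  # switching attention makes the other stream "conscious"
```

Under this caricature, "rivalry" is just reassigning `attended`: the unattended stream keeps being processed, it simply never reaches the reporting layer.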
 
Quite easy. Conscious programs are self-referential. Unconscious programs aren't. Parts of one program can likewise be self-referential while other parts aren't.

In computer science terms, this is known as reflection.

Yes, this part is easy. But how does this create actual conscious awareness? You can give a program the means to reference itself but how does this make it necessarily conscious? In humans, unconscious programs self-monitor all the time. Self-referencing doesn't appear to me to be relevant - that you arbitrarily designate an aspect of data processing as Self - so what? How could this create conscious awareness?


Also easy. You have two sets of data being analysed. You switch the self-referential part of the program from focusing - reflecting - on one stream to the other. What's known as attention.

Yes, I appreciate that the cortical-thalamic axis or whatever can apparently switch between data streams in this manner in a human. But how does this create the effect of visual consciousness? This is what I want to know. What creates the qualitative difference between conscious and unconscious processing in humans or in AI?

Nick
 
Edelman redefines qualia as physical properties or processes. Which removes the problem of incoherence, but renders Searle, Chalmers, and Jackson's arguments incoherent (as opposed to merely absurd).

It seems to me that it's only you that's defining qualia as being "necessarily immaterial." So I don't think it's especially accurate to then pronounce that Edelman is re-defining them when actually he's only redefining them from your definition, which I frankly doubt he's aware of.

For me, your statements about qualia once again demonstrate that you don't really engage with the debate but seek merely some means to verbally discount propositions before they can really be investigated. This is how it seems to me.

Nick
 
It seems to me that it's only you that's defining qualia as being "necessarily immaterial." So I don't think it's especially accurate to then pronounce that Edelman is re-defining them when actually he's only redefining them from your definition, which I frankly doubt he's aware of.
Just read the Wikipedia page, Nick.

I did not create the term.

I did not define the term.

It has always referred to an immaterial property of an otherwise material system. In other words, it is inherently dualistic.

The page even has a lengthy discussion of Dennett's refutation of the notion of qualia. What, are you going to claim now that it's only me and Daniel Dennett who define qualia that way? Oh (scroll down a bit) it's only me and Daniel Dennett and Marvin Minsky?

Daniel Dennett and Marvin Minsky not only agree entirely with what I say about how the term is defined, they also agree that it is bollocks, and they agree with me on why it is bollocks.

Or, I should say, vice versa.

For me, your statements about qualia once again demonstrate that you don't really engage with the debate but seek merely some means to verbally discount propositions before they can really be investigated. This is how it seems to me.
No, Nick. If you present an incoherent argument, it will and should be dismissed immediately. It has not earned my engagement. It is not worthy of investigation.

Qualia are just such an argument.
 
Yes, this part is easy. But how does this create actual conscious awareness?
It doesn't create conscious awareness. That's what conscious awareness is.

You can give a program the means to reference itself but how does this make it necessarily conscious?
Yes.

In humans, unconscious programs self-monitor all the time.
No they don't.

Self-referencing doesn't appear to me to be relevant - that you arbitrarily designate an aspect of data processing as Self - so what?
What are you talking about? No, Nick. No. I haven't designated anything as anything.

Read Douglas Hofstadter's Gödel, Escher, Bach, and don't bother rejoining the debate until you've finished it. It answers this misunderstanding of yours comprehensively, deeply, elegantly, and in far more detail than I could ever attempt on this forum.

Yes, I appreciate that the cortical-thalamic axis or whatever can apparently switch between data streams in this manner in a human. But how does this create the effect of visual consciousness?
That's what visual consciousness is.

This is what I want to know. What creates the qualitative difference between conscious and unconscious processing in humans or in AI?
Self-reference.

This question is covered in depth in the MIT lecture series - Jeremy Wolfe (the lecturer) is a researcher into visual perception, and he covers all the stages of visual perception, from the retina (actually, from the iris) right through to conscious awareness, with detours into some of the more interesting pathologies. Again, it's both more detailed and more entertaining than anything I could attempt here.
 
Self-reference.

This question is covered in depth in the MIT lecture series - Jeremy Wolfe (the lecturer) is a researcher into visual perception, and he covers all the stages of visual perception, from the retina (actually, from the iris) right through to conscious awareness, with detours into some of the more interesting pathologies. Again, it's both more detailed and more entertaining than anything I could attempt here.


I will admit that I have a meager high school education, and I am not well read in philosophy or computer science. However, AI has always interested me, and throughout my life I have learned to program in many different languages. Sadly, I have never written any AI more sophisticated than scripted intelligence governing the behavior of some monsters in relation to the behavior of a player.

When I first came to the conclusion, many years ago, that free will could not logically exist due to cause and effect, I wondered why the illusion of free will was so strong. The first thing I thought of was AI. I tried to think of the algorithms involved in my thought process and what part of them could be causing such a strong feeling of "I'm in control of my own decisions". The conclusion I came to was self-reference (but I certainly wasn't calling it that at the time). I figure that the reason people all feel "in control" is that our own experiences, our past "choices", feed back into this perpetual algorithm that defines the present moment of thought processing. This creates a strong illusion that "we" are a part of this decision-making process.
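That feedback loop can be sketched very crudely (a hypothetical toy, not a cognitive model): each "choice" is a deterministic function of the current stimulus plus the history of past choices, and each choice is appended back into that history.

```python
# Caricature of the idea: a fully deterministic "decision" that
# nonetheless depends on the agent's own past choices, which feed
# back into every subsequent step.
def choose(stimulus, history):
    # Deterministic rule: outcome depends on input *and* past choices.
    return (stimulus + sum(history)) % 2

history = []
for stimulus in [3, 5, 2, 8]:
    choice = choose(stimulus, history)
    history.append(choice)  # the past "choice" feeds back into the next one
```

Nothing in the loop is free, yet no choice can be predicted from the stimulus alone - you also need the system's own history, which is roughly the intuition being described.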

I dunno, I am a layman at everything.

A layman with lots of "quotations".

Can I get that MIT lecture series for free anywhere?
 
For a start, there is a basic definition. The meaning of the term is reasonably well agreed upon imo.

Which has nothing to do with whether the definition is coherent.

If you disagree, why don't you give a coherent definition right here? You can make it as simple as you want in the interest of clarity -- it doesn't even have to agree with the "basic" definition you speak of.

Secondarily, I don't find it so odd that it's hard to give precise definitions for terms used in philosophy. Frequently this is the case.

I agree. Which speaks volumes about both the usefulness and honesty of philosophers.

Thirdly, the word "consciousness," as used in phrases like "consciousness research," has far less of a coherent definition.

No, it doesn't -- it has a very coherent definition. It is simply an implied definition. When people use it they mean human consciousness. What could be more coherent than a behavior that all humans exhibit?
 
1) How do you recreate both conscious and unconscious processing in a computer? The human has both. How is this recreated in one machine in AI?

Define "conscious." I find it interesting that you, in an immediately previous post, made the comment that "consciousness" has no coherent definitions and here you are asking me how to reproduce it. Wtf?

If you mean "conscious like a human," then I have no single answer, because humans are beyond the complexity of any program that has been written and we can only really emulate separate independent modules so far due to technological limitations.

If you mean "conscious like a dog," then I have no simple answer, because dogs have a ton of behavior that is non-trivial to replicate even with state of the art technology.

If you mean "conscious like a worm" then the answer is along the lines of what Pixy already gave you, although I don't agree that all self-referencing can be labeled "reflection."

Furthermore, note that a phrase such as "conscious like an X" really means "exhibits all the behaviors of an X" and nothing more, because you necessarily have no idea what it is like to be an X. And yes, that also applies to X == "a specific human."

2) How do you recreate, say, binocular rivalry in AI? That's to say how do you make one of two concurrent input streams conscious and the other unconscious, with the possibility to switch between them?

If you define what you mean by "conscious" then I would be happy to tell you. Otherwise, that question (like the previous one) is meaningless. I might point out, however, that the notion of an input stream being "conscious" seems strange given most possible meanings of "conscious."
 
Yes, this part is easy. But how does this create actual conscious awareness? You can give a program the means to reference itself but how does this make it necessarily conscious? In humans, unconscious programs self-monitor all the time. Self-referencing doesn't appear to me to be relevant - that you arbitrarily designate an aspect of data processing as Self - so what? How could this create conscious awareness?

Again with the undefined terms...

If you want "conscious" to mean "awareness of self" then of course self reference makes it necessarily conscious.

If you want "conscious" to mean "awareness of self like a human is aware of self" then of course self reference won't make it necessarily conscious -- you also need all the other complicated processing that takes place in a human.

Yes, I appreciate that the cortical-thalamic axis or whatever can apparently switch between data streams in this manner in a human. But how does this create the effect of visual consciousness? This is what I want to know. What creates the qualitative difference between conscious and unconscious processing in humans or in AI?

And again.... what is the "effect of visual consciousness?"

Assuming at some point you can define what you mean, the answer isn't that difficult -- I already walked you through it in a different thread. Remember, reasoning?

When you drive home, millions of objects are perceived by the neurons in your retina and even at many levels of your visual cortex. Yet, you are only consciously aware (as in "aware of them in a manner a human would call 'consciously'") of a few of them. Why do you think that is?

What is the difference between something you are consciously aware of and something you aren't? Hint: the answer is reasoning.

Human consciousness == reasoning like a human.
 
It doesn't create conscious awareness. That's what conscious awareness is.

Yes.

So you don't know, basically. I don't see that self-consciousness has much to do with it here. Say you have a bunch of data processing going on and you mark out a region with sensors feeding back information. So what? How does this create, say, visual awareness? The argument is nonsensical.

The only route the materialist has at this juncture, as I see it, is to assert that consciousness is an inherent property of certain types of system, or of all systems, and this stance must be inherently fraught also.


No they don't.

Come off it, Pixy. Autonomic systems self-monitor. They feed back into the nervous system constantly. So why am I not conscious of them?

This is what I'm asking here. Strong AI theorists will always claim that consciousness itself is an inherent property and thus doesn't in any way need to be created or explained. This may be so, but it's just a theory and how can it be proven?


That's what visual consciousness is.

Self-reference.

Nonsense. I don't need a sense of self to see a tree. I need it to articulate the statement "I see the tree," but I don't need it to see the tree. The switching that's undertaken by the amygdala in binocular rivalry doesn't use selfhood, I'm sure of it.


This question is covered in depth in the MIT lecture series - Jeremy Wolfe (the lecturer) is a researcher into visual perception, and he covers all the stages of visual perception, from the retina (actually, from the iris) right through to conscious awareness, with detours into some of the more interesting pathologies. Again, it's both more detailed and more entertaining than anything I could attempt here.

Which lecture?

Nick
 
This question is covered in depth in the MIT lecture series - Jeremy Wolfe (the lecturer) is a researcher into visual perception, and he covers all the stages of visual perception, from the retina (actually, from the iris) right through to conscious awareness, with detours into some of the more interesting pathologies. Again, it's both more detailed and more entertaining than anything I could attempt here.

Well, I just listened to 66 minutes of Wolfe's lecture #6 (Perceiving: Interpreting the Information), which I'm assuming is the one you're referring to, and he didn't touch on conscious awareness. He's not looking at the hard problem at all. Not that there's anything wrong with this, but he's just not going there. I've listened to a couple of others and it's the same. He's looking at all the easy problems, which is great, but he's not, AFAICanSee, dealing with, say, what makes me conscious of "this" but not "this", even though both are being concurrently processed by similar circuitry. It doesn't even seem, thus far, to be the kind of thing he would go into.

Nick
 
This is what I'm asking here. Strong AI theorists will always claim that consciousness itself is an inherent property and thus doesn't in any way need to be created or explained. This may be so, but it's just a theory and how can it be proven?

How can you prove that anyone else has subjective experience besides yourself?

Nonsense. I don't need a sense of self to see a tree. I need it to articulate the statement "I see the tree," but I don't need it to see the tree. The switching that's undertaken by the amygdala in binocular rivalry doesn't use selfhood, I'm sure of it.

You need a sense of self to be aware of a tree on the level of "My girlfriend and I sat under that tree on our third date -- she is now my wife and I had three children with her -- I saw one of them play soccer the other day -- he makes a good goalie," etc.

Which is typically what people mean when they say "aware." There is some reasoning going on about the tree. Otherwise you wouldn't consider it different from the hundreds of other trees in the forest.
 
How can you prove that anyone else has subjective experience besides yourself?

I can't. Though I'm rapidly coming to the conclusion that you don't, becausssseee....just what has this to do with anything? (You and Belz should really get together and form some kind of philosophy tag wrestling team)

What we're discussing is called, I think, contrastive phenomenology - trying to find out what makes the difference in relatively similar conscious and unconscious processing/events.


You need a sense of self to be aware of a tree on the level of "My girlfriend and I sat under that tree on our third date -- she is now my wife and I had three children with her -- I saw one of them play soccer the other day -- he makes a good goalie," etc.

Which is typically what people mean when they say "aware." There is some reasoning going on about the tree. Otherwise you wouldn't consider it different from the hundreds of other trees in the forest.

dear god in heaven, RD, you'd try the patience of a saint. Being conscious of the tree means it appears in the field of view. You can see it. Making a story up about it, or placing it into some subject-object relationship, is post hoc processing.

Nick
 
Admittedly I have no background at all in what you currently seem to be discussing, but I wouldn't agree that just because a thing is in my field of view, I'm consciously aware of it, by the definitions of 'consciously aware' I would use. That would require some amount of focus of attention. There's plenty of stuff in my periphery that I'm not paying attention to and wouldn't describe myself as aware of until the point I give it some amount of attention. Though something like a fast movement would certainly cause me to start paying attention via whatever less obviously conscious processes pay attention to whether or not things are going 'zip'.

Thinking along those lines would make me tend to speculate that 'conscious' to 'unconscious' is a spectrum and probably by nature not something that starts at this line here and ends at that one over there.
 
Admittedly I have no background at all in what you currently seem to be discussing, but I wouldn't agree that just because a thing is in my field of view, I'm consciously aware of it, by the definitions of 'consciously aware' I would use. That would require some amount of focus of attention. There's plenty of stuff in my periphery that I'm not paying attention to and wouldn't describe myself as aware of until the point I give it some amount of attention. Though something like a fast movement would certainly cause me to start paying attention via whatever less obviously conscious processes pay attention to whether or not things are going 'zip'.

Thinking along those lines would make me tend to speculate that 'conscious' to 'unconscious' is a spectrum and probably by nature not something that starts at this line here and ends at that one over there.

In that case, I wouldn't think along those lines, if I were you. Personally, I prefer to discuss things with someone who when presented with, say, the question "Do you see the monitor?" replies with a one word answer. Up to you of course.

Nick
 