PixyMisa
Persnickety Insect
Right. There you go then.

Stories are real for me. The events and characters portrayed may or may not be.
We do understand consciousness. I build conscious systems on a regular basis. It's not even particularly difficult.
~~~~~~~~~~
What consciousness? Consciousness in general? Sure. That's easy.
Human consciousness? No, that's really complicated. But it's not the consciousness part that's complex, it's the human part.
Qualia are inherently dualistic. I've been told this for at least the last twenty years by everyone from staunch self-proclaimed materialists to staunch self-proclaimed dualists to staunch self-proclaimed idealists. How do you spot the dualist in a debate? Watch which one brings in qualia first.
Thanks for clearing that up.
What about Nobel prizewinner Gerald Edelman, the "neural darwinism" guy? He's a materialist who believes in qualia, rejects dualism, and subscribes to a brain-based theory of mind. I'm surprised luminaries such as yourself, Pixy, and GD seem unaware of him.

Could I ask you or RD a couple of questions then, about your work in AI?

1) How do you recreate both conscious and unconscious processing in a computer? The human has both. How is this recreated in one machine in AI?

2) How do you recreate, say, binocular rivalry in AI? That's to say, how do you make one of two concurrent input streams conscious and the other unconscious, with the possibility to switch between them?
Quite easy. Conscious programs are self-referential. Unconscious programs aren't. Parts of one program can likewise be self-referential while other parts aren't.
In computer science terms, this is known as reflection.
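The self-reference claim can be sketched in a few lines of Python. This is only a toy illustration under the description above — a program whose output includes statements about its own prior activity; the class and method names are invented for the example, not taken from any real AI system.

```python
# Toy sketch of a "self-referential" (reflective) program, as described
# above. Class and method names are invented for illustration only.

class Monitor:
    """Processes a stream of values while keeping a record of its own activity."""

    def __init__(self):
        self.history = []  # a record of the program's own processing

    def process(self, value):
        result = value * 2  # the ordinary, "unconscious" data processing
        self.history.append(("process", value, result))
        return result

    def reflect(self):
        # The self-referential step: the program takes its own prior
        # activity as input and produces a statement about itself.
        return f"I have processed {len(self.history)} items; last: {self.history[-1]}"

m = Monitor()
m.process(3)
m.process(5)
print(m.reflect())
```

The distinction the post draws maps onto the two methods: `process` never consults the program's own state beyond its input, while `reflect` takes the program's own record as its subject matter.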
Also easy. You have two sets of data being analysed. You switch the attention of the self-referential part of the program from focusing - reflecting - on one stream to the other. What's known as attention.
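The stream-switching answer can likewise be sketched as a toy, assuming nothing beyond the description above: both streams are processed on every step, but only the attended one reaches the part that keeps the self-referential record. All names are illustrative.

```python
# Toy sketch of the attention-switching idea above: two input streams are
# both processed each step, but only the "attended" one is visible to the
# self-referential record. Names are illustrative, not a real architecture.

class RivalStreams:
    def __init__(self, left, right):
        self.streams = {"left": iter(left), "right": iter(right)}
        self.attended = "left"  # which stream the reflective part watches
        self.report = []        # the record available to self-reference

    def switch_attention(self):
        self.attended = "right" if self.attended == "left" else "left"

    def step(self):
        for name, stream in self.streams.items():
            value = next(stream)       # both streams are processed...
            if name == self.attended:  # ...but only one is reported on
                self.report.append((name, value))

r = RivalStreams(left=[1, 2, 3], right=[10, 20, 30])
r.step()                # attending left
r.switch_attention()
r.step()                # attending right
print(r.report)         # [('left', 1), ('right', 20)]
```

The binocular-rivalry analogy is in the `step` loop: the unattended stream is still consumed every step (its values advance and are simply never reported), which is why switching attention mid-run picks up the rival stream at its current position rather than its beginning.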
Edelman redefines qualia as physical properties or processes. Which removes the problem of incoherence, but renders Searle, Chalmers, and Jackson's arguments incoherent (as opposed to merely absurd).
It seems to me that it's only you that's defining qualia as being "necessarily immaterial." So I don't think it's especially accurate to then pronounce that Edelman is redefining them when actually he's only redefining them from your definition, which I frankly doubt he's aware of.

Just read the Wikipedia page, Nick.
For me, your statements about qualia once again demonstrate that you don't really engage with the debate but seek merely some means to verbally discount propositions before they can really be investigated. This is how it seems to me.

No, Nick. If you present an incoherent argument, it will and should be dismissed immediately. It has not earned my engagement. It is not worthy of investigation.
Yes, this part is easy. But how does this create actual conscious awareness?

It doesn't create conscious awareness. That's what conscious awareness is.
You can give a program the means to reference itself, but how does this make it necessarily conscious?

Yes.
In humans, unconscious programs self-monitor all the time.

No they don't.
Self-referencing doesn't appear to me to be relevant - that you arbitrarily designate an aspect of data processing as Self - so what?

What are you talking about? No, Nick. No. I haven't designated anything as anything.
Yes, I appreciate that the cortical-thalamic axis or whatever can apparently switch between data streams in this manner in a human. But how does this create the effect of visual consciousness?

That's what visual consciousness is.
This is what I want to know. What creates the qualitative difference between conscious and unconscious processing in humans or in AI?

Self-reference.
This question is covered in depth in the MIT lecture series - Jeremy Wolfe (the lecturer) is a researcher into visual perception, and he covers all the stages of visual perception, from the retina (actually, from the iris) right through to conscious awareness, with detours into some of the more interesting pathologies. Again, it's both more detailed and more entertaining than anything I could attempt here.
Can I get that MIT lecture series for free anywhere?

Right here. Enjoy!
For a start, there is a basic definition. The meaning of the term is reasonably well agreed upon imo.
Secondly, I don't find it so odd that it's hard to give precise definitions for terms used in philosophy. Frequently this is the case.
Thirdly, the word "consciousness," as used in phrases like "consciousness research," has far less of a coherent definition.
This is what I'm asking here. Strong AI theorists will always claim that consciousness itself is an inherent property and thus doesn't in any way need to be created or explained. This may be so, but it's just a theory and how can it be proven?
Nonsense. I don't need a sense of self to see a tree. I need it to articulate the statement "I see the tree," but I don't need it to see the tree. The switching that's undertaken by the amygdala in binocular rivalry doesn't use selfhood, I'm sure of it.
How can you prove that anyone else has subjective experience besides yourself?
You need a sense of self to be aware of a tree on the level of "My girlfriend and I sat under that tree on our third date -- she is now my wife and I had three children with her -- I saw one of them play soccer the other day -- he makes a good goalie," etc.
Which is typically what people mean when they say "aware." There is some reasoning going on about the tree. Otherwise you wouldn't consider it different from the hundreds of other trees in the forest.
Admittedly I have no background at all in what you currently seem to be discussing, but I wouldn't agree that just because a thing is in my field of view, I'm consciously aware of it, by the definitions of 'consciously aware' I would use. That would require some amount of focus of attention. There's plenty of stuff in my periphery that I'm not paying attention to and wouldn't describe myself as aware of until the point I give it some amount of attention. Though something like a fast movement would certainly cause me to start paying attention via whatever less obviously conscious processes pay attention to whether or not things are going 'zip'.
Thinking along those lines would make me tend to speculate that 'conscious' to 'unconscious' is a spectrum and probably by nature not something that starts at this line here and ends at that one over there.