
Has consciousness been fully explained?

Status
Not open for further replies.
I did address what you said. We've already established that you do not experience qualia or feelings worth mentioning, ergo what you have to say regarding them amounts to squat. Move along, please.
As I said, this does not even resemble anything I've ever said; it comes straight from your false premise, which you are using as a filter to twist any facts that disagree with you.

Try to address what people actually say. And no, again, you're not.
 
As I said, this does not even resemble anything I've ever said; it comes straight from your false premise, which you are using as a filter to twist any facts that disagree with you.

Try to address what people actually say. And no, again, you're not.

So you DO experience qualia? You're confusing me now, PixyMisa... :confused:
 
It's self-referential information processing.
That "definition" is about as useful as tits on a bull. Can you flesh it out at all or is that really as far as you've got so far?

How about providing a complete working computer program that you claim is conscious? Or, to save you some time, would a brainf*ck self-interpreter do the trick?

Do you think the ecosystem of the Earth (when considered in its entirety) is conscious? If not, why not?

How about a beehive and all its occupants?

Does the set of integers become conscious when a mathematician is working through the proof of Gödel's Second Incompleteness Theorem?

Does a higher degree of self-reference create a more conscious system? How about more information processing?

Is the Mandelbrot set or any other similar fractal system conscious?

If I give you some arbitrary (but not too large) neural network to examine, can you tell me if it is conscious or not while processing data?

Are there any forms of self-referential information processing that are not conscious? If so, what are some examples, and why aren't they conscious?

Does your definition allow for arbitrarily nested conscious systems? Perhaps something like conscious bees in a conscious beehive for example.
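Since a brainf*ck self-interpreter gets invoked above, it may help to pin down what "running" a brainf*ck program even involves. Here is a minimal sketch of a plain interpreter (not a self-interpreter) in Python; the function name, the 8-bit wrapping cells, and the tape length are my own choices, not anything from the thread.

```python
def run_bf(program, tape_len=30000):
    """Minimal brainf*ck interpreter: returns the program's output as a string."""
    tape = [0] * tape_len
    out = []
    # Pre-match brackets so '[' / ']' jumps are O(1) at run time.
    jumps, stack = {}, []
    for i, ch in enumerate(program):
        if ch == '[':
            stack.append(i)
        elif ch == ']':
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    ptr = pc = 0
    while pc < len(program):
        ch = program[pc]
        if ch == '>':
            ptr += 1
        elif ch == '<':
            ptr -= 1
        elif ch == '+':
            tape[ptr] = (tape[ptr] + 1) % 256  # 8-bit cells, wrapping
        elif ch == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == '.':
            out.append(chr(tape[ptr]))
        elif ch == '[' and tape[ptr] == 0:
            pc = jumps[pc]   # skip loop body
        elif ch == ']' and tape[ptr] != 0:
            pc = jumps[pc]   # repeat loop body
        pc += 1
    return ''.join(out)

# '++++++++[>++++++++<-]>+.' computes 8*8 + 1 = 65 and prints 'A'
print(run_bf('++++++++[>++++++++<-]>+.'))  # → A
```

A self-interpreter would be a brainf*ck program fed as the `program` string that itself reads and executes brainf*ck code, so "a running brainf*ck self-interpreter" just means this loop running that particular input.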
 
That "definition" is about as useful as tits on a bull.
Then you haven't really understood it, or at least, not encountered what it's responding to.

This definition is incredibly powerful. It cuts straight through all the crap that's been said about consciousness over the millennia and provides a functional explanatory framework that matches what happens experientially and experimentally.

It's not meant to be an operational theory of the human mind. What it is is an answer to the so-called "hard problem of consciousness".

What do we mean when we refer to consciousness? We mean self-referential information processing. The ability to think and to think about your thinking. If you can answer the question "What are you thinking?", you're conscious.
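As a toy caricature only (every name here is invented for illustration), the "What are you thinking?" criterion can be sketched as a process that records its own operations and can answer questions about them:

```python
class Introspector:
    """Toy caricature of 'self-referential information processing':
    it processes inputs, keeps a record of its own processing,
    and can report on that record."""

    def __init__(self):
        self.trace = []

    def process(self, x):
        result = x * 2  # some arbitrary 'thinking'
        self.trace.append(f"doubled {x} to get {result}")
        return result

    def what_are_you_thinking(self):
        # Answers by referring to its own recent activity.
        return self.trace[-1] if self.trace else "nothing yet"

m = Introspector()
m.process(21)
print(m.what_are_you_thinking())  # → doubled 21 to get 42
```

This is of course a trivial log, not a claim about consciousness; it only illustrates the structural idea of a process that can refer to its own processing.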

Do we observe this behaviour in physical systems? Yes.

Can this account for all the attributes we ascribe to consciousness? All those shown to exist, yes.

There's a huge amount of detail to be filled in on the workings of the human brain. What this explanation shows, though, is that the explanatory gap is just a string of potholes to be filled, not an unbridgeable chasm as Chalmers would claim.

Can you flesh it out at all or is that really as far as you've got so far?
Read Gödel, Escher, Bach. Yes, I can flesh it out as much as you like, but you'll be better off reading the book. It's a wonderful book.

Does the set of integers become conscious when a mathematician is working through the proof of Gödel's Second Incompleteness Theorem?
The set of integers is a fixed abstract concept; it's not about to become anything. You also need the arithmetic operators; you need something to be happening.

Consciousness is a verb.

Does a higher degree of self-reference create a more conscious system?
What is a "higher degree" of self-reference? Do you just mean more self-referential activity?

Consciousness is not all or nothing - humans are infamously only partly conscious - and certainly not all conscious systems have equal computational capacity. But there's no higher order of consciousness, just more of it.

Is the Mandelbrot set or any other similar fractal system conscious?
No, they're sets. They don't change. The Mandelbrot set is always the Mandelbrot set.
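To make that static/dynamic distinction concrete: membership in the Mandelbrot set is a fixed predicate. Evaluating it is a computation, but the answer never changes. A sketch of the standard escape-time test (the function name and the iteration cap are my own choices):

```python
def in_mandelbrot(c, max_iter=100):
    """Escape-time test: c is in the Mandelbrot set if the orbit
    z -> z*z + c, starting from z = 0, stays bounded (|z| <= 2)."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # escaped: definitely not in the set
    return True  # no escape within max_iter: treated as a member

print(in_mandelbrot(0))   # True:  0 -> 0 -> 0 ...
print(in_mandelbrot(2))   # False: 0 -> 2 -> 6 -> 38 ...
```

The process of running this test happens in the computer executing it; the set itself does nothing.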

If I give you some arbitrary (but not too large) neural network to examine, can you tell me if it is conscious or not while processing data?
Not in the sense of a generalisable formal proof; I think you'd run into Halting Problem type difficulties. But in general, yes, you should be able to tell by examining the network and its activity.

Are there any forms of self-referential information processing that are not conscious?
No. Because that is what we mean when we say that a system is conscious.

Does your definition allow for arbitrarily nested conscious systems? Perhaps something like conscious bees in a conscious beehive for example.
That's a good question. Yes, certainly. Which is not necessarily saying that bees or beehives are conscious, but that this sort of thing is clearly possible.
 
"Why don't you like my SRIPs non-explanation? It's a nice, trite answer to a question I can't be bothered to even comprehend. Plus it adds to my delusions of omniscience. What could be better?" :p
 
I think classical p-zombies are as possible as square circles. I think the closest thing one can get to a p-zombie IRL would be a being with a very low level of consciousness but a very high level of intellect, or a puppet under the direction of someone who is conscious. How one would go about uncovering them (absent some way to physically identify conscious activity from the 'outside') would vary depending on the zombie in question and the method(s) of trickery being used.

If that's the case, and true consciousness can be fairly accurately determined from an outside observer, then we can do science by 'just' figuring out how the brain works, without any need for subjective introspection.
 
That "definition" is about as useful as tits on a bull. Can you flesh it out at all or is that really as far as you've got so far?

How about providing a complete working computer program that you claim is conscious? Or, to save you some time, would a brainf*ck self-interpreter do the trick?

I don't see consciousness as a black/white phenomenon. You can have boring, trivial, forms of consciousness all the way up to full human consciousness, and everything in between.

So the question "Is XYZ conscious ?" is too imprecise to answer. It's like arguing over how many water molecules you need to make something wet.
 
What do we mean when we refer to consciousness? We mean self-referential information processing. The ability to think and to think about your thinking. If you can answer the question "What are you thinking?", you're conscious.
Interesting you should say this, as when I brought this up on the first page of last year's consciousness thread I hit a brick wall.
I was critiqued by Robin and Dancing David for assuming thinking.

http://www.internationalskeptics.com/forums/showthread.php?postid=5278679#post5278679

Now I am wondering: why do you get away with it?
Is it because you substitute the word "thinking" with "information processing",
and "creates itself" with "self-referential"?


This smacks of dishonesty.

Idolatry is rife amongst the so-called physicalists on this forum.

They are so attached to their thoughts they forget that they created them.:rolleyes:
 
So the question "Is XYZ conscious ?" is too imprecise to answer. It's like arguing over how many water molecules you need to make something wet.
However, that is what is implied when it is stated that consciousness is the emergent property of particles. And that's why I asked someone who stated this in the last consciousness thread to defend this proposition by giving us an indication of how many brain cells are required before consciousness emerges.

http://www.internationalskeptics.com/forums/showthread.php?postid=5284234#post5284234

Of course I got every excuse in the world in order to avoid answering this question, but it remains unanswered.

Either this question has an answer or consciousness is not reducible.
 
However, that is what is implied when it is stated that consciousness is the emergent property of particles. And that's why I asked someone who stated this in the last consciousness thread to defend this proposition by giving us an indication of how many brain cells are required before consciousness emerges.

Of course I got every excuse in the world in order to avoid answering this question, but it remains unanswered.

Either this question has an answer or consciousness is not reducible.

It's easy to come up with an answer, but there is no answer we can agree upon without having a definition we can agree upon. If you can explain objectively, and precisely, how you define consciousness, then maybe someone can make an estimate of how many brain cells or computer bits are necessary to achieve that.
 
It's easy to come up with an answer, but there is no answer we can agree upon without having a definition we can agree upon. If you can explain objectively, and precisely, how you define consciousness, then maybe someone can make an estimate of how many brain cells or computer bits are necessary to achieve that.
Also, what he said isn't true. We've gone over that many times, in terms of both neurons and transistors. Bottom line is that a minimal but clear consciousness would require a few hundred neurons, maybe twice as many transistors, but organisation and not simple quantity is key. If there are no internal feedback loops, it's not conscious.
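The "internal feedback loop" condition is at least structurally checkable: viewing a network as a directed graph of connections, a feedback loop is a cycle. A sketch, using standard depth-first cycle detection (the function name and the two toy networks are invented for illustration):

```python
def has_feedback_loop(connections):
    """Detect a cycle (feedback loop) in a directed graph given as
    {node: [nodes it feeds into]}, via DFS with white/gray/black coloring."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in connections}

    def visit(n):
        color[n] = GRAY  # n is on the current DFS path
        for m in connections.get(n, []):
            if color.get(m, WHITE) == GRAY:
                return True  # back edge to the current path: cycle found
            if color.get(m, WHITE) == WHITE and visit(m):
                return True
        color[n] = BLACK  # fully explored, no cycle through n
        return False

    return any(color[n] == WHITE and visit(n) for n in connections)

feedforward = {'in': ['h1', 'h2'], 'h1': ['out'], 'h2': ['out'], 'out': []}
recurrent   = {'in': ['h'], 'h': ['out', 'h'], 'out': []}  # h feeds back into itself
print(has_feedback_loop(feedforward))  # False
print(has_feedback_loop(recurrent))    # True
```

On the quoted claim, a structural cycle would be necessary but presumably not sufficient; what the activity around the loop is doing would still matter.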
 
It's easy to come up with an answer, but there is no answer we can agree upon without having a definition we can agree upon. If you can explain objectively, and precisely, how you define consciousness, then maybe someone can make an estimate of how many brain cells or computer bits are necessary to achieve that.

Please pay attention Orbini... :eye-poppi ...because here's the definition that PixyMisa has given (and just in case you haven't read much of the earlier parts of this thread he has given it on many occasions):

It's self-referential information processing

Pixy didn't explicitly deny that the entire ecosystem of Earth when viewed as a whole is conscious (or at least could be) and similarly with a brainf*ck self-interpreter. However he did go to the trouble of denying the other possibilities I put forward. So it seems he possibly does think Mother Earth is conscious and also a (running) brainf*ck self-interpreter but just doesn't want to come out and say it.

What I'm interested in is "the hard problem of consciousness" (as described by Chalmers). PixyMisa denies this problem exists. It seems that Pixy buys into Hofstadter's suggestion that phenomenal consciousness is some kind of hallucination generated by the brain for some reason (for the hallucination itself to "view"?), or perhaps even by the hallucination itself in some kind of "strange loop". In fact I think he believes this hallucination must also necessarily exist, although I'm not sure exactly why that should be. In any case I don't find the strange loop/hallucination idea at all convincing.

(Pixy, please feel free to expand on the details. I bought my copy of GEB back in the early 80's and yeah, it's a very interesting and nicely put together book, but it didn't seem to provide a clear explanation of consciousness that I could discern. And yes, I also eagerly bought and read Dennett's "Consciousness Explained" when it came out. Again, interesting insofar as it went, but ultimately it did not explain what I feel needs to be explained.)

So, is this illusion/hallucination of consciousness meant to provide some kind of useful causal effect? If the hallucination reduces to neurons firing, then can't other neurons just take the outputs of the neurons responsible for generating the hallucination (however they do that) directly? Or is the hallucination just an accidental side effect that comes along for the ride but does nothing useful?

I don't see consciousness as a black/white phenomenon. You can have boring, trivial, forms of consciousness all the way up to full human consciousness, and everything in between.

So the question "Is XYZ conscious ?" is too imprecise to answer. It's like arguing over how many water molecules you need to make something wet.
How about if you are the "XYZ" entity? Presumably it either experiences some form of phenomenal consciousness, or it doesn't. I do. Do you? So far as I understand his point of view, Pixy believes a running brainf*ck self-interpreter also has some kind of "hallucination" going on. Of course I don't expect him to explain exactly what that might be like (for the self-interpreter itself), but there is a difference between zero, and something more than zero.

What is the most boring and trivial form of phenomenal consciousness that you can conceive of? Do you think you fully understand what Chalmers "hard problem" is getting at? That really is the only part of any broader definition of consciousness that I am truly interested in at this point.
 
If there are no internal feedback loops, it's not conscious.
So does this statement mean that an internal feedback loop is all that is required in a neural network for your "self-referential" condition to be met? If not, what does "self-referential" actually mean in your definition? How would a neural network contain a representation of itself? Or does it just need to have a reference to the container ("body") that represents it or something like that?
 
Please pay attention Orbini... :eye-poppi ...because here's the definition that PixyMisa has given (and just in case you haven't read much of the earlier parts of this thread he has given it on many occasions):

I do pay attention, and I have read a lot of the earlier parts of this thread. Just the fact that PixyMisa has stated a definition doesn't mean we all agree on that definition. I also don't know if !Kaggen was referring to that definition in the question that I replied to.

So, is this illusion/hallucination of consciousness meant to provide some kind of useful causal effect?

I don't see consciousness as an illusion or hallucination. I see it purely as a property of sufficiently advanced computation about the environment (which includes the subject itself). The useful causal effect is survival.

For example, the sensation of "pain" causes us to pay attention to the source of the pain, and come up with a remedy to fix it, and improve chances of survival.
 
How about if you are the "XYZ" entity? Presumably it either experiences some form of phenomenal consciousness, or it doesn't. I do. Do you?
What makes you think it's a property that you either have or have not ? I'm pretty sure a dog has a form of consciousness, but not nearly as sophisticated as our own, but more advanced than that of a chicken. It's a gradual scale.

What is the most boring and trivial form of phenomenal consciousness that you can conceive of?

Does it matter ? It sounds like asking what the lightest color is that you can still call 'gray'.

Do you think you fully understand what Chalmers "hard problem" is getting at? That really is the only part of any broader definition of consciousness that I am truly interested in at this point.

I don't think even Chalmers understands what the "hard problem" is.
 
I think classical p-zombies are as possible as square circles. I think the closest thing one can get to a p-zombie IRL would be a being with a very low level of consciousness but a very high level of intellect, or a puppet under the direction of someone who is conscious. How one would go about uncovering them (absent some way to physically identify conscious activity from the 'outside') would vary depending on the zombie in question and the method(s) of trickery being used.

If that's the case, and true consciousness can be fairly accurately determined from an outside observer, then we can do science by 'just' figuring out how the brain works, without any need for subjective introspection.

But, in order to get to such a point, we need to converge our understandings of the 'inside' and 'outside' perspectives of consciousness, which necessarily includes introspective observation and theoretical insight. Right now, many are thinking of consciousness only in terms of what the brain is doing.

By way of analogy, this makes about as much sense as trying to understand human physiology in terms of an individual's web forum account and posting history. When you point out to certain individuals that physiology cannot be reduced to a user account they accuse you of account/body dualism and claim that concepts like biochemistry are 'incoherent', 'poorly defined' and therefore 'non-existent'. They then go back to discussing the arcane details of how we understand the dynamics of account settings and 'submit reply' and how there is no "hard problem of physiology" because organic bodies do not exist, there is only intrawebz - and so on...
 
But, in order to get to such a point, we need to converge our understandings of the 'inside' and 'outside' perspectives of consciousness, which necessarily includes introspective observation and theoretical insight. Right now, many are thinking of consciousness only in terms of what the brain is doing.

I would say, the research would involve interviews with test subjects to find out how they report their internal state. For instance, we could insert a probe into a neuron, apply a small electric pulse, and ask them what kind of sensation it causes (if any). In other words, we ask a test subject to perform an introspection, and explain to the researcher how it feels.

Also, it is very enlightening to examine patients with different kinds of brain impairments, and see how it affects their consciousness and behavior.

At some point in time, we may get to a point where we get enough understanding that we could undertake the task of building a computer with consciousness, and perform even better experiments.

These are standard and objective ways to do science.

I don't think there's any progress that can be made if the researcher sits in a comfy recliner, and performs some introspection on his own mind. People have tried that for many years, and we've made zero progress using that method.
 
I would say, the research would involve interviews with test subjects to find out how they report their internal state. For instance, we could insert a probe into a neuron, apply a small electric pulse, and ask them what kind of sensation it causes (if any). In other words, we ask a test subject to perform an introspection, and explain to the researcher how it feels.

Also, it is very enlightening to examine patients with different kinds of brain impairments, and see how it affects their consciousness and behavior.

At some point in time, we may get to a point where we get enough understanding that we could undertake the task of building a computer with consciousness, and perform even better experiments.

These are standard and objective ways to do science.

I don't think there's any progress that can be made if the researcher sits in a comfy recliner, and performs some introspection on his own mind. People have tried that for many years, and we've made zero progress using that method.

I suggest that the ideal subjects (aside from those with brain abnormalities) would be those who are skilled at meditative practices and introspection as their ability to self-examine and manipulate their internal states would be an asset.
 
I suggest that the ideal subjects (aside from those with brain abnormalities) would be those who are skilled at meditative practices and introspection as their ability to self-examine and manipulate their internal states would be an asset.

How would you know if they were really skilled at accurate self-examination ? There's no guarantee that many years of introspection and meditation increase the accuracy of the results.

I'd stick with regular people, but improve the quality by increasing the sample size.
 