Explain consciousness to the layman.

I propose a fourth alternative--informational continuity. Specifically, there are certain kinds of things that only I am privy to, and I can develop memories of these sorts of things. In situations where those sorts of memories are genuine, I should have every right to claim that I remember being that person.

I think this is the critical invariant for continuity, and it dodges all three of your problems. For the Ship of Theseus issue, it really doesn't matter what molecules are in my body--it matters more what the mental states represent, and whether those states have a "correct" causal relationship to what they represent. As for the pattern issue, so long as the information gets carried over, it's not a problem if everything stopped--if you were frozen in carbonite and reconstituted, you would still have the right to claim to be the same person.

The cloning part of the infamous teleporter is the "tricky" one, but it's not really too bad if you think about it. This makes two bona fide separate individuals; each should care about the other just as they would their twin. Neither can claim to be the other, because neither is causally linked to the other's subjectivity in the right way. However, both can claim to have been the original, because they were both causally related in the appropriate way to their past counterparts.

This actually seems the least rational, as it does not require even material or processing similar to a human brain's, but merely sufficient understanding of the brain to extract all relevant information. If the teleporter aliens who torture In-goers do so to extract their ssssecretsss, does that make them you? I hope I'm not strawmanning when I answer "of course it doesn't, though they can seriously screw my credit rating at that point" for you, a response which is most easily justified by falling back on any of the other alternatives.
 
This actually seems the least rational, as it does not require even material or processing similar to a human brain's, but merely sufficient understanding of the brain to extract all relevant information.
Yes, it's a straw man. I pointed to another thread. Do you really want to discuss it here instead?
 
Sure. And if you honestly can't tell whether the robot is doing one or the other?

Then I can't really know. But if I perceive it as inanimate I will probably conclude it's not conscious. Why I see this threshold as inviolate, I don't know. I'm conscious, my dog's conscious, maybe a hamster's conscious, but a rock, no. I don't attribute consciousness to inanimate objects.

It is that spark of life that I perceive as irreducible, and it's OK if that puts me in the mystical camp. If I am not prepared to rigorously define life vs. not-life, I've crossed a border that allows for those non-material, non-computational consciousness theories. It just happens that consciousness is not for me the most interesting question.

Evolutionarily, self-awareness isn't the divider, nor even imagination. It's more like the ability to hold an idea in mind and apply imagination to manipulate our environment. Animals knew fire, but could they turn that familiarity into lore, pass the lore on, learn to make fire, invent uses for it? Never. This leap does not seem linear to me. Hence perhaps non-algorithmic.

The AI field could probably go far in replicating human logic in a robot. A tougher job might be programming it to mimic our illogical side. In any event, at this point it remains mimicry. Will it ever "think for itself"? I'd say no, but the other people on this thread are far more qualified than I to speculate.
 
No, the one implicitly includes the possibility of the other. (Though the fact that something is not explained does not mean that all explanations are possible).

Then why did you phrase it in a way that implies that they are exclusive?

I could. Maybe I even will.

"But I won't !"

The default position for anything is that it is not understood, until it is understood.

Interesting.

It's impossible because the model would need to be bigger than the universe.

Right. Practically impossible. Not theoretically.
 
Then I can't really know. But if I perceive it as inanimate I will probably conclude it's not conscious. Why I see this threshold as inviolate, I don't know. I'm conscious, my dog's conscious, maybe a hamster's conscious, but a rock, no. I don't attribute consciousness to inanimate objects.

It is that spark of life that I perceive as irreducible, and it's OK if that puts me in the mystical camp. If I am not prepared to rigorously define life vs. not-life, I've crossed a border that allows for those non-material, non-computational consciousness theories. It just happens that consciousness is not for me the most interesting question.

Evolutionarily, self-awareness isn't the divider, nor even imagination. It's more like the ability to hold an idea in mind and apply imagination to manipulate our environment. Animals knew fire, but could they turn that familiarity into lore, pass the lore on, learn to make fire, invent uses for it? Never. This leap does not seem linear to me. Hence perhaps non-algorithmic.

The AI field could probably go far in replicating human logic in a robot. A tougher job might be programming it to mimic our illogical side. In any event, at this point it remains mimicry. Will it ever "think for itself"? I'd say no, but the other people on this thread are far more qualified than I to speculate.

Yes, I was going to put this point of view myself once the thread had settled down a bit.

If we construct a living computer it will probably be conscious and have the potential to know something.
 
Yes, it's a straw man. I pointed to another thread. Do you really want to discuss it here instead?
Well, punshh found this thread too, so the signal to noise ratio is going to be crap in any case. I'll copy the posts over, but I'd like for you to explain how I'm wrong. You did specify information, not information-and-material or information-and-pattern. Although now I'm wondering if a "pick any two" approach might not be the ticket after all.
 
Here's a view from an old Scientific American article, retold from memory, explaining how we can judge consciousness.

There's some wasp that kills a spider and brings it to a hole where it's going to lay its eggs on it. It drags the spider to the entrance of the hole, then leaves it outside for a moment while it checks inside one last time, then comes back out and drags the spider in.

A pesky scientist moves the spider away from the hole while the wasp is checking inside. When the wasp comes out, it resets its instinctive program to pull the spider to the entrance of the hole, releases it, and checks inside again. The scientist repeats this interference, making the wasp repeat its hard-programmed procedure, seemingly forever. The wasp never sees the pattern. The conclusion is the wasp is not conscious. It's an automaton.

Where people differ is that we recognize loops, then break out of them. Most higher animals can do this to varying degrees. One might think we do this best.

We have awesome pattern detectors. Sometimes it seems like 90% of what the brain is for.

So we might say that if a robot could nearly always recognize it was repeating a pattern and then initiate a novel activity, that might show it was conscious. This would seem to me to require a massive amount of pattern detection, including detecting patterns in its own pattern detection. One level of recursion should be sufficient. Sounds computable to me.
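
To make that concrete, here's a rough Python sketch of the one-level-of-reflection idea. Everything in it is invented for illustration - the class name, the habit/repertoire split, the exact-match cycle test - it's not an established algorithm, just the shape of "watch your own actions, spot a cycle, do something else".

Code:
from collections import deque
import random

# Hypothetical sketch: an agent that watches its own recent actions for a
# repeating cycle and, when it finds one, breaks out with a novel action.
class LoopBreakingAgent:
    def __init__(self, habit, repertoire, window=12):
        self.habit = habit                    # the fixed "sphexish" routine: observation -> action
        self.repertoire = repertoire          # alternative actions to try when stuck
        self.history = deque(maxlen=window)   # short memory of our own recent actions

    def _in_loop(self):
        # One level of reflection: scan our own action history for a repeated cycle.
        h = list(self.history)
        for period in range(1, len(h) // 2 + 1):
            if h[-period:] == h[-2 * period:-period]:
                return True
        return False

    def act(self, observation):
        if self._in_loop():
            # We caught ourselves repeating: initiate a novel activity instead.
            action = random.choice(self.repertoire)
        else:
            action = self.habit(observation)
        self.history.append(action)
        return action

# The wasp, by contrast, is just `habit` with no history and no _in_loop check:
# every time the scientist moves the prey, it re-runs "drag to entrance, inspect hole".

The "massive amount of pattern detection" a real system would need all lives in _in_loop; here it's just exact-match cycle detection, but that's enough to show the loop-breaking test itself is computable.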

The article ended with the observation that people often tend to behave in repetitive loops, like in serial failed relationships, and are therefore largely unconscious of such things. Indeed, debates about the nature of consciousness are infuriatingly repetitive. Each time we are about to pull it into the computable zone, someone drags it back out to the hard problem zone.
 
Then why did you phrase it in a way that implies that they are exclusive?

My statement that consciousness is unexplained is in conflict with the people who claim that there is an explanation, and that it is SRIP - computation - and that there is no need to consider anything else. It is not in conflict with people who think that this is a possible or even probable explanation.
 
Here's a view from an old Scientific American article, retold from memory, explaining how we can judge consciousness.

There's some wasp that kills a spider and brings it to a hole where it's going to lay its eggs on it. It drags the spider to the entrance of the hole, then leaves it outside for a moment while it checks inside one last time, then comes back out and drags the spider in.

A pesky scientist moves the spider away from the hole while the wasp is checking inside. When the wasp comes out, it resets its instinctive program to pull the spider to the entrance of the hole, releases it, and checks inside again. The scientist repeats this interference, making the wasp repeat its hard-programmed procedure, seemingly forever. The wasp never sees the pattern. The conclusion is the wasp is not conscious. It's an automaton.
Yes! I remember that, via Stephen Jay Gould, who called this class of behaviour sphexish, after the genus Sphex, the digger wasp.

So we might say that if a robot could nearly always recognize it was repeating a pattern and then initiate a novel activity, that might show it was conscious. This would seem to me to require a massive amount of pattern detection, including detecting patterns in its own pattern detection. One level of recursion should be sufficient. Sounds computable to me.
Not recursion exactly, but reflection.

The article ended with the observation that people often tend to behave in repetitive loops, like in serial failed relationships, and are therefore largely unconscious of such things. Indeed, debates about the nature of consciousness are infuriatingly repetitive. Each time we are about to pull it into the computable zone, someone drags it back out to the hard problem zone.
But the fact that we are aware of the problem (and the meta-problem) makes it computable again. ;)
 
Yes! I remember that, via Stephen Jay Gould, who called this class of behaviour sphexish, after the genus Sphex, the digger wasp.

That's it! I googled sphexish and found the correct details here. It was a cricket, not a spider, but I got most of it right.

A reference to the article is here; it was in Douglas Hofstadter's column Metamagical Themas, 1985.

So far as I know, the term [sphexish] hasn’t appeared in any dictionary, but it has some circulation among behavioural psychologists. Daniel Dennett created the related noun sphexishness in 1984. Hofstadter coined antisphexishness in his book for the opposite state: free will.
 
Here's a view from an old Scientific American article, retold from memory, explaining how we can judge consciousness.

There's some wasp that kills a spider and brings it to a hole where it's going to lay its eggs on it. It drags the spider to the entrance of the hole, then leaves it outside for a moment while it checks inside one last time, then comes back out and drags the spider in.

A pesky scientist moves the spider away from the hole while the wasp is checking inside. When the wasp comes out, it resets its instinctive program to pull the spider to the entrance of the hole, releases it, and checks inside again. The scientist repeats this interference, making the wasp repeat its hard-programmed procedure, seemingly forever. The wasp never sees the pattern. The conclusion is the wasp is not conscious. It's an automaton.

Where people differ is that we recognize loops, then break out of them. Most higher animals can do this to varying degrees. One might think we do this best.
We have awesome pattern detectors. Sometimes it seems like 90% of what the brain is for.

So we might say that if a robot could nearly always recognize it was repeating a pattern and then initiate a novel activity, that might show it was conscious. This would seem to me to require a massive amount of pattern detection, including detecting patterns in its own pattern detection. One level of recursion should be sufficient. Sounds computable to me.

The article ended with the observation that people often tend to behave in repetitive loops, like in serial failed relationships, and are therefore largely unconscious of such things. Indeed, debates about the nature of consciousness are infuriatingly repetitive. Each time we are about to pull it into the computable zone, someone drags it back out to the hard problem zone.

You've never read a DOC or yerreg thread then?
 
... I'm sure there will be plenty of people claiming that if you are rational, knowing that an exact duplicate exists means you shouldn't mind being vapourised.
Not really - not only would you not be exact duplicates for very long after duplication, but during that period it would be reasonable to suppose you would both have apparent continuity of experience, and therefore an identical and rational fear of death, just as the original did. Also, when you each see the other and come to rationalize your situation, your apparent continuity of experience means you would both initially feel you were the original - and certainly, at the point one of you realizes he really is/isn't the original, you are no longer exact duplicates... :)
 
My statement that consciousness is unexplained is in conflict with the people who claim that there is an explanation,

I don't automatically see the conflict either. Something can be unexplained so far while still having an explanation that will come to light only when there's more information.

And I'm one of those folks sympathetic to non-materialist musings.
 
In the Churchland article he coaches the reader on how to see impossible colors. The article is a PDF from a print article. Does anyone here know if the paper/ink experience can be applied directly to the PDF/computer screen experience?
I don't think the full effect can be appreciated on a normal monitor because it is not passively illuminated, so at the least, you're unlikely to see the hyperbolic colours. I suspect an accurately printed colour plate is necessary to get the full effect.

I tried fatiguing my cones, but I'm not sure it worked.
AIUI, it's not cone fatigue but fatigue of the mid-layer opponent neurons that causes the significant effects.
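
For what it's worth, here's a toy Python sketch of that opponent stage - the function name and the round-number weights are my own illustration, not measured physiology, but it shows the recoding whose adaptation drives these effects.

Code:
# Toy sketch: cone-like RGB inputs recoded into red-green, blue-yellow and
# luminance opponent channels. Weights are illustrative, not physiological.
def opponent_channels(rgb):
    r, g, b = rgb                       # components assumed in [0, 1]
    red_green = r - g                   # positive = reddish, negative = greenish
    blue_yellow = b - (r + g) / 2.0     # positive = bluish, negative = yellowish
    luminance = (r + g + b) / 3.0
    return red_green, blue_yellow, luminance

# Staring at a saturated patch drives one channel hard in one direction;
# adaptation then biases its output the other way, which is what the
# afterimage / impossible-colour demonstrations exploit.
print(opponent_channels((1.0, 0.0, 0.0)))   # -> (1.0, -0.5, 0.33...)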
 
Then I can't really know. But if I perceive it as inanimate I will probably conclude it's not conscious. Why I see this threshold as inviolate, I don't know. I'm conscious, my dog's conscious, maybe a hamster's conscious, but a rock, no. I don't attribute consciousness to inanimate objects.
It's interesting that, in general, people will often anthropomorphise inanimate objects (not just 'pet' rocks), and many not involved in computing or robotics seem prone to treating robots as conscious or semi-conscious beings. If people become emotionally attached to their Roomba cleaning robots, there shouldn't be a problem with robot AIs - as long as they avoid the 'uncanny valley'.
 
Where people differ is that we recognize loops, then break out of them. Most higher animals can do this to varying degrees. One might think we do this best.
I suspect that's pretty much why consciousness first evolved - to recognise situations for which static routines are inefficient or inadequate, and to redirect activity; flexibility of behaviour.
 
That's it! I googled sphexish and found the correct details here. It was a cricket, not a spider, but I got most of it right.

A reference to the article is here; it was in Douglas Hofstadter's column Metamagical Themas, 1985.
Thanks. Looks like my memory is not perfect. Useful, but imperfect.

Not really a surprise, that.
 
