Are You Conscious?

Are you conscious?

  • Of course, what a stupid question

    Votes: 89 61.8%
  • Maybe

    Votes: 40 27.8%
  • No

    Votes: 15 10.4%

  • Total voters
    144
OK Pixi, fair enough.

Let me take a different angle and I'll ignore our disagreement and inability to come to terms on qualia for the time being. You claim you've constructed "conscious" programs based on reflective techniques. I would agree it's a form of self-referential computation, but I doubt Hofstadter would consider it necessary or sufficient to yield a "strange loop" or consciousness. But that's just an aside; he's not the arbiter of truth here anyway. So tell me how you "know" it's conscious.

Now please consider a range of programs you may have written or could reasonably construct based on these techniques of various complexity and purpose. Are they all conscious to some degree and/or is there a minimum degree of work, purpose, and/or complexity you have to attain to know they're conscious? If so what is it? Can you eliminatively give me the simplest programming construct you "know" is conscious? Then succinctly explain to me how you know this.

Likewise, which animals on earth do you believe are conscious and which aren't and why?

OK, now let's consider unconscious levels of human mental processing. First, do you agree they exist, even in our awake state?

Let's make this argument harder for me and easier for you by assuming you don't agree. Would you agree that there are unconscious levels of human processing when we're asleep?

OK, let's make this argument even harder for me and easier on you and let's assume that you believe that dreams are a form of consciousness (after all, there is a form of self-awareness in many/most dreams), so I can't claim that we are unconscious much of the time we're asleep.

So now let me peel back the onion to those states of unconsciousness that exhibit no dreaming and/or where brain damage has disabled the ability to dream. Such people exist and appear to be reasonably normal and conscious when awake.

Even in this deep dark unconscious realm I could lay before you a raft of studies similar to the ones you argued before to map consciousness to physical observations that show not only that much of the brain remains functional and highly computationally active when unconscious BUT, more importantly for my argument with you, that many of these unconscious functional processes are known to exhibit and be based on recursive, recurrent, and other self-referential forms of computation, in some cases highly analogous if not identical in function to, and likely far more complex and powerful than, your reflective programs, probably by orders of magnitude. What prevents all those forms of mental processing from being perceived as conscious? If you're right, most of our unconscious neural processing should be conscious. How do you account for that and how does it "reflect" on the necessary and sufficient conditions for consciousness in your programs?
 
Oh Pixi,

I've got a few more follow-up questions for you relevant to my questions above. Do you know what "blindsight" is and how it has also been observed in correlated brain activity via various techniques? Now can you clearly and succinctly tell me the difference(s) in subjective "experience" between normal sight and blindsight? Finally, for a bonus question, can you tell me what the neural circuits objectively observed to work in normal sight but not blindsight do? (It's a bit of a trick question.)

Added in edit:

Pixi, I've been going back to read more of your posts and make sure I understand your views. If I missed where you may have answered or asserted the following claims, please forgive me. But based on my understanding of your views (particularly the low bar you have for self-referentiality) you must believe:

- the internet is conscious
- markets (e.g., stock markets) are conscious
- the ecosystem is conscious (like Gaia hypothesis)

Am I right? If not, why not?

Now, reflecting back on the human mind, did you agree that it consists of unconscious and conscious parts? Can you succinctly define what accounts for the boundary, or if there is no distinct boundary, the fact that there are two states at opposite extremes despite the fact that both types of processes rely on highly complex forms of self-referential hierarchical processing probably far more powerful and complex than any information processing system you've ever programmed?

Now, I'm assuming you'll have to say the bulleted things above are conscious to some degree or you're going to face some major inconsistencies in your arguments - especially if you think a car can be conscious. If so, tell me the means by which you would identify or find the boundary between consciousness and unconsciousness in those systems, or, if no distinct boundary exists, how one would objectively quantify their level of consciousness (which must be possible for you if all of this is materialistic and empirically accessible as you claimed earlier).

What would I have to take away from the internet, the stock market, or the ecosystem to render it from a conscious state to an unconscious one? And having done so, how will I really objectively observe that distinction in terms of the "public behavior" (to use the terms popular when I arrived here) of these systems?
 
OK Pixi, fair enough.

Let me take a different angle and I'll ignore our disagreement and inability to come to terms on qualia for the time being. You claim you've constructed "conscious" programs based on reflective techniques. I would agree it's a form of self-referential computation, but I doubt Hofstadter would consider it necessary or sufficient to yield a "strange loop" or consciousness. But that's just an aside; he's not the arbiter of truth here anyway. So tell me how you "know" it's conscious.
Well, a couple of different ways. First is that this is how we define consciousness: Consciousness is self-referential information processing.

We can reach this definition very quickly, though it does require cutting away a huge amount of nonsense. What the brain does is information processing. The brain produces consciousness. Therefore consciousness is some sort of information processing. But many tasks the brain performs are either not conscious or not represented in our consciousness. So what's the difference? The difference is self-reference, Hofstadter's strange loop.

And self-reference is dead easy to implement if you choose the right programming language. Indeed, as I noted earlier, it's a common programming technique. So there are lots of conscious computer systems around right now.
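For concreteness, here's a minimal sketch of the kind of thing I mean by a reflective program. This is a made-up toy in Python, not any particular system discussed in this thread, and all the names are invented for illustration; it just shows a program that inspects and reports on its own structure and its own recorded behaviour:

import inspect

class ReflectiveAgent:
    """A toy program that can examine its own structure and its own recorded behaviour."""

    def __init__(self):
        self.history = []  # the program's record of its own activity

    def act(self, stimulus):
        # Ordinary, outward-facing processing: respond to an input.
        response = f"processed({stimulus})"
        self.history.append((stimulus, response))
        return response

    def introspect(self):
        # The self-referential part: the program queries its own methods and
        # its own history, and builds a description of itself.
        own_methods = [name for name, _ in inspect.getmembers(self, predicate=inspect.ismethod)]
        return {
            "what_i_am": type(self).__name__,
            "what_i_can_do": own_methods,
            "what_i_have_done": list(self.history),
        }

if __name__ == "__main__":
    agent = ReflectiveAgent()
    agent.act("red square")
    print(agent.introspect())

Whether a loop that cheap deserves the word "conscious" is, of course, exactly what's being argued over in the rest of this thread; the sketch only shows how easy the self-referential mechanism itself is to build.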

Second is the same test we always use to check if a system is conscious: Ask it. SHRDLU is not a particularly complex or sophisticated program by modern standards, but I would like you to explain to me exactly what behaviour it is that is definitional to consciousness that SHRDLU does not display.

Now please consider a range of programs you may have written or could reasonably construct based on these techniques of various complexity and purpose. Are they all conscious to some degree and/or is there a minimum degree of work, purpose, and/or complexity you have to attain to know they're conscious? If so what is it? Can you eliminatively give me the simplest programming construct you "know" is conscious? Then succinctly explain to me how you know this.
See above. If you've read Hofstadter, then you know this already. It's self-reference. No self-reference, no consciousness. Self-reference, consciousness.

Not because self-reference is magical - it doesn't need to be, because consciousness isn't magical.

But because that's what we mean when we say conscious. We mean that we have access to our own mental processes. Self-awareness. Self-reference. Reflection. All those terms mean the same thing.

Likewise, which animals on earth do you believe are conscious and which aren't and why?
Interesting question. I don't know enough about invertebrate neurobiology to say at what level self-referential processing kicks in. Flatworms, probably not. The larger cephalopods, probably. And most vertebrates.

OK, now let's consider unconscious levels of human mental processing. First, do you agree they exist, even in our awake state?
Absolutely. Also, there is almost certainly more than one conscious process being carried out by the brain, awake or asleep. We happen to be one of them, and have only limited access to the others. (And that's not even considering abnormal psychology and neurology like split-brain patients, though they are a wonderful illustration of this.)

Let's make this argument harder for me and easier for you by assuming you don't agree. Would you agree that there are unconscious levels of human processing when we're asleep?
Yes.

More importantly, there are conscious levels of human mental processing when you are asleep. Even when you are under general anaesthesia, which is a significantly lower level of arousal than normal sleep. There's conscious processing going on pretty much up until the point of brain death. That doesn't mean you are conscious, though, in either of the most common senses.

OK, let's make this argument even harder for me and easier on you and let's assume that you believe that dreams are a form of consciousness (after all, there is a form of self-awareness in many/most dreams), so I can't claim that we are unconscious much of the time we're asleep.
Yep.

So now let me peel back the onion to those states of unconsciousness that exhibit no dreaming and/or where brain damage has disabled the ability to dream. Such people exist and appear to be reasonably normal and conscious when awake.
Yep.

Even in this deep dark unconscious realm I could lay before you a raft of studies similar to the ones you argued before to map consciousness to physical observations that show not only that much of the brain remains functional and highly computationally active when unconscious BUT, more importantly for my argument with you, that many of these unconscious functional processes are known to exhibit and be based on recursive, recurrent, and other self-referential forms of computation, in some cases highly analogous if not identical in function to, and likely far more complex and powerful than, your reflective programs, probably by orders of magnitude. What prevents all those forms of mental processing from being perceived as conscious?
They are perceived as conscious. By themselves. They're not perceived as conscious by you, simply because that information is not presented to your consciousness.

You're asking why doesn't the program running on computer A affect the operation of the program running on computer B. Answer: Computer A is not computer B.

If you're right, most of our unconscious neural processing should be conscious.
Yes.

How do you account for that and how does it "reflect" on the necessary and sufficient conditions for consciousness in your programs?
What, exactly, do I need to account for? There's more than one consciousness in your brain. You are not directly aware of what's going on in the others any more than you are directly aware of what's going on in my head.
 
Oh Pixi,

I've got a few more follow-up questions for you relevant to my questions above. Do you know what "blindsight" is and how it has also been observed in correlated brain activity via various techniques?
Yep.

Now can you clearly and succinctly tell me the difference(s) in subjective "experience" between normal sight and blindsight?
Yes. In blindsight, the information sharing across the conscious process is broken. Even more dramatically in split-brain patients. Self-reference doesn't stipulate that you have complete access.

Finally, for a bonus question, can you tell me what the neural circuits objectively observed to work in normal sight but not blindsight do?
Propagate data.

Pixi, I've been going back to read more of your posts and make sure I understand your views. If I missed where you may have answered or asserted the following claims, please forgive me. But based on my understanding of your views (particularly the low bar you have for self-referentiality) you must believe:

- the internet is conscious
No. The internet subsumes a multitude of conscious subsystems. That doesn't mean there is any overall co-ordinating consciousness.

- markets (e.g., stock markets) are conscious
No. Simple regulation is not self-reference. Where is the process aware of the processing?

- the ecosystem is conscious (like Gaia hypothesis)
Again, no, and for the same reason.

Am I right?
No.

If not, why not?
Where's the self-referential information processing? Please be specific with regards to both aspects.

Now, reflecting back on the human mind, did you agree that it consists of unconscious and conscious parts?
Sure.

Can you succinctly define what accounts for the boundary, or if there is no distinct boundary, the fact that there are two states at opposite extremes?
Unconscious in this sense is not at all the opposite of conscious; that's just something you've dragged along from the common definitions. Much less opposite "extremes", whatever that's supposed to mean.

Consciousness is just a set of additional behaviours afforded by self-reference.

Now, I'm assuming you'll have to say the bulleted things above are conscious to some degree or you're going to face some major inconsistencies in your arguments - especially if you think a car can be conscious.
No and no.

If so, tell me the means by which you would identify or find the boundary between consciousness and unconsciousness in those systems, or, if no distinct boundary exists, how one would objectively quantify their level of consciousness (which must be possible for you if all of this is materialistic and empirically accessible as you claimed earlier).
Consciousness is self-referential information processing. If a system is doing that, it's conscious. If not, not. If part of it is, that doesn't mean the whole is. If the whole is, that doesn't mean that any particular part of it is.

What would I have to take away from the internet, the stock market, or the ecosystem to render it from a conscious state to an unconscious one?
Well, since I disagree that any of these are necessarily conscious... Nothing.

And having done so, how will I really objectively observe that distinction in terms of the "public behavior" (to use the terms popular when I arrived here) of these systems?
What we're looking for there is rapid adaptive behaviour in the face of altered conditions, and the sharing and progressive modification of such behaviours. Considering how the stock market behaves, I'd say it's pretty conclusive that it's not conscious. The same goes for ecosystems generally.

The internet is a far more complex issue, and we have to be very careful with the fallacy of composition here. The internet includes conscious components, and subsystems and subnets that can plausibly be described as conscious, and interfaces with billions of conscious users. Unlike the stock market, those don't get squished down to a single behaviour that we can chart on a graph. But that still doesn't mean that the internet is conscious as a system.

Likewise it doesn't prove that it's not; I just don't see either a clear behavioural indicator or a clear mechanism.
 
I treat the question as a foundational one. If someone is unsure they're conscious, there's such a disconnect between what they believe and what I believe that further discussion would be useless. There are people here who actually think they might be P-zombies. How do you even begin a discussion of consciousness with a person like that?
I find "Hello" to generally work well.
 
Now you're simply being argumentative for argument's sake and silly to boot. Semantically there's no difference.
Yes, there is, and it's an important one, albeit subtle. If you go chasing after what things are rather than what they do, you end up down the rabbit hole like, well, certain posters.

Something can be what it does.
Absolutely. Indeed, that's the only way we can construct meaningful definitions. That's why I keep asking you what qualia do.

In fact, if it's a process, as we both agree consciousness is, then it's the only coherent semantic description, unless you want to fall deeper into reification in the abstract, which you essentially accused others of doing in the concrete (usually correctly).
Well, hey, that's just what I said, and that's just what you're doing with qualia.

As a rather respected AI/neuroscientist let me just say you've succeeded in convincing me you're either not conscious or crazy.
Don't even bother. And if you can say something that silly, I have doubts as to your self-description.

I doubt even your idol Dennett will take your back.
He's hardly my idol. Hofstadter, now, that's a different matter.

But if I'm wrong, please let us all know when you collect your Nobel.
I'll have to get in line behind, oh, at least a hundred thousand other programmers, though.

Oh really? After reading thousands of papers, how did I miss such a major discovery of the neural circuitry and underlying processes that show precisely how they compute? I must be a complete fool.
You said it, not me. You also said:

Even in this deep dark unconscious realm I could lay before you a raft of studies similar to the ones you argued before to map consciousness to physical observations that show not only that much of the brain remains functional and highly computationally active when unconscious BUT, more importantly for my argument with you, that many of these unconscious functional processes are known to exhibit and be based on recursive, recurrent, and other self-referential forms of computation, in some cases highly analogous if not identical in function to, and likely far more complex and powerful than, your reflective programs, probably by orders of magnitude. What prevents all those forms of mental processing from being perceived as conscious? If you're right, most of our unconscious neural processing should be conscious. How do you account for that and how does it "reflect" on the necessary and sufficient conditions for consciousness in your programs?

So apparently you need to go argue with yourself and sort out your own position before coming here and arguing with me.

Could you please point these strange loops out in detail and tell me exactly what their self-referential algorithm/connectionism is and how it propagates in the neural network? Or since you're a programmer who's created conscious robots, perhaps you can prove how your reflective programming yields self-awareness.
Okay, question for you: How can reflective programming not yield self-awareness? After all, that's how reflective programming is defined.
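(For anyone unfamiliar with the term: "reflection" in the programming-language sense just means the running code can query its own definition. A minimal, purely hypothetical Python illustration, saved and run as a script since inspect.getsource needs a source file:)

import inspect

def describe_myself():
    # Reflection in the programming-language sense: the running code
    # retrieves its own name and its own source text.
    me = describe_myself
    return f"I am {me.__name__}, and my definition is:\n{inspect.getsource(me)}"

print(describe_myself())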

In doing so remember not to be a hypocrite and carefully define your terms as per your Hume quote: "If any one alters the definitions, I cannot pretend to argue with him, until I know the meaning he assigns to these terms. - David Hume 1711-1776"
Done already, right up front.

Hmmm, seems to help prove my point above.
Do you have an answer or not?

You want to use the word? Define it. Operationally.

Not in my view, no, but that statement was made in the Wittgensteinian sense pertaining to information that is potentially present only to the I.
Okay, no wonder it's nonsense. There's no such thing.

You are trying to deny the existence of the subjective
Not at all. I don't know how you even got that idea.

or claim that it is completely isomorphic to objective observation.
Not at all. I don't know how you even got that idea.

Good luck with that. You haven't done anywhere near the level of both scientific and philosophical heavy lifting required to conclude this and neither has your love Dennett. In neuroscience circles the joke about Dennett's book Consciousness Explained was to call it "Consciousness Ignored"
Defining a problem out of existence is perfectly valid if the definition is correct. That it puts a lot of philosophers out of work (well, not really, but in any sensible world...) matters not the least to me, nor should it to you.

The difference between us is I don't deny they exist and I think their cause, though subconscious, is the more critical foundation of human thought, cognition, and awareness.
That's nice. But since you can't or won't define them, I have no idea what it is you think exists, so I can't engage you in discussion on the topic.

I actually meant to say as a result of data compression. But I also could have said they behave as a residual of data compression (which itself could be a self-inclusive and self-referential process). If you understand information theory then you shouldn't have a problem with this. Qualia would have all the residual information "squeezed out" and unavailable to Shannon criteria for further information sending.
Well, again, you're telling me what they're not. What do they do?

It's effortless to believe this if I accept what you implicitly seem to believe is consciousness, which is indistinguishable from the p-zombies I despise.
P-zombies are not conceptually coherent, where my definition is. Therefore they are distinguishable.

I believe it anyway, but not in your level of conception of consciousness.
Well, yes, but you have to come up with a coherent objection, not just fling strawmen about.
 
You were a mod there weren't you? I do remember you were treated quite fairly, even, in my opinion, rather favourably in one case where you suspended a guy you were having a disagreement with.

No, I don't remember that happening. In fact we had a policy whereby it was impossible for that to happen, so I'm not sure why you remember it either...

And actually I was originally the forum admin, not just a mod. ObscuredByClouds eventually took over that role because he was much more a techie than me.
 
Since I've been ragging on UE let me take his back for a change and back it up.

Prove to us that the qualia you perceive as blue doesn't appear to UE as red or that you feel hot pain exactly the same as he does.

That doesn't answer my question in any way, shape or form. Hell, it doesn't even have anything to do with it.

You'll find you have to rely on inference from identical architectural generation of qualia (which presumes materialism) which is a weak argument that tends to collapse if I ask you the same question with respect to some potentially alien or AI consciousness whose neural architecture is different.

Qualia? The only thing I have to rely on is behaviour.
 
No, the evidence you have that other people have experiences is that you do, and they seem to behave the same way - and toasters don't. There is nothing in public behaviour that indicates sensation except the assertion of sensation.

Or maybe it's the other way around. The evidence that you have that you have experiences is through comparing your behaviour with other people's.

We are unable to make or rebut the claim of experience for anything with certainty, except our own experiencer.

Solipsist nonsense.
 
As a rather respected AI/neuroscientist let me just say you've succeeded in convincing me you're either not conscious or crazy. I doubt even your idol Dennett will take your back. But if I'm wrong, please let us all know when you collect your Nobel.

Huh. Pray tell, how could you tell if a computer was conscious to your satisfaction?
 
Or maybe it's the other way around. The evidence that you have that you have experiences is through comparing your behaviour with other people's.

No, because nothing in the behaviour of other people is indicative that they are having experiences. The only reason that we assume that they have experiences is because we have experiences ourselves, and we tend to think that they will be the same as us. Explaining experience to something that did not have it would be entirely meaningless.

Initially, we do this in a naive way, and we think that because teddy has eyes and a smile, he's happy. Gradually we come to a more sophisticated understanding.

Solipsist nonsense.

I've noticed that this topic leads to people insisting on the truth of things that are highly dubious, and denying what is obviously true - and the more they think about it, the greater the tendency is.
 
I didn't suggest they were the same - I was making a distinction. UE had ascribed the computationalist viewpoint to Materialism - I was correcting the record.

I was, I think, agreeing with you.

Certainly computationalism does seem to imply the possibility of a disembodied consciousness.

I don't consider the RocketDodger concept of algorithms coming into existence when first discovered to be "materialism". Exactly what it is, I'm not sure.

OK, good point, I will have to give it some thought.

I've been trying to make this point for a long time now. Hasn't taken yet.
 
Pixi,

Thanks for the detailed response to my questions. It was extremely enlightening about your views and the way you think.

Frankly, I was floored by your response. I’ve never seen a seemingly very intelligent person twist their own tail in so many knots as I intend to point out. I’ll be watching to see if anyone comes to your defense since you care so deeply about pointing out who wins and loses debates. In this case, I think it’s clearly game over. To paraphrase you about UE, perhaps I haven’t won, but you’ve just lost.

Let’s start with your first response in your post: In fact, I’m going to dedicate this entire long post of mine to your first two sentences.

Well, a couple of different ways. First is that this is how we define consciousness: Consciousness is self-referential information processing.

Holy cow, Bingo, right off the bat… Game over. ;-)

Generally, when people like you define the problem by their conclusion, they always get the right answer. It’s a tautology. It’s also nonsense.

As you know, I agree with you that consciousness is self-referential processing. Well, wait, from reading the rest of your stuff I don’t. I believe consciousness is a form of self-referential processing. That’s a critical distinction - more on that later.

Do you realize the huge mistake in logical argument you’ve made here? It also violates the very process of empirical science you claim to champion. I’m going to assume you don’t see it and hope your blinders are not so tightly affixed that you might actually understand why. If not, I know some people who’d be happy to sell you something called “The Ontological Argument for God” – it’s based on the same tautological reasoning.

Consciousness is and was an observable phenomenon long before humans had any real understanding of computation and self-referential programming. In that sense it is a somewhat unique form of observable because long before Turing Machines were ever mentioned or understood (which put self-referentiality on deep formal footing for the first time via recursion less than a century ago), both ancient scientists and philosophers noticed it was a subjective observable, as opposed to observing the external environment that everyone can experience. I guess in order to avoid being accused of the same mistake as you, I need to point out that I’m assuming your materialist stance for objective reality to make things easier for me to explain to you and to prevent distracting tangents.

I can see now why you have such a hard time accepting the idea of qualia, or allowing for my fairly basic attempts to define it. Because to a large and possibly exclusive extent, qualia IS the subjective observable the ancients wanted to explain just as we do. That’s all it is. “Consciousness” didn’t magically appear on the scene only once somebody could define it as self-referential processing like you.

Am I getting through to you yet? I’ll assume no and try to give you a concrete example to think about. Let’s look at the process of digestion, something I think we’ll agree is a lot less controversial than consciousness. Why is it less controversial? Because from multiple perspectives it is safe to say we really deeply understand digestion, at least its most essential elements. It’s been explained. We have learned from painstaking science over centuries that digestion is the process of well-definable biochemical dissolution and absorption of well-definable nutrients conveyed from food in our gut to our bloodstream and the concomitant elimination of waste.

Am I entitled to use this explanation as a definition for digestion? Yes. Why? Because logic and empirical science have essentially proven it. All the essential gaps have been filled, yet it is always possible via science that we will discover that digestion actually does other things too (and in fact, it does).

And furthermore, before digestion was explained, it was still a purely observable phenomenon that could be defined categorically, if less completely, in a number of ways. In this sense, digestion can be defined as the sequence of events that can be observed as food entering the stomach, getting really mushy, moving to and getting more icky in something we call intestines as various bodily glands add juices, and then coming out as poop and pee. What else was happening in there? Before modern science all we could do was hypothesize.

Would Plato have been correct in arguing with Socrates centuries ago that the definition of digestion is as I first stated above (involving biochemistry)? No, because even if Plato had a decent intuition of what biochemistry might be there was then insufficient evidence to prove that the explanation could be taken as a definition. I’m sure there were many competing ideas at the time and they lacked the science and knowledge to resolve it.

As far as consciousness is concerned, the debate may be over in your mind but it clearly isn’t in the scientific community or in philosophy. Strange loops still have the status of hypothesis, not established theory or law. There is as yet no consensus that “strange loops” are an answer to consciousness much less the definition. There is no conclusive proof for it as a theory nor does it yet make abundant predictions capable of confirmation or deeper elucidation at present. We cannot yet even characterize or succinctly formalize all the necessary and sufficient conditions for something to act as a strange loop. We don’t understand the principles sufficiently to engineer AI of comparable conscious sentience to our own.

For all these reasons and many more, you’re not entitled to define consciousness purely as a strange loop yet, my friend, or by the more amorphous and incomplete concept of self-referentiality that takes many forms known and possibly unknown. I state this with confidence despite the fact that, like you, I believe it will be proven to be the cause and explanation eventually.
 
OK Pixi, I’m enjoying this, like I enjoy torturing kittens. Let me continue analyzing your posts.


First is that this is how we define consciousness: Consciousness is self-referential information processing.

We can reach this definition very quickly, though it does require cutting away a huge amount of nonsense. What the brain does is information processing. The brain produces consciousness. Therefore consciousness is some sort of information processing. But many tasks the brain performs are either not conscious or not represented in our consciousness. So what's the difference? The difference is self-reference, Hofstadter's strange loop.

….

They [unconscious processes] are perceived as conscious. By themselves. They're not perceived as conscious by you, simply because that information is not presented to your consciousness.


My bold. You’ve already agreed with me I think (??? What does that last batch of gobbledygook even mean???), or certainly at least have not contradicted, that unconscious processes also involve self-referencing computation. So I see no distinction. And even if I did see it, how do you prove that self-reference in the form of a strange loop is the only difference, and that it is both sufficient and necessary? There is still so much we don’t know about neural computation.

To focus on the gobbledygook part, you appear to suggest there that my unconscious, or at least big parts of it, are actually conscious. But somehow it’s not perceptible to me. What does that even mean? Who is “me”? Do I have split personalities now I’m not aware of? Which one of my conscious parts do I recognize when I wake up in the morning and which one(s) have been chatting with you? Didn’t you agree in earlier posts that to be conscious is to be self-aware and engaged in experience? Why do my unconscious parts require self-awareness and what are they doing with it if they’re not experiencing and sharing that awareness with “me”?

Pixi, come on man, this is incoherent. But let me simplify things a bit going forward. I’m not very interested right now in investigating your unsupportable claim that I have a conscious unconscious that “my” consciousness is unaware of (I’m laughing my ass off as I write this). The only part of consciousness I, scientists, and philosophers want explained is the part you call “me”/”you”/”I” that can experience, in a sentient and self-aware manner, seeing, thinking, and writing this response to you right now.

Second is the same test we always use to check if a system is conscious: Ask it. SHRDLU is not a particularly complex or sophisticated program by modern standards, but I would like you to explain to me exactly what behaviour it is that is definitional to consciousness that SHRDLU does not display.
Oh my, I suggest you write to Terry Winograd, a buddy of mine and a brilliant and wonderful fellow, and ask him – he created it.

You might want to check out what he and his associate Hubert Dreyfus (a philosophy professor who’s still at Berkeley I think) have written about consciousness first. Not only does it rip your arguments to shreds but they even present a significant and worthy challenge to mine.

But because that's what we mean when we say conscious. We mean that we have access to our own mental processes. Self-awareness. Self-reference. Reflection. All those terms mean the same thing.
No Pixi, no matter how many times you say it, Self-Awareness DOES NOT EQUAL Self-Reference. Though self-referencing, in perhaps many forms, is probably required to generate self-awareness, they are not the same.

Simple recursion is a form of self-reference. Is this program self-aware:
1. Print “hello”
2. GOTO 1

If you insist that it or anything like it is self-aware, then all I can say is that this is a form of self-awareness I don’t care about. It doesn’t explain how I experience my form of self-awareness.

You can hand-wave all you like and say my self-awareness is more complex or is an emergent property of several forms of your kind of self-awareness converging (how, on what?). But then you'd be leaving out the essential meat of the process and giving us nothing but conjecture.

It’s worth remembering what Aristotle said also, “there comes a point where a difference in degree becomes a difference in kind”. If you’re right, and I do believe many of your conjectures will one day be proven correct, I think it will require an explanation for Aristotle’s distinction in kind that you have not supplied and our best science has yet to deliver. We simply don’t know yet.


And finally, while I’m at it, I’ve got to get another nagging error of yours out of the way. You don’t even have Hofstadter right. He would not agree with you that self-reference = self-awareness. His “strange loop” IS not self-reference. It INVOLVES a special FORM of self-referencing to which he would probably concede he doesn’t know all the necessary or possible pieces of yet. Nor would he claim there may not be other computational components required to be added on to strange loops to yield consciousness. Hofstadter’s strange loops (which differ a little from mine but I won’t bore you) require self-referentialism of a form that creates recurrent cycling through hierarchical abstraction layers that collapse input and output upon each other in a paradoxical way. Furthermore, Hofstadter would admit that he doesn’t know what minimum properties and requirements of the hierarchical abstraction layers are necessary to generate self-awareness. In a century of neuroscience, we still do not have even one complete description of such a hierarchy in the brain even though most of us believe they exist and we think we have pieces of it. And when you discard your ridiculous tautological definition for consciousness you will realize you don’t know how to program these abstraction hierarchies either. When you figure it out, then I’d predict we’d have sentient robots soon after and you’d get your Nobel Prize.
 
I just made a thread on the science and mathematics forum that explains my model of consciousness. If you are interested in learning or discussing actual science please take a look.
 
Deja Vu...

Some of my favorite Pixy claims:

- Car engines, washing machines, and toasters are conscious
- Anesthetized patients are unconsciously conscious
- Self-aware thermostats
- Consciousness IS self referential something something. I kind of tune out at that point. I know the acronym is SRIP.

Others can feel free to add more. It's all hilariously nonsensical, more so since Pixy actually believes this stuff. I've enjoyed your critique so far. Did you say you're a neurologist?
 
I've enjoyed your critique so far. Did you say you're a neurologist?

Glad to hear it. Glad to know somebody found all that work entertaining.

I'm an odd duck.

I started as a nuclear physicist but moved to become a biophysicist and went into neuroscience in the 80s. I lived in the wet brain world for a while (I invented a neat little machine for studying a type of neuronal processing) and then moved over to artificial neural networks and AI research. Then I had a bunch of inventions related to AI/pattern recognition and became an entrepreneur and founded a bunch of companies, some very successful, some failures, and some still among the walking dead (a VC term for companies that employ people and do useful stuff but are unlikely to ever yield a return to investors).

I got entrepreneurial burnout and over the past 4 years I've been rediscovering my roots in basic science and goofing around in self-discovery too (exploring writing a book, doing a stand-up comedy act, engaging in philanthropy, etc.). Kids are almost grown, don't need more money, just curious and wondering again what I want to do when I grow up ;-).
 
