• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

My take on why the study of consciousness may indeed not be so simple

PixyMisa, I'd love to be able to engage you in some meaningful discussion but, like I told you before, I'm not going to slog through your ideological baggage in an attempt to do so. At this point, all you've managed to do is thoroughly convince me that you're an intellectually shallow ideologue with no interest in an actual discussion. When you're ready to have one let me know. Until then stop wasting my time.

I think that the word qualia is very helpful in this discussion. Say "qualia are contradictory and nonsensical" and it sounds reasonable. Say "the concept of feeling pain is meaningless" and people won't take you nearly as seriously.
 
I really don't know what you are talking about or what your position is anymore. Your default reply to anyone who says that computers behave as if they are conscious is some snarky remark about how obvious it is that chat bots aren't human. Well so what? Are you saying that in order for something to be conscious, it must first convince you that it is human? What kind of ridiculous non sequitur is that? Why is 'ability to pretend to be a human' a requisite for awareness?

Let me just remind people of a point of information. The reason it's called the "Turing test" rather than the "Westprog test" is because I didn't make it up. In fact, it's older than I am. So the people who find it so objectionable should take that up with the people who find it the final word in the matter. I do not.

However, it's interesting that the argument has developed now to the point where the proponents of computer consciousness don't think that any test is necessary. Proving that consciousness exists in computers is now unnecessary. We don't need to identify or define consciousness, we don't need to test it at all - we just affirm its presence.
 
...snip...

However, it's interesting that the argument has developed now to the point where the proponents of computer consciousness don't think that any test is necessary. Proving that consciousness exists in computers is now unnecessary. We don't need to identify or define consciousness, we don't need to test it at all - we just affirm its presence.


If that is the case you should be quite pleased as it would mean they have come around to your way of thinking!
 
Below I'm going to try to lay out my line of reasoning:

[P1] - Not all phenomena are computations, although abstracted features of them may be mathematically modeled and simulated on computer systems.
Yes, absolutely.

[P2] - Computational simulations are not identical to the systems they're modeling. They only provide abstracted descriptions to aid in our understanding of them.
Yes and no. The no part is important. An accurate computational simulation of a physical computational system by definition produces identical results.
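The "yes" part of that reply can be made concrete with a toy sketch (my own illustration, not from the thread; the automaton, transition table, and function names are all invented for the example): running a simple state machine "natively" and simulating it step by step through a generic interpreter yield exactly the same result.

```python
# Illustration (not from the thread): an accurate simulation of a
# computational process reproduces its results exactly. Here the
# "physical system" is a tiny 2-bit counter automaton, and the
# "simulation" is a generic interpreter stepping through the same
# transition table.

# Transition table for a 2-bit counter: state -> next state
TRANSITIONS = {0: 1, 1: 2, 2: 3, 3: 0}

def run_directly(state, steps):
    """Run the counter 'natively'."""
    for _ in range(steps):
        state = TRANSITIONS[state]
    return state

def simulate(table, state, steps):
    """A generic interpreter that simulates any such automaton,
    recording the full trace of states it passes through."""
    trace = [state]
    for _ in range(steps):
        state = table[state]
        trace.append(state)
    return state, trace

direct = run_directly(0, 5)
simulated, trace = simulate(TRANSITIONS, 0, 5)
# The simulation is indistinguishable from the original in its results.
assert direct == simulated
print(direct, trace)  # -> 1 [0, 1, 2, 3, 0, 1]
```

The "no" part is then the whole dispute: this identity of results only follows if the thing being simulated is itself a computation, which is exactly what is contested for the brain.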

[P3] - We know from the example of our own biology that processing information [even self-referencing IP] does not necessarily equate with there being a subjective component.
No we don't.

Whatever consciousness is, it's a product of the physical conditions of the brain/body.
This goes without saying, but it tells us nothing at all.

[P4] - Right now neuroscientists are still hard at work trying to understand what it is about brain activity that produces what we call consciousness and the subjective experiences that accompany it. Until we have such an understanding we have no way of technically designing such a feature into our current technology.
Our computers are conscious right now.

[C] - Given the above, it is not justified to assume that computational simulations of brain function will necessarily produce consciousness.
Complete and utter nonsense. It's not just that some of your premises are wrong or that this conclusion does not follow from your premises in any case - though some of your premises are wrong and your conclusion doesn't follow - but that this is completely and utterly impossible.

Computational simulations of brain function, if they are accurate, must produce consciousness.

The brain is a physical system.

The mind is produced by the brain.

A simulation of the brain will reproduce in the simulation the physical conditions of the brain. That will reproduce in the simulation the conscious mind. To assert otherwise is to say "No, this physical condition, unlike all others, is magical and inviolate".

It's nonsense.

Because it's an instinctive feature of humans to identify with the external behaviors of others and equate those behaviors with our own consciousness [humans with conditions like autism seem to lack this instinct]. But, like all instincts, it is not necessarily accurate. For example, objects of roughly a particular size and shape can fool many species of birds into treating them like their own eggs, even though they aren't eggs at all. Likewise, human intuitions about what behaviors indicate consciousness can be fooled.
This is likewise nonsense, and goes to the key problem with the notion of p-zombies: If you can introspectively analyse, explain, and modify your own behaviour, you are conscious.
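Taken purely mechanically, "analyse and modify your own behaviour" is easy to instantiate in a trivial form. A toy sketch (my own illustration with invented names; nobody in the thread claims this toy is conscious) of a thermostat-like controller that records its own decisions, inspects that record, and adjusts its future behaviour:

```python
# Minimal self-referential information processing (illustration only;
# whether this loop has anything to do with consciousness is exactly
# what the thread disputes). The system records its own behaviour,
# inspects the record, and modifies its future behaviour accordingly.

class SelfTuningController:
    def __init__(self, threshold):
        self.threshold = threshold
        self.history = []          # a record of its own past decisions

    def decide(self, reading):
        action = "heat" if reading < self.threshold else "idle"
        self.history.append((reading, action))
        return action

    def introspect(self):
        """Analyse own behaviour: what fraction of decisions were 'heat'?"""
        if not self.history:
            return 0.0
        heats = sum(1 for _, a in self.history if a == "heat")
        return heats / len(self.history)

    def adjust(self):
        """Modify own behaviour based on the introspective report."""
        if self.introspect() > 0.5:   # heating too often -> lower threshold
            self.threshold -= 1

c = SelfTuningController(threshold=20)
for r in [15, 16, 25, 17]:
    c.decide(r)
c.adjust()          # 3 of 4 decisions were "heat", so the threshold drops
print(c.threshold)  # -> 19
```

The open question either side of the debate would press is whether this kind of self-referential loop is sufficient for consciousness or merely a necessary ingredient.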
 
If that is the case you should be quite pleased as it would mean they have come around to your way of thinking!

I hardly think so. I regard the test for consciousness as being ad hoc, but remarkably reliable, as applied to human beings. We can distinguish a conscious from an unconscious human, or a wax figure, remarkably easily. We can do it down a phone line. We can do it on this forum. What is being proposed is that this reliable test be abandoned and not replaced with anything at all. Well, replaced with an assertion that "Our computers are conscious right now".
 
I hardly think so. I regard the test for consciousness as being ad hoc, but remarkably reliable, as applied to human beings. We can distinguish a conscious from an unconscious human, or a wax figure, remarkably easily. We can do it down a phone line. We can do it on this forum. What is being proposed is that this reliable test be abandoned and not replaced with anything at all.

And you confirm my statement was correct - all you are doing is asserting, so if that is all that others are now doing they have come around to your way of "testing" or "defining" consciousness.
 
And you confirm my statement was correct - all you are doing is asserting, so if that is all that others are now doing they have come around to your way of "testing" or "defining" consciousness.

The version of "consciousness" that I'm referring to is based on the concept as generally understood. I'm not plucking this stuff out of the air.

If I say that consciousness is something that human beings recognise in each other, and find in themselves through introspection, I'm merely describing what consciousness is. There's nothing to stop anyone making up their own definition - "consciousness is self-referential information processing", or "consciousness is when one bus overtakes another". However, they won't be describing consciousness as generally understood.

The vague description (it's not tight enough to be a definition) of consciousness that I use does not preclude any particular explanation of consciousness, but I suppose that's not good enough. A definition that enfolds the explanation is what's wanted, discussion over.
 
The version of "consciousness" that I'm referring to is based on the concept as generally understood. I'm not plucking this stuff out of the air.
No, but you are waving your hands a lot.

If I say that consciousness is something that human beings recognise in each other, and find in themselves through introspection, I'm merely describing what consciousness is. There's nothing to stop anyone making up their own definition - "consciousness is self-referential information processing", or "consciousness is when one bus overtakes another". However, they won't be describing consciousness as generally understood.
So what, exactly, is present in the "generally understood" concept of consciousness that isn't in self-referential information processing?

You say they are not equivalent. What, exactly, is the difference?

The vague description (it's not tight enough to be a definition) of consciousness that I use does not preclude any particular explanation of consciousness, but I suppose that's not good enough.
No, it's not.

A definition that enfolds the explanation is what's wanted, discussion over.
Already given. Over to you.
 
In principle I think such a thing is possible. We have plenty of automated systems now that can manage themselves to some degree without human intervention; but the kicker is that they were intentionally designed by conscious agent(s) to perform the tasks they carry out.

Irrelevant. _I_ was conceived by my parents but it doesn't stop me from doing science. Ditto for computers. Therefore computers CAN do science.

Because of what we've learned from our own biology and neurophysiology we know that just because a system is processing information does not mean that there's a subjective component to it. The overwhelming majority of information processing in our own bodies is completely unconscious [i.e. no subjective component]. This clearly demonstrates that information processing IAOI is not the same as, or even necessarily produces, conscious thought.

You DO know that you're still processing information when you're unconscious, right?

How can one hold empirical science to be true and reliable when they doubt the very basis of empiricism?

I'm not. We have excellent working assumptions in that regard. I'm simply pointing out that assuming that this basis is 100% certain while all the rest is not is special pleading.

With that being said, this establishes that there is a difference between a phenomenon and a computational model of said phenomenon.

Only if the phenomenon itself is not pure computation already, yes?

[P3] - We know from the example of our own biology that processing information [even self-referencing IP] does not necessarily equate with there being a subjective component.

I'm not sure we "know" this.

Because it's an instinctive feature of humans to identify with the external behaviors of others and equate those behaviors with our own consciousness

So you are saying that you rely on observable BEHAVIOUR to conclude that other humans are conscious. And yet you wouldn't with a computer? But if behaviour is the only thing you have, and a hypothetical computer displays those behaviours...

I'm certain that I'm experiencing my experiences for the same reason I am certain 1+1=2: Because it's unequivocally -demonstrated- to me that it is so.

See above.
 
However, it's interesting that the argument has developed now to the point where the proponents of computer consciousness don't think that any test is necessary. Proving that consciousness exists in computers is now unnecessary. We don't need to identify or define consciousness, we don't need to test it at all - we just affirm its presence.

That is a lie, Westprog. Some people here have tried to define tests and arguments about computer consciousness since the thread began.
 
AkuManiMani said:
In everyday life we must make the distinction between events that are initiated with conscious intent and those that aren't.

Quick question... what is "conscious intent"?

Do you mean to convey that our conscious minds create the intent, or that we have an intent that we are aware of?

If it's the former, by what means? (And does that really fit our experience? It doesn't fit mine...)

And if it's the latter, could we have intent for things we're not conscious of? And if so, would there really be a difference between said intent and a computer, even if the computer had no "subjectivity" about it?

I was trying to emphasize that deliberate actions are initiated by conscious subjects with a perceived goal they wish to fulfill. To be honest, I can't think of any meaningful definition of intent that didn't necessarily imply conscious goals. Even so, I decided to say "conscious intent" because experience has taught me that I can't assume that others here have the same understanding of the word.

(FYI, I'm clumsily trying to avoid the word "intention", which means something else).

It's okay, you can use "intention"; I think "intension" with an "s" is the word you're trying to avoid :)
 
AkuManiMani said:
In principle I think such a thing is possible. We have plenty of automated systems now that can manage themselves to some degree without human intervention; but the kicker is that they were intentionally designed by conscious agent(s) to perform the tasks they carry out.

Irrelevant. _I_ was conceived by my parents but it doesn't stop me from doing science. Ditto for computers. Therefore computers CAN do science.

Remember when I pointed out that it's possible for computers to do science? My only stipulation was that they have conscious intent to carry out their tasks and an understanding of what they're doing; and it so happens that humans are an example of such computers. My point is that right now we've no scientific knowledge of how our own brains produce consciousness and therefore, no technical means of designing it into our current generation of computers.

Even if we do design conscious computers, if they still have no understanding of what they're doing and why then they're not "doing science" any more than an animal lab subject is "doing science".

AkuManiMani said:
Because of what we've learned from our own biology and neurophysiology we know that just because a system is processing information does not mean that there's a subjective component to it. The overwhelming majority of information processing in our own bodies is completely unconscious [i.e. no subjective component]. This clearly demonstrates that information processing IAOI is not the same as, or even necessarily produces, conscious thought.

You DO know that you're still processing information when you're unconscious, right?

Uhm...Yea, that's kinda my point :confused:

AkuManiMani said:
How can one hold empirical science to be true and reliable when they doubt the very basis of empiricism?

I'm not. We have excellent working assumptions in that regard. I'm simply pointing out that assuming that this basis is 100% certain while all the rest is not is special pleading.

The point is that "all the rest" cannot have any certainty unless the fundamental basis [i.e. our -experience- of "all the rest"] is certain.

AkuManiMani said:
With that being said, this establishes that there is a difference between a phenomenon and a computational model of said phenomenon.

Only if the phenomenon itself is not pure computation already, yes?

Okay, moving along then...

AkuManiMani said:
[P3] - We know from the example of our own biology that processing information [even self-referencing IP] does not necessarily equate with there being a subjective component.

I'm not sure we "know" this.

Of course we do. You just pointed this fact out yourself:

Belz...: "You DO know that you're still processing information when you're unconscious, right?"


AkuManiMani said:
Because it's an instinctive feature of humans to identify with the external behaviors of others and equate those behaviors with our own consciousness

So you are saying that you rely on observable BEHAVIOUR to conclude that other humans are conscious. And yet you wouldn't with a computer? But if behaviour is the only thing you have, and a hypothetical computer displays those behaviours...

I'm saying it's the best measure evolution could provide for us in this area but it still isn't foolproof. It's this very same instinct that causes children to be deceived into thinking puppets and animatronic toys are alive and sentient. The goal is to scientifically pin down the physical process that necessarily indicates consciousness, not fall back on an unreliable instinctive response that our primate ancestors have been stuck with for eons.
 
To be honest, I can't think of any meaningful definition of intent that didn't necessarily imply conscious goals.
Same question. What's a conscious goal? Is it a goal that your consciousness produces, or a goal that you become conscious of?
It's okay, you can use "intention"; I think "intension" with an "s" is the word you're trying to avoid :)
No, they both have special meaning. IntenTionality is aboutness (the fact that I can be talking about a lamp at all). An intenSion is a concept that has an extenSion (intenSion is the concept of a lamp--its extension is the actual lamp). And what we're talking about is altogether different--it's intent in the sense of what you were planning to do.
 
If thermostats are conscious, should they be given moral rights? If a brand of thermostat is about to be destroyed entirely, should we make efforts to prevent its extinction like with The Endangered Species Act?
 
If thermostats are conscious, should they be given moral rights? If a brand of thermostat is about to be destroyed entirely, should we make efforts to prevent its extinction like with The Endangered Species Act?

In the real world we do not make decisions about destroying something based on whether we think it is conscious or not.
 
AkuManiMani said:
To be honest, I can't think of any meaningful definition of intent that didn't necessarily imply conscious goals.

Same question. What's a conscious goal? Is it a goal that your consciousness produces, or a goal that you become conscious of?

Hm...I'd have to say the former. The latter example sounds like an unconscious instinct or drive.

No, they both have special meaning. IntenTionality is aboutness (the fact that I can be talking about a lamp at all). An intenSion is a concept that has an extenSion (intenSion is the concept of a lamp--its extension is the actual lamp). And what we're talking about is altogether different--it's intent in the sense of what you were planning to do.

Oki, gotcha :)
 