
My take on why the study of consciousness may indeed not be so simple

AkuManiMani said:
What is the difference between the "computational things" conscious beings do and the "computational things" unconscious things do?

I don't know - I guess there is no difference between the Microsoft Windows kernel and Microsoft Word. They are both computational, so obviously they are exactly the same sort of thing. :rolleyes:

Functionally, a Windows kernel and MS Word are very different. Provided they're running on the same hardware, physically they are the same.

Doing or not doing consciousness.

And what, by your definition, is consciousness?
 
PixyMisa said:
To the rest of the Universe, you're a consciousness in potentia. But to you, you're real. If we reassemble the components and plug you in, you will experience no loss of continuity. We recorded you, played you back on components drifting through space, reassembled you, plugged you in to a new body, and off you go. To you, one single, continuous, conscious experience.
But we can't reassemble the far-flung processors into the primary simulation processor, because they don't have the same architecture. We can't even upload their partial states into the primary processor. Instead, we would have to record the entire state of the simulation in each probe and then, sometime later, upload the last probe's state into the primary simulation processor. We would then have continuity, but trivially so.

I think this exposes a problem with our parcelled/stepped/distributed brain simulation. It has something to do with active vs. passive consciousness. Maybe.

~~ Paul
 
physically they are the same.

And does it make a difference when I run on a different box?

And what, by your definition, is consciousness?

Wiki has: "Consciousness is subjective experience or awareness or wakefulness or the executive control system of the mind," which I can work with.
 
Qualia are not magic and they don't exist, but they do happen. They are descriptive of our observations.
Descriptive how? A description defines one thing in terms of other things. Qualia don't do that at all. That leads people to claim that they are "elementary", which is utter nonsense.
 
But we can't reassemble the far-flung processors into the primary simulation processor, because they don't have the same architecture. We can't even upload their partial states into the primary processor. Instead, we would have to record the entire state of the simulation in each probe and then, sometime later, upload the last probe's state into the primary simulation processor. We would then have continuity, but trivially so.
I agree - that's just what I was thinking after my last post.

I think this exposes a problem with our parcelled/stepped/distributed brain simulation. It has something to do with active vs. passive consciousness. Maybe.
Yes. Not sure it invalidates the general idea, but it reduces the "record/playback" scenario to a tautology.
 
No, it is not silly, because for all the effort, no computer has come anywhere near passing the Turing test, after half a century of trying.

What has that got to do with how a computer looks?

So any discussions about what we'll do when a computer does do that should really be put on hold.

And you should stop confusing the specific computing hardware with the rather more important question of what is being computed.

Computers not passing the Turing test is hardly surprising to anyone familiar with the state of AI - it is far too primitive to deal with the level of complexity involved in understanding language, culture, and so on for anyone to seriously think it is at a human level. The programs that compete in Turing challenges today are toys that use tricks to try to fool the judges rather than real attempts to create software with human understanding. As such, they fail spectacularly.
 
In the same way that a stone knows how to respond when you poke it with a stick. It doesn't need to be conscious.
Stones don't say "ouch". Stones don't, when asked "Why did you say ouch?", reply "Because you poked me with a stick." Not merely aware of the outside world in a way that stones aren't, but able to reference their own internal representation of the outside world in order to answer introspective, intentional questions.

In other words, p-zombies, defined specifically to not be conscious, are... Conscious.
 
What has that got to do with how a computer looks?

I'm using "looked" colloquially.

And you should stop confusing the specific computing hardware with the rather more important question of what is being computed.

Computers not passing the Turing test is hardly surprising to anyone familiar with the state of AI - it is far too primitive to deal with the level of complexity involved in understanding language, culture, and so on for anyone to seriously think it is at a human level. The programs that compete in Turing challenges today are toys that use tricks to try to fool the judges rather than real attempts to create software with human understanding. As such, they fail spectacularly.

So call me when they don't.
 
Stones don't say "ouch". Stones don't, when asked "Why did you say ouch?", reply "Because you poked me with a stick." Not merely aware of the outside world in a way that stones aren't, but able to reference their own internal representation of the outside world in order to answer introspective, intentional questions.

In other words, p-zombies, defined specifically to not be conscious, are... Conscious.


The first rule of being a hypothetical entity is you don't talk about being a hypothetical entity.
 
What you can't do with your compartmented consciousness is do anything that hasn't been pre-calculated without running smack into the lightspeed barrier. If I inject a signal into your primary visual cortex (currently orbiting Sirius B) that says "WHAT IS 2 + 2?", then that has to be encoded and transmitted out to your prestriate cortex six light years away, and that's going to take six years. And then it has to move on to the other areas involved in visual perception, taking more years, and then to the areas involved in higher cognition, taking more years, then on to the speech areas of the brain, taking still more years.

Pre-calculating everything allows the appearance of cheating relativity, but you can't actually cheat - you can't do anything new, only play back the recording.

And this is why I have been saying that Run4 is really the same consciousness as Run2, any way you look at it. Run4 doesn't really "produce" a consciousness because it already exists, in the data that was saved from Run2.
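A rough sketch of the record/playback point, in Python (hypothetical names, purely illustrative, not anyone's actual proposal): a live run computes each new state from its input, while a replay can only re-emit states that were captured earlier, so a fresh question like "WHAT IS 2 + 2?" cannot change its course.

def live_run(step_fn, state, inputs):
    # Genuinely new computation: each state depends on the incoming input.
    trace = [state]
    for x in inputs:
        state = step_fn(state, x)
        trace.append(state)
    return trace

def replay(trace):
    # Playback: nothing new can happen here; inputs are now irrelevant.
    for state in trace:
        yield state

recorded = live_run(lambda s, x: s + x, 0, [1, 2, 3])
print(recorded)                  # [0, 1, 3, 6]
print(list(replay(recorded)))    # identical, by construction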
 
So I was doing okay with the probes-across-the-universe thought experiment, but now I'm having trouble with the teleported-brains experiment.

Let's say we make a copy of my brain every millisecond for a few seconds and array them across the same field with the horses. Let's assume the brains are halted in time right down to every electrochemical process. Are the brains conscious?

I don't see how. Certainly each one alone isn't, because they are halted. Are they conscious as a group? It doesn't seem like they would be.

What is the difference between the brains and the probes?

~~ Paul

I don't understand why Robin and you are insisting that the relevant sequence must be between the arrayed copies.

Why can't the relevant sequence be between the brain prior to the copy + the copy?

Then you have no problem. It is just a whole bunch of separate instances of the same algorithm, some unlucky ones terminating much sooner than the others.
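To put that in code (a minimal sketch, with names I've made up): each copy is just the same algorithm run from the same starting state, with the "unlucky" instances halted after fewer steps.

def run_instance(step_fn, state, steps):
    # One instance: advance the same algorithm for a given number of steps.
    history = [state]
    for _ in range(steps):
        state = step_fn(state)
        history.append(state)
    return history

step = lambda s: s + 1
# Same algorithm, same origin, different lifespans.
instances = [run_instance(step, 0, n) for n in (1, 5, 100)]
for h in instances:
    print(len(h), h[:5])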
 
But can't you at least see a distinction between a perception and a quale?
No, because the definition of quale is not making sense to me.

I haven't heard what makes a quale different from a perception, at least not in a way that I understand.
If I build a robot with a camera on it and a touch-sensitive body, then it might recognise colours and shapes and even go "ouch" when I poke it with a stick.
Well, that isn't really a p-zombie, is it?

See, it is the construct of the p-zombie that is the fallacy.

They don't say if a p-zombie has sensations or not, and quale is such a vague term - what makes a quale different from a perception? Sensation, yes, because that is unprocessed neural information (well, partly processed, but not correlated).
Basically, the poor old p-zombie changes to fit the argument at hand.
Not my construct - I think it is a fallacy from the get-go.
Sometimes he is physically and behaviourally identical to a human and sometimes not.
Well, a behavioural or neurological zombie is conscious by my definition.
Sometimes he has no conscious states and sometimes has different conscious states to the associated non-zombie.

They are swapped in and out to suit the argument.

And by Chalmers' explicit definition of "conceivable", it is not enough that we can conceive that someone is a p-zombie.

We have to imagine a coherent situation in which we could conclude that our conscious-looking neighbor is a zombie.
 
