
Explain consciousness to the layman.

Btw, here are the other 2 parts of the Pinker talk. Direct discussion of computation is mostly at the beginning of part 2 and the end of part 3.



 
The computer simulation of a tornado has been described as an isomorphism between the particles in the tornado and the particles in the computer. There is certainly some kind of functional connection between the simulation of the tornado and the tornado. Calling it an isomorphism is a stretch, but if the simulation didn't have any relationship with a tornado - well, obviously, we wouldn't recognise it as a tornado.

However, many objects and processes map to tornadoes: a picture of a tornado, a film of a tornado, a book about tornadoes. The word "tornado" has, in a sense, an isomorphism to the actual tornado. Pointing at a tornado and shouting "tornado" describes a tornado every bit as much as the computer simulation does.

For some reason, none of these representations count as a "world" where the tornado is just as real as the one on the computer. I suppose that a claim that Dorothy is really being carried to Oz in the world of the film doesn't sound quite as scientific and reasonable as a claim that tornadoes are real in the world of the computer, but each claim is precisely as justifiable. The tornado in the film is related to a real tornado just as closely as the string of numbers in the computer is.
 
In most simulations, we rely for their interpretation on an understanding of symbols that aren't defined within the simulation. Thus, in the tornado simulation, the words "tornado", "wind speed" etc. are not defined for the simulation - they are generated and displayed, but their meaning is assumed. In the mythical "world of the simulation" the words have no meaning.

This applies to most simulations. Mario is a collection of separate components within the console. It's only when he's displayed on the screen that people interpret him as a cute little Italian plumber with a moustache.
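The observer-dependence of the symbols can be put as a toy sketch (hypothetical code, not any real simulation - the function and variable names are invented for illustration): the program below just updates numbers, and the labels "wind_speed" and "pressure" exist purely for our benefit.

```python
# Hypothetical toy "tornado simulation": the machine manipulates numbers;
# the labels carry no meaning inside the program itself.

def step(state):
    """Advance the model one tick - to the computer, just arithmetic."""
    return {name: value * 1.1 for name, value in state.items()}

state = {"wind_speed": 50.0, "pressure": 980.0}
for _ in range(3):
    state = step(state)

# Renaming "wind_speed" to "x" would change nothing the program does;
# that these numbers describe a tornado is supplied by the observer.
print(state)
```

Nothing in the program distinguishes this from a toy model of bank balances or bacteria counts; only the reader of the printout makes it "about" tornadoes.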

Is this always the case? Obviously it is in practice. The simulation consists of a system where the understanding is provided by the observer. The simulation itself doesn't have any knowledge of the meaning of the symbols it is manipulating, and hence such meaning is not part of the "world". Any "world" inside Halo won't consist of aliens and soldiers, because the meaning of the symbols isn't inside the simulation - it's part of the human interpretation.

Could you have a computer program where the interactions between components were sufficiently richly mapped that they ended up being de facto semantic definitions? Can the meaning of something be defined entirely by its relationships with other objects? I suspect not. However, that's the minimum requirement for creating a "world" which stands on its own and has no need of external interpretation. I don't think any such system has been even conceptually designed. If one did appear, I still wouldn't consider it to be creating a world, but it would be at least a step closer.
 
An interesting sidebar concerning the concept of "isomorphism". If there really is an isomorphism between the tornado and the simulation, then as well as there being a world inside the computer where the tornado is real, there's a world inside the tornado where the computer simulation is real.
 
Again with the "many worlds" theory.

If it's true that the connection between the behaviour of the particles in the simulation and the particles in the tornado creates a world in the simulation where the tornado is real - then whenever physical system A behaves in a way analogous to system B, a world B* is created within system A. Since such analogies are almost universal, such worlds must be present in every physical system - in enormous profusion.
 
An interesting sidebar concerning the concept of "isomorphism". If there really is an isomorphism between the tornado and the simulation, then as well as there being a world inside the computer where the tornado is real, there's a world inside the tornado where the computer simulation is real.
Not if you limit scope and collapse categories.
 
It isn't that simple, but even if it were, what about the cells that make up the heart? And the organelles that make them up?

Biological life is centered around the ability to react to a wide set of environmental states with a smaller set of behaviors.



It depends on the rock.

If there are areas of low integrity, such that no matter where you hit the rock it always fractures along those zones and ends up in one of 3 possible states, then you have a switch.

If the rock fractures differently each time you hit it, then you don't.
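The two cases can be sketched in a few lines (a hypothetical illustration - the function names and the `% 3` rule are invented, standing in for the rock's zones of low integrity): a rock is switch-like only when blows map repeatably onto a small set of end states.

```python
import random

def strike_flawed_rock(blow_angle):
    """Zones of low integrity: however it's hit, the rock fractures
    into one of 3 repeatable end states - in effect a 3-way switch."""
    return blow_angle % 3

def strike_uniform_rock(blow_angle, rng):
    """Fractures differently on every strike: no stable states
    to read back, so nothing switch-like to compute with."""
    return rng.random()

# The flawed rock behaves like a switch: the same blow always
# lands it in the same one of its 3 states.
assert strike_flawed_rock(7) == strike_flawed_rock(7)

# The uniform rock gives a fresh, unrepeatable result each strike.
print(strike_uniform_rock(7, random.Random()))
```

The repeatable input-to-state mapping is what lets the flawed rock be read as a switch; the uniform rock changes state just as physically, but there is nothing stable to read.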

If all that is true, then I agree with westprog and piggy. Calling a brain a type of computer is tautological, seeing as everything is a computer, and it doesn't tell us anything about consciousness.

Funny how a silly definition can turn things around like that.
 
Now I think we can get back to this:



So you see, the entities produced by the simulation -- the patterns of activity in the simulator itself -- are real.

But as such, they are only that... patterns of activity in a simulator machine.

And if the entity which exists in the machine is conscious in the machine, your simulated tornado would feel very real to it, no ?
 
Btw, here are the other 2 parts of the Pinker talk. Direct discussion of computation is mostly at the beginning of part 2 and the end of part 3.

The only "problem" with Pinker's use of computational theory of the mind, is that he doesn't really address where consciousness comes from.

Yes, it does a good job addressing just about everything else the mind does, but not the nature of consciousness specifically.

You need to get into other sources for that material. That is why I like to cite Blackmore, Dennett, and Damasio, etc. more often for consciousness-related materials. Their stuff can also apply a computational metaphor, just a different type of algorithm than what Pinker usually talks about.
 
Pictures don't have a set of internal laws, however.

Any media you use to make a representation will follow the rules described by the laws of physics, whether that's a piece of stone or a computer.

Nothing that's real follows any other set of rules.

As long as you're dealing with a representation, not a reproduction, then it is whatever it's made of -- a chunk of marble, a computer, paper, whatever -- regardless of its use by us as a representation of something else.

We use it to set up correspondences to other things -- which make a statue of Napoleon look like Napoleon, and a drawing of a baby look like a baby, and a computer simulation of a tornado seem somehow like a tornado -- but the stuff is still the same stuff it was before.

The fact that we can describe the behavior of the computer at various levels of abstraction as "computing" doesn't change that situation.

We can observe it as we watch it run the sim, and we won't see any tornados, just as (and for the same reason that) we can search the paper and never find a baby.

And there's nowhere else for a tornado to exist that's in any way connected to the events we call the simulation, except, of course, as a mental representation in the mind of an observer, which is to say in someone's imagination.

The computery behavior of the computer and the brainy behavior of the brain, i.e. reality and a programmer's and observer's imaginations, are enough to account for everything that's going on with no need for, and nothing left over to fuel, any new worlds.
 
And if the entity which exists in the machine is conscious in the machine, your simulated tornado would feel very real to it, no ?

In order to do that, you'd have to first build a machine that is conscious. That would give you an "entity which exists in the machine", just as we exist in our bodies, which is "conscious" just as we are.

(If you're thinking, wait, no, I mean what if a character in a simulation being run on the machine were really conscious, well, that would take us back to the problem of there being no such character. If you want a real object with a conscious entity in it, the only known way to get that is to build something that's like part of a replica mammal body, at least functionally.)

Let's suppose it's a HAL1000 office manager machine. It's a stationary piece of equipment, does a lot of things, pays bills, answers the phone, controls the HVAC and the windows, has a lot of gear like a microwave and a coffee maker and printers and a simulator, and it's conscious.

It's got chemical sensors so when the coffee's burning Hal really does smell it, although we can't know what it smells like to him. If you hit his body with a hammer, he feels it in some places (tho his designer mercifully had him built without a sense of pain).

Let's suppose Hal is as aware of the simulator portion of his body as he can possibly be.

The first time we run a sim, he will have no way of knowing that this is what his bodily activity is supposed to be doing, unless we tell him.

Even then, he has no way of guessing what we're supposed to be running a simulation of.

If we repeatedly run a few different types of sims, say stock markets and tornadoes and epidemics, he may come to recognize them by feel. But they wouldn't feel tornadoey or stocky or epidemicky to him... it would be an idiosyncratic sensation shared by other machines built like him but unknowable to us.

So no, the tornado would not seem real to a conscious entity inside the simulator.
 
The only "problem" with Pinker's use of computational theory of the mind, is that he doesn't really address where consciousness comes from.

Yes, it does a good job addressing just about everything else the mind does, but not the nature of consciousness specifically.

You need to get into other sources for that material. That is why I like to cite Blackmore, Dennett, and Damasio, etc. more often for consciousness-related materials. Their stuff can also apply a computational metaphor, just a different type of algorithm than what Pinker usually talks about.

Well, he wasn't addressing it there. No doubt, because there isn't an answer yet.

Don't think I've read Blackmore, haven't read Dennett in quite a while... right now I'm reading more Gazzaniga and Rees, although I've taken a hiatus for Jesus. ;)
 
So you see, the entities produced by the simulation -- the patterns of activity in the simulator itself -- are real.

But as such, they are only that... patterns of activity in a simulator machine.
And you are also only a pattern of activity. You are attempting to force a distinction where none exists.
 
If "computes" means "changes state", then why did we invent such a STUPID term ?

Computing is a very useful engineering term. It's a useful mathematical term. It's only when we try to consider that performing computations causes physical things to happen that we get into this tangle.
 
If all that is true, then I agree with westprog and piggy. Calling a brain a type of computer is tautological, seeing as everything is a computer, and it doesn't tell us anything about consciousness.

Funny how a silly definition can turn things around like that.

What has always been desired is a definition that can include brains and computers and exclude all the other systems that change state. The inability to come up with any such definition is the source of the difficulty.
 
And if the entity which exists in the machine is conscious in the machine, your simulated tornado would feel very real to it, no ?

And if there were a Santa Claus in the simulation, the conscious entity would get simulated presents in his simulated stocking.
 
