Has consciousness been fully explained?

I wondered if that might be the comeback. So much for the suggestion (Pixy, iirc) that we'd be interacting with a simulation in our reality.


You can't interact physically with a simulation because the simulation is not physical -- it is implemented by a physical process, but the simulation itself is not physical as things in the world are. That is why you can't do anything with a simulated orange. It is not physical; it is itself an action (which shows the problems with our language, just as we have the same problem with speaking of consciousness).

The only possible interaction between a simulation and the real world is through actions. The actions that potentially constitute consciousness within a simulation are just as available for interaction with consciousness, whether in a simulation or in the real world, because both are actions and neither is a 'thing'.
 
If a woman watches a Pilates DVD, and precisely matches the information, she will be doing Pilates. The DVD won't be doing Pilates. If we say "the woman on the screen is doing Pilates" then we don't really think that Pilates is taking place.

Yes, but it is interaction that is the hallmark of most definitions of consciousness.

A woman can discuss and respond to a mental status exam; a DVD cannot.

Now what if you were talking to a simulacrum, that answered all the questions as a 'conscious' human would?
 
We assume that other people think and feel like us because they look and behave like us. The less like us an artificial being would be, the less confidence we would have.


Yes, I agree completely. That is one of the very big problems with all of this, but it is a problem with perception and not reality. That is why we need good definitions so that we can figure out what we are dealing with. We would have the same problem seeing life in a non-carbon based form or seeing consciousness in an alien species (just as we used to have problems seeing humanity in Native Americans).
 
A simulation of an orange exists in the physical world too (assuming it is implemented by something). Something cannot be considered to be both "real" and not real in "our world". There is no actual "digital orange", although there can be a perfect digital description of an orange. The simulation exists in our world, in that it is carried out by some physical system.

The simulation is realized in a physical space, but the simulation is not itself a "real thing" in the sense that we speak of an orange in the "real world" as being a "real thing". A simulated orange is actually an action -- it is created by interactions of things. That is why you say "it is carried out by some physical system". We cannot touch a simulated orange. But we also cannot touch consciousness. We cannot touch running. We cannot touch squirting. We can't touch any action; we touch things. The reason that there is such an issue between a real and simulated orange is because the simulation is not a thing but an action.



We could, but there is no body, just a description. Something that doesn't actually exist can't technically run, although it may be useful to talk about it that way in non-philosophical situations.

A well-run simulation of an orange is not a description. It is a complex action. Within its 'world' that sort of complex action should function identically to the way an orange in the 'real world' acts.
 
I don't think you've been following this thread closely enough.

Nobody here is arguing that computer simulations can't help us figure out how the brain produces consciousness.

But there are people here who are saying that the digital simulation would indeed be conscious in reality, not just in the simulated (imaginary) space.

I have not followed the thread closely, that is very true, but I do know this issue. I agree with RD and Pixy that a digital simulation of consciousness, if robust enough, should be conscious in reality, for the reasons I have already given repeatedly -- it is an action. I understand how we separate 'things' that are simulated from 'things' in the real world, because the simulated orange or race car is an action within a computer system, so there is no way to touch it. But I don't see how we separate actions within a computer system from actions in the real world. You can't touch either of them. They are realized in different ways, but the action -- how things are carried out -- should, if linked to a physical means of interacting in the real world, be indistinguishable whether it is created in a brain or in a simulation.
 
PixyMisa said:
Planck's constant is a physical constant.
Yes. And that's the physical constant that defines precise physical measurements.

The observer remains, sir.
Well, no. In the sense that you mean, in the sense that an observer is somehow different from any other physical system, no.


No. That is not the sense I mean.

Read the sentence again: the precise measurements of the physical system depend on the observer through the normative principles of truth, relevance, and parsimony.

Abstract physical descriptions describe what we know, not what something "is".

Don't you know that?
 
How do you tell if another person is conscious?
How do you tell if you are conscious? You see no division between your subjectivity and what others might objectify as "DD's private behaviors"?


What if you can only talk to them on the phone?
With call-centers scattered worldwide I do sometimes wonder if what I'm talking to is conscious. :p

Well, here too, sometimes. :D
 
The action/behavior of conscious awareness won't happen to the machine just because it runs a simulation (however accurate) of a brain, because its physical actions haven't really changed.


Sorry, I did not address this point specifically above, and I know that it needs addressing since this is Searle's argument in a nutshell -- at least his later argument.

Why does enaction within a physical system matter? There may be some underlying reason that I am not getting, but I don't see why an action could not produce another type of action without physical objects being involved (except in the creation of the original action in the first place). So why could a simulation of a particle (which is an action created in the physical space of a computer), for instance, not be linked to other similar actions (simulations of other particles) to result in a grander type of action (all of which is defined as the interaction of parts), such as running or consciousness? Why, if we had a very robust simulation of a robot moving about, could we not link every action from that robot to a robot in the real world and see the 'real robot' move precisely as the simulated one does in its simulated space? Actions do things in the real world when they occur in physical systems. I don't see why we couldn't link a simulated consciousness to a physical output.
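Just to make concrete what I mean by linking simulated actions to a physical output, here is a toy sketch (every name in it -- SimulatedRobot, send_to_real_actuator -- is invented for illustration; a real bridge would be a motor-controller call, a serial write, or the like):

# Toy sketch, purely illustrative: the simulated robot's "action" exists
# only as state updates inside the program, but nothing stops us from
# mapping each simulated step onto a command for a real actuator.

class SimulatedRobot:
    def __init__(self):
        self.position = 0.0

    def step(self, velocity, dt=0.1):
        # The simulated action: a computation carried out by the machine.
        self.position += velocity * dt
        return self.position

def send_to_real_actuator(position):
    # Hypothetical bridge to the physical world; stands in for a real
    # motor-controller command.
    print(f"move real robot to {position:.2f}")

sim = SimulatedRobot()
for _ in range(5):
    send_to_real_actuator(sim.step(velocity=1.0))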
 
I can't smell an orange that is in New York when I am in Seattle. I can't taste it or squirt its juice in my eye. Is it also not a real orange?

It depends. If you are an idealist, then you might not believe that there's a real orange, made of real particles, with a real location. However, most of us think that the real orange stays real. Reality is what is still there when you stop believing in it.

The simulated orange, OTOH, doesn't have a location, isn't made up of any particular particles, and really does only exist in the sense that we perceive it.
 
And most people just assume that they are conscious, without thinking that the same problem applies to them as well: you judge your own consciousness the same way you judge consciousness in others.

From the behavioral perspective we all just show the behaviors of consciousness.

Consciousness is the name we give to what it's like to be us. How can we be mistaken in such a belief? We can be mistaken in thinking that something else is conscious, but we can't be mistaken in thinking we are conscious ourselves.
 
Just a quick de-lurk wondering if anyone else caught Antonio Damasio on NPR yesterday talking about his new book "Self Comes to Mind"? I'll confess that I haven't read any of the books he's published since Descartes' Error, but listening to him on the radio I came away with a familiar unsatisfied feeling: that he had said a lot without really saying much of anything. (A little voice keeps asking, "So where's the University?") I guess his main deal is that neuroscience has had a tendency to place undue emphasis on the importance of events in the cerebrum while underestimating the influence of emotion, "lower" brain functions, etc., reflected in statements like this one:

"The brain stem, cerebral cortex and memory act in unison in the complex mental process that tell us who we are and generate the feelings that are at the heart of being conscious".

Any big Damasio fans present?
 
A well-run simulation of an orange is not a description. It is a complex action. Within its 'world' that sort of complex action should function identically to the way an orange in the 'real world' acts.

But its "world" does not really exist. So the only action that is taking place is in the real world (at the implementation level). And that action is distinct from the actions of the system being simulated. That's why I think description is a better term. Descriptions capture or represent reality at an abstract/conceptual level, but they are not the same as that reality. The simulation is only the same in that it can result in isomorphic outputs, but isomorphic != same.
 
Now you're also committing entification.

If you're looking to replicate in spacetime the result of an action in spacetime (which is what consciousness must be), then you need to reproduce some sort of direct physical cause in spacetime.

If you run a simulation of a racecar, none of the actions of the racecar happen in reality. That only happens if you build a model racecar.

The action/behavior of conscious awareness won't happen to the machine just because it runs a simulation (however accurate) of a brain, because its physical actions haven't really changed.

If it was conscious before, it'll be conscious when it runs the simulation. If it wasn't, it won't be.
So, it's not just a consciousness molecule, it's an invisible magical consciousness molecule?
 
But its "world" does not really exist. So the only action that is taking place is in the real world (at the implementation level). And that action is distinct from the actions of the system being simulated. That's why I think description is a better term. Descriptions capture or represent reality at an abstract/conceptual level, but they are not the same as that reality. The simulation is only the same in that it can result in isomorphic outputs, but isomorphic != same.


I am certainly not arguing that a simulated orange is the same as a real orange, so please do not misunderstand me.

I am arguing here that description is not the right word for the sort of simulation that RD and Pixy are talking about. Description is the right word for simulations like we see in current computer games, where the simulations are fairly unlike the actual objects as they exist in the real world.

What they seem to be talking about, though, is a robust 'world' (which, yes, does not exist in a 'real sense') that is based on the 'real world' down to the atomic level. That simulated world is itself an action, not a description. We can use it as a description, but its nature is as action -- steps being carried out within the computer. That is why it has no location, no extension, etc.

Descartes considered the soul to be an immaterial 'thing' because it had the same 'properties' -- no extension, no location, etc.; but the mistake he made was in calling it a substance. It is not a substance but an action. Actions are realized in particular locations, just as simulations (which are actions) are realized in particular locations.

We are used to thinking of 'things' causing actions to occur, but I still do not see the objection to actions also causing actions to occur.


ETA:
What I am not at all convinced about is that an isomorphic action is not the same as the 'real action' which it 'mimics'. I know why we can distinguish between isomorphic outputs for things, but I don't see how we can make such distinctions for actions because actions consist in the interactions of parts. Why does it matter if the 'parts' are 'real' or only 'simulated' (actions themselves)?
 
So, the obvious counters for this discussion, from my limited experience, are Deep Blue and the Chinese Room argument (neither of which concerns consciousness but rather 'understanding'). I have gone over the Chinese Room argument several times and find myself less convinced by Searle's argument every time I encounter it. I'm not sure what we mean by understanding Chinese except that someone can use it properly. I know one thing that is missing from his example, and that he uses to sway opinion: the 'aha' moment. A system that simply manipulates symbols never has a feeling of understanding anything because feeling is not a part of how it is programmed. But is that what understanding is? Is it proper use with a feeling that you have used language properly? If that is the case, what's the problem with a bigger program that includes that aspect of it?

I know very well that brute force symbol manipulation of the type proposed in the Chinese Room is not the way our brains deal with language issues; but I am less certain that we can clearly separate this way of doing things from the idea that such a system understands. What does understanding mean?
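To make 'brute force symbol manipulation' concrete, here is a toy sketch of the Room as nothing more than a lookup table (the entries are invented placeholders, not a real rulebook; Searle imagines a vastly larger book, but the shape of the procedure is the same):

# A toy "Chinese Room": map input symbols to output symbols by lookup.
# Nowhere in this procedure is there an 'aha' moment; the program only
# copies out whatever reply the rulebook lists for the input it received.

RULEBOOK = {
    "你好": "你好，你好吗？",      # placeholder entries for illustration
    "你好吗": "我很好，谢谢。",
}

def room_reply(symbols: str) -> str:
    # Look the symbols up in the book and pass the listed response back out.
    return RULEBOOK.get(symbols, "……")

print(room_reply("你好"))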
Yep. The Chinese Room is deeply deceptive, though I don't think Searle understands that; I think instead he's deceived himself.

The Room by definition understands Chinese; it is also, as Searle describes it, utterly impossible. To understand what we're talking about with regards to simulations you first have to understand that the Chinese Room (and likewise, Frank Jackson's "Mary's Room") are complete impossibilities.
 
What they seem to be talking about, though, is a robust 'world' (which, yes, does not exist in a 'real sense') that is based on the 'real world' down to the atomic level. That simulated world is itself an action, not a description. We can use it as a description, but its nature is as action -- steps being carried out within the computer. That is why it has no location, no extension, etc.
Mind you, the reason we're talking about a Planck scale simulation is that it's a reductio ad absurdum counter to the position of, well, the other half of this thread. We've duplicated the Universe exactly. Now can we have a conscious mind? If you still say no, then you believe in magic. In which case it's time for us to just shrug and walk away.

Descartes considered the soul to be an immaterial 'thing' because it had the same 'properties' -- no extension, no location, etc.; but the mistake he made was in calling it a substance. It is not a substance but an action. Actions are realized in particular locations, just as simulations (which are actions) are realized in particular locations.
Yep. I usually describe it as a process, because that better gives the idea of an ongoing... process. ;) But we're talking about the same thing.
 
Yep. The Chinese Room is deeply deceptive, though I don't think Searle understands that; I think instead he's deceived himself.

The Room by definition understands Chinese; it is also, as Searle describes it, utterly impossible. To understand what we're talking about with regards to simulations you first have to understand that the Chinese Room (and likewise, Frank Jackson's "Mary's Room") are complete impossibilities.


Right, but for the sake of the thought experiment I'm willing to grant him the incredible brute computing power necessary to deal with the combinatorial explosion; that is why I brought up Deep Blue, because it's easier to see the same issue there. Deep Blue solved the problem by brute force. There are obviously much more elegant solutions, but we probably aren't good enough at programming, or rather don't know enough about how we do it, to create a system similar to how a grandmaster plays chess.
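For what it's worth, brute-force search is simple enough to sketch in a few lines. The toy version below plays a trivial take-1-or-2-stones game rather than chess (the game and the names are my own stand-ins); a real engine differs mainly in the size of the move generator and the evaluation function:

# Exhaustive game-tree search in miniature: two players alternately take
# 1 or 2 stones, and whoever takes the last stone wins. Every line of
# play is searched; nothing "elegant" is going on.

def best_move(stones, maximizing=True):
    """Return (value, take): value is +1 if the maximizing (first) player
    can force a win from this position, -1 if not; take is the best move
    for the player whose turn it is."""
    if stones == 0:
        # The previous player took the last stone and won.
        return (-1, None) if maximizing else (1, None)
    candidates = []
    for take in (1, 2):
        if take <= stones:
            value, _ = best_move(stones - take, not maximizing)
            candidates.append((value, take))
    return max(candidates) if maximizing else min(candidates)

print(best_move(7))   # (1, 1): take one stone and the first player forces a win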
 
Right. If Searle started out the Chinese Room argument with:

Imaginary Searle said:
Imagine a room, bigger than the observable Universe, filled with books. Inside the room is a man. Through a slot in the wall come pieces of paper filled with mysterious signs. Following the instructions in the books, the man looks up the symbols on the piece of paper and writes more symbols on another piece of paper, and passes the new paper back out through the slot, only the whole thing takes hundreds of trillions of years because the process is so complicated and the room is so large that even traveling at a constant 1G acceleration....
And so on - if Searle started the argument that way no-one would have a problem with it. No-one would give it a second thought either, of course, and Searle would be out of a job, but them's the breaks.
 
Mind you, the reason we're talking about a Planck scale simulation is that it's a reductio ad absurdum counter to the position of, well, the other half of this thread. We've duplicated the Universe exactly. Now can we have a conscious mind? If you still say no, then you believe in magic. In which case it's time for us to just shrug and walk away.


Which is a very good ploy, but it also removes the charge that you are talking about a description of a system since you very obviously are not. Possibly a description of a subatomic particle, but everything else from there is just part of the system and not a description of it.
 