Has consciousness been fully explained?

May I call a time out?


From what I can tell from Westprog's responses, you guys are talking past one another, as seems to happen in many of these threads. The two sides have very, very different ways of talking about the simulation at play in the examples.

No, Ichy, I think this difference is real.

The computationalists believe consciousness can be programmed, and if you've got the hardware to run the program, that's all you need.

The physicalists aren't buying this, any more than we'd buy the claim that you could program the temperature control of the box. We say that biologists don't yet know what's going on that causes -- among other things -- our sense of awareness to go on and off (because they don't), but it's a physical process that can be replicated by a model brain, though not by a digital simulation of a brain.
 
Because actions, as actions, are frame-independent. What is frame-dependent is the way the actions are carried out -- either in silicon chips or in physical biological entities.

Physical objects are frame-dependent, but actions are not, because actions are relations between parts and not the parts themselves.

Now you're also committing entification.

If you're looking to replicate in spacetime the result of an action in spacetime (which is what consciousness must be), then you need to reproduce some sort of direct physical cause in spacetime.

If you run a simulation of a racecar, none of the actions of the racecar happen in reality. That only happens if you build a model racecar.

The action/behavior of conscious awareness won't happen to the machine just because it runs a simulation (however accurate) of a brain, because its physical actions haven't really changed.

If it was conscious before, it'll be conscious when it runs the simulation. If it wasn't, it won't be.
 
Furthermore -- and this has also been brought up already, and you have also ignored it -- you fail to account for the fact that a real orange is nothing but a collection of particle behaviors and it is only an "orange" in the mind of a human observer. Why do you arbitrarily label a real orange as somehow less "observer dependent" than a simulated one?

Wow, you're way deep into it, aren't you?

If you step on an orange you don't see, you'll still bust your butt if you slip on it. If someone squeezes an orange over your face while you're asleep, you'll wake up all sticky.

Yeah, yeah, you can talk about all the processing that goes on in our brains, but please, when we see and pick up and eat an orange, there's a real orange there. Our brains are built to perceive it in a certain way, but regardless of that, it's objectively real.

Without an observer, the machine running the simulation has no qualities of an orange; it's just a machine.

The orange is the orange, the machine is the machine.

This is getting absolutely ridiculous.

I need to get back to posting about the brain.
 
I can't smell an orange that is in New York when I am in Seattle. I can't taste it or squirt its juice in my eye. Is it also not a real orange?

There's nothing remarkable about the smell of an orange or a squirt of juice not reaching across the American continent.
 
This is getting absolutely ridiculous.
No, it started out that way. And it will remain that way until you stop attributing magical properties to physical systems.

No, the observer doesn't magically change the nature of anything. How can it? It's just another physical system.

I need to get back to posting about the brain.
There's no magic, Piggy. It's turtles all the way down.
 
No, the observer doesn't magically change the nature of anything. How can it? It's just another physical system.


The precise measurements of the physical system do depend on the observer through the normative principles of what... truth, relevance, and parsimony, no?
 
Essentially, yes. This does not, however, make your previous statement correct.

If you're saying anything about the precise measurements of the physical system and it doesn't have hbar in it, it's probably wrong.
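(A minimal worked version of the hbar point, assuming it refers to quantum uncertainty: the Heisenberg relation Δx · Δp ≥ ħ/2 puts a hard floor under how precisely position and momentum can be jointly measured, regardless of the observer's normative principles.)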
 
It means exactly that and only that.

"In the world of the simulation" means "in the world of someone's imagination".

Because if our time-bomb virus goes off and all life on earth perishes within the space of a minute, then there are no longer any simulated worlds in existence, even as the simulating machines continue to run.

There is only the physical action of the machines. No orange. No vitamins. No racecars. No aquariums. No cities. Without anyone to understand what the output is supposed to mean, it all ceases to be.

(Interestingly, this is not the same for consciousness. A conscious machine would still operate as such after the time-bomb virus took its toll.)

Oddly, it seems to me as if RD and PM are arguing for a kind of dualistic system - i.e. the independent existence of information or information systems completely separate from the substrate they exist within. The world of a simulation is 'real' in a sense completely orthogonal to the physical existence of the substrate it is operating in.

There is a logical consistency to this outlook. I just find it surprising that these individuals would be arguing for that side of the existence of intangible concepts.

The precise measurements of the physical system do depend on the observer through the normative principles of what... truth, relevance, and parsimony, no?

Yes, that is correct. The precise measurements of any physical system do depend on the observer to some extent. In fact, we can scientifically take measurements and quantify the portion of the variability of the measurement system due to the person taking the measurements.
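For what it's worth, here is a minimal sketch of the kind of analysis this seems to describe -- something like a gauge R&R study (my label, not the thread's; the operators, data, and numbers are all made up) that splits measurement variance into an operator part and a repeatability part:

```python
# A crude sketch of quantifying how much of a measurement system's
# variability comes from the people doing the measuring.
import statistics

# Made-up data: three operators each measure the same part four times.
readings = {
    "op_a": [10.1, 10.2, 10.1, 10.3],
    "op_b": [10.4, 10.5, 10.4, 10.6],
    "op_c": [10.0, 10.1, 10.0, 10.2],
}

grand_mean = statistics.mean(v for vals in readings.values() for v in vals)

# Between-operator variance: how far each operator's average sits
# from the overall average (reproducibility).
between = statistics.mean(
    (statistics.mean(vals) - grand_mean) ** 2 for vals in readings.values()
)

# Within-operator variance: the scatter of each operator's own
# repeated readings (repeatability).
within = statistics.mean(
    statistics.pvariance(vals) for vals in readings.values()
)

print(f"operator share of variance: {between / (between + within):.0%}")
```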
 
Yes, that is correct. The precise measurements of any physical system do depend on the observer to some extent. In fact, we can scientifically take measurements and quantify the portion of the variability of the measurement system due to the person taking the measurements.
No.
 
There's nothing remarkable about the smell of an orange or a squirt of juice not reaching across the American continent.

Damn, this is America! We should be able to build an orange that can squirt across the nation and stink up the whole world. What happened to the drive and innovation? Damn fluoride in the water. That and women wearing pants!

:D
 
Yep, that's why we have the problem of other minds. There is only one person who is privy to your personal experiences and that is you.

But the only way that any of us can decide if you are conscious is by observing your behavior, as incomplete as our information is. That's how things work in the 'real world' and how they would work for deciding if a simulation were conscious.

And most people just assume that they are conscious, without thinking that the same problem applies to themselves as well: you judge your own consciousness the same way you judge consciousness in others.

From the behavioral perspective we all just show the behaviors of consciousness.
 
Now you're also committing entification.

If you're looking to replicate in spacetime the result of an action in spacetime (which is what consciousness must be), then you need to reproduce some sort of direct physical cause in spacetime.

If you run a simulation of a racecar, none of the actions of the racecar happen in reality. That only happens if you build a model racecar.

The action/behavior of conscious awareness won't happen to the machine just because it runs a simulation (however accurate) of a brain, because its physical actions haven't really changed.

If it was conscious before, it'll be conscious when it runs the simulation. If it wasn't, it won't be.


To answer your post above and this one -- this is what I see as the problem in communication:

You seem to imply that what the others are arguing is that if they program a simulation using a bunch of if-then statements to carry out actions that look like consciousness, then they have created consciousness.

That may or may not be the case, but it is certainly not what I hear them saying; perhaps the error is on my part. What I hear is that if we were to program a world down to every atom, and also program the rules of interaction that are present in the 'real world', we could then see something that is indistinguishable from consciousness. I am not sure that position is wrong; I can't really find a clear problem with it.

Just to clarify, I am more in the physicalist camp (as far as bringing this about) -- I think that to end up with an action you need movement, so the easiest way to design a conscious machine is to link chips (or whatever medium) in a way that we recreate the physical actions of neurons. That is certainly the easiest way of looking at the issue.

But I am not at all convinced that programming could not do it because consciousness is not a thing.

The counterexamples everyone uses to shoot down the idea are all things -- like the orange or the race car that you mention. Consciousness is not a thing, though; it is an action. Sure, we couldn't touch a simulated race car, and a simulated race car does nothing in the real world, but that is a trivial issue. Within the simulated world, if the simulation is robust enough, the car still moves. With a good enough simulation, its movement should depend on the same factors (simulated) that rule movement in the 'real world'. So it does not move without simulated gas, simulated oxygen, etc. A bad simulation where you just have an if-then statement (if person clicks on car, make car move) doesn't help, and is not the kind of simulation I hear them talking about. Consciousness is also not a 'thing' but an action, so I don't see why the same would not apply to it as to the movement of the simulated race car.
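To make the contrast concrete, here's a minimal sketch (Python; the car model, names, and numbers are all invented for illustration) of the two kinds of simulation being contrasted -- a bare if-then trigger versus a toy model whose movement depends on simulated fuel and oxygen:

```python
# The "bad" simulation: movement is a bare if-then trigger.
def bad_sim(clicked: bool) -> str:
    if clicked:
        return "car moves"
    return "car sits still"

# A (toy) physics-driven simulation: movement only happens when the
# simulated preconditions for combustion are met, crudely mirroring
# the way real-world movement depends on gas and oxygen.
class SimCar:
    def __init__(self, fuel: float, oxygen: float):
        self.fuel = fuel          # liters of simulated gasoline
        self.oxygen = oxygen      # kg of simulated oxygen
        self.position = 0.0       # meters, in the simulated world

    def step(self, dt: float = 1.0) -> None:
        # No simulated gas or oxygen, no simulated combustion, no movement.
        if self.fuel <= 0 or self.oxygen <= 0:
            return
        self.fuel -= 0.01 * dt
        self.oxygen -= 0.1 * dt
        self.position += 10.0 * dt  # crude constant speed while burning

car = SimCar(fuel=1.0, oxygen=5.0)
for _ in range(3):
    car.step()
print(car.position)  # the car "moves" only within the simulated world
```

Neither version moves anything in the room the computer sits in; the disputed question is whether that matters for an 'action' like consciousness.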

So, the obvious counters for this discussion, from my limited experience, are Deep Blue and the Chinese Room argument (neither of which concerns consciousness but rather 'understanding'). I have gone over the Chinese Room argument several times and find myself less convinced by Searle's argument every time I encounter it. I'm not sure what we mean by understanding Chinese except that someone can use it properly. I do know one thing that is missing from his example, and that he uses to sway opinion: the 'aha' moment. A system that simply manipulates symbols never has a feeling of understanding anything, because feeling is not a part of how it is programmed. But is that what understanding is? Is it proper use plus a feeling that you have used language properly? If that is the case, what's the problem with a bigger program that includes that aspect of it?
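As a caricature of Searle's setup (Python; the rule table is made up and vastly smaller than anything a real Chinese Room would need), pure symbol manipulation looks like this -- proper-looking use with no 'aha' anywhere:

```python
# A caricature of the Chinese Room: symbols in, symbols out, by
# rulebook lookup alone. The tiny "rules" dict stands in for Searle's
# instruction book; no comprehension is computed anywhere.
RULES = {
    "你好吗": "我很好, 谢谢",      # "How are you?" -> "I'm fine, thanks"
    "这是什么": "这是一个橙子",    # "What is this?" -> "This is an orange"
}

def room(symbols: str) -> str:
    # Pure symbol manipulation: match the input shape, emit the
    # paired output shape. No feeling of understanding ever occurs.
    return RULES.get(symbols, "请再说一遍")  # "Please say that again"

print(room("你好吗"))
```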

I know very well that brute-force symbol manipulation of the type proposed in the Chinese Room is not the way our brains deal with language issues; but I am less certain that we can clearly separate this way of doing things from the idea that such a system understands. What does understanding mean? The same is true of Deep Blue. Its brute-force calculations do not resemble the human brain's 'chunking' of game info, but can we really say that if a computer can reliably win at chess, it doesn't understand the game? On what basis? That it isn't the way that we do it? Is this all about humanocentrism?
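For the Deep Blue side, here is a toy sketch of what 'brute force' means in this context (Python; a made-up two-player number game stands in for chess, purely to show exhaustive search rather than human-style chunking):

```python
# Brute-force game search of the Deep Blue variety: exhaustively
# score every line of play to a fixed depth, no pattern recognition.
from typing import List

def minimax(state: int, moves: List[int], depth: int, maximizing: bool) -> int:
    if depth == 0 or not moves:
        return state  # score the position (here, just its running value)
    scores = [
        minimax(state + m, moves, depth - 1, not maximizing)
        for m in moves
    ]
    return max(scores) if maximizing else min(scores)

# Search every combination of moves three plies deep.
print(minimax(state=0, moves=[-1, 0, 2], depth=3, maximizing=True))
```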

When it comes to creating a simulation that is built from the very bottom up, I would assume that to create a human-like consciousness, a programmer would apply the same rules the human brain uses. Why would simple if-then statements, if at the lowest level the 'particles' act like atoms in the real world, not allow us to recreate what the brain does (theoretically, if we had that incredible level of knowledge)? I'm afraid that I don't understand the objection.
 
Yes, this is the core of it. How is it even meaningful to discuss something that exhibits all the behaviours of X but is not X? For any X, not just consciousness?

This seems to be the point of division. I feel that if something displays the behaviors we define as consciousness, then it is conscious; however, this thread and others have never resolved how something can have the behaviors of a defined state of consciousness but not be conscious.
 
There is no orange. That's something that babies learn at about eighteen months. They can't do anything with the orange.

There is no mind, there are only external behaviors we interpret as the expression of a mind. You and babies can't do anything with a mind, yet the theory of other minds arises.
 