Has consciousness been fully explained?

Status
Not open for further replies.
Well, maybe there'll be places to attach the electrodes on the computer simulation; the rocks-in-the-sand model will make it really difficult. ;)
 
Since the electrodes in a simulation are themselves part of the simulation, just as the EEG would be, rocks-in-the-sand would do just fine, thank you very much.
 
I wondered if that might be the comeback. So much for the suggestion (Pixy iirc) that we'd be interacting with a simulation in our reality.
 
Yep, that's why we have the problem of other minds. There is only one person who is privy to your personal experiences and that is you.

But the only way that any of us can decide if you are conscious is by observing your behavior, as incomplete as our information is. That's how things work in the 'real world' and how they would work for deciding if a simulation were conscious.

We assume that other people think and feel like us because they look and behave like us. The less like us an artificial being would be, the less confidence we would have.
 
I'm not sure what the point of all this is, but I think there is a bit of confusion over what has or what might be claimed based on the examples being provided.

You and an orange are part of a system -- part of the physical world. Within that world physical objects interact with one another, so an orange may squirt you in the face physically.

In a simulation there is a digital 'you' and a digital 'orange'. Within that system the digital orange can squirt the digital you in the face digitally.

A physical orange cannot squirt a digital you in the face, and a digital orange cannot squirt a physical you in the face, because they exist in different systems, different frames of reference.

A simulation of an orange exists in the physical world too (assuming it is implemented by something). Something cannot be considered to be both "real" and not real in "our world". There is no actual "digital orange", although there can be a perfect digital description of an orange. The simulation exists in our world, in that it is carried out by some physical system.
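For what it's worth, the "description, not object" point can be made concrete in a few lines of purely illustrative code. The class and names below are my own invention, not anything proposed in the thread; they just show that a "digital orange" is numbers held in memory, and "squirting" is an update to those numbers:

```python
# Illustrative sketch only: a "digital orange" is a description held in memory.
# SimulatedOrange and its attributes are hypothetical names for this example.

class SimulatedOrange:
    """A description of an orange: numbers standing in for its properties."""

    def __init__(self, juice_ml: float = 5.0):
        self.juice_ml = juice_ml  # simulated juice volume, not actual liquid

    def squirt(self) -> float:
        """'Squirting' here is just an update to stored numbers."""
        emitted = min(self.juice_ml, 1.0)
        self.juice_ml -= emitted
        return emitted  # a returned value, not a physical emission

orange = SimulatedOrange()
emitted = orange.squirt()
# The only physical event was hardware flipping bits;
# no citrus oil reached anyone's eye.
```

The only thing that "really" exists here is the machine state encoding these numbers, which is exactly the sense in which the simulation exists in our world.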

The only reason this arises in discussions of consciousness is because of confusions over the words we use. When we speak of an orange squirting, we speak of an actual emission, a physical presence that is emitted. But is that true of all actions? Running, for instance, must be realized in a physical system, but what is 'running' exactly? It seems to consist in the translational movement of a being through movement of various body parts. One can simulate running in a digital environment where the simulation is close to what happens in the real world -- simulate the natural physical laws, all the atoms at play, the biological processes of the being, etc. In that situation would we not speak of the simulated body as 'running'?

We could, but there is no body, just a description. Something that doesn't actually exist can't technically run, although it may be useful to talk about it that way in non-philosophical situations.

I think we would, and I think we would because, while running must involve a body, the actual process - running - is constituted by the interaction of the parts. It "is" an action and not a thing.

True, but it is something which is assigned to things. If it is assigned to simulated things (i.e. assigned to descriptions of things as part of the description) then it has no more real presence than the thing itself.

When thinking about simulations we should never think of them like computer games that currently exist. While those are simulations, that is not the kind of thing that is being discussed here.
Agreed.
 
Yep, that's why we have the problem of other minds.


I don't share your doubt that other persons have minds.

There is only one person who is privy to your personal experiences and that is you.


There is what I experience and there is what I experience as seen over time, i.e. an understanding of my experience as a behavior.

But the only way that any of us can decide if you are conscious is by observing your behavior, as incomplete as our information is. That's how things work in the 'real world' and how they would work for deciding if a simulation were conscious.


Understanding your own behavior learned over time and applied to an understanding of others is not unlike the field of cultural anthropology. Of course, you can always talk to them.

I think it's safe to say most of us are bombarded by low-grade simulations broadcast almost continually every day on TV and radio, i.e. advertising, political messages, and miscellaneous emotional territorial hysteria.
 
[...] a real orange is nothing but a collection of particle behaviors and it is only an "orange" in the mind of a human observer.

True, but that doesn't change anything. When we refer to a real orange we are referring to an actual collection of particles in the world. When we refer to a simulation of the orange we are either not referring to an actual collection of particles in the world or we are referring to a very different collection of particles (those that comprise the hardware that is carrying out the simulation).
 
Because that is like saying chewing evolved to make chewing. It is the same thing.

Consciousness is not some process over and above making decisions. It is simply a form of making decisions. In particular, decisions about the self.

But yes, it is something over and above the mechanism of making decisions. This is not in dispute.

First, it's not clear that making decisions is the only function of consciousness.

But more importantly, even if consciousness is a means of decision-making, it's distinguished from other means precisely because of its association with a sense of felt individual awareness, which is absent when the brain is making decisions in other ways.

We need to understand what the brain is doing in order to generate that behavior and while it is generating that behavior, and also why it is generating that behavior.
 
When you say it "evolved to make decisions" it implies that if you remove the decisions, it would still be there. That isn't how it works -- if you remove the things that consciousness does from the equation, there is no more consciousness.

Contrast that with teeth, or the brain. It makes sense to say teeth evolved for chewing, and the brain evolved for decision making. Consciousness did not evolve for decision making, consciousness is decision making, that evolved.

The brain evolved for decision making.

One of the mechanisms (if we accept the hypothesis that consciousness is a tool for making decisions) has the effect of generating a sense of felt awareness that starts and stops and hovers with fuzzy borders/center in the brain cavity.

Whatever it is that's doing that -- making decisions and generating this sense of being -- is part of a physical apparatus. The causes of all behaviors of the body are physical.

Consciousness isn't like a tooth. It's like chewing. And to actually chew a piece of broccoli, you need real teeth working in real time. To generate a sense of awareness in a brain cavity, you need a real brain working in real time.

The thing that's like a tooth is whatever it is that's making the brain "make decisions" and simultaneously generate this feeling of awareness (which the brain can make decisions without).

We can abstract logic from it, describe it in rules, describe it in terms of information, but regardless of how we transform it intellectually, it's all eventually explainable on a purely physical level, which means the brain -- the lump of matter -- is what's "making decisions" (which is just to say that the body does one thing and not another) and also what's causing my sense of self-in-spacetime to crank up every morning.

To replicate this, we need a model brain.

We cannot make any machine conscious without altering what it's doing physically in 4-D spacetime in some significant way, in a way that replicates whatever the brain's doing when our Sofias crank up.

This cannot be purely "programmed" for the same reason that you can't have a computer control its physical temperature purely by programming.

If you want to replicate the behavior, and actually have a sense of awareness in the tower of your computer, you must have something more than programming, because you cannot get behavior for free.

If the only physical behavior you have is for running the logic and nothing else, and the machine is not doing anything physically different from what it's doing when it's up to other tasks, then you cannot extract this new behavior from it in real spacetime.
 
But, as you seemed to imply earlier the idea of a robot that passes for conscious but which is not conscious seems to be a non-starter. We still have the "problem of other minds", so how could you operationalize a definition for a robot that passes for conscious but is not conscious? That's the old p-zombie argument and it doesn't seem coherent to me; there seems to be an underlying assumption in the argument that consciousness is somehow separable from the behavior that we see as consciousness.

I don't see how it is even possible. To begin, you would need a clear-cut definition of exactly what consciousness *is*. I have asked people to try to pin this down before but have found few to no takers.

I wasn't saying that it was possible to have a non-conscious machine that passes for conscious. Just indulging in an "even if we managed to achieve it" scenario.
 
Simulated running is running -- not in the real world but in the simulated world. The closer the simulation is to actual running, the more informative the process by which we achieve the simulation is for understanding how running works in the real world. Same for consciousness, at least theoretically. The idea is that a simulation can potentially provide a model for how consciousness works in the brain; not that any simulation would be consciousness in the brain. Obviously it cannot be -- it is a simulation after all.

I don't think you've been following this thread closely enough.

Nobody here is arguing that computer simulations can't help us figure out how the brain produces consciousness.

But there are people here who are saying that the digital simulation would indeed be conscious in reality, not just in the simulated (imaginary) space.
 
Everything about the orange is real. We can't smell it, or touch it, or even see it without help, but it is a real thing.

The only thing that is in our imagination is the word "orange."

But that can be said for a real orange as well.

How can one argue against such insanity?

Seriously.
 
All information processing has to be carried out by some sort of hardware.

Saying that consciousness can't exist without a body is an immediately obvious statement to anyone who knows what they are talking about. The argument is that the specific form of the body isn't as strict as you think.

Look, you and I both agree that machines can be conscious.

I have no preset ideas about what materials can be used to build conscious machines.

The difference is that you think you can "program" consciousness and have it work -- actually produce a conscious machine -- and from where I sit you're demanding behavior for nothing, 4-D spacetime effect without any direct physical cause.

It would be different if you actually had some sort of description of how this would work exactly, or even some theory of how one can get actual behavior from programming alone, but you don't, no one does, so there it stands. Or doesn't, I should say.
 
It means nothing of the sort.

It means exactly that and only that.

"In the world of the simulation" means "in the world of someone's imagination".

Because if our time-bomb virus goes off and all life on earth perishes within the space of a minute, then there are no longer any simulated worlds in existence, even as the simulating machines continue to run.

There is only the physical action of the machines. No orange. No vitamins. No racecars. No aquariums. No cities. Without anyone to understand what the output is supposed to mean, it all ceases to be.

(Interestingly, this is not the same for consciousness. A conscious machine would still operate as such after the time-bomb virus took its toll.)
 
So?

What else is necessary, and why do you think that?


Consciousness is an informational process. Carried out by the brain. Sure, that makes it a bodily function. It also makes it something a simulation can do.

That just makes no sense.

It's not an "informational process", because we can see the real world behavior it's kicking off. You don't get northern lights from an informational process, and you don't get Sofia from it either.

Your whole Church-Turing claim fell right through the floor; I don't believe you ever responded when I posted examples of SRIP that aren't conscious. You've got no argument in favor of this idea anymore.
 
That would be the brain, Piggy. Which is a computer.

That's been debunked already, so if that's what you're basing this nonsense on, then there's no point in you and me going back and forth about it.
 