
Explain consciousness to the layman.

Status
Not open for further replies.
Well, this thread certainly took a turn for the verbosely obtuse.

I'm curious, piggy. Suppose we take this simulation of a conscious mind (which of course only exists in our imaginations, being symbolic representations of things), and attach a camera and an arm to it. The simulation, unbidden, uses the two to write "I am conscious. This is not your imagination. Piss off" on a piece of paper. Is it still just so many electrons whizzing about a computer?



I will most likely then believe that it is conscious. Because out of all the ways to announce its consciousness, it chose to do so using the mannerisms of some humans I can think of... being rude buffoons.

If a machine can manage to be as obnoxiously stupid and rude as some humans elect to be, then I am forced to conclude that it is as conscious as these people... which is to say, most likely not at all.

On the other hand it could have been programmed to churn out such a stupid response much like some rude humans are probably programmed by their base and stupid nature to give the illusion of being conscious.
 
Piggy said:
Now keep in mind, the thing you just built is not conveying information. It's a real physical thing, moving some kind of electrophysical impulses through spacetime.

If you want it to convey information for you, you're going to have to come up with some kind of information that naturally mimics what it's already doing anyway.

I don't follow what you mean by 'conveying information'. Naturally it's a physical thing - you need hardware to perform the switching & logic operations.

Right, but keep in mind they're only "logic operations" in your mind. Objectively, they're physical computations, not symbolic ones.


Piggy said:
This is where you hit your problem.

When you emulate any real thing in software, the real things it was doing are no longer being done, and instead some very different real things are being done which are only informationally related to what the original system did
...So you've turned a real thing into an imaginary one.
Sounds like some sort of deus ex machina...

I really can't see what has fundamentally changed; we know that the physical implementation of a transform function isn't relevant to the processing of inputs to produce outputs in any other form of computing - a mechanical adding machine gives the same results as an electronic calculator; the 'real things' being done are physically different, but the function achieved is the same. If the electronic calculator or a computer emulates the adding machine in software, what is imaginary? Isn't there still a functional adding machine? The implementation is different, is all.
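The adding-machine point can be sketched in a few lines of Python (a toy illustration only, not anyone's actual design): two implementations whose "physical" mechanics differ completely, yet which compute exactly the same function.

```python
# Toy illustration: two different implementations of the same function.
# The mechanisms differ; the input-to-output transform is identical.

def adding_machine(a, b):
    # mimic a mechanical adder: advance the total one "gear tooth" at a time
    total = a
    for _ in range(b):
        total += 1
    return total

def electronic_calculator(a, b):
    # direct arithmetic, as an electronic ALU would do it
    return a + b

assert adding_machine(7, 5) == electronic_calculator(7, 5) == 12
```

The point of the sketch is that nothing about the "adding machine" function depends on which of the two bodies ran.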

Yes, but this is irrelevant.

What you're saying here is that we can assign imaginary symbolic values to any similar set of physical computations... as long as the changes proceed the same way, the nature of the object making the changes isn't important.

But that is irrelevant to the point at issue, which is that you can't substitute an imaginary thing for a real one.

Physical computations are real.

The symbolic computations we associate with them are imaginary.

When you swapped the brain's neurons with other objects that performed a similar real function, you simply produced a replica brain.

When you attempted to introduce a simulation into the system, you swapped a real object for an object which is merely associated with the target object in your imagination (because the relationship between the simulator and the system intended to be simulated is imaginary), which won't work, for obvious reasons.

Of course, if the simulation exists in some sort of black box along with some unknown hardware, and the real inputs and outputs (physical calculations) going into it and coming out of it are identical to those of the physical system you want to replace, well, in that case you've simply created a very complicated replacement part.

The original neural processors were already emulating 'real things', by running microcode that translated the instructions for the neuron behaviour into their native instruction set, then executed the native instructions, so there is a level of abstraction between the instructions for behaving like a neuron and the hardware doing it. If a different neural processor chip was used, the same neuron behavior instructions would be translated into different native instructions and executed in a different way by the hardware, but with the same end result - similar inputs would result in similar outputs; the particular physical circuits and pathways used to achieve the transformation from inputs to outputs are not relevant. A Windows application works just the same on my native Intel Pentium box running Windows OS as on the Linux server running a Windows emulation.
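That layering can be sketched as a toy (the "neuron rule" and back-end names here are made up for illustration): one abstract rule, two different "native" executions, identical input/output behavior.

```python
# Hypothetical sketch: one abstract "neuron rule", two different back-ends.
# The native steps differ, but the same inputs yield identical outputs.

def neuron_rule(inputs, threshold=1.0):
    # abstract specification: fire (1) if summed input exceeds the threshold
    return 1 if sum(inputs) > threshold else 0

def backend_a(inputs, threshold=1.0):
    # "native instruction set" A: accumulate, then compare
    total = 0.0
    for x in inputs:
        total += x
    return int(total > threshold)

def backend_b(inputs, threshold=1.0):
    # "native instruction set" B: subtract the threshold, then test the sign
    return 1 if (sum(inputs) - threshold) > 0 else 0

for sample in ([0.5, 0.7], [0.2, 0.3], [1.5, 0.1]):
    assert backend_a(sample) == backend_b(sample) == neuron_rule(sample)
```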

Right. But this simply means that you're changing parts. You're not replacing a physical computation with an entirely different physical computation that is merely imagined to correspond to the original.

If you replace one physical computation with another that is functionally similar -- attach a wooden leg to a broken metal chair -- there's no problem.

But if you try to replace X in any physical system with a simulation of X, the system will ignore the informational transformations (any number of which could be possible, but none of which are affecting the machine in any way) and respond only to the physical computations which are determined by the properties of the simulator regardless of what is supposed to be simulated.

And if those are the same, then your simulation is superfluous, because you're simply using a physical replacement part (which happens to be running a simulation at the moment).

Remember, the brain isn't processing information. To think so is to mistake the information processing metaphor (the post office metaphor) for reality.

Yes, we say that an "image" is "recognized" as a "face" and "routed" through the amygdala, and so forth.

Yet none of that happens. Until we get to the processes which support conscious awareness (which are downstream) none of that language makes sense.

It's every bit as metaphorical as saying that the "pressure" at work is "building up" and I need to "let off steam" by "channeling" my energy into sports and "venting" to my buddies at the bar. We should not take the language of the information processing model any more literally than we take the language of the hydraulic model when discussing the brain.

The impulse isn't an "image of a face" because there's nobody around to look at it and decide that it is supposed to correspond with anything. That information certainly isn't in the impulses. So this would be like saying you can look at a person and deduce if they have a twin.

And if the information isn't in the impulses, then it makes no sense to declare that they somehow are an image of a face as far as the organ of the brain can tell at this point. They can't be. (Symbolic value requires an interpreter.)

Nor is there any structure capable of "recognizing" the impulse as "an image of a face" and therefore "routing" it anywhere.

The reason the impulse goes where it goes has nothing to do with any logical computations. At this stage, it's 100% physical computations, just like your heart or your lungs or your fingernails or anything else. All the laws of physics, nothing more.

When you use a single microprocessor to emulate multiple microprocessors, there is still physical hardware that is performing the same switching and logic operations, but now one piece of hardware is performing the switching and logic operations previously performed by many pieces of hardware. The same functions are applied to convert inputs to outputs, but the overall implementation is different.

Are you suggesting that we must have as many physical processors as there are neurons?

Well, you're either going to have to build your mechanical brain neuron-by-neuron, or you're going to have to sacrifice functionality, because each input-output point has an impact on how the brain functions.

Keep in mind that there are folks right now who are building a brain simulation in precisely this way... neural column by neural column.
 
piggy said:
Yes, we say that an "image" is "recognized" as a "face" and "routed" through the amygdala, and so forth.

Yet none of that happens. Until we get to the processes which support conscious awareness (which are downstream) none of that language makes sense.

Just a nitpick, but according to your definition this is already part of consciousness.
 
Ya think?

No, he's not trying to deliberately miss the point.

What he's trying to explain to Belz is simply that if there were any conscious entity generated by the activity of the simulating machine, it would live in the same world we do, not in some other world... it might just have a different sort of experience of that world.

But one thing's for sure... if the behavior of a machine caused any incidence of conscious awareness, and that machine were also running some sort of simulation of another world, the world that its behavior is intended to simulate would NOT be the world that the conscious entity would be aware of.

It cannot be, since there's no information in the system that could possibly identify such a world out of the infinite variety of possible worlds that could be described by associating symbolic values to its state changes.

And even if there were, there is no mechanism for making that imaginary world into the world where the conscious entity exists.
 
Just a nitpick, but according to your definition this is already part of consciousness.

Not at that point in the pipeline, no. When the impulses are at this stage, they are not yet involved in the processes that generate consciousness. Like I said, those are downstream.
 
Not at that point in the pipeline, no. When the impulses are at this stage, they are not yet involved in the processes that generate consciousness. Like I said, those are downstream.
Then you need to change your definition, because even while dreaming the visual "pipeline" beyond the eyes is actively engaged.

Piggy said:
But one thing's for sure... if the behavior of a machine caused any incidence of conscious awareness, and that machine were also running some sort of simulation of another world, the world that its behavior is intended to simulate would NOT be the world that the conscious entity would be aware of.

It cannot be, since there's no information in the system that could possibly identify such a world out of the infinite variety of possible worlds that could be described by associating symbolic values to its state changes.

And even if there were, there is no mechanism for making that imaginary world into the world where the conscious entity exists.
So what you're saying is we cannot simply be... told... what the Matrix is.
 
Well, you're either going to have to build your mechanical brain neuron-by-neuron, or you're going to have to sacrifice functionality, because each input-output point has an impact on how the brain functions.

If the neuron processors in a processor-per-neuron brain are synched to the same clock, then a single-processor brain can behave identically. If they are not synched, then a single-processor brain can behave as close as you like to the ppn brain, by using appropriately small time slices. With a small enough time slice, the behavior of the two would be indistinguishable.
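A minimal sketch of that claim, using a purely hypothetical toy threshold network: a single processor that reads the old state and writes into a fresh buffer reproduces the synchronous processor-per-neuron update exactly.

```python
# Toy model: "processor-per-neuron" synchronous update vs. one processor
# updating neurons in sequence. With double-buffering (read the old state,
# write a new one), the serial version matches the parallel one exactly.

def step_parallel(state, weights):
    # every "neuron processor" fires at once on the shared clock
    return [1 if sum(w * s for w, s in zip(row, state)) > 0.5 else 0
            for row in weights]

def step_serial(state, weights):
    # one processor, one small time slice per neuron
    new_state = []
    for row in weights:
        new_state.append(1 if sum(w * s for w, s in zip(row, state)) > 0.5 else 0)
    return new_state

weights = [[0.6, 0.0, 0.4], [0.3, 0.3, 0.0], [0.0, 0.9, 0.2]]
state = [1, 0, 1]
for _ in range(5):
    assert step_serial(state, weights) == step_parallel(state, weights)
    state = step_parallel(state, weights)
```

The design choice doing the work is the double-buffering: the serial processor never lets a freshly computed value contaminate the inputs of a neuron later in the same time slice.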
 
Well, this thread certainly took a turn for the verbosely obtuse.

I'm curious, piggy. Suppose we take this simulation of a conscious mind (which of course only exists in our imaginations, being symbolic representations of things), and attach a camera and an arm to it. The simulation, unbidden, uses the two to write "I am conscious. This is not your imagination. Piss off" on a piece of paper. Is it still just so many electrons whizzing about a computer?

How could it not be?

This is like asking, if you simulate a flower, and you're looking at the screen, and you see the image of the flower open, do you still think it's just a computer turning on lights on a screen?

Of course!

In other words, you're asking, what if I simulate a thing, and the simulation emits images that resemble how that thing might behave... do you still think it's not real?

If you simulate a human being in great detail... assuming we have managed to describe a sufficient set of rules to make this possible... and if we've included a monitor output as part of the simulation, then I would not be surprised if the resulting patterns of light caused me to imagine a person who appears to believe he exists... for the same reason that I would not be surprised if those patterns of light caused me to imagine a person with 2 legs.

ETA: I apologize for the fact that some ideas take several paragraphs to explain.
 
Well, this thread certainly took a turn for the verbosely obtuse.

I'm curious, piggy. Suppose we take this simulation of a conscious mind (which of course only exists in our imaginations, being symbolic representations of things), and attach a camera and an arm to it. The simulation, unbidden, uses the two to write "I am conscious. This is not your imagination. Piss off" on a piece of paper. Is it still just so many electrons whizzing about a computer?

It would be remarkably easy to write a program to do just that.

100 PRINT "I am conscious. This is not your imagination. Piss off"
200 END

The technology has existed for thirty years for the ordinary person to create "conscious" programs like these.

Otherwise, the suggestion is of the form "If we could prove that the thing you say is impossible was possible, wouldn't that prove that it wasn't impossible". It's amazing how often that is put forward as if it were some kind of argument.
 
That's true.

No, not really. It's that the computer can have a particular set of state changes easily visible and accessible to human beings. Given the spectacularly huge set of possible state changes in any physical system, it's highly likely that mappings could be found between any arbitrary two systems.
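The mapping point can be made concrete with a toy sketch (the state sequences here are arbitrary and made up): an observer-chosen lookup table turns one system's states into a "description" of another's, and nothing in either system records that the mapping exists.

```python
# Toy sketch: an observer-assigned mapping between two unrelated systems.
# Nothing in either state sequence "knows" about the other; the
# correspondence lives entirely in the lookup table we chose.

chip_states = ["s0", "s1", "s2", "s3"]      # e.g. voltage states in a chip
weather = ["rain", "sun", "rain", "wind"]   # e.g. states of some other system

mapping = dict(zip(chip_states, weather))   # the observer's choice

decoded = [mapping[s] for s in chip_states]
assert decoded == weather
```

Since any equally long pair of state sequences admits such a table, "being a mapping of X" is cheap; what makes computers special is how easily their states can be set, read, and re-mapped.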
 
Piggy, it's almost as if you took the most boring aspects of materialism, the stubborn refusal to admit the logic of dualism, and the woo of idealism, and mixed them all into one depressing worldview that is far worse than any of those three by themselves.

At this point I don't know how you can even consider other humans to be conscious like you are. Given the logic you use that seems to be the natural conclusion -- that our consciousness is only real when you are observing it. Otherwise we are just a bunch of particles that can be interpreted as anything at all.

You cannot possibly have read my posts with any care at all and come to that conclusion.

I have said precisely the opposite of this, in fact.

I have said -- rightly -- that consciousness must be the output of physical computations rather than symbolic ones because symbolic computations require an observing decoder in order to be symbolic (rather than physical) computations.

All symbolic computations are overlaid on physical computations. They have to be, or they don't exist.

Only the physical computations remain real, and the symbolic value we assign to them is invisible to them. Unless human beings agree to them, they literally don't exist. They're imaginary.

Consciousness is not imaginary precisely because it is the result of physical computations, not symbolic ones.

If the brain produces consciousness via information processing, it's the kind that stars and computers perform. It cannot be the kind which computers perform (for us) yet stars do not.

ETA: Fortunately, I base my worldview on evidence, not philosophical isms.
 
In other words, you're asking, what if I simulate a thing, and the simulation emits images that resemble how that thing might behave... do you still think it's not real?
Except that consciousness is not the thing; consciousness is the behavior. You've argued that a computer cannot be conscious because it's disconnected from the real world; that all the processes which contributed to the real brain are now abstract symbolic representations that have no influence on, and are not influenced by, the rest of reality. Welp, that's easy to fix.
 
Try dropping the computer out the window, and very soon he will be. He will cease to exist. He won't know why - but he is just as subject to the actual laws of nature as a person is.

Yes, and pop the quantum bubble that our universe is, from the lab it's been created in, and we very much cease to exist. And again, it's entirely beside the point. Why would you think that, when the simulation stops running, the fact that the world it simulated no longer exists somehow makes my point moot, unless you understood nothing of said point?

So what world you're living in depends on how ignorant you are? Really?

Instead of looking for a single part of my post that allows you to avoid having to address my argument, read what I wrote again. Otherwise you're not worth the bother.
 
No, not really. It's that the computer can have a particular set of state changes easily visible and accessible to human beings. Given the spectacularly huge set of possible state changes in any physical system, it's highly likely that mappings could be found between any arbitrary two systems.

No, it is true... what he's saying is that computers are very useful as information processors because they are so plastic and predictable and fast. They can be made to mimic all kinds of rule-dependent systems, in ways that most rocks can't.

Ditto for brains. After all, what are our imaginations if not simulations we play for ourselves?
 
The "man" who is "in the world of the simulation" can only be in my imagination because that's the only place such a world can be.

I'm not saying you imagine him as a man. I'm saying he's simulated as a man. So yes, you did miss the point.

In the real world, there's only a computer doing the kinds of things it does all the time, sitting there and changing voltage potentials and running a fan and making lights change on a screen, and making a speaker cone vibrate, spitting ink onto paper, and so forth.

You fail to realise that this could be the case for THIS world as well.

The machine itself would have no way of knowing, were it conscious, that the changes in its body were supposed to represent anything.

And yet it could imagine quite a bit, without access to the "real" world.
 
No, he's not trying to deliberately miss the point.

What he's trying to explain to Belz is simply that if there were any conscious entity generated by the activity of the simulating machine, it would live in the same world we do, not in some other world... it might just have a different sort of experience of that world.

Since I'm obviously not saying they are in another universe, and I defined exactly what I meant by world, he most certainly is deliberately missing it. Otherwise he lacks the ability to understand it. I'm not sure which option I prefer.
 
How could it not be?

This is like asking, if you simulate a flower, and you're looking at the screen, and you see the image of the flower open, do you still think it's just a computer turning on lights on a screen?

Of course!

In other words, you're asking, what if I simulate a thing, and the simulation emits images that resemble how that thing might behave... do you still think it's not real?
What's the difference between a simulated story and a real story?
 
Consider also that if it is possible to create artificial worlds of this kind, won't far fewer resources be needed to maintain a conscious being in a computer simulation rather than in a world of matter and energy? Doesn't that imply that for any given conscious being, he is more likely to be living in such a "virtual world"?
Why should it imply that? We don't know the resources needed to maintain a simulation complex enough to support consciousness, and we don't know whether it's even possible or practical, whereas we know that what we believe to be the 'real' world has no trouble supporting consciousness.
 
I want to repeat one point that really needs to be stressed for those who are tempted to believe that "a simulation could become conscious" because an "entity" that exists "in the world of the simulation" might, in some real sense, "become conscious", and that this conscious being in the world of the simulation would believe that it lived in the world which the simulation was intended to represent.

If you find yourself in that camp, consider this....

A simulation is nothing more than a set of physical computations, which is to say some sort of matter and/or energy changing states from moment to moment.

Which is an obtuse way of saying that simulators are objects.

When we run a simulation, what literally happens is that a machine changes state. In other words, an object does what it does.

The only reason it's a "simulator" is because we've set up the way it changes its state (which is to say, the relative position of all its parts) so that a portion of those changes mimic the changes in another system with enough accuracy that we can fast-forward the changes of the simulator system and see what's going to happen to the other system -- which could be real like an engine or imaginary like a fantasy world.

What's important here is that there's no information in either the simulator or in the thing we want to simulate to indicate that the other exists. You can examine each one all you want... you'll never, as a result, be able to figure that out, just like you can't look at a person and decide if they have a twin.

Of course, we could use natural simulators. Like if I only cleaned my house when the moon was full, you could use the moon phase to predict how clean my house would be if you wanted to visit.

Notice that the moon is unchanged by our using it as an information processor, and there's no way anyone can know we've made it into one except by us telling them... which means that its status as such is purely imaginary.

There are a lot of things like that. For instance, a coaster. If it was made to be a coaster, or if someone uses it like one, then it's a coaster. If not, then it's not. There's no coaster molecule. It's an imaginary class of object.

Anyway, we use computers as information processors because we can dictate how they change states, so we can mimic all sorts of stuff.

But here's the thing... the physical universe has no idea which objects we decide to call simulators or information processors.

Consider that.

If you believe that the behavior of the physical computations in the simulator machine creates a new world because it mimics a world that its behavior corresponds to... and if we accept that the physical computations in the simulator machine are the same as all others elsewhere (to deny it is to cast aside all physics) -- which is to say, the machine obeys the same laws of physics as every other object in the universe -- then you must believe that the physical calculations (real behavior) of every group of particles in the universe generates an infinite number of new worlds.

Given that situation, your simulation seems rather superfluous. (ETA: Meaning, if we do live in a maximally rich multiple universe, the simulation can only be redundant.)

But think about that. Accept the one, you must accept the other, or live by your own metaphysics.
 