
Explain consciousness to the layman.

For those who weren't present for the previous threads, I characterised Piggy's argument as consciousness requiring computation and a magic bean.

I've seen nothing since then to change my mind.

Brother, do you seek to remove the bean from mine eye? ;)

The funny thing is, the bean is in your hand.

I'm trying to get you to let it go.

For me, consciousness is computation, period, end of story.

But I'm talking about physical computation -- the physical activity of the brain. And nothing else.

You, however, overlay a set of symbolic (informational) computations on top of the physical ones.

Then you say we can dispense with the physical ones!

Now who's left with the magic bean, huh?
 
It might turn out to be easier to do for a large number of pseudo-neurons sharing a physical processor resource. But plain curiosity seems to me a good enough reason.

Well, yeah, I can see doing it for that reason. Like maybe you want to see if a particular material will work or not.
 
If you, or any of the other individuals with bizarre illogical worldviews, had bothered to ask intelligent questions of pixy -- such as "how do we visually perceive things in the world?" or "how does our memory work?" or anything else that is specifically pertinent to human minds -- he would have given intelligent answers. Instead you people ask the most vague questions possible, "how does consciousness arise?"

God forbid, on a thread called "Explain Consciousness to the Layman", that anyone should ask "How does consciousness arise?"

The only reason that question sounds "vague" to you is that your way of viewing the workings of the brain simply doesn't account very well for consciousness... if at all.
 
OK, I guess I was thinking of the physics usage.

I actually meant it that way.

If there's no process, there's really no way to say that anything exists.

Imagine if there were only one thing in the universe and it never changed. Well, you can't. Because if there are no processes, there is also no space, no time, and no matter.

There was an interesting documentary a while ago, interviewing a South American(?) split-brain patient, and showing film of her shortly after the operation, where one hand (controlled by the non-verbal side) would persistently thwart or undo the actions of the other hand, e.g. when she buttoned up her blouse with one hand, the other would follow, unbuttoning it. It occurred to me this might have been the only way the non-verbal half could express its frustration or distress, etc. The interview was conducted a couple of years post-op, when all these conflicting behaviours had died away. I wondered what had transpired in her head. They said that where such dual consciousness is manifest, it usually doesn't last long before the obvious differences go away and some kind of integration seems to happen.

The mute side can hear what the vocal side says, and they can sense each other -- and by force will mimic each other to some extent -- based on the physical feedback from emotional states, even if they don't know (and they probably don't) whether any given emotion is primarily generated in their half or the other half.

There's a certain "dorm period" effect going on.
 
That is the question I've been asking all along. What is special about the brain? Why would we think anything is special about the brain?

All available evidence says it's just a big marble machine. In fact, as far as we know, it's not even possible for it to be more than that.

So whatever the brain can do with the output of the marble machine, a bigger marble machine can do by itself.

But think about that a moment, Pixy.

If the brain is just a big marble machine... which I'm fine with saying it is... then what is the "output" of that machine?

Before you point to any "addition" going on, let me remind you that there is nobody looking at the brain and deciphering any symbols on it... it's just an object in spacetime... so let's look at this machine as an object in spacetime, too.

If the shape of the brain reminds us of a walnut, or if a pattern of paint on the marble machine reminds us of numerals, we'll just ignore that.

Ok, so we're looking at the marble machine, and we see there are channels and there are valves on the channels and curves, and there are trays with holes in them, and a tray without holes, and marbles are dropped through the channels.

That's what the brain is, too.

It's just like the marble machine, a cascade of physical events, nothing more.

So whatever the brain can do with the output of the marble machine, a bigger marble machine can do by itself.

It should be obvious why this statement is logically incorrect.

Remember our 2 systems: one comprised of a machine and a brain that understands the symbols, and a machine by itself.

These are two different systems. The first can comprise an information processor, the second cannot.

You've just said that whatever the first system can do (a brain with the output of a machine), the second system can do (a machine alone).

That assertion is patently and demonstrably false.
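
To make the two-systems contrast concrete, here is a minimal Python sketch of it (purely illustrative; the names `machine` and `interpret` are invented, and the six-bit rocker pattern is an assumption):

```python
# A rough sketch of the two systems. "machine" stands in for the bare
# physical process; "interpret" stands in for the brain state that
# assigns symbolic values to the machine's configuration.

def machine(marbles_dropped):
    """Marbles in, a final rocker pattern out -- nothing more."""
    return format(marbles_dropped, '06b')   # e.g. 5 marbles -> '000101'

def interpret(rocker_pattern):
    """The symbolic overlay a brain supplies: pattern -> number."""
    return int(rocker_pattern, 2)

# System 1 (machine plus brain): the pair can be read as computing a number.
print(interpret(machine(5)))   # 5

# System 2 (machine alone): all that exists is a configuration of parts.
print(machine(5))              # '000101'
```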
 
Ok, I'll put it back into context for you and we'll move on from there.

Remember, the topic is symbolic information systems, such as adding machines and simulator machines.

What does the difference between symbolic aggregation and physical aggregation have to do with that, and what's up with the role of the brain state?

If two quarters are washed into the same spot in a gutter, or if they're picked up by a hand and placed together, these are both examples of physical aggregations of the objects.

And as your examples show, no brain state is required for this to happen -- it can happen with one, without one, doesn't matter.

But what about our marble machine? What sort of aggregation can it perform, and under what circumstances?

To examine this, we'll use an old schoolbook way of looking at systems.

Imagine a page with two big circles drawn on it, representing two systems. Whatever's inside the circle is necessary for that system, whatever isn't, ain't.

So if I want to represent the system necessary to move a pile of brush to the county composter, I need the brush pile to be in the circle, of course, and the county composter, and some sort of vehicle that can carry the brush, and some sort of passage for the vehicle between the two places.

That's a fair representation of our necessary system.

Notice we leave out the color of the vehicle, because although the vehicle will certainly have some color to it, no particular color is necessary for the system.

Ok, so let's look at the first circle, which only contains the marble machine, nothing else.

If that's all that's in the circle, can it be an information processor?

As it turns out, the answer is "No". Here's why....

Without a brain state to determine the "meanings" of the patterns of paint on the machine, they might as well not be there. You need a brain to assign those meanings and to interpret them.

Without that, all you're left with is an object you can drop marbles through.

You can prove this to yourself by viewing the video with the sound off, and ignoring the patterns of paint -- remember, inside our circle there's nothing that has any means of deriving any meaning from them. (Even if the machine were completely self-aware, this would still be the case.)

Without a brain state somewhere in the circle, the system cannot be a symbolic information processor. The only addition it can do is to channel the marbles at the top into a single group at the bottom.

So let's use our other circle to describe the minimum system needed for this thing to work as an info processor.

Well, we need at least one brain capable of assigning values to different aspects of the machine, determining the rules of operation, and interpreting the results of the machine's behavior. (A programmer and a user or reader.)

It might be one brain or more involved, but at least one is required.

And this is true of any object in the universe that is (or can be) used as an information processor.

By itself, alone in the circle, as an independent unobserved system, it cannot perform that function. To do so, at least one brain has to be involved in the system.

And here's an extremely important consequence of that fact:

The "world of the simulation" -- which is to say, whatever a simulation program is supposed to represent -- cannot be located in the simulator.

Remember, PixyMisa's marble machine is intended to simulate aggregation -- that is, its physical activities are designed to mimic, in a certain way and only if you understand the symbology, the process of grouping things together. In that way, it can answer the question "How many quarters do I have in the house if I've got 3 in my pocket, 32 in my coin collection, and 2 under the sofa cushions?"

But by itself, it can't do that.

It can only do that if an observing brain changes its internal states while observing it.

Which leads us to an extremely important fact:

Changes in the state of the "world of the simulation" exist as changes in the state of a brain.

Notice that there is nowhere else for this "world of the simulation" (WOTS) to be.

All we need for the simulator to run is the machine and a brain that understands its symbols and usage.

Changes in the state of the machine cannot be changes in the WOTS because, as we've seen, by themselves they mean nothing... they are only what they are, the movements of the machine.

The only other choice we have is the other object in the system... the brain, or brains.

And indeed, we find that the WOTS must change every time the relevant brain states change.

In short, the world of the simulation literally exists in your imagination, not in the simulator.


This is not philosophy, but physics.

Now I hope you can see the relevance to the question of whether or not a machine simulating (not replicating) a brain can be wired into a robot body and produce a conscious robot.

No, because the brain being simulated doesn't exist in the simulator.

You might as well plug in a machine simulating a sunset over Miami Beach.

But when we examine the system, moving along the sensory nerves up to the skull and arriving at that computer running the brain sim, it is so tempting to switch our perspective over to the WOTS in our own imaginations and take our eye off the ball -- which is simply the mass of moving parts in front of us.

In other words, if you want to know how a part will work in a machine -- and that's what we're doing here, sticking a computer box into a machine -- you can only pay attention to what it's physically doing.

So we could only use our marble machine inside another machine that needed a few marbles at a time. The fact that we can symbolically use that machine to add two-digit numbers doesn't mean it can work in a machine that needs 20 marbles to come out of it.

And the fact that we can "read" the simulator in order to change our own brain states which are the simulated brain... well, it doesn't mean anything at all as far as whether or not a robot body will be conscious if we stick a machine designed to run simulations in its head, rather than a machine designed to be conscious.

The answer is no, it won't.

If you want that robot to be conscious, you've got to design and build a brain that does whatever is physically necessary to make that happen, not one designed to run simulations of brains.

That is what I have been saying.

And that is why consciousness cannot be programmed.

For the same reason you can't go and program yourself a new truck or a bigger house or clean laundry.

I have got a hold of this consciousness and it seems a lot like a new truck or a bigger house or clean laundry
 
Irrelevant to what?

A human brain--or at the very least, a wolf brain, is responsible for judging whether or not the "join" happened. What exactly are you drawing a distinction to? And why is this such a "relevant philosophical musing"?

ETA: I'm wanting to know what it is you think is significant about physical addition before moving on to discussing things like momentum, aggregation of entropy, and so on. In particular, your concept of "physical addition" is meant to contrast with "symbolic addition", and I want to know if there's such a thing as symbolic addition that does not involve a brain state. Your inclusion of brain state in your definition of physical addition is already suspect, since the very things you are claiming are physical addition do indeed involve brain states somehow.

If you seriously want to argue that objects do not aggregate and disaggregate, you're on your own.

Seriously, if you think that's worth contesting, you're missing the entire point of what I'm saying.
 
Physical symbols. Let's say we use wolves as a symbol.

ETA: Oh, and we're building the "whom". If you need this "whom" to build the "whom", you have a problem.

I agree, if you need a whom to build a whom, there's a problem.

Your scenario creates this problem, and that's why I object to it.

Moving on, there are no "physical symbols" in the sense in which there are physical computations and physical information.

The latter 2 concepts allow us to deal with systems that don't include observers.

All symbol systems require encoders and decoders.

There are plenty of folks who talk about physical systems in terms of information and computation, but I don't know of anyone who refers to them in terms of symbols.

If you did want to do that, you'd be forced to say that my front door opens because my door lock recognizes my gesture with the key as a symbol telling it to move.

Once you go there, then you can't deny that the moon recognizes its path around the earth.

Oh, wait... looking back, I think by "physical symbols" you might mean something like "symbols representing physical objects"... to which I can only say....

Jesus H. Christ Brown....

Until and unless you can put down the mathematical perspective, the informational perspective, and simply examine a physical system for what it is -- because, after all, the world of matter and energy is all we know of reality -- you will never understand anything I'm saying at this point in the discussion.
 
And if you build a simulator that has four marbles in it, where four is somehow significant, you need to have some quantity of matter and/or energy to represent that there are four marbles, and it had better be different than the quantity of matter and/or energy representing three, two, and one marble.

We're going round and round in circles.

No we're not.

Not both of us anyway.

You said it yourself, you need some quantity of matter and energy to represent the various symbolic values.

OK, we're fine there.

Like, you know, you need some paint on the marble machine.

There you go, there's your matter and energy to represent the symbolic values.

This is now accounted for in our system.

So what's the input of matter and energy?

Well, there's paint on the machine in some pattern which required X amount of matter and X amount of energy.

OK, fine.

Doesn't matter whether the pattern is a green "4" or a red "F" or an orange stick figure or a blue spiral, we got X amount of matter and X amount of energy into the system.

What are we left with?

The same machine we had before. Only now it's got paint on it.

This input of matter and energy has no more effect on the system than if you nailed a pinecone onto it.

Unless you add a brain state to the system.
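
As a toy illustration of that last step (the symbol tables below are invented, and reading the red "F" as hexadecimal is an assumption):

```python
# The paint pattern is one fixed physical fact. Its "value" varies with
# whichever symbol table a brain brings to it -- and with no table in
# the system, there is no value at all, just pigment.

paint_pattern = 'green 4'        # X amount of matter, X amount of energy

table_a = {'green 4': 4}         # one brain's convention
table_b = {'green 4': 0xF}       # another brain's convention for the same paint

print(table_a[paint_pattern])    # 4
print(table_b[paint_pattern])    # 15
# Remove both tables, and paint_pattern is as inert as a nailed-on pinecone.
```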
 
Ok, I'll put it back into context for you and we'll move on from there.

Remember, the topic is symbolic information systems, such as adding machines and simulator machines.
Yes.

Okay. There's a general thing you keep bringing up over and over. So let's just start right here.
But what about our marble machine? What sort of aggregation can it perform, and under what circumstances?
...
Without a brain state to determine the "meanings" of the patterns of paint on the machine, they might as well not be there. You need a brain to assign those meanings and to interpret them.

Without that, all you're left with is an object you can drop marbles through.
Your phrasing is still clumsy as hell, but I'm just going to wing it. By focusing on simulations rather than mere correlations between systems, what we wind up doing is examining things like this that have an intended purpose. Like, not just the symbols, but the entire machine--the thing that the person who built it had in mind when he put the thing together. But part of this intent is the symbols--the guy who made this marble machine created symbols using the rocker.

So, the first thing you need to do is stop saying "brain state", because that's not what you mean. You're really after just meaning itself. So let me explain to you how this works.
You can prove this to yourself by viewing the video with the sound off, and ignoring the patterns of paint -- remember, inside our circle there's nothing that has any means of deriving any meaning from them. (Even if the machine were completely self-aware, this would still be the case.)
The problem, I surmise, is that you're imagining that the task of coming up with the meaning of the machine is one akin to mind-reading the builder, to figure out what he had in mind when building the machine. I'm going to suggest to you that this is simply the wrong approach. After all, even if you did somehow guess at what the programmer had in mind, all you're doing is deferring to his intent.

What you need to do, instead, is to study the machine. In particular, you need to pay attention to two things--entities, and transformations. By "entities", I mean "anything with an identity"; that is, something you can recognize. Something that is distinct from other things, and the same as itself. A transformation is something that takes entities and makes entities. This is a vague definition on purpose--these things can be anything at all. They're just that abstract.

But in the marble machine scenario, it's not that difficult. To figure out what entities are, you just observe invariants; that is, things which tend to remain the same. The marbles would do--and I take it we can start with our ability to recognize marbles, columns, relative position, and so on--unless you want me to explain how to identify those.

So, the marbles are entities. And the rockers are entities. Now we look and see what happens to the entities. We just jump in and play, and look for things. Put a bunch of marbles in. Put a few in. It's not too long before we notice an interesting property--marbles either stay in a column, or they move left. Further study reveals another interesting pattern--if we start from scratch, and I put two marbles into the same column, I get one marble in a column to the left. It doesn't seem to matter which column we start on, this pattern holds. There's another invariant. It probably won't take that much more tinkering before we discover that if I put one marble into a column, and put two marbles in the column to the right, then I wind up with a single marble at the left. This seems interesting, because without that marble in the column, the two marbles would have put a marble there--so the system seems to behave the same if I put one marble into a column, or two marbles into the column to the right. So there seems to be a bit of a global pattern--and we can try to figure out to what extent this rule applies.

After some study, we might figure out that we can set any arbitrary pattern of rockers--just start from a blank state and put marbles into the columns where we want rockers set. And with our rules above, we note that we can also get a particular rocker into position by adding two marbles to the column to its right, or four marbles to the column two to its right. Then we may note that we can reach any given position simply by adding marbles to the rightmost column--our desired arbitrary position will get set one column at a time, from the left to the right. Then we may get interested in just adding marbles on the right to see how many states we can produce--and we note that when everything is filled, and we add one more marble, everything empties.

So by now, we note that every arbitrary position can be reached solely by adding marbles into the righthand column. Not much after that, we may learn that filling in the arbitrary positions manually--by dropping the marbles into the columns we want--and adding more marbles to the machine arbitrarily, produces the same result as adding enough marbles to make one position, then adding enough marbles to make the other one.

And thus, we learned about positional numbering systems, and the meaning of addition using them.
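
The rules described above are enough to pin the machine down completely. Here is a minimal Python sketch of them (the class and method names are invented; the one-toggle-rocker-per-column behavior is assumed from the description):

```python
# A minimal sketch of the behavior described above: one rocker per
# column, and a marble either comes to rest (flipping an empty rocker)
# or rolls one column left (flipping a set rocker back). Column 0 is
# the rightmost column.

class MarbleMachine:
    def __init__(self, columns=6):
        self.rockers = [0] * columns   # 0 = rocker empty, 1 = rocker set

    def drop(self, column=0):
        """Drop one marble into a column and let it cascade."""
        while column < len(self.rockers):
            if self.rockers[column] == 0:
                self.rockers[column] = 1   # marble comes to rest here
                return
            self.rockers[column] = 0       # rocker flips back...
            column += 1                    # ...and the marble rolls left
        # Falling off the far end: everything empties (overflow).

    def state(self):
        """Read the rockers from left (most significant) to right."""
        return ''.join(str(b) for b in reversed(self.rockers))


m = MarbleMachine()
m.drop(0)
m.drop(0)
print(m.state())   # '000010' -- two marbles in one column leave one
                   # rocker set in the column to the left, as described
```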

The meaning of the symbols derives from the relationships between the states of the system and the environment of the system. The entire exercise above is simply one of studying what the states of the system imply about the environment--the full theory of the machine, which can be discovered simply by studying what the state implies about the inputs--conveys the meaning of the machine.
Without a brain state to determine the "meanings" of the patterns of paint on the machine, they might as well not be there. You need a brain to assign those meanings and to interpret them.
The machine reacts to its inputs in specific ways. Figuring out the meaning of this machine is simply a matter of studying what the states of the machine imply about the inputs. As soon as you build a full theory of this, you have discovered the meaning of the machine.

It's all there, in the machine.
Without that, all you're left with is an object you can drop marbles through.
Not quite. You have the machine reacting in specific ways to the marbles. And you have the states of the machine exposed--which, in this case, are the intended outputs. From this you have very real behaviors that you can study to figure out what the state implies about the environment.

How did you suppose we discover meanings?
The only addition it can do is to channel the marbles at the top into a single group at the bottom.
Not true. See above. You just have to develop a full theory of the machine, and out plops everything from a positional numbering system to addition.
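
Continuing the sketch above (and assuming the MarbleMachine defined there), the "full theory" amounts to this: the rocker pattern, read as a binary numeral, tracks how many marbles went in, which is all it takes for addition to plop out:

```python
# Reading the state as a positional (binary) numeral recovers the
# running total of marbles inserted -- the machine's "theory" in one line.

m = MarbleMachine(columns=6)
for _ in range(5):            # enter 5 marbles...
    m.drop(0)
for _ in range(9):            # ...then 9 more
    m.drop(0)
print(m.state())              # '001110'
print(int(m.state(), 2))      # 14 -- the sum, recovered purely from the state
```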
So let's use our other circle to describe the minimum system needed for this thing to work as an info processor.

Well, we need at least one brain capable of assigning values to different aspects of the machine, determining the rules of operation, and interpreting the results of the machine's behavior. (A programmer and a user or reader.)
Again, why a brain? All you need is something capable of generating a full theory of the machine. You're begging the question.
It might be one brain or more involved, but at least one is required.
Yes, obviously. The guy that built the thing. But that's just because it's a machine someone built.
The "world of the simulation" -- which is to say, whatever a simulation program is supposed to represent -- cannot be located in the simulator.
Maybe in general. If this marble machine was supposed to balance the builder's checkbook, it'd be impossible to figure that out without asking the designer. After all, the only thing it interacts with are marbles. But you're misunderstanding something very fundamental about how meaning comes to be. Everything you need to understand that the machine performs addition is there in that marble machine. It just requires studying the invariants.

It's a real machine, and it performs real transforms according to real inputs. And it obeys the laws of physics to do it. It's possible by studying it to figure out that it adds...
For the same reason you can't go and program yourself a new truck or a bigger house or clean laundry.
...but it doesn't haul freight.
 
But you're just rationalizing your position. It's a "just so" argument. You're feeling your way through it--you're not presenting cold hard logical facts.

And you need to present cold hard logical facts.

If you want to claim that A and B are accomplished with the same net matter and energy as A alone, but that B is somehow independently real, you have some explaining to do.
 
The point is that for the body to do anything, whether that's growing toenails or moving muscles or cranking up awareness in the morning, it's gotta be doing something.

Which means that we can't bypass the real brain with a machine that runs sims (rather than doing what the brain does) if we want the apparatus to be conscious.

That's the most dumbfounding non sequitur I've seen in a while.

Simulations DO stuff !

and simulator machines don't produce conscious experience.

Hey, if you say so ! :rolleyes:

(Don't forget, the "world of the simulation" is not anywhere in the machine.)

I don't forget the claim, but I seem to have missed the proof.
 
Uhm, this "we" you're talking about is an integrated planning element that retrieves information about its environment in a useful form to it; basically, anything you can consciously conceive is the type of information you have. What you are aware of is, well, whatever information goes into that integrated planning element. By definition, if you are not aware of something, it is not information that went into your integrated planning element (it very well may be in your planning element but not integrated with enough things for you to report on it, reflect on it, compare it to other things, and so on).

Will that explanation do?

How abstract of you.

This is so far removed from any specific discussion of what brains are doing to generate conscious awareness, I don't believe there's anything relevant that can be said in response.
 
I have got a hold of this consciousness and it seems a lot like a new truck or a bigger house or clean laundry

Me, too.

Which is why I conclude that it, like them, cannot be programmed.
 
How abstract of you.

This is so far removed from any specific discussion of what brains are doing to generate conscious awareness, I don't believe there's anything relevant that can be said in response.
Well, of course it is, because your "challenge" had no teeth. The explanation happens to simply be a tautology. I don't even understand why you thought it was supposed to be a problem.
 
I have got a hold of this consciousness and it seems a lot like a new truck or a bigger house or clean laundry

Me, too.

Which is why I conclude that it, like them, cannot be programmed.

It was six men of Indostan
To learning much inclined,
Who went to see the Elephant
(Though all of them were blind),
That each by observation
Might satisfy his mind.

The First approached the Elephant,
And happening to fall
Against his broad and sturdy side,
At once began to bawl:
"God bless me! but the Elephant
Is very like a wall!"

The Second, feeling of the tusk
Cried, "Ho! what have we here,
So very round and smooth and sharp?
To me 'tis mighty clear
This wonder of an Elephant
Is very like a spear!"

The Third approached the animal,
And happening to take
The squirming trunk within his hands,
Thus boldly up he spake:
"I see," quoth he, "the Elephant
Is very like a snake!"

The Fourth reached out an eager hand,
And felt about the knee:
"What most this wondrous beast is like
Is mighty plain," quoth he;
"'Tis clear enough the Elephant
Is very like a tree!"

The Fifth, who chanced to touch the ear,
Said: "E'en the blindest man
Can tell what this resembles most;
Deny the fact who can,
This marvel of an Elephant
Is very like a fan!"

The Sixth no sooner had begun
About the beast to grope,
Than, seizing on the swinging tail
That fell within his scope,
"I see," quoth he, "the Elephant
Is very like a rope!"

And so these men of Indostan
Disputed loud and long,
Each in his own opinion
Exceeding stiff and strong,
Though each was partly in the right,
And all were in the wrong!

Moral:

So oft in theologic wars,
The disputants, I ween,
Rail on in utter ignorance
Of what each other mean,
And prate about an Elephant
Not one of them has seen!
 
So, the first thing you need to do is stop saying "brain state", because that's not what you mean. You're really after just meaning itself. So let me explain to you how this works.

The problem, I surmise, is that you're imagining that the task of coming up with the meaning of the machine is one akin to mind-reading the builder, to figure out what he had in mind when building the machine.

I hate to have to tell you this, but you're incorrect on this point, so everything after it is off.

Believe it or not, I really do mean "brain state", I really do mean what I type.

I know that doesn't fit with your view of the world.

And I mean that literally. I know that what I'm saying simply doesn't match up with the patterns you're using to sketch all this out.

Nevertheless, I mean what I say.

If the machine is isolated as a system, there are no symbols, and no symbolic behavior. There cannot be. This is quite literally impossible, unless the system is the encoder and decoder of its own behavior, which presents an ominous bootstrapping dilemma.

All information processors used by human beings involve, minimally, two components -- a machine, and at least one brain to assign the symbolic values and rules, and to interpret the symbolic values of the machine's states according to those values and rules.

This is entirely noncontroversial.
 
That's the most dumbfounding non sequitur I've seen in a while.

Simulations DO stuff !

Yeah, and so do simulators.

And those two are not the same.

They can't be, because if they were, it wouldn't be a simulation, it would be a replication.

If you have a simulation, some difference between what the simulation and simulator are doing is a requirement.

Which means that if you replace anything with an object that's running a simulation of that object (unless it's a redundant simulation of the simulator machine), there must be real differences in the system as a result.
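
A minimal sketch of that requirement (the heater example and all names here are invented; it stands in for any simulated-for-real swap):

```python
# A simulator of a heater updates a number; a real heater moves joules.
# Swap one in for the other and the physical system must behave differently.

class HeaterSim:
    """Computes what a heater would do, without doing it."""
    def __init__(self, start_temp_c=20.0):
        self.simulated_temp_c = start_temp_c

    def step(self):
        self.simulated_temp_c += 0.5   # a change in a register, not in the room


sim = HeaterSim()
for _ in range(10):
    sim.step()
print(sim.simulated_temp_c)   # 25.0 -- and the room is exactly as cold as before
```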
 