
Explain consciousness to the layman.

I'm not appealing to some guy's intent. I'm only telling you what the states imply about the inputs.

You cannot use the word "imply" in the way you're using it without involving intent, yours or someone else's.

When we look at what the machine physically does, we have no need to "imply" anything. You put in 6 marbles, you get out 6 marbles, or you get out 6 minus the number still trapped inside.

The machine only "implies" other values if you decide to pay attention to certain features, ignore others, and apply specific symbolic values to the features you don't ignore.

This process can be used to derive more than one symbolic purpose for any given object, bar none.

In all cases, "intent" must be taken into consideration.
 
This is wrong. I observe a world using my senses, and I interact with the same world. When I reach out and touch a cup, and say "cup", what I mean by "cup" is whatever thing I reached out and touched. I'm relating that concept to a thing I recognize using my sensory inputs, and the thing I can interact with in particular ways.

No, it's not wrong.

Please note that I plainly said if the object were aware of itself and nothing else (thereby describing the system including the object alone).

Not only that, how do you know that the cup you're using isn't being used by someone else as a symbol for something?
 
The neurons in our head are living cells. They have a DNA sequence with particular genes in it. They make proteins; they undergo respiration. They do lots of things. If you want to argue that every last thing the neuron does is responsible for consciousness, then you can speak of every part of this machine as critical.

Otherwise, you're nitpicking about irrelevant details. Stay inside your circles.

Yeah, they do lots of things.

And if you think you can remove all the physical work of the brain and replace it with a machine that does something else entirely (no matter what that is) and still get consciousness, despite the fact that we know consciousness is a result of the work of the brain, I have something more than a nitpick with that.
 
No, the distinction is imaginary. If every time I apply modus ponens, it works in the real world, then modus ponens works because it works in the real world. It's not a type of thing that only works for Donald Duck.

Quite the opposite. Only Donald Duck can do things that betray logic.

Your use of the term "imaginary" is questionable. It seems you're just conflating "imaginary" with "abstract".
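
For reference, the rule in question, written out in standard notation:

    P, P → Q ⊢ Q   (modus ponens: from P and "if P then Q", conclude Q)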

Yeah, well, Major Tom kills the space squid because of what he really does with his body.

You can preserve all those transformations in the teleporter, and he'll show up with suction cup marks on his suit and weariness in his muscles and a memory of the battle in his brain, but that won't kill the space squid.

For the same reason, you can preserve the syntax in your simulator and run it out the other side, but without an actual brain there, there are no real waves in spacetime to synch up any real electrical fields in spacetime, which means that the body won't be conscious.

Any real event depends on more than logic... it depends on something doing something described by that logic.

And it matters whether that something is a man or a transporter beam, a brain or a simulator machine.

That's a fact.
 
My italics.

Clearly there is something the computer can't do. It can't replace a human brain and control a human body.

You keep saying this without explanation. What do you mean? If two systems have the same inputs and outputs and produce exactly the same outputs when presented with the same inputs, and both are connected to a human body in exactly the same way, then why do you think one can "replace a brain and control a human body" but the other can't?
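
To make the question concrete, here's a minimal sketch in Python (the controllers and the stimulus encoding are invented for illustration, not a model of any brain): two systems that differ completely on the inside but agree on every input/output pair, so nothing wired to them downstream can tell them apart.

    # Two internally different "controllers" with identical I/O behavior.
    def controller_a(stimulus: int) -> int:
        # Computes its response arithmetically.
        return (3 * stimulus + 1) % 7

    TABLE = {s: (3 * s + 1) % 7 for s in range(7)}

    def controller_b(stimulus: int) -> int:
        # Looks the same response up in a precomputed table.
        return TABLE[stimulus % 7]

    # Any "body" wired to these receives identical commands from either one:
    assert all(controller_a(s) == controller_b(s) for s in range(100))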
 
No, you can't.

The problem is, if the thing works, it's both a replica and a representation.

But the only part of that which is important here is the fact that it works as a physical replacement. Whether it also works as a representation is merely coincidental, trivial, and irrelevant.

The result of that fact is this....

If you treat the representation as if it were important, you're going to examine that aspect of it -- the one that's irrelevant to why the thing works -- and follow the rules of logical systems, which are different from those of real systems.

For instance, you might conclude that it can operate at any speed and exhibit the same sets of behavior because logical computations can be performed at any speed and the outcome won't change.

But this is not true of physical systems -- you can't make an airplane fly by running its physical computations at any speed, for example.
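
The first half of that contrast is easy to sketch (the pause below is arbitrary; the point is only that a logical computation's outcome is independent of its speed):

    import time

    def logical_add(a, b, pause=0.0):
        time.sleep(pause)  # run the very same computation more slowly...
        return a + b

    # ...and the outcome is identical at any speed:
    assert logical_add(2, 3) == logical_add(2, 3, pause=1.0)
    # No analogous guarantee exists for a physical process like flight,
    # where the rates themselves are part of what the system does.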

And the math which describes our world is reversible in time, but reality stubbornly isn't.

Instead, we have to view the replacement neuron as a physical system.

And yes, a collection of these physical parts would make a brain, if they really do act functionally like neurons in all respects.

But this does not mean that a machine which preserved only the logic would do what a brain does.

That's like saying we can make Major Tom appear at the space station with his atoms logically rearranged as if he had killed the space squid, and thereby kill the space squid.

If the brain is replaced by a machine that doesn't get the physical work done, then that work doesn't get done, and the machine isn't conscious in the real world... and there's no other place to be conscious.

Is that a whole lot of text just to say the single processor brain would not be fast enough? If so, that is just a matter of better hardware.
 
Yes, that's your premise--that meaning requires something in a human brain and cannot be derived from devices like what PixyMisa is suggesting.

But you're supposed to be arguing for this, not assuming it.

That's been done, ad nauseam.

You simply refuse to deal with it by describing scenarios that require interpreters and pretending that they don't.

Every single scenario you've described has included an interpreter who pays attention to some features and ignores others, and favors certain possible uses above others because they're useful to human beings.

The checkbook, the hen house, all of them.

Without exception.
 
I did not decide that putting two marbles into slot $ puts the machine into the same state as putting one marble into slot #. I figured out that it did.

But you went beyond that, and decided that this fact was important to the function of the machine, but the ratio of the lengths of the sides, or the channels, was not.

And you furthermore decided to ignore the other marble which went into the bone pile, and made no attempt to say that this somehow gave the bone pile a value of 2.

In fact, only by ignoring the position of the other marble are you able to make your assertion that # has some sort of value of 2. Why not ignore that channel and pay attention to the bone pile instead?

There's no reason inherent in the machine for doing either.
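
For illustration, a minimal sketch in Python with invented names (it assumes nothing about the machine beyond the channels and the bone pile described above): the physical state is just marble positions, and each "value" comes from a reading applied from outside.

    # The machine's physical state: marbles at various locations.
    state = {"channel_hash": 1, "channel_dollar": 0, "bone_pile": 1}

    def read_as_binary_adder(s):
        # One reading: a marble in channel # counts as 2, in channel $ as 1;
        # the bone pile is ignored.
        return 2 * s["channel_hash"] + 1 * s["channel_dollar"]

    def read_bone_pile(s):
        # An equally arbitrary reading: ignore the channels, count the bone pile.
        return s["bone_pile"]

    # Same physical state, different "meanings" -- the difference lives in
    # the reader, not in the marbles:
    print(read_as_binary_adder(state), read_bone_pile(state))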
 
And what do you mean by an observing brain? Anything I say about the system will be done by an observing brain, because that's what I am. If you want to include an observing brain for this reason, then you need to do it honestly across the board, and conclude that the universe would not exist if you were not alive. After all, even if someone shows you evidence to the contrary, you can only see it with your observing brain.

You and I are both taking a "God's eye view" of the systems we're looking at.

We discount our own presence, because we have to.

The question is, if we examine the system that includes the machine alone, is it an information processor?

The answer is unequivocally "No" because there's no one to assign any values to its functions.

It's true, from our God's eye view we can discern that it "could" be used as a binary adding machine, but remember, when examining the system with just the computer in it we have no reason to give any preference to functions that are particularly useful to humans.

So we also can discern any number of real physical uses it could be put to, and a pretty much unlimited number of symbolic ones, depending on which symbolic values we attach to which parts of the system.

On the other hand, when we look in the circle that includes the machine and a brain that knows how to use it as a binary adding machine, then -- and only then -- does it become usable as an adding machine, because the specifications for how to use it are now included in our system.

In the circle with just the machine, these specifications did not exist. Nor were they discernible (except as one possibility among innumerable ones) from the machine itself.
 
No, I'm not ignoring the marbles. I'm attending to the machine states.

So you have decided that the positions of certain parts of the machine (the levers) are important, while the positions of other parts of the machine (the marbles) are not.

You certainly did not get the information telling you which parts to "attend to" from the machine itself.

So where did you get that information?

Hmmm.... I wonder.....?
 
Are you suggesting that the use of the brain as a food source may be key to generating consciousness? Or has a point flown over your head?

I think it's at least worth considering the possibility that some physical attribute of the brain produces a physical effect of consciousness.

Yeah, that's not the point either. Assume that PixyMisa gives a full explanation for consciousness, and that he builds his Pixydroid. Piggy laughs and says, "That's not what causes conscious meaning! That marble machine could be used as a fuel source."

Then Piggy discusses how he thinks brain waves are a key to consciousness.

Piggy has opened a door. PixyMisa can now say, "that's not what consciousness is! The brain can be used for food."

Don't you see the connection here? Piggy cannot use a criticism against the machine that can equally be used against his own theory of the brain.

The burden of proof is undoubtedly on those who insist that it is only the computational nature of the brain that counts, and that nothing else is of any significance. They need to demonstrate that nothing in the physical configuration of the brain matters.

No, the burden of proof is on the person who makes a claim. Piggy is claiming that computation cannot generate consciousness. And he's explaining why. And it does not follow.
 
You keep saying this without explanation. What do you mean? If two systems have the same inputs and outputs and produce exactly the same outputs when presented with the same inputs, and both are connected to a human body in exactly the same way, then why do you think one can "replace a brain and control a human body" but the other can't?

You need to be very specific about what you are claiming.

If you are claiming computational equivalence between the network of artificial neurons and the computer - i.e. treating the operation of either as a calculation with a fixed result - then we can have equivalence of input and output. But we won't have functional equivalence unless the timing is exactly the same. The artificial neurons guarantee this. If a single artificial neuron didn't guarantee the same response as a real neuron, it wouldn't work.
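
A rough sketch of that distinction, with made-up routines: two functions with an identical input/output mapping (computational equivalence) can still differ wildly in timing, which is exactly where functional equivalence can fail.

    import time

    def add_direct(a, b):
        return a + b

    def add_by_counting(a, b):
        total = a
        for _ in range(b):  # same answer, very different timing
            total += 1
        return total

    # Identical input/output mapping (computational equivalence):
    assert all(add_direct(a, b) == add_by_counting(a, b)
               for a in range(5) for b in range(5))

    # But not identical timing:
    t0 = time.monotonic(); add_direct(3, 10**7); fast = time.monotonic() - t0
    t0 = time.monotonic(); add_by_counting(3, 10**7); slow = time.monotonic() - t0
    print(fast, slow)  # slow is vastly larger, though every output matches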

If the question becomes this: if a black box - and it doesn't matter if it's a computer or anything else - is put into the brain cavity, connected up, and gives, as far as we can tell, exactly the same functionality as a brain, allowing the person to walk and talk - then is the person conscious?

That's an interesting question.
 
Is that a whole lot of text just to say the single processor brain would not be fast enough? If so, that is just a matter of better hardware.

Real-time programming (as people who've done it know) isn't just a matter of faster hardware. It's a matter of a guaranteed response in a given time. It's necessary to respond to interrupts.

A computation will execute much faster on a faster processor, but unless the responses are built in, it won't respond in the necessary fashion.
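
As a hedged sketch (the 10 ms budget is an arbitrary figure, and respond() is invented for illustration): a real-time requirement is a bound that must hold on every single response, which is a property of the design, not just of processor speed.

    import time

    DEADLINE = 0.010  # a 10 ms response budget; the figure is arbitrary

    def respond(stimulus):
        start = time.monotonic()
        result = stimulus * 2  # stand-in for the actual work
        met_deadline = (time.monotonic() - start) <= DEADLINE
        # A real-time system must guarantee met_deadline on *every* call.
        # A faster processor shrinks the elapsed time, but the guarantee
        # comes from the design (bounded loops, prompt interrupt handling),
        # not from raw speed alone.
        return result, met_deadline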
 
If you want to talk about the purpose of a brain,
I thought you were talking about how the brain produces consciousness. In fact, isn't your complaint here that we require a brain to generate meaning?
 
Piggy is claiming that computation cannot generate consciousness. And he's explaining why. And it does not follow.

No, I'm not.

In fact, physical computation is our only explanation for consciousness.

We have plenty of evidence to link the physical computation of the brain with the phenomenon. And the more we explore, the more precise our understanding of that link becomes.

Now, if you want to say that symbolic computation is involved, well... let's hear it. Let's hear an explanation of how that works.

If you don't care to put forward any claim, of course, then there's nothing to discuss, is there?
 
I thought you were talking about how the brain produces consciousness. In fact, isn't your complaint here that we require a brain to generate meaning?

No.

Why would that be a "complaint"?

And if you accept that we require a brain to generate meaning, why in the name of all that is holy have you been insisting that a system without a brain can generate meaning?
 
Real-time programming (as people who've done it know) isn't just a matter of faster hardware. It's a matter of a guaranteed response in a given time. It's necessary to respond to interrupts.

A computation will execute much faster on a faster processor, but unless the responses are built in, it won't respond in the necessary fashion.

We also need to bear in mind the difference between performing the calculations that describe how my truck behaves, and my truck performing the physical calculations that are how it behaves.

You can perform the former at any speed, and they'll come out the same.

On the other hand, the physical calculations which are how my truck behaves cannot be executed at any arbitrary speed and end up with the same result.

Like the whole time issue, it's one of the differences between our real world and the mathematical world that describes our real world.

That's just one of the reasons why you can't expect a system which describes something real to behave quite like the real thing itself.
 
I never said any such thing.

Of course the use of these machines can derive meaning, as you say.

Now, when this happens, which circle are we looking at... the one with just the computer in it, or the one with the computer and a brain in it?

Well, let me ask you a completely off the wall question. Does the law of conservation of energy exist, or does it require a brain?

It's the same use of the brain to figure out the relationship between the marbles in that machine and its inputs, as it is to figure out the law of conservation of energy.

It must happen in the one with the brain in it, because otherwise all we have is some marbles in, and the same number of marbles out.

Same thing would occur if we figured out how a single neuron works by studying it. We would have to use the brain to do so. But that doesn't mean the neurons don't really do that thing, or that we're making it up. And it doesn't bear on whether or not those neurons contribute to consciousness.

All it means is that if we figure something out, we have to use our brains.

These relationships you see in the machine are there,

Good. Then you agree. Now why can't those relationships, which are there, be used to create a conscious entity?

but so are a bunch of others which you're ignoring as unimportant...

Argumentum ex silentio.

without a brain that knows which ones to pay attention to and why, then no, there is no way to "derive meaning" from the machine.

And we can apply the same criticism to your theory of the brain. You're ignoring a lot about how the brain is structured in your theory of consciousness from waves.

You are the one who refuses to stop describing scenarios that require brain states!

When you said in this reply that the relationships were there, did you mean they weren't? Make up your mind.
 
Now, if you want to say that symbolic computation is involved, well... let's hear it. Let's hear an explanation of how that works.
Okay, symbolic computation. Nevertheless...

You don't get it, Piggy. I don't want to say that symbolic computation is involved. I want you to say why it isn't.

So far all you've been doing is rationalizing your own position.

And you've been reacting... you can't even slow down enough to read posts. My gosh, this is a forum, not internet relay chat. Slow down and digest things.
 