Explain consciousness to the layman.

Status
Not open for further replies.
Real-time programming (as people who've done it know) isn't just a matter of faster hardware. It's a matter of a guaranteed response within a given time - of responding to interrupts when they occur.

A computation will execute much faster on a faster processor, but unless the responses are built in, it won't respond in the necessary fashion.
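The distinction can be sketched in a few lines of Python (the deadline value and the workloads are invented for illustration): a computation can be correct yet still miss a hard deadline, and faster hardware shrinks the average case without, by itself, guaranteeing the worst case.

```python
import time

DEADLINE_S = 0.010  # hypothetical 10 ms hard deadline (value invented for illustration)

def handle_event(compute):
    """Run a handler for an 'interrupt' and report whether it met the deadline."""
    start = time.monotonic()
    compute()
    elapsed = time.monotonic() - start
    return elapsed <= DEADLINE_S

# Both computations produce correct results; only one is quick.  Faster
# hardware shrinks both times but, without a scheduling guarantee, cannot
# promise that the worst case stays under the deadline.
fast = lambda: sum(range(1_000))
slow = lambda: sum(range(50_000_000))
```

Real real-time systems get the guarantee from the scheduler and the hardware (bounded interrupt latency, worst-case execution analysis), not from raw speed.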

The timing of I/O in the single-processor brain would of course match that of the multi-simulated-neuron brain. With fast enough hardware, this is always possible, at least to some level of precision at which the differences in function will be negligible.
 
You and I are both taking a "God's eye view" of the systems we're looking at.
Okay, see, this is good. That's exactly what I'm doing, and it is what you're doing too. We agree on this point.
The question is, if we examine the system that includes the machine alone, is it an information processor?

The answer is unequivocally "No" because there's no one to assign any values to its functions.
There's a tiny problem here. I'm a "one", because I'm an integrated planning mechanism, which receives input from its environment, cognizes it, and interacts with it. The Pixydroid, if it can be built, is also a "one". It could interpret this machine just as well as you and I can.

But you're appealing specifically to brains, not marble machines. My point is that the relationships are just there, and I think you agreed with me. I'll also grant that they can be discovered by anyone with a brain. On that I think we can agree.

Where we differ I believe is that you say that it requires a brain to figure it out. To that I ask, why do you think it requires a brain? Why can't a Pixydroid figure it out? You just defined the brain's requirement right into the problem. But it says nothing about what is really required to figure out that relationship. It just says that you think a brain is required, which is your premise.

If you acknowledge that the relationships are there, and can be discovered, then they can be discovered by anything that is capable of discovering them, whatever that thing may be. It does not have to be a brain... unless, of course, it has to be a brain. But I need to hear that argument--why it has to be a brain.

And keep in mind, I'm not arguing that a Pixydroid is possible. I'm arguing that nothing you have said convinces me it's impossible.
 
I haven't counted, but I'd be surprised if I wasn't, by far, the biggest contributor to this thread.

In response to this:

Actually, "Nah, that ain't it, there's gotta be a magic bean involved somewhere." sounds rather populist.

By the way, Westprog, Leumas, Ufology... when you're done patting yourselves on the back, you may want to contribute something worthwhile, here.

At least piggy is trying.

So, you think that postcount is an indicator of the quality of what you post? Or did you just not read what I wrote?
 
Well, let me ask you a completely off the wall question. Does the law of conservation of energy exist, or does it require a brain?

Oh, lovely.

What's next? "Is math invented or discovered?"

How about that tempest in a teapot?

There's no connection here.

Any system which describes an information processor, or any other use of symbols, cannot involve a machine alone, but must also involve a mind to assign symbolic meanings to the physical transformations.

How does the law of conservation of energy and matter relate to that?
 
I don't want to say that symbolic computation is involved. I want you to say why it isn't.

And there we have it, folks.

Finally.

Even though all our observation points to consciousness being produced by the physical activity of the brain, even though research along those lines is producing results which answer earlier questions, even though the laws of physics demand that all real events have physical-energetic causes....

I'm being asked to prove that a proposal regarding a hypothetical role of symbolic computations is false.

Never mind that any claims of consciousness being caused by symbolic computation make no sense -- because we have no one to establish or read any symbols inside our brains -- never mind that there's no evidence that symbolic computation could cause consciousness, and never mind that all sorts of absurd stuff must be possible if you believe that it does....

Never mind that no one has ever produced any actual hypothesis explaining how this might occur... much less why it occurs in some cases of supposedly "symbolic processing" but not in others....

Nor has anyone used this hypothesis to explain why it should be that the brain appears exactly as if its physical activity were producing consciousness, if in fact it's not....

Nor has anyone explained why we need to abandon the matter-and-energy physics that have worked for everything else in favor of an unproven (and incoherent) metaphysics when it comes to this particular phenomenon....

Nevertheless, I'm asked to prove that it's wrong instead of demanding that anyone provide some shred of evidence that it could possibly be right.

You heard it here, folks.

Finally.
 
Slow down and digest things.

The mistakes of the symbolistas used to be disconcerting, but now that I've had time to figure out where the errors are, the mistakes appear much more systematic. So it doesn't take me much time at all to spot errors and describe them.

It's like if people say wrong things about where the furniture is in my house, it doesn't take me long to discover what they got wrong, especially if I know the error comes from assuming that a two-story house is a one-story house, and therefore anything that's on one floor must be in the same room as anything on the floor above or below it.

I have no problem keeping the floors separate, so it's not like I have to strain my brain over where the error is and what's correct.
 
Where we differ I believe is that you say that it requires a brain to figure it out. To that I ask, why do you think it requires a brain? Why can't a Pixydroid figure it out?

It doesn't matter whether the brain we're talking about is biological or artificial.

But if you have a symbol system working, if you have an information processor working, you're going to have to have some sort of object to act as a symbol, and you're going to have some sort of brain to assign and read the values.

Ain't no way around that.

If all you got is the object, you don't (can't) have a symbol system or an information processor. All you have is an object with any number of possible real and symbolic uses.

Only when a brain which "knows" the rules and symbols is placed inside the circle can we, as God's eye view observers, declare that the object is part of a symbol system or information processing system.

That's how we know our brains cannot possibly be generating consciousness by using symbols, or any sort of "information" except the type of physical information that allows our sun to shine.

As soon as we start talking about using any object, including an unconscious brain (which a brain must be before consciousness is created), as an information processor or making it do anything involving symbols, we need another brain to come in and complete the minimum components of a symbolic system, which must involve instances of the symbols and an encoding and decoding mind.

In other words, if our droid is conscious, that consciousness cannot be caused by the use of symbols in its mechanical brain, because the droid's unconscious mechanical brain obeys only the laws of physics, not any symbolic laws (which it has no way of interpreting) and certainly not any symbolic laws imagined by the droid!

You've got a fatal bootstrapping problem here.

Consciousness can only be caused by the physical work of the brain, because it makes no sense to say there are any symbols in the unconscious brain.
 
Argumentum ex silentio.

No, it's not.

You're ignoring every other possible use of the machine except the one you already know to be its intended use.

You then declare that this symbolic use, because it's consistent with the working of the machine (of course), is somehow independently and objectively the "right" use of the machine and all other possible uses are "wrong".

Of course, that makes no sense at all.

As far as the physical matter of the machine is concerned, there are no "right" and "wrong" uses of it.

But if you want to believe that running a simulation of a brain creates a "world of the simulation" in which the brain is somehow "real", then you have to believe such things, don't you?

You have to believe that there's a way for the machine itself to "know" what its behavior is supposed to represent, because that's the only way that this "world of the simulation" can be "real" and somehow exist in the machine so that replacing a brain with this machine will work just like a brain!

On the other hand, if the machine has no way of privileging any one possible symbolic value for its physical states over any other, then of course you have to abandon that idea, because either there is no such "world" to refer to, or there are innumerable ones simultaneously.

Fortunately, we don't have to worry about that because plain ol' vanilla physics tells us that the "world of the simulation" resides in your brain, quite literally, as physical patterns.

And that accounts for everything quite nicely and neatly with no need to appeal to any non-imaginary "world of the simulation" or any way for a dumb machine to somehow know what it's supposed to symbolize (if anything).

So why in the world would anyone bother to pursue such ideas? They're not necessary to explain anything, there's no reason for anyone to think they could be accurate in the first place, and they create problems with no solutions.

In those circumstances, you don't waste your time.

The notion that consciousness is generated by symbols is both unnecessary and absurd.
 
The timing of I/O in the single-processor brain would of course match that of the multi-simulated-neuron brain. With fast enough hardware, this is always possible, at least to some level of precision at which the differences in function will be negligible.

However, if you accept that this is necessary, then you exclude a central tenet of the computational theory - that the speed of the computation is irrelevant, because any execution of the same algorithm will produce the same result. That is what computational equivalence means. If timing matters, then computational equivalence by itself does not make the hardware which runs the computation able to replace a brain.

Of course, the issue of even a single artificial neuron begs the question to some extent. If such a thing could be made and inserted, what functionality does it need to provide, and what is unnecessary? We don't know that, even for a single neuron.
 
In response to this:

So, you think that postcount is an indicator of the quality of what you post? Or did you just not read what I wrote?

Yes, I saw your opinion. How should I gauge the quality of what I write? On whether you think it's good? It's not as if you've given any indication as to what quality Piggy's posts possess that mine don't.

I'll add that to the long list of ad hom responses to this topic which have substituted for some kind of actual argument.
 
No, it's not.

You're ignoring every other possible use of the machine except the one you already know to be its intended use.

You then declare that this symbolic use, because it's consistent with the working of the machine (of course), is somehow independently and objectively the "right" use of the machine and all other possible uses are "wrong".
No. That interpretation of the symbols is objectively right, given the particular usage, and all others either map to it or are wrong. You agree; I know, because you explicitly said that you did. You used different words, but I finally heard the right ones, and they match my view 100%.

The problem is that you're having severe difficulties getting my view correct.
Of course, that makes no sense at all.
Of course not. You insist that I'm claiming what I'm not claiming.

But so long as you do that, it's never going to click. You need to stop pretending that you know what I'm talking about.

Try this rule of thumb. Assume that I'm not a complete and total moron. Assume, just for the sake of argument, that I actually have a valid point to make. Then reread what I wrote, and try to interpret it as expressing the most rational view you can come up with that a person who says such a thing could possibly hold.

This way, you'll avoid making these same mistakes over and over, and my repetitive corrections might suddenly make sense to you.

Incidentally, there's a name for this approach.
 
No. That interpretation of the symbols is objectively right, given the particular usage, and all others either map to it or are wrong. You agree; I know, because you explicitly said that you did. You used different words, but I finally heard the right ones, and they match my view 100%.

The problem is that you're having severe difficulties getting my view correct.

What do you mean "given the particular usage"?

There is no "particular usage" which is "given" by the machine itself.

And I'm having no problem "getting your view correct".

But as the cited post demonstrates very clearly, you are having one helluva problem viewing objects as objects.
 
Try this rule of thumb. Assume that I'm not a complete and total moron. Assume, just for the sake of argument, that I actually have a valid point to make. Then reread what I wrote, and try to interpret it as expressing the most rational view you can come up with that a person who says such a thing could possibly hold.

I have.

And I don't conclude that you're a "total moron". Just that you've probably spent enough time dealing with computer interfaces and with information theory that you're in the habit of treating imaginary things as real, especially when they piggy-back on real processes, and you're having a very hard time breaking your mind of that ingrained habit.

You are also failing to test your ideas against actual brain behavior and brain research, which is the only way to know if they hold water.

And what you're saying makes total sense with regard to the second system we talked about, the one which requires the... you know... that other thing besides the machine you'd rather not talk about.

Total sense.

Of course, it makes no sense at all when applied to the other system, the one with just the machine in it, which is the only one that matters when we discuss something like swapping a brain for a simulator machine.

So your errors are absolutely and completely reasonable and understandable.

They are not the errors of a "total moron".
 
It doesn't matter whether the brain we're talking about is biological or artificial.

But if you have a symbol system working, if you have an information processor working, you're going to have to have some sort of object to act as a symbol, and you're going to have some sort of brain to assign and read the values.

Ain't no way around that.
If you have a symbol system working, it is a brain.
If all you got is the object, you don't (can't) have a symbol system or an information processor. All you have is an object with any number of possible real and symbolic uses.
Right, but electromagnetic waves are not going to pull you out of this trench.

Let me explain how this could possibly work. If we clear the machine, and put one marble in column #, then the machine goes into a particular state. If we clear the machine, and put two marbles in column $, then the machine goes into the same state. There are also many more ways to go into this state from a clear state, but they all follow the same principle. They are all isomorphic to addition in a 6-digit, radix-2 number system, modulo 64--and that's just a way to describe the rules. As Feynman says, it's a cheat. But that cheat perfectly describes the behavior of the states.
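For concreteness, here is one plausible reading of such a machine, sketched in Python. It assumes Digi-Comp-II-style toggle flip-flops; the post doesn't pin down what columns # and $ are, so numeric column indices stand in for them. Each marble toggles the flip-flops it meets, carries propagate to the next column, and overflow marbles land in the bucket--which is exactly addition modulo 64.

```python
BITS = 6  # six toggle flip-flops -> states 0..63 (radix 2, modulo 64)

def clear():
    """All flip-flops down."""
    return [0] * BITS

def drop(state, column):
    """Drop one marble into a column (a marble in column k adds 2**k).
    The marble toggles each flip-flop it hits: a flip-flop that was up (1)
    flips down and deflects the marble onward to the next column, which is
    how the carry propagates.  A marble that runs past the top flip-flop
    lands in the bucket (overflow -> arithmetic modulo 64)."""
    k = column
    while k < BITS and state[k] == 1:
        state[k] = 0
        k += 1
    if k < BITS:
        state[k] = 1
    return state

def value(state):
    """Read the state as a 6-bit binary number."""
    return sum(bit << i for i, bit in enumerate(state))
```

On this reading, one marble dropped into column 1 and two marbles dropped into column 0 leave the machine in the identical state (value 2)--the # vs. $ point above--and 64 drops from a clear state return it to zero.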

Now let's suppose we build the Pixydroid. It will wind up having eyes, ears, movable hands, and so on. If we put a cup in front of our Pixydroid, a certain subset of this Pixydroid will go into a certain state. If we put a chicken in front of the Pixydroid, another subset of this Pixydroid will go into a certain state. Since hypothetically speaking the Pixydroid is conscious, it has a mental landscape where it experiences seeing a cup. It also has a mental landscape where it experiences seeing a chicken. By virtue of the fact that the Pixydroid has a subset expressing a particular set of states when you shove a cup in front of it, and that said Pixydroid reports seeing a cup when you shove a cup in front of it, the Pixydroid's intension of a cup must be represented as a state that results from a cup being put in front of the Pixydroid. (In fact, almost certainly, there must be something that the occasions of showing the Pixydroid a cup have in common, which the occasions of showing it a chicken do not.)

This is simplistic... a more thorough account has the Pixydroid being a learning marble machine, which learns how to recognize cups (rocketdodger already described how Hopfield networks can learn associations, and since marble machines are known to be Turing complete, you can build Hopfield networks with them, so the principle is solid). And the same learning principle can be applied to another layer that learns how to represent the subset of states that is the cup as a subset that is a cup--and, in particular, associate it with learned uses of "what cups are for".
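The Hopfield-network point can be sketched in a few lines (a minimal Hebbian-learning example; the 8-unit "cup" pattern is invented for illustration): the network stores a pattern as a weight matrix and recovers it from a corrupted cue.

```python
import numpy as np

def train(patterns):
    """Hebbian learning: sum of outer products of +/-1 patterns."""
    n = len(patterns[0])
    W = np.zeros((n, n))
    for p in patterns:
        p = np.asarray(p, dtype=float)
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W

def recall(W, cue, steps=10):
    """Synchronous sign updates until the state settles on a stored attractor."""
    s = np.asarray(cue, dtype=float)
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

# Hypothetical 'cup' pattern, invented for illustration.
cup = [1, -1, 1, 1, -1, -1, 1, -1]
W = train([cup])
noisy = list(cup)
noisy[0] = -noisy[0]  # corrupt one unit; recall() repairs it
```

A marble-machine implementation would realize the same update rule far more slowly, but Turing completeness means the principle carries over.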

Particular flows of models could cause the Pixydroid to actually move in certain ways. Just as a rote theory-building machine can develop a theory of our marble machine--at least by representing it--the Pixydroid can, by playing with the things that move it, develop a model of its own movements in reaction to diverting marble flows in a particular way. Those would be represented as subsets of states as well.

Keep iterating, and you get to the point where a part of the Pixydroid can model how the Pixydroid should divert marble flows to use a cup for what it was told a cup was supposed to be for. And you can get it to represent this as a state, and use that with its model for how to move its mouth to talk, and so on, and so on.

Eventually, you get a Pixydroid that can recognize a cup, recognize what it is supposed to be for, recognize that you said you were thirsty, grab a pitcher of lemonade, pour it into a cup, shove it onto a flat surface in front of you, and give you a warm smile.

But, of course, the one thing that the Pixydroid cannot possibly do is assign meaning to all of this.

Unless... that's what meaning is.

Now, I personally don't think this is actually possible. There's no way I can imagine that a marble machine will react in time to interact with you to serve you lemonade. I'm willing to be proven wrong on this point, but the only issue here is an engineering one, not one of principle.

Unless you can point out the flaw.
 
What do you mean "given the particular usage"?
I mean this in the same exact sense that a neuron must be situated in my head, be alive, placed at a particular location, have its axons and synapses aligned in relation to some other neurons, and be of a particular type, in order for it to contribute in a particular way to my consciousness; where said neuron contributes to my conscious experience specifically by responding in particular ways to stimulations which include inputs to other neurons, neurochemical modulations, and stored electropotential differences, and excludes the particulars of a lot of other processes of a neuron.

I mean, specifically, that there's a certain kind of thing that happens that contributes to a process we're interested in. I'm not claiming that the only things of interest in this device must be how it calculates--I'm only claiming that this is a thing of interest, and can be the key factor contributing to what we're building; i.e., a conscious Pixydroid.

And I gather we're interested in consciousness.
 
And I don't conclude that you're a "total moron". Just that you've probably spent enough time dealing with computer interfaces and with information theory that you're in the habit of treating imaginary things as real, especially when they piggy-back on real processes, and you're having a very hard time breaking your mind of that ingrained habit.
These are not imaginary properties. They are abstract properties.

To answer post #2386, the law of conservation of energy is abstract. It is not imaginary. That's the relation.

It's critical that you see the difference between abstract and imaginary.

The time traveler in H. G. Wells' book, The Time Machine, builds a machine that uses geometric properties to travel backwards and forwards in time. This is an imaginary device. It violates the abstract law of conservation of energy--a law I recall you mentioned in this thread as somewhat important.

The law of conservation of energy, which is abstract, is nevertheless real. So are the behaviors of the marble machine. Every other post it seems you agree with me, then call it imaginary.

ETA:
Put it this way. Let's use the mantra: "Reality is that which exists when you stop believing in it."

Now George believes that if you, from a clear state, put three marbles into $, you get one marble in # and none in $. George is wrong, precisely because the machine does not behave that way. The fact that he believes it does has no effect on its behaviors.
 
These are not imaginary properties. They are abstract properties.

To answer post #2386, the law of conservation of energy is abstract. It is not imaginary. That's the relation.

It's critical that you see the difference between abstract and imaginary.

The time traveler in H. G. Wells' book, The Time Machine, builds a machine that uses geometric properties to travel backwards and forwards in time. This is an imaginary device. It violates the abstract law of conservation of energy--a law I recall you mentioned in this thread as somewhat important.

The law of conservation of energy, which is abstract, is nevertheless real. So are the behaviors of the marble machine. Every other post it seems you agree with me, then call it imaginary.

ETA:
Put it this way. Let's use the mantra: "Reality is that which exists when you stop believing in it."

Now George believes that if you, from a clear state, put three marbles into $, you get one marble in # and none in $. George is wrong, precisely because the machine does not behave that way. The fact that he believes it does has no effect on its behaviors.

"Reality is that which exists when you stop believing in it".

To be sure.

And the law of the conservation of energy is real, not imaginary. And the calculations of marbles are real, if abstract, and not imaginary.
But, to my knowledge, neither the law of the conservation of energy nor the calculations of marbles are conscious of what they are doing. Nor do I see any reason to suppose that the events, laws, or processes you are describing would ever become conscious. Is that what you are claiming? The act of calculating and the consciousness that you are in fact calculating are two different processes.

I am nothing if not a layman on the question posed by the OP, but it strikes me you are claiming the fact that certain properties can be described as abstractions and not imaginary bears some relation to the question of how consciousness arises. I don't see it. Could you explain this to a layman?
 
These are not imaginary properties. They are abstract properties.

To answer post #2386, the law of conservation of energy is abstract. It is not imaginary. That's the relation.

It's critical that you see the difference between abstract and imaginary.

The time traveler in H. G. Wells' book, The Time Machine, builds a machine that uses geometric properties to travel backwards and forwards in time. This is an imaginary device. It violates the abstract law of conservation of energy--a law I recall you mentioned in this thread as somewhat important.

The law of conservation of energy, which is abstract, is nevertheless real. So are the behaviors of the marble machine. Every other post it seems you agree with me, then call it imaginary.

ETA:
Put it this way. Let's use the mantra: "Reality is that which exists when you stop believing in it."

Now George believes that if you, from a clear state, put three marbles into $, you get one marble in # and none in $. George is wrong, precisely because the machine does not behave that way. The fact that he believes it does has no effect on its behaviors.

That's not what I mean by "imaginary", which is abundantly clear from my posts so far, btw.

I define reality the same way as you -- it's real whether you believe in it or not, whether you observe it or not. (And there is no meaning of any symbol which has that quality, of course.)

What's real about a simulator machine are its parts and their motions. That's it.

The target of the simulation, however, is "imaginary", not because it's something that doesn't exist (it may or may not) but rather because without taking into account the minds of the people who understand the simulation -- who designed it and know how to read it, even if that reading is entirely intuitive for their kind of brain -- it's impossible to declare that it's even running a simulation.

The behavior of the marble machine is real, we both agree on that.

Which is to say, if you drop marbles through it, they really fall through it.

The symbolic value of all that, however, is "imaginary" in the sense that unless someone's brain changes its physical state, there is no addition of large numbers taking place. (The only addition going on is the addition of small numbers of marbles to the bucket at the bottom.)

If you don't believe me, then please, give me one example in which addition of any number larger than the number of marbles going into the bucket can occur without a change in someone's brain state, and only as a result of the physical operation of the machine.

Keep in mind that if you start giving symbolic values to some behaviors of the machine and not others, if you assert that some relationships among the physical parts are significant and others are not, if you start ignoring parts of the system (e.g. declaring marbles in the bucket "out of play"), then you are no longer describing what the system itself does, but rather how it interacts with a brain that is able to decide how to pick and choose what to pay attention to, what to ignore, and what relationships are significant.

And if you propose to swap this thing out for a part in another machine, and if you expect that machine to keep running, then you'd better expect to get exactly 4 marbles out for every 4 that go in, no more.

Yes, this larger machine itself could be an information processor spitting out symbols for large numbers, but you can't use this part in order to perform the physical work that its symbolic performance represents.

And when you try to swap a simulator machine for a brain, that's what you're doing... removing a part that does certain physical work, and adding one that merely symbolizes that work.

You're transporting Major Tom to the ship -- the space squid lives.

Now, are the laws of physics "imaginary"?

Well, all symbols are imaginary because without a mind to perceive them it's impossible to say that they are symbols at all, much less what they might symbolize.

Yes, the light shaped like a gas pump on your dash is real, but its meaning "If you don't add gas to this machine it's going to quit working" is imaginary. (Which doesn't mean "not real", but rather "real as a physical state of the brain".)

So that's a tricky question with respect to physical laws, because these laws describe the physical workings of the universe, which are certainly real and not symbolic.

In one sense, the laws of physics are the transformational rules of our world, and that does not change whether we're here to view it or not.

Our formulations of these laws -- which are abstractions, as you say -- are imaginary, even though (actually precisely because) they describe that reality.
 