
Explain consciousness to the layman.

What do you mean "name" one? Why would such a thing have a name?

Are you telling me that the physical activity of the simulator machine cannot correspond to any other hypothetical system at all? That all the components must correspond, and can only correspond, to a watershed?

I know you believe some weird things, but surely you don't believe that.

If we look at the possible state transitions of a physical system, those states are not fixed in the way that a switch can only be on or off. They are subdivisions of the physical condition of the system, and can be allocated in any arbitrary way. If that is so, we could take any physical system, consider its possible state transitions, and apply that state partitioning to any other physical system, choosing the possible states so that one mirrors the other. Given the hugely rich changes of state in most reasonably large physical systems, almost every system can be considered to be simulating every other system, at some level of granularity.

This is not an especially interesting or useful observation, unless one has the idea that each of these simulations is a world in itself, regardless of observers.
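That relabeling move can be made concrete with a toy sketch in code (the "rock" microstate values, the function name, and the counter sequence are all invented for illustration, not taken from any real measurement):

```python
# Toy sketch: if a system passes through distinct fine-grained states,
# we can choose a labeling of those states that mirrors any other
# system's state sequence of the same length.
def mirror_labels(fine_states, target_states):
    """Build a map from each fine-grained state to a target label so
    that the relabeled trajectory matches target_states exactly."""
    mapping = {}
    for fine, target in zip(fine_states, target_states):
        # This only works if the trajectory never revisits a
        # fine-grained state while the target differs -- i.e. the
        # system's state changes are "rich" enough.
        if fine in mapping and mapping[fine] != target:
            raise ValueError("fine-grained states not rich enough")
        mapping[fine] = target
    return mapping

# Invented fine-grained measurements of some arbitrary system:
rock = [0.0412, 0.0413, 0.0417, 0.0421]
# A four-step counter we want it to "simulate":
counter = ["1", "2", "3", "4"]

labels = mirror_labels(rock, counter)
relabeled = [labels[s] for s in rock]   # matches counter, step for step
```

Nothing about the arbitrary system changed; only the partitioning of its states did.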
 
Silly piggy. You haven't figured out what they are talking about yet!

You're so close... but you're not quite there. You just need to connect a few more dots.

Maybe, just maybe, that's because the assertions are as vague as they are certain.
 
But the simulated entities are real--either they aren't being simulated, or there are real patterns made of real particles that make them up.

Oh, please....

The entities in the simulating machine are real, no doubt. I mean, the actual patterns of behavior of the machine itself.

But those things are not, for instance, tornadoes.

You can actually examine what it is you have caused to happen and you will see that you have not caused any tornado. That's very plain and simple.

What you've done is to make the computer change in certain ways that mimic certain changes in a tornado.

So yes, you've created entities that exist, patterns of behavior in a machine.

But you know darn good and well, or should by now, that when I say the tornado doesn't exist, I mean that the entities which do exist are not tornadoes.

Now, if you're going to disagree with me, you're going to have to explain why these real entities are tornadoes but cause no damage to the simulator.
 
Let's step back and look at this whole thing, consciousness, brains, computers, simulation, QM... (how) does it all fit together?

Computation

We'll start with computation just because it's convenient to start there.

A computation is a change from one state to another according to a set of rules, which are applied to each state in order to determine how it changes into the next state. The output of one change is the input to the next.

The matter and energy making up our universe, combined with the "rules" we abstract as the laws of physics, make our world a very large physical computer. States of matter and energy change in many ways that are so consistent that we can write rules which describe them with predictive accuracy (and in other ways that we can’t).

Information processing

Human beings can apply this process to symbols, to make symbolic computers, or information processors. We could call our starting symbol "the number" and make it "1" and follow this rule: "If the number is not 100, replace it with the symbol indicating the next integer higher on the list of integers".

And we could work it out on paper, and after 99 calculations (that is, points at which we can change the symbol, or not, according to the rule) there'd be no next step, so the process would stop. (If you began with a number higher than 100, it would never stop, so the program has two possible outcomes – end at 100 or go on forever.)
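The rule itself can be written out as a short program (a sketch; the function name and the step counter are my own additions):

```python
def count_to_100(number):
    """Apply the rule: while the number is not 100, replace it with
    the next higher integer. Returns how many replacements were made."""
    steps = 0
    while number != 100:   # starting above 100, this never terminates
        number += 1
        steps += 1
    return steps

count_to_100(1)   # 99 replacements, then the process stops
```

Starting from 1 it halts after 99 steps; starting from, say, 101 it would climb forever, which is the program's second possible outcome.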

In that system, the human is the "computer" – the one recognizing the symbols and applying the rules to produce new symbols.

Now there are two types of outputs from running this information processor.

One is the real output – a piece of paper with writing on it, a shorter pencil, and changes in the state of the human's brain. The other is the informational output, or the "meaning" of the symbol – the abstract notion of a group of 100 things – which actually is (or is part of) the third real output: the changes in the brain of the human calculator.

Machines as Computers

But we can use machines for part of this process if we make them out of stuff that changes in predictable ways very quickly. As long as we set up the physical machine so that its changes proceed in the same way as we want our symbols to change, then we can let a process go indefinitely, check in whenever we want, and find that the appropriate symbols are being displayed at any given time.

In other words, we assign symbolic value to an instance of physical computation (the workings of the machine) which is set up to mimic the computations (of whatever sort) of another kind of system, whether real or imaginary, so that we can later interpret the symbolic values of later physical states of the machine to make inferences about the state of the other system.
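As a hedged illustration of that assignment of symbolic value (the falling-object system, the units, and the numbers here are my own, not from the text): we set up a machine whose state changes proceed like free fall, then read its later states as heights and speeds.

```python
# Toy sketch: a machine's state changes set up to mimic another
# system (a falling object), read out symbolically afterwards.
def step(state, dt=0.1, g=9.8):
    """Advance the machine state by one tick of the free-fall rule."""
    height, velocity = state
    return (height - velocity * dt, velocity + g * dt)

state = (100.0, 0.0)       # symbolic assignment: metres and m/s
for _ in range(10):        # let the physical computation run
    state = step(state)

# Interpreting the final machine state lets us make inferences about
# the other system one second later -- but nothing in the machine fell.
```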

In short, we can use a machine as an informational (rather than physical) computer precisely because objects in our world behave like physical computers.

The reason most rocks don't make good information processors is that their highly predictable changes occur so slowly; and if you force very rapid changes on them, those changes tend not to be predictable with any degree of precision.

In any case, whatever we make the machine out of, the introduction of the machine into the process can change the physical outcomes, but the informational outcomes remain what they were before – states of the brain of the human interpreting the symbols.

Brain as Machine

We can look at the body and all its organs as a kind of naturally occurring organic machine. But is it an information processor, or rather does it contain one?

If you expose a human brain to the right kind of symbol – for instance, this string of sounds: “What’s two plus three?” – it can produce the correct symbol that would result from a properly formed calculation from your information processing machine: “Five.”

So it sure seems to function like an information processing machine.

You might say, “Well, yes, but there’s a difference – the person understands the meaning of the symbols, whereas the machine does not.”

And that is correct, but it’s not as significant as we might think, because although the human consciously understands the meaning of the symbols being used, he didn’t consciously come to the conclusion that the right answer was “Five”. Instead, it “occurred to him” or “popped into his head”.

In fact, he probably had begun to say the word “Five” before he was consciously aware of thinking “Five”.

Consciousness

One thing that the brain does, which our information processing machines don’t, is something we don’t even have a good verb for, unlike other bodily functions with fine verbs like urinate and sweat and flex and secrete and replicate and bite and sneeze.

We have to use unfortunately thingy language like nouns (consciousness) or adjectives (aware) for it.

But it’s certainly something our brain does. It gives us this sense of self and experience when we’re awake or dreaming, but doesn’t do that at other times.

We can manipulate it by changing the gross behavior of the brain, and we can watch the differences in how it stops when we go to sleep versus going under anesthesia, and we can watch it begin again when we wake up.

We know that it involves the coordination of activity in spatially distant areas of the brain. We know that originally disparate types of impulses (e.g. from the eye and from the ear) are merged into related but different types of impulses before being used in whatever processes cause conscious awareness, and because of this our conscious awareness is always a fraction of a second behind what’s actually happening.

For those who didn’t already know, everything you experience is already over by the time you experience it.

The mechanism for this is not yet known. But of course, it’s not the only thing the brain does. In fact, the brain perceives, imagines, decides, and learns all the time without bothering to communicate what it’s doing to the processes that are responsible for consciousness.

When you ask your buddy “What’s two plus three?” electro-chemical impulses will start cascading through his brain in patterns that are continually rebuilt as a result of having these impulses run through them, like water on a beach changing the shapes of the channels it runs through.

The part of the brain that retrieves the answer is the bit that appears to operate most like what most folks today think of as a computer, or information processor. It comes up with the answer, but in a way very different from the way your computer would respond to “2 + 3 = Enter”.

Again, we don’t yet know the mechanism, but we know that strength of association is a big player. Stronger associations with patterns of activity can be made by repetition, for example, or by certain conditions of exposure (part of PTSD, for example, is the brain’s scripted rehearsal of the trauma in order to strengthen the association of the conditions of the event with a strong aversion impulse, so that the animal avoids those conditions later, hence nightmares, panic attacks, etc.).

“Five” would likely be so strongly associated with “two plus three” in your friend’s brain that it would be the response without any attempt by any part of the brain to test whether it was “correct”, unless something else tipped it off, like a strange look on the face of the person asking the question.
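A toy model of retrieval by association strength (my own sketch, not a claim about actual neural mechanisms; the class name and cue strings are invented):

```python
from collections import defaultdict

class AssociativeMemory:
    """Toy model: responses retrieved purely by association strength,
    strengthened by repetition, with no test of correctness."""
    def __init__(self):
        self.strength = defaultdict(lambda: defaultdict(int))

    def rehearse(self, cue, response):
        # Each repetition strengthens the cue-to-response association.
        self.strength[cue][response] += 1

    def recall(self, cue):
        # Return whatever response is most strongly associated --
        # no part of this checks whether the answer is "correct".
        responses = self.strength[cue]
        return max(responses, key=responses.get) if responses else None

mem = AssociativeMemory()
for _ in range(50):
    mem.rehearse("two plus three", "five")
mem.rehearse("two plus three", "six")   # one stray association

mem.recall("two plus three")   # "five", by strength alone
```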

And your friend would only be aware that his brain had bothered to make his mouth say “Five” sometime after it had already started doing so. Only at that point would your friend consciously understand what his brain had already done, as he hears his own mouth say the answer.

Can Consciousness Be Calculated?

Now here’s the important question, when it comes to how to interpret the Computational Model of Mind with regard to consciousness, given what little we know....

Is consciousness the result of a calculation, and if so, then what type of result is it, real or informational?

First, the only rules available to the brain (or any other organ) for its behavior are the laws of physics. We know consciousness is an outcome of the physical computation of the brain, which is to say, changes in state in brain tissue and associated phenomena such as brain waves according to physical laws, so yes, it’s the result of a calculation.

So, given our previous description of the brain as a kind of information processor, is the phenomenon of conscious awareness an informational output of these physical calculations, or a real one?

Well, remember, informational outputs are a subset of the real outputs. Specifically, they are part of the change of state of the brain which is interpreting the symbols that are known by that person to be piggy-backing on the actual physical process.

Which means that if consciousness were the informational output of a physical calculation, it would require an interpreter in order to understand it as such, which we have not got.

So consciousness is a real output – a real-world phenomenon in spacetime – rather than an informational one, else we’re back to needing little men inside our heads, and little men inside their heads, each to read the other’s symbols.

Which means that if we want to make a conscious machine, we’re going to have to build it to be conscious.

We are not going to have the luxury of merely programming it to be conscious, although I’m sure there’ll be plenty of programming involved whenever we finally figure out how to make it work.

And it looks like we didn’t need to discuss quantum mechanics after all.
 
The entities in the simulating machine are real, no doubt. ... So yes, you've created entities that exist, patterns of behavior in a machine.
Good. Now if only you can get the point.
But you know darn good and well, or should by now, that when I say
Sorry, that's not the point. But if it makes you feel better, yes, I know that.
Now, if you're going to disagree with me, you're going to have to explain why these real entities are tornadoes but cause no damage to the simulator.
Wrong. I disagree with you, but your criterion has nothing to do with what I disagree with.

I disagree that you understand what your opposition is saying. What makes you think you do? The fact that you get to say they're wrong because of what you mean by what they say?

That's not the way it works, piggy.
 
Oh, please....

The entities in the simulating machine are real, no doubt. I mean, the actual patterns of behavior of the machine itself.

But those things are not, for instance, tornadoes.

You can actually examine what it is you have caused to happen and you will see that you have not caused any tornado. That's very plain and simple.

What you've done is to make the computer change in certain ways that mimic certain changes in a tornado.

So yes, you've created entities that exist, patterns of behavior in a machine.

But you know darn good and well, or should by now, that when I say the tornado doesn't exist, I mean that the entities which do exist are not tornadoes.

Now, if you're going to disagree with me, you're going to have to explain why these real entities are tornadoes but cause no damage to the simulator.

When you think of a river, does water flow out your ear?

Exactly my point. There's no river in my head, only the representation of one.

You are on both sides of the river.
 
In order to do that, you'd have to first build a machine that is conscious.

That's actually what we're talking about.

So now, computers can't be conscious because they're not real, but they'd feel real if they were made to be conscious except they're not because they're computers.

This is circular reasoning.
 
...Information processing

Human beings can apply this process to symbols, to make symbolic computers, or information processors. We could call our starting symbol "the number" and make it "1" and follow this rule: "If the number is not 100, replace it with the symbol indicating the next integer higher on the list of integers".
...so, you're viewing information processing as an algorithm, or at least as partially composed of one. And as best I can tell, you include an "interpreter" in the definition of an information processor?
and another is the informational output, or the "meaning" of the symbol, which is to say the abstract notion of a group of 100 things, which actually is (or is part of) the third real output, changes in the brain of the human calculator.
What meaning? In your example I can't quite figure out what it is that your information processor is doing. We're not feeding it any inputs, and we're setting it up to give us a fixed output.

But, yes, in the typical situation – where you program something for the sole purpose of using the algorithm and regularity of the computer to give you an output with a specific meaning to you – the meaning is in your mind.
...
You might say, "Well, yes, but there’s a difference--the person understands the meaning of the symbols, whereas the machine does not."

And that is correct, but it’s not as significant as we might think, because although the human consciously understands the meaning of the symbols being used, he didn’t consciously come to the conclusion that the right answer was "Five". Instead, it "occurred to him" or "popped into his head".

In fact, he probably had begun to say the word "Five" before he was consciously aware of thinking "Five".
Here you're making a few assumptions I don't hold. First, you're assuming that the person is the same as the thing the person is consciously aware of. Second, you're assuming that meaning is produced by conscious awareness.
One thing that the brain does, which our information processing machines don’t, is something we don’t even have a good verb for, unlike other bodily functions with fine verbs like urinate and sweat and flex and secrete and replicate and bite and sneeze.

We have to use unfortunately thingy language like nouns (consciousness) or adjectives (aware) for it.
The main problem here, I think, isn't that we have no good word for consciousness. In fact, I think this notion is contradictory. The problem, instead, is that the word that we do have is too vague--it does not precisely define what we have. This becomes apparent when you try to figure out whether certain kinds of processes within the mind are conscious or not; there is a wide variety of processes where such an assignment seems arbitrary and conventional: "depends on what you mean by conscious".

We have a lot of capabilities that are special--and there are a lot of them that we're aware of. I personally take the approach of selecting each one separately; some I have some idea on, some I have no clue about. In other words:
But it’s certainly something our brain does. It gives us this sense of self and experience when we’re awake or dreaming, but doesn’t do that at other times.
...I don't think there's a single coherent "it" here.
Can Consciousness Be Calculated?
...
Which means that if consciousness were the informational output of a physical calculation, it would require an interpreter in order to understand it as such, which we have not got.
We're a planning machine. Don't stop at the inputs and outputs – we move and interact with the environment as well, and can perceive its inputs and outputs. It's as if we are not only information processors processing information from the environment; we also treat the environment as an information processor processing information from us.

And part of what we look at in the environment, and part of what we study the effects of, is the result of our interactions with the environment. This is where meaning comes from. So, yes, we do have it (in fact, I find it absurd to suppose we don't have it). And no, it doesn't require the conscious mind. Not only would I not be surprised if meaning came from non-conscious processes – I would expect it.

Surely we're aware of meanings. But that doesn't mean that we "aware" them up.
 
What has always been desired is a definition that can include brains and computers and exclude all the other systems that change state. The inability to come up with any such definition is the source of the difficulty.

Actually, no, and notice the "if", in my post.

The definition can include things besides computers and brains, but it mustn't include everything in the universe.

And if there were a Santa Claus in the simulation, the conscious entity would get simulated presents in his simulated stocking.

Your attempt at ridicule overlooks the following fact: yes.
 
Let me walk you though some stuff and hopefully you can pick up what westprog and piggy seem to miss.

[...]

Let me reiterate: It is the way computations lead to other computations that eventually make a difference in the behavior of a system.

I know all that.

But the phrase "the brain is a computer" is useless since rocks are now computers too. "The brain behaves like an electronic computer" would be closer to what we want to say, only it doesn't really behave like that.

So, again, if the word "compute" means "changes state", why don't we use that instead? And second, how do you describe the behaviour of the brain, then?

I was under the impression that to call something "computation" it had to meet some more criteria than "changes state". Now, don't misunderstand me: I'm arguing the use of the word itself, not whether a computer or simulation can be conscious.
 
What are you talking about?

"Possible" can mean just about anything here... could you explain what you're getting at?

Let's look at a computer, as an example. Typically, we consider the state of a computer as being a matter of whether bits are on or off. This generally means that a particular voltage range applies in a particular area.

However, objectively speaking, this is no more valid a way to consider the state of a computer than any other physical property. Instead of considering each memory cell to have a binary value based on voltage, they could each have a million values based on temperature.

If we are to accept that the state changes of a computer have objective significance, as opposed to the meaning they carry to a human observer, then we have to accept that no set of state changes is any more valid than any other. Given this, we are faced with an enormous number of possible state changes. I'm not qualified to calculate how many, but clearly the number increases hugely with the number of possible elements present.

For any single set of state changes among this uncalculably large number, there is an equally large set of possible interpretations of those state changes. We could consider the temperature fluctuations as representing the varying exchange rates among European currencies.
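The same point in a toy sketch (the voltage readings and both interpretations are invented for illustration): one physical record, two equally arbitrary state partitions.

```python
# One 'physical' record of a machine: a sequence of voltages.
voltages = [0.1, 4.9, 5.0, 0.2, 4.8, 0.0, 5.1, 4.9]

# Partition 1: binary states, threshold at 2.5 V.
bits = [1 if v > 2.5 else 0 for v in voltages]

# Partition 2: the very same numbers read as successive exchange-rate
# quotes, partitioned into "up"/"down" moves instead of bits.
moves = ["up" if later > earlier else "down"
         for earlier, later in zip(voltages, voltages[1:])]

# Neither reading is physically privileged; the machine only has
# voltages, and the partitions are ours.
```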

Do we consider that all of these possible interpretations - of all possible states - represent a world? Or do we say that the only simulation running on the computer is the tornado - because that's the one we intended to be going on?
 
Good. Now if only you can get the point.

Sorry, that's not the point. But if it makes you feel better, yes, I know that.

Wrong. I disagree with you, but your criterion has nothing to do with what I disagree with.

I disagree that you understand what your opposition is saying. What makes you think you do? The fact that you get to say they're wrong because of what you mean by what they say?

That's not the way it works, piggy.

Nor is it "the way it works" to berate someone for failing to get a point without even trying to say what that point might be.

What is "the point"?

What is it that "my opposition" is saying that I'm getting wrong, and what is the correct expression of it so that I may get it right?
 