
Explain consciousness to the layman.

What are you talking about?

"Possible" can mean just about anything here... could you explain what you're getting at?

He is trying to suggest that a rock is a perfectly suitable simulator for anything we can imagine, because we can map each state transition in the rock to a corresponding transition in the thing we are simulating.

However, he doesn't realize that by doing so he is now implicitly including the person doing the mappings as part of the simulation, effectively taking over for the computer when it comes to calculating the state transitions.
 
I don't see anything here that either I or Westprog would disagree with.

Well, you are wrong, because westprog disagrees.

Westprog has consistently stated that there is no truly objective metric with which we can distinguish the behavior of certain sequences of computation from the behavior of others.

It took me two years to figure out what that metric could be. It is the degree to which a system's behavior increases the statistical probability that the system will remain in a configuration that allows similar behavior -- behavior which again increases the statistical probability that the system will remain in a similar configuration and be able to repeat similar behavior... ad infinitum.

That is the definitive (and truly objective) difference between life and everything else. All else being equal, life ranks an absolute first by that metric.

But westprog cannot accept this, because a valid truly objective mathematical difference between life and non-life greatly devalues the need for a divine/spiritual/supernatural difference between life and non-life.

He will insist otherwise, but there is no other explanation for his views. So I wouldn't throw your lot in with westprog if I were you. I would instead focus on what you yourself think.
 
The problem seems to be that you don't take PixyMisa at his word.

To create an honest-to-God for real particle, all he needs is a computer and logic. At the end of the process, he'll have the computer and a new "real" particle that didn't exist before. And he'll do it with only enough energy to change the state of the computer.
I didn't say that, nor can it be rationally inferred from anything I have said. In fact, I have already noted specifically that this is not what I am saying.
 
So an information processor requires interpretation of results. But the non-conscious mind acts like a computer. And the non-conscious mind is capable of generating meaning, and interpreting results.

Let's take these one at a time:

So an information processor requires interpretation of results.

Yeah, when the term is used in the context I've been using it in, because otherwise, it's just some physical calculation, some object behaving like itself.

(If you use Wolfram's definition, however, lots of physical objects in the world are information processors, but this definition isn't useful to us here.)

I mean, if I set up an information processor and got it cranking, and I shot it off into space, and a billion years later it's found by some alien species who have their own way of sensing the world's matter and energy and consciously modeling it, they would have absolutely no means of determining what the machine was intended to do.

No matter how much they knew about its physical computations, it would be impossible to discern the (real or imaginary) system which I intended it to represent via informational computations which piggy back on some part of the system of physical computations.

Even if the processor itself were conscious, and fully aware of the state of its body, it would have no idea either, unless I told it.

So if the term "information processor" is not to be trivial in our discussion, there has to be some decoding agent involved.

We can also use this type of language metaphorically to describe the workings of the brain at higher levels, such as when we talk about an "image" being "recognized" as a face and "routed" through the amygdala.

But of course, the cascade of impulses and waves is not determined by any agent who recognizes anything "as" something else and therefore decides to route it somewhere. Therefore, it is not literally informational, in the sense we're using the term.

The cascade is purely the result of the shape and material of the brain (which is to say just two levels of shape, actually, since the difference in types of stuff can be boiled down to the dynamic shapes of the components). It is entirely a physical computation.

To attribute the characteristics of an information processing system literally to the brain -- at least, by any definition that cannot also happily apply them to a calf muscle -- is to conflate two different types of computation.

But the non-conscious mind acts like a computer.

It's easier to see the parallels, definitely, even though they don't act a heckuva lot alike on the surface.

Once you get into consciousing, then you've got the unanswerable question about why the neural-wave state corresponds to one particular experience and not another or none, and so forth.

Fortunately, we don't have to worry about that with non-conscious behavior, or the machines we've got currently.

We know representations are being made, which is also the case for computers. These representations are in the form of neural-wave activity and are linked to responses, such as patterns of muscle contraction in response to sudden looming.

These patterns of activity interact in ways that are currently frustratingly complex and difficult to view, even indirectly.

And the non-conscious mind is capable of generating meaning, and interpreting results.

I think it's useful to talk about it that way.

It's difficult to argue that ducking away from something that suddenly looms up at you is not, in some sense, to have properly understood the meaning of the event... as opposed to, say, simply noticing things like shape and color and trajectory.

On the other hand, we might also duck away from a looming shadow of something small and harmless crossing in front of a distant light.

So you could say that the brain "understood" that the looming "meant" that there might be danger, but what is the physical process underlying that description?

Well, it's cascades of electro-chemical impulses which, by virtue of the shape and type of materials involved, result in patterns of muscle contractions. There's actually no symbolism or interpretation involved, hence no "meaning", just brute electrochemical processes.

But it changes the memory of the brain (somehow), so now the brain "knows" something new, in that it behaves differently because the impulse routes aren't the same. But this could also be said of making a fold in a piece of paper, which changes the paper's memory so that it behaves differently.
 
I know all that.

But the phrase "the brain is a computer" is useless since rocks are now computers too. "The brain behaves like an electronic computer" would be closer to what we want to say, only it doesn't really behave like that.

So, again, if the word "compute" means "changes state", why don't we use that instead? And second, how do you describe the behaviour of the brain, then?

I was under the impression that to call something "computation" it had to meet some more criteria than "changes state". Now, don't misunderstand me: I'm arguing the use of the word itself, not whether a computer or simulation can be conscious.

But I gave you a good definition of computer -- a collection of particles that can use sequences of computations within itself to keep itself in a configuration where those sequences can be repeated in the future. Essentially, keeping itself the way it is.

Now, a computer doesn't need to do this; it just needs to be able to do this. I think you would agree that lifeforms can do this, so they are certainly computers, and I think you would agree that any electronic system that we consider to "compute" could be hooked up to do this. For example, a thermostat *could* be hooked up to turn off the heat so it doesn't melt itself.

Contrast that with things like rocks and oceans. I think anyone would be hard pressed to come up with sequences of computations in rocks or oceans that could potentially increase the survivability of rocks or oceans. Can a rock be hooked up to turn off the heat so it doesn't melt itself? I don't think so.
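To make that concrete, here's a minimal toy sketch in Python (the class, numbers, and rules are invented for illustration, not any real control system) of a system whose computations keep it in a configuration where those computations can be repeated:

```python
# A toy "computer" in the sense defined above: its sequence of
# computations keeps it in a configuration that lets the sequence repeat.
class Thermostat:
    MELTDOWN_TEMP = 150.0  # above this, the device destroys itself

    def __init__(self, temp=20.0):
        self.temp = temp
        self.alive = True

    def step(self):
        """One computation cycle: sense, decide, act."""
        if not self.alive:
            return
        # The "sequence of computations within itself": compare the
        # sensed temperature to a setpoint and switch the heater.
        heater_on = self.temp < 100.0
        self.temp += 5.0 if heater_on else -5.0
        # If the computation failed to act, the configuration that
        # permits future computation would eventually be lost.
        if self.temp >= self.MELTDOWN_TEMP:
            self.alive = False

t = Thermostat()
for _ in range(100):
    t.step()
print(t.alive)  # True: its own computations kept it in a repeatable state
```

A rock, by contrast, has no comparable sequence of internal state changes that could be hooked up this way.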
 
I am telling you that the physical activity of the simulator machine, when it is simulating a watershed, is isomorphic with the physical activity of a watershed (or anything else that is isomorphic with the physical activity of a watershed).

If the physical activity of an automobile is not isomorphic with that of a watershed to begin with, then the activity of the simulator machine cannot be, by definition, isomorphic with the automobile either.

Just because an intelligent entity can find a mapping between the initial state of the simulation and the initial state of an automobile doesn't imply that the activities of the simulation and the automobile are isomorphic. That is where you are getting confused, I think. Yes, we can find a million things that might map to the initial state of the simulation -- so what? Once the simulation is running, those mappings instantly become invalid.

Except for one -- the mapping between the watershed and the simulation. That mapping remains valid the entire time, which is why the activity of the two systems is isomorphic.
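Here's a toy illustration in Python (all three update rules are made up) of why only one mapping survives once the simulation starts running:

```python
# Three systems that all "match" at their initial states. Once they run,
# only the genuinely isomorphic mapping (sim <-> watershed) stays valid.
def sim_step(s):          # the simulator's update rule
    return 2 * s

def watershed_step(w):    # the system the sim was built to mirror
    return 2 * w

def automobile_step(a):   # some unrelated system
    return a + 3

sim = shed = car = 1      # all three coincide at the initial state

for _ in range(5):
    sim, shed, car = sim_step(sim), watershed_step(shed), automobile_step(car)
    print(sim == shed, sim == car)
# Prints "True False" on every step: the watershed mapping remains valid
# for the whole run; the automobile mapping breaks on the first step.
```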

Like I said, suppose you ran a perfect and complete simulation of a watershed. If I were to examine the device you're using for that purpose, how would I know by the machine's computations (assuming I don't view the readouts) that it was a complete simulation of the intended system, or an incomplete one, and incomplete to what degree, and for what aspects of the system?

Obviously, there are any number of hypothetical systems the same simulation could correspond to when it's running.

Would all of them be wet? Obviously not, because the real entities (the changing machine components) themselves are not wet, and they must fit the bill, or you wouldn't be running a sim.

So not all of those hypothetical systems will be watersheds, even if there is isomorphism between all of the activity of the watershed and some of the activity of the simulating system (if there is perfect isomorphism for all aspects of the system, you have a replica).
 
I didn't say that, nor can it be rationally inferred from anything I have said. In fact, I have already noted specifically that this is not what I am saying.

I went back and looked. IanS used the word "simulate" to mean "copy", as in "create a new one"... were you answering for the word "simulate" to mean "represent"?
 
Yeah, when the term is used in the context I've been using it in, because otherwise, it's just some physical calculation, some object behaving like itself.
And unless you're a dualist, that's all that exists. Interpretation of the results of information processing is just more information processing. It can't be anything else, because there isn't anything else.
 
I went back and looked. IanS used the word "simulate" to mean "copy", as in "create a new one"... were you answering for the word "simulate" to mean "represent"?
He said "simulate or copy". Look, I don't know what point IanS is trying to make, but many of his statements are flatly untrue, and I was responding to that.
 
I mean, if I set up an information processor and got it cranking, and I shot it off into space, and a billion years later it's found by some alien species who have their own way of sensing the world's matter and energy and consciously modeling it, they would have absolutely no means of determining what the machine was intended to do.

No matter how much they knew about its physical computations, it would be impossible to discern the (real or imaginary) system which I intended it to represent via informational computations which piggy back on some part of the system of physical computations.

Wrong.

For example, if their planet had tornadoes, they could eventually learn the isomorphism the program relied on and instantly realize the information processor was running a simulation of a tornado.

I think this is where you are stuck -- you think the initial mapping of reality->simulation somehow dictates the behavior of the simulation. It doesn't.

Just look at the math, Piggy -- if transformation Th (h for human) maps reality to the simulation, and the tornado is TOR, then Th(TOR) = simulation.

To get back our conscious interpretation of the simulation, we apply the inverse: InverseTh(Th(TOR)) = TOR, or InverseTh(simulation) = TOR.

For the aliens, all they need to do is figure out the transformation that takes their conceptual space to the simulation, call it Ta (a for alien), so that Ta(TOR) = simulation. Once they find that, they can do the same thing we do: InverseTa(simulation) = TOR.

Now replace it all: InverseTa(Th(TOR)) = TOR.

Note that the interesting thing here is that Ta will be the composite of our Th and the mapping from the alien conceptual space to ours; call it Ta-to-h.

Meaning, if the aliens find your information processor running a tornado sim *as well as* the keyboard and monitor, they can deduce our Th and thus the Ta-to-h.

Linguists know this implicitly, since it is how we figure out other languages given a common starting point.
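To see the composition concretely, here's a toy version in Python, with dictionaries standing in for the transformations (the encodings and the alien vocabulary are invented for illustration):

```python
# Our transformation Th: (human) tornado concepts -> simulation states.
Th = {"calm": 0, "funnel": 1, "touchdown": 2}
inverse_Th = {v: k for k, v in Th.items()}

# Ta-to-h: the aliens' concepts mapped to ours.
Ta_to_h = {"zz-still": "calm", "zz-spiral": "funnel", "zz-strike": "touchdown"}

# Ta is the composite of Ta-to-h and Th, as claimed above.
Ta = {alien: Th[human] for alien, human in Ta_to_h.items()}
inverse_Ta = {v: k for k, v in Ta.items()}

sim_state = Th["funnel"]        # Th(TOR) = simulation
print(inverse_Th[sim_state])    # "funnel": InverseTh(simulation) = TOR
print(inverse_Ta[sim_state])    # "zz-spiral": the same state, decoded
                                # into the alien conceptual space

# And given both Th and Ta, the aliens can recover Ta-to-h itself --
# the linguists' trick mentioned above.
recovered = {alien: inverse_Th[s] for alien, s in Ta.items()}
print(recovered == Ta_to_h)     # True
```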
 
Information processing is qualitatively different - it is functionally indifferent to abstraction.

The question of precisely what is meant by "information processing" and whether it is applicable equally to brains and computers, and not to rocks and planets, is precisely the nub of the issue.
 
Perception, but that occurs while I'm asleep as well. You could also be talking about the integration of perception into a world, but most of that happens outside of my awareness. Are you counting that process as consciousness?

You answer your own question. Perception goes on while your brain is consciousing and when it's not.

As to the "bright line" between processes that are or aren't involved in conscious awareness in some way, I don't believe it exists, and it's a waste of time to attempt to define one.

You seem to be fishing for the type of definition that we'll have when we know more. For the time being, the one I've offered should do just fine.

I take it you mean to contrast consciousness with this state.

I thought non-conscious processes make the dream world--I'm just aware of it while dreaming.

Hoo boy... if you're going to start tossing around statements about what "I" can be "aware of", you'll only end up in a quagmire.

When your brain is consciousing (sorry, I hate the word too, but right here I gotta) it's in a physical state that causes a sense of experience (and usually self and experience simultaneously) to occur.

This is only one of the functions going on at this time.

Several disparate parts of the brain are involved in the process simultaneously (for all intents and purposes). These same parts of the brain, and others, may also be involved in other processes that don't affect the conscious experience.

During waking, a great many of the impulses that determine conscious experience can be traced back to impulses originating from contact with the rest of the physical world, such as light on the eyes, chemicals in the nose and mouth, objects against the skin, and waves impacting the eardrum.

During deep sleep, the mechanism stops operating, so no experience is happening, even though the brain is still perceiving, imagining, remembering, learning, and even paying attention to what's going on around and in the body.

During dreaming, far fewer of the impulses that determine conscious experience can be traced back to impulses originating from contact with the rest of the world, and not all impulse channels are operating, which makes this type of experience typically very unlike waking experience.
 
The question of precisely what is meant by "information processing" and whether it is applicable equally to brains and computers, and not to rocks and planets, is precisely the nub of the issue.

A planet could be an information processor... as long as you find some other system whose changes are mimicked by its changes in some way.

That's all it takes.

But that's why we tend not to use planets that way... they have limited application.

Computing machines, though, can be made to wiggle in all kinds of predictable ways; that is, we can control their physical computations, and those computations are very fast. This makes them amazingly useful for setting up patterns of wiggling (physical computation) in them and assigning symbolic value (informational computation) to those patterns.

As long as the pattern of changes in the physical calculations matches a pattern of changes in some other system, real or imaginary, then we can make those changes happen real fast in the computing machine and see what state it ends up in.

And as long as we know what the state of the machine is supposed to correspond to, we can then know what the state of the other system is supposed to be. The physical state of our brain after we "read" the simulation is the informational output.

If we could manipulate planets the way we manipulate computer components, they'd make fine information processors.
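A bare-bones Python sketch of the two kinds of computation described here (the rainfall reading is an invented interpretation, not part of any real system):

```python
# Physical computation: the machine just changes state by a fixed rule.
state = 0
for _ in range(7):
    state = (state + 1) % 10   # the machine "wiggling" predictably

# Informational computation: a mapping WE supply, assigning symbolic
# value to those physical states. Nothing inside the machine contains it.
interpretation = {s: f"{s * 10} mm of rainfall" for s in range(10)}
print(interpretation[state])   # "70 mm of rainfall" -- meaningful only
                               # because we hold the decoding table
```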
 
The question of precisely what is meant by "information processing" and whether it is applicable equally to brains and computers, and not to rocks and planets, is precisely the nub of the issue.
I understand - which is why I spelled out what I mean by it, hoping for some constructive comment:
If, as the evidence suggests, a neuron is a sophisticated information processor, taking multiple input signal streams and outputting a result signal stream, we can, in theory (and probably in practice), emulate its substantive functionality with a neural processor (e.g. a chip like IBM's neural processor, but more sophisticated).

If, as the evidence suggests, brain function is a result of the signal processing of many neurons with multiple connections between them, we can, in theory, emulate brain function using multiple neural processors connected in a similar way (with appropriate cross-talk if necessary). [We would probably need to emulate the brain-body neural interface too, i.e. give it sensors and effectors].

If, as the evidence suggests, consciousness is a result of certain aspects of the brain function described above, then, in theory, the emulation could support consciousness.

Each neural processor can itself be emulated in software, and multiple neural processors and their interactions can be emulated in software; i.e. an entire subsystem of the brain can be replaced by a 'black box' subsystem emulation.

In theory, all the neural processors in a brain emulation, and their interactions, can be emulated in software using a single (very fast) processor, e.g. with multi-tasking, memory partitioning, and appropriate I/O from/to the sensor/effector net.

Given the above, it seems to follow that, in theory, consciousness could be supported on such a single processor software emulation of a brain.
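As a toy illustration of the last two steps, here's a deliberately crude sketch in Python, assuming a bare weighted-sum-and-threshold neuron model (a real emulation would need far more biophysical detail):

```python
import numpy as np

def neuron(inputs, weights, threshold=1.0):
    """One 'neural processor': multiple input streams in, one signal out."""
    return 1.0 if np.dot(inputs, weights) > threshold else 0.0

# Many neurons and their interconnections, multiplexed in software on a
# single processor -- the "black box" subsystem emulation described above.
rng = np.random.default_rng(0)
n = 100
weights = rng.normal(size=(n, n))   # connection strengths between units
signals = rng.random(n)             # current signal streams

for _ in range(10):                 # one "tick" of the whole emulated net
    signals = np.array([neuron(signals, weights[i]) for i in range(n)])

print(int(signals.sum()))           # how many units fire after ten ticks
```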

I'm curious to know which of the above step(s) are considered problematic by those who don't agree, and why.
Would you agree with the steps I outlined, or not? If not, why?
 
Wrong.

For example, if their planet had tornadoes, they could eventually learn the isomorphism the program relied on and instantly realize the information processor was running a simulation of a tornado.

True, but if so, that's only because their observation of tornadoes would lead them to believe that the simulation probably referred to something real and significant for the kind of being that made such a thing, rather than any of the other possible solutions which would describe God knows what.

In other words, they'd still need outside information to attempt to decode it.

They couldn't deduce it from the system itself.

Which is (one of the reasons) why there can't be any independently existing entities in a simulator (and thus a simulation) except for the machine parts themselves.

If you want enough information in the system to map exclusively to a tornado, then by God, you've got to make a tornado! That's the only thing that will have the effects of a tornado on our observers without their having to interpret or imagine.
 
Yeah, when the term is used in the context I've been using it in, because otherwise, it's just some physical calculation, some object behaving like itself.
I'm fine with your use of it within a particular context. However, I think you're focusing on the wrong thing here.

From a broader perspective (and westprog, pay close attention), we are simply exploiting relationships we know are there in a process. This sort of thing is generally called a simulation if we set up the process to mimic another one (putting them there is one of the best ways of knowing what is in the process). But the system that mimics the other system doesn't have to be one that we set up intentionally; it's just that we have a word for the system when we do this sort of thing.

For example, many trees form rings every year; they're every bit as much a counter as our computer adding 1 until it gets to 100; we didn't make them do that with the intent of dating the trees, but we do interpret the results, right down to the meaning of it.

Likewise, the purpose doesn't have to be to produce a set of symbols that we read and interpret--it could, instead, be something that is "interpreted" directly into an action we want to happen.

Suppose, for example, that I build a weather simulator, for forecasting purposes. But this particular simulator isn't going to tell me whether or not to bring an umbrella with me when I leave the house... instead, it's going to use its predictions to decide whether or not to turn on the sprinklers on my lawn. Now if I go on vacation, would this simulation suddenly stop being information processing? Would it stop being a simulation? I won't bother to ask if it'd be useful, because I'd maintain that it was useful regardless.

So be aware that you're talking only about a special case when you're talking about something that we set up, let run, wait for the bing, and then read the printout for. And, yes, in that special case, we have to interpret the results for it to be useful; but that's merely a consequence of the use case. That's what such simulations were made to do.

Furthermore, I think it's confusing to define the concept of information processing not in terms of what an information processor per se does, but rather in terms of what happens to that information later. I don't think doing so actually helps us figure out anything about the information or about interpreters of information. I'm not sure why you think it is useful--but I would welcome an explanation for how you think it helps explain things, and what you think it helps explain.
For any single set of state changes among this incalculably large number, there is an equally large set of possible interpretations of those state changes. We could consider the temperature fluctuations as representing the varying exchange rates among European currencies.

Do we consider that all of these possible interpretations - of all possible states - represent a world?
No, westprog. We only consider those that map causally in the same way to the problem space. To put it another way, if I had a sufficiently accurate thermometer to measure the temperature fluctuations, and I were interested in investing in the European stock market, would there be a possible relationship between the ESM and the rock that I could discover, such that I can use the measurements of the temperature fluctuations to get really rich?

If so, then absolutely, it simulates European currencies. If not, you're not mapping relationships.

ETA: Or think of it another way. Suppose I had two rocks. And I use my sufficiently accurate thermometer to measure temperature fluctuations in both. Would I be able to compare one rock's predictions of the ESM to the other to see if they agree?
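Here's a toy version of that two-rock test in Python, with random numbers standing in for the thermometer readings (everything here is invented for illustration):

```python
import random

random.seed(1)
rock_a = [random.gauss(20, 0.5) for _ in range(100)]  # rock A "readings"
rock_b = [random.gauss(20, 0.5) for _ in range(100)]  # rock B "readings"

# Interpret each reading as "market up" (1) or "market down" (0).
pred_a = [1 if t > 20 else 0 for t in rock_a]
pred_b = [1 if t > 20 else 0 for t in rock_b]

agreement = sum(x == y for x, y in zip(pred_a, pred_b)) / 100
print(agreement)  # hovers near 0.5 -- chance level, i.e. the two "simulations"
                  # don't agree, because there is no shared causal mapping
```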
 
I'm fine with your use of it within a particular context. However, I think you're focusing on the wrong thing here.

From a broader perspective (and westprog, pay close attention), we are simply exploiting relationships we know are there in a process. This sort of thing is generally called a simulation if we set up the process to mimic another one (putting them there is one of the best ways of knowing what is in the process). But the system that mimics the other system doesn't have to be one that we set up intentionally; it's just that we have a word for the system when we do this sort of thing.

For example, many trees form rings every year; they're every bit as much a counter as our computer adding 1 until it gets to 100; we didn't make them do that with the intent of dating the trees, but we do interpret the results, right down to the meaning of it.

Likewise, the purpose doesn't have to be to produce a set of symbols that we read and interpret--it could, instead, be something that is "interpreted" directly into an action we want to happen.

Suppose, for example, that I build a weather simulator, for forecasting purposes. But this particular simulator isn't going to tell me whether or not to bring an umbrella with me when I leave the house... instead, it's going to use its predictions to decide whether or not to turn on the sprinklers on my lawn. Now if I go on vacation, would this simulation suddenly stop being information processing? Would it stop being a simulation? I won't bother to ask if it'd be useful, because I'd maintain that it was useful regardless.

So be aware that you're talking only about a special case when you're talking about something that we set up, let run, wait for the bing, and then read the printout for. And, yes, in that special case, we have to interpret the results for it to be useful; but that's merely a consequence of the use case. That's what such simulations were made to do.

Furthermore, I think it's confusing to define the concept of information processing not in terms of what an information processor per se does, but rather in terms of what happens to that information later. I don't think doing so actually helps us figure out anything about the information or about interpreters of information. I'm not sure why you think it is useful--but I would welcome an explanation for how you think it helps explain things, and what you think it helps explain.

As to the first part, that's precisely what westprog and I have been describing.

Like I said, you could use a planet as a simulator, but its use is extremely limited.

As to the last part, we can use a definition of "information processor" which requires no interpreter of the information, but in that case, we're no longer talking about something that does what we typically think of information processors as doing, but rather all sorts of physical systems become information processors.

Then we'd have to go back and invent a new word to answer the question of whether or not the brain is the same kind of information processor as our computers are.
 
Like I said, suppose you ran a perfect and complete simulation of a watershed. If I were to examine the device you're using for that purpose, how would I know by the machine's computations (assuming I don't view the readouts) that it was a complete simulation of the intended system, or an incomplete one, and incomplete to what degree, and for what aspects of the system?

Obviously, there are any number of hypothetical systems the same simulation could correspond to when it's running.

Would all of them be wet? Obviously not, because the real entities (the changing machine components) themselves are not wet, and they must fit the bill, or you wouldn't be running a sim.

So not all of those hypothetical systems will be watersheds, even if there is isomorphism between all of the activity of the watershed and some of the activity of the simulating system (if there is perfect isomorphism for all aspects of the system, you have a replica).

But nothing you just said has ever been a point of contention and furthermore it is not the claim that westprog is making.

The claim made by westprog is that if we are running a simulation of a bowl of soup, down to the atomic level, we are also running a simulation of the way the neural networks of the human brain function -- it just depends on how the results are interpreted.

This is simply false for reasons that should be obvious.
 