Explain consciousness to the layman.

Again, that's not necessarily true (though I happen to think you have a strong point). If you were to study "sunrise" (the common example), you would find that what you were studying was folklore, i.e. the sun does not rise, it only appears to rise; and that is something we could work out and understand without studying the folklore of a "sunrise".

That's why I said above that you are wrong about the definition I've put forward in this thread (to paraphrase) not being of any use for science. The first step in understanding anything is to know what it is you are trying to understand, and I think much of the discussion about "consciousness" is addressing the folklore duality embedded in our language; studying "sunrise" turns out to be about orbital mechanics, not about the sun rising.

No, that's not what I mean at all.

Whatever it is you're studying, you have to look at it first, and whatever method you use to try to figure it out, at the end of the day you have to test it against the thing itself.

Based on what we were pretty sure of, the math told us -- or actually, told certain geniuses among us -- that there should be black holes and that gravity should bend light.

But we didn't know if the math was right until we put it to the test by observing the real universe.

In the case of black holes, the math was partly wrong, since it couldn't explain why the universe was not awash in black holes. Then Hawking figured out that they should evaporate, explained what the process would look like, and physicists have been looking for observable signs of that process ever since.

Whatever you're studying, you start and end with the thing itself.

If you fail on either count, all you have is an idea, which may or may not hold water.
 
That aspect of biochemistry which animates cellular life, resulting in rudimentary awareness, personal chemical integrity, and rudimentary intelligence.
If you feel all life has a degree of consciousness and intelligence, you might as well say it directly. I think that attenuates the utility of the words to an extreme, but I do like the sound of 'personal chemical integrity' - something we should all maintain :cool:
 
I too like that phrasing. Also, as brains become more complex, it seems to me the gap between stimulus and response can be examined by the 'self' and lengthened if that is thought appropriate, e.g. how well one controls the Anger Response to a given stimulus (that ******* who just cut me off).
 
Also, as brains become more complex, it seems to me the gap between stimulus and response can be examined by the 'self' and lengthened if that is thought appropriate, e.g. how well one controls the Anger Response to a given stimulus (that ******* who just cut me off).

It's hard to know whether self-consciousness is activated to direct the activity of suppressing or sublimating the 'primitive' emotional response, or whether it is activated as a result of the activity directed at suppressing or sublimating that response. My heart would like the former, but my head suspects the latter (i.e. we become aware that we are attempting to suppress our anger)...
 
Why You Can’t Replace Your Brain with a Computer Simulation of a Brain

There are several related ideas that have been discussed on this forum that I’d like to address in this post – among them:
  • If you produced a sufficiently accurate computer simulation of a brain (or a human), you could produce a new conscious entity in the world (as happens when a new human being is created biologically).
  • If you produced a sufficiently accurate computer simulation of a populated world, the beings in that world could be conscious and would believe themselves to be living in the world that is being simulated (and, therefore, it’s possible that we ourselves, and our universe, are merely simulations).
  • If you had a computer running a sufficiently accurate computer simulation of a brain, you could hook that computer up to a robot body to produce a conscious robot.
As interesting as these scenarios may be, they all suffer from fatal flaws which make them impossible.

Now before describing these flaws, I want to make it clear that just because these scenarios can’t happen, this does not mean that it is impossible to build conscious machines.

To understand why, we need to first distinguish between simulations and replicas.

These terms have a range of meanings, of course, but for our purposes here, we will use the term replica for anything which mimics all the physical features of a given system at some functional scale.

So a perfect clone of an animal would be a replica of that animal, and perfect dollhouse furniture pieces are replicas of the full sized originals.

So what is the difference between that and a simulation?

Well, for instance, I could use a computer simulation of a piece of furniture, say a chair, to help design and build a replica. I could do this by programming the computer so that it is able to display detailed images of the chair from any angle (even cross-section), allow me to view it with different stains and fabrics, tell me how much material I’d need for various sizes of replicas, how much weight they could tolerate, and so forth.
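To make that concrete, here is a toy sketch of such a design aid in Python (the names, numbers, and scaling rules are illustrative assumptions of mine, not a real CAD program):

```python
# Hypothetical sketch: a crude "chair simulator" used to plan a replica.
# The numbers in the computer are not a chair; they only let us predict
# the properties of a real chair built to these specifications.

def chair_specs(scale=1.0, wood_density=700.0):
    """Predict properties of a chair replica at a given linear scale.

    scale        -- 1.0 = full size, 0.1 = dollhouse size
    wood_density -- kg per cubic metre (700 is roughly oak; an assumption)
    """
    full_size_volume = 0.02                   # m^3 of wood in the original (assumed)
    volume = full_size_volume * scale ** 3    # volume scales with the cube
    mass = volume * wood_density
    # A leg's strength scales with its cross-sectional area (scale squared),
    # so small replicas are proportionally sturdier than large ones.
    load_capacity = 150.0 * scale ** 2        # kg, assuming 150 kg at full size
    return {"volume_m3": volume, "mass_kg": mass, "max_load_kg": load_capacity}

for s in (1.0, 0.1):
    print(s, chair_specs(scale=s))
```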

But I can’t sit in the simulated chair, or actually put it into my daughter’s doll house, or upholster it with real fabric.

That’s because only the machine running the simulation (which we’ll call the simulator) is independently physically real (more on this later).* And while the physical qualities of the replica are identical to those of the original, the physical qualities of the simulator are not – and this is the crucial difference between replication and simulation.

* I use the term “independently physically real” here to distinguish objectively real things and events from things which exist symbolically (like the “bear” in the emblem of the city of Madrid) or in the imagination (like my recollection of a bear). The emblem is real, my brain is real, but there are no real bears in either one.

OK, so that being the case, how do simulations work?

After all, if a simulation can be so accurate that we can use it to predict the behavior of a corresponding replica (like how much weight a chair can take before breaking) then there must be a real correspondence between the simulation and the system it’s simulating (which we’ll call the target system or just the target).

Doesn’t this make them, in some sense, at least in part the same thing?

Actually, it doesn’t.

To explain this, we’ll use a bit of jargon – physical computation and logical (or symbolic) computation.

A computation is simply a change in the state of a system which follows a set of rules.

Because the behavior of our universe can be described in terms of rules (the so-called “laws” of physics) we can describe real events as physical computation.

The melting of ice, or salt dissolving in water, or pool balls ricocheting off the sides of a billiard table – all of these are changes in states of systems, which can be described in terms of rules, which is what makes physics and chemistry possible in the first place... without regular “laws” no changes in our world would be predictable.

(For more on that, see the work of Stephen Wolfram.)
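To see how literally "rule-following state change" can be taken, here is a minimal sketch of an elementary cellular automaton, the kind of rule system Wolfram studies; each step is nothing but a change of state that follows a fixed rule:

```python
# Minimal sketch: an elementary cellular automaton. Every cell's next
# state is fully determined by a rule applied to the cell and its two
# neighbours -- computation as nothing but rule-governed state change.

RULE = 110  # the rule number encodes the 8-entry lookup table

def step(cells):
    n = len(cells)
    return [
        (RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 31
cells[15] = 1  # start with a single "on" cell in the middle
for _ in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```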

Logical computations, on the other hand, are symbolic. Simulations are created by setting up a system of physical computations in such a way that they mimic the computations of another system, real or imagined. And if we agree on what the states of the simulator are supposed to represent, we can “read” the machine in order to understand something about the target system.

This is what allows us, for example, to set up an abacus in such a way that the movements of a few beads can represent the addition and subtraction of numbers much larger than the number of beads in the abacus.
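A toy version of that convention might look like this (the rod-and-bead encoding is an illustrative assumption; the point is that the large number exists only in how we agree to read the bead positions):

```python
# Toy "abacus": the physical state is just a few small integers (bead
# counts per rod), but by an agreed convention we READ that state as one
# large number. Nothing in the beads themselves is the number.

def read_abacus(rods):
    """rods: bead counts per rod, most significant rod first, each 0-9."""
    value = 0
    for beads in rods:
        value = value * 10 + beads
    return value

def add_on_abacus(rods, amount):
    """Move the beads so the abacus reads its old value plus `amount`."""
    total = read_abacus(rods) + amount
    return [int(digit) for digit in str(total).zfill(len(rods))]

rods = [0, 3, 4, 7]                  # four rods, a handful of beads
print(read_abacus(rods))             # read as 347
print(add_on_abacus(rods, 980))      # the beads now read 1327
```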

And this is what allows us to add, subtract, multiply, and divide large numbers using mathematical symbols on paper rather than, say, using quantities of rocks and counting the groups we end up with.

And this is what allows us to set up rooms that make us think we’re flying airplanes.

Computing machines

In the early 20th century, Alan Turing developed a hypothesis regarding symbolic computation and “human computers” — that is, men and women writing out calculations by hand. His idea (or one of his many brilliant ideas, anyway) was that any calculation (symbolic computation) performed by a human computer could also be performed by a mechanical computer, even though the machine did not consciously “know” or “understand” the rules or the symbols.

It was a beautiful and wonderful and heretical and audacious idea... that a dumb and blind machine could perform such tasks. This idea is now taken for granted.

(Some folks have since misinterpreted Turing to assert that a mechanical computer can do anything a human brain can do, but this is not Turing’s notion and is not derivable from it.)

In short, as long as the machine is set up to move from symbol to symbol in the same way the human calculator does, then it will end up with the same symbol(s) at the end of the process as the human will, without having to know what they represent.
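Here is a minimal sketch in that spirit (a toy illustration, not Turing's actual formalism): a blind mark-shuffler that adds 1 to a binary numeral without anything in it "knowing" that the marks are numbers.

```python
# Toy rule-follower: adds 1 to a binary numeral by shuffling marks on a
# "tape" according to fixed rules, with no knowledge of what they mean.

def increment_binary(tape):
    """tape: list of '0'/'1' marks, most significant mark first."""
    i = len(tape) - 1
    # Rule: scanning right to left, turn '1' marks into '0' marks
    # until a '0' is found, then turn that '0' into a '1'.
    while i >= 0 and tape[i] == "1":
        tape[i] = "0"
        i -= 1
    if i >= 0:
        tape[i] = "1"
    else:
        tape.insert(0, "1")  # carry ran off the end: extend the tape
    return tape

print("".join(increment_binary(list("1011"))))  # prints 1100
```

A human reader who knows the convention sees that 11 became 12; the machine just moved marks.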

So like an abacus, any computing machine – including those powered by electricity rather than finger muscles – if properly set up can get you to the right symbol. And as long as you know what the symbols mean and set up some hardware to display them, you can read the machine to get your answer, or you can set up some other hardware to make something happen in the real world as a result of the configuration of machine parts you end up with (whether that’s painting a car door or driving a rover across Mars).

Modern computers can take this idea farther and produce what we can call intuitive symbols – that is, symbols designed so as to be naturally readable by any human, such as a pattern of lights on a screen which looks like a chair to a human eye (but not to all animal eyes because we do not bother to design our screens that way) or vibrations of a speaker cone which sound like a babbling brook to human ears (but not to all animal ears).

In short, the simulation is set up by a programmer (or some other designer, as in the case of the abacus) so that the physical computations of the simulator, which are not identical to the physical computations of the target system, are nevertheless similar enough in certain aspects that you can judge the state of the target system by observing the state of the simulation.

The necessary components


Now one thing you might have noticed is that whenever we talk about simulations, there are always 3 elements involved – the target, the simulator, and the designer/reader.

Interestingly, this is not true for replicas, which have only 2 necessary components.

Many microscopic organisms, for instance, reproduce by a process of replication. In this case the original organism serves as both target and designer (as we’ll see, this cannot be true of simulations); no reader is necessary; and the replica serves as both simulation and simulator (as we’ll see, this also cannot be true of simulations).

This difference arises necessarily from the fact that the physical qualities of the simulator, unlike the replica, must differ from the physical qualities of the target of the simulation.

This is why a simulation of a tornado won’t damage anything inside your computer, but a replica of a tornado produced in a storm box can damage objects inside the box.

But wait a minute now... doesn’t the simulation have to be real in some sense?

If it weren’t real, how could we use it?

This is true, the simulation is “real”. But it doesn’t exist in the simulator.

It can’t exist there because the laws of physics forbid it.

All real things – all things or events which are locatable in space and have a duration in time – can be described in terms of matter and energy.

And if there are two real events A and B (all things can be described as events – even your sofa is simply the event of the molecules which make up your sofa retaining a particular relationship during a period of time... well, something like that, at least, we won’t dive all the way down that rabbit hole just now) you cannot cause both A and B with only enough matter and energy to cause either A or B alone.

OK, but what if A and B are identical? Isn’t the simulator identical with the simulation?

The answer is no.

To illustrate why, let’s consider this claim: In my garage, there is a truck, there is a vehicle, and there is a pick-up. But if you look in my garage, you see only one automobile.

This can be true because if you ask me to describe the truck, the vehicle, and the pick-up, you’ll get an identical answer.

On the other hand, if I were to describe one in terms of bits and voltage potentials, and another in terms of funnel clouds and wind speeds, there would be an obvious problem. And this is precisely where we end up with the simulator and simulation.

Where is the simulation?

It’s similar to the situation of watching Laurence Olivier play Hamlet on stage. Are Olivier (a 20th century British actor) and Hamlet (a 13th century Danish prince) both on stage?

No, only Olivier is on stage.

But we agree he’s playing Hamlet, so where is Hamlet?

Hamlet is real – but not independently real. He is only real as a state in the brain of an observer in the audience. He exists as an act of imagination, which involves matter and energy in the brain.

For the Shakespeare buff, Hamlet is real as an act of imagination... real because the behavior of imagining really happens (requiring no actual Danish prince). But if the performance is a free dress rehearsal, and in the audience is a recent immigrant who speaks no English, has never heard of Shakespeare, comes from a country with no similar theatrical tradition, and just wanted to get out of the rain, then for him there can be no Hamlet, either real as a man or real as an activity of his brain cells.

The same is true of our simulation, which as we have noted cannot be a simulation without a programmer/reader.

Simulations begin and end with imagination

Remember, the target is not involved with the simulation. Tornadoes don’t set up computer simulations of tornadoes.

Instead, a programmer or designer sets up the system according to his/her ideas about the target (even if that target is the designer), which may itself be either actual or imaginary. And the reader uses the simulation to change the state of his/her brain. And the simulator itself is not identical to the target (or to the brain of the programmer/reader).

But wait... can’t the simulator itself be considered the reader? Can’t we get the programmer/reader’s brain out of the picture that way?

As it turns out, no, we can’t.

That’s because it’s impossible to design a simulation in such a way that it must represent exactly one target, which means a simulator-as-reader could not determine which of all the possible scenarios it was supposed to be simulating.

In fact, even if the simulator were a conscious machine with perfect knowledge of its own “body”, it would be unable to determine – from its knowledge of its body alone – even the simple fact that its bodily functions were supposed to represent something to someone in the first place. (That fact exists only as states of other brains.)
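A small illustration of that underdetermination (the byte values are arbitrary; the point is that one and the same physical state supports many incompatible readings, and nothing in the state itself picks one out):

```python
# One fixed "physical state" of a machine: four bytes. What it
# represents depends entirely on how an outside reader decodes it.

import struct

state = bytes([0x42, 0x28, 0x00, 0x00])

print(struct.unpack(">f", state)[0])  # read as a float: 42.0
print(struct.unpack(">I", state)[0])  # read as an integer: 1109917696
print(state[:2].decode("ascii"))      # read as text: B(
```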

The perfect simulation

But wait, what if the simulation is perfect? I mean right down to the quarks. Wouldn’t that limit the simulation to one event?

Well, no, for a couple of reasons. (We’ll ignore problems with the idea of such a feat in the first place.)

First, let’s say we run a perfect simulation of a small river system. Unless you already know that’s what the simulation is supposed to represent (or at least that it’s supposed to represent something meaningful or useful to humans) there’s no reason why the target’s temporal and spatial dimensions should be applied to the sim. It could just as well represent a system in which height, width, depth, and time were all flip-flopped.

Sure, you could program an environment to fix these values, but then that environment would have the same problem.

The second problem is that there’s no way to determine from the simulation (without bringing in outside knowledge, which drags the observing brain back into the system) that the simulation is indeed perfect and complete, and that it’s not instead an incomplete simulation of any number of larger systems.

(It also seems intuitive to me that it should be mathematically impossible to have enough information in the system to explain the system to itself – each addition of explanatory information would require additional explanatory information to explain it, which would also require additional explanatory information, ad infinitum – but I don’t have a proof of it.)

So no, we can’t use the simulator itself as a reader, even if the machine were perfectly conscious of the state of its parts.

The fallout

The upshot of all this is simply that the simulation must always exist only in the imagination of designers and readers.

The simulator has its own independent existence, but the simulation cannot exist in the simulator. Matter and energy in the brain account for the reality of the simulation.

This is why even a perfect simulation of a brain will not, and cannot, produce a new conscious entity in the world, the way that creating a baby does. To do that, a mechanical replica brain is needed.

And this is why even a perfect simulation of a populated environment will not produce conscious beings “in the world of the simulation” who believe they are living inside the target world.

Even if it were true that our universe was itself a simulator machine produced by some hyperdimensional race, we as parts of the simulator would have no way of knowing what it is that our universe is intended to simulate – only the hyperdimensional designers and readers would know that. If we’re parts of a simulator (we cannot be parts of a simulation) then what we observe is only the simulator, not the simulation.

And this is why you can’t put a computer running a simulation of a brain inside a robot and expect it to behave like a brain.

Whenever you replace a physical part with another physical part, only the physical computations matter to the system, and the physical behavior of the simulator must be different from the physical behavior of the target system.

You might as well say you can replace your liver with a computer simulation of a liver.

But couldn’t you just take the symbolic outputs and convert them back into physical outputs, and voila?

The problem with this approach is that it means that none of the physical computation, the real work performed by the object, is getting done during the segment where you’ve made the replacement.

And we know consciousness is performed by the brain itself. It is not an impulse which exits the brain, bound for somewhere else in the body. It is a biological function of the organ, so replacing it with a machine that does something else (e.g. runs simulations of things or plays Tetris) would mean that the function is no longer performed, which means no consciousness.

And since we know that running the simulation doesn’t make the machine conscious, we cannot get consciousness that way either.

Consciousness is a real event. However it’s done by the brain, only a functional replica of the brain will also perform that behavior.

But wait a minute... what if the brain is a computer? Wouldn’t it work then?

Well, if that were true, it could work, but only if the computer were set up to run like your brain, and since your brain isn’t a simulator machine, you’re right back to needing a replica, not a simulation.

And besides, your brain isn’t a computer – or at least, not the kind we think of when we use the term “computer”.

But that will have to wait for a later post.
 
Why You Can’t Replace Your Brain with a Computer Simulation of a Brain

[snip]....



Well done..... very nicely put.

I would like to offer a thought that your text inspired.

If I draw a sequence of images on a stack of papers and flip the pages, the animation comes alive.

How is that different from an animation in a computer?

Would anyone claim that the animation on paper is in any way alive, or that flipping the pages brings the drawn characters of the animation to life?

So why are people so quick to ascribe life to the same characters in the flipped pages of the computerized version?

Someone might jump in and say that the characters in the computer have subroutines that make them do actions that are not just a predetermined sequence as on the paper.

I would reply..... that is just an expanded script..... extra subroutines and even ones with randomness are just an expanded script..... just like I might add a new page in the stack of paper drawings.

When I roll a die, I never predetermine which number will come up... but I rolled the die, and the number has meaning only to me. The die neither rolled itself nor knew or cared or decided what number to show.

Think of an AI animation, or any other AI, as flipping a stack of papers where I could, according to set rules, randomly, or depending on the reaction of the observer, pause the paper flipping, change a few of the papers in the stack, and then continue without the observer being aware of this happening.

That is all a computer simulation is..... flipping a dynamically changeable stack of papers with symbols on them. The insertions of the changed papers are in accordance with the observer's responses to previous sequences, a random factor, or some other SCRIPTED rule.
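A toy sketch of that "dynamically changeable stack of papers" (the names and the swapping rule are illustrative assumptions):

```python
# Toy flip-book: an "interactive animation" is just a sequence of frames
# plus a scripted rule for quietly swapping upcoming frames before they
# are shown. The observer never sees the swap happen.

import random

frames = ["frame_%02d" % i for i in range(10)]  # the stack of papers

def play(frames, observer_reaction):
    for i, frame in enumerate(frames):
        print("showing", frame)
        # Scripted rule: depending on the observer's reaction (or chance),
        # replace a later "page" in the stack before we reach it.
        if observer_reaction(frame) and i + 2 < len(frames):
            frames[i + 2] += "_swapped"

play(frames, observer_reaction=lambda f: random.random() < 0.3)
```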
 
But wait, what if the simulation is perfect? I mean right down to the quarks. Wouldn’t that limit the simulation to one event?
A perfect simulation doesn't have to be perfect down to the quarks.

A simulation can also be an abstraction that still perfectly simulates the properties of something in reality, or at least the properties we are particularly concerned with.

If a perfect yet abstract simulation of consciousness had all of the relevant properties of a natural, real consciousness, I still think it would be inaccurate to say that it wasn't actually a consciousness.

A simulation of swarming or flocking is still a genuine swarming or flocking behavior, even if the entities swarming or flocking are simulated.

A simulation of evolution by "natural" selection is still a genuine process of evolution by "natural" selection, even if the entities are simulated.
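For concreteness, here is a minimal boids-style sketch of simulated flocking (the update rule and constants are illustrative assumptions); whether the resulting pattern of numbers counts as genuine flocking is exactly the point under dispute in this thread:

```python
# Minimal boids-style sketch: each simulated agent steers toward the
# group's centre and toward the average heading, so the positions drift
# into a tighter, coherently moving cluster -- "flocking" in the numbers.

import random

N = 20
pos = [random.uniform(0, 100) for _ in range(N)]  # 1-D positions
vel = [random.uniform(-1, 1) for _ in range(N)]

for _ in range(200):
    centre = sum(pos) / N
    avg_vel = sum(vel) / N
    for i in range(N):
        # cohesion: steer toward the centre; alignment: match the group
        vel[i] += 0.01 * (centre - pos[i]) + 0.05 * (avg_vel - vel[i])
        pos[i] += vel[i]

print("spread after flocking:", max(pos) - min(pos))
```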

Your arguments are more relevant to actual "things". Consciousness is not a "thing", but rather an emergent property of something.
 
So, if instead of a computer simulation of a brain, someone actually built a computer which replicated all of the functions of a human brain, would it be conscious?

What would be the difference between the computer brain and a biological brain?
 
As interesting as these scenarios may be, they all suffer from fatal flaws which make them impossible.
So, this is your burden--that these things are impossible. And the things you claim are impossible are:
  • If you produced a sufficiently accurate computer simulation of a brain (or a human), you could produce a new conscious entity in the world (as happens when a new human being is created biologically).
  • If you produced a sufficiently accurate computer simulation of a populated world, the beings in that world could be conscious and would believe themselves to be living in the world that is being simulated (and, therefore, it’s possible that we ourselves, and our universe, are merely simulations).
  • If you had a computer running a sufficiently accurate computer simulation of a brain, you could hook that computer up to a robot body to produce a conscious robot.
In other words, you're setting out to argue that it is impossible for a sufficiently accurate computer simulation of a brain to produce consciousness. That it is impossible for a sufficiently accurate computer simulation of a populated world to create beings in that world that are conscious and believe themselves to be living in that world. And that it is impossible to hook a computer running a sufficiently accurate computer simulation of a brain to a robot body to produce a conscious robot.

Now let's go over your argument whereby you're demonstrating said flaws. But first:
These terms have a range of meanings, of course, but for our purposes here, we will use the term replica for anything which mimics all the physical features of a given system at some functional scale.
Your language here makes me shudder. To mimic something is to imitate it; function describes a behavior, and the phrase "at some functional scale" strongly suggests a particular scale of behavior. So if I take this definition literally, a replica should be something that imitates the behaviors of a system at some scale. But that's not how you're using the term at all.

So I'm going to ignore this definition. Instead, I'm going to propose that a replica is something that actually has the same physical properties of the system at some scale. I'm extrapolating this definition from here:
And while the physical qualities of the replica are identical to those of the original,
I hope this is satisfactory.

Now, the other thing you say is:
That’s because only the machine running the simulation (which we’ll call the simulator) is independently physically real
...where independently physically real means:
I use the term "independently physically real" here to distinguish objectively real things and events from things which exist symbolically (like the "bear" in the emblem of the city of Madrid) or in the imagination (like my recollection of a bear). The emblem is real, my brain is real, but there are no real bears in either one.
And by the way, there are some established terms for these things that might be helpful. A sign is a representation of an idea somewhere. The intenSion (with an S) is the concept behind the idea, and the extension is the thing the idea is about. A favorite analogy of mine is to have you consider your "right big toe"; the sign is on the screen. The intension is in your head. The extension is on your right foot.
After all, if a simulation can be so accurate that we can use it to predict the behavior of a corresponding replica (like how much weight a chair can take before breaking) then there must be a real correspondence between the simulation and the system it’s simulating (which we’ll call the target system or just the target).

Doesn’t this make them, in some sense, at least in part the same thing?

Actually, it doesn’t.
Of course they are not the same thing. A sign is no more an intension than an intension is an extension. But all three of these things have extensions; as in my analogy, the extension of the sign "right big toe" is on the screen--between those two quotation marks. The extension of the intension is to be found somewhere in that warm fleshy bit--between your two ears. And the extension itself is between your right little toe and the left edge of your right foot, inclusively.

Let's repeat the above list again to clarify this point:
  1. If you produced a sufficiently accurate computer simulation of a brain (or a human), you could produce a new conscious entity in the world (as happens when a new human being is created biologically).
  2. If you produced a sufficiently accurate computer simulation of a populated world, the beings in that world could be conscious and would believe themselves to be living in the world that is being simulated (and, therefore, it’s possible that we ourselves, and our universe, are merely simulations).
  3. If you had a computer running a sufficiently accurate computer simulation of a brain, you could hook that computer up to a robot body to produce a conscious robot.
For discussion purposes, for item 1, let's say that this runs inside a red box. Item 2 runs inside of a green box. As for item 3, we'll just say that this is an android. Now, the simulation in item 1 will be running inside that red box. The simulation of each human in the green box is in the green box somewhere, as is the simulation of the world they populate. And the simulation in the android is between the android's two ears, just as all intensions we have are in the fleshy bit between our ears. Get the picture?

But let's look at your explanation anyway, because here is where you begin to clash with everything I call mathematics and logic.
To explain this, we’ll use a bit of jargon - physical computation and logical (or symbolic) computation.
So what pray tell is a computation? Well, you say:
A computation is simply a change in the state of a system which follows a set of rules.
Sure. So this is a computation. But what is a logical computation then? Well, you contrast this with:
Logical computations, on the other hand, are symbolic.
And this is where we clash, big time. We use symbols for logic simply because the extension of logic is abstract in nature, but those abstractions are abstractions of the rules of the operation of the universe.

In terms of the simulation, the simulated entities that we actually interpret are first extrapolated as a sign--like the right big toe that is on your screen. The symbol itself--the intension--is the right big toe that is between your ears, and its extension is the right big toe that is on your foot.

Now where is the computation happening? It's not happening in the warm fleshy bit between your ears, that's for sure. If it were, you wouldn't be simulating. You'd be faking a simulation--you'd be fudging your results. Therefore, to identify the computation with the kind of thing that lives in that warm fleshy bit between your ears is a category error.

The computation is, instead, happening inside that blue box. That's a physical computation. You may intend for it to represent the tornado that flew through Kansas last year, and it may or may not map to that well, but the actual computation is nevertheless something that is done by a physical device outside your head. Both the computations in the simulation and the computations in the entity you are simulating are the kinds of right big toes you find on your foot.

It's a physical computation.
Computing machines

In the early 20th century, Alan Turing developed a hypothesis regarding symbolic computation and "human computers" - that is, men and women writing out calculations by hand. His idea (or one of his many brilliant ideas, anyway) was that any calculation (symbolic computation) performed by a human computer could also be performed by a mechanical computer, even though the machine did not consciously "know" or "understand" the rules or the symbols.
...and here's where you start begging the question. A human computer can also follow rules without consciously knowing or understanding the symbols. But what you're trying to demonstrate is that it is impossible for a computer to know or understand the symbols.

But you haven't demonstrated this. Suppose that knowing and understanding symbols happens inside that warm lumpy flesh between your ears simply as a result of your recognizing patterns. Let's suppose that the way it works is that I recognize patterns, and I'm able to associate the extension of a sign to observed percepts and formulate an intension of it; that is, I'm able to recognize that the thing on my screen corresponds to particular things I perceive in reality, to formulate a method of recognizing what is and is not that kind of thing, and to associate that sign with said formulation (intension).

Now what part of this can a computation not do? If you saw the presentation I linked to earlier, you would see how computation can spontaneously formulate steady-state expectations of values which look pretty much like intensions to me--that kind of "right big toe" thing that is inside that warm lump of fleshy bit between your ears. Furthermore, to link a sign to this thing is simply another layer of recognition, followed by an association as a label. Easy peasy. The extension is where it always was--in reality--and recognition is simply the act of getting this extension to trigger an intension--also explained in the talk.
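To make that recognition story concrete, here is a toy sketch (the prototype-averaging scheme is an illustrative assumption of mine, not a claim about how brains or the linked presentation actually work):

```python
# Toy recognition: an "intension" is modeled as a stored prototype,
# recognition is a percept triggering the nearest prototype, and a
# "sign" is just the label associated with that prototype.

intensions = {}  # sign -> prototype feature vector

def learn(sign, examples):
    """Form an 'intension' as the average of observed example percepts."""
    dims = len(examples[0])
    intensions[sign] = [sum(e[d] for e in examples) / len(examples)
                        for d in range(dims)]

def recognize(percept):
    """Return the sign whose intension best matches the percept."""
    def distance(proto):
        return sum((p - q) ** 2 for p, q in zip(percept, proto))
    return min(intensions, key=lambda s: distance(intensions[s]))

learn("toe",   [[1.0, 0.2], [0.9, 0.3]])
learn("chair", [[0.1, 0.9], [0.2, 1.0]])
print(recognize([0.95, 0.25]))  # -> toe
```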
Modern computers can take this idea farther and produce what we can call intuitive symbols - that is, symbols designed so as to be naturally readable by any human, such as a pattern of lights on a screen which looks like a chair to a human eye (but not to all animal eyes because we do not bother to design our screens that way) or vibrations of a speaker cone which sound like a babbling brook to human ears (but not to all animal ears).
Those are called "signs". That's the "right big toe" that is on your computer screen.
Now one thing you might have noticed is that whenever we talk about simulations, there are always 3 elements involved – the target, the simulator, and the designer/reader.
But what you're claiming is impossible is for the simulator to be the reader. What you're failing to appreciate is how a human being gets to be a reader in the first place; that is the very thing you're claiming that the simulator cannot do.
This difference arises necessarily from the fact that the physical qualities of the simulator, unlike the replica, must differ from the physical qualities of the target of the simulation.
Sure. But that's true of human beings. The "right big toe" that is inside that warm fleshy bit between your ears is a different physical quality than the "right big toe" that is the warm fleshy bit on your right foot. So since this is true of humans, it is of no consequence that it's true of computers.
This is why a simulation of a tornado won’t damage anything inside your computer, but a replica of a tornado produced in a storm box can damage objects inside the box.
But you keep comparing the wrong things to the wrong things. You insist on proving that a simulation of a brain is not a brain, but you forget that the brain itself is doing simulation. If you want to show that the thing in the blue box cannot be conscious, you need to compare what the thing in the blue box does, not with what I am, but with what I do. After all, you're not trying to show that it's impossible that the thing in the red box, the green box, or between the android's ears could be a warm lump of fleshy bit. You're trying to show that it's impossible that the thing in the red box, the green box, or between the android's ears is conscious.

So, yeah. The thing inside that blue box won't blow down a house. But neither will the thing inside that warm fleshy bit blow down a house. That warm fleshy bit isn't conscious in virtue of creating things that blow down houses when it imagines tornadoes, and nobody is claiming that there's a warm fleshy bit inside that red box, that green box, or those android ears.
But wait a minute now... doesn’t the simulation have to be real in some sense?

If it weren’t real, how could we use it?

This is true, the simulation is "real". But it doesn’t exist in the simulator.
Again, comparing the wrong thing to the wrong thing. The right big toe that is in your warm fleshy bit between your ears is equally not the right big toe that is on your foot. The point isn't that the simulation has a warm fleshy bit in it. The point is that it has the requisite thing in it necessary to be the kind of right big toe that is in the warm fleshy bit between your ears. You know, that thing that is conscious.
And if there are two real events A and B (all things can be described as events – even your sofa is simply the event of the molecules which make up your sofa retaining a particular relationship during a period of time... well, something like that, at least, we won’t dive all the way down that rabbit hole just now) you cannot cause both A and B with only enough matter and energy to cause either A or B alone.
Didn't we go over this before? There's no such rule, and there are so many counterexamples as to make this rule perverse. With just enough energy to shove a glass off of a table, I can cause it to break. With just enough energy to turn a steering wheel, I can cause a 40-car pileup. With just enough energy to start the big bang, everything that ever was or will be in the universe resulted. Any time you knock any system out of equilibrium, the effects can follow indefinitely. The famous butterfly-flaps-its-wings analogy of huge chains of effects from the tiniest of causes violates exactly no laws of physics--and how much energy do you suppose a flap of a butterfly wing uses?

There is no such thing as a law of physics that states that with only enough energy to cause A you cannot cause B to happen. The law of conservation of energy isn't such a law; after you cause A, you still have exactly as much energy as before you caused A. If you just up and cause B anyway, you'll still have the same amount of energy. If you cause A and that causes B, you still have the same amount of energy. If you knock over a domino (A), and it causes another domino to fall (B), and another (C), and another (D), and another (E), and so on, you can get an indefinite chain of such falling dominoes to occur, with exactly 0 energy loss.
Simulations begin and end with imagination

Remember, the target is not involved with the simulation. Tornadoes don’t set up computer simulations of tornadoes.
Actually, yes, they do. It's impossible to simulate a tornado without knowing what a tornado is, and the means of knowing what tornadoes are happens to be observing tornadoes. In this particular case, the tornado came first. Then the intension of the tornado. And only then did the simulation follow.

You simply failed to trace your causes back far enough. Now, in theoretical physics, we might get more indirect forms of causation, where we test the idea first, then we observe the phenomenon. As for tornadoes, though, they are the root cause for their simulations.
Instead, a programmer or designer sets up the system according to his/her ideas about the target (even if that target is the designer), which may itself be either actual or imaginary. And the reader uses the simulation to change the state of his/her brain.
But in the case of tornadoes, they were actual, and they caused some concern, which led to the programmer making the simulation.
But wait... can’t the simulator itself be considered the reader? Can’t we get the programmer/reader’s brain out of the picture that way?
Those are two entirely different questions. Do not confuse them.
As it turns out, no, we can’t.

That’s because it’s impossible to design a simulation in such a way that it must represent exactly one target. Which means a simulator-as-reader could not determine which of all the possible scenarios it was supposed to be simulating.
And now you're begging the question. How do you suppose that warm fleshy bit between your ears represents exactly one target? Somehow it actually does this.

Personally, I think it's much simpler--so simple, even a simulation could do it. We perceive, and we formulate intensions based on our perceptions. Those intensions are classifications of general targets, and we're able to recognize particular tokens (<- slightly different use than before).

So what's your theory on how this works? Neuro-biologically induced electromagnetic induction waves? I'm not exactly sure how that would help. All my explanation needs is a causal flow.
In fact, even if the simulator were a conscious machine with perfect knowledge of its own "body", it would be unable to determine - from its knowledge of its body alone - even the simple fact that its bodily functions were supposed to represent something to someone in the first place. (That fact exists only as states of other brains.)
Representation in this sense is simply the recognition of extensions--and these are ever present extensions that are the same token of the intensions, so they're actually the easy case examples. What exactly is it that you think goes on between those ears that allows this to happen?
 
A perfect simulation doesn't have to be perfect down to the quarks.

A simulation can also be an abstraction that still perfectly simulates the properties of something in reality, or at least the properties we are particularly concerned with.

If a perfect yet abstract simulation of consciousness had all of the relevant properties of a natural, real consciousness, I still think it would be inaccurate to say that it wasn't actually a consciousness.

A simulation of swarming or flocking is still a genuine swarming or flocking behavior, even if the entities swarming or flocking are simulated.

A simulation of evolution by "natural" selection is still a genuine process of evolution by "natural" selection, even if the entities are simulated.

Your arguments are more relevant to actual "things". Consciousness is not a "thing", but rather an emergent property of something.

Look in your computer and see if you can find anything swarming or flocking.

If you can't, then this behavior is imaginary in the same way that all other simulated behavior is imaginary.

It really is that simple.
 
So, if instead of a computer simulation of a brain, someone actually built a computer which replicated all of the functions of a human brain, would it be conscious?

What would be the difference between the computer brain and a biological brain?

The problem with this question is that it assumes that a "computer" actually can "replicate all the functions of a human brain".

Despite the claims of the computational literalists, this does not appear to be the case -- but we'll have to delve into that in another post.

So we must rephrase -- if someone actually built a machine which replicated all of the functions of a human brain, would it be conscious?

The answer is yes.

What would be the differences between the mechanical brain and a biological brain?

Well, that's not a question that can be answered at the moment.

To answer it, we'll first need to know precisely how the human brain performs experience... only then will we know which features of the biological brain need to be replicated and which do not in order for a replica brain to be conscious.
 
The problem with this question is that it assumes that a "computer" actually can "replicate all the functions of a human brain".

Despite the claims of the computational literalists, this does not appear to be the case -- but we'll have to delve into that in another post.

So we must rephrase -- if someone actually built a machine which replicated all of the functions of a human brain, would it be conscious?

The answer is yes.

What would be the differences between the mechanical brain and a biological brain?

Well, that's not a question that can be answered at the moment.

To answer it, we'll first need to know precisely how the human brain performs experience... only then will we know which features of the biological brain need to be replicated and which do not in order for a replica brain to be conscious.

I'm not saying that I think it is possible right now with our current level of technology, just trying to find out if you objected to the principle that a computer could ever be conscious.

I look forward to reading your response to yy2bggggs's excellent post above.
 
... your brain isn’t a simulator machine...

I think that's exactly what it is; it continually models an internal, virtual reality based on perception of incoming signals. It's limited in scope and resolution, but if our internal model isn't a simulation, what is?
 
I'm not saying that I think it is possible right now with our current level of technology, just trying to find out if you objected to the principle that a computer could ever be conscious.

I look forward to reading your response to yy2bggggs's excellent post above.

A computer cannot be conscious in the way that the computational literalists want it to be... by programming alone, with only enough hardware to "run the logic".

In fact, no real event can be caused by that scenario beyond "running the logic" itself.

There's no reason, at the moment, to believe that a machine can't be conscious, and of course no reason to believe that such a machine would not contain a computer (or computers) of some sort.

As for yy2's post, I'd love to address some of those issues, but I'm not going to get into a discussion using the terms he insists on using... we've seen where that ends up. By employing terms like "sign" and "symbol" in the way he wants to do, we get into a conflation of the real and symbolic right off the bat, so as long as he demands that we speak his jargon, I'm afraid it's a non-starter.

I mean, if you want to understand how a vending machine operates, you're off on the wrong foot if you begin by saying that the machine "recognizes" coins as "tokens" with "values", for instance. Using anthropomorphic metaphors to discuss these matters is a surefire ticket to confusion.
 