Explain consciousness to the layman.

OK, we've solved one problem, but there are others.

The largest of these is that you propose the existence of a "conscious entity" that has been "brought to conscious awareness of our world" by some sort of "link" with a machine.

So you've just whipped this "conscious entity" up out of even less than thin air (no energy or mass appear to be involved) and then proceed to make it aware of "our world"... at which point you also invent "its world" necessarily by contrast... by means of an unspecified "link" with a machine of nondescript construction.
OK, sorry, I assumed you were following the discussion. Suppose we have an artificial brain that is capable of consciousness, hosted by a computer (such as has been discussed in earlier posts).

The sensory inputs and motor outputs of this brain are connected to sensors and effectors on a humanoid mechanical chassis, so that it can sense the external environment and physically respond to it. It sees and hears through cameras and microphones, and it moves using artificial muscles (something like ASIMO).

Through this interactive interface, the conscious entity has learned about the world - or at least, the environment the humanoid machine has access to [it seems to me that if we were able to construct an artificial brain based on the structure and function of the human brain, e.g. multiple neural emulators connected together, that could support consciousness, it would need to develop and learn in a roughly similar way to a human brain before it would be able to understand its environment and communicate effectively].
 
I wonder how many "physical lumps of matter" you can list that do NOT do what ONLY ONE KIND of physical lump of matter does.
That's irrelevant to my point. I'm sure you really want to make a point to correct something I never said, but all I need to establish is that at least one kind of physical lump of matter does this thing.

Nevertheless, you're also begging the question. How do you know that only one kind of physical lump of matter does this thing, unless you're particularly loose with your definition of "kind"? We're supposedly talking about physical reality. That's a pretty big place. We're talking many quintillions of cubic light years of just observable space--and all I'm positing is the existence of a second type of entity doing something that we know for a fact at least one type of entity within this space already does.

I'm perfectly willing to say "at least one type of entity" in this kind of space exists which is conscious. I'm even willing to allow some skepticism for moving up to the claim of "at least two types of entities probably are conscious"--that is, I agree, a huge step.

But a claim that only one type of entity is conscious because you only know of one? I think you're going to have to back that one up.
So I think your statement “we know that physical lumps of matter do” in fact implies DOING NOTHING... right?
Huh? If I demonstrate that at least one entity does a thing, isn't that sufficient to demonstrate that an entity can do it?
So if you can tell me what other lumps of matter are prone to developing consciousness other than brains, then maybe we can entertain the idea that silicon chips might do so.
You're shifting the burden (and furthermore, establishing an argument from personal incredulity). Try this. If you can show that a brain is required for consciousness, then maybe I'll entertain your theory that a silicon chip cannot produce it.
So assuming that since ONE TYPE and NO OTHER of physical lumps of matter can achieve consciousness then others are just as easily going to do it, is a bit unfounded in epistemological facts.
Huh?
But what “we know that physical lumps of matter do” in 99.999999% of all lumps of matter that exist is NOTHING. Just because 0.000001% of all matter that exists on Earth in the last 5 billion years managed to evolve the ability to be conscious does not imply that a silicon chip will do it by any stretch of epistemological imagination.
I don't agree that you have a proper epistemic inference. Show me your work.

I vehemently disagree with the rule: "I only know of one type of thing that does X. Therefore, nothing can do X unless it is that type of thing." If this were a valid epistemic rule, we would not have such ubiquitous concepts as "proof of concept".
 
And what does all this gobbledygook mean? Does it mean that the FSM exists?
No.
Does it imply that Superman can see through women's skirts and burn you by gazing at you?
No.
You imagining a conscious silicon chip running a simulation makes it just as likely to occur as a Superman who leaps higher than skyscrapers.
So let me get this straight.

Your argument is that since "imagining a conscious silicon chip running a simulation" is an idea, and "a Superman who leaps higher than skyscrapers" is also an idea, then an actual conscious silicon chip must be just as likely as an actual Superman leaping higher than skyscrapers.

Is that representative of what you're arguing? If so, I leave it up to the reader whether or not he agrees that this argument is sound.
 
And what does all this gobbledygook mean?
To put it another way, let's use the analogy of a physical map. Suppose I had a map of roads in France. This means we can have maps of locations. Furthermore, maps are actual things--they are made of ink and paper. Or, at least, the ones we know about.

But we can also share maps. You can give me your map of Jamaica. It's still a map, and it's still a thing. But now you want to build an argument based on the fact that we can draw and share maps of Oceana, Candy Island, and Mario World. Sure, we can do that--but they are still maps.

But now you're trying to argue that we cannot make maps on blackboards. Well, maybe that's right. Blackboards are not paper and ink. But I'm having difficulty following your argument, because I don't see the inherent limitation. Well, you don't know how the map of France gets to actually be one of France, but you know that pen and paper maps exist, and somehow get to be of France--perhaps being a map of a thing requires really complicated stuff. But, you surmise, if we just draw a line on the blackboard, that could be anything. That line could be a road, or the side of a hospital. And no matter how many lines we draw, it would simply be a dusty line that could be anything, whereas we know our maps of France involve nice thin lines embedded in sheets of paper, and also somehow wind up being of France.

I object that your argument against blackboards being maps doesn't hold. Maps are just things, and we know they exist, and get to be about real places. All you need to do in theory is look at pen and paper maps and figure out how they can be maps of France, then see if that requires ink, or if drawing lines suffices. Furthermore, best as I can tell, your argument precludes even the possibility that pen and paper maps can be of France, because even if it does require ink--though I'm not sure why it would--the ink lines themselves could be roads or the sides of hospitals. I have no idea why you think a line gets to be of France solely because that line is drawn in ink, and I don't buy the argument that we have to make the blackboard map an ink map to work simply because the ink map, which we know does work, uses ink.

But you object that just because we can put a picture of a blackboard with some lines on it into one of our maps doesn't mean what's drawn on the actual blackboard will ever correspond to a particular place. And, sure, that's fair; the one doesn't automatically imply the other. But I still don't see why you think a blackboard cannot be a map of France.

But then you go on to say that since we can draw a map of Candy Island, it's equally likely that a blackboard can be a map of France as it is that Candy Island exists.

And to that, I say, bollocks. The fact that both are things we put in maps does not entail they are equally likely. The argument is fundamentally broken here.
 
To put it another way, let's use the analogy of a physical map. Suppose I had a map of roads in France. This means we can have maps of locations. Furthermore, maps are actual things--they are made of ink and paper. Or, at least, the ones we know about.

That's a reasonable analogy, but I've noticed that a lot of the claims for simulated consciousness assume that a sufficiently accurate map will have the same properties as the territory.
 
Ready for what?

A definition that helps rather than hinders effective communication.

And should you demand - as you possibly have a right to do - what I think constitutes physical addition - it's where the quantities of two or more objects are combined to produce a collection of objects in the mind of a conscious observer.
 
That's a reasonable analogy, but I've noticed that a lot of the claims for simulated consciousness assume that a sufficiently accurate map will have the same properties as the territory.

You don't seem to get the analogy, westprog -- he is saying that our minds are maps already.

Thus at worst the simulated consciousness is a lossy reproduction of a map.

If you photocopy a black and white map, is anything important lost?

If you scan a color map, run it through a computer, and print out a copy, is anything important lost?

Maybe, maybe not, depends on the properties of the original map you consider "essential" to "map-ness."
 
A definition that helps rather than hinders effective communication.
All that's necessary is to restrict the scope to the context at hand, the relationship between the input(s) and the output(s) of the element(s) in question.
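To make that concrete, here is one way to cash it out -- a minimal Python sketch of my own, not anything proposed upthread, and `is_adder` is a hypothetical name: an element counts as "performing addition" in a given context exactly when its input/output relation matches addition over the domain that context cares about, regardless of its internals.

```python
import itertools

def is_adder(box, domain=range(8)):
    """Hypothetical criterion: the element counts as performing addition
    (in this context) iff its input/output relation matches '+' over the
    domain we care about. Its internals are irrelevant to the test."""
    return all(box(a, b) == a + b
               for a, b in itertools.product(domain, repeat=2))

# Two very different internals, one and the same I/O relation:
lookup = {(a, b): a + b for a in range(8) for b in range(8)}
print(is_adder(lambda a, b: a + b))           # True
print(is_adder(lambda a, b: lookup[(a, b)]))  # True
```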

And should you demand - as you possibly have a right to do - what I think constitutes physical addition - it's where the quantities of two or more objects are combined to produce a collection of objects in the mind of a conscious observer.
OK...
 
OK, sorry, I assumed you were following the discussion.

I was. Apparently a bit more closely than you were following your own part of it, if I might observe.

Following the discussion does not mean accepting repetitions of previous errors in argument as if they contained no errors.

Suppose we have an artificial brain that is capable of consciousness, hosted by a computer (such as has been discussed in earlier posts).

Stop.

You may be again assuming your conclusions. I believe you probably are.

If we have an "artificial brain that is capable of consciousness", then we have an object.

When you say that it is "hosted by a computer" then what you're saying is that you have the computer running a sim of it.

In other words, if you have a "pick-up truck that is capable of hauling a half ton load" and it's being "hosted by a computer" then obviously we've got a computer in front of us, not a truck.

The computer can't haul a half ton load. So if we ask it to replace the truck in a physical system, it can't do that, unless we're asking it to do something which both objects can physically accomplish (a small category of tasks, I imagine).

What the real truck can actually do, the computer cannot actually do.

So what you're saying here is that we have a computer that's running.

It happens to be running a simulation of an artificial brain, but unless the physical outputs of the physical actions the computer performs to do this task also end up mimicking the physical outputs of a brain, the fact that it's running a sim for someone is irrelevant to whatever we want to do with it with regard to a physical system.

(Remember, the fact that it's even running a sim is undetectable from any examination of the object without some sort of knowledge from outside the object... the nature of the intended target of the simulation is stored in a pattern in someone's brain, it has no effect at all upon the computer in front of us.)

The sensory inputs and motor outputs of this brain are connected to sensors and effectors on a humanoid mechanical chassis, so that it can sense the external environment and physically respond to it. It sees and hears through cameras and microphones, and it moves using artificial muscles (something like ASIMO).

This is precisely the error I was trying to tell you about.

We must be clear that we don't have a brain here, we have a computer. That's all.

It's true that there's someone somewhere who has set up some of the computer's behavior to mimic the behavior of a human brain in some ways. But the physical machine itself isn't acting like a physical brain in the real world -- you can open up the machine and verify that by simple observation -- and in fact there is no way to even deduce that someone is using the machine in that way, because that information is part of someone else's brain state. So this fact is irrelevant to the scenario we are considering.

So odd as it may seem, we not only can ignore the intended target of the sim this machine is running, we are obligated to ignore it. It is not part of the system we have available to us. (And cannot even be deduced from all the information in the system, for the same reason you can't physically examine someone and tell if she has a twin.)

But let's assume that what you describe here is the case... the machine accepts physical inputs functionally identical to what a brain accepts (remember, there can be no "logical" inputs to a brain, only physical) and produces physical outputs functionally identical to what a brain emits.

OK, what sort of machine can do that for us?

A machine that is built to run digital simulations for us?

Well, no. The physical inputs and outputs of such a machine are way off. So if the machine we use happens to be running a simulation of something, that won't help us... might not hurt, but won't help.

Obviously, to replace any functioning organ in the body, you need to build a physical replica. It might not look a lot like the original organ -- a peg leg, for example -- or it might be indistinguishable, such as a new organ grown from stem cells. But it will physically do what the original did.

Just like our replacement truck needs to be able to haul 1,000 pounds.

So that's what this machine will have to be, a functional replica.

To do that, we must use the artificial brain we built at the start. The computer running the simulation of the artificial brain cannot be used, because its physical outputs are all wrong.

Through this interactive interface, the conscious entity has learned about the world - or at least, the environment the humanoid machine has access to [it seems to me that if we were able to construct an artificial brain based on the structure and function of the human brain, e.g. multiple neural emulators connected together, that could support consciousness, it would need to develop and learn in a roughly similar way to a human brain before it would be able to understand its environment and communicate effectively].

Now, as long as we're talking about putting the artificial brain into this mix with other machinery, we're good.

If you're talking about using a machine that's built for another task (like running a simulation or browsing the internet or mixing records) then it won't work.

And btw, I was being generous when we talked about replacing the neurons.

If trans-brain waves are indeed part of the solution to the puzzle of consciousness (and they're our best candidates for generating anything that's coordinated across disparate brain areas, which we know is happening) then replacing them with another material might screw that up.

It could be that our rebuilt bit of cortex ends up creating a conscious blind spot.

It's this sort of thing that we have to be very careful about.

At first glance, you might think you could represent power lines pretty easily as input/output, for example. But you can't. Something as simple as arranging them vertically or in triangular arrangements makes a huge difference in how they behave.

As always, shape matters.

So anyway, no, like I said from the beginning, you can't take something imaginary (like the intended target of a computer simulation) and replace a real part with it.

Are you following?
 
Adding processors does not add functionality. Computable is computable.

Anything that a 100 billion processor brain can do, a single processor brain can also do (given enough memory and time). Emulating the behavior of a 100 billion processor brain where all the processors are running on the same clock and where communication happens in a known and determinable number of clock cycles is straightforward on a single processor system.

If the processors all have different clocks that are not synchronized, then it becomes more complicated, since to accurately reproduce the multi-processor brain you would need to compute the state at a small fraction of one cycle of the fastest clock, and that fraction may never be small enough to duplicate the behavior exactly (or it may be, depending on the true nature of time). But that only matters if you want an exact clone. As the size of the time slice used decreases, you will soon get to "close enough".
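For the synchronized-clock case the quoted post describes, the lockstep trick is easy to show. Here is a minimal Python sketch (my own illustration; `step` and the ring wiring are arbitrary stand-ins for whatever each processor actually computes): gather every message from the current states first, then update all "processors" in turn, and a single sequential loop reproduces the parallel machine's state sequence exactly.

```python
# Minimal sketch: N synchronized "processors" emulated exactly by one
# sequential loop. 'step' is a toy stand-in for the real transition rule.

def step(state, inbox):
    """Toy transition rule: decay the local state and add incoming values."""
    return 0.9 * state + sum(inbox)

def run_lockstep(initial_states, wiring, ticks):
    states = list(initial_states)
    for _ in range(ticks):
        # Gather every message from the *current* states first...
        inboxes = [[states[src] for src in wiring[dst]]
                   for dst in range(len(states))]
        # ...then update all processors at once, as a shared clock would.
        states = [step(states[i], inboxes[i]) for i in range(len(states))]
    return states

# Three "processors" in a ring, each reading its left-hand neighbor.
print(run_lockstep([1.0, 0.0, 0.0], wiring=[[2], [0], [1]], ticks=5))
```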

Yeah, actually, adding processors can add functionality, if you keep in mind that we're talking about the physical functioning of a physical object. (Which is to say, physical or real computation, the kind stars and oceans and brains do.)

In that case, you can't... not for philosophical reasons, but because of the laws of physics... reduce it to a series of actions in sequence.

Take my truck, for example....

The real physical systems have to be shaped the way they are, and made of the things they are -- or real functional equivalents -- for that object to do what it does.

It cannot be made to happen very slowly, one step at a time.

And this is one of the fundamental errors at the root of the false "you can program consciousness" claim.

The brain is not symbolic, it's real. Its computations aren't logical, they're physical. Consciousness is a real phenomenon in spacetime.

And you can't substitute symbolic computations for real ones.

If you could, you could make anything happen by imagining it.
 
To the contrary, that's exactly what it would be. Perhaps not a computer program that people unfamiliar with multi-tasking & time-slicing CPUs or multi-threading software would recognise, but still a computer program. Being composed of a (very large) number of software modules emulating neurons wouldn't change that.
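For what it's worth, the neuron-module picture is easy to sketch. The snippet below is purely illustrative (a toy leaky integrate-and-fire rule chosen for brevity, not anyone's proposed architecture): however many Neuron modules you instantiate, the ensemble is still one ordinary sequential program, time-sliced on one processor.

```python
# Purely illustrative: many neuron "modules" time-sliced on one CPU.
# Whether there are 4 of them or 10^11, it remains a computer program.

class Neuron:
    def __init__(self, threshold=1.0, leak=0.95):
        self.v = 0.0            # membrane potential
        self.threshold = threshold
        self.leak = leak

    def tick(self, input_current):
        """Accumulate input, decay, and spike when the threshold is crossed."""
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:
            self.v = 0.0
            return 1.0          # spike
        return 0.0

neurons = [Neuron() for _ in range(4)]
spikes = [0.0] * len(neurons)
for t in range(20):
    # One time slice: every module is updated in turn on the same processor.
    # Each neuron gets a constant drive plus its ring-neighbor's last spike.
    spikes = [n.tick(0.3 + 0.5 * spikes[i - 1]) for i, n in enumerate(neurons)]
print(spikes)
```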

Different from what other form of cyborg?
An actual, real, built-from-scratch cyborg as opposed to a simulated one existing only in computer code.
 
To put it another way, let's use the analogy of a physical map. Suppose I had a map of roads in France. [...] The fact that both are things we put in maps does not entail they are equally likely. The argument is fundamentally broken here.

Nominated.
 
In other words, if you have a "pick-up truck that is capable of hauling a half ton load" and it's being "hosted by a computer" then obviously we've got a computer in front of us, not a truck.
I know you're just using this as a metaphor, but we're not talking about pickup trucks. In particular, we're not talking about a capability along the lines of "hauling a half ton load". Anything along those lines in the realm of consciousness is an intention, and we really don't care, regarding consciousness per se, whether it initiates an intention to lift a physical cup or a simulated one. Whatever the initiator of intent does is the same in either case.

If I just put on VR goggles and enter a sophisticated simulation, I myself can lift a simulated cup. I do that with an intention of lifting up a cup--I happen to know it's simulated in this case, but I don't think I have to learn how to pick up a virtual cup the way I would have to learn how to juggle--I'm pretty sure whatever goes on when I lift a real cup is the same thing that goes on when I lift a virtual cup.

This is not true when we're talking about trucks hauling freight. We can't just hook up the truck to a truck simulator, and have its current freight-hauling capability simply apply to virtual freight.

So you're at the very least comparing apples to oranges. As much as you want to simply apply your truck metaphor, there is a very good argument for why that metaphor simply fails, and isn't apt.

Note that no actual virtual people were invoked in the discussion of this point. All hypothetical stunts were hypothetically performed by a real actor (using existing technologies, even!)
 
So, Piggy, just to be clear. You're claiming that a simulated person cannot become really conscious, because a simulation is lacking an interpreter? (Apparently that's the thing you're claiming I don't understand, right? That everything is just a "physical computation" until it's assigned meaning by an interpreter?)

How, pray tell, does an interpreter work?

ETA: I don't need a full explanation of how an interpreter does work--just some sort of idea. You seemed to have ruled out how it could possibly work given the above description.

That question "How does an interpreter work?" is the really interesting one.

But we can't get to it unless we clear the deck of the kinds of mistakes that lead folks to think that a person who is being simulated on a computer could become conscious.

Now, I wouldn't put the case quite like you do here.

In a way, it's true that the simulated person can't become conscious because simulations involve (require) observers.

But that's one of those "I was late because my daughter thought the sprinkler was a duck" sort of statements... perhaps true, but not fleshed out enough.

The simulated person cannot become conscious because it is imaginary... that's another way of saying it, but just as unclear.

I think the key question to ask here is this: When we're talking about the man in the simulation, what all do we have to have in the room, so to speak?

Well, first, we need the simulator. We're assuming it's a computer here, which is fine. We need that thing. Let's say it's plugged in and idling.

Now, funny thing about this idling... when you think about it, you could pick any collection of parts in that machine -- atoms or molecules or bigger pieces, whatever -- and assign any number of symbolic values to the kinds of changes they're making.

And if you did that, you could then describe any number of possible worlds, and track their development over time by looking at what the machine was doing.

In other words, you could simply decide, in your own mind, that these sets of changes represented logical values for each type of change, and in each case you'd come up with a possible world.

Which is not to say that all of these worlds exist anywhere, it's just that if you decide to imagine symbolic values for part of what the machine's doing, then looking at the machine causes you to imagine these worlds in various states, depending on the state of the machine.
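A toy illustration of that point, if it helps (my own, using Python's struct module; the byte values are arbitrary): the machine holds one and the same physical pattern, and each "possible world" reading of it lives entirely in the observer's chosen symbolic assignment.

```python
import struct

# The machine's state: four bytes, nothing more. (Values chosen arbitrarily.)
state = b'\x42\x48\x00\x00'

# The *same* physical pattern under three different symbolic assignments:
print(struct.unpack('<I', state)[0])   # one observer reads an unsigned integer
print(struct.unpack('<f', state)[0])   # another reads a floating-point number
print(state[:2].decode('ascii'))       # a third reads the characters 'BH'
```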

So what does it mean to say that the world changes?

What do we need "in the room" for that to happen?

The computer can sit there and change state all day, but just as no human could examine the computer and determine the state of the world you've decided to imagine (except as one of a practically infinite number of possibilities) the computer itself could not do so, even if it somehow literally "knew" everything about itself.

The computer alone in a room does not, and cannot, constitute the world you imagine.

Why? Well, because you imagined it! It's a state of your brain. It only changes when you look at the machine and subsequently change your mind about what's going on in that world.

This fact -- and it is a fact -- is extremely important.

In order for the state of the "world of the simulation" to change -- which is to say, the world which the simulation is intended to represent -- a brain needs to change states.

There is not enough information in the computer alone to make that happen.

And this fact does not change when we somehow make the computer change in certain ways to custom fit what we imagine.

So you see, we don't -- at this point -- have to ask about how those changes in brain state come about. And that, after all, is the same as asking "How does an interpreter work?"

So we don't need to know how the interpreter works in order to know that one is involved, simply because we know that a change in brain state must be part of the system if the "world of the simulation" rather than merely the "computer" can be said to have changed.

I would love to move on to the question of how the interpreter works.

But the explanation -- or, rather, the partial and still unsatisfactory explanation we have -- isn't going to make sense to anyone who still accepts that the world of the simulation is anything but imaginary.

And the key to making the picture come into focus isn't so much focusing on the need for the brain state (or interpreter or observer)... the key is to be extremely rigorous in making clear distinctions at all times between the physical computations of the real world, on the one hand, and, on the other, the symbolic computations we imagine them to equate to when we build and use information processors.

If you avoid conflating those two things, you won't make the errors that lead to conscious people inside of simulations who are experiencing the world which the simulation is intended to represent.

I will say, though, that the process is the same as the one that's needed to understand why taking a photo of a baby doesn't mean there's a new baby in the world. It's just that there are several key reasons why that logical process is much harder to follow when it comes to computer sims... especially if you spend a lot of time dealing with computers.
 
I know you're just using this as a metaphor, but we're not talking about pickup trucks. In particular, we're not talking about a capability along the lines of "hauling a half ton load".

No, I'm not using it as a metaphor, but an analogy. There's a very important difference.

And yeah, in fact, we really are talking about something "along the lines" of hauling a half ton load. We're talking about matter and energy in spacetime, not logic.

The brain is a physical organ of the body. Just like every other one.

It works by physical computation, by real stuff happening in real time, involving real matter and energy.

That's no joke. That's the plain fact of the matter.

You cannot replace your kidney with a simulation of a kidney running on a computer.

You cannot replace your brain with a simulation of a brain running on a computer.

That's because both organs operate according to one set of rules -- the laws of physics.

Anything you replace them with must operate in a physically equivalent way.

Anything along those lines in the realm of consciousness is an intention, and we really don't care, regarding consciousness per se, if it initiates an intention to lift a physical cup, or a simulated one. Whatever the initiator of intent does is the same in any case.

Good God... you're so tied up in the world of the logic that you literally can't change your perspective to physical reality long enough to comprehend a paragraph on the topic.

None of what you're talking about is at all relevant to what I was saying.

When I'm talking about the physics, I'm talking about the physics, I am not using a "metaphor" for symbolic logic.
 
That was never in question.

You claimed that an observer would NOT be able to look at the physical hardware of the computer and determine that addition was taking place.

That claim is simply wrong, and you made it because you know nothing about digital logic.

OK, describe for me a system which, if observed by someone who knows nothing about anything except the system, would have to be "performing addition" in any way other than real addition (actually physically aggregating things).

Remember, you've got to posit a completely ignorant observer, or else you're bringing a brain into the system with the machine, and if brain states are required, then you know where we are.
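For concreteness, here's the kind of thing each side seems to have in mind -- a half-adder modeled in Python (my own toy, built from a single NAND primitive). The gate-level input/output relation is fixed by the wiring; whether an observer who maps voltages to 0s and 1s and reads the outputs as "sum" and "carry" thereby finds addition, or merely imposes it, is exactly the point in dispute.

```python
def NAND(a, b):
    """One gate type suffices; everything below is wired from it."""
    return 0 if (a and b) else 1

def half_adder(a, b):
    # sum = a XOR b, carry = a AND b, both built from NAND gates.
    n1 = NAND(a, b)
    s = NAND(NAND(a, n1), NAND(b, n1))   # XOR
    c = NAND(n1, n1)                     # AND
    return s, c

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```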
 