
Explain consciousness to the layman.

Because ours is the "real" world, duh! :)

I think we can be reasonably confident that whatever the real world consists of, we don't fully understand it. I.e. there's a gap between reality and our theory of reality. The same applies to the virtual people. Reality is the same in both cases.
 
They would have no experience or perception outside the virtual world; to them, the laws of nature would be the laws governing the behaviour of the elements of the virtual world they perceive.

On encountering a Bishop Berkeley-style message from us that his world is virtual and has no material existence, the Dr. Johnson consciousness would strike his virtual foot with mighty virtual force against a large virtual rock, from which it would rebound, sending the sensation of a stubbed toe to him, and exclaim "I refute it thus!".

Yes, and they would be wrong - objectively wrong - just as we would be wrong to think that Newtonian physics was right, or that the Earth was flat. That's why, after a long, long time, scientists don't claim that their theories are a true model of reality - just that they can predict what is happening. We expect to continue to find out more, and to make better models that will better predict what is going on.

It might be that in a closed system, the artificial consciousness will hit a dead end of scientific research - that his scope for investigation will be able to go no further than the rules programmed for him. He will continue to be bound by ultimate reality, though. What ultimate reality might be, and how it confines us, is as unknown to us as to him. We both live in the same world, however.
 
Interesting conversation. You two are right in there in the nitty-gritty of it all. I've run over the same ground as I've seen posted here in my own reflections, and I currently think consciousness is an emergent property of a combination of processing power, sensory input and adaptive coding ... or did I already say something like that.
 
You literally don't know what you are talking about, piggy.

Read a book on computer architecture and get back to us when you know something.

Well, you should be able to tell me what happened, then, which falls outside the scope of that description.

Please do.
 
Doesn't matter. I don't believe 197 of anything matched up with 203 of anything when I multiplied the two to get 39991 (sorry for picking an easy example, but what's fun is that I get to use a non-standard algorithm here).

But nevertheless, when I, in my head, said (200-3)*(200+3)=(200^2-3^2)=40000-9=39991, I really did multiply numbers. Why? Because those numbers have names, according to a convention that we set out. And the goal of multiplication is to find the name of the number that is the product of the two numbers we just named. And that's exactly what I did.
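For the curious, that mental shortcut is just the difference-of-squares identity: if a = m - d and b = m + d, then a*b = m^2 - d^2. A quick sketch in Python (the function name is mine, purely illustrative):

```python
# Difference-of-squares multiplication: for a = m - d and b = m + d,
# a * b = m^2 - d^2. With 197 and 203: m = 200, d = 3.

def multiply_via_squares(a, b):
    """Multiply two numbers symmetric about a midpoint m."""
    m = (a + b) // 2      # midpoint, e.g. 200
    d = (b - a) // 2      # half the gap, e.g. 3
    if (m - d, m + d) != (a, b):
        raise ValueError("a and b must have the same parity")
    return m * m - d * d  # 40000 - 9 = 39991

print(multiply_via_squares(197, 203))  # 39991
```

Any valid route to the name counts, which is exactly the point being made.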

That's right. We both agree about that.

You found the right name.

We also agree that no physical multiplication took place.

So we're perfectly in line here.
 
I did not perform the math directly. I cheated. I used a trick--an algorithm I learned--in this case, one I learned in algebra class. But the standard multiplication algorithm is also a trick--it's just as much a cheat. But it's multiplication.

This is because, in mathematics, how you get the answer isn't as important as simply getting the answer in a valid way. There is no single proper way to get the answer; any valid way that you use to get the answer is legitimately performing mathematics.

This is why I'm telling you it's a bad example. Anything that works in a dual space that maps to the problem space, where you use the dual to perform the work, counts as doing the work.

Actually, that's why it's such a good example.

The trick to seeing why is to make sure to always clearly distinguish between the material and the symbolic, the physical computations and the logical computations, the real and the imaginary.

How does real addition happen? Stuff that was separate moves into a group. Or maybe new stuff is created to increase the size of the group. We can observe this happening in all kinds of ways.

Computers simulate this happening, which means they use a different physical system -- their own hardware -- which gets them into the state to, as you say, find the right name.

Now this is a very different kind of thing to do. Real addition isn't part of what the machine is doing. And in fact even what it's doing when we add the human to the system, ending up with the right light pattern on a screen or pattern of ink on paper to make a human brain think of the number five, is quite distinct from real (physical) addition.

So when you say "in mathematics, how you get the answer isn't as important as simply getting the answer in a valid way", that's true, but mathematics is about as deep into the symbolic/imaginary side as you can get, so keep in mind that this point is clearly relevant only when you're discussing the "informational" side, but not necessarily the physical side.

I mean, mathematically, time is reversible, but in our lives it's not.

So you're right, the computer comes up with the name -- which is what it's designed to do -- it does not perform physical (real) addition.

And what's interesting, when you think about it, is that the computer isn't even actually simulating real addition... it's simulating mental "addition", which is itself simulating real addition.

The human brain is also in the game of coming up with the name, rather than actually adding things physically.

The brain does that, though, not by having its behavior shaped by a programmer, but by having its behavior shaped by evolution. Which actually isn't a metaphor here, since the process of evolution quite literally determines the physical shape of the brain and all the rest of the body.

And the shape and material of the brain in its environment are the things that determine what it does, how it operates. (Same for a computer, of course, or a kidney.) For these objects, symbols mean nothing... the only "rules" they can be said to follow are the laws of physics.

Over time, the physical channels in your brain which are active when you hear the sound "two plus three" and when you think about the number five come to overlap to such a degree that the cascade of neural activity that is physically inevitable when that sound hits your ear will, at some point, include the neural activity that goes on when you think of the number five. (In many conditions, it should do this even while you're asleep.)

This is how the idea of five "occurs to you" or "pops into your head" after you hear the sound "What's two plus three?" It's a matter of neural erosion.

To use the hydraulic metaphor, a flood in the "two plus three" sound area will cause heavy flooding in the "five" number area of the brain, because that's how the pipes are laid out.
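The erosion picture can be caricatured in code as simple Hebbian-style strengthening -- a toy sketch with made-up numbers, not a model of any actual neural circuit:

```python
# Toy Hebbian sketch: the channel between a "two plus three" sound area
# and a "five" number area deepens with each co-activation, until the
# flood flows through on its own. All values here are illustrative.

weight = 0.1          # initial depth of the sound -> number channel
THRESHOLD = 0.5       # depth at which "five" pops into mind unbidden
LEARNING_RATE = 0.2

def drill_once(weight):
    """One co-activation: erosion deepens the channel a little more."""
    return weight + LEARNING_RATE * (1.0 - weight)

for exposure in range(5):
    weight = drill_once(weight)

print(weight > THRESHOLD)  # after repeated drilling, the answer just "occurs"
```

The point of the sketch is only that repetition reshapes the physical channel; no symbol is consulted anywhere in the loop.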

This is not the way the computer operates.

So all 3 cases are distinct.

From a mathematical point of view maybe not, but taking that point of view wipes out the entire reason for the exercise, because that view is immersed in one panel of the triptych that we're looking at.

And I might suggest that a failure to move out of the symbol world when considering the physical world may be a big part of the reason why the "man in the world of the simulation" idea still appeals to you.
 
I agree. I think that what you proposed, albeit theoretical, is quite different to running a computer program.
To the contrary, that's exactly what it would be. Perhaps not a computer program that people unfamiliar with multi-tasking & time-slicing CPUs or multi-threading software would recognise, but still a computer program. Being composed of a (very large) number of software modules emulating neurons wouldn't change that.
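A minimal sketch of what that time-slicing looks like -- one processor round-robining through many emulated neurons, each running the same update algorithm it would run on a dedicated chip. The update rule and wiring below are invented for illustration:

```python
# Round-robin time-slicing: one chip steps many virtual neurons in turn.
# The per-neuron algorithm and the wiring below are toy inventions.

def neuron_update(state, inputs):
    """A trivial leaky-accumulator stand-in for a neuron algorithm."""
    return 0.9 * state + sum(inputs)

neurons = {i: 0.0 for i in range(4)}              # four virtual processors
connections = {0: [], 1: [0], 2: [0, 1], 3: [2]}  # who listens to whom
external = {0: 1.0, 1: 0.0, 2: 0.0, 3: 0.0}       # outside stimulus

for timestep in range(3):
    snapshot = dict(neurons)                      # all slices see one tick
    for nid in neurons:                           # the "time slices"
        inputs = [snapshot[src] for src in connections[nid]]
        neurons[nid] = neuron_update(neurons[nid], inputs + [external[nid]])

print(neurons)
```

Whether each update runs on its own chip or in a slice of one chip, the algorithm executed per neuron is identical -- which is the point being argued.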

It would be a different form of cyborg.
Different from what other form of cyborg?
 
To the contrary, that's exactly what it would be. Perhaps not a computer program that people unfamiliar with multi-tasking & time-slicing CPUs or multi-threading software would recognise, but still a computer program. Being composed of a (very large) number of software modules emulating neurons wouldn't change that.

It might be like a computer, but that doesn't mean that it would be like a computer program. In particular, the type of computer program typically specified for artificial intelligence.

Different from what other form of cyborg?

A man with an artificial limb, for a start.
 
Well, you should be able to tell me what happened, then, which falls outside the scope of that description.

Please do.

http://en.wikipedia.org/wiki/Arithmetic_logic_unit

The digital logic that occurs when performing an arithmetic operation on two bitfields is very specific -- it is a dedicated portion of the hardware. Not only is the ALU different from the rest of the hardware, but the portions of the ALU responsible for different arithmetic operations are themselves very distinct.

Any intelligent entity can look at the way the gates are set up and see that they correspond to specific arithmetic operations -- addition, multiplication, and division.

Arithmetic isn't some "generalized" computation that happens in a "generalized" computer part.
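For illustration, here is that dedicated structure in miniature: a ripple-carry adder built from the gate operations (XOR, AND, OR) that an ALU's adder section is physically wired for. A Python sketch of the textbook circuit, not any particular chip's design:

```python
# A 1-bit full adder expressed as the gate logic a hardware adder is
# wired for: the gate layout itself corresponds to addition.

def full_adder(a, b, carry_in):
    s = a ^ b ^ carry_in                        # sum bit (two XOR gates)
    carry_out = (a & b) | (carry_in & (a ^ b))  # carry logic (AND/OR gates)
    return s, carry_out

def ripple_carry_add(x_bits, y_bits):
    """Add two little-endian bit lists the way a dedicated adder does."""
    result, carry = [], 0
    for a, b in zip(x_bits, y_bits):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)
    return result

# 5 (101) + 3 (011), little-endian: 8 comes out as [0, 0, 0, 1]
print(ripple_carry_add([1, 0, 1], [1, 1, 0]))
```

An observer reading the gate layout can recover "this part adds" from the structure alone, which is the claim being made about the ALU.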
 
It might be like a computer, but that doesn't mean that it would be like a computer program. In particular, the type of computer program typically specified for artificial intelligence.

Make sure you watch your step as you backtrack, seems you are doing it at quite a good clip now. Wouldn't wanna trip.
 
But we're not talking about the brain-friendliness of what the computer did. We're talking about the addition-ness of what it did. So comparing how brain-friendly the computer's processes were to how brain-friendly the thing I did above was is a non sequitur; we should, instead, compare how "math-friendly" the computer's addition was versus how "math-friendly" our brain-friendly addition was.

No, this is not what I was talking about.

You seem to want it to be.

But in fact, I meant what I said.

And I wasn't comparing the brain-friendliness of what the computer did versus what your brain did.

I was pointing out that the behavior of the computer (the machine) is much less brain-friendly than a programmer's GUI.

The programmer can think in terms of containers which have contents that can be read, erased, added to, moved from place to place, and so forth. She can think in terms of objects and their relationships. (Like the post office model of the brain.)

All of that, of course, has nothing to do with the physical computations of the machine... and the physical computations of the machine, which is to say what you can observe the computer doing, are the only thing she is making happen, period.

Which is to say, what is really happening is the machine wiggling around when it's electrified.

The implementation of the logic is not occurring at all, in real terms.

And this is where the rubber meets the road.

Remember, the logic is merely a symbolic value that human brains associate with the physical computations (the real-world behavior of the machine).

An infinite variety of symbolic values can be imagined for the behavior of any machine, and none of them have any impact on the machine at all, nor leave any mark or sign of their existence upon it.

So anytime you want to talk about what's "really" happening, you cannot appeal to the logic of a symbolic system, because that resides entirely in patterns of people's brains.

The implementation of the logic cannot be occurring in the machine, because the logic is imaginary -- because it's patterns in brains.

This is the bit that you have not been getting, which is keeping you from understanding that a simulated person cannot become really conscious.

Once you make a clear distinction in all of your reasoning between what's going on in the wider physical world, and what's restricted to brain states, then I think it will all fit nicely into place and you'll no longer have to wonder what would happen if a person in a simulation were conscious.
 
Read a book on computer architecture and get back to us when you know something.

Btw, if you're talking about logical architecture (which you probably are) then you need to understand I was talking about what the physical machine is doing, not the logic we assign to a subset of those actions.
 
I think we can be reasonably confident that whatever the real world consists of, we don't fully understand it. I.e. there's a gap between reality and our theory of reality. The same applies to the virtual people. Reality is the same in both cases.

That is, again, and unsurprisingly, irrelevant.

The "real" laws of physics may not change, but as far as the simulated entity can ever perceive, its laws of physics are different and, from its point of view, they constitute its reality. Of course the "real" world doesn't change, but why would you mention that in the first place, unless you completely fail to understand what I'm talking about?
 
Replace "a simulation" with "the universe".

Yes, that's what I was trying to say.

A simulation... that is, the action of mimicking one thing with another thing... or in another sense the apparatus as it's operating... is a real thing, just like the rest of the real stuff in the universe.

What the simulation is supposed to represent, however, is not part of, nor even evident in, the simulation... only the brain of the programmer and reader make that association.
 
Informational. Not imaginary.

Informational, in that context, is imaginary.

In other words, in the sense in which computers are information processors but stars are not.

In that sense, informational is necessarily imaginary, because we're talking about the states of people's brains.
 
No. That doesn't follow. First, it completely ignores the statistical mechanics of each system. The key factor of a computational system is that it can exhibit behaviour starkly different from the bulk properties of the material involved. Otherwise your brain would be nothing more than a sponge.

Second, you just pulled "infinite" out of your ass.

First of all, the number of possibly simulated systems must be infinite, because every combination of particles could be simulating at least one complete system, and an infinite number of larger but incompletely represented systems.

And your use of "computational system" here is what's getting you into trouble.

You aren't distinguishing between physical and symbolic computations, which at this stage of the conversation is necessary. (Well, it always is, really, but especially now.)
 
You said "When you emulate any real thing in software... you've turned a real thing into an imaginary one". I'm saying this isn't necessarily so. When one microprocessor emulates another in software, it is just a language translation; whether it is done hard-coded on-chip or in RAM makes no difference. The same algorithm using the same instruction set can run on both processors identically. The same principle applies to one microprocessor emulating multiple others by time-slicing or other means.
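A minimal caricature of that "language translation" view -- a host stepping through a guest program written in an invented two-instruction ISA (everything here is made up for illustration):

```python
# Toy emulation: the host executes a guest program instruction by
# instruction. The two-opcode ISA below is invented for this sketch.

def emulate(program):
    """Run a guest program of ('LOAD', n) and ('ADD', n) instructions."""
    accumulator = 0
    for opcode, operand in program:
        if opcode == "LOAD":
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        else:
            raise ValueError(f"unknown opcode: {opcode}")
    return accumulator

guest_program = [("LOAD", 2), ("ADD", 3)]  # "two plus three" in the guest ISA
print(emulate(guest_program))  # 5
```

The same guest algorithm runs unchanged whether it executes on native hardware or inside the loop above, which is the sense in which emulation is just translation.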

I'm trying but failing to see what imaginary thing you think I've substituted for what 'real' thing.

The language can be touchy.

I took you to mean that at some point a computer simulation of a physical entity was taking the place of that entity within a system.

I may well have misunderstood you. I've had few programming courses, and they were several years ago, so you can't count on me to get shop talk.
 
Exactly what simulation are you referring to? I talked about replacing neurons with functionally equivalent chips running neuron algorithms, which you accepted, and replacing groups of those chips with a single multi-tasking chip that emulates them - still running the original neuron algorithms on each virtual processor. What has become imaginary?

Sorry, I obviously misunderstood you there.

Looks like you're just changing the chair legs.
 
Now this is a very different kind of thing to do. Real addition isn't part of what the machine is doing. And in fact even what it's doing when we add the human to the system, ending up with the right light pattern on a screen or pattern of ink on paper to make a human brain think of the number five, is quite distinct from real (physical) addition.
Could you please tell me what "real addition" is?

ETA:
The implementation of the logic is not occurring at all, in real terms.
Could you please tell me what "implementation of the logic in real terms" is as well?
The implementation of the logic cannot be occurring in the machine, because the logic is imaginary -- because it's patterns in brains.
I think you're confusing the intensions with the extensions.
 