
Explain consciousness to the layman.

The problem with this question is that it assumes that a "computer" actually can "replicate all the functions of a human brain".

Despite the claims of the computational literalists, this does not appear to be the case -- but we'll have to delve into that in another post.

So we must rephrase -- if someone actually built a machine which replicated all of the functions of a human brain, would it be conscious?

The answer is yes.

What would be the differences between the mechanical brain and a biological brain?

Well, that's not a question that can be answered at the moment.

To answer it, we'll first need to know precisely how the human brain performs experience... only then will we know which features of the biological brain need to be replicated and which do not in order for a replica brain to be conscious.

So basically you're telling us that the Brain Works In Mysterious Ways.
 
I think that's exactly what it is; it continually models an internal, virtual reality based on perception of incoming signals. It's limited in scope and resolution, but if our internal model isn't a simulation, what is?
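To make that idea a bit more concrete, here is a minimal sketch of what "continually modelling an internal reality from incoming signals" could look like. The names and numbers are invented for illustration; this is a loose analogy, not a claim about how the brain actually does it.

```python
import random

# Toy illustration only: an "internal model" as a running estimate that is
# continually updated from noisy incoming signals. The estimate never equals
# the world itself; it is a limited-resolution stand-in built from perception.

def sense(world_value, noise=0.5):
    """Perception: a noisy, band-limited reading of the real quantity."""
    return world_value + random.uniform(-noise, noise)

def update(model, signal, rate=0.2):
    """Fold the latest signal into the internal model (simple smoothing)."""
    return model + rate * (signal - model)

world = 10.0   # the actual state of affairs "out there"
model = 0.0    # the internal stand-in, initially ignorant

for step in range(50):
    world += random.uniform(-0.2, 0.2)    # the world drifts on its own
    model = update(model, sense(world))   # the model only ever sees signals

print(f"world={world:.2f}  internal model={model:.2f}")
```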

Well, to me, this is really the interesting question.

Our brains certainly are not simulator machines in the sense that, say, a flight simulator is a simulator machine -- that is, the brain is not designed to trigger another brain to imagine things.

And when we talk about the non-conscious brain, which is where we must begin when we ask how consciousness evolved and how it's generated, then we certainly cannot talk about any simulation taking place (the necessary components are not there).

Yes, there's a cascade of reactions when, say, light bounces off a tree and onto my eyes. There may be an involuntary response, such as squinting, but that involves no simulation of anything.

And we'd be making a mistake to claim that this cascade of activity in my head "is an image of a tree" or "is a representation of a tree" as far as the work of the non-conscious brain is concerned.

As the impulses are working their way down the optic nerve, for example, there is simply no way we can meaningfully identify these as a "representation" or "image" of anything... at this point, they are merely a physical outcome of the light striking my eyeballs.

It's only when the brain performs an experience (i.e. when it is "consciously aware of the tree") that we can begin to discuss the performance of colors and textures and smells and such, and only then does it make sense to speak of representations or simulations.

Which raises an interesting question... does the conscious mind use the non-conscious brain as a kind of simulator?

I think it very well might make sense to frame it this way. And in fact, it may turn out to be the only option that does make sense.

Viewed that way, the physical processes in the brain which produce experience depend upon the activity of other physical processes in the brain, and in fact use that activity to perform green and wet and cold and everything else we experience.

And in that way, the conscious brain does act like a simulator, because the physical activity of the brain is not the same as either the experience it produces or the target of that experience (absolutely nothing about your experience of a tree is present in the tree, and your brain does not behave like a tree).

So if we want to build a machine that does this too, it's not enough to build a machine that simulates a brain from the point of view of an already-conscious designer/reader brain.

First we have to understand the mechanism which allows certain patterns of impulses in the brain (but not all of them) to be integrated as a performance of experience within the brain itself, and how and why these mechanisms produce the particular experiences they produce.

Once that's understood, then we can go about (perhaps) designing and building machines which also work this way.

But without that understanding, we can build all sorts of machines that simulate things to our observing conscious brains, and none of them will perform the feat of experience themselves, because they're not built to.
 
So basically you're telling us that the Brain Works In Mysterious Ways.

At the moment, yes. With research, those ways become less mysterious. And although with regard to consciousness the brain's ways are much less mysterious than they were a generation ago, we still don't have answers to the most important questions.

Of course, one is free to pretend otherwise and simply believe that these problems have been solved when they haven't, but that leads one into a territory rife with dragons.
 
At the moment, yes. With research, those ways become less mysterious.
I don't think you understand what he's saying, Piggy.

You claim that it is impossible for a computer to produce a reference. And yet, somehow, our brains wind up doing it.

How would you respond to the notion that it is impossible for a brain to produce a reference? You claim that only the brain knows something, but you haven't even an outline for how it gets to do that. There are those crazy whack jobs who claim that knowledge itself is impossible.

Are they right? Is it impossible for the brain to know and understand? I mean, sure, we know it does know, wink wink, and we know it does understand, wink wink. But how could it possibly do that?

You're the one using the impossibility argument against computers. I want to know if your impossibility argument would fail if we sanity checked it against the thing we know does know and does understand things. So far, I don't see that argument failing at all when we apply it to the brain. That should be an indication to you that something is fishy about your argument.

In other words, that thing in your warm fleshy bit that is the "right big toe" could map to anything--this is the same problem you point out with the computer. What is it about that thing that makes the idea of the right big toe about your right big toe?

It's really easy when you think about it. The only hard part is that you have to admit you're wrong first.
 
Didn't we go over this before? There's no such rule, and there are so many counterexamples as to make this rule perverse. With just enough energy to shove a glass off a table, I can cause it to break. With just enough energy to turn a steering wheel, I can cause a 40-car pileup. With just enough energy to start the big bang, everything that ever was or will be in the universe followed. Any time you knock any system out of equilibrium, the effects can follow indefinitely. The famous butterfly-flaps-its-wings analogy of huge chains of effects from the tiniest of causes violates exactly no laws of physics--and how much energy do you suppose a flap of a butterfly wing uses?

There is no such thing as a law of physics that states that with only enough energy to cause A you cannot cause B to happen. The law of conservation of energy isn't such a law; after you cause A, you still have exactly as much energy as you had before you caused A. If you just up and cause B anyway, you'll still have the same amount of energy. If you cause A and that causes B, you still have the same amount of energy. If you knock over a domino (A), and it causes another domino to fall (B), and another (C), and another (D), and another (E), and so on, you can get an indefinite chain of falling dominos, with no violation of energy conservation anywhere along the chain.
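A rough worked example of that energy bookkeeping, with numbers invented purely for illustration: the push supplies only a tiny nudge, each domino releases its own stored potential energy when it topples, and the total energy is conserved throughout.

```python
# Toy energy bookkeeping for a domino chain (illustrative numbers only).
# The point: the initial nudge is tiny, each domino releases its *own*
# stored potential energy when it topples, and the total is conserved --
# motion, sound and heat appear, but no energy is created or destroyed.

g = 9.81                 # m/s^2
n_dominoes = 100
mass = 0.01              # kg per domino
drop_of_center = 0.02    # m the center of mass falls when a domino topples

nudge = 0.001            # joules supplied by the finger
released_per_domino = mass * g * drop_of_center
released_total = n_dominoes * released_per_domino

print(f"energy supplied by the push: {nudge:.4f} J")
print(f"energy released by the chain itself: {released_total:.4f} J")
# The effect dwarfs the "cause", yet the books balance at every step.
```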

There's a fatal problem with your reasoning here.

First, if you think that only the matter and energy involved in the motion of your hand were involved in the glass breaking, or that only the matter and energy involved in your turning the steering wheel were involved in the 40-car pile-up, then you need to rethink those scenarios.

In fact, you can try turning your steering wheel while your car is on a lift, or moving the glass while it's in the middle of a table, and you'll see what I mean.

But computational literalism does not maintain that the "world of the simulation" is something that happens merely as a consequence of the behavior of the simulator (that is in fact my own position -- that the simulation happens in an observer's brain, using the matter and energy of the brain, just as your examples here require matter and energy beyond the A system).

Rather, their claims rely on the behavior of the simulator and simulation being literally identical.

So yes, you're right that there's no law of physics which prevents one real event from causing another real event.

And in the case of simulators the first real event is the physical behavior of the simulator (which is not the same as the behavior of the target) and the second real event is an act of imagination.

What is impossible is not for event A to cause event B, but for event A to actually be event B, when B is not the same event as A.

That's why the behavior of the simulator cannot be the behavior of the target system as well, as long as the target system is something other than the simulator machine itself (a redundant simulation).
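One toy way to picture the distinction being drawn here, using a made-up example rather than anything from the thread: a program that "simulates" a pendulum only ever updates numbers in memory, and reading those numbers as the angle of a swinging pendulum is an interpretive step taken by whoever looks at the output.

```python
import math

# A minimal pendulum "simulator". Physically, running this is just a CPU
# flipping bits and updating two floats; calling the float `theta` the
# "angle of a swinging pendulum" is an interpretation supplied by a reader
# of the output, not a property of the hardware's behavior.

g, length, dt = 9.81, 1.0, 0.01
theta, omega = 0.5, 0.0   # read as angle (rad) and angular velocity (rad/s)

for _ in range(1000):
    omega -= (g / length) * math.sin(theta) * dt
    theta += omega * dt

print(f"state after 10 simulated seconds: theta={theta:.3f}, omega={omega:.3f}")
```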
 
As for yy2's post, I'd love to address some of those issues, but I'm not going to get into a discussion using the terms he insists on using... we've seen where that ends up. By employing terms like "sign" and "symbol" in the way he wants to do, we get into a conflation of the real and symbolic right off the bat, so as long as he demands that we speak his jargon, I'm afraid it's a non-starter.

Man I wish I knew how to include that laughing dog gif.

I feel bad for yy2bggggs because he took the time to not only read your mega post but also to respond inline, which probably took him at least 30 minutes. And you are writing it off with this "I'm afraid it's a non-starter" dodge?

But I hope he takes some comfort in knowing that it was a very good response and anyone who takes the time to read it will instantly forget everything you said in your mega post.

I mean, if you want to understand how a vending machine operates, you're off on the wrong foot if you begin by saying that the machine "recognizes" coins as "tokens" with "values", for instance. Using anthropomorphic metaphors to discuss these matters is a surefire ticket to confusion.

Well, he isn't talking about vending machines. He is talking about consciousness and symbolic logic.

You don't think the terms "recognizes," "tokens," and "values" have anything to do with consciousness and/or symbolic logic?
 
I don't think you understand what he's saying, Piggy.

You claim that it is impossible for a computer to produce a reference. And yet, somehow, our brains wind up doing it.

How would you respond to the notion that it is impossible for a brain to produce a reference? You claim that only the brain knows something, but you haven't even an outline for how it gets to do that. There are those crazy whack jobs who claim that knowledge itself is impossible.

Are they right? Is it impossible for the brain to know and understand? I mean, sure, we know it does know, wink wink, and we know it does understand, wink wink. But how could it possibly do that?

You're the one using the impossibility argument against computers. I want to know if your impossibility argument would fail if we sanity checked it against the thing we know does know and does understand things. So far, I don't see that argument failing at all when we apply it to the brain. That should be an indication to you that something is fishy about your argument.

You're wrong on both counts.

I can see that discussion with you still leads nowhere, so don't expect me to keep it up for long.

Here's what tsig was responding to:

What would be the differences between the mechanical brain and a biological brain?

Well, that's not a question that can be answered at the moment.

To answer it, we'll first need to know precisely how the human brain performs experience... only then will we know which features of the biological brain need to be replicated and which do not in order for a replica brain to be conscious.

And my response was correct. Until we know what the brain is doing when it performs experience, we cannot build systems that do the same thing.

And I've never said that it's "impossible for the brain to produce a reference".
 
Man I wish I knew how to include that laughing dog gif.

I feel bad for yy2bggggs because he took the time to not only read your mega post but also to respond inline, which probably took him at least 30 minutes. And you are writing it off with this "I'm afraid it's a non-starter" dodge?

But I hope he takes some comfort in knowing that it was a very good response and anyone who takes the time to read it will instantly forget everything you said in your mega post.



Well, he isn't talking about vending machines. He is talking about consciousness and symbolic logic.

You don't think the terms "recognizes," "tokens," and "values" have anything to do with consciousness and/or symbolic logic?

I'll respond to what I can, but we've got to lose the anthropomorphic metaphors.

And it's not a matter of "having anything to do" with the topic... it's a matter of clarity and precision.

yy2's response is riddled with the same sorts of errors we've discussed before. I can discuss those errors, but not if we're forced to use language in which those very errors are implicit.
 
Rather, their claims rely on the behavior of the simulator and simulation being literally identical.

Huh?

Of course the behavior of the simulator and simulation are identical -- the simulator is the simulation. They are one and the same.

How else could it be?
 
In other words, that thing in your warm fleshy bit that is the "right big toe" could map to anything--this is the same problem you point out with the computer. What is it about that thing that makes the idea of the right big toe about your right big toe?

Well, now, this is the interesting bit, isn't it?

If you start to think about what my brain is doing in terms of what simulators do, there's no way to explain it, is there?

This should give you yet another clue that my brain is doing something different from what the simulator machine is up to.
 
And I've never said that it's "impossible for the brain to produce a reference".

You said it was impossible for a computer to do so.

The question yy2bggggs is trying to get you to answer -- and I think everyone else would like to know as well -- is why you think it is impossible for a computer to produce a reference but not impossible for a brain, given that you freely admit you have no idea how a brain does it.
 
Huh?

Of course the behavior of the simulator and simulation are identical -- the simulator is the simulation. They are one and the same.

How else could it be?

If you had read the "mega post" you'd know my answer.

In fact, not only are they not the same, but they cannot be the same.
 
If you start to think about what my brain is doing in terms of what simulators do, there's no way to explain it, is there?

But you admit elsewhere that there is no way to explain it yet using any other terms, either!!!
 
Well, now, this is the interesting bit, isn't it?

If you start to think about what my brain is doing in terms of what simulators do, there's no way to explain it, is there?
Actually, there is a way. In fact, I gave you such an explanation. You just didn't understand it.

"Performing this action" in the neural network results in that toe moving back and forth. What is "that toe"? Well, it's defined by your perception of the toe--which is a representation within that warm fleshy bit between your ears of the toe. Now I can hardly be accused of referring to a simulation here, because my referent is that warm fleshy bit.

But we know that "performing this action" is wiggling the toe precisely because of that feedback loop. It's "the toe" because we defined the toe in terms of that perceptual reference.

This isn't defining the brain in terms of what the simulation does--it's simply describing how the brain does it.
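A minimal sketch of the kind of feedback loop described above, with invented names (a "motor_command_42", a "toe" percept) standing in for whatever the real signals would be: the command's referent isn't a pre-attached label, it's fixed by which percept the command reliably changes.

```python
# Hypothetical toy of reference-by-feedback: a motor command counts as
# "wiggling the toe" because issuing it is what reliably changes the
# percept we call the toe. Nothing here depends on a pre-existing label.

percepts = {"toe": 0.0, "finger": 0.0}

def motor_command_42(state):
    """An unlabeled motor output; its effect shows up in one percept only."""
    state["toe"] += 1.0

def which_percept_changes(command, state):
    """Run the command and report which perceptual channel it moved."""
    before = dict(state)
    command(state)
    return [name for name in state if state[name] != before[name]]

print(which_percept_changes(motor_command_42, percepts))   # ['toe']
# The command's referent is fixed by this sensorimotor loop: it is "about"
# whatever part of the perceptual model it moves.
```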

Now I'll tell you what's interesting. You even dared to include the word "interesting", but you didn't actually explain how this could possibly work.
This should give you yet another clue that my brain is doing something different from what the simulator machine is up to.
Yeah, I know. Magic beans.
 
If you had read the "mega post" you'd know my answer.

In fact, not only are they not the same, but they cannot be the same.

Well, I see the issue here -- you just redefined "simulation" willy-nilly and expect everyone to understand wtf you are talking about, yet you yourself don't even know.

Look at these two quotes you forced me to dig out of your mega post:

piggy said:
Well, for instance, I could use a computer simulation of a piece of furniture, say a chair,

...snip...

That’s because only the machine running the simulation (which we’ll call the simulator) is independently physically real (more on this later)
piggy said:
This is true, the simulation is “real”. But it doesn’t exist in the simulator.

...snip...

Where is the simulation?

...snip...

He is only real as a state in the brain of an observer in the audience. He exists as an act of imagination, which involves matter and energy in the brain.

So you are saying the simulation "runs on" the simulator, and the simulator "runs" the simulation, but that the simulation is taking place "in" the brain of the observer?

Man I am really confused now...
 
And my response was correct. Until we know what the brain is doing when it performs experience, we cannot build systems that do the same thing.

And I've never said that it's "impossible for the brain to produce a reference".
I never said that you said that it's "impossible for the brain to produce a reference". What I said was that I can apply your argument to the brain, knowing that it does in fact produce reference, and I have no way to stop the argument from reaching that same conclusion except by special pleading.

This is called a "sanity check", Piggy, not putting words in your mouth. The test of your theory that computers cannot do X is whether it only rules out things that genuinely cannot do X; the way the sanity check works is to apply the argument to something we know does do X and see whether it concludes that it, too, is incapable of X. The problem is that there's no stopping point in the argument... it plows right through the brain and reaches that conclusion. Unless we add special pleading, of course.

Nothing breaks down when applying your argument to the brain, though. Not only am I forced to conclude that the "right big toe" in that warm fleshy bit could be about anything, but that conclusion is actually right. We have to perform work to figure out how our existing knowledge is a dual of some other thing--a simple reassignment of the same concepts works beautifully far more often than we are able to recognize that the reassignment is possible.

But if I actually reach out and touch a cup, I can say that this is the thing I mean by "my cup". I can pick it up. What is happening here is that I'm referencing the real cup, but I only do that using my brain. My brain has a model of the cup, and that's tied to reality via the percept of the cup, which also happens in my brain and is related to that model. And it plays into my model of how I move and how I trigger brain processes, such that when I reach out and touch it, I know my hand is touching the thing I mean by the cup.
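A toy sketch of that cup example, with all names invented for illustration: a stored model of "my cup", a percept that links it to the world, and an action that closes the loop by checking for the feedback the model predicts.

```python
# Toy sketch only: an internal model, a percept that links it to the world,
# and an action that closes the loop by checking the expected feedback.
# All names here are illustrative, not a real cognitive model.

model = {"my_cup": {"expected_color": "blue", "expected_feel": "smooth"}}

def perceive_cup():
    """Stand-in for the visual percept currently tied to the model."""
    return {"color": "blue", "location": (0.4, 0.1)}

def reach_and_touch(location):
    """Stand-in for acting on the world at that location and getting tactile feedback."""
    return "smooth"

percept = perceive_cup()
felt = reach_and_touch(percept["location"])

# The reference succeeds when acting on the perceived thing yields the
# feedback the model predicts for "my cup".
print(felt == model["my_cup"]["expected_feel"])   # True
```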

There's no problem accounting for this at all. The only problem here is that given this account, we're forced to conclude that an android can do it. And since you don't want to conclude that, you're left holding your magic beans.
 
So why should it surprise you that we cannot explain it in terms of symbolic computation?

Well, I tend to think that the logic used by the human brain can only realize that W is not an explanation for event E because it has already considered X, Y, and Z as more plausible candidates.

Thus I would expect that if we are so sure we cannot explain it in those terms, we would have other terms in mind.
 