
Explain consciousness to the layman.

It's not a strawman, it's an analogy. Sheesh.


Straw man fallacy
A straw man is a component of an argument and is an informal fallacy based on misrepresentation of an opponent's position.[1] To "attack a straw man" is to create the illusion of having refuted a proposition by replacing it with a superficially similar yet unequivalent proposition (the "straw man"), and refuting it, without ever having actually refuted the original position.[1][2]
 

That Wikipedia article defines a cyborg as 'a being with both biological and artificial (e.g. electronic, mechanical or robotic) parts'.

The entity we've been discussing (i.e. artificial consciousness and humanoid mechanical body) has no biological parts.

It doesn't really matter, it just confused me.
 
That's irrelevant to my point. I'm sure you really want to make a point to correct something I never said, but all I need to establish is that at least one kind of physical lump of matter does this thing.
No…. Have you heard of the hasty generalization logical fallacy?
Why, yes, I have.
Can you show me another?
I don't know. You're not even defining your terms.
We are arguing about whether a silicon chip can become a conscious entity.....not whether there are aliens and gods.
Not exactly. I'm arguing that some of the arguments I've seen here are invalid.
The above is nothing but an argument akin to RELIGIOUS arguments.
How flattering of religious people to have you associate their arguments with my argument style! I'm sure they're going to appreciate the compliment.
We were discussing what we KNOW HERE AND NOW….. not what might be lurking in some corner or GAP of the universe.
That word "know" has a very specific meaning to me, and it's one of the few things I'm not going to budge on.
Are you a theist? By your theistic style of argumentation above, we cannot dismiss Fairy Godmothers or Thor and his hammer, since they could be out there too.
Well, no, I'm not a theist. But you're apparently not understanding my style of argumentation here.

The key difference, and it makes all the difference in the world, is that you have an entity that is conscious. You have, effectively, a physical proof of concept demonstrating that it not only can exist in theory, but can and does exist in practice.

You cannot say that about Thor or Fairies.
We were discussing consciousness as we know it here on earth….. not the possible SPIRITS and other constructs that you and theists can concoct.
No, you're talking about what it is possible to construct in another substrate.

Your line of reasoning is analogous to reasoning that we cannot build a flying contraption because all of the flying devices we know of are biological.
 


No!!
 
No… my argument is that an imagined conscious silicon chip running a simulation is just that…. an imaginative construct.
Okay, sure. An imagined conscious chip is an imagined construct.
Whether it might come to be true or not is not proven by the strength of the imaginative process nor by the fact that you can imagine it.
Sure. However, there's an important caveat here that you're not appreciating. My imagination can apply reasoning. I can imagine a work of art and actually bring it into being. I can imagine a computer program and actually code it. Furthermore, imagination is where all of this reasoning about how to do things comes from--and, sure, we can get the hows wrong. But we can also get them right.

I'm not saying merely that "it doesn't mean it can't exist"; I'm saying that you cannot argue that because the chip is imagined, any arguments for why it can be conscious must be worthless. This sort of argument rules out any and all reasoning we do.

And reasoning, I hate to tell you, works. Even if we reason about a thing that has never existed before--all we need do is reason it out properly.

So if you're simply trying to establish that just because we can imagine it doesn't mean it's possible, I concur fully. But if you're trying to argue that since your opposition is imagining it, then they don't have a valid reason to reach a conclusion that it can exist, then you're full of it. You're just making up rhetoric in the latter case, and it has no value.

If what they are saying is valid, it's valid. And if their arguments are sound, they're sound. And of course it's going to be in their imagination, but that's neither here nor there when we consider whether or not they have a point.
If you think that being able to imagine the device makes it more possible, then consider how possible Superman is by that same virtue.
No, and I even claimed I didn't think it. But I do think that we are capable of reason, using only our imagination.
Don’t forget that the post was in reply to your gobbledygook in this post about imagined stuff being real stuff since it is occurring in a real brain.
Reread that post.
The likelihood of something is dependent on the laws of physics and material reality not on what fictive process can be realized in someone's head.
These heads are conscious heads. And we're talking about conscious entities. That's the point you're missing--we're talking about the very ability to create an entity that imagines things.
So if a silicon chip becoming conscious violates the laws of physics then no matter how hard we try it will not happen.

If on the other hand it does not violate the laws of physics then it may be possible.
Right. But just because you say it violates the laws of physics doesn't mean it violates the laws of physics. And just because a silicon chip is not a brain doesn't mean it violates the laws of physics for it to exhibit the behavioral properties of one.

Meanwhile, this was the position that you stated that I was calling out:
We can IMAGINE that a simulated sentient world can exist in the ones and zeros of silicon chips.....but that does not mean that it is POSSIBLE for this imaginary construct to actually exist.

There are REAL PHYSICAL constraints why it cannot exist. These constraints cannot be IMAGINED AWAY.
And you were backing this up with rhetoric. What you were doing looked more like rationalizing your position and masquerading that as reasoning. And that was my problem with it. I didn't believe any of the rationalizations were legitimate reasoning.
Is a sentient silicon chip running a simulated program a violation of the laws of physics?
I contend that it MIGHT be so.....
Sure... I'm fine with a weakened position. But don't misrepresent your opposition. They're not arguing that a simulation of a brain will produce consciousness just because they can imagine that it will. They're arguing that it will (or might, depending on who is doing the talking) because actual, physical categories of entities would have to be behaving in a sufficiently analogous way. The physical entities being referred to here are the necessarily existing substrates on which the simulation is being run--I think this is better understood if you ignore the entire "world of simulation" in itself and just imagine that the machine running it has to behave a particular way to make that simulation happen.

That particular way that it behaves is what is being argued will produce consciousness. Those machines have to be real--if they aren't, the simulation is not running.
 
Yes, and you're not addressing it.

Sure, I can't wait.

But you have a bootstrapping issue here. If a simulated person can't become conscious because simulations require an observer, then neither can a brain become conscious because a brain requires an observer. After all, the ultimate simulation is the real thing. And to just start with our knowledge that a brain does have an observer is jumping the gun--even begging the question.
But this is where what people are really trying to tell you comes into play.

You say that simulations are imaginary. But why, if simulations are so imaginary, do we bother to actually build them? The answer is obvious. It's because we need them to actually exist in order to do something.

If you want an imaginary simulation, have Donald Duck build it. If I build it, it's going to be real. I'd really like to reserve the concept of "imaginary simulation" to the thing that Donald Duck builds in a cartoon, because if I'm the boss, and I pay good money to someone to run a simulation for me, they had better be able to put their hands on some physical device that is actually performing a bit of work. I want to have some way of firing them when they simply "imagine" that the simulation exists.

But it gets worse...
No no no. You're trying to teach me a lesson by consigning all simulators to the world of Donald Duck. But you're coming up with all of your own tests.

I think I'll make a test of my own. My concern is that whatever rules you're coming up with have a bootstrapping issue. So to test this out, I want to make my simulation be an actual interpreter.

That's not a big problem. I need someone to simulate... I volunteer. I'm of sound mind and body, and I hereby attest to my own conscious state. Furthermore, being simulated requires no effort on my part--which is perfect for me!

But now I need something to simulate me. A computer is too easy--we can have debates indefinitely because some people have a major issue seeing a computer as being a real agency, while for others it's no problem. I have a much better idea.

I'm going to pick another physical system to simulate me. In fact, I'm going to choose as the simulator--westprog's brain. Oh, but don't worry, I'm not simply going to leave it at that--if I don't actually do any work building the simulation, then we can hardly call it one. So here's what I'll do. I'll set up the simulator by asking westprog to pretend he is me for the next few moments.

Now, you play with your simulation of a conscious entity, and I'll be running along with my own.
Sure. I'll give westprog a slinky and have him wait in a chair while I plan out the mapping from the entities in the physical system that is simulating me to the entities in the simulated system. For giggles, I just had an interesting idea. While running my simulation, I'm going to run it in a simulated world. In fact, I won't even try too hard to make it realistic... I always wondered what it would be like if I could actually visit Candy Island. So I'm going to write a Candy Island simulator. To attach it to my simulation, I'll use virtual goggles and a headset.
That's true. To start with, whenever westprog says anything in English, while pretending to be me, I'm going to map that to the equivalent English. We'll call that run "A". Maybe I'll run the simulation again, in run "B", and XOR the letters of each word uttered in English together, map that to ASCII text, and presume that my simulation is saying that.
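To make that run "B" mapping concrete, here's a minimal sketch of what it could look like (the function name and the sample sentence are my own illustration, not anything westprog actually computes):

```python
from functools import reduce

def run_b_mapping(utterance: str) -> str:
    """Map each English word to one character by XOR-ing the
    byte values of its letters together (the run "B" scheme)."""
    chars = []
    for word in utterance.split():
        xored = reduce(lambda a, b: a ^ b, (ord(c) for c in word))
        chars.append(chr(xored))
    return "".join(chars)

# A perfectly sensible English sentence comes out as gibberish;
# even-length words tend to land below ASCII 32 (control characters).
print(repr(run_b_mapping("You can even eat the trees here")))
```

XOR-ing a pair of lowercase letters cancels their shared high bits, so many of the outputs fall in the unprintable control-character range--exactly the sort of thing that is unpronounceable and messes up a terminal.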
Right. In run "A", westprog says, "I can't believe it! You can even eat the trees here!" ...or something similar. I know because I always wanted to say that when I visited Candy Island.

In run "B", westprog's just spouting out gibberish, most of which is not only not pronounceable, but messes up my terminal.

Well in my case, the simulated me is running around in a fictional virtual world where he is on an island made entirely of candy.

And therefore, westprog cannot experience Candy Island. According to the rule.

But now I'm getting suspicious. I know westprog is conscious, and would even be conscious while pretending to be me. Even when he is immersed in a fictional world.

So something about your rule is bothering me.
That's correct. I am the one that is imagining that westprog is me in Candy Island! It's all how I interpret the results when I look at where westprog is!

Only, something is wrong here. westprog is conscious the entire time, when I'm looking at the results or not. In fact, I'm sort of jealous--he's actually experiencing the wonderful virtual immersed world of Candy Island. I'm just peeking in from time to time trying to infer how much I would enjoy it if I were that simulated entity...

Yeah, yeah, I know. That's a mistake. I'm anthropomorphizing westprog.
Well, certainly this particular one is true. westprog's brain state changes. Nevertheless, something is a bit odd about your application here. I can't help but think that this intended representation is supposed to be the one in my brain.

Well, let's try scenario B. Oh, yeah. Gibberish. That simulation's not working. I'm going to be nice to westprog, though, and not suggest dismantling it to debug it.

Well, in my case, I think there's plenty of information in my simulation to make this happen. In fact, there's information to spare, and a lot of that information is probably doing other important things.

But it changes when I put something I know is an interpreter in the same situation. Maybe I should invoke special pleading when I do that.
My simulator's interpreter works just fine, thank you. But per all of the rules we went through, I am supposed to presume it doesn't work.

That's the problem. Now, you never bothered to consider the problem--if you'll note, my entire question to you was: if your rules were true, then how come we are interpreters? Hopefully you'll see what I'm getting at, since in my case I added an example that actually was an interpreter, and all of your rules still suggested that I treat it as if it wasn't one.

Your rules are garbage. They conclude that the machine is not an interpreter. And I'm sure that's the conclusion you wanted to get. The problem is, they conclude that even if I put an interpreter in. So I don't trust that those rules actually even work.

You'd better, before you start defending silly claims such as that westprog is not conscious.
Well, westprog was only pretending to be me, sure. But he's a real person. (Well, technically, in this case, he was imaginary, because it's a hypothetical experiment, but I hope you see that I was actually trying to raise that problem, which you managed to completely brush off with a long post.)

I want you to note something here though. I never believed the simulated version of me was actually me. But the simulated version of me here was actually westprog. And he was conscious. And he is real.
It depends. If their simulation is being run in a Donald Duck cartoon, you have a point. If not, see your problem case above.

I'm not sure what Piggy was claiming. However, back when I was only myself, I don't believe that I ever claimed that a simulation wasn't conscious. I simply claimed that we have no evidence that a simulation is conscious. If we start with something that's conscious in the first place, then indeed it will be conscious when pretending to be something else. If we start with something that isn't conscious, then I see no reason that it becomes conscious by pretending to be someone else.
 
Ok, describe for me a system which, if observed by someone who knows nothing about anything except the system, would have to be "performing addition" in any way other than real addition (actually physically aggregating things).

Remember, you've got to posit a completely ignorant observer, or else you're bringing a brain into the system with the machine, and if brain states are required, then you know where we are.

O I C, so the observer has to be completely ignorant, except not ignorant of your concept of "actually physically aggregating things?"

Aside from the fact that this is obviously special pleading on your part, can you explain this: when you "aggregate" rocks in your backyard, those rocks are actually "replacing" the air where they lie, so it isn't "aggregating" anything unless someone is there to tell the difference between rocks and air.

How does that factor into your argument?

Seems to me the only genuine "aggregation" would be the aggregation of fundamental particles in empty space.
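One way to make the interpretation point concrete: the very same state transition only counts as "addition" relative to a chosen decoding. A minimal sketch (both decodings here are my own invention, purely for illustration):

```python
# One "physical" event: an 8-bit register changes from one bit
# pattern to another. The two patterns are the raw facts.
before, after = 0b00000101, 0b00001000

# Decoding A: read the patterns as unsigned integers.
# The transition looks like 5 becoming 8, i.e. "add 3".
print(after - before)  # 3

# Decoding B: read each pattern as its count of set bits.
# The very same transition is now 2 becoming 1: no "add 3" anywhere.
print(bin(before).count("1"), "->", bin(after).count("1"))  # 2 -> 1
```

Whether the register "performed addition" depends on which decoding the observer brings to it, which is the bootstrapping worry in a nutshell.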
 
I would like to try another approach to this that hopefully doesn't spiral out of control. Let's partake in a thought exercise, taking baby steps, and see where people start to diverge.

So everyone here is in some sort of space, which they can perceive only to a finite distance (meaning that even if they are outdoors, there is some limit to the distance at which they can see things happening), looking at a computer and thinking about what is on the screen.

Does anyone disagree?

Furthermore, in this situation, everything that exists is composed of fundamental particles arranged and behaving in various ways. There is no requirement that we humans be aware of all the types of particles, or that we understand them fully; only that everything in the situation be composed of "something," and we can call the smallest units we can observe "fundamental particles," mainly because that is what physicists term them.

Does anyone disagree?

Furthermore, the behavior of the particles can be described with mathematics. It is possible that new behaviors will be discovered that current mathematics can't describe, but when that happens we supplement and improve our mathematics, so that in the end we can always describe all the behavior of all the particles we can observe.

I understand that things like quantum uncertainty are present, but those types of things can be counted under behaviors that aren't necessarily fully observable at all times, and anyway I think they are probably immaterial to the discussion -- we are concerned with the deterministic behaviors of particles here.

Does anyone disagree?

So to sum up the first baby step is to come to an agreement that we and the spaces we inhabit are composed of particles the behavior of which can be described mathematically. Whether the particles "follow" mathematics, or whether mathematics is just the "description" of the particles, or anything in between, isn't necessarily relevant.
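As a toy illustration of "behavior described mathematically," here is a single particle stepped deterministically under one force law (the numbers and the Euler update are my own, purely illustrative):

```python
# One particle under constant gravity, advanced deterministically:
# position and velocity follow x += v*dt, v += a*dt at each step.
dt = 0.01          # time step, seconds
x, v = 0.0, 10.0   # initial height (m) and upward velocity (m/s)
a = -9.81          # constant acceleration (m/s^2)

for _ in range(100):   # simulate one second
    x += v * dt
    v += a * dt

print(f"after 1 s: height {x:.2f} m, velocity {v:.2f} m/s")
```

Nothing here hinges on whether the particle "follows" the mathematics or the mathematics merely describes it; either way, the description pins down the behavior.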

If nobody disagrees we can move onto the next baby step in the thought exercise.
 
Yes, but you've failed to consider in detail this question: Can the object you propose to use here actually do this?

That is, can your computer running your simulation of the brain simultaneously produce those outputs from the inputs and make the body it’s in (whether it’s mechanical or biological) conscious?

Wait a moment (I thought I might be making a mistake putting all that stuff in a single post).

I started with a black box that replaces the visual cortex, and you appeared to agree that if it could interface with the incoming and outgoing neurons appropriately and reproduce the same outputs given the same inputs, that this could work - the patient could see and remain conscious.

I then suggested a number of scenarios based on that, replacing more subsystems in the same way, and/or extending the scope of the original black box to encompass more of the brain function. Finally I suggested replacing the whole brain with a black box (half seriously, half in jest).

My purpose was to see, given you accepted visual cortex replacement (didn't you?), whether you feel there is a point beyond which replacing those subsystems with black boxes would 'break' consciousness, or whether you feel it is possible to have a human-like black box consciousness that doesn't necessarily function in terms of artificial neurons (this because of previous suggestions that the physical structure would need to be emulated).

These questions obviously require some knowledge and understanding of the functional architecture of the brain, and some idea or ideas of which parts of that architecture might be involved in consciousness (e.g. the frontal cortex, but not the cerebellum - which is an obvious black-box candidate). I assume you have enough knowledge of and opinions about these things, given your steadfast and authoritative statements in the thread.

In other words, I'm curious to know what extent of the brain you think it might be theoretically possible to replace in the way described and still maintain conscious function. What do you think the constraints are, where do you feel problems might lie, etc. (given that we could produce such black boxes and connect them)? I think it's germane to the thread, but I'll understand if you don't want to tackle it.

Part of my motivation is that I think it may soon be possible to do this kind of replacement for real with very simple brains - to monitor the substantive inputs and outputs of a neural subsystem, train a learning system to reproduce the functionality, ablate the monitored tissue, and use the monitoring probes to enable the learning system to replace it. Clearly it's a long way from the pure speculation above, but it was the stimulus for it (ha-ha).
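A rough sketch of what that monitor-train-ablate-replace loop could look like in code. Everything below is a stand-in: the recorded data is fabricated, and MLPRegressor is just one convenient choice of learning system, not a claim about what a real experiment would use:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Stand-in for probe recordings: inputs and outputs of the monitored
# neural subsystem, sampled over many trials. (Fabricated data; a
# real experiment would record these from tissue.)
rng = np.random.default_rng(0)
inputs = rng.normal(size=(5000, 40))        # 40 afferent channels
weights = rng.normal(size=(40, 10))
outputs = np.tanh(inputs @ weights)         # 10 efferent channels

# Train a learning system to reproduce the subsystem's input-output map.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
surrogate.fit(inputs, outputs)

# After ablation, the same probes would feed the surrogate instead of
# the tissue, and its predictions would drive the downstream neurons.
live_sample = rng.normal(size=(1, 40))
print(surrogate.predict(live_sample).shape)  # (1, 10)
```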
 
No, you're not saying that the brain "hosted by a computer" is being simulated by it?

Ok, then you tell me what in the world that means?
Emulated; neurons being emulated by artificial neurons.

How can a computer "host" an actual object? Can your computer host my truck? If it can't then it can't host my foot, or my brain.
We're talking about hosting a number of physical processes emulating neurons. As previously explained, this can be done using multiple physical processors or sharing the physical processing on a single processor by virtualisation.
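For a sense of what "sharing the physical processing on a single processor" might mean here, a bare-bones sketch: a whole population of leaky integrate-and-fire neurons emulated in one loop on one CPU (the neuron model and every parameter below are my own simplification, not a claim about how a real emulation would be built):

```python
import numpy as np

# Bare-bones leaky integrate-and-fire emulation: a single processor
# "hosts" the whole population by updating every neuron in one loop.
n_neurons, dt = 1000, 0.001            # population size, 1 ms step
tau, v_thresh, v_reset = 0.02, 1.0, 0.0

v = np.zeros(n_neurons)                # membrane potentials
fired = np.zeros(n_neurons, dtype=bool)
rng = np.random.default_rng(1)

for _ in range(1000):                  # emulate one second of activity
    drive = rng.normal(1.2, 0.5, n_neurons)   # stand-in input current
    v += dt * (drive - v) / tau               # leaky integration
    fired = v >= v_thresh                     # which neurons spike now
    v[fired] = v_reset                        # reset the ones that fired

print("spikes in the final step:", int(fired.sum()))
```

The same loop could just as well be split across multiple physical processors; the point is only that one physical substrate can carry many emulated neurons.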
 