Belz...
Fiend God
A physically real automaton:
Quote: Cyborgs are not automatons.
A physically real automaton:
It's not a strawman, it's an analogy. Sheesh.
A straw man is a component of an argument and is an informal fallacy based on misrepresentation of an opponent's position.[1] To "attack a straw man" is to create the illusion of having refuted a proposition by replacing it with a superficially similar yet unequivalent proposition (the "straw man"), and refuting it, without ever having actually refuted the original position.[1][2]
No. That's impossible.

Quote: Is a sentient silicon chip running a simulated program a violation of the laws of physics?
Why, yes, I have.

Quote: No …. Have you heard of the Hasty generalization logical fallacy.

That's irrelevant to my point. I'm sure you really want to make a point to correct something I never said, but all I need to establish is that at least one kind of physical lump of matter does this thing.
I don't know. You're not even defining your terms.

Quote: Can you show me another?
Not exactly. I'm arguing that some of the arguments I've seen here are invalid.

Quote: We are arguing about whether a silicon chip can become a conscious entity.....not whether there are aliens and gods.
How flattering of religious people to have you associate their arguments with my argument style! I'm sure they're going to appreciate the compliment.

Quote: The above is nothing but an argument akin to RELIGIOUS arguments.
That word "know" has a very specific meaning to me, and it's one of the few things I'm not going to budge on.

Quote: We were discussing what we KNOW HERE AND NOW….. not what might be lurking in some corner or GAP of the universe.
Well, no, I'm not a theist. But you're apparently not understanding my style of argumentation here.

Quote: Are you a theist? By your above theistic style argumentation we cannot dismiss Fairy Godmothers nor Thor and his hammer, since they could be out there too.
No, you're talking about what it is possible to construct in another substrate.

Quote: We were discussing consciousness as we know it here on earth….. not the possible SPIRITS and other constructs that you and theists can concoct.
The key difference, and it makes all the difference in the world, is that you have an entity that is conscious. You have, effectively, a physical proof of concept demonstrating that it not only can exist in theory, but can and does exist in practice.
You cannot say that about Thor or Fairies.
Your line of reasoning is analogous to reasoning that we cannot build a flying contraption because all of the flying devices we know of are biological.
Let me put it this way:
A map is not the territory, but a map of a map is a map.
And consciousness is a map.
Okay, sure. An imagined conscious chip is an imagined construct.

Quote: No… my argument is that an imagined conscious silicon chip running a simulation is just that…. An Imaginative construct.
Sure. However, there's an important caveat here you're not appreciating. My imagination can apply reasoning. I can imagine a work of art, and actually bring it into being. I can imagine a computer program and actually code it. Furthermore, imagination is where all of this reasoning about how to do things is coming from--and, sure, we can get the hows wrong. But we can get them right.

Quote: Whether it might come to be true or not is not proven by the strength of the imaginative process nor by the fact that you can imagine it.
No, and I even claimed I didn't think it. But I do think that we are capable of reason, using only our imagination.

Quote: If you think that by virtue of being able to imagine the device it makes it more possible, then consider how possible a Superman is by that same virtue.
Reread that post.

Quote: Don’t forget that the post was in reply to your gobbledygook in this post about imagined stuff being real stuff since it is occurring in a real brain.
These heads are conscious heads. And we're talking about conscious entities. That's the point you're missing--we're talking about the very ability to create an entity that imagines things.

Quote: The likelihood of something is dependent on the laws of physics and material reality not on what fictive process can be realized in someone's head.
Right. But just because you say it violates the laws of physics doesn't mean it violates the laws of physics. And just because a silicon chip is not a brain doesn't mean it violates the laws of physics for it to exhibit the behavioral properties of one.

Quote: So if a silicon chip becoming conscious violates the laws of physics then no matter how hard we try it will not happen. If on the other hand it does not violate the laws of physics then it may be possible. We can IMAGINE that a simulated sentient world can exist in the ones and zeros of silicon chips.....but that does not mean that it is POSSIBLE for this imaginary construct to actually exist. There are REAL PHYSICAL constraints why it cannot exist. These constraints cannot be IMAGINED AWAY.
Sure... I'm fine with a weakened position. But don't misrepresent your opposition. They're not arguing that a simulation of a brain will produce consciousness just because they can imagine that it will. They're arguing that it will (or might, depending on who is doing the talking) because actual, physical categories of entities would have to be behaving in a sufficiently analogous way. The physical entities being referred to here are the necessarily existing substrates on which the simulation is being run--I think this is better understood if you ignore the entire "world of simulation" in itself and just imagine that the machine that is running it has to behave a particular way to make that simulation happen.

Quote: Is a sentient silicon chip running a simulated program a violation of the laws of physics? I contend that it MIGHT be so.....
No!!
Are you then arguing that consciousness is the territory?

Quote: No!!
Yes, and you're not addressing it.
Sure, I can't wait.
But you have a bootstrapping issue here. If a simulated person can't become conscious because simulations require an observer, then neither can a brain become conscious because a brain requires an observer. After all, the ultimate simulation is the real thing. And to just start with our knowledge that a brain does have an observer is jumping the gun--even begging the question.
But this is where what people are really trying to tell you comes into play.
You say that simulations are imaginary. But why, if simulations are so imaginary, do we bother to actually build them? The answer is obvious. It's because we need them to actually exist in order to do something.
If you want an imaginary simulation, have Donald Duck build it. If I build it, it's going to be real. I'd really like to reserve the concept of "imaginary simulation" to the thing that Donald Duck builds in a cartoon, because if I'm the boss, and I pay good money to someone to run a simulation for me, they had better be able to put their hands on some physical device that is actually performing a bit of work. I want to have some way of firing them when they simply "imagine" that the simulation exists.
But it gets worse...
No no no. You're trying to teach me a lesson by consigning all simulators to the world of Donald Duck. But you're coming up with all of your own tests.
I think I'll make a test of my own. My concern is that whatever rules you're coming up with have a bootstrapping issue. So to test this out, I want to make my simulation be an actual interpreter.
That's not a big problem. I need someone to simulate... I volunteer. I'm of sound mind and body, and I hereby attest to my own conscious state. Furthermore, being simulated requires no effort on my part--which is perfect for me!
But now I need something to simulate me. A computer is too easy--we can have debates indefinitely because some people have a major issue seeing a computer as being a real agency, while for others it's no problem. I have a much better idea.
I'm going to pick another physical system to simulate me. In fact, I'm going to choose as the simulator--westprog's brain. Oh, but don't worry, I'm not simply going to leave it at that--if I don't actually do any work building the simulation, then we can hardly call it one. So here's what I'll do. I'll set up the simulator by asking westprog to pretend he is me for the next few moments.
Now, you play with your simulation of a conscious entity, and I'll be running along with my own.
Sure. I'll give westprog a slinky and have him wait in a chair while I plan out my assignment of entities within my physical system that is simulating me to the entities in the simulated system. For giggles, I just had an interesting idea. While running my simulation, I'm going to run it in a simulated world. In fact, I won't even try too hard to make it realistic... I always wondered what it would be like if I could actually visit candy island. So I'm going to write a Candy Island simulator. To attach it to my simulation, I'll use virtual goggles and a headset.
That's true. To start with, whenever westprog says anything in English, while pretending to be me, I'm going to map that to the equivalent English. We'll call that run "A". Maybe I'll run the simulation again, in run "B", and XOR the letters of each word uttered in English together, map that to ASCII text, and presume that my simulation is saying that.
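The two mapping schemes just described can be sketched concretely. Here is a minimal Python toy (the names `run_a` and `run_b` are my own labels for the two runs), assuming run "A" maps each utterance to itself and run "B" XORs the letters of each word together and reads the result as ASCII:

```python
def run_a(utterance: str) -> str:
    """Run "A": the identity mapping -- English maps to the same English."""
    return utterance

def run_b(utterance: str) -> str:
    """Run "B": XOR the letters of each word together, read the result as ASCII."""
    chars = []
    for word in utterance.split():
        code = 0
        for letter in word:
            code ^= ord(letter)  # fold the whole word down to one byte
        chars.append(chr(code))
    return "".join(chars)

# Run "A" preserves the utterance; run "B" almost always yields
# unpronounceable control characters -- gibberish, as described.
print(run_a("You can even eat the trees here!"))
print(repr(run_b("You can even eat the trees here!")))
```

Same physical utterances, two incompatible readings of what the simulated entity "said"--which is the point of running "A" and "B" side by side.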
Right. In run "A", westprog says, "I can't believe it! You can even eat the trees here!" ...or something similar. I know because I always wanted to say that when I visited Candy Island.
In run "B", westprog's just spouting out gibberish, most of which is not only not pronounceable, but messes up my terminal.
Well in my case, the simulated me is running around in a fictional virtual world where he is on an island made entirely of candy.
And therefore, westprog cannot experience Candy Island. According to the rule.
But now I'm getting suspicious. I know westprog is conscious, and would even be conscious while pretending to be me. Even when he is immersed in a fictional world.
So something about your rule is bothering me.
That's correct. I am the one that is imagining that westprog is me in Candy Island! It's all how I interpret the results when I look at where westprog is!
Only, something is wrong here. westprog is conscious the entire time, when I'm looking at the results or not. In fact, I'm sort of jealous--he's actually experiencing the wonderful virtual immersed world of Candy Island. I'm just peeking in from time to time trying to infer how much I would enjoy it if I were that simulated entity...
Yeah, yeah, I know. That's a mistake. I'm anthropomorphizing westprog.
Well, certainly this particular one is true. westprog's brain state changes. Nevertheless, something is a bit odd about your application here. I can't help but think that this intended representation is supposed to be the one in my brain.
Well, let's try scenario B. Oh, yeah. Gibberish. That simulation's not working. I'm going to be nice to westprog, though, and not suggest dismantling it to debug it.
Well, in my case, I think there's plenty of information in my simulation to make this happen. In fact, there's information to spare, and a lot of that information is probably doing other important things.
But it changes when I put something I know is an interpreter in the same situation. Maybe I should invoke special pleading when I do that.
My simulator's interpreter works just fine, thank you. But per all of the rules we went through, I am supposed to presume it doesn't work.
That's the problem. You never bothered to consider it--if you'll note, my entire question to you was: if your rules were true, then how come we are interpreters? Hopefully you'll see what I'm getting at, since in my case, I added an example that actually was an interpreter, and all of your rules still suggested that I treat it as if it wasn't one.
Your rules are garbage. They conclude that the machine is not an interpreter. And I'm sure that's the conclusion you wanted to get. The problem is, they conclude that even if I put an interpreter in. So I don't trust that those rules actually even work.
You'd better, before you start defending silly claims such as that westprog is not conscious.
Well, westprog was only pretending to be me, sure. But he's a real person. (Well, technically, in this case, he was imaginary, because it's a hypothetical experiment, but I hope you see that I was actually trying to raise that problem, which you managed to completely brush off with a long post.)
I want you to note something here though. I never believed the simulated version of me was actually me. But the simulated version of me here was actually westprog. And he was conscious. And he is real.
It depends. If their simulation is being run in a Donald Duck cartoon, you have a point. If not, see your problem case above.
Ok, describe for me a system which, if observed by someone who knows nothing about anything except the system, would have to be "performing addition" in any way other than real addition (actually physically aggregating things).
Remember, you've got to posit a completely ignorant observer, or else you're bringing a brain into the system with the machine, and if brain states are required, then you know where we are.
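To see why the ignorant-observer condition bites, here is a hypothetical toy (the state names and both labelings are my own illustration, not anything from the discussion): one fixed physical transition rule that an observer can read either as addition mod 2 or as an equality test, depending entirely on the labeling brought to it.

```python
# One fixed physical rule: two "switch" positions combine into one.
# The physics never changes; only the observer's labeling does.
def physical_combine(s1: str, s2: str) -> str:
    # The device settles "down" when its inputs match, "up" otherwise.
    return "down" if s1 == s2 else "up"

# Labeling A: up=1, down=0 -> the device "performs addition mod 2".
to_bit = {"up": 1, "down": 0}
def read_as_addition(s1: str, s2: str) -> int:
    return to_bit[physical_combine(s1, s2)]

# Labeling B: up=False, down=True -> the same device "tests equality".
to_bool = {"up": False, "down": True}
def read_as_equality(s1: str, s2: str) -> bool:
    return to_bool[physical_combine(s1, s2)]

for a, b in [("up", "up"), ("up", "down"), ("down", "down")]:
    print(a, b, "->", read_as_addition(a, b), read_as_equality(a, b))
```

An observer who knows nothing but the switch positions has no way to say which, if either, computation the device is "really" performing.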
Yes, but you've failed to consider in detail this question: Can the object you propose to use here actually do this?
That is, can your computer running your simulation of the brain simultaneously produce those outputs from the inputs and make the body it’s in (whether it’s mechanical or biological) conscious?
Emulated; neurons being emulated by artificial neurons.

Quote: No, you're not saying that the brain "hosted by a computer" is being simulated by it?
Ok, then you tell me what in the world that means?
We're talking about hosting a number of physical processes emulating neurons. As previously explained, this can be done using multiple physical processors or sharing the physical processing on a single processor by virtualisation.

Quote: How can a computer "host" an actual object? Can your computer host my truck? If it can't then it can't host my foot, or my brain.
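As a rough illustration of that sharing (a minimal sketch with invented dynamics and constants--this is not a real neuron model or anyone's actual proposal), one physical loop can time-share the updates of many emulated neurons:

```python
# A minimal sketch: many emulated "neurons" hosted on one physical process.
# The leaky-integrator dynamics and all constants are illustrative inventions.

class Neuron:
    def __init__(self) -> None:
        self.potential = 0.0
        self.spikes = 0

    def step(self, input_current: float) -> None:
        # Leak a little, add the input, and fire on crossing a threshold.
        self.potential = 0.9 * self.potential + input_current
        if self.potential > 1.0:
            self.spikes += 1
            self.potential = 0.0

# One processor "hosts" 100 emulated neurons by updating them round-robin,
# i.e. the virtualisation-style sharing described above.
neurons = [Neuron() for _ in range(100)]
for tick in range(50):
    for n in neurons:
        n.step(0.15)

print(sum(n.spikes for n in neurons))  # total spikes across all neurons
```

Spreading the same updates across several physical processors would change the hosting arrangement, not what is being emulated.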