westprog
Philosopher
- Joined
- Dec 1, 2006
- Messages
- 8,928
Tell you what, you hire me to fly you to Phoenix, I'll do it in a simulator. Satisfied?
Be fair. You'd have to fly a simulation of Belz to Phoenix. But presumably that would be just as good.
Tell you what, you hire me to fly you to Phoenix, I'll do it in a simulator. Satisfied?
The question of whether we are in a simulation or the external world necessarily acknowledges the external world.
Yes. A brain in a vat could only realize it was a brain in a vat if it had knowledge of the external world... something a brain in a vat could not have.
laca said: The question of whether we are in a simulation or the external world necessarily acknowledges the external world.
Yeah, just like the question of whether Santa exists necessarily acknowledges Santa.
laca said: Yes. A brain in a vat could only realize it was a brain in a vat if it had knowledge of the external world... something a brain in a vat could not have.
Exactly. But that doesn't stop the brain in the vat from entertaining the idea of an external world, just as we can entertain the idea that we are in a vat ourselves, despite the impossibility of our ever finding out whether that's true.
I just pointed out that any distinction is arbitrary and invalid. You can't tell whether you're in a simulation or not. It's hypocritical to label a hypothetical programmed consciousness as different in any way that matters from your own.
Did you even read what laca said?
"Independent existence"? Really, what the hell does that even mean?
Again, it doesn't exist in a vacuum. My computer "knows" some things, and any simulated character running on one of its programs may be given access to that knowledge. How does it not know anything then?
Pixy is correct when he points out, contra what you said in an earlier post, that you consider consciousness to be related to substance rather than behaviour. Otherwise you wouldn't insist that it's the same as any other substance.
The best analogy, as I said, is between consciousness and "running". But for some reason you seem to think that a simulated person doesn't run in the simulation, or at least, if it does, "so what?" So what? So it RUNS, and it defeats your entire argument, is what.
To make a suitably programmed computer play music, we have to connect it to (or include within the computer) a d/a converter, amplifier, and speaker. We do not have to include or attach a guitar or a symphony orchestra.
To make a suitably programmed computer display a photograph, we have to connect it to an output buffer and a display screen. We do not have to attach a cat, mother-in-law, or mountain range, to display photographs of those things.
To make a suitably programmed computer walk, we do have to attach legs. But we can define what properties and abilities those legs need to have. Mechanical linkages, actuators, and sensors (force and position, typically), and a motive power source are required. Reflexes, balance, and control knobs for pace and speed are not required, because the computer provides those functions. So robot legs can suffice; we do not have to attach a man or a horse.
To make a suitably programmed computer conscious, what do we have to add?
If the answer is "an entire biological brain," that is a lot like requiring a symphony orchestra to be attached to a computer for it to play Beethoven's Ninth, or a live horse for it to walk. Which doesn't mean it can't be the right answer (though it is obviously wrong with regard to playing music or walking), but then the question is why no portion at all of the brain's activity in producing consciousness can be performed instead by the computer. Why not?
If the answer is "something, but we don't know what," then the question is, if you don't know what, what justifies the conclusion that any such "something" exists?
If the answer is "some portion of a biological brain, but we don't know what portion," the same question: what justifies the conclusion that the necessary portion is nonempty?
In all of the above examples, and all others used in this thread (flying an airplane, controlling a power plant, playing chess, etc.) we can state very specifically what additional hardware must be attached to a suitably programmed computer in order to create the corresponding real-world behavior. What additional hardware is required for the real-world behavior of consciousness? If we cannot answer, then why not, and what justifies any constraints or assumptions we place on that non-answer?
The computationalists' answer is clear and specific: no additional hardware is needed (though a minimal amount, e.g. a keyboard or microphone and a text display or audio output, would be needed for us to perceive the conscious behavior). What are the alternative answers?
Respectfully,
Myriad
As I've pointed out before, to make a computer able to control external devices, it has to use a real-time model, not the Turing machine model. Therefore any theories of computer consciousness which rely on the Turing model aren't applicable.
Is it logically possible for a computer program feature to conjecture it's a computer program feature? I tend to doubt it.
Most practical computers have clocks, so operating on a real-time model is not a problem.
So, is that your answer to what additional hardware is required to permit a suitably programmed computer to be conscious? A clock? Because that's no problem. Or is there something else?
Respectfully,
Myriad
laca is postulating that a simulated consciousness might be possible, in order to prove that a simulated consciousness is possible.
If we are, indeed, in some form of simulation, then it might be possible to create a simulation within this simulation, and for entities within that simulation to be conscious. We don't know this for sure, even if we allow that we might be in a simulation, because the rules that apply outside the simulation might not be the same as those that apply within.
It's all moonshine anyway, since if we aren't in a simulation, then we can legitimately disregard the possibility.
The objection is specious anyway. Are the external devices the source of consciousness? No? Then it's irrelevant.
I don't understand your logic.
Is it logically possible for a computer program feature to conjecture it's a computer program feature? I tend to doubt it.
The objection is specious anyway. Are the external devices the source of consciousness? No? Then it's irrelevant.
It has already been explained that the passage of time is actually a series of ordered causal events, not some magical property of reality, so the claim isn't valid to begin with: all algorithms, even those run by an abstract Turing machine, are a series of ordered causal events.
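The point can be made concrete with a minimal sketch (my own illustration, not from anyone in the thread): a "real-time" control loop is still just an ordered series of causal steps, and the clock is simply one more input in that sequence. The function name and the proportional-control rule below are invented for illustration.

```python
# Illustrative sketch: a "real-time" controller as an ordered sequence of
# causal events. The timestamp is just another input value; the loop runs
# identically whether it comes from a hardware clock or a simulated one.

def control_loop(sensor_readings):
    """Consume an ordered sequence of (time, reading) events and produce
    an ordered sequence of (time, output) control responses."""
    outputs = []
    setpoint = 10.0
    for t, reading in sensor_readings:    # each iteration is one causal event
        error = setpoint - reading
        outputs.append((t, 0.5 * error))  # simple proportional response
    return outputs

events = [(0, 8.0), (1, 9.0), (2, 10.0)]
print(control_loop(events))  # [(0, 1.0), (1, 0.5), (2, 0.0)]
```

Nothing in the loop depends on where the timestamps originate, which is the sense in which a clock poses no special problem for the Turing model.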