PixyMisa
Persnickety Insect
"Yes, I saw this point in my last response."

No. You. Did. Not.
What. You. Are. Describing. Is. Impossible.
Show. Some. Specific. Code. If. You. Disagree.
(Or, to be precise, you can only do it in the cases where you already know what the last instruction is.)
There are two problems, though: first, it is actually possible; second, if you recall where this point came from, you're chasing red herrings.
"What different method? What will find the last instruction even if it was a return or a jump?"

For example, a register that is set to the previous PC value when the PC is changed by an instruction, and cleared when it isn't.
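To make that concrete, here is a toy sketch in Python. The mini instruction set and names are invented for illustration, not any real ISA: whenever an instruction changes the PC, the machine records that instruction's address in a "last branch" register; on a sequential step it clears it.

```python
# Hypothetical mini-VM: a "last branch" register (invented, not a real ISA).
# A PC-changing instruction records its own address; a sequential one clears it.

def run(program):
    pc = 0
    last_branch = None   # address of the last PC-changing instruction, if any
    trace = []           # (pc, last_branch seen on entry) for each step
    while pc < len(program):
        op, arg = program[pc]
        trace.append((pc, last_branch))
        if op == "jmp":
            last_branch = pc   # this instruction changed the PC: record it
            pc = arg
        elif op == "nop":
            last_branch = None  # ordinary sequential step: clear the register
            pc += 1
        elif op == "halt":
            break
    return last_branch, trace

# 0: jmp 2, 2: nop, 3: halt.
# The instruction at location 2 can see that a jump (at location 0) got it there.
program = [("jmp", 2), ("nop", 0), ("nop", 0), ("halt", 0)]
last, trace = run(program)
```

Even with a return or an indirect jump, the same mechanism works: the recording happens as a side effect of executing the PC-changing instruction, so nothing needs to be known in advance.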
Now, you may object that this is changing the computer.
Yes, it is. But you can do that. It's not like the halting problem, where there is provably no general solution. A Turing-equivalent machine can be reprogrammed so that it's a Turing-equivalent machine that knows what the last instruction was.
Now, knowing what the next instruction will be on a machine that can test-and-branch in a single instruction that can refer to the value of the next instruction - that can lead to contradictions. (Location 9: If the next instruction is at location 10, go to location 20. Of course, a real computer would evaluate this once, ignore the implicit contradiction, and continue on from location 20.)
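Here is a one-evaluation sketch of that location-9 instruction (a hypothetical machine, invented for illustration). The test is evaluated once, at the moment the instruction runs; it happens to be true, so control transfers to location 20 and execution simply continues, contradiction ignored.

```python
# Hypothetical instruction at location 9: "if the next instruction is at
# location 10, go to location 20." Evaluated once, like a real machine would.

def step(pc):
    if pc == 9:
        next_pc = pc + 1       # where execution would go by default
        if next_pc == 10:      # test the (would-be) next instruction's location
            return 20          # ...and branch; the test is not re-evaluated
        return next_pc
    return pc + 1              # every other location: plain sequential step

pc = 9
pc = step(pc)   # the test fires, so pc is now 20
```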
"You cite this as an all-purpose proof for every different occasion, don't you?"

No. But it is the key theorem underlying this argument, so it wouldn't hurt you to understand how important it is, and what its implications are.
"But it does not even come remotely close to proving what you claim."

What it proves is that if I can show something can be done on one computing device (of a long list of types), even if I had to change the details of the instruction set or registers or stack or some such, then it can be done on any of the other types. There are computing devices less general, i.e. less powerful, than a Turing machine. There are definitions of computers that are more powerful, that can execute supertasks, but they all involve physical infinities and cannot be built. (So the brain cannot be such in any case.)
And by the Church-Turing-Deutsch thesis, we can extend this to physical systems. Anything that the brain, a physical system, can do, we can do on a Turing machine. (There are a couple of subtle issues where we can't, again relating to infinities, but no argument has been presented that these issues are relevant to the discussion, and in any case they run smack into the Planck-scale nature of the Universe and may be provably physically irrelevant, if not provably mathematically irrelevant.)
Anyway, remember how we got into this discussion? You were insisting that each step of a computer program was isolated and couldn't know about other steps or other data. This is true as a tautology: Each step of the program is deterministic and refers only to the values to which it refers. But those values can be anything, and which values an instruction refers to can change as the program executes.
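A toy illustration of that last point (invented addresses and values): a single unchanging "load indirect" instruction whose referenced value changes between executions, because it reaches the data through a pointer that the program itself updates.

```python
# Hypothetical memory and addresses, invented for illustration.
memory = {0: 100, 100: "first", 200: "second"}

def load_indirect(ptr_addr):
    # One fixed instruction: "load the value at the address stored at ptr_addr."
    return memory[memory[ptr_addr]]

a = load_indirect(0)   # pointer at cell 0 targets cell 100
memory[0] = 200        # a later step of the program retargets the pointer
b = load_indirect(0)   # the identical instruction now reads cell 200
```

Each execution of the instruction is deterministic and refers only to the values it refers to, yet the two executions see different data.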
So, for example, a generalised neural net can be implemented on any Turing-equivalent device. Do you put forward the same objection with respect to a neural net, and if so, how do you apply it?
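For concreteness, here is a minimal feedforward net in plain Python, i.e. ordinary Turing-equivalent computation; the weights, sizes, and names are made up for illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each output neuron: weighted sum of inputs plus bias, then activation.
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def forward(x):
    # Two inputs -> two hidden neurons -> one output (weights are arbitrary).
    h = layer(x, [[0.5, -0.6], [0.3, 0.8]], [0.1, -0.2])
    return layer(h, [[1.2, -1.1]], [0.05])

y = forward([1.0, 0.0])   # a single activation between 0 and 1
```

Every step here is an ordinary deterministic instruction referring only to the values it refers to, and yet the whole is a neural net.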