If I believed that human intelligence was software, then I'd believe it was entirely non-material.
To assume that human intelligence is software is to make a judgement about something which we don't yet know.
We do know our thought processes are as dependent on the physical structure of our brains as they are on electrical/chemical signals. Any simulation of the human mind would have to model the brain as well, but there's no reason that can't be done.
If you can't override the prediction, do you really have free will? (At least, not absolutely free will.)
If you have no knowledge of this prediction, how does someone else predicting what you are going to do in any way affect your behavior? Being predictable doesn't mean you don't have free will.
If you are told the prediction, then the prediction would have to predict your reaction to the prediction as well.
For example, if it was predicted that you were going to have toast for breakfast, you might respond: "Damn right I am, I like toast". On the other hand, just to be contrary, you might respond: "Screw that, I'm having cornflakes instead!"
The end result would be ridiculous recursive predictions like "I predict that if I had predicted you would have had toast for breakfast, you'd have had cornflakes instead" or "I predict that if I had predicted you'd have had cornflakes if I had predicted you'd have had toast, then you'd have had eggs instead."
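For what it's worth, the regress can be put in programmer's terms: if the diner is genuinely contrarian, predicting "the reaction to the prediction" is a search for a fixed point that doesn't exist. A throwaway Python sketch (the menu and the contrarian rule are made up purely for illustration):

```python
# A minimal sketch of the contrarian-diner regress: whatever breakfast is
# announced, the diner picks something else, so no announced prediction
# can be consistent with the reaction it provokes.

MENU = ["toast", "cornflakes", "eggs"]

def contrarian_reaction(predicted):
    """Whatever is predicted, eat the next item on the menu instead."""
    return MENU[(MENU.index(predicted) + 1) % len(MENU)]

# An announced prediction is only consistent if announcing it doesn't change
# the outcome, i.e. if it is a fixed point of the reaction function.
consistent = [p for p in MENU if contrarian_reaction(p) == p]
print(consistent)  # [] -- no announced prediction survives the diner's reaction
```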
Whether a hypothetical omniscient, prescient entity that deliberately manipulates me into eating cornflakes for breakfast by slyly predicting I'm going to have toast is thereby violating my free will is a question I'm content to leave for others to argue about, for the time being.
So: Do you think that living-organism-type machines (if machines they are) degrade so quickly after being turned off that they're difficult to restart?
As far as I'm aware, living organisms don't come with an off switch; they just keep on running until they break down. Depriving a cell of oxygen is a bit like driving a car at full speed down the freeway with the radiator disconnected.
Once the engine blows up, you don't expect it to start again.
H.A.L.
Heuristically programmed ALgorithmic Computer.
...and Heuristics constitute a "fairly simple algorithm," where, exactly?
Wikipedia said:
Heuristics are intended to gain computational performance or conceptual simplicity, potentially at the cost of accuracy or precision.
Heuristic programming is about simplicity (or computational speed). Although I admit I should have said "relatively simple" instead of "fairly simple".
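In case a concrete example of that trade-off helps (my own toy illustration, nothing to do with HAL): a greedy nearest-neighbour tour of a few points can be computed almost instantly but isn't guaranteed to be the best tour, while the exact answer means trying every ordering, which blows up factorially.

```python
# Illustrative only: a classic case of a heuristic trading accuracy for speed.
# Nearest-neighbour returns a tour quickly; brute force returns the optimal
# tour but takes factorial time. The points are made up.
from itertools import permutations
from math import dist

points = [(0, 0), (1, 5), (5, 1), (6, 6), (2, 3)]

def tour_length(order):
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))

def nearest_neighbour(pts):
    """Greedy heuristic: always hop to the closest unvisited point."""
    remaining = list(pts[1:])
    tour = [pts[0]]
    while remaining:
        nxt = min(remaining, key=lambda p: dist(tour[-1], p))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

def brute_force(pts):
    """Exact answer: try every ordering of the points."""
    return min((list(p) for p in permutations(pts)), key=tour_length)

print("heuristic tour length:", tour_length(nearest_neighbour(points)))
print("optimal tour length  :", tour_length(brute_force(points)))
```

That's the sense in which a heuristic is "relatively simple": it gives up the guarantee of the best answer in exchange for getting a good-enough answer fast.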
Your woo seems to derive from assuming that all things can be replicated. Please, carry on with your woo. But try to understand that semantic games don't change what things are, only how you bat them around on a forum.
Biologists have already manually assembled simple life forms such as viruses from basic molecules, and have simulated them on computers down to the quantum level. Why do you believe that this process cannot be scaled up (in theory if not practice) to accommodate larger organisms, such as humans?
(If it ever does become possible to replicate an intact individual human mind, complete with memories, in a computer simulation, I doubt it will happen in my lifetime.)