But it can be "approximated" to an arbitrary degree of precision. And since the three-body system is also limited in precision (by, at the very least, quantum uncertainties in the initial state), the actual movement of actual bodies in space, and comparable analog systems, do not constitute hypercomputation. (The notion that a three-body system is even Turing-equivalent is highly speculative.)
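To make "approximated to an arbitrary degree of precision" concrete, here is a rough numerical sketch of my own (not anything from the earlier posts): a fixed-step leapfrog integration of a planar three-body system in Python. The masses and initial conditions are made-up illustrative values. Halving the step size dt, or moving to a higher-order integrator, tightens the approximation as far as you like; it never becomes an exact closed-form solution.

import numpy as np

G = 1.0                                                  # gravitational constant (arbitrary units)
m = np.array([1.0, 1.0, 1.0])                            # made-up masses
pos = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]])    # made-up initial positions
vel = np.array([[0.0, -0.3], [0.0, 0.3], [0.3, 0.0]])    # made-up initial velocities

def accelerations(pos):
    # Pairwise Newtonian gravitational accelerations.
    acc = np.zeros_like(pos)
    for i in range(len(m)):
        for j in range(len(m)):
            if i != j:
                r = pos[j] - pos[i]
                acc[i] += G * m[j] * r / np.linalg.norm(r) ** 3
    return acc

def integrate(pos, vel, dt=1e-3, steps=10000):
    # Leapfrog (kick-drift-kick); the error shrinks as dt is reduced.
    acc = accelerations(pos)
    for _ in range(steps):
        vel = vel + 0.5 * dt * acc
        pos = pos + dt * vel
        acc = accelerations(pos)
        vel = vel + 0.5 * dt * acc
    return pos, vel

final_pos, final_vel = integrate(pos.copy(), vel.copy())
print(final_pos)

Run it twice, once with dt and once with dt/2, and compare the final positions to watch the approximation converge.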
Yes; consider the ideal gas law, for instance: the approximations don't even have to include location information for any particular particle, and we still get arbitrarily precise state information about the ensemble. It's true that, even for classical systems, we can't prove the system is fundamentally Turing in nature, only that its observables are Turing equivalent. I used "equivalent" in a slightly different sense than you did: equivalence doesn't indicate what the system fundamentally is, only that the result matches the expectations of an ensemble of Turing machines.
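Again, just an illustrative sketch of my own rather than anything from the thread: estimating an ensemble observable (pressure) from kinetic theory without tracking any particle's position at all. The temperature, particle mass, volume, and particle count below are assumed values; velocities are drawn from a Maxwell-Boltzmann distribution, and the kinetic estimate agrees with the ideal gas law ever more tightly as the sample grows.

import numpy as np

k_B = 1.380649e-23       # Boltzmann constant, J/K
T = 300.0                # assumed temperature, K
m = 4.65e-26             # assumed particle mass, kg (roughly one N2 molecule)
V = 1.0e-3               # assumed volume, m^3
N = 1_000_000            # number of sampled particles

rng = np.random.default_rng(0)
# Each velocity component is Gaussian with variance k_B*T/m (Maxwell-Boltzmann);
# note that no positions are sampled at all.
vel = rng.normal(0.0, np.sqrt(k_B * T / m), size=(N, 3))

p_kinetic = N * m * np.mean(np.sum(vel ** 2, axis=1)) / (3.0 * V)  # kinetic theory
p_ideal = N * k_B * T / V                                          # ideal gas law

print(p_kinetic, p_ideal)   # agree to within sampling noise

Nothing about where any molecule is ever enters the calculation; only the velocity distribution matters.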
It is very unlikely that approximation errors would be any significant barrier to a functionally equivalent simulation of a human brain. A real brain continues to generate conscious awareness even under conditions that measurably change the response characteristics of individual neurons; e.g. alcohol intoxication or hypoglycemia (within limits). Consequently there's no reason to expect that a simulation of a brain would fail to function because of the far more subtle effects of rounding error in the 100th decimal place.
Respectfully,
Myriad
True, approximation error, as noted, isn't even a barrier when no location information for any single Turing device is included (the gas law case). Even in a quantum system, if you've ever heard of Exact Uncertainty, the result was derived from the assumption that quantum uncertainty arises from a quantum-level equivalent of Brownian motion, which allowed the Schrödinger equation to be derived directly from the uncertainty principle.
Hall, M. J. W. and Reginatto, M., "Schroedinger equation from an exact uncertainty principle", J. Phys. A 35 (2002) 3289-3303, www.iop.org/EJ; http://arxiv.org/abs/quant-ph/0102069
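For reference, here is the core relation as I recall it from that paper; this is my paraphrase and notation, not a quote, so check the original. Writing P(x) for the position probability density, F_x for its Fisher information, and \Delta p_{\mathrm{nc}} for the nonclassical part of the momentum fluctuation, the "exact" uncertainty relation replaces Heisenberg's inequality with an equality:

\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2} \quad \text{(Heisenberg, an inequality)}

\delta x \,\Delta p_{\mathrm{nc}} \;=\; \frac{\hbar}{2},
\qquad
\delta x \equiv \frac{1}{\sqrt{F_x}},
\qquad
F_x = \int \frac{\bigl(\partial_x P(x)\bigr)^2}{P(x)}\,dx

As I understand the derivation, the Schroedinger equation is then recovered by adding to the classical ensemble Hamiltonian a momentum-fluctuation term whose strength is fixed by that equality.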
It remains that in QM, quantization pertains to properties, not to particles as such, so what meaning should be drawn from this remains open to question.
Yes, the real brain continues to generate conscious awareness even when individual neurons are knocked out completely, in much the same way that perturbing individual molecules has no effect on the accuracy of the gas law. So you are correct that the accuracy of pen and paper is sufficient in principle. Yet pen and paper has no explicit mechanism for referring back to a unique subset of itself, or to a generalization of itself. You can add such a mechanism after the fact, but it wasn't an explicit part of the initial system, nor is the self-reference an explicit result of the initial calculation once you add it. That connection came from you. You've effectively demonstrated that you have the very properties you're trying to recreate on paper.
Yet as even the quantum case involving Exact Uncertainty illustrates, Turing fundamentals are not a barrier to creating a machine with the property of consciousness in a sense such as ours; nor are the approximations. It's just that such a machine must maintain persistent causal connections that are not inherent in pen and paper. The pen-and-paper causal connection is maintained only through the pen operator, you, which again demonstrates a set of your properties.
I'm looking at the kind of hardware needed to accomplish it. Standard artificial neurons come close, but the logical architecture is too restrictive in defining what constitutes inputs and outputs. What is the output of your idea before you write it down or express it? Even your memory is not a recording, but an associative reconstruction from bits of data, which is why eyewitnesses are such unreliable witnesses and why false memories are produced so easily.
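For concreteness, here is the kind of "standard artificial neuron" I mean, as a minimal sketch of my own: a fixed weighted sum of declared inputs pushed through a nonlinearity to a single declared output. The restrictiveness shows up in the function signature itself; what counts as an input and what counts as the output is fixed by the architecture before anything is computed, whereas the "output" of an unexpressed idea has no such pre-declared slot.

import numpy as np

def neuron(inputs, weights, bias):
    # One standard unit: weighted sum plus bias, passed through a sigmoid activation.
    return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

# Made-up example values; the point is only the fixed input/output structure.
print(neuron(np.array([0.2, 0.7, 1.0]), np.array([0.5, -1.0, 0.3]), 0.1))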
We'll get there one day, and our present artificial-neuron hardware is not excessively deficient, but we won't get there with the present methodology of thinking about it.