Such a model might well tell us something about the way that swallowing food takes place. However, such a model will not provide nourishment, or in any way replace or duplicate the physical act of swallowing.
A virtual reality version of swallowing, complete with information about possible nourishment and so on, could provide sufficient input for whatever part of consciousness requires it (in this example).
A physical, robotic construct could do the same. Granted, it might not be a "Turing Machine" under your usage of that term. But it would still be a computing machine.
Either way, there is no reason why conscious awareness can't be constructed and emulated in a computing system.
This claim rests on the assumption that the brain's processes are essentially computational - in the same way that swallowing is essentially moving food from the mouth to the stomach.
This has not been demonstrated.
What magical realm of reality are you thinking of that would be exempt from being emulated in a computing system?
If neural networks turn out to be equivalent in functionality to the brain, then perhaps that analogy will turn out to be the most useful. We don't yet know enough for that.
Neural networks were designed to emulate how the brain functions, in a
relatively abstract, high-level manner.
Neuroscience already knows many of the differences between how neural networks work and how the brain actually works. For example: neural networks often use back-propagation through the same virtual neurons that ran the forward pass. In biology, neurons generally process signals in one direction, and hormones generally take up the other direction (though reality is even a bit more complicated than that).
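To make that contrast concrete, here is a minimal sketch (my own toy example, not anyone's model of the brain) of a one-layer network where the backward pass literally reuses the same weight matrix as the forward pass, just transposed. Biological synapses have no analogous reverse channel:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 2))          # weights shared by BOTH directions

def forward(x):
    # Forward pass: activity flows through W.
    return np.tanh(x @ W)

def backward(x, grad_out):
    # Backward pass: the gradient flows back through W.T --
    # the very same "synapses", run in reverse.
    h = np.tanh(x @ W)
    local = grad_out * (1.0 - h ** 2)  # derivative of tanh
    grad_x = local @ W.T               # error signal sent back to the input
    grad_W = x.T @ local               # weight update
    return grad_x, grad_W

x = rng.normal(size=(1, 3))
y = forward(x)
gx, gW = backward(x, np.ones_like(y))
```

The point of the sketch is only that `W` appears in both `forward` and `backward`; a biological neuron has no such bidirectional pipe.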
The problem is NOT that "we don't know enough". We know a FREAKIN' LOT about how the brain works, these days.
The problem is that we can't figure out why there should be inherent limits that the more basic neural networks would have compared to the more complex actual brains. The ultimate results can, potentially, be the same, even if the proximate details are not.
Your argument would require knowledge about what those inherent limits would be. Do you have any insights to offer about that?
More to the point, it is pretty much universally agreed that the essential function of neurons is to take a seemingly continuous spectrum of input and convert it to something like a digital signal.
It isn't a coincidence that the things we consider "able to compute" all feature similar behavior, that behavior being the ability to map a given set of inputs to a much smaller set of outputs.
I agree with that, in a very general sense. But, I don't think it's as relevant to the point as you think.
I think the most important property of neurons, for the discussion of consciousness, is that
they report on the states of other cells. This makes them uniquely qualified to build and compute models of the self, when bundled the right way.
That they do so by acting as analog-to-digital converters is only circumstantial.