I am a cognitive neuroscientist - though not a computer modeller (I've worked with a few). However, I know enough to state the following as a fair and accurate account relevant to what is discussed above.
I can confirm that computer models of brain processes are not regarded, by anyone I know of, as literal incarnations of the human brain. They are approximations - how close, is still a matter of debate. It is a form of comparative psychology, where the model is only ever thought of as an approximation. Now there are many different approaches to using such methods, including connectionism, neural modelling, AI, etc. - they are not all the same. However, the principle is the same: they are trying to examine the core components of how groups of neurons "might" be processing information, and that includes encoding, transforming, representing and retrieving data. The models have merit - but are limited.
There is an excellent issue of the journal "Cognition" - a special issue from the 1980s, I think, on connectionism - with lots of big thinkers writing on the role and contribution of connectionist models in psychology. Even though it is dated, I highly recommend it, as the debate captured in it is still relevant. The basic take-home message is that such models are at their best when trying to account for very basic early sensory processing, not high-level cognition. So basic visual / language processing is an area where many models do a reasonable job of providing a candidate explanation - but only for very basic things (edge detection, early attentional selection, shading, orientation, early integration).
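To make concrete just how basic "basic" is here, a toy sketch: edge detection of the kind these models handle well amounts to little more than convolving an image with an orientation-tuned filter, a crude stand-in for a V1 simple-cell receptive field. This is an illustrative sketch of the general idea, not any particular published model.

```python
import numpy as np

# Toy "simple cell" edge detector: convolve an image with an
# orientation-tuned kernel (a Sobel operator standing in for a
# V1-style receptive field). Illustrative only - not a published model.

def detect_vertical_edges(image: np.ndarray) -> np.ndarray:
    """Return a response map that is large where vertical edges occur."""
    kernel = np.array([[-1.0, 0.0, 1.0],
                       [-2.0, 0.0, 2.0],
                       [-1.0, 0.0, 1.0]])  # Sobel: approximates d/dx
    h, w = image.shape
    out = np.zeros_like(image)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = image[i - 1:i + 2, j - 1:j + 2]
            out[i, j] = np.sum(patch * kernel)
    return np.abs(out)

# A step in luminance produces a strong response along the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0                      # dark half / light half
print(detect_vertical_edges(img)[3])  # peaks at columns 3-4, the edge
```

That really is the level these models operate at: local filters over an input array. Nothing remotely like autobiographical memory, selfhood or narrative experience falls out of this machinery.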
There are computational models of hallucination - and in line with my earlier point, these are mainly of early and basic processes. So, Jack Cowan (Chicago) and Paul Bressloff (UK) have developed models based on the Euclidean geometry of the primary visual cortex and the spontaneous breaking of symmetry in patterns of excitation / inhibition (using techniques borrowed from fluid dynamics - imagine ripples in a pond). However, the imagery produced by the model is basic and is meant to simulate the low-level, stage 1 hallucinations of drug use, migraine aura and epilepsy. There is nothing even mid-level, let alone high-level, about the experiences it is trying to model. In addition, the 'tunnel' experiences it produces are not in line with the tunnel experiences described by NDEers (though Sue Blackmore and Tom Troscianko have a basic model that does fit better with descriptions). None of these models require, need, or are based on hand-wavy ideas of quantum computing. No need - classical understandings at the macro level still need exploring and are likely to produce the best results.
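For flavour, here is a minimal one-dimensional caricature of that symmetry-breaking mechanism. It is emphatically not the Bressloff-Cowan model (which is built on the full symmetry structure of V1); it just shows the core trick - short-range excitation plus longer-range inhibition amplifying tiny random fluctuations into a regular spatial pattern, the 1-D analogue of the geometric forms such models produce. All parameter values are assumptions chosen for illustration.

```python
import numpy as np

# Minimal 1-D caricature of Turing-style symmetry breaking in an
# excitation / inhibition network. NOT the Bressloff-Cowan model;
# just the core mechanism: narrow excitation plus broad inhibition
# ("Mexican hat" connectivity) destabilises the uniform state.

N = 200
x = np.arange(N)

def ring_kernel(sigma_e=2.0, sigma_i=6.0, strength_i=0.9):
    """Mexican-hat weights on a ring: narrow excitation, broad inhibition."""
    d = np.minimum(x, N - x).astype(float)        # circular distance
    exc = np.exp(-d**2 / (2 * sigma_e**2))
    inh = strength_i * np.exp(-d**2 / (2 * sigma_i**2))
    return exc - inh

w = ring_kernel()
rng = np.random.default_rng(0)
r = 0.01 * rng.standard_normal(N)                 # near-uniform start

for _ in range(300):
    # circular convolution of rates with the connectivity kernel
    drive = np.real(np.fft.ifft(np.fft.fft(w) * np.fft.fft(r)))
    r = r + 0.1 * (-r + np.tanh(drive))           # simple rate dynamics

# The initial noise has organised into regular alternating bands.
print(np.round(r[:40], 2))
```

Note what the output is: stripes of activity over an abstract sheet of "cortex". Mapped back through the retinocortical transform, patterns like these resemble the spirals, funnels and lattices of stage 1 imagery - and that is the full extent of the phenomenology being explained.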
It is true that few if any connectionist / computer models take into account the complex interplay of neurotransmitters. But it's even worse than that. On the whole, they don't take into account ionic processes, ephaptic transmission (non-synaptic), or the role of glial cells in communication. Some neurotransmitters are actually gases (like nitric oxide), some ions can move in waves (calcium waves), some inhibitory neurotransmitters actually become excitatory under seizure conditions, the state of the neuronal membrane is in constant flux, and so on. So basically, many 'principles' get bent in exotic situations like hallucinations - but they are understandable under classical notions of brain science. Again, no need to even contemplate quantum anything.
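One concrete illustration of a 'bent principle', as a back-of-envelope sketch with assumed but physiologically plausible textbook numbers: whether a GABA-A synapse inhibits or excites depends entirely on the chloride reversal potential, which can shift when chloride accumulates inside the cell during seizures. Same synapse, opposite effect - and all of it perfectly classical.

```python
# Back-of-envelope: the SAME "inhibitory" GABA-A synapse can hyperpolarise
# or depolarise depending on the chloride reversal potential E_GABA.
# Numbers are assumed-but-plausible textbook values, for illustration only.

def synaptic_current(g_nS, V_mV, E_rev_mV):
    """Ohmic synaptic current in pA: I = g * (V - E_rev).
    Positive = outward (hyperpolarising); negative = inward (depolarising)."""
    return g_nS * (V_mV - E_rev_mV)

g = 1.0         # synaptic conductance, nS
V_rest = -65.0  # resting membrane potential, mV

# Normal conditions: low intracellular chloride keeps E_GABA below rest.
print(synaptic_current(g, V_rest, E_rev_mV=-75.0))  # +10 pA -> inhibitory

# Seizure-like chloride loading drags E_GABA above rest.
print(synaptic_current(g, V_rest, E_rev_mV=-45.0))  # -20 pA -> excitatory
```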
Before his death, Victor Stenger e-mailed me for some information for a talk he was giving on NDEs. I duly sent him my papers and asked him what he thought of QM and the NDE. The answer was that the brain is too warm and wet, and that the macro-level of neurotransmitters is too big for QM effects to exist on them. He was utterly unconvinced by it (though I appreciate this is an area of debate).
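That scale argument can be put into rough numbers. This is my own back-of-envelope reconstruction of the standard "too warm, wet and big" reasoning, not Stenger's exact calculation: the thermal de Broglie wavelength of a neurotransmitter-sized molecule at body temperature is orders of magnitude smaller than the synaptic cleft it has to cross, so its motion is, for all practical purposes, classical.

```python
import math

# Rough reconstruction of the "too warm, wet and big" scale argument
# (my illustrative numbers, not Stenger's exact calculation): compare the
# thermal de Broglie wavelength of a neurotransmitter-sized molecule at
# body temperature with the width of a synaptic cleft.

h   = 6.626e-34       # Planck constant, J*s
k_B = 1.381e-23       # Boltzmann constant, J/K
T   = 310.0           # body temperature, K
m   = 150 * 1.66e-27  # ~150 Da molecule (glutamate-sized), kg

# Thermal de Broglie wavelength: lambda = h / sqrt(2 * pi * m * k_B * T)
lam = h / math.sqrt(2 * math.pi * m * k_B * T)

cleft = 20e-9         # typical synaptic cleft width, ~20 nm

print(f"de Broglie wavelength  ~ {lam * 1e12:.1f} pm")   # ~8 pm
print(f"synaptic cleft width   ~ {cleft * 1e9:.0f} nm")
print(f"ratio (cleft / lambda) ~ {cleft / lam:.0f}x")    # ~2500x
```

On these numbers the wavelength comes out around 8 picometres against a cleft of roughly 20 nanometres - a factor of a few thousand - which is the sense in which neurotransmitters are "too big" for quantum effects to matter at that level.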
Much of the work on QM and computer models, when applied to brains, is simply theoretical, with no direct evidential basis that I can see. It is full of unwarranted assumptions like "...if we assume x, and then assume y and z, then... magic happens...". Well, hold on: why assume x, y and z in the first place, and what happens if we don't make such unwarranted assumptions?
