I insist that subjective "private" experiences are a matter of physics because our experiences are highly specific physiological responses to specific kinds of physical stimuli.
OK, then, point to whatever specific physical thing encodes an experience and explain why only that physical thing can encode that experience.
If my central point is that we are lacking the knowledge of the "specific physical thing" that is a sufficient indicator of consciousness, why in blue barfing blazes would I then turn around and claim knowledge of what it is?
Information in the abstract does not trigger the sensation of pain or the perception of red; specific chemical signals in the body are required to trigger these experiences.
The specific chemical signals in our body are an artifact of our evolutionary history. As the Wasp rightly pointed out, there is no reason to believe that the specific chemical signals (I will assume you are talking about neurotransmitters here) we use are required for the job -- in theory we could swap them out for an entirely different set of neurotransmitters and receptors and get functionally identical results.
The substitutes must be able to at least chemically mimic the "natural" signal molecules in order to produce similar effects.
As a thought experiment, would you agree that we could (in theory) replace the neurons in the brain one at a time with nanomachines that are functionally identical at a neuron-by-neuron basis? Why or why not?
Whatever devices are used, they must possess the relevant physical properties that allow our biological neurons to produce consciousness. Since we do not know the physical "whats" and "hows" of consciousness, we have no way of knowing what hardware systems would be sufficient beyond our own. It's common flipping sense, dude.
If a person is exposed to psychoactive substances their change in mental states is due to the nervous system's physical reaction to the reagent and not a magical emergent property of algorithmic code execution.
I am personally subjectively familiar with the process.
We already know that neural networks do not process information the same way that our usual Von Neumann/Harvard architecture computers do -- no surprise there. That does not stop us (in theory) from emulating neural networks using algorithmic processes to whatever degree of detail needed. In practice of course, there are huge engineering challenges, but they are just that -- engineering challenges.
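To make the "emulating neural networks using algorithmic processes" claim concrete, here is a toy sketch of one standard textbook abstraction, a leaky integrate-and-fire neuron, stepped forward by an ordinary sequential algorithm. All parameter values are made up for the demo; this is an illustration of algorithmic emulation of neural dynamics, not a claim about real neurons or consciousness:

```python
# Minimal leaky integrate-and-fire neuron: one illustrative way an
# algorithmic process can emulate neural dynamics step by step.
# All parameter values here are arbitrary, chosen only for the demo.

def simulate_lif(inputs, v_rest=-70.0, v_thresh=-55.0, v_reset=-75.0,
                 tau=10.0, dt=1.0):
    """Return the time steps at which the model neuron spikes."""
    v = v_rest
    spikes = []
    for t, i_in in enumerate(inputs):
        # Membrane potential decays toward rest and integrates input current.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(t)
            v = v_reset            # reset after firing
    return spikes

# A constant driving current makes the model neuron fire periodically.
spike_times = simulate_lif([2.0] * 50)
print(spike_times)
```

Whether running such an emulation at any level of detail would *produce* experience, rather than merely describe neural activity, is of course exactly what is in dispute in this thread.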
It's entirely unjustified to assume that merely emulating the computational functions of our neurons is sufficient to produce consciousness -- especially when we have not yet discovered what consciousness is or how it is produced in the first place. Even assuming that we actually do learn what physically constitutes consciousness, simulating it would not reproduce it any more than a computer simulation of gravity produces actual gravitational effects. You CANNOT engineer a feature into a system without having a rudimentary understanding of it or, at the very least, the ability to physically identify it. For the life of me, I don't get why you are so resistant to facing this fact.
We are also pretty sure that psychoactive substances work because they (or their metabolites) either impersonate neurotransmitters, or they mess with the release/reuptake systems in synapses for particular neurotransmitters, thereby messing with the usual synaptic weighting (and, therefore, firing rate of the target neuron) for the sites the drugs affect. If that network happens to participate in a process involved with consciousness, then consciousness will be affected or altered, but as a result of the drug messing with the synaptic weights or firing rates of the neurons involved, not because the drug inherently possesses some magical consciousness altering property.
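The synaptic-weighting point can be caricatured in code: model the drug's effect as nothing more than a change to one effective synaptic weight, and the downstream firing rate shifts. Everything below (the weights, the release probabilities, the threshold firing rule) is invented purely for illustration:

```python
import random

# Toy neuron: on each step, each presynaptic site releases with some
# probability, and the neuron "fires" if the weighted drive crosses a
# threshold. All numbers are made up for the example.

def firing_fraction(weights, release_probs, threshold=1.0,
                    steps=5000, seed=0):
    """Fraction of steps on which the summed weighted input exceeds threshold."""
    rng = random.Random(seed)
    fired = 0
    for _ in range(steps):
        drive = sum(w for w, p in zip(weights, release_probs)
                    if rng.random() < p)
        if drive > threshold:
            fired += 1
    return fired / steps

baseline = firing_fraction([0.6, 0.6], [0.5, 0.5])
# Model a reuptake blocker at synapse 0 as more transmitter lingering per
# release, i.e. a larger effective weight -- the "drug" changes a number
# in the model, nothing more.
drugged = firing_fraction([1.2, 0.6], [0.5, 0.5])
print(baseline, drugged)
```

In the baseline case the neuron fires only when both synapses release; with the boosted weight, synapse 0 alone suffices, so the firing fraction roughly doubles. The toy captures only the "drugs change weights, weights change firing rates" step of the argument, not anything about experience itself.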
Right. So basically you're saying that the chemical properties of neurotransmitters, and the physical conditions of biological brains have absolutely no relevance to the production of sensations, emotions, or other subjective states. You can't even address the basics of what qualia are, or how they are produced, yet you insist that creating them is a simple matter of "engineering". Just who do you think you're kidding?
The different chemical compositions of the substances that bind to cellular receptors determine the type of subjective response, if any, that a given conscious person will experience.
First, the specific substances (I will assume you are talking about neurotransmitters and substances that impersonate them here) and their receptors are artifacts of our evolutionary history -- there is no reason to assume that the specific ones we use are the only ones possible.
I never said they are the only possible ones.
Second, I see no reason to expect that we will find consciousness at the level of synaptic activity and neural excitation levels, especially since pretty much the same neurochemistry is used by everything that has a nervous system, whether or not we recognize them as conscious.
The point is to find out WHAT consciousness is and HOW that "synaptic activity" produces it in the first place. Despite all your handwaving, you clearly do not know the answers to any of these questions yet you act as if my pointing out this ignorance is itself a radical unjustified claim. Get real, man.
Somehow, properties of the signal molecules are conveyed by the EM signals along the cell membrane which, in turn, appear to have a direct effect on the subject's conscious experiences.
No, the neurotransmitters do not have any special properties beyond the fact that they bind to neurotransmitter receptors. Even then, we have no reason to directly equate neural polarization levels and excitation thresholds to consciousness and subjective experience, and even if we did there is no reason that we could not in theory simulate those features in an artificial neural network.
Oh my god! Could you -please- spare me the constant appeals to "but we can simulate it!". First of all, we do not know what consciousness is to begin with, so claims of knowledge of how to simulate it are complete ****ing rubbish. Second of all, even if we did have the knowledge required to design such a simulation, simulation itself is, in principle, NOT a reproduction.
There is no way a computer scientist can properly address the questions I raised in terms of I/O switching functions because they are inherently biophysics questions.
I have good reasons to doubt that, as outlined above.
Your entire position basically boils down to: "Brains compute, therefore consciousness is a priori computation. All that is needed to produce consciousness is to emulate the computations of the brain and call it a day."
Our consciousness and conscious experiences are undeniably a result of the physical conditions and biological mechanisms of our brains.
Indeed.
Yet, in the same breath, you'll handwave away the significance of those conditions or even the need to understand how they relate to consciousness.
At some point we're going to have to deal with the actual physics of what the brain is doing instead of arrogantly -- lazily -- chalking it up to "computation/information processing/SRIP/etc." because the prospect of unknown science makes our brains hurt.
Well, we have no indications that there is an "unknown science" at play, at least not at the level of physics or chemistry. I understand you believe differently, but it is just a belief right now.
Stop lying to yourself. It's a flat fact that we do not know what consciousness is or understand how the chemistry/physics of the brain produces subjective experience. Your claim that computation is a sufficient explanation of consciousness is not only a -belief-, it's a completely unjustified one at that.
Do you -honestly- believe that any substrate of any composition will have subjective experience merely because it's implementing a particular switching pattern?
As long as it meets the requirements I outlined, then yes.
How can you maintain this -belief- when you can't even answer the most rudimentary questions about subjective experience? Earlier you claimed I set the bar too high. The problem is that your preferred conception of consciousness is completely and thoroughly inadequate as an explanation, and you know it. Cut the bull.
But you could determine this if you knew it were implementing a particular line of code?
A particular line of code in isolation? No, not any more than you could determine that a system is conscious by looking at a single neuron.
So, by your criteria, how would one go about discerning if a nematode has subjective experiences and, if so, what the range of its experiences is, what it will experience given a particular stimulus, and what it's experiencing during a given period of time? If you cannot answer these questions you do not have a sufficient theory of consciousness, and all your handwaving bluster about computational criteria amounts to nothing more than a pile of empty platitudes.
If consciousness itself could be reduced to something apart from all of those cognitive capacities, what makes you think that it's simply a matter of computational coding?
What makes you think it is not? Nevermind, you think it is some special property of our neurochemistry.
Yes, it's such an earth-shatteringly radical concept that the biophysics of the brain is relevant to consciousness. What will I think of next?
So all substances are conscious and it's simply a matter of waking them up with the correct algorithmic procedure?
No, I think that systems that meet the criteria I outlined are conscious, no matter what they are built out of.
Yet you can't tell me the first thing about what those allegedly conscious systems are experiencing or how your "criteria" even relate to those experiences. Get real.
We just established that linguistic and/or motile behavior may not be possible for some conscious entities. Absent such a behavior test, how else could one discern whether or not they're conscious?
In general, we cannot. We could probably build a test for consciousness that is not behavioral if we knew the details of how consciousness is implemented in entities of whatever type, but such tests would not be general.
In other words, when you were claiming that you knew the sufficient criteria for discerning consciousness you were just talking outta your behind.
As far as the subject is concerned, their sensory experience is very much like an output; some think of it as something akin to a fully immersive theatrical experience. Sensory stimuli that do not make it onto this stage of conscious experience are what we call subliminal. In any case, if we identify consciousness we'd have identified the experiencer.
The Cartesian Theater called, they want their homunculus back.
Have a match with that straw?
The problem is that you're putting the cart way before the horse.
You are insisting that our designs for a cart replacement are ridiculous and can never work because we swapped the horse for an internal combustion engine.
You've gotta be kidding me. You have a theory of consciousness that explains nothing, criteria for discerning consciousness that can't even tell us if a nematode has subjective experiences, and you seriously consider your "model" to be the epistemic equivalent of an internal combustion engine? Are you completely daft?
We have not yet identified what physically constitutes our consciousness and computational descriptions are not a sufficient substitute.
Yes, and?
Translation: "Sure, we have no idea what consciousness is. Big deal. Whats your point?"
At this point, computation has just become a "god of the gaps" explanation. Computationalism isn't science; it's a placative ideology serving to distract AI researchers from the fact that they really have no idea what consciousness is.
That is your opinion, certainly.
If you're any reflection of the average computationalist, it's a stark fact.