Okay, cool. I think that's as good a response as anyone can give on this subject at the moment.
However, there are some shortcomings to this scheme: The first is that inputs into a system are sensory only if the given system has subjective sensibility.
I am not sure I follow you here -- are you suggesting that unconscious animals do not register or act on sensory input in any fashion?
If the entity in question is not conscious, then its responses to inputs/stimuli are no more sensory than a mousetrap having its trigger tripped. In order to qualify as conscious, the system in question has to experience internal/external stimuli as qualia.
As you've already surmised from my line of argument so far, I think that the computational architecture of the brain serves as the systemic constraint that organizes qualia into an internal model of the world relevant to the subject's survival. However, the actual qualia themselves are a product of the brain's biophysics.
The second is that symbols only take on the force of being symbols if there is a conscious subject associating those symbols with meaning(s).
Sorry, I was using the term "symbol" in a Shannon information theory sense. I probably should have just used the term "output", as the nervous system processes information whether it possesses what we call "consciousness" or not.
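In the Shannon sense, a "symbol" is just an element drawn from an alphabet with some probability of occurring -- no meaning attached. A minimal sketch (the spike train below is invented purely for illustration):

```python
import math
from collections import Counter

def shannon_entropy(symbols):
    """Average information per symbol, in bits -- no semantics involved."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

# A spike train treated as a stream of symbols from the alphabet {0, 1}.
spike_train = [0, 1, 0, 0, 1, 1, 0, 1, 0, 0]
print(shannon_entropy(spike_train))  # ~0.97 bits per symbol
```

Nothing in that calculation cares whether anyone "associates meaning" with the symbols.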
That's a major reason for my objection to thinking of consciousness purely in IP terms.
So we're still left with having to explain the whole subjective aspect of the issue [i.e. consciousness]: What is it in physical terms, and what are the sufficient conditions for it?
I think describing it in physical terms will be useful to the same degree that describing any moderately complex computational process (an algorithm for simulated annealing, or a forward-chaining expert system, or whatever) in terms of what is happening at a transistor-by-transistor level on my laptop is -- OK for reverse engineering if that is all you have, not so useful once we figure out what is happening.
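To make the levels-of-description point concrete, here's a minimal simulated annealing sketch (the objective function and cooling schedule are arbitrary toy choices, not anyone's canonical version). Everything worth saying about it lives at this level; nothing is gained by restating it transistor by transistor:

```python
import math
import random

def simulated_annealing(f, x, temp=10.0, cooling=0.95, steps=1000):
    """Minimize f, accepting uphill moves with a temperature-dependent probability."""
    best = x
    for _ in range(steps):
        candidate = x + random.uniform(-1.0, 1.0)  # propose a nearby state
        delta = f(candidate) - f(x)
        # Always accept improvements; occasionally accept worse states to escape local minima.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
            if f(x) < f(best):
                best = x
        temp *= cooling  # cool the system
    return best

print(simulated_annealing(lambda x: (x - 3) ** 2, x=0.0))  # lands near 3
```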
Going back to the generator analogy:
Let's say that a 16th century tinkerer was introduced to an early 20th century hand-cranked dynamo wired to an incandescent bulb, without any introduction or explanation. With no understanding of the physical principles underlying its design
[such as the role of the magnet and electrical coil] he goes on to build a replica that emulates the structure and moving parts perfectly, but it does not generate any electrical power. He has to know what the appropriate materials are in order to build a physically efficacious reproduction. If the tinkerer is thinking of the device purely in the mechanical terms he's familiar with and lacks any concept of electricity
[or worse, tacitly rejects any suggestion of a 'mysterious' energy he has no understanding of] then he will be forever stuck in the mud -- his efforts will go nowhere.
AI researchers of today are basically doing the same thing with regard to the brain. Many desire to reproduce a product of brain activity
[consciousness] but they don't really have any idea of HOW the brain produces it or even what it is. So they just emulate the brain's computational architecture
[since that's something they feel they have a pretty good technical grasp of] and completely disregard any need to understand the underlying physics of the brain. This is a dire mistake.
What I'm objecting to is the assertion that's frequently made here that we already have a sufficient answer. We most assuredly don't.
I argue that we have what we need to find a sufficient answer, and that we will not need to describe the answer in terms of the four fundamental forces any more than we have to for any other biological process.
The thing is that we can at least describe those biological processes in physical terms if need be
[for instance, we're gaining an ever better understanding of the physics of photosynthesis]. But we don't even know how to describe consciousness in biological terms yet.
The problem regarding consciousness is twofold. The obvious issue is epistemic; we don't know how to identify consciousness unequivocally in entities other than ourselves, nor do we have access to mental states other than our own. The second problem is ontological; we straight-up don't know what consciousness is or how it fits into the larger ontological frame of our physical sciences. These two limitations compound to make consciousness a very tricky scientific and philosophical problem. IMO, just chalking it all up to "computation" and calling it a day is tantamount to rolling over in defeat.
Physically, the difference shows up as varying frequencies of the brain's EM activity. Each frequency range is correlated with a particular conscious state, or lack thereof.
Are you referring to EEG related stuff? That is a very crude diagnostic indeed.
Unfortunately EEGs, MEGs, and other brain imaging techniques, combined with the self-reports of human subjects, are the best avenue we have in the scientific study of consciousness. We haven't gleaned nearly enough from this yet to devise means of reproducing consciousness artificially. Without a rigorous scientific theory of consciousness on par with the criteria I listed, synthetic consciousness is just a pipe dream.
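To be concrete about what "frequency range" means operationally, here's a minimal sketch of how band power is typically pulled out of an EEG trace. The signal below is synthetic, and the band boundaries are the conventional textbook ones (sources vary slightly on the exact cutoffs):

```python
import numpy as np
from scipy.signal import welch

FS = 256  # sampling rate, Hz
# Conventional EEG bands in Hz (exact boundaries vary between sources).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 80)}

# Synthetic stand-in for a one-minute trace: a 10 Hz alpha rhythm plus noise.
t = np.arange(0, 60, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

freqs, psd = welch(signal, fs=FS, nperseg=FS * 2)
df = freqs[1] - freqs[0]
for name, (lo, hi) in BANDS.items():
    mask = (freqs >= lo) & (freqs < hi)
    print(f"{name}: {psd[mask].sum() * df:.3f}")  # power concentrates in alpha
```

The alpha band dominates, as it should for a 10 Hz rhythm -- which is exactly the kind of crude-but-quantifiable correlate I mean.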
Every single one of those biological mechanisms -- [1] membrane potentials, [2] polarization, [3] signal transduction, etc. -- all of them utilize EMF interactions.
Yes, at the level of individual atoms. When we are talking about processes happening at the cellular level we do not care about EMF interactions at all beyond what is necessary to explain the chemistry of what is happening.
But they are EMF interactions all the same and we can afford to abstract them into biological language precisely because we understand the role those interactions play in the 'public' observables of biochemical interaction. We can even isolate the molecules in question and observe them under controlled conditions outside of the context of a living cell. Physically speaking, we know what the cells are doing and, if need be, we can always express the cellular activity in terms of physics.
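The resting membrane potential is a good illustration of what I mean by expressing cellular activity in terms of physics: the Nernst relation gives the equilibrium potential for an ion straight from electromagnetism and thermodynamics. A sketch, using textbook ballpark K+ concentrations for a mammalian neuron:

```python
import math

R = 8.314    # gas constant, J/(mol*K)
F = 96485.0  # Faraday constant, C/mol
T = 310.0    # body temperature, K

def nernst(z, conc_out, conc_in):
    """Equilibrium potential (volts) for an ion of charge z across a membrane."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Ballpark K+ concentrations (mM) inside and outside a mammalian neuron.
E_K = nernst(z=1, conc_out=5.0, conc_in=140.0)
print(f"E_K ~ {E_K * 1000:.0f} mV")  # roughly -89 mV, near a neuron's resting potential
```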
It's not nearly so simple with consciousness. Not only must we employ cumbersome imaging techniques on -live- subjects, those subjects must also be sapient enough to report their 'private' states. We can abstract those subjective states into psychological language but, unlike with other biological processes, we haven't even the barest understanding of them at the level of physics.
Conscious experience is not a functional abstraction of what neural cells do but what they are actually physically producing.
We disagree. I see it as an artifact of the way our nervous system models, learns from, and adapts to our environment. I don't see individual nerve cells producing much beyond metabolic waste products, heat, and the odd depolarization event. The only thing that is interesting about them from the standpoint of consciousness is that their depolarization events can be controlled by other nerve cells, and that they can be connected in huge, ornate networks. Other than that, they suck at being antennae and they are way too hot and dense for quantum effects to start being interesting.
On the contrary, a lot of the cutting-edge research in biophysics investigates the role that quantum-level interactions play in biological functioning -- photosynthesis being just one example. Even a quick search on this topic is bound to bring up articles on the subject.
[If you're interested in further reading on the subject, I'd recommend a book my bio prof. pointed me to: The Rainbow and the Worm. I found it very tough to get through, since much of the material in it is very technically dense, but it's still a fascinating read all the same.]
Even so, I'm sure you realize that the only way we can falsify any claim to have created a conscious system is with a scientific theory of consciousness that meets the criteria I listed earlier.
Not exactly -- I think that the only criterion we have right now to establish whether something is conscious is to interact with it and see if it acts like a conscious entity. I realize this is a very crude test, but it is likely to be as good as we can get for a while. I also think that establishing a scientific test based on the physical properties of neurons is the wrong way to go about it -- at the very least, I would focus more on their properties as information processors, and I would look more at how the networks in the brain behave as a whole rather than focusing on individual neurons.
I've no doubt that consciousness is the result of the global activity of the brain and that our cognition is a reflection of its computational architecture. Even so, the fact remains that conscious experience itself is a product of the brain's -physical- activity. We must understand how it reduces to biophysical terms before we can learn to instantiate it in artificial systems.
Unlike Chalmers, I'm not smugly content with thinking of consciousness as an insoluble philosophical conundrum. I think that science can make real inroads in this area. I also think that philosophy should be used as a tool to help us attack this problem, not as a means to rationalize it into an eternal mystery box.
I think that so far philosophy has made a hash of it -- too much thinking about the problem, not enough of it empirical. Bring on the science.
Scientific theorycrafting is an inherently philosophical endeavor. Shoddy philosophical thinking in science can be just as detrimental as lousy experimental design.
Information inheres in every object, and every process is processing information.
Yeah, quantum mechanics 101. Neurons do not just process information in that trivial sense, though -- they do it by summing their excitatory inputs, subtracting inhibitory inputs, and firing if their input passes a certain threshold. Totally different ball of wax.
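Concretely, the textbook abstraction looks something like this (the weights and threshold below are made-up numbers, and a real neuron also integrates over time -- this is just the bare McCulloch-Pitts-style cartoon):

```python
def neuron_fires(excitatory, inhibitory, threshold=1.0):
    """Threshold unit: sum excitation, subtract inhibition, fire if past threshold."""
    net_input = sum(excitatory) - sum(inhibitory)
    return net_input >= threshold

# Made-up synaptic input strengths, in arbitrary units.
print(neuron_fires(excitatory=[0.6, 0.5, 0.3], inhibitory=[0.2]))  # True: 1.2 >= 1.0
print(neuron_fires(excitatory=[0.4], inhibitory=[0.5]))            # False: -0.1 < 1.0
```

That thresholded, network-embedded behavior is what I mean by non-trivial information processing.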
But understanding -how- those inputs/outputs translate into sensation requires understanding neural activity in physical terms rather than just the computational.
For example, understanding the computational architecture of video output does not tell you how to build a working computer monitor. To instantiate those features in a hardware system, there must be a physical understanding of the materials involved and how they should be integrated. Consciousness is no different.
It's the literal physical flipping of the computer hardware's switching mechanisms. The computer simulation of the power plant is just a representational tool. Like language, it only takes on symbolic significance in the minds of the humans who use the computer.
So all that switch flipping is still just a simulation even though the power plant would stop functioning (possibly catastrophically) if the computer running it crashed or was switched off?
My point is that the simulation is just a -representation- of the physical plant, not a magical looking-glass world. Literally speaking, it's merely a switching pattern on computer hardware that's integrated into the functioning of the actual power plant. There is no electrical power being generated "inside" the simulation; the power is being generated by the physical equipment of the plant. The computer simulation isn't any more a power plant than the written word "muffin" is an actual pastry.