• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Are You Conscious?

Are you conscious?

  • Of course, what a stupid question

    Votes: 89 61.8%
  • Maybe

    Votes: 40 27.8%
  • No

    Votes: 15 10.4%

  • Total voters
    144
You seem to be firmly stuck in a conceptual mode [by will or by flub] that's preventing you from seeing what I'm getting at. Metaphorically speaking, what I'm trying to get you to do is step back, stop thinking merely in terms of the abstract symbolism you're using to count tally sticks, and focus on the sticks as physical objects.

I see what you are getting at (consciousness as we know it currently only happens in brains, so it is something brain-specific),

Nescafe, I'm genuinely shocked. Is that -honestly- what you think my point is? Really? :confused:

I just disagree that consciousness is necessarily brain-specific -- brains are evolved information processors, so (in principle) we can implement the information processing that brains do in some other suitable physical object.

And what constitutes a "suitable physical object"?

Remember that computations are carried out by physical hardware.

Digital physics aside, of course.

[Wow, you really wanna start playin' games now, huh? Up until a couple of posts ago, I was under the impression that you wanted to have a meaningful discussion. But since you insist... :-/ ]

Oh, let's not put digital physics aside. I forgot that one can conjure any physical object one wants by simply computing it into existence. Heck, why don't we digitally transmute lead into gold while we're at it? :rolleyes:

Understanding, in the abstract, the computational ops that underlie the symbolic representations on your calculator's screen is not the same as understanding the LCD that's displaying those symbols or knowing how to make one.

Right, that abstract understanding enables you to implement them on any suitable physical substrate that you have sufficient technical proficiency with.

I guess that rules out consciousness then.

Step back into metaphor with me, because I really don't think you truly grok what it is I've been saying. Think of the mind as a computer monitor, consciousness as the illumination of the screen, and qualia as the various color pixels that the display is able to produce. Symbols on the screen are the products of the computations performed, but the actual display [i.e. the screen, the pixels, and the power used to light the screen] used to convey those symbols is a product of the -physics- of the hardware.

Right. And in support of my viewpoint, any display that meets the minimum necessary technical requirements will do, whether it is based on liquid crystals, lasers, mirrors, phosphor-coated tubes, cuttlefish rhodopsins, whatever.

Or counting your fingers, or using a reeeeaally big complicated abacus. Oh, wait -- I thought we were talking about consciousness. Wrong discussion I guess.

Interesting that you appear to view consciousness in a transmission/reception paradigm, though.

What the hell are you talking about? Are you referring to how neurons transmit and receive electrochemical signals? Is that some kind of radical paradigm held in minority view or something?

So do you think conscious experience is something going on in a magical ether realm of abstraction separate from the physical universe?

No, I just recognize that information processing does not depend on any particular substrate -- any substrate that meets the minimum requirements of being able to accept, store, transform, and output information will do.

Wait. I thought you believed that every substrate -is- the acceptance, storage, transformation, and output of information? Have you lost faith in digital physics already?

Nescafe, if I take a pencil and write "1" on a piece of paper is it literally the number one? :rolleyes:

There is no literal physical number 1. That does not stop it from being a useful abstraction.

Just a "useful abstraction"? You were just touting the belief that the physical world we live in is digital. Is it a "useful abstraction" too?

Your view of what we must do to understand consciousness seems as silly as insisting that the only way we can understand 1 is by understanding what exactly is happening with that graphite/clay mark on the paper at the atomic level.

Oh, I get it. So your consciousness is not a literal physical process, it's just a "useful abstraction" like the rest of the "abstract" universe we live in...


Garbage in, garbage out.

...is just a switching pattern on a computer that we use to symbolically represent an actual power plant.

To the same degree of oversimplification, consciousness is just a pattern of neural discharges in the brain.

Unless you have evidence for it being more than that?

You've just pithily demonstrated for me how the whole physical universe reduces down to digital abstraction. There is no difference between a simulation and the physical system it's simulating because everything is made of ones and zeros (which don't literally exist, mind you) and all of reality has just disappeared in a puff of logic. How could I possibly present any evidence now? :p
 
Last edited:
Nescafe, I'm genuinely shocked. Is that -honestly- what you think my point is? Really? :confused:
If I missed something essential, please elaborate on it.

And what constitutes a "suitable physical object"?
Anything with enough information processing power and I/O capacity -- say a personal computer from 30 years in the future.

[Wow, you really wanna start playin' games now, huh? Up until a couple of posts ago, I was under the impression that you wanted to have a meaningful discussion. But since you insist... :-/ ]

Oh, let's not put digital physics aside. I forgot that one can conjure any physical object one wants by simply computing it into existence. Heck, why don't we digitally transmute lead into gold while we're at it? :rolleyes:
You misunderstand me. I do not care if digital physics is "real" -- it is just a neat idea. The "of course" was actually non-sarcastic. Perhaps special sarcasm punctuation is actually a good idea.

I guess that rules out consciousness then.
I think we have plenty of experience in building both general purpose and specialized information processing systems -- we lack the necessary abstract knowledge to implement human-style consciousness in them. I simply do not think that looking at what the brain does down to the level of quantum or classical EMF field dynamics will help us -- instead, we need to look up to the architecture of the interconnected information processing systems that the brain appears to be made up of. I also think that we need a working model that passes whatever functional tests for consciousness we care to set before we can hope to have an empirical explanation of what causes consciousness -- without that, we are effectively working from a single datapoint.

What the hell are you talking about?

This:
AkuManiMani said:
Think of the mind as a computer monitor, consciousness as the illumination of the screen, and qualia as the various color pixels that are able to be produced by the display.

Computer monitors are output devices that just display the information they receive from somewhere else (maybe with a little postprocessing, but nothing terribly interesting). All the interesting information processing happens somewhere else.

Wait. I thought you believed that every substrate -is- the acceptance, storage, transformation, and output of information?
Nope. Even if it turns out to be the case, there is every reason to believe that physics as we understand it would still hold, unless somehow the apparent consistency of the universe we inhabit turns out to be a mirage.

Have you lost faith in digital physics already?
I never had any.

Oh, I get it. So your consciousness is not a literal physical process, it's just a "useful abstraction" like the rest of the "abstract" universe we live in...
No.
 
I'm sorry, but nuclear reactors were not built merely by tinkering and happenstance. It took centuries of accumulated empirical investigation, experimental ingenuity, and theoretical insight. It was only after humans acquired the requisite empirical knowledge and theoretical understanding that they could design working nuclear reactors.


This is silly! And very wrong. :)

Henri Becquerel made his discovery in 1896. By tinkering and happenstance.

In fact there is a whole lot of tinkering and happenstance before you get to Enrico Fermi and the Chicago Pile-1 in 1942.

That is 46 years and much of it is tinkering and happenstance.

Now the process by which large amounts of the two isotopes of uranium were separated from each other (the real trick of the Manhattan Project) did require a lot of deliberate chemical engineering, as well as tinkering and happenstance.

Your vision of technological advance is rather abstracted and divorced from the reality of the way things work. It is a lot of tinkering and happenstance.

:)
 
PixyMisa said:
However we are back at the conundrum; where does stimulus/response awareness become aware of itself, that is, conscious?
You just answered your own question. ;)
Actually, no I didn't. Would you care to try, referring to your model?
 
Nescafe, I'm genuinely shocked. Is that -honestly- what you think my point is? Really? :confused:

If I missed something essential, please elaborate on it.

I'm not arguing that only brains can be capable of supporting consciousness. Again, all I'm saying is that until we understand what consciousness is in physical terms, and learn how biological brains produce it, we cannot design it into artificial systems. Like I said before, it's a no-brainer that brains compute, but computation does not "explain" the physical capacity for subjective experience any more than it "explains" semiconductivity.

And what constitutes a "suitable physical object"?

Anything with enough information processing power and I/O capacity -- say a personal computer from 30 years in the future.

The fact that you implicitly assume that producing consciousness is a matter of switching capacity shows that you're conflating it with intelligence -- they are not the same thing. How do you know that something as simple as a nematode doesn't experience rudimentary sensations and feelings?

[Wow, you really wanna start playin' games now, huh? Up until a couple of posts ago, I was under the impression that you wanted to have a meaningful discussion. But since you insist... :-/ ]

Oh, let's not put digital physics aside. I forgot that one can conjure any physical object one wants by simply computing it into existence. Heck, why don't we digitally transmute lead into gold while we're at it? :rolleyes:

You misunderstand me. I do not care if digital physics is "real" -- it is just a neat idea. The "of course" was actually non-sarcastic. Perhaps special sarcasm punctuation is actually a good idea.

In other words, I just got trolled? :p

I guess that rules out consciousness then.

I think we have plenty of experience in building both general purpose and specialized information processing systems -- we lack the necessary abstract knowledge to implement human-style consciousness in them.

I think you misunderstand what I mean when I say "consciousness". I'm not specifically referring to human cognitive capacities. I mean the ability to perceive, sense, feel, or otherwise experience subjectively. If one were to strip away all of a human's intelligence, linguistic skills, motility, powers of abstraction, or what-have-you, they still retain their inner "private" experiences. Conceivably, an entity as dumb as a brick can still be conscious [though they probably wouldn't live very long].

I simply do not think that looking at what the brain does down to the level of quantum or classical EMF field dynamics will help us -- instead, we need to look up to the architecture of the interconnected information processing systems that the brain appears to be made up of. I also think that we need a working model that passes whatever functional tests for consciousness we care to set before we can hope to have an empirical explanation of what causes consciousness -- without that, we are effectively working from a single datapoint.

There is only one significant data point sufficient to qualify a system as being conscious and it's currently the one that is out of our epistemic scope: subjective sensibility.

As of now we cannot tell what, or if, a given entity is experiencing beyond their ability to self-report. The obvious problem with this is that outward motor response or language may not be available to a subject that really is conscious; like in the case of individuals who are completely paralyzed or not sapient enough to communicate their subjective states. On the other side of the coin, outward communication itself is not a sufficient indicator of subjective experience; for example, just because a chat program responds to the typed query "how are you feeling today?" with "I feel ecstatic!" does not mean that it feels that emotion or is even capable of feeling at all.
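The chat-program point can be illustrated with a trivial sketch (hypothetical toy code, not any real program):

```python
# A toy "chatbot" whose emotional self-report is just a canned string
# lookup. Nothing in its state corresponds to feeling anything, yet
# its output mimics a sincere self-report.
CANNED_REPLIES = {
    "how are you feeling today?": "I feel ecstatic!",
}

def respond(query: str) -> str:
    """Return a canned reply, or a stalling default for unknown queries."""
    return CANNED_REPLIES.get(query.strip().lower(), "Tell me more.")

print(respond("How are you feeling today?"))  # prints: I feel ecstatic!
```

Behaviorally, the reply is a report of ecstasy, but there is obviously nothing it is like to be this lookup table; that asymmetry between outward report and inner experience is exactly the problem.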

The above are some of the major reasons why I'm insisting that we must be able to identify the sufficient physical conditions for consciousness and gain a theoretical understanding of the fundamental 'hows'-'whys'-&-'wherefores' that govern the variations of subjective states. As long as subjectivity lacks a solid theoretical frame in the physical sciences the quest for artificial consciousness really amounts to nothing but a modern incarnation of alchemy. This is a clear case of poor philosophical thinking leading to lousy science.

What the hell are you talking about?

This:
AkuManiMani said:
Think of the mind as a computer monitor, consciousness as the illumination of the screen, and qualia as the various color pixels that are able to be produced by the display.

Computer monitors are output devices that just display the information they receive from somewhere else (maybe with a little postprocessing, but nothing terribly interesting). All the interesting information processing happens somewhere else.

It's pretty well established that sensory information is parsed by the brain's unconscious processing before a subject ever experiences it as qualia. Subjective experience is very much an "internal" output of brain IP.

Wait. I thought you believed that every substrate -is- the acceptance, storage, transformation, and output of information?

Nope. Even if it turns out to be the case, there is every reason to believe that physics as we understand it would still hold, unless somehow the apparent consistency of the universe we inhabit turns out to be a mirage.

The problem with digital physics is that it's an ontology that reduces all things to abstract functions; even if one accepts it as a valid metaphysical view, one must still assume some essential substance(s) performing those functions.

Have you lost faith in digital physics already?

I never had any.

So why did you invoke it as if it were a sufficient counter argument? :confused:

Oh, I get it. So your consciousness is not a literal physical process, it's just a "useful abstraction" like the rest of the "abstract" universe we live in...

No.

If you agree that consciousness is a concrete physical phenomenon and not merely a functional abstraction, why do you insist that it can only be explained in computational terms rather than physical ones? :confused:
 
I'm sorry, but nuclear reactors were not built merely by tinkering and happenstance. It took centuries of accumulated empirical investigation, experimental ingenuity, and theoretical insight. It was only after humans acquired the requisite empirical knowledge and theoretical understanding that they could design working nuclear reactors.

This is silly! And very wrong. :)

Henri Becquerel made his discovery in 1896. By tinkering and happenstance.

Dave, you forgot to mention the following decades of requisite scientific development leading to the monumental Manhattan Project. That endeavor alone required the pooling of some of the best scientific minds of the time, and state-of-the-art theoretical physics [not to mention a colossal budget], before we could technologically harness nuclear power for weapons and power plants. I think we can agree that discovering radioactivity and understanding it well enough to build nuclear reactors are two very different things ;)
 
Actually, no I didn't. Would you care to try, referring to your model?

He is incapable of providing sufficient answers to your questions because his lousy excuse for a "model" is insufficient; it is itself a self-referential tautology.
 
This is silly! And very wrong. :)

Henri Becquerel made his discovery in 1896. By tinkering and happenstance.

In fact there is a whole lot of tinkering and happenstance before you get to Enrico Fermi and the Chicago Pile-1 in 1942.

That is 46 years and much of it is tinkering and happenstance.

Now the process by which large amounts of the two isotopes of uranium were separated from each other (the real trick of the Manhattan Project) did require a lot of deliberate chemical engineering, as well as tinkering and happenstance.

Your vision of technological advance is rather abstracted and divorced from the reality of the way things work. It is a lot of tinkering and happenstance.

:)

Becquerel certainly did not think that he would be able to create an atomic pile until he figured out what it was he had discovered.
 
AkuManiMani said:
Hm...Would you consider the experience itself to be something scientifically unknowable

Yes, but I don't think we'd necessarily need to know how it works in order to reproduce it.

In a way, we kinda already reproduce it every time we make babies. If we wanna gain technical mastery over consciousness we need not only a way to physically verify that experiences are happening without the need for self-reports, but also a theory that explains how it works.

IMO, just the epistemic challenge alone is enough to make this problem as hard as, if not harder than, humans learning to harness nuclear power. At least with nuclear power the objects of our study were relatively easy to identify physically. With consciousness, the only experiences we have direct access to are our own. This is gonna be really tricky :-X
 
I enjoy one-liners myself, but do you think you could've been more explicative?

A dead dog doesn't work. A sick dog doesn't work properly. Surely you can tell the difference.

Work as what? A wind turbine?
 
I'm not arguing that only brains can be capable of supporting consciousness. Again, all I'm saying is that until we understand what consciousness is in physical terms, and learn how biological brains produce it, we cannot design it into artificial systems. Like I said before, it's a no-brainer that brains compute, but computation does not "explain" the physical capacity for subjective experience any more than it "explains" semiconductivity.

Ah. I misunderstood your position, then. We seem to be mostly in agreement, except I think that the capacity for consciousness will be more usefully explained in terms of information processing rather than in physical terms.

Just to make my viewpoint clear, I think that any system that has the ability to model itself and its environment, use that model to choose between courses of action, and that can adjust its model based on the consequences of its choices would necessarily have "private" experiences in some sense, and should be considered conscious. I see no reason that any of the above would have a uniquely identifiable set of physical correlates.
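A minimal sketch of the kind of architecture described above (toy code with hypothetical names, purely illustrative, and making no claim that running it produces experience) might look like:

```python
import random

class ModelingAgent:
    """Toy agent: keeps a model of its environment, uses the model to
    choose between actions, and adjusts the model based on the
    consequences of its choices."""

    def __init__(self, actions):
        self.actions = list(actions)
        # The "model": estimated payoff of each action, initially neutral.
        self.model = {a: 0.0 for a in self.actions}

    def choose(self):
        # Use the model to pick the action predicted to work best,
        # with occasional random exploration.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.model[a])

    def update(self, action, outcome, rate=0.5):
        # Adjust the model toward the observed consequence.
        self.model[action] += rate * (outcome - self.model[action])

# A toy environment in which "forage" pays off and "rest" does not.
agent = ModelingAgent(["rest", "forage"])
for _ in range(100):
    a = agent.choose()
    outcome = 1.0 if a == "forage" else 0.0
    agent.update(a, outcome)
```

The point of the sketch is only that the three capacities listed (modeling, choosing, adjusting) can be stated without reference to any particular substrate, which is precisely what the reply below takes issue with.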

The fact that you implicitly assume that producing consciousness is a matter of switching capacity shows that you're conflating it with intelligence -- they are not the same thing.

I do not think that producing consciousness is just a matter of switching capacity. Much more important is how that switching capacity is organized.

How do you know that something as simple as a nematode doesn't experience rudimentary sensations and feelings?
I do not.

In other words, I just got trolled? :p
You got autotrolled. ;)

If one were to strip away all of a human's intelligence, linguistic skills, motility, powers of abstraction, or what-have-you, they still retain their inner "private" experiences. Conceivably, an entity as dumb as a brick can still be conscious [though they probably wouldn't live very long].
In theory. I would not want to create such an entity, though.

There is only one significant data point sufficient to qualify a system as being conscious and it's currently the one that is out of our epistemic scope: subjective sensibility.
Indeed. I just don't think that there are any unique physical correlates to subjective sensibility.

As of now we cannot tell what, or if, a given entity is experiencing beyond their ability to self-report. The obvious problem with this is that outward motor response or language may not be available to a subject that really is conscious; like in the case of individuals who are completely paralyzed or not sapient enough to communicate their subjective states. On the other side of the coin, outward communication itself is not a sufficient indicator of subjective experience; for example, just because a chat program responds to the typed query "how are you feeling today?" with "I feel ecstatic!" does not mean that it feels that emotion or is even capable of feeling at all.
Yep.

The above are some of the major reasons why I'm insisting that we must be able to identify the sufficient physical conditions for consciousness and gain a theoretical understanding of the fundamental 'hows'-'whys'-&-'wherefores' that govern the variations of subjective states. As long as subjectivity lacks a solid theoretical frame in the physical sciences the quest for artificial consciousness really amounts to nothing but a modern incarnation of alchemy. This is a clear case of poor philosophical thinking leading to lousy science.
I think that any test for consciousness must necessarily be behavioral -- anything else, and your test will be tied to the physical details of a specific type of potentially conscious system.

It's pretty well established that sensory information is parsed by the brain's unconscious processing before a subject ever experiences it as qualia. Subjective experience is very much an "internal" output of brain IP.
I would characterize it as a high-level feature of the type of processing that the brain does, not as an output.

The problem with digital physics is that it's an ontology that reduces all things to abstract functions; even if one accepts it as a valid metaphysical view, one must still assume some essential substance(s) performing those functions.
As long as the ultimate substance behaves consistently, I do not care what it is, even if it is something as weird and counterintuitive as the ultimate substance being an abstract function.

So why did you invoke it as if it were a sufficient counter argument? :confused:
I was arguing against the "self-evident fact" part. It used to be self-evident that a particle could not be in two places at once, or that
Euclid via Wikipedia said:
If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.
We no longer think they are self-evident.

If you agree that consciousness is a concrete physical phenomenon and not merely a functional abstraction, why do you insist that it can only be explained in computational terms rather than physical ones? :confused:
My consciousness is a physical process. Your consciousness is a physical process. Consciousness is an abstract label that denotes something we have in common about the physical processes that are us.

Moreover, I do not insist that it can only be explained in computational terms; I think that explaining consciousness in computational terms will turn out to be the most useful way of doing so.
 
I'm not arguing that only brains can be capable of supporting consciousness. Again, all I'm saying is that until we understand what consciousness is in physical terms, and learn how biological brains produce it, we cannot design it into artificial systems. Like I said before, it's a no-brainer that brains compute, but computation does not "explain" the physical capacity for subjective experience any more than it "explains" semiconductivity.

Ah. I misunderstood your position, then. We seem to be mostly in agreement, except I think that the capacity for consciousness will be more usefully explained in terms of information processing rather than in physical terms.

Just to make my viewpoint clear, I think that any system that has the ability to model itself and its environment, use that model to choose between courses of action, and that can adjust its model based on the consequences of its choices would necessarily have "private" experiences in some sense, and should be considered conscious. I see no reason that any of the above would have a uniquely identifiable set of physical correlates.

The reason why I very strongly insist that subjective "private" experiences are a matter of physics is that our experiences are highly specific physiological responses to specific kinds of physical stimuli. Information in the abstract does not trigger the sensation of pain or the perception of red; specific chemical signals in the body are required to trigger these experiences. Once these chemicals trigger the appropriate receptors, the signal is converted into a flux in the voltage of the cell's membrane [which itself serves as an LC semiconductor with IMPs functioning as logic gates]. Sure, in the functional language of computer science, these chemicals are "inputs", but the actual experiences triggered by them are products of the physical conditions of the nervous system.

If a person is exposed to psychoactive substances, their change in mental states is due to the nervous system's physical reaction to the reagent and not a magical emergent property of algorithmic code execution. The different chemical compositions of the substances that bind to cellular receptors determine the type of subjective response, if any, that a given conscious person will experience. Somehow, properties of the signal molecules are conveyed by the EM signals along the cell membrane which, in turn, appear to have a direct effect on the subject's conscious experiences. There is no way a computer scientist can properly address the questions I raised in terms of I/O switching functions because they are inherently biophysics questions. Our consciousness and conscious experiences are undeniably a result of the physical conditions and biological mechanisms of our brains, and their essential properties cannot be seriously tackled purely in terms of IP functional language. At some point we're going to have to deal with the actual physics of what the brain is doing instead of arrogantly -- lazily -- chalking it up to "computation/information processing/SRIP/etc." because the prospect of unknown science makes our brains hurt.

The fact that you implicitly assume that producing consciousness is a matter of switching capacity shows that you're conflating it with intelligence -- they are not the same thing.

I do not think that producing consciousness is just a matter of switching capacity. Much more important is how that switching capacity is organized.

Do you -honestly- believe that any substrate of any composition will have subjective experience merely because it's implementing a particular switching pattern?

How do you know that something as simple as a nematode doesn't experience rudimentary sensations and feelings?

I do not.

But you could determine this if you knew it were implementing a particular line of code?

If one were to strip away all of a human's intelligence, linguistic skills, motility, powers of abstraction, or what-have-you, they still retain their inner "private" experiences. Conceivably, an entity as dumb as a brick can still be conscious [though they probably wouldn't live very long].

In theory. I would not want to create such an entity, though.

If consciousness itself could be reduced to something apart from all of those cognitive capacities, what makes you think that it's simply a matter of computational coding?

There is only one significant data point sufficient to qualify a system as being conscious and it's currently the one that is out of our epistemic scope: subjective sensibility.

Indeed. I just don't think that there are any unique physical correlates to subjective sensibility.

So all substances are conscious and it's simply a matter of waking them up with the correct algorithmic procedure?

The above are some of the major reasons why I'm insisting that we must be able to identify the sufficient physical conditions for consciousness and gain a theoretical understanding of the fundamental 'hows'-'whys'-&-'wherefores' that govern the variations of subjective states. As long as subjectivity lacks a solid theoretical frame in the physical sciences the quest for artificial consciousness really amounts to nothing but a modern incarnation of alchemy. This is a clear case of poor philosophical thinking leading to lousy science.

I think that any test for consciousness must necessarily be behavioral -- anything else, and your test will be tied to the physical details of a specific type of potentially conscious system.

We just established that linguistic and/or motile behavior may not be possible for some conscious entities. Absent such a behavioral test, how else could one discern whether or not they're conscious?

It's pretty well established that sensory information is parsed by the brain's unconscious processing before a subject ever experiences it as qualia. Subjective experience is very much an "internal" output of brain IP.

I would characterize it as a high-level feature of the type of processing that the brain does, not as an output.

As far as the subject is concerned, their sensory experience is very much like an output; some think of it as something akin to a fully immersive theatrical experience. Sensory stimuli that do not make it onto this stage of conscious experience are what we call subliminal. In any case, if we identify consciousness we'd have identified the experiencer.

The problem with digital physics is that it's an ontology that reduces all things to abstract functions; even if one accepts it as a valid metaphysical view, one must still assume some essential substance(s) performing those functions.

As long as the ultimate substance behaves consistently, I do not care what it is, even if it is something as weird and counterintuitive as the ultimate substance being an abstract function.

If there is a substance "behaving" then it is not abstract in any sense of the word.

So why did you invoke it as if it were a sufficient counter argument? :confused:
I was arguing against the "self-evident fact" part. It used to be self-evident that a particle could not be in two places at once, or that
Euclid via Wikipedia said:
If a line segment intersects two straight lines forming two interior angles on the same side that sum to less than two right angles, then the two lines, if extended indefinitely, meet on that side on which the angles sum to less than two right angles.
We no longer think they are self-evident.

I'm sorry, but it's an a priori fact that computations are functions performed by substrates. You cannot empirically falsify this any more than you could empirically falsify that driving is performed by vehicles rather than vice versa. Functions do not "do" anything; they are "done".

If you agree that consciousness is a concrete physical phenomenon and not merely a functional abstraction, why do you insist that it can only be explained in computational terms rather than physical? :confused:
My consciousness is a physical process. Your consciousness is a physical process. Consciousness is an abstract label that denotes something we have in common about the physical processes that are us.

Moreover, I do not insist that it can only be explained in computational terms; I think that explaining consciousness in computational terms will turn out to be the most useful way of doing so.

The problem is that you're putting the cart way before the horse. We have not yet identified what physically constitutes our consciousness, and computational descriptions are not a sufficient substitute. At this point, computation has just become a "god of the gaps" explanation. Computationalism isn't science; it's a placating ideology serving to distract AI researchers from the fact that they really have no idea what consciousness is. They're Columbuses who fervently want to believe that they've found the western route to the East Indies when, in actuality, they've really been completely sidetracked.
 
The reason why I very strongly suspect that subjective "private" experiences are a matter of physics is that our experiences appear to be highly specific physiological responses to specific physical stimuli. Information in the abstract does not trigger the sensation of pain or the perception of red; specific chemical signals in the body are required to trigger these experiences. Once these chemicals trigger the appropriate receptors, the signal is converted into a flux in the voltage of the cell's membrane [which itself serves as an LC semi-conductor, with IMPs functioning as logic gates]. Sure, in the functional language of computer science, these chemicals are "inputs", but the actual experiences triggered by them are products of the physical conditions of the nervous system.
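The "logic gates" analogy above can be made concrete with a toy McCulloch-Pitts threshold unit: a unit that fires when its weighted input sum crosses a threshold can implement Boolean gates. This is purely an illustration of the analogy, not a model of any real membrane protein; all parameters are invented.

```python
# Toy McCulloch-Pitts threshold unit. A cell-like element that fires
# when summed input crosses a threshold can behave like a logic gate.
# (Illustrative only; real membrane biophysics is far messier.)

def threshold_unit(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs reaches the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    # Both inputs must be active to reach the threshold of 2.
    return threshold_unit([a, b], [1, 1], threshold=2)

def OR(a, b):
    # Either input alone reaches the threshold of 1.
    return threshold_unit([a, b], [1, 1], threshold=1)
```

The same unit implements different gates just by changing the threshold, which is the sense in which switching behaviour abstracts away from what the unit is made of.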

If a person is exposed to psychoactive substances, their change in mental states is due to the nervous system's physical reaction to the reagent and not a magical emergent property of algorithmic code execution. The different chemical compositions of the substances that bind to cellular receptors determine the type of subjective response, if any, that a given conscious person will experience. Somehow, properties of the signal molecules are conveyed by the EM signals along the cell membrane which, in turn, appear to have a direct effect on the subject's conscious experiences. There is no way a computer scientist can properly address the questions I raised in terms of I/O switching functions because they are inherently biophysics questions. Our consciousness and conscious experiences are undeniably a result of the physical conditions and biological mechanisms of our brains, and its essential properties cannot be seriously tackled purely in terms of IP functional language. At some point we're going to have to deal with the actual physics of what the brain is doing instead of arrogantly -- lazily -- chalking it up to "computation/information processing/SRIP/etc." because the prospect of unknown science makes our brains hurt.



Are you suggesting that there is something inherently special or significantly different about particular neurotransmitters such that pain could not be possible, for instance, in the absence of glutamate? Couldn't I theoretically replace all the glutamate pre-synaptically with acetylcholine, replace all the ionotropic and metabotropic glutamate receptors with functionally equivalent acetylcholine receptors, and produce the exact same effect?

I fail to see what is so special about chemical transmission.

The wording above is also somewhat misleading. The 'signal' is not 'converted' into ion flux across membranes. The signal *is* the ion flux across membranes. That is what receptor-ligand interactions do.

Just because brains happen to use chemicals, how does this change the issue?

There is a big engineering problem at the synapse, but what you are speaking of above is not it.
 
The reason why I very strongly insist that subjective "private" experiences are a matter of physics is that our experiences are highly specific physiological responses to specific kinds of physical stimuli.

OK, then, point to whatever specific physical thing encodes an experience and explain why only that physical thing can encode that experience.

Information in the abstract does not trigger the sensation of pain or the perception of red; specific chemical signals in the body are required to trigger these experiences.

The specific chemical signals in our body are an artifact of our evolutionary history. As the Wasp rightly pointed out, there is no reason to believe that the specific chemical signals (I will assume you are talking about neurotransmitters here) we use are required for the job -- in theory we could swap them out for an entirely different set of neurotransmitters and receptors and get functionally identical results.

As a thought experiment, would you agree that we could (in theory) replace the neurons in the brain one at a time with nanomachines that are functionally identical at a neuron-by-neuron basis? Why or why not?
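The neuron-replacement thought experiment can be sketched in code: two unit types with different "substrates" (here, just different classes) but an identical transfer function. If only input/output behaviour matters, swapping one for the other leaves the network's behaviour unchanged. Class names and parameters are invented for illustration.

```python
# Sketch of the neuron-replacement thought experiment: two unit types
# representing different physical substrates but the same I/O function.
# Names and parameters are hypothetical.

class BiologicalNeuron:
    def __init__(self, weights, threshold):
        self.weights, self.threshold = weights, threshold

    def fire(self, inputs):
        # Fire if the weighted input sum reaches the threshold.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return 1 if total >= self.threshold else 0

class NanomachineNeuron(BiologicalNeuron):
    """Different 'hardware'; identical transfer function."""
    pass

def run_layer(neurons, inputs):
    return [n.fire(inputs) for n in neurons]

weights, threshold = [0.5, 1.0, -0.3], 0.6
original = [BiologicalNeuron(weights, threshold) for _ in range(3)]
# Replace the middle neuron with a functionally identical nanomachine.
swapped = [NanomachineNeuron(weights, threshold) if i == 1
           else BiologicalNeuron(weights, threshold) for i in range(3)]

stimulus = [1, 1, 0]
assert run_layer(original, stimulus) == run_layer(swapped, stimulus)
```

The functionalist claim being debated is exactly that this equivalence would hold neuron by neuron in a real brain; the sketch only shows what "functionally identical" means, not that the claim is true.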

If a person is exposed to psychoactive substances their change in mental states is due to the nervous system's physical reaction to the reagent and not a magical emergent property of algorithmic code execution.

I am personally subjectively familiar with the process.

We already know that neural networks do not process information the same way that our usual Von Neumann/Harvard architecture computers do -- no surprise there. That does not stop us (in theory) from emulating neural networks using algorithmic processes to whatever degree of detail needed. In practice of course, there are huge engineering challenges, but they are just that -- engineering challenges.

We are also pretty sure that psychoactive substances work because they (or their metabolites) either impersonate neurotransmitters, or they mess with the release/reuptake systems in synapses for particular neurotransmitters, thereby messing with the usual synaptic weighting (and, therefore, firing rate of the target neuron) for the sites the drugs affect. If that network happens to participate in a process involved with consciousness, then consciousness will be affected or altered, but as a result of the drug messing with the synaptic weights or firing rates of the neurons involved, not because the drug inherently possesses some magical consciousness altering property.
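The synaptic-weight point above can be illustrated with a toy leaky integrate-and-fire neuron: a "drug" modelled as nothing more than a scaling factor on synaptic efficacy changes the firing rate, with no further ingredient required. All parameters here are made up for illustration.

```python
# Toy leaky integrate-and-fire neuron. A "drug" that merely scales
# synaptic weights alters the firing rate. Parameters are invented.

def firing_count(weight_scale, steps=200):
    v = 0.0            # membrane potential
    threshold = 1.0    # spike threshold
    leak = 0.9         # per-step decay of the potential
    spikes = 0
    synaptic_input = 0.15 * weight_scale  # constant presynaptic drive
    for _ in range(steps):
        v = v * leak + synaptic_input     # leaky integration
        if v >= threshold:                # spike and reset
            spikes += 1
            v = 0.0
    return spikes

baseline = firing_count(weight_scale=1.0)
drugged = firing_count(weight_scale=0.5)  # drug halves synaptic efficacy
assert drugged < baseline
```

Nothing about the "drug" is special here beyond what it does to the weight, which is the computationalist's point in this exchange.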

The different chemical compositions of the substances that bind to cellular receptors determine the type of subjective response, if any, that a given conscious person will experience.

First, the specific substances (I will assume you are talking about neurotransmitters and substances that impersonate them here) and their receptors are artifacts of our evolutionary history -- there is no reason to assume that the specific ones we use are the only ones possible.

Second, I see no reason to expect that we will find consciousness at the level of synaptic activity and neural excitation levels, especially since pretty much the same neurochemistry is used by everything that has a nervous system, whether or not we recognize them as conscious.

Somehow, properties of the signal molecules are conveyed by the EM signals along the cell membrane which, in turn, appear to have a direct effect on the subject's conscious experiences.

No, the neurotransmitters do not have any special properties beyond the fact that they bind to neurotransmitter receptors. Even then, we have no reason to directly equate neural polarization levels and excitation thresholds to consciousness and subjective experience, and even if we did there is no reason that we could not in theory simulate those features in an artificial neural network.

There is no way a computer scientist can properly address the questions I raised in terms of I/O switching functions because they are inherently biophysics questions.
I have good reasons to doubt that, as outlined above.

Our consciousness and conscious experiences are undeniably a result of the physical conditions and biological mechanisms of our brains
Indeed.


and its essential properties cannot be seriously tackled purely in terms of IP functional language.
Not so much.

At some point we're going to have to deal with the actual physics of what the brain is doing instead of arrogantly -- lazily -- chalking it up to "computation/information processing/SRIP/etc." because the prospect of unknown science makes our brains hurt.
Well, we have no indications that there is an "unknown science" at play, at least not at the level of physics or chemistry. I understand you believe differently, but it is just a belief right now.

Do you -honestly- believe that any substrate of any composition will have subjective experience merely because it's implementing a particular switching pattern?
As long as it meets the requirements I outlined, then yes.

But you could determine this if you knew it were implementing a particular line of code?
A particular line of code in isolation? No, not any more than you could determine that a system is conscious by looking at a single neuron.

If consciousness itself could be reduced to something apart from all of those cognitive capacities, what makes you think that it's simply a matter of computational coding?
What makes you think it is not? Never mind, you think it is some special property of our neurochemistry.

So all substances are conscious and it's simply a matter of waking them up with the correct algorithmic procedure?
No, I think that systems that meet the criteria I outlined are conscious, no matter what they are built out of.

We just established that linguistic and/or motile behavior may not be possible for some conscious entities. Absent such a behavior test, how else could one discern whether or not they're conscious?
In general, we cannot. We could probably build a test for consciousness that is not behavioral if we know the details of how consciousness is implemented in entities of whatever type, but such tests would not be general.

As far as the subject is concerned, their sensory experience is very much like an output; some think of it as something akin to a fully immersive theatrical experience. Sensory stimuli that do not make it onto this stage of conscious experience are what we call subliminal. In any case, if we identify consciousness we'd have identified the experiencer.
The Cartesian Theater called, they want their homunculus back.

The problem is that you're putting the cart way before the horse.

You are insisting that our designs for a cart replacement are ridiculous and can never work because we swapped the horse for an internal combustion engine.

We have not yet identified what physically constitutes our consciousness and computational descriptions are not a sufficient substitute.
Yes, and?

At this point, computation has just become a "god of the gaps" explanation. Computationalism isn't science; it's a placating ideology serving to distract AI researchers from the fact that they really have no idea what consciousness is.
That is your opinion, certainly.
 
Hmm, how about this:

(a) The content of conscious processes is information. This information is computable.

However, there is another essential property of conscious processes (perhaps this property should be rightly called the property of consciousness or being):

(b) There is something that it is like to be that information that makes up the conscious processes of Democracy Simulator, to use a (handy) example.

The conflict here seems to be between parties that are (a) theorists and parties that are (b) theorists. I hope I have been following correctly.
Of course, if you are an (a) theorist, in that consciousness is only computable information, then of course you are not going to agree with someone who also believes (b) to be true, as to whether consciousness is computable. If (a) and only (a) is true, then a simulation of conscious processes is consciousness. If, however, there is something that it is like to be conscious, then we cannot be sure that a simulation will do.

Obviously I have pinned my colours to the mast and I do believe that (b) is also true - that there is something that it is like to be Democracy Simulator - I believe that this consciousness/being property is not information and not computable; it is a/the phenomenal reality. I would base this belief on three axioms (borrowed roughly from Objectivism):

1) There is existence
2) There is identity

and

3) There is experience of existence and identity.

Now 3) I would say is the phenomenal reality of being/consciousness.

Of course one may wish to attack these axioms, but the process is self-defeating as due to their axiomatic nature, in order to refute these axioms, one must recognise their validity.

I understand that there has been an attempt to do away with consciousness (b) but I think that the attempt to do so is similar in effect to attempting to do away with the concept of existence and on equally mistaken grounds. I am happy to go into this if anyone else wants to go down that road.
In short, would a simulation of existence be the same as existence? Obviously the informational content would be the same, but is there anything that it is like to exist? I think the 'no' answer to the (second) question has obvious flaws. In other words, one could have all the informational content of all existing things and yet one would still not know what existence was. Hence there is no existence? I don't think so.

To PixyMisa I would ask the question:
Is there anything that it is like to be PixyMisa?

No you have it wrong.

Group A (myself included) says the "what it is like to be the information" is simply the effect of being the information. It is trivial identity.

Group B says the "what it is like to be the information" cannot be merely the effect of identity.

EDIT: It is an interesting question, whether or not identity can be computed. I think the main problem is figuring out what exactly the phrase "everything can be computed" even means. I will make a post about this later tonight.
 
