
Explain consciousness to the layman.

What is unique about pain that simulated life can't feel it?

There is nothing in the robot that can be pain-hurt from the sensations coming from a mic or camera. At least nothing I could figure out. Do you honestly believe that a robot could be pain-hurt by its sensations?
 
Not sure if this is what Rocketdodger was referring to, but: Electrical signals in neurons tend to fire at discrete levels, effectively digitizing any information they carry. This is NOT always binary digital (which can be represented by 1s and 0s), but it is not analog in nature, either.

Remember: "Digital" does not always mean "binary". Our usual counting system, using 10 digits (0 to 9) is also digital.... it has digits, you see.

If neurons were analog, any waves of variation would carry over from one neuron to the next, like ripples in a pond. What actually happens is that most small variations in the signal are NOT preserved as it travels across different neurons.
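To make the digital-but-not-binary point concrete, here's a toy sketch (mine, not from any neuroscience text - the ten levels and the values are invented for illustration) of how quantizing a continuous signal throws away small variations:

```python
# Toy illustration (not a neuron model): quantizing a continuous
# signal into a small set of discrete levels. "Digital" here just
# means finitely many levels - ten of them rather than two.

def quantize(x, levels=10, lo=0.0, hi=1.0):
    """Map a continuous value in [lo, hi] to one of `levels` steps."""
    x = min(max(x, lo), hi)                       # clamp to range
    step = (hi - lo) / levels
    return min(int((x - lo) / step), levels - 1)  # 0 .. levels-1

# Small analog variations vanish: both inputs land on the same level,
# so the "ripple" is not preserved downstream.
print(quantize(0.42), quantize(0.44))  # -> 4 4
```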

I see. I think "quantal" might be a better adjective here, although that comes undesirably close to using the Q word.

That said, I think it's an oversimplification to consider the whole brain discrete just because one particular element of neuronal function is, however major that element may be. On the pre-axon hillock side of things, everything which goes into forming the action potential is certainly analog. Integrate-and-fire neurons cannot replicate that functionality except in degenerate cases; you need a multi-compartment model to correctly emulate the real thing. On the other side of the signal, the spikes themselves contribute not only to the efferent neuron's state but also to the local field potential of the surrounding area, which affects the firing of nearby neurons in very subtle yet important ways (and on a large scale comprises what we call brain waves).

Not to mention the support structures - glia, vasculature - which modulate themselves to enhance neural function with no discrete elements in their functionality at all.

Finally, if you'll humor a little pedantry: neurons don't universally fire action potentials. C. elegans lacks sodium channels; its neurons really do communicate using calcium waves. Ours only do so indirectly, but y'know, still.
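(For anyone who hasn't met the term, a minimal leaky integrate-and-fire neuron looks something like the sketch below - parameter values are invented for illustration. It shows exactly the split being discussed: the sub-threshold membrane potential is continuous, while the output is all-or-nothing spikes.)

```python
# Minimal leaky integrate-and-fire (LIF) neuron, the simple spiking
# model mentioned above. Parameter values are illustrative only.

def lif(current, dt=0.1, tau=10.0, v_rest=-65.0, v_thresh=-50.0, v_reset=-70.0):
    """Integrate an input current trace; return spike times in ms."""
    v = v_rest
    spikes = []
    for i, drive in enumerate(current):
        # Sub-threshold dynamics are continuous (analog)...
        v += dt * (-(v - v_rest) + drive) / tau
        # ...but the output is all-or-nothing: spike, then reset.
        if v >= v_thresh:
            spikes.append(i * dt)
            v = v_reset
    return spikes

# Constant drive above threshold gives regular spiking.
print(lif([20.0] * 1000))  # a handful of evenly spaced spike times
```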
 
Considering how dualism seems to creep up in your own posts, I'd be careful with accusations like these.

I like the "seems to creep up". I suppose I should just be careful in case the Dualism Committee issues a report. Maybe I should lie low for a while.
 
So in the last dozen or so pages, has there been anything but a circular argument from incredulity? That is, "consciousness can't be simulated because I figure there'll be something that's vital to consciousness which can't be simulated?"

This is not the argument. The claim is that while a simulation of respiration is not respiration, or the simulation of a kidney is not a kidney, the simulation of a conscious brain will be a conscious brain. That unlike every other physical process in the body, a simulation will have the functionality of the thing it simulates.

The justification for this claim is usually an assertion along the lines of "consciousness is a computational process, and hence equivalent to any other computational process". This remains an assertion, and it is for the people making this assertion to support it, not for anyone else to disprove it.
 
Unrelated to my post.

Your post was a direct (one word) response to my post, which claimed that reality couldn't be emulated on a computer. In order to clarify what I meant, I've defined the difference between emulation and simulation.

Programs on a computer that purport to represent reality are not equivalent to that reality. They are real processes in this world. They do not constitute other worlds of their own. Halo is not a real interstellar war.
 
There is nothing in the robot that can be pain-hurt from the sensations coming from a mic or camera. At least nothing I could figure out. Do you honestly believe that a robot could be pain-hurt by its sensations?

I don't think we have a robot that can do it, yet.

But there is no reason, in principle, that we can't build a robot that can feel real, genuine pain-hurt (and other forms of qualia, for that matter), once we understand a lot more about how the sensation of pain emerges.
 
This is not the argument. The claim is that while a simulation of respiration is not respiration, or the simulation of a kidney is not a kidney, the simulation of a conscious brain will be a conscious brain. That unlike every other physical process in the body, a simulation will have the functionality of the thing it simulates.

The justification for this claim is usually an assertion along the lines of "consciousness is a computational process, and hence equivalent to any other computational process". This remains an assertion, and it is for the people making this assertion to support it, not for anyone else to disprove it.

Actually, when you consider from back near the start of the thread that the only definition of consciousness which didn't rely on subjective "I knows it when I sees it" metrics was "consciousness is whatever the brain does," then yes, a simulation of whatever the brain does will do whatever the brain does. That's kind of the point.
 
There are whole books on the subject that are a lot more rigorous. I used the term "emerge" as a summary, NOT as a mysterious black box or anything. You can't expect me to rewrite whole chapters of material every time I talk about small matters of computing consciousness.



I can offer a summary of the most compelling theory I have heard:

A mapping of relationships between various models of the self (which are, themselves, modeled after reports of the states of various aspects of the body, and called the "proto-self", "core self", etc.) and other objects external to the self, sustained for a certain amount of time within the network. This second-order mapping can be called the "autobiographical self".

And it is also worth noting that some form of memory might be needed to make the relationships in the mapping make any kind of sense. "Memory", however, would be more of a systematic reconstruction of possible playbacks of past responses to stimuli (motor and emotional, etc.), as opposed to the conventional view that "memory" is the playback of a recording from the senses.

That summary was more or less written in my own words. But there are books that will elaborate on it, if you care to read them.

I focused on "inputs" recently, because someone brought up the idea that other physiological processes might be necessary for consciousness, than merely brain computation. If so, I argued how they can also be simulated to provide sufficient data. The exact details are probably not important, but if you insist on a summary:

Human-like consciousness seems to rely on states of the body being reported in some way (physical pain, perhaps from injury; or a sense of hunger and fatigue, or of being satiated and energetic) in order to develop models of the self. Each one of these can be fooled by anyone who intervenes in the process of reporting. And there is no reason why any of them could never be simulated. You can call such reporting "inputs", if you would like.

In Antonio Damasio's book, he even makes specific claims as to how the posteromedial cortices (PMCs) have a unique set of connections to other parts of the brain that convey information about the body's states, including routes to the older brain stem areas, that other parts of the brain are not exactly privy to.

I don't see anything here that refers directly or implicitly to computation. Obviously there are connections in the brain. That doesn't imply that these connections are digital or computational.

You offer no examples of what you are talking about. Conceptually, I can see no reason why such examples could exist, right now. But I am willing to be proven wrong. Explain, perhaps even only in principle, how something relevant to consciousness can never, ever be computed.

That's covered by Penrose. I don't know if his reasoning is sound, but he makes a case that the human brain is capable of performing tasks that are inherently unfathomable.

The claims I am making are NOT vague assertions. We really DO know a LOT about how consciousness works, already!! Even if we don't know everything, we are still making progress on mastering the mysteries!

And, NOTHING discovered so far contradicts the idea that consciousness can be emulated or simulated on a computer, or in a robot.

That is why I haven't made the claim that such emulation or simulation is impossible. I've simply stated that it is at best unproven.

I urge you to read up on this exciting stuff, before you make such accusations, again.
It will be a claim of mysticism and magic (and god, if you like), until demonstrated otherwise.


The most compelling computational claim I have seen is that the REPORTING of physical processes is an essential ingredient in human-like consciousness. (Though, a more generalized version might not even include that, but that's a rabbit hole we can explore later.)

The argument about which physical processes are (or are not) "essential" to consciousness is a non sequitur. What is really important is that there is something, somewhere, that can somehow be mapped into a model of the self.



I can agree with this, actually.

That is too bad for them. Nothing we know, so far, contradicts the assertion.

As long as productive science can be achieved in following that idea, it will continue to be followed.



Can you give me an example of something that can't be computed by a Turing machine, but can still be computed by a different machine, or not?!

I don't know if there are functions which can be calculated on other machines, but I know that there are functions which cannot, in principle, be calculated on a Turing machine. I'll repeat the quote from the Stanford site, which covers this better than I can hope to do.

Stanford said:
The Church-Turing thesis does not entail that the brain (or the mind, or consciousness) can be modelled by a Turing machine program, not even in conjunction with the belief that the brain (or mind, etc.) is scientifically explicable, or exhibits a systematic pattern of responses to the environment, or is ‘rule-governed’

It's fairly explicit: it cannot be demonstrated as a matter of certainty that the function of the brain is computable. I don't have specific examples to prove that the brain is non-computable. If I did, I wouldn't be saying that the matter remained unproven - I would be claiming that it is impossible for the brain to be computable.
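For concreteness, the standard example of such a function is the halting problem. A minimal sketch of the classic diagonal argument, in code (the candidate checker below is deliberately naive - the point is that *any* checker fails the same way):

```python
# The halting problem: no program can decide, for every program,
# whether it halts. Sketch of the diagonal argument: given ANY
# candidate halting-checker, build an input it must get wrong.

def make_paradox(halts):
    """Given a claimed halting-checker, return a program it misjudges."""
    def paradox():
        if halts(paradox):  # do the opposite of whatever is predicted
            while True:
                pass        # predicted to halt, so loop forever
        # predicted to loop, so halt immediately
    return paradox

claims_halt = lambda f: True   # a (necessarily wrong) candidate
p = make_paradox(claims_halt)
print(claims_halt(p))          # True: the checker says p halts...
# ...but running p() would loop forever. The same construction defeats
# every possible checker, so no general halts() can exist.
```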
 
I see. I think "quantal" might be a better adjective here, although that comes undesirably close to using the Q word.

That said, I think it's an oversimplification to consider the whole brain discrete just because one particular element of neuronal function is, however major that element may be. On the pre-axon hillock side of things, everything which goes into forming the action potential is certainly analog. Integrate-and-fire neurons cannot replicate that functionality except in degenerate cases; you need a multi-compartment model to correctly emulate the real thing. On the other side of the signal, the spikes themselves contribute not only to the efferent neuron's state but also to the local field potential of the surrounding area, which affects the firing of nearby neurons in very subtle yet important ways (and on a large scale comprises what we call brain waves).

Not to mention the support structures - glia, vasculature - which modulate themselves to enhance neural function with no discrete elements in their functionality at all.

Finally, if you'll humor a little pedantry: neurons don't universally fire action potentials. C. elegans lacks sodium channels; its neurons really do communicate using calcium waves. Ours only do so indirectly, but y'know, still.

Mathematically, any continuous event that precedes a discrete event can be replaced by a discrete event. For example, if you have a switch that flips based on the outside temperature, you can replace the continuous outside temperature with a discrete set of temperatures and the switch will function exactly as it did before as long as there is a boundary between the discrete values at the point when the switch flips.

So I don't see how replacing any of the "continuous" functions of neurons with discrete functions *in a way that is transparent to the functioning of the neural network* will alter the end behavior. In fact by definition it would *not* alter the end behavior.
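To spell out the switch example in code - a toy sketch, with the flip point and bin boundaries invented for illustration:

```python
# A threshold switch driven by a continuous temperature behaves
# identically when the input is first discretized, PROVIDED one of
# the bin boundaries sits exactly at the flip point (here 20.0).

FLIP = 20.0

def switch(temp):
    return temp >= FLIP  # the discrete event downstream

def discretize(temp, boundaries=(0.0, 10.0, 20.0, 30.0)):
    """Snap a continuous temperature to the nearest boundary below it."""
    below = [b for b in boundaries if b <= temp]
    return max(below) if below else boundaries[0]

# The switch cannot tell the difference:
for t in [3.7, 19.99, 20.0, 27.5]:
    assert switch(t) == switch(discretize(t))
print("switch output unchanged by discretization")
```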
 
Mathematically, any continuous event that precedes a discrete event can be replaced by a discrete event. For example, if you have a switch that flips based on the outside temperature, you can replace the continuous outside temperature with a discrete set of temperatures and the switch will function exactly as it did before as long as there is a boundary between the discrete values at the point when the switch flips.

So I don't see how replacing any of the "continuous" functions of neurons with discrete functions *in a way that is transparent to the functioning of the neural network* will alter the end behavior. In fact by definition it would *not* alter the end behavior.

Well, the bolded bit is the rub. Things get complicated quick.
 
Actually, when you consider from back near the start of the thread that the only definition of consciousness which didn't rely on subjective "I knows it when I sees it" metrics was "consciousness is whatever the brain does," then yes, a simulation of whatever the brain does will do whatever the brain does. That's kind of the point.

No, an emulation of the brain will do whatever the brain does. However, even this is not sufficient. If we want to do everything that the brain does, then we need something identical to the brain. I don't think that even the most extreme of the "consciousness = life" proponents would insist that every characteristic of the brain is necessary for consciousness, but that is what the above definition implies.

If we are to produce a useful definition of consciousness, we need to abstract the essential elements of the functionality of the brain. However, we are not yet able to do that - and certainly we are not able to guarantee that the functionality of the brain will be adequately represented by a digital computation.
 
I don't want to start a derail, but what digital nature?

Erm, I agree that "digital" is a loaded term. I would say "switching" nature but that is a loaded term as well. The behavior of interest, though, is the ability to map a larger set of external states to a smaller set of internal states in a way that offers utility of survival to some system.

Let me explain:

If a system S exhibits a behavior X that tends to increase the chances of S existing in a form that exhibits X at some point in the future, then all else being equal S will have a higher chance of not only existing but furthermore existing in that specific form at some point in the future.

Contrast this with a system that exhibits no such behavior -- all else being equal, such a system has no influence over the chances of its own existence at some point in the future.

The only type of behavior that satisfies this constraint is what I referenced above -- call it "digitization" or "discretization" or "switching" or "computing" or whatever, it doesn't matter. By mapping a large set of external states to a smaller set of internal states, a system effectively "decides" something.

And a "decision" is the only way for a system in a relatively constant configuration to increase chances of survival -- in some external states, the system goes to a certain internal state that increases its chances, in other external states, the system goes to another internal state. The essential feature of this behavior is that by having a smaller set of internal states than external states the configuration of the system can remain constant enough to be able to exhibit the same type of behavior in the future.

If you actually look at life in detail, you will see that this is the one feature shared by all aspects of it. Nucleotide polymers were able to come out of the primordial gunk precisely because their propagation mechanism is able to map a large set of external states to a smaller internal one, namely "if another molecule near me is potentially part of a similar polymer, I will react with it to form another polymer." Another polymer which continues on into the future. Cells are built of thousands of such chemical reactions, each of them acting in a discrete fashion, allowing the cell to "decide" what to do in numerous external states. Too much salt in the surrounding fluid? Do X. Detect chemical messenger molecules on my surface? Do Y. Internal division clock reached a threshold? Do Z.

Fast forward to an entire organism that uses neurons to decide things for it. Something cutting my arm? A neuron maps the million possible things that could be happening to a simple set of pulses. Photons hitting my eye? A neuron maps the infinite possible combinations to a simple set of pulses. Pulses which can be used for the purpose of extending my life into the future, so that the neurons which generated those pulses (and everything else that is part of me) can continue to exhibit the same behavior in the future.

Thus, "quantal" behavior as you call it is a sort of magic. It leads to systems that keep themselves existing by active behavior, like life.
 
No, an emulation of the brain will do whatever the brain does. However, even this is not sufficient. If we want to do everything that the brain does, then we need something identical to the brain. I don't think that even the most extreme of the "consciousness = life" proponents would insist that every characteristic of the brain is necessary for consciousness, but that is what the above definition implies.

If we are to produce a useful definition of consciousness, we need to abstract the essential elements of the functionality of the brain. However, we are not yet able to do that - and certainly we are not able to guarantee that the functionality of the brain will be adequately represented by a digital computation.

No, a simulation should be sufficient to do whatever the brain does. An emulation will do whatever the brain does the way the brain does it. I happen to think emulation will turn out to be easier since you can sacrifice wasted processing in return for not having to guess at function, as the conversation between me and rocketdodger will be drifting toward, but simulations should be more than sufficient once we know more about the problems.

The rest of your post goes all god of the gaps. We've encountered nothing in neurobiology yet that can't be simulated or emulated, so all signs point to machine consciousness being perfectly doable. To insist that we should try to remain carefully neutral just in case something unsimulatable wanders in from left field is silly. We don't have souls, westprog, I'm sorry.
 
Erm, I agree that "digital" is a loaded term. I would say "switching" nature but that is a loaded term as well. The behavior of interest, though, is the ability to map a larger set of external states to a smaller set of internal states in a way that offers utility of survival to some system.

Let me explain:

If a system S exhibits a behavior X that tends to increase the chances of S existing in a form that exhibits X at some point in the future, then all else being equal S will have a higher chance of not only existing but furthermore existing in that specific form at some point in the future.

Contrast this with a system that exhibits no such behavior -- all else being equal, such a system has no influence over the chances of its own existence at some point in the future.

The only type of behavior that satisfies this constraint is what I referenced above -- call it "digitization" or "discretization" or "switching" or "computing" or whatever, it doesn't matter. By mapping a large set of external states to a smaller set of internal states, a system effectively "decides" something.

And a "decision" is the only way for a system in a relatively constant configuration to increase chances of survival -- in some external states, the system goes to a certain internal state that increases its chances, in other external states, the system goes to another internal state. The essential feature of this behavior is that by having a smaller set of internal states than external states the configuration of the system can remain constant enough to be able to exhibit the same type of behavior in the future.

If you actually look at life in detail, you will see that this is the one feature shared by all aspects of it. Nucleotide polymers were able to come out of the primordial gunk precisely because their propagation mechanism is able to map a large set of external states to a smaller internal one, namely "if another molecule near me is potentially part of a similar polymer, I will react with it to form another polymer." Another polymer which continues on into the future. Cells are built of thousands of such chemical reactions, each of them acting in a discrete fashion, allowing the cell to "decide" what to do in numerous external states. Too much salt in the surrounding fluid? Do X. Detect chemical messenger molecules on my surface? Do Y. Internal division clock reached a threshold? Do Z.

Fast forward to an entire organism that uses neurons to decide things for it. Something cutting my arm? A neuron maps the million possible things that could be happening to a simple set of pulses. Photons hitting my eye? A neuron maps the infinite possible combinations to a simple set of pulses. Pulses which can be used for the purpose of extending my life into the future, so that the neurons which generated those pulses (and everything else that is part of me) can continue to exhibit the same behavior in the future.

Thus, "quantal" behavior as you call it is a sort of magic. It leads to systems that keep themselves existing by active behavior, like life.

You mean a stable attractor?

I think there's a more precise synonym used a lot in computational neuroscience, but damned if I can think of it just now.
 
I don't see anything here that refers directly or implicitly to computation. Obviously there are connections in the brain. That doesn't imply that these connections are digital or computational.
The models of self I am referring to emerge as a computation within the brain. Sorry if I did not make that point explicit.

I already covered the digital nature of the brain, by pointing out how neurons fire at discrete levels, not continuous waves.


That's covered by Penrose. I don't know if his reasoning is sound, but he makes a case that the human brain is capable of performing tasks that are inherently unfathomable.
"Currently" unfathomable, if you lived in 1989.

We learned a lot more about the mind since then. And quantum computation has not been shown to be part of it, yet. More progress in the field of consciousness has been made at larger-than-quantum levels.

I used to think QM had something to do with consciousness, myself, some time ago. But, I've learned a lot more, since then.

That is why I haven't made the claim that such emulation or simulation is impossible. I've simply stated that it is at best unproven.
Seems like an unproductive attitude, to me, but fine: Be that way.

It's fairly explicit: it cannot be demonstrated as a matter of certainty that the function of the brain is computable. I don't have specific examples to prove that the brain is non-computable. If I did, I wouldn't be saying that the matter remained unproven - I would be claiming that it is impossible for the brain to be computable.
I can't say that there is 100% certainty that the brain or mind is computable, either. Who knows, maybe someday we will discover something very specific about the mind that can't ever be computed, in any way.

But, until such a discovery is made, it looks like productive science can still be achieved by trying. The more we try, the more we learn. That is my position.

We're not asking to build perpetual motion machines, here. There are specific reasons those won't work. We have no such reasons, yet, why conscious robots can't exist.
 
No, a simulation should be sufficient to do whatever the brain does. An emulation will do whatever the brain does the way the brain does it. I happen to think emulation will turn out to be easier since you can sacrifice wasted processing in return for not having to guess at function, as the conversation between me and rocketdodger will be drifting toward, but simulations should be more than sufficient once we know more about the problems.

The rest of your post goes all god of the gaps. We've encountered nothing in neurobiology yet that can't be simulated or emulated, so all signs point to machine consciousness being perfectly doable. To insist that we should try to remain carefully neutral just in case something unsimulatable wanders in from left field is silly. We don't have souls, westprog, I'm sorry.

I'll continue to make the point which never seems to be addressed.

We don't expect a computer simulation of respiration to actually convert oxygen to carbon dioxide. We don't expect a computer simulation of the kidneys to leak urine. We don't expect a computer simulation of a foot to kick someone we disagree with. Yet it is to be taken as obvious that a computer simulation of a brain would be able to think.

I don't see why this particular physical process is supposedly unlike all other physical processes in the body.
 
I'll continue to make the point which never seems to be addressed.

We don't expect a computer simulation of respiration to actually convert oxygen to carbon dioxide. We don't expect a computer simulation of the kidneys to leak urine. We don't expect a computer simulation of a foot to kick someone we disagree with. Yet it is to be taken as obvious that a computer simulation of a brain would be able to think.

I don't see why this particular physical process is supposedly unlike all other physical processes in the body.
Because consciousness is a process, not a product. It's whatever the brain does.

Using one of your analogies at random, let's assign a term to the kidneys. "Makinwater" is whatever the kidney does. A computer simulation of the kidney would not leak urine, but it would demonstrate the process of makinwater.
 
The models of self I am referring to emerge as a computation within the brain. Sorry if I did not make that point explicit.

I already covered the digital nature of the brain, by pointing out how neurons fire at discrete levels, not continuous waves.

And you're having a dispute over that in a separate subthread.

"Currently" unfathomable, if you lived in 1989.

No, I mean that according to Penrose's claim, there are functions that are provably non-computable, which a human mind can solve. That's his proof that the brain is not a Turing Machine. I am not insisting that his argument is true, but I think it's at least something to be borne in mind. The debate continues.

We learned a lot more about the mind since then. And quantum computation has not been shown to be part of it, yet. More progress in the field of consciousness has been made at larger-than-quantum levels.

I used to think QM had something to do with consciousness, myself, some time ago. But, I've learned a lot more, since then.

Seems like an unproductive attitude, to me, but fine: Be that way.


I think there's a name for the unproductive attitude of not believing that something is true until it's proven. Perhaps someone else on the skeptics forum can think of it.

I can't think how progress is precluded in any way by an acceptance that we are not yet certain of the answers. If we look at scientific history, most bottlenecks have resulted from blind attachment to certainties that have later turned out to be false.

I can't say that there is 100% certainty that the brain or mind is computable, either. Who knows, maybe someday we will discover something very specific about the mind that can't ever be computed, in any way.

But, until such a discovery is made, it looks like productive science can still be achieved by trying. The more we try, the more we learn. That is my position.

We're not asking to build perpetual motion machines, here. There are specific reasons those won't work. We have no such reasons, yet, why conscious robots can't exist.

I agree that this is an abstract discussion. The optimism of the early days of Artificial Intelligence has disappeared now. We didn't get conscious computers after thirty years, or fifty years. We can settle down for a long wait.
 
Because consciousness is a process, not a product. It's whatever the brain does.

And now we have a definition.

How do we know that consciousness is a process? How do we know that it is not connected with the physical activities of the brain?

You can assert this, but not demonstrate it.

Using one of your analogies at random, let's assign a term to the kidneys. "Makinwater" is whatever the kidney does. A computer simulation of the kidney would not leak urine, but it would demonstrate the process of makinwater.

It would demonstrate it. It wouldn't be doing it. It would allow predictions to be made as to the rate of production of urine, but nobody thinks that it's the same process.
 
And now we have a definition.

How do we know that consciousness is a process? How do we know that it is not connected with the physical activities of the brain?

You can assert this, but not demonstrate it.

Because "whatever the brain does" is the only thing all the things people assert to be a part of consciousness have in common. If you think there is more to it than that, you carry the burden of proof.


It would demonstrate it. It wouldn't be doing it. It would allow predictions to be made as to the rate of production of urine, but nobody thinks that it's the same process.

I think it's the same process. I believe I said as much.
 