
Has consciousness been fully explained?

How can we tell when we've created consciousness? How can we measure it? How do we quantify it?

If the behavior is exactly the same, you get either consciousness or a p-zombie (if you accept the existence of those), and then we can discuss how to tell the difference.

But first I'd like to know from AkuManiMani why such a functional replica would be impossible.
 
Like I said, it would be a lot simpler to put that line of thinking to the test by just using the ready-grown brain of a cadaver; all of the neural architecture is already in place. Based upon what you're suggesting, it should be a relatively straightforward task of getting it to produce convincing behaviors without consciousness.

I want to create a functional replica out of computer chips. I'm not interested in "simpler" solutions using cadavers, because then we still don't understand anything.

Using a computer chip, we know for sure there's no more hidden "magic" inside, because we know exactly how it works.

I'm still interested in where you think the problem is in producing a functional replica of a neuron using a computer chip. So far, you've been carefully avoiding that question. Is it because you don't know the answer?
 
One small problem with dead bodies: They're dead.

And unlike our abilities with computer chips, we lack the skills to bring them back to life. Besides, that route would be pointless. It would be much simpler to use a cadaver before it's dead.
 
It's necessary in this particular discussion to be very precise as to what is meant. "Simulation" has been used in a lot of different ways.

You can find that out by reading my messages. But, just for the record, I'll explain it, and add some detail at the same time.

I was talking about a system, consisting of:

  • Suitable I/O converters that can translate between all kinds of real-world analog systems and the digital domain. For example, a microphone, combined with amplifier and an ADC (Analog/Digital Converter), as you would find on your computer's sound card. Depending on the application, you could pick from hundreds of different converters: temperature, touch, sound, motion, light, etc...
  • A physical implementation of a digital computer system, which attaches to all these I/O converters. This is basically a standard computer. Since this is a thought experiment, the computers can be as fast, or as small, as we want them. If desired, we can also substitute a single computer for a large network of smaller ones, which is functionally equivalent.
  • A software program running on that system, performing an accurate simulation of whatever physical thing we want to replicate.

I claim that using suitable components, we can create a replica of a human brain, that will be indistinguishable from a real brain, as long as we treat it as a black box. So, you can only look from the outside of the I/O boundary, not inside.
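To make the "software program" component a little more concrete, here is a minimal sketch of a functional neuron model in Python, a leaky integrate-and-fire unit. The class name and all parameter values are illustrative assumptions, not claims about real neuron physiology:

```python
# Toy stand-in for the "accurate simulation" component: a leaky
# integrate-and-fire neuron. All numbers are arbitrary for illustration.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0       # membrane potential (arbitrary units)
        self.threshold = threshold # firing threshold
        self.leak = leak           # fraction of potential kept per step

    def step(self, input_current):
        """Advance one time step; return True if the neuron fires."""
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset after a spike
            return True
        return False

neuron = LIFNeuron()
spikes = [neuron.step(0.3) for _ in range(10)]  # fires periodically
```

The point of the black-box claim is that only the input/output behavior of such units matters, not whether they are implemented in silicon or tissue.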

A simple version could have a USB headset attached, that you could put on your head, and have a realistic conversation with.

AkuManiMani seems to claim that such a system, no matter how accurately we mimic a physical brain (even to the level of individual neurons), is only capable of producing canned responses. I want to know why.
 
Well, for one thing, you are putting a homunculus in there to 'read the pain' and anytime you see a homunculus you're dealing with some form of dualism.
I didn't use the phrase "reading the pain" and I'm not sure where you see a homunculus in what I wrote. I can only assume you interpreted what I wrote in a very different way to what I intended it to mean.

Same with the subjective sensation of pain. It plays a functional role. It did not arise in beings with language or in brains that have a homunculus sitting in them monitoring for pain in order to suppress it.
Ditto above, and from memory I didn't mention language either? Too lazy to look again right now!

The "how does it do that?" question is very appropriate, so I think one way of looking at the issue really is to look at the function that pain serves.
The main (almost "only") "how does it do that?" question that I am really interested in concerns how "phenomenal experience" (of any kind, not just pain) is generated (by "pure computation" or possibly even by other means). My background is mainly in mathematics and software development. Yet I can see no way that software alone could give rise to anything like that subjective experience. The question I have also has nothing to do with the "quality" (clarity or strength or timeliness or degree of integration, etc.) but simply with the existence of such an experience.

Say you are nature and you want to get an animal to avoid things that hurt its body. How are you going to do it? You'd have to send some sort of signal within the animal for it to carry out the appropriate action in order to survive. What we call the subjective sensation of pain seems to fill that functional role quite well.
Well, I am a part of nature! And if I was to write some software to control a simple robot, then I might start with logic that read (or "processed" if that is clearer) the outputs from pressure/damage sensors placed around the exterior parts to detect collisions and "respond" in some way to avoid damage - say by moving rapidly away for a short distance. However I wouldn't generate a subjective/phenomenal/conscious "pain-like" experience - and in fact, I don't even know how to do that.
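For what it's worth, that kind of purely reactive logic can be sketched in a few lines of Python. The sensor names and command format are invented for illustration, and there is of course no "experience" anywhere in it:

```python
# Purely reactive collision avoidance: sensor readings in, motor
# command out. Sensor names and the command tuple are made up.

def avoid_collisions(sensor_readings, damage_threshold=0.5):
    """Map pressure-sensor readings to a retreat command.

    sensor_readings: dict of sensor name -> pressure (0.0 to 1.0).
    Returns a motor command tuple, or None if nothing is triggered.
    """
    triggered = {name: level for name, level in sensor_readings.items()
                 if level >= damage_threshold}
    if not triggered:
        return None
    # Retreat away from the strongest stimulus.
    strongest = max(triggered, key=triggered.get)
    return ("retreat_from", strongest)

cmd = avoid_collisions({"front": 0.8, "left": 0.2, "rear": 0.0})
```

This is exactly the stimulus-response pattern under discussion: nothing in it requires, or produces, a pain-like feeling.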

Sorry this response is so late, but I do appreciate the effort you made in providing the in-depth details about pain. However, mostly I couldn't really see how it was particularly relevant to the questions in my mind. I am focussed on how mere neurons firing in various patterns in my brain (all essentially a "mechanical process" as it were) can give rise to the subjective feelings that I experience (some of the time). It may be that it's all just some kind of "illusion" as has been suggested, and in that case I'd like to understand how that illusion is generated.
 
It may be that it's all just some kind of "illusion" as has been suggested, and in that case I'd like to understand how that illusion is generated.

The problem with trying to understand how the illusion works, is that you're under the influence of the illusion, while you're trying that.

It's like trying to explain the magic trick where the lady is sawed in half, while still trusting your senses that she is. In order to see the illusion, you have to look at it objectively, which is impossible with your own mind.

The only objective way to look at this, is by examining other people, and then the whole problem disappears.
 
Try to stick with the pills that remove the feeling of pain, not complete zombie pills. The first kind is much more interesting for thinking about what you would feel. Do you suppose those kinds of pain-suppressing pills would be possible in theory? And if so, how would you feel after taking them?
Okay, here's what you said earlier:
Suppose you have a chronic pain condition, and none of the standard pain killers can provide good relief. You're talking to your doctor, and he suggests using a new, experimental drug, that doesn't take away any functional part of the pain, but it just removes the subjective feeling that it hurts. The rest of your consciousness remains unaffected, so you can still function exactly the same, and nobody will be able to tell any difference. If you hurt yourself, you'll still yell "ouch", and you'll still apply all the normal remedies like you did before. So, you have all the functional benefits of the pain mechanism, without the nasty side effect that it feels so bad.

Suppose the drug really works. You feel no more pain... But then your arm grabs the bottle of old pain killers, and you see yourself swallow a couple, and when the doctor asks how you are feeling, you hear your voice say: "I'm afraid the new pills aren't working". Think about it. How do you think you'd react?
Not sure where you're trying to go with this. Presumably how I would feel or react depends on how the stuff happening at neuron level relates to my subjective experiences. You have apparently hinted that I should be expecting some kind of out of body type feeling by using a third person view. I can't tell if that was you giving away what you thought the answer should be, or whether you were trying to lead me in some direction. The only "experience" I can think of offhand that might be similar is when having a dream or nightmare containing oddball stuff of that nature. If your point was that I was supposed to end up feeling confused because my voice or arm seemed to have a life of their own, well then, consider me confused. There is a "wandering hand" syndrome, from memory, that you might be interested in.

Sure, but only if we all agree that the program possesses consciousness.
No. Say someone claims to have (for example) a machine that turns water into wine. I don't have to agree that it works before being interested in having a closer look at it, especially if the only description I have is very general. Of course, we might still end up arguing about whether the output really was wine but at least I'd be able to look at the internals of the machine and come up with some kind of opinion on whether it could be doing something like what was being claimed.

Obviously, it's not a matter of saying "I'll accept it". It's something that involves a big change in the way we're looking at things. Similar to a patient suffering from Anton's syndrome admitting they are blind, but even more difficult.
I'm not really sure what you are trying to say. But how about you take a version of my red pill now, seeing that I tried your pill? Imagine that a baby is conceived, born and develops in such a way that all the "physical aspects" of its body and brain are just as we would normally expect them to be. However we suppose that this baby never has any "subjective experience" (which it would seem is true for all of us moments after the sperm has fertilised the egg anyway) throughout its entire life. Neurons are still there. Eyes work, stuff goes in, etc. Baby learns, grows, etc. When and why does the lack of subjective/phenomenal experience make any difference? Or if there is some point in development where there is no choice, when is that, and why?

I came up with a better example. Anton-Babinski syndrome is usually discovered by other people because the patient starts to walk into objects and walls.

Suppose we have a person suffering from Anton-Babinski, but there's a direct neural pathway from the visual to the motor cortex that still works, which makes them instinctively avoid any objects and walls. So, they don't really "see" things, but their body will act as if they do. This kind of condition will be much harder to diagnose. The patient will claim everything is fine, and also appears to behave normally. You can ask them the color of an object, and, even though they can't see it, their mouth will voice the appropriate answer.
Okay. If you say so. What am I supposed to make of this? I don't think we're on the same wavelength. Weird stuff can happen if bits of the brain are altered or damaged, and we can also imagine that weird stuff could happen in various thought experiments. I get the feeling that you think subjective experience necessarily exists because otherwise there'd be some obviously absurd situation? You might have to just lay out your argument in black and white if that's what you're trying to prove.
 
Like I said, it would be a lot simpler to put that line of thinking to the test by just using the ready-grown brain of a cadaver; all of the neural architecture is already in place. Based upon what you're suggesting, it should be a relatively straightforward task of getting it to produce convincing behaviors without consciousness.

I want to create a functional replica out of computer chips. I'm not interested in "simpler" solutions using cadavers, because then we still don't understand anything.

Using a computer chip, we know for sure there's no more hidden "magic" inside, because we know exactly how it works.

Wait wait wait... You've just spent your last several responses to me arguing that we need to create 'functional replicas' of neurons based on the most accurate readings and understandings we have of how real neurons work. When I suggest just using actual neurons and brains to test your hypotheses [they ARE the systems you're trying to model, after all] your response is that it wouldn't work because we still don't understand anything about the actual neurons you're suggesting that we model. Are you really not picking up on the glaring problem here...?

I'm still interested in where you think the problem is in producing a functional replica of a neuron using a computer chip. So far, you've been carefully avoiding that question. Is it because you don't know the answer?

Carefully avoiding it? I've spent the past couple of forum pages explaining why such a thing would not produce convincing behaviors as you hope it would. Here you are, by your own admission, without any understanding of the "magic" that allows biological neurons to produce consciousness, yet you propose that by modeling from such a poor understanding one can replicate the behaviors produced by that consciousness. I've two words for ya: Cargo Cult.
 
Like I said, it would be a lot simpler to put that line of thinking to the test by just using the ready-grown brain of a cadaver; all of the neural architecture is already in place. Based upon what you're suggesting, it should be a relatively straightforward task of getting it to produce convincing behaviors without consciousness.
One small problem with dead bodies: They're dead.

Yea, no sh**, Sherlock. Got any clues as to why that would be an issue? :rolleyes:
 
No. Say someone claims to have (for example) a machine that turns water into wine.

You have to agree on what 'wine' is. Suppose the machine does nothing, but I claim it produces wine. You taste it, and you say it's water. I'll say, no, it's wine, but I'll admit it tastes like water because it's very, very diluted wine.

But how about you take a version of my red pill now, seeing that I tried your pill? Imagine that a baby is conceived, born and develops in such a way that all the "physical aspects" of its body and brain are just as we would normally expect them to be. However we suppose that this baby never has any "subjective experience" (which it would seem is true for all of us moments after the sperm has fertilised the egg anyway) throughout its entire life. Neurons are still there. Eyes work, stuff goes in, etc. Baby learns, grows, etc. When and why does the lack of subjective/phenomenal experience make any difference? Or if there is some point in development where there is no choice, when is that, and why?

If all the physical aspects are identical, the lack of subjective experience will make no difference (assuming no magic is involved). This baby may grow up, and turn into a philosopher like Chalmers.

Okay. If you say so. What am I supposed to make of this? I don't think we're on the same wavelength. Weird stuff can happen if bits of the brain are altered or damaged, and we can also imagine that weird stuff could happen in various thought experiments. I get the feeling that you think subjective experience necessarily exists because otherwise there'd be some obviously absurd situation? You might have to just lay out your argument in black and white if that's what you're trying to prove.

There is no black and white. I can't explain it, so I'm trying to come up with thought experiments to make you (and me) think, and imagine what it is like.

If you can imagine what it would be like to have Anton's syndrome, in combination with the ability to instinctively avoid walking into objects, then perhaps you can imagine that you have that exact condition right now.

Understanding consciousness is not about getting an explanation. Basically the explanation has already been given. What is left, is the difficult part of accepting it.
 
I didn't use the phrase "reading the pain" and I'm not sure where you see a homunculus in what I wrote. I can only assume you interpreted what I wrote in a very different way to what I intended it to mean.


Here is what you wrote:
Clive said:
If the sensation of pain is merely the result of neurons firing in some pattern somewhere in the brain, then why can't the other parts of the brain that need to come up with a remedy simply read the outputs from the part of the network that is generating the "pain pattern"? Why bother to generate a "subjective pain" as well? And how does it do that?


The exact phrase you wrote was "read the outputs from the part of the network that is generating the 'pain pattern'", which I simplified to reading the pain since that seems to be what you are talking about. Having other parts of the brain read the outputs and make sense of them for a coordinated response generally implies a homunculus. If you meant something else then I apologize because I misunderstood your point.

Every part of the brain "reads" the outputs from other areas; one region influences another. That is just a description of the way the brain works. But there is no central area that understands it all and provides an appropriate behavior which is what would be necessary if there is no subjective experience.

The whole point of providing a subjective experience is that it is nature's way of motivating animals to do something. The subjective pain experience (well, the suffering part of pain) appears to be the motivation to move.

Ditto above, and from memory I didn't mention language either? Too lazy to look again right now!


I know you didn't mention language. I was trying to bring out the issue that natural selection can only work with what it has to work with, so there are built in constraints to the kinds of solutions it can manufacture.


The main (almost "only") "how does it do that?" question that I am really interested in concerns how "phenomenal experience" (of any kind, not just pain) is generated (by "pure computation" or possibly even by other means). My background is mainly in mathematics and software development. Yet I can see no way that software alone could give rise to anything like that subjective experience. The question I have also has nothing to do with the "quality" (clarity or strength or timeliness or degree of integration, etc.) but simply with the existence of such an experience.

Right, I understand. I am trying to suggest that the 'feeling' part of pain, for instance, is not necessarily what we think it is. It may actually be the motivation to move. Wouldn't you think that programming the strong motivation to move would be easier to accomplish than something that we think of as pure sensation?

The location and intensity issue is one that I mentioned for completeness sake because I don't want folks to think that pain is entirely accounted for by the suffering aspects of it. It is, unfortunately, more complicated than that.

I realize that we all have difficulty with the 'trippy' nature of phenomenal experience, but it may not be as difficult as we think.


Well, I am a part of nature! And if I was to write some software to control a simple robot, then I might start with logic that read (or "processed" if that is clearer) the outputs from pressure/damage sensors placed around the exterior parts to detect collisions and "respond" in some way to avoid damage - say by moving rapidly away for a short distance. However I wouldn't generate a subjective/phenomenal/conscious "pain-like" experience - and in fact, I don't even know how to do that.


OK, but that is because you have done what a simple animal does. It has a pure stimulus-response model, which is exactly what we do when we put our hands on a burning stove and move the hand away without thinking or even feeling any pain. The pain comes later. None of that is conscious.

What you would need to do is introduce a different way of programming the robot so that it was motivated to move but that motivation competed with other possible behavioral motivations and it had the ability to evaluate each of the motivations in terms of its appropriateness for the situation.

What I am suggesting is that the motivation to move -- one that is not just stimulus-response -- would be the subjective experience of pain. That seems a more tractable problem to me.
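A rough sketch of that kind of motivation-competition scheme is below. The drive names, urgencies, and scoring rule are all invented for illustration; nothing here claims to produce subjective experience, it only shows the selection architecture:

```python
# Competing motivations: each drive proposes itself with an urgency,
# and a context-dependent appropriateness score decides the winner.
# All names and weights are hypothetical.

def select_behavior(motivations, context):
    """motivations: list of (name, urgency, appropriateness_fn).

    Each appropriateness_fn scores 0..1 how well the drive fits the
    current context; the winning drive maximizes urgency * fit."""
    scored = [(name, urgency * fits(context))
              for name, urgency, fits in motivations]
    return max(scored, key=lambda pair: pair[1])[0]

motivations = [
    ("withdraw_from_pain", 0.9, lambda ctx: 1.0 if ctx["damage"] else 0.0),
    ("seek_food",          0.6, lambda ctx: 1.0 if ctx["hungry"] else 0.2),
]
choice = select_behavior(motivations, {"damage": True, "hungry": True})
```

Unlike the pure stimulus-response loop, the "pain" drive here has to compete with other motivations and can lose when the context makes it inappropriate.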

Sorry this response is so late, but I do appreciate the effort you made in providing the in-depth details about pain. However, mostly I couldn't really see how it was particularly relevant to the questions in my mind. I am focussed on how mere neurons firing in various patterns in my brain (all essentially a "mechanical process" as it were) can give rise to the subjective feelings that I experience (some of the time). It may be that it's all just some kind of "illusion" as has been suggested, and in that case I'd like to understand how that illusion is generated.


It isn't mere neuron firings per se but particular types of neuron firings. The location is also not what is important; that is simply the location we have for those processes in our brains. What is important about the location is that we come to understand how the cingulate gyrus does it.
 
The problem with trying to understand how the illusion works, is that you're under the influence of the illusion, while you're trying that.

It's like trying to explain the magic trick where the lady is sawed in half, while still trusting your senses that she is. In order to see the illusion, you have to look at it objectively, which is impossible with your own mind.

The only objective way to look at this, is by examining other people, and then the whole problem disappears.
Feel free to ask Pixy to explain how it might be an illusion as he first raised it. The point about the illusion (if that's what it is) needing to delude itself has already been raised. I threw in the possibility just for completeness.

How can I ever "look at it objectively" if I'm under the influence of this (alleged) illusion all my waking time anyway? Maybe my subconscious will figure it out while I'm sleeping?
 
When I suggest just using actual neurons and brains to test your hypotheses [they ARE the systems you're trying to model, after all] your response is that it wouldn't work because we still don't understand anything about the actual neurons you're suggesting that we model. Are you really not picking up on the glaring problem here...?

The benefit of trying to duplicate is that we can figure out which parts are important and functional, and which parts are not.

Suppose, for instance, that quantum mechanical effects are necessary to produce our behavior, like some people have suggested. Well, if we ignore these effects in our model, we should discover a discrepancy in the results, and we learn something.

It's like researchers trying to come up with a good climate model. I hope you're not arguing this is a waste of time, because we can just wait for the real climate to change, with more accurate results.

Carefully avoiding it? I've spent the past couple of forum pages explaining why such a thing would not produce convincing behaviors as you hope it would.

You have never explained why we couldn't duplicate the behavior of a single neuron, just taking it as a black box with inputs and outputs. Do you think there's consciousness in a single neuron? If not, why do we need to know how it works before we can duplicate one?
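As a toy illustration of the black-box point: two implementations with completely different internals count as the same "neuron" if no input pulse train can tell them apart. The threshold-counting rule here is an arbitrary assumption, not a model of a real neuron:

```python
# Black-box equivalence: same pulses in, same pulses out, so from the
# outside these two neurons are indistinguishable despite different
# internals. Inputs are binary (0/1) pulse trains.

def neuron_a(pulses, threshold=3):
    """Accumulate input pulses; fire (1) and reset at the threshold."""
    out, count = [], 0
    for p in pulses:
        count += p
        if count >= threshold:
            out.append(1)
            count = 0
        else:
            out.append(0)
    return out

def neuron_b(pulses, threshold=3):
    """Same outward behavior, different bookkeeping: a running total."""
    out, total = [], 0
    for p in pulses:
        total += p
        out.append(1 if p == 1 and total % threshold == 0 else 0)
    return out

train = [1, 0, 1, 1, 1, 0, 1, 1]
assert neuron_a(train) == neuron_b(train)  # indistinguishable outside
```

If the behavioral question is all that is being asked, knowing which internal mechanism a neuron uses is beside the point.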
 
How can I ever "look at it objectively" if I'm under the influence of this (alleged) illusion all my waking time anyway?

That would be pretty much impossible, at least when looking at yourself.

You can objectively look at 6 billion other people, and understand they all took the red pill.
 
The benefit of trying to duplicate is that we can figure out which parts are important and functional, and which parts are not.

Suppose, for instance, that quantum mechanical effects are necessary to produce our behavior, like some people have suggested. Well, if we ignore these effects in our model, we should discover a discrepancy in the results, and we learn something.

It's like researchers trying to come up with a good climate model. I hope you're not arguing this is a waste of time, because we can just wait for the real climate to change, with more accurate results.

It's not necessarily a waste of time if they go about making the model and finding out the significant differences between the outputs of the model and those of the biological systems they're attempting to model. You really don't have to take my word for it since, as we speak, people are trying to go down the route you're suggesting right now. Wanna start placing bets? :)


You have never explained why we couldn't duplicate the behavior of a single neuron, just taking it as a black box with inputs and outputs. Do you think there's consciousness in a single neuron ? If not, why do we need to know how it works before we can duplicate one ?

As I was saying earlier, being alive is what seems to be, at the very least, a necessary condition for a system to support consciousness. I already mentioned that there are certain thermodynamic indicators of life and I think it's these properties that make living neurons more suitable to the task than the non-living models you're suggesting. As PixyMisa so eloquently pointed out earlier: a cadaver won't work because it's dead. Same applies to our current computer systems.
 
As I was saying earlier, being alive is what seems to be, at the very least, a necessary condition for a system to support consciousness.

Again, I'm not asking about consciousness. I'm asking why we couldn't replicate the functional behavior of a single neuron. You keep answering unrelated questions.

I already mentioned that there are certain thermodynamic indicators of life and I think it's these properties that make living neurons more suitable to the task than the non-living models you're suggesting.

Suitable for what? Electrical pulses go into a neuron, and pulses come out. We can already duplicate that with dead electronics.

As PixyMisa so eloquently pointed out earlier: a cadaver won't work because it's dead. Same applies to our current computer components.

If you measure a cadaver's neuron, there are no pulses coming out, so it's obvious what the problem is.
 
As I was saying earlier, being alive is what seems to be, at the very least, a necessary condition for a system to support consciousness.
No, not at all. Unless you want to classify computers as alive, which is a little odd.

I already mentioned that there are certain thermodynamic indicators of life and I think it's these properties that make living neurons more suitable to the task than the non-living models you're suggesting.
This is very obviously and categorically untrue. Living neurons and logic gates in computers behave similarly. Dead neurons just sit there and smell bad.

As PixyMisa so eloquently pointed out earlier: a cadaver won't work because it's dead.
Yes.

Same applies to our current computer systems.
No. As I so eloquently point out right now, computers behave like living brains.

If you want to compare a dead brain to something, a good example would be a rock: they are described by their bulk properties; neither one has any switching capability, neither one undergoes non-linear state transformations, neither one requires energy to continue to function, because neither one functions at all.

Give it up, AkuManiMani. You're just wrong.
 
