Has consciousness been fully explained?


I for one thought it was funny as could be. But it's getting funnier. :p
By the way, what is your opinion on the simulated human thought experiment?

Would it exhibit human-like behaviour?
 
As I said, there is no perfect computer in the real world. That does not mean the definition of a physical computation is invalid.

In case anyone is interested, I spent a good number of hours formulating a response to this challenge of westprog's regarding the "physical definition of computation."

The whole thing requires its own thread, and I am too lazy to write it all out, but the following brief should be enough to convince anyone who has any business being here discussing it in the first place.

Basically, it comes down to three ideas: stable systems of particles versus unstable ones, how systems interact with each other, and what computation actually does.

The concept of "stable" here is tricky, because -- per westprog's insistence -- we can't use a definition that is human-dependent, and saying "stability" means "it exists into the future for longer" begs the questions "what exists?" and "who defines what constitutes the 'system'?" So by "stable" I mean only that other systems can physically interact with said system in a similar way for a longer duration. For instance, a chunk of metal floating in space is more stable than a gas cloud in space, because any system is able to interact with the chunk of metal in the same way much further into the future, whereas the gas cloud's molecules dissipate across space very quickly.

The second idea is that similar interactions between stable systems (really, the interaction is what is stable, but you get the idea) by definition can occur more frequently than between non-stable ones. Mathematically, one can say stable systems can "recognize" each other with much higher reliability, because they exist in a form that is "recognizable" for a longer time. For example, if two compounds are in a solution and tend to react with each other, then the longer each of those species exists on its own, the more frequently they will be able to react. If each species instead existed only transiently, the reaction rate would be much slower.
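
To make that last point concrete, here is a toy Monte Carlo sketch of my own (the window length and lifetimes are arbitrary assumptions, not chemistry): two species are created at random times and can react only while both still exist, so longer-lived species get more chances to meet.

[code]
import random

def reaction_chance(lifetime, window=100.0, trials=100_000):
    """Chance that an A molecule and a B molecule, each created at a
    random time in [0, window] and persisting for `lifetime`, ever
    coexist (and so get a chance to react)."""
    hits = 0
    for _ in range(trials):
        a = random.uniform(0, window)  # creation time of the A molecule
        b = random.uniform(0, window)  # creation time of the B molecule
        if abs(a - b) < lifetime:      # intervals [a, a+L] and [b, b+L] overlap
            hits += 1
    return hits / trials

for lt in (1, 5, 25):
    print(f"lifetime={lt:>2}: chance of coexisting ~ {reaction_chance(lt):.3f}")
[/code]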

Now, both of those ideas are just a mathematical and scientific basis for the intuitively clear notion that things that are stable exist longer. Is it better to exist longer? No -- there is no "better" in this discussion. Things that exist longer just exist longer, and in turn that means other things that also exist longer can interact with them more reliably.

Here is the point -- the physical definition of computation. I have already established that there is a reason some systems exist longer than others. I have also established that if there were a way a system could keep itself existing longer, it would exist longer. By definition, there is a selective pressure favoring things that exist longer, so any systems that do it -- by any mechanism -- will statistically exist with higher frequencies as time goes on, all else being equal. The only way a system of particles can be said to increase its own stability is via computation.
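
That "higher frequencies as time goes on" claim is just bare statistics, which a few lines of code can show (my own toy setup -- the survival probabilities are arbitrary assumptions, and there is no reproduction or fitness function, only differential persistence):

[code]
import random
from collections import Counter

# A population of "systems" differing only in per-step survival chance.
SURVIVAL = {"fragile": 0.90, "stable": 0.99}
population = ["fragile"] * 500 + ["stable"] * 500

for _ in range(100):
    population = [s for s in population if random.random() < SURVIVAL[s]]

# Whatever persists longer makes up a growing share of what remains.
print(Counter(population))
[/code]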

What is computation? It is the property of a stable system that can be mathematically described as mapping the set of all external states to a SMALLER set of internal states, where "state" is defined according to the idea of "stability" described in the first idea. A system can use a series of computations to exhibit different behaviors such that its stability is increased. It is that simple.
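
To pin the definition down, here is a minimal sketch of my own -- a hypothetical thermostat, chosen only for familiarity -- that maps a continuum of external states onto three internal states and attaches a stability-preserving behavior to each:

[code]
def classify(external_temp_c: float) -> str:
    """Map the set of all external states (temperatures) onto a
    smaller set of internal states."""
    if external_temp_c < 18.0:
        return "too_cold"
    if external_temp_c > 24.0:
        return "too_hot"
    return "ok"

# Each internal state selects a behavior that keeps the system in
# its stable operating range.
BEHAVIOR = {
    "too_cold": "run heater",
    "too_hot": "run cooler",
    "ok": "idle",
}

for t in (3.0, 21.0, 30.0):
    state = classify(t)
    print(f"{t:5.1f} C -> internal state {state!r} -> {BEHAVIOR[state]}")
[/code]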

All life does this (in fact, one can say that life and the machines life creates are the only things that exhibit series of computations -- or "computes" -- although I am sure a few other examples might be found). That is what makes life life -- it computes, and by doing so it increases the stability of the life system, and in turn it exists longer into the future.

So long story short, the next time westprog tries to wiggle out of questions with this "physical definition of computation" stuff, know that it isn't even an issue anymore. I defined it, it exists, and it is a process that only certain systems undergo, not all. And most importantly, it is not human-centric in any way. The idea is completely independent of humans.
 
Robin said:
You did in your responses to my questions.
No, you asked me what I meant by human-like behaviour, and I was answering your question.
Okay. I lost track of the thread then. Sorry.
Er, no. You originally said it was a "sufficiently detailed computer model of a human brain". Now you are changing it to a human, including its brain. This is a somewhat different setup. The more aspects of a human you model, the more likely you are to see your model exhibiting human-like behaviors.
Let me quote what I said:
Robin said:
Suppose that there was a sufficiently detailed computer model of a human brain, and say this brain is given realistic sense data, including modelling of the sense data associated with body control and feedback.
Perhaps it starts as a model of an embryo and models the brain's development up to birth and then through childhood and even adulthood - obviously this would take vast computing power.
But suppose that could be done - do you think it possible that the virtual human being modelled would exhibit human-like behaviour?
Now do you think I meant realistic sense data for a brain without a body? Honestly?
Yes. I honestly didn't know how much of a human you were talking about simulating. Were you talking about simulating a brain in a vat, or an entire human complete with mother, father, and community? You didn't specify that originally, but I now gather that you are speaking of the latter.

In answer to your question, if the simulation is as high-quality and detailed as you are assuming, I agree that the model would display human-like behavior. But I'm no longer sure of the point of your thought experiment. After all, there is no way of discerning whether or not we are currently part of such a high-caliber simulation. :)

An interesting side question: if the simulated human could perceive us, how would it respond to us?
Of course there would be some way of observing the behavior of the model.
Yes. And of course it is presumed to be sufficient to answer the question.
The point I was trying to make is that the interface that allows that to happen will inevitably skew our perceptions of the behavior of the model. How can we determine whether the model is exhibiting human-like behavior or whether it is an artifact of the interface with the model?
Care to provide an answer to this question? Presuming we can do so is rather an easy out. We assume a simulation; we assume we can discern the difference between what the simulation does and what we perceive it doing. Why not just assume it displays human-like behavior? Why are you asking the question? I thought it might be to tease out some interesting assumptions about human beings and their behaviors. I guess I was wrong.
I'm not sure where you are going with this.
I suspect that you know where I am going with this and don't want to go there.
I don't know why you would think that about me. Where is it that you think I don't want to go?
I'm afraid I simply found your last paragraph a non sequitur. Where exactly are you trying to go with this inquiry?
 

Such a simulation could not, in principle or in practice, be established solely with the Turing model. Something else would be necessary.
 
Care to provide an answer to this question?
I don't really understand the question.

How would a property of interface design lead us to think that behaviour was human-like?

I imagine the answer to this question is that we would study the behaviour carefully, rather than give it a cursory once-over, to satisfy ourselves that it is really human-like.
 
In answer to your question, if the simulation is as high-quality and detailed as you are assuming, I agree that the model would display human-like behavior. But I'm no longer sure of the point of your thought experiment. After all, there is no way of discerning whether or not we are currently part of such a high-caliber simulation. :)
Some would say that we can tell that we are not part of such a simulation because we are conscious - and that the simulated human would not be conscious.
 
Such a simulation could not, in principle or in practice, be established solely with the Turing model. Something else would be necessary.
So it could not, even in principle, be done on a computer that runs a program consisting of a set of instructions?

No matter how fast the computer, no matter how much memory, no matter how detailed the model - it could not model a human brain interacting with an environment so as to produce human-like behaviour?
 
Well, I am interested - but I think I am going to have to take a while to take that in.
 
I don't really understand the question.

How would a property of interface design lead us to think that behaviour was human-like?
I imagine the answer to this question is that we would study the behaviour carefully, rather than give it a cursory once-over, to satisfy ourselves that it is really human-like.
How do we determine what is human-like and what is not? For example, I don't consider my computer to be exhibiting human-like behavior just because I get responses in English.

Some would say that we can tell that we are not part of such a simulation because we are conscious - and that the simulated human would not be conscious.
I suppose some would. I don't agree. Is that the type of response you are looking for? Denial of consciousness in a simulation, no matter how high its quality?
 
You're going to have to unpack that, my friend.

What makes consciousness such a special function that it alone requires no executive mechanism to make it happen?

And if it is special, please explain how in the world that happens.

I also don't understand what you mean by an executive mechanism.
Do you mean how do the neurons get started firing? Or what started the neurons firing in the specific way to bring about consciousness? Like maybe there is a trigger that turns it on at a specific point?

I tend to think of consciousness as a side effect of high intelligence.
 
How do we determine what is human-like and what is not? For example, I don't consider my computer to be exhibiting human-like behavior just because I get responses in English.
Neither do I.

But if I got responses in English from something I knew to be a neuron-level simulation of a human brain and associated sense data for a virtual environment, that would obviously change the judgement.

In general I don't think it would be that difficult. For example, I would regard your posts as human-like behaviour.
I suppose some would. I don't agree. Is that the type of response you are looking for? Denial of consciousness in a simulation, no matter how high its quality?
It was just a comment, not a question. I was only pointing out that, for example, some people would say that a computer running a set of instructions would not have the same conscious experiences that humans do.
 
By the way, what is your opinion on the simulated human thought experiment?
That its relationship to human thought would be the same as the relationship a simulated lifeform would have to the actual living lifeform.

Would it exhibit human-like behaviour?
It could be programmed to do so for specific and limited human behaviors.
 
That its relationship to human thought would be the same as the relationship a simulated lifeform would have to the actual living lifeform.


It could be programmed to do so for specific and limited human behaviors.
But you are changing the thought experiment.

The only behaviour it is programmed for is the way the components of the neural architecture interact with each other.

It is a detailed physical model of the human brain and sufficient simulated sense data to provide some sort of environment, including a body.
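
To be concrete about what "only the component interactions are programmed" means, here is a minimal sketch under assumptions of my own (a crude leaky integrate-and-fire update over random weights -- nowhere near a real brain model; the point is purely structural):

[code]
import numpy as np

rng = np.random.default_rng(0)
N = 1000                              # number of model neurons
W = rng.normal(0.0, 0.1, (N, N))      # synaptic weights: the "architecture"
v = np.zeros(N)                       # membrane potentials
THRESHOLD, LEAK = 1.0, 0.9

for _ in range(100):
    sense = rng.normal(0.0, 0.05, N)  # stand-in for simulated sense data
    spikes = (v > THRESHOLD).astype(float)
    # The entire "program": component-level interaction rules, nothing else.
    v = LEAK * v * (1.0 - spikes) + W @ spikes + sense

# Nothing above says "speak English" or "pass a comprehension test";
# any such behaviour would have to emerge from the interactions.
[/code]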
 
But you are changing the thought experiment.

The only behaviour it is programmed for is the way the components of the neural architecture interact with each other.

It is a detailed physical model of the human brain and sufficient simulated sense data to provide some sort of environment, including a body.
My answer doesn't change.
 
That its relationship to human thought would be the same as the relationship a simulated lifeform would have to the actual living lifeform.

The million-dollar question is: what is the nature of that relationship?

It's easy to see the relationship between piloting a flight simulator and flying a plane.

It's not at all easy to see the relationship between "simulated" addition (a program written to add 2 and 2) and a human doing addition.

(I have "simulated" in scare quotes above because I, for one, have a hard time making sense of simulated addition, the same way I have a hard time making sense of simulated thought.)

Do you see the difficulty?

It could be programmed to do so for specific and limited human behaviors.

Which behaviors could it not be programmed to perform (or simulate, if you must)?
 
Neither do I.

But if I got responses in English from something I knew to be a neuron-level simulation of a human brain and associated sense data for a virtual environment, that would obviously change the judgement.

In general I don't think it would be that difficult. For example, I would regard your posts as human-like behaviour.
Sure. But as my illustration with English shows, a great deal of our interpretation of what is human-like and what is not is based on implicit knowledge of the entity exhibiting the behavior as much as on the behavior itself. My posts could, presumably, be written by some sort of 'bot. Getting an error message is something that a human might choose to do. I don't think it's as easy to distinguish the two in your hypothetical situation as you are claiming. In fact, I think it might be devilishly difficult if someone were to contest the conclusion.

It was just a comment, not a question. I was only pointing out that, for example, some people would say that a computer running a set of instructions would not have the same conscious experiences that humans do.

That isn't the same thing. We can't state anything about whether a computer running a set of instructions would have the same conscious experiences that humans do. We can't even state that two humans have the same conscious experiences; we only assume they do (a reasonable assumption, IMO, but still an unproven one).

We could observe a simulated human (a set of instructions run by a computer*) and decide whether it exhibits human-like behaviors, including consciousness. But conscious experiences are inherently subjective and private. They can only be shared imperfectly through words, pictures, etc. I doubt a computer would be able to share with a human what its consciousness felt like any better than another human can.

*Incidentally, if you assume that consciousness could be created in that manner, do you consider the computer to be conscious, or the instructions?
 
My answer doesn't change.
But your answer does not make sense in this context.

It is obviously not programmed to mimic human behaviours as you suggested - so any behaviours that the simulated human exhibited would have to be a consequence of the interactions in the neural architecture.

So why would it exhibit only "limited" behaviours? What are the limits?

Would it be able to talk? Would it be able to pass a primary school comprehension test?

Would it argue philosophy and claim to be conscious?
 
Sure. But as my illustration with English shows, a great deal of our interpretation of what is human-like and what is not is based on implicit knowledge of the entity exhibiting the behavior as much as on the behavior itself. My posts could, presumably, be written by some sort of 'bot. Getting an error message is something that a human might choose to do. I don't think it's as easy to distinguish the two in your hypothetical situation as you are claiming. In fact, I think it might be devilishly difficult if someone were to contest the conclusion.
Actually, I think it would be devilishly simple in the circumstances. Where would even the simplest English sentence come from if the thing you were modelling was the interactions of the components of a human brain's architecture?
That isn't the same thing. We can't state anything about whether a computer running a set of instructions would have the same conscious experiences that humans do. We can't even state that two humans have the same conscious experiences; we only assume they do (a reasonable assumption, IMO, but still an unproven one).
But you did say that we would not know if we were such a simulation, which suggests that you believed that your own conscious experience could be from a simulation.
We could observe a simulated human (a set of instructions run by a computer*) and decide whether it exhibits human-like behaviors, including consciousness. But conscious experiences are inherently subjective and private. They can only be shared imperfectly through words, pictures, etc. I doubt a computer would be able to share with a human what its consciousness felt like any better than another human can.
All good points.

But if I were to learn, for example, that Malerin was a neuron level simulation then I would have to wonder, when he heaps such scorn on those who say they do not know if they are conscious, how he could have such certainty about his own consciousness if he were not conscious.
*Incidentally, if you assume that consciousness could be created in that manner, do you consider the computer to be conscious, or the instructions?
I don't assume that it could; in fact, elsewhere I have mounted an argument in the opposite direction - that my conscious experience could not arise from a computer running a program based on a set of instructions.

Not that I particularly hold to that view; rather, it was in support of agnosticism about the question.

I don't expect to see a simulation of a complete human brain and associated virtual environment any time soon, not even in my lifetime.

But I might see a complete simulation of a mouse. If we could see a simulation of a mouse that behaved like a mouse, then I would tend more toward computationalism.
 