AlBell
Philosopher
- Joined: Mar 28, 2009
- Messages: 6,360
/derail
I'll try to remember in future how hurtful and demeaning that kind of joke can be.

I for one thought it was funny as could be. But it's getting funnier.
By the way, what is your opinion on the simulated human thought experiment?
/derail
As I said, there is no perfect computer in the real world. That does not mean the definition of a physical computation is invalid.
Okay. I lost track of the thread then. Sorry.

Robin said:
No, you asked me what I meant by human-like behaviour and I was answering your question.

You did in your responses to my questions.
Yes. I honestly didn’t know how much of a human you were talking about simulating. Were you talking about simulating a brain in a vat or an entire human complete with mother, father and community? You didn’t specify that originally, but I now gather that you are speaking of the latter.

Let me quote what I said:

Er, no. You originally said it was a "sufficiently detailed computer model of a human brain". Now you are changing it to a human, including its brain. This is a somewhat different setup. The more aspects of a human you model, the more likely you will see your model exhibiting human-like behaviors.
Now do you think I meant realistic sense data of a brain without a body? Honestly?

robin said:
Suppose that there was a sufficiently detailed computer model of a human brain, and say this brain is given realistic sense data, including modelling of the sense data associated with body control and feedback.
Perhaps it starts as a model of an embryo and models the brain development up to birth and then childhood, even adulthood - obviously this would take vast computing power.
But suppose that could be done - do you think it possible that the virtual human being modelled would exhibit human-like behaviour?
Care to provide an answer to this question? Presuming we can do so is rather an easy out. We assume a simulation, we assume we can discern the difference between what the simulation does and what we perceive it doing. Why not just assume it displays human-like behavior? Why are you asking the question? I thought it might be to tease out some interesting assumptions about human beings and their behaviors. I guess I was wrong.

Yes. And of course it is presumed to be sufficient to answer the question.

Of course there would be some way of observing the behavior of the model.
The point I was trying to make is that the interface that allows that to happen will inevitably skew our perceptions of the behavior of the model. How can we determine if the model is exhibiting the human-like behavior, or if it is an artifact of the interface with the model?
I don’t know why you would think that about me. Where is it that you think I don’t want to go?

I suspect that you know where I am going with this and don't want to go there.

I'm not sure where you are going with this.
An interesting side question: if the simulated human could perceive us, how would it respond to us?
I’m afraid I simply found your last paragraph a non sequitur. Where exactly are you trying to go with this inquiry?
I don't really understand the question.

Care to provide an answer to this question?
Some would say that we can tell that we are not part of such a simulation because we are conscious - and that the simulated human would not be conscious.

In answer to your question, if the simulation is as high quality and detailed as you are assuming, I agree that the model would display human-like behavior. But I’m no longer sure of the point of your thought experiment. After all, there is no way of discerning whether or not we are currently part of such a high-caliber simulation.
So it could not, even in principle, be done on a computer that runs a program consisting of a set of instructions?

Such a simulation could not, in principle and practice, be established solely with the Turing model. Something else would be necessary.
Breathing is both volitional and autonomic; you can stop breathing, especially from damage to the CNS and PNS.

I don't think, however, that you forget to breathe.
Well I am interested - but I think I am going to have to take a while to take that in.

In case anyone is interested, I spent a good number of hours formulating a response to this challenge of westprog's regarding a "physical definition of computation."
The whole thing requires its own thread, and I am too lazy to write it all out, but the following brief should be enough to convince anyone who respectably has business being here discussing it in the first place.
Basically, it comes down to three ideas: stable systems of particles versus unstable ones, how systems interact with each other, and what computation actually does.
The concept of "stable" here is tricky, because -- per westprog's insistence -- we can't use a definition that is human-dependent, and saying "stability" means "it exists into the future for longer" begs the question: what exists? Who defines what constitutes the "system"? So by "stable" I mean only that other systems can physically interact with said system in a similar way for a longer duration. For instance, a chunk of metal floating in space is more stable than a gas cloud in space, because any system is able to interact with a chunk of metal in the same way much further into the future, compared to a gas cloud whose molecules dissipate across space very quickly.
The second idea is that similar interactions between stable systems (really, the interaction is what is stable, but you get the idea) can by definition occur more frequently than between non-stable ones. Mathematically, one can say stable systems can "recognize" each other with much higher reliability because they exist in a form that is "recognizable" for a longer time. For example, if two compounds are in a solution and tend to react with each other, then the longer each of those species exists on its own, the more frequently they will be able to react. If each species instead existed only transiently, the reaction rate would be much slower.
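The reaction-rate point can be illustrated with a toy simulation of my own devising (the function `encounter_rate` and its parameters are purely hypothetical, not anything from the thread): an encounter succeeds only if both partner molecules still exist when the encounter is attempted, so a longer-lived species interacts far more reliably.

```python
# Toy illustration (my own, not from the post): species that persist longer
# get more chances to interact. Each molecule was created at a uniformly
# random time in the past and decays after an exponentially distributed
# lifetime; we count how often a randomly sampled pair is still intact.

import random

def encounter_rate(mean_lifetime: float, trials: int = 100_000) -> float:
    """Fraction of random encounter attempts in which both partner
    molecules still exist at the moment of the encounter."""
    hits = 0
    for _ in range(trials):
        both_alive = True
        for _partner in range(2):
            age = random.random()  # time since this molecule was created
            lifetime = random.expovariate(1.0 / mean_lifetime)
            if lifetime < age:     # already dissipated before the encounter
                both_alive = False
        if both_alive:
            hits += 1
    return hits / trials

random.seed(0)
stable = encounter_rate(mean_lifetime=10.0)   # long-lived species
transient = encounter_rate(mean_lifetime=0.1) # short-lived species
print(stable > transient)  # the stable species interacts far more often
```

The numbers themselves don't matter; the point is only the ordering, which is exactly the claim above: stability raises the reliable interaction rate.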
Now both of those ideas are just a mathematical and scientific basis for the intuitively clear notion that things that are stable exist longer. Is it better to exist longer? No -- there is no "better" in this discussion. Things that exist longer just exist longer and in turn that means other things that also exist longer can interact with them more reliably.
Here is the point -- the physical definition of computation. I have already established that there is a reason some systems exist longer than other systems. I have already established that if there were a way a system could keep itself existing longer, it would exist longer. By definition, there is a selective pressure favoring things that exist longer, so any systems that do it -- by any mechanism -- will statistically exist with higher frequencies as time goes on, all else being equal. The only way a system of particles can be said to increase its own stability is via computation.
What is computation? It is the property of a stable system that can be mathematically described as mapping the set of all external states to a SMALLER set of internal states, where "state" is defined according to the idea of "stability" described in the first idea. A system can use a series of computations to exhibit different behaviors such that its stability is increased. It is that simple.
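To make the definition concrete, here is a minimal sketch of my own (the thermostat example and the names `internal_state` and `step` are purely illustrative, not from the thread). A thermostat "computes" in exactly the sense above: it maps a large set of external states (temperatures) onto a two-element set of internal states, and behaving according to that map keeps the system within a stable band.

```python
# Illustrative sketch (my own toy example): a thermostat as a minimal
# "computing" system -- it maps many external states onto a smaller set
# of internal states, and uses that mapping to increase its stability.

def internal_state(external_temp: float) -> str:
    """Map the (effectively infinite) set of external temperatures
    onto a two-element set of internal states."""
    return "heat_on" if external_temp < 20.0 else "heat_off"

def step(temp: float) -> float:
    """One time step: behavior depends only on the internal state,
    and that behavior pulls the temperature back toward a stable band."""
    if internal_state(temp) == "heat_on":
        return temp + 1.0   # heater nudges the temperature up
    return temp - 0.5       # passive cooling

# Without the mapping, the temperature would drift; with it, the
# system settles into a narrow, stable band.
temp = 5.0
for _ in range(50):
    temp = step(temp)
print(18.0 <= temp <= 22.0)  # True: the state mapping stabilized the system
```

The compression is the key design point: infinitely many external states collapse to just two internal ones, and that is enough to produce stability-increasing behavior.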
All life does this (in fact, one can say that life and the machines life creates are the only things that exhibit series of computations -- or "computes" -- although I am sure a few other examples might be found). That is what makes life life -- it computes and by doing so it increases the stability of the life system and in turn it exists longer into the future.
So long story short, the next time westprog tries to wiggle out of questions with this "physical definition of computation" stuff, know that it isn't even an issue anymore. I defined it, it exists, it is a process that only certain systems undergo, not all. And most importantly, it is not human-centric in any way. The idea is completely independent of humans.
How to determine what is human-like and what is not? For example, I don't consider my computer to be exhibiting human-like behavior just because I get responses in English.

I don't really understand the question.
How would a property of interface design lead us to think that behaviour was human-like?
I imagine the answer to this question is that we would study the behaviour carefully rather than give it a cursory once over, to satisfy ourselves that it is really human-like.
I suppose some would. I don't agree. Is that the type of response you are looking for? Denial of consciousness in a simulation no matter how high quality?

Some would say that we can tell that we are not part of such a simulation because we are conscious - and that the simulated human would not be conscious.
You're going to have to unpack that, my friend.
What makes consciousness such a special function that it alone requires no executive mechanism to make it happen?
And if it is special, please explain how in the world that happens.
Neither do I.

How to determine what is human-like and what is not? For example, I don't consider my computer to be exhibiting human-like behavior just because I get responses in English.
It was just a comment, not a question. I was only pointing out that, for example, some people would say that a computer running a set of instructions would not have the same conscious experiences that humans do.

I suppose some would. I don't agree. Is that the type of response you are looking for? Denial of consciousness in a simulation no matter how high quality?
That its relationship to human thought would be the same as the relationship a simulated lifeform would have to the actual living lifeform.

By the way, what is your opinion on the simulated human thought experiment?
It could be programmed to do so for specific and limited human behaviors.

Would it exhibit human-like behaviour?
But you are changing the thought experiment.

That its relationship to human thought would be the same as the relationship a simulated lifeform would have to the actual living lifeform.
It could be programmed to do so for specific and limited human behaviors.
My answer doesn't change.

But you are changing the thought experiment.
The only behaviours it is programmed for are the ways the components of the neural architecture interact with each other.
It is a detailed physical model of the human brain and sufficient simulated sense data to provide some sort of environment, including a body.
Sure. But as my illustration with English shows, a great deal of our interpretation of what is human-like and what is not is based on implicit knowledge of the entity exhibiting the behavior as much as on the behavior itself. My posts could, presumably, be written by some sort of 'bot. Getting an error message is something that a human might choose to do. I don't think it's as easy to distinguish the two in your hypothetical situation as you are claiming. In fact, I think it might be devilishly difficult if someone were to contest the conclusion.

Neither do I.
But if I got responses in English from something I knew to be a neuron level simulation of a human brain and associated sense data for a virtual environment, that would obviously change the judgement.
In general I don't think it would be that difficult. For example, I would regard your posts as human-like behaviour.
But your answer does not make sense in this context.

My answer doesn't change.
Actually I think it would be devilishly simple in the circumstances. Where would even the simplest English sentence come from if the thing you were modelling was the interactions of the components of a human brain's architecture?

Sure. But as my illustration with English shows, a great deal of our interpretation of what is human-like and what is not is based on implicit knowledge of the entity exhibiting the behavior as much as on the behavior itself. My posts could, presumably, be written by some sort of 'bot. Getting an error message is something that a human might choose to do. I don't think it's as easy to distinguish the two in your hypothetical situation as you are claiming. In fact, I think it might be devilishly difficult if someone were to contest the conclusion.
But you did say that we would not know if we were such a simulation, which suggests that you believed that your own conscious experience could be from a simulation.

That isn't the same thing. We can't state anything about whether a computer running a set of instructions would have the same conscious experiences that humans do. We can't even state that two humans have the same conscious experiences; we only assume they do (a reasonable assumption IMO, but still an unproven assumption).
All good points.

We could observe a simulated human (a set of instructions run by a computer*) and decide if it exhibits human-like behaviors, including consciousness. But conscious experiences are inherently subjective and private. They can only be shared imperfectly through words, pictures, etc. I doubt a computer would be able to share with a human what its consciousness felt like any better than another human can.
I don't assume that it could; in fact, elsewhere I have mounted an argument in the opposite way - that my conscious experience could not arise from a computer running a program based on a set of instructions.

*Incidentally, if you assume that consciousness could be created in that manner, do you consider the computer to be conscious, or the instructions?