Has consciousness been fully explained?

Status
Not open for further replies.
Yes, it's quite straightforward to take a program designed for real-time data acquisition and control, grab the data going in and out, and run a simulated version of the program. What isn't possible is to take a simulation program and just plug it into the real world. It has to be rewritten to add in the real-time facility that isn't present in a simulator.
Wrong. You can simulate time just as well as anything else.
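As a toy illustration of that point (the class and function names below are invented for this sketch, not taken from any real acquisition framework), the same control loop can be written against an abstract clock, so it runs unchanged against wall-clock time or a purely simulated one:

```python
import time

class RealClock:
    """Wraps wall-clock time for real-world deployment."""
    def now(self):
        return time.monotonic()

class SimulatedClock:
    """Advances time only when told to, so no real waiting occurs."""
    def __init__(self):
        self.t = 0.0
    def now(self):
        return self.t
    def advance(self, dt):
        self.t += dt

def sample_every(clock, period, readings, duration):
    """A toy acquisition loop: record one reading per period."""
    start = clock.now()
    samples = []
    next_due = start
    while clock.now() - start < duration:
        if clock.now() >= next_due:
            samples.append(readings(clock.now()))
            next_due += period
        if isinstance(clock, SimulatedClock):
            clock.advance(period)   # simulated time: jump ahead instantly
    return samples

# The same loop, driven by simulated time: 5 "seconds" pass instantly.
clock = SimulatedClock()
result = sample_every(clock, period=1.0, readings=lambda t: t, duration=5.0)
print(result)  # [0.0, 1.0, 2.0, 3.0, 4.0]
```

Passing a `RealClock` instead (and sleeping between polls) would run the identical loop against real time, which is the sense in which time is just another simulated quantity.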
 
Deterministic does not mean predictable.
I am using the Lorenz definition for deterministic - only one possible next state. That would seem to suggest predictability. How would we know that a process was deterministic unless we had a way of predicting it?
I repeat - the behaviour of the hurricane is as deterministic as the behaviour of the silicon chip.
They are both deterministic. But we can predict the next state of the hurricane as a matter of probability.

We can determine the next state of a properly functioning CPU exactly.
If you are referring to the objective behaviour of physical systems, then the fact that the same physical laws apply means that they are equally deterministic. Being deterministic is an objective property of the system. Being predictable is a statement about our knowledge of the system. If you're describing objective properties of the system, then it's necessary to accept that its behaviour is determined solely by the laws of physics, not by what we know about it.
All irrelevant since you are still ignoring the important feature of the definition.

A computation has an exactly measurable next state. A physical process does not necessarily have an exactly measurable next state.
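Lorenz's own weather model makes the hurricane point concrete. In the sketch below (crude Euler integration with the classic parameter values, chosen purely for illustration), each state has exactly one possible next state, yet a perturbation in the tenth decimal place grows until the two trajectories are unrelated:

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One deterministic Euler step of the Lorenz system:
    given a state, there is exactly one possible next state."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-10)   # same state, perturbed in the 10th decimal
for _ in range(3000):          # 30 units of model time
    a, b = lorenz_step(a), lorenz_step(b)

# Deterministic: the same state always yields the same next state.
# Unpredictable: the tiny perturbation has grown enormously.
print(abs(a[0] - b[0]))
```

So "only one possible next state" and "predictable in practice" come apart: any finite measurement error in the initial state eventually swamps the forecast, even though the rule itself is exact.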
Well, maybe you shouldn't have been so quick to correct me.
I will remember to double check my spelling in future posts to you.
 
I don't have a firm opinion one way or the other. When people talk about a model 'behaving', it's generally referring to information outputs that are displayed for our edification. It seems to me that any computer simulation's behavior is nearly as dependent on how it is interpreted by humans as it is on the actual output.
No. The behaviour is not dependent on the interpretation at all. The meaning you assign to it - sure. But that applies equally to human behaviour.

Dealing with infants - or toddlers who've just decided not to speak right now, or don't have the experience to describe something - shows this clearly.

I guess I don't understand how you define 'human-like behavior' for a computer model. Could you explain what you mean? How is the model's 'behavior' discerned and interpreted by humans? Does it interact with us directly, or only with simulated other beings in its matrix-like existence?
How about we start with language? It's certainly a behaviour, and it's certainly something computers have the capacity to engage in.
 
They are both deterministic. But we can predict the next state of the hurricane as a matter of probability.

We can determine the next state of a properly functioning CPU exactly.
Yep. And, on the flip side, you can't predict it as a matter of probability. It's our old friend the halting problem in a new hat.
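A toy sketch of both halves of that point (the three-instruction machine below is invented for illustration, not any real CPU or ISA): the next state is an exact, pure function of the current state, so one-step prediction is certain, while the general question of whether such a machine ever halts is the famously undecidable one:

```python
def step(state):
    """Next state of a toy machine: a pure function of the current
    state, so a 'properly functioning CPU' has exactly one next state."""
    pc, acc = state
    program = [("inc", None), ("jnz", 0), ("halt", None)]
    op, arg = program[pc]
    if op == "inc":
        return (pc + 1, acc + 1)
    if op == "jnz":
        # jump back while the accumulator is not a multiple of 3
        return (arg, acc) if acc % 3 != 0 else (pc + 1, acc)
    return state  # halt: the state is a fixed point

s = (0, 0)
trace = [s]
while True:
    nxt = step(s)
    if nxt == s:          # reached the halt fixed point
        break
    s = nxt
    trace.append(s)

print(trace[-1])  # (2, 3): halted with accumulator 3
```

Every step here is exactly determined and exactly computable in advance; what no general algorithm can tell you, for an arbitrary program fed to `step`, is whether the loop above ever terminates.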
 
But it would behave like the system?

For example it might get onto the internet and start debating about consciousness?

It might, like Interesting Ian, argue that "I have incorrigible certain knowledge that I am conscious" - that sort of behaviour?

Do you think you might not be conscious, Robin?
 
I don't have a firm opinion one way or the other. When people talk about a model 'behaving', it's generally referring to information outputs that are displayed for our edification. It seems to me that any computer simulation's behavior is nearly as dependent on how it is interpreted by humans as it is on the actual output.

I guess I don't understand how you define 'human-like behavior' for a computer model. Could you explain what you mean?
Well for example it receives pain data and it sends messages to muscles to flinch and sends messages to the relevant muscles to form the sound "ow".

It receives taste data and it sends messages to the relevant muscles to form the words "That tastes great".

It receives sense data representing a post on JREF and it responds by sending out messages to the finger muscles to type "But everybody knows that they are conscious - people who claim not to know that they are conscious are lying".

That sort of thing.
How is the model's 'behavior' discerned and interpreted by humans?
Do I really need to be that specific? Computer graphics and sound I suppose. It does not really matter how the data is read.
Does it interact with us directly, or only with simulated other beings in its matrix-like existence?
If the computer was fast enough it might interact with us directly, but obviously if this was the case then we would have to be represented in its virtual world.
 
Do you think you might not be conscious, Robin?
No, of course not.

What on earth would lead you to say that?

What is your opinion about my little thought experiment by the way?
 
No. The behaviour is not dependent on the interpretation at all. The meaning you assign to it - sure. But that applies equally to human behaviour.
Yes? So how do we determine whether or not it has human behavior without first defining what that means for a computer simulation?
How about we start with language? It's certainly a behaviour, and it's certainly something computers have the capacity to engage in.
Computers currently are able to do that. Does that mean that displaying an error message should be considered a human-like behavior? Or are we looking for something a bit more human-like from our simulated brain?
 
I wonder if, when Schroedinger said "Suppose you seal a cat in a box...", people said "What kind of cat?"
 
Yes? So how do we determine whether or not it has human behavior without first defining what that means for a computer simulation?
If a computer exhibits the same behaviour as a human, then it exhibits the same behaviour as a human.

Computers currently are able to do that. Does that mean that displaying an error message should be considered a human-like behavior? Or are we looking for something a bit more human-like from our simulated brain?
That's why Turing proposed his famous test: If you cannot tell the difference between computer behaviour and human behaviour, then you have no basis for insisting there is a difference.

Right now, of course, you can easily tell the difference. But there's no reason to suppose that this will remain true.
 
Robin said:
I don't have a firm opinion one way or the other. When people talk about a model 'behaving', it's generally referring to information outputs that are displayed for our edification. It seems to me that any computer simulation's behavior is nearly as dependent on how it is interpreted by humans as it is on the actual output.
I guess I don't understand how you define 'human-like behavior' for a computer model. Could you explain what you mean?
Well for example it receives pain data and it sends messages to muscles to flinch and sends messages to the relevant muscles to form the sound "ow".
It receives taste data and it sends messages to the relevant muscles to form the words "That tastes great".
It receives sense data representing a post on JREF and it responds by sending out messages to the finger muscles to type "But everybody knows that they are conscious - people who claim not to know that they are conscious are lying".
That sort of thing.
That sort of thing is human-like behavior. If you assume that it shows human-like behavior, then the answer to your question is obvious. Let’s start with your first example. How does our simulated brain subjectively experience a particular sort of input data (or a particular set of neural connections firing) as pain? Why does it flinch its simulated muscles? Is all of that programmed into the simulated brain? Or is it supposed to be emergent animal-like behavior?
How is the model's 'behavior' discerned and interpreted by humans?
Do I really need to be that specific? Computer graphics and sound I suppose. It does not really matter how the data is read.
Yes. How can I judge when a non-human entity is displaying human-like behavior if it isn’t displayed in some way that can be compared to actual human behavior? A 3-D hologram would be best – as long as we are imagining impossible technological feats.
Does it interact with us directly, or only with simulated other beings in its matrix-like existence?
If the computer was fast enough it might interact with us directly, but obviously if this was the case then we would have to be represented in its virtual world.
So we would be peering in on it, its virtual body and its virtual environment, in order to determine if it’s displaying human-like behavior? It seems to me that if the virtual body and virtual environment were sufficiently similar to ours, it would likely display human-like behaviors.

Currently, we have simulated characters generated in computer games that we can watch in their simulated environment. They display human-like behavior, but I don’t think that is what you are talking about. How are we going to distinguish between human-like behavior that is emergent from the simulated brain and human-like behavior that is programmed into simulated people without such brains?
 
That sort of thing is human-like behavior. If you assume that it shows human-like behavior, then the answer to your question is obvious.
But I didn't assume that it showed human-like behaviour.

That was the question I asked.
Let’s start with your first example. How does our simulated brain subjectively experience a particular sort of input data (or a particular set of neural connections firing) as pain? Why does it flinch its simulated muscles? Is all of that programmed into the simulated brain? Or is it supposed to be emergent animal-like behavior?
The example is quite clear - it is a model of a human including its brain.
Yes. How can I judge when a non-human entity is displaying human-like behavior if it isn’t displayed in some way that can be compared to actual human behavior? A 3-D hologram would be best – as long as we are imagining impossible technological feats.
If I were discussing a computer model of a seed growing into a plant, I wonder if you would be asking for all these details. I think not.

I mean really - don't you think it is implied by the example that there would be some way of observing the behaviour of the model? What would be the point in making a computer model of anything if you didn't also build in some way of observing the behaviour of the model?

By the way, when you say "impossible", do you mean "impossible by current technology", or "absolutely impossible"?
Currently, we have simulated characters generated in computer games that we can watch in their simulated environment. They display human-like behavior, but I don’t think that is what you are talking about. How are we going to distinguish between human-like behavior that is emergent from the simulated brain and human-like behavior that is programmed into simulated people without such brains?
Because the example specifically stipulates that it is a model of a human including a human brain. The example does not stipulate that the programmers cheated and hard-coded some human-like responses to fool people.

Obviously if it is a model of the human brain then anything the model does is going to come from the interactions between the various components of the brain architecture in the model.
 
I am using the Lorenz definition for deterministic - only one possible next state. That would seem to suggest predictability. How would we know that a process was deterministic unless we had a way of predicting it?

They are both deterministic. But we can predict the next state of the hurricane as a matter of probability.

We can determine the next state of a properly functioning CPU exactly.

No, you can't. It's certainly true that weather forecasting is fairly unreliable, and that computers work nearly all the time, but the next state of your CPU is a matter of probability. The more you know about the system, the higher the probability is, but it's certainly not a matter of absolute certainty. Computers fail.

All irrelevant since you are still ignoring the important feature of the definition.

A computation has an exactly measurable next state. A physical process does not necessarily have an exactly measurable next state.

Well, yes. Computations are mathematical abstractions. Physical processes are what actually happens, in the real world. We have mathematical models of all sorts of physical processes. The models produce an exact next state. The actual physical processes approximate to the models.

I will remember to double check my spelling in future posts to you.

I'll try to remember in future how hurtful and demeaning that kind of joke can be.
 
So we would be peering in on it, its virtual body and its virtual environment, in order to determine if it’s displaying human-like behavior? It seems to me that if the virtual body and virtual environment were sufficiently similar to ours, it would likely display human-like behaviors.

I'd consider the ability to make posts on JREF and engage in coherent conversation a good start. I'm surprised that nobody has programmed something to insert


in random replies.
 
Robin said:
I am using the Lorenz definition for deterministic - only one possible next state. That would seem to suggest predictability. How would we know that a process was deterministic unless we had a way of predicting it?

They are both deterministic. But we can predict the next state of the hurricane as a matter of probability.

We can determine the next state of a properly functioning CPU exactly.
No, you can't. It's certainly true that weather forecasting is fairly unreliable, and that computers work nearly all the time, but the next state of your CPU is a matter of probability. The more you know about the system, the higher the probability is, but it's certainly not a matter of absolute certainty. Computers fail.
I highlighted the word you missed.
Well, yes. Computations are mathematical abstractions. Physical processes are what actually happens, in the real world. We have mathematical models of all sorts of physical processes. The models produce an exact next state.
And a properly functioning CPU produces an exact next state.
I'll try to remember in future how hurtful and demeaning that kind of joke can be.
Not hurtful or demeaning. Didn't even realise it was supposed to be a joke until you pointed it out.
 
But I didn't assume that it showed human-like behaviour.
You did in your responses to my questions.
The example is quite clear - it is a model of a human including its brain.
Er, no. You originally said it was a "sufficiently detailed computer model of a human brain". Now you are changing it to a human, including its brain. This is a somewhat different setup. The more aspects of a human you model, the more likely you will see your model exhibiting human-like behaviors.
If I were discussing a computer model of a seed growing into a plant, I wonder if you would be asking for all these details. I think not.
If I were actually attempting to form an opinion about the behavior of the model and whether it exhibited plant-like behavior, I would ask similar questions and want to know details. Are you modeling the plant growing in virtual soil? Does it have to contend with competition from other plants? With predators? Or are you just modeling the growth behavior separate from any environmental interactions?
I mean really - don't you think it is implied by the example that there would be some way of observing the behaviour of the model? What would be the point in making a computer model of anything if you didn't also build in some way of observing the behaviour of the model?
Of course there would be some way of observing the behavior of the model. The point I was trying to make is that the interface that allows that to happen will inevitably skew our perceptions of the behavior of the model. How can we determine whether the model is exhibiting human-like behavior or whether it is an artifact of the interface with the model?
By the way, when you say "impossible", do you mean "impossible by current technology", or "absolutely impossible"?
"impossible by current technology"
Because the example specifically stipulates that it is a model of a human including a human brain. The model does not stipulate that the programmers cheated and hard coded some human like responses to fool people.
The problem here is that much of human behavior is hard-coded and we don't really know the limits of that hard-coding. Thus some hard-coding would be inevitable. At what point do we draw the line between what is a realistic simulation of human beings and what is 'cheating'?
Obviously if it is a model of the human brain then anything the model does is going to come from the interactions between the various components of the brain architecture in the model.
I'm not sure where you are going with this. Is this a model of a human including its brain, or a model of a human brain? Does it interact with a simulated environment of some sort, including other simulated humans?
 
I highlighted the word you missed.

And a properly functioning CPU produces an exact next state.

And a properly functioning hurricane corresponds exactly with the weather forecast.

However, a real CPU will always - that's always - have a finite chance of not returning the expected value. The only certain thing is that nothing in the real world is totally certain.

For certain things - a mountain vanishing into thin air in front of us - the probability is pretty small. Computer failures aren't quite as unlikely as that.

If we compare abstractions with the real world, the abstractions will always be perfect, and the real world events will always be just slightly uncertain.

Not hurtful or demeaning. Didn't even realise it was supposed to be a joke until you pointed it out.

Will put in :):):):) next time.
 
You did in your responses to my questions.
No, you asked me what I meant by human-like behaviour and I was answering your question.

Don't bait and switch. Why would I be both asking if it displayed human-like behaviour and also stipulating that it displayed human like behaviour?
Er, no. You originally said it was a "sufficiently detailed computer model of a human brain". Now you are changing it to a human, including its brain. This is a somewhat different setup. The more aspects of a human you model, the more likely you will see your model exhibiting human-like behaviors.
Let me quote what I said:
robin said:
Suppose that there was a sufficiently detailed computer model of a human brain, and say this brain is given realistic sense data, including modelling of the sense data associated with body control and feed back.

Perhaps it starts as a model of an embryo and models the brain development up to birth and then childhood even adulthood - obviously this would take vast computing power.

But suppose that could be done - do you think it possible that the virtual human being modelled would exhibit human like behaviour?
Now do you think I meant realistic sense data of a brain without a body? Honestly?
Of course there would be some way of observing the behavior of the model.
Yes. And of course it is presumed to be sufficient to answer the question.
I'm not sure where you are going with this.
I suspect that you know where I am going with this and don't want to go there.
Is this a model of a human including its brain, or a model of a human brain? Does it interact with a simulated environment of some sort...
Yes, as I stipulated in the original thought experiment.
, including other simulated humans?
Already answered the last time you asked this.
 
And a properly functioning hurricane corresponds exactly with the weather forecast.
All hurricanes function properly, and none of them correspond exactly with any weather forecast. There is a whole branch of mathematics that explores why weather conditions can't be forecast exactly.
However, a real CPU will always - that's always - have a finite chance of not returning the expected value. The only certain thing is that nothing in the real world is totally certain.
As I said, there is no perfect computer in the real world. It does not mean the definition of a physical computation is invalid.
 