
My take on why the study of consciousness may indeed not be so simple.

But "understanding" of any sort has no relevance to my argument since it is about how the virtual brain would feel. Whether it would have conscious experiences that were identical to conscious experiences.
Why is conscious experience equivelant to "feeling"? Why can't understanding be simply to understand what conscious experience is? BTW: You have conceded that understanding, as I and Searle define it, has an emotional component (feeling).

The question I was asking is: can computers feel?
To be exact, you said: "That this instant you are experiencing right now could have resulted from people writing down numbers in little boxes on pieces of paper."

Well, it is also very unfair of you to continue to say that my argument is equivalent to Searle's. If you continue to do this then I will continue to ask for that definition of "understand".
Sorry, no. You want "understand" to be the only point that Searle is making. It's not. To understand Searle's argument is to understand the context of the Turing Test and of consciousness (strong AI).

The question of mine is clearly "do you think it is possible that this moment you are experiencing right now could in reality be a billion years of pencils marking on papers?"
The many replies to Searle apply to your argument for the very same reason. One of those replies is that our intuition is flawed because of time. Please see appeals to intuition.
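
To make that concrete, here is a toy sketch (entirely my own illustration, with invented numbers, not anyone's actual brain model) of the arithmetic one simulated neuron update involves. Every line of it could in principle be carried out with pencil and paper; the catch is only the number of repetitions and the time they would take:

```python
# Toy leaky integrate-and-fire neuron update: nothing but arithmetic.
# Every operation could be done by hand on paper; a whole-brain
# simulation is just an astronomical number of these per time step.
# All values here are invented for illustration.

def neuron_step(v, inputs, weights, leak=0.9, threshold=1.0):
    """One discrete time step for a single model neuron."""
    v = leak * v + sum(w * x for w, x in zip(weights, inputs))  # integrate
    if v >= threshold:       # membrane potential crosses threshold
        return 0.0, True     # reset and emit a spike
    return v, False          # stay sub-threshold, no spike

v, spiked = neuron_step(0.0, inputs=[1.0, 0.5], weights=[0.6, 0.8])
print(v, spiked)  # 0.6*1.0 + 0.8*0.5 = 1.0 >= threshold, so: 0.0 True
```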
 
Your statement was, as usual, emphatic and vague. What is factually false?
Westprog's statement.

But who said it does not have data?
Westprog. He said that the algorithmic model of consciousness asserts that conscious experience arises from the algorithm alone. This is of course not true.

If we can model the neuronal process, modelling the sensory inputs for half a second of consciousness ought to be easy. The algorithm has all the data for the neuronal states of an adult human at a particular age (which would obviously include memories) and the data for sensory input.
Yep, absolutely. No problem there.
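
If it helps, the overall shape I have in mind is something like this minimal sketch (the names, the time step, and step_fn are placeholders of mine, not a real model):

```python
# Minimal sketch of the "algorithm plus data" picture: a full snapshot
# of neuronal state (memories included) plus recorded sensory input,
# stepped forward for half a second. All names are illustrative.

def simulate(initial_state, sensory_stream, step_fn, dt=0.001, duration=0.5):
    """Advance the model brain state through `duration` seconds of input."""
    state = initial_state
    for t in range(int(duration / dt)):            # 500 one-millisecond steps
        state = step_fn(state, sensory_stream[t])  # neuronal dynamics + input
    return state
```

Everything the algorithm needs is in the state data and the update rule; nothing else is smuggled in.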

You are not backing away from the "yes" are you?
No. ;)
 
Why is conscious experience equivalent to "feeling"? Why can't understanding be simply to understand what conscious experience is? BTW: You have conceded that understanding, as I and Searle define it, has an emotional component (feeling).


I'm not sure if this makes sense, but when I've tried to parse the words, the word "experience" seems to contain "feeling". I think that is why Damasio entitled one of his books "The Feeling of What Happens" -- that seems to cover the notion of what we mean by experiencing.

Understanding is a bit trickier, involving the structure of intentionality. I think Searle has argued that "meaning" is essentially mapping intentionality into language.
 
Nope, you have it exactly right.

The fact that something must be defined by an observer doesn't change the fact that a system which satisfies such a definition is different from one that does not regardless of whether an observer exists. All the observer does is ... observe.


Thanks for the help. So, Searle's argument is wrong; Westprog's argument along these lines is wrong for the same reason.

I've been thinking about the simulation argument as well. First, there is the issue of category error as Pixy has mentioned more than once.

Second, could it not be the case that one type of simulation could map exactly what is simulated -- namely any mental activity? Speaking dualistically, which is wrong of course, there is no actual physical output with mental activity in the same way that there is a physico-chemical process to digestion for instance. So, while a simulation of digestion does not physically digest anything, is it the case that a simulation of thinking does not think? Or simulation of feeling does not feel? Or simulation of consciousness is not conscious?

Just because most simulations do not produce the output that the "real-world" processes do, does it follow that this is true of all simulations? Is the argument simply over-generalized?
 
Ichneumonwasp said:
Second, could it not be the case that one type of simulation could map exactly what is simulated -- namely any mental activity? Speaking dualistically, which is wrong of course, there is no actual physical output with mental activity in the same way that there is a physico-chemical process to digestion for instance. So, while a simulation of digestion does not physically digest anything, is it the case that a simulation of thinking does not think? Or simulation of feeling does not feel? Or simulation of consciousness is not conscious?
I would say that mental activity has physical output in exactly the same way as digestion: more mental activity, heat, speech, movement, etc.

We have to admit that simulated digestion can only digest simulated food and not actual food---a clear distinction. The question, as you say, is whether we are going to count simulated mental activity as actual mental activity, or whether we are going to be chauvinistic and require meat brains to call it mental activity.

~~ Paul
 
I'm not sure if this makes sense, but when I've tried to parse the words, the word "experience" seems to contain "feeling". I think that is why Damasio entitled one of his books "The Feeling of What Happens" -- that seems to cover the notion of what we mean by experiencing.

Understanding is a bit trickier, involving the structure of intentionality. I think Searle has argued that "meaning" is essentially mapping intentionality into language.
I'm not sure of your point.
 
Second, could it not be the case that one type of simulation could map exactly what is simulated -- namely any mental activity? Speaking dualistically, which is wrong of course, there is no actual physical output with mental activity in the same way that there is a physico-chemical process to digestion for instance. So, while a simulation of digestion does not physically digest anything, is it the case that a simulation of thinking does not think? Or simulation of feeling does not feel? Or simulation of consciousness is not conscious?

Just because most simulations do not produce the output that the "real-world" processes do, does it follow that this is true of all simulations? Is the argument simply over-generalized?
We know the requirements for digestion are not present in a virtual computer simulation. Do we know that the requirements for consciousness are not present in a computer model?
 
In that "aha I'm aware" sense.
In that sense it is not aware at all.
How about being aware that someone else is aware that you are aware (no, this is not a joke)? It's called theory of mind.
If I had two robotic cars I could program each to be aware that the other car was aware of it. Still trivial.
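
Trivial in roughly this sense (a toy sketch of mine, with invented names):

```python
# Two "cars", each recording which cars it has detected and which cars
# it believes have detected it. Purely illustrative; names are made up.

class Car:
    def __init__(self, name):
        self.name = name
        self.detected = set()          # cars this car is "aware" of
        self.known_detectors = set()   # cars it "knows" are aware of it

    def detect(self, other):
        self.detected.add(other.name)
        other.known_detectors.add(self.name)  # the other car now "knows"

a, b = Car("A"), Car("B")
a.detect(b)
b.detect(a)
print(a.known_detectors)  # {'B'}: A is "aware" that B is aware of A
```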
I think you are confused. That seems to me to be exactly the kind of thing you are trying to demonstrate with your thought experiment.
*sigh* Let me try again.

1. I don't agree that it is a meaningful definition of understanding.
2. Understanding is, in any case, not relevant to the point I am making.
Oh, ok, then no, when you are asleep you do not have understanding.
By what logic?

1. understanding is not the same as memory
2. we have memory when we are asleep
___________________________________
Therefore we don't have understanding when we are asleep.

Yes?
You are not at all clear.
I began the sentence with "you seem to be defining understanding...". How could you think that I was telling you how I define "understanding"?
I can't find a distinction between this and what you mean with your thought experiment.
So would it have been meaningful for Searle to ask:

"Could this moment you are experiencing now be the result of a Chinese man moving around tiles in a room?"
 
Why is conscious experience equivalent to "feeling"?
Well, sensations really, but "feelings" is close to it.

In any case, why not deal with the thought experiment in the terms in which it was originally couched?
Why can't understanding be simply to understand what conscious experience is?
Because 1) that is circular, 2) by your definition it would become "understanding is the feeling that we understand what conscious experience is", and 3) I don't understand, or even feel that I understand, what conscious experience is. Do you?
BTW: You have conceded that understanding, as I and Searle define it, has an emotional component (feeling).
Concede? I was the one who suggested that definition in the first place. You only agreed to it afterwards.

I think that is what Searle really means, but do you think that Searle would really explicitly accept that definition?

If he did he would be arguing that computers can't think because they don't have human emotions. That would make the argument laughable.
To be exact, you said: "That this instant you are experiencing right now could have resulted from people writing down numbers in little boxes on pieces of paper."
Yes, that is the question I asked.

Just for the record, do you think it possible that the instant you are experiencing right now could have resulted from people writing down numbers in little boxes on pieces of paper?
Sorry, no. You want "understand" to be...
No, let's get this clear. I don't want "understand" to be anything.

I don't want Searle to be anything.

I don't think "understanding" or the chinese room is even remotely relevant to what I am saying, not remotely. It bears nothing but a superficial resemblance to my thought experiment.

Searle was trying to rebut AI. I am not. How could they be the same?

Now who is dragging this out, you or me?
 
In that sense it is not aware at all.
That makes no sense at all. This is the distinction that Dennett makes.

1. I don't agree that it is a meaningful definition of understanding.
2. Understanding is, in any case, not relevant to the point I am making.
A.) I don't care if you agree. I've told you that over and over. You are certainly entitled to an opinion. B.) You are the one that has pushed the understanding point.

By what logic?
You said we have understanding before we go to sleep and understanding after we awake. That can only take place if we have memory. I think your idea that we can understand something when we are asleep is absurd. I'm only trying to understand your point. If memory isn't your explanation then I've no idea how you can claim that we can understand something while we are asleep. What is it that we can understand while we are asleep?

Now, don't retort that the understanding was there before sleep and it is there after. THAT'S THE RESULT OF MEMORY.

"Could this moment you are experiencing now be the result of a Chinese an moving around tiles in a room?"
With the right inputs of course it could. "Experience" is simply another aspect of consciousness. The Chinese room is ostensibly "experiencing" communication. Do you not "experience" a conversation? Don't you find a conversation stimulating?

Yes, of course.
 
Primarily the Church-Turing thesis, which demonstrates the equivalence of all known forms of computation. (It actually is a proof, despite the name; the "thesis" is with respect to its broader implications.)
Is there a proof for it? Is there even a rigorous definition of "effectively calculable function"?
Well, no, you didn't say that;
Well excuse me that is just what I said, that I would wait for the results.
There is simply no way a physical process cannot be computable.
Computationally modelling a system is different from computing a system.

You cannot say that any model is equivalent to the system (unless the model in question is the system itself).

There is no "equivalence" between the model and system it models.
 
Because 1) that is circular, 2) by your definition it would become "understanding is the feeling that we understand what conscious experience is", and 3) I don't understand, or even feel that I understand, what conscious experience is. Do you?
A.) Asserting this doesn't make it so. I've explained this to you countless times and you simply ignore the explanation. You won't even quote me. So in the future I'm going to ignore this point. B.) No. Not at all. Understanding that we are conscious is simply a component of consciousness. C.) Of course. I have an intuitive feeling that I do. Yeah.

BTW: I used the word "simply" and I regret that. I should have said "why can't consciousness be, in part, understanding that we are conscious?". Do you understand that you are conscious?

If he did he would be arguing that computers can't think because they don't have human emotions.
What? It's very likely that the Turing test won't be won unless and until computers DO have the equivalent of human emotions. Unless and until computers can experience (experience also has an emotional component). Unless and until computers can think.

Experience and understanding are not mutually exclusive things. Experience and understanding are not discretely different things. Humans gain understanding via experience.

See, you are trying to shave aspects off of consciousness that you can't reasonably shave off. Consciousness isn't simply discrete modules we call experience, sentience, cognition, thinking, emotion, understanding, etc. On the contrary, as I've been telling you over and over, there is much overlap and interdependency among all of these things.

Just for the record, do you think it possible that the instant you are experiencing right now could have resulted from people writing down numbers in little boxes on pieces of paper?
You are appealing to my intuition. I will honestly say that "no", intuitively I don't think it is possible. But then I don't think it is intuitively possible for light to have a dual nature that can change simply by being observed. I accept that it is a fact.

I've given you the explanation as to why it is wrong to appeal to our intuitions but again, you ignore that.

I don't think "understanding" or the chinese room is even remotely relevant to what I am saying, not remotely. It bears nothing but a superficial resemblance to my thought experiment.
I think this is utter nonsense.

Searle was trying to rebut AI. I am not. How could they be the same?
Because you are describing AI. You might not realize it but you are. I keep telling you that consciousness isn't some discrete thing separate from understanding, experience, sentience, self-awareness, thinking, etc.

Where did you get the idea that there is a discrete thing called consciousness and a discrete thing called experience? I've never, ever read anything that gave even the faint hint that these are discretely distinct concepts.

Now who is dragging this out, you or me?
You.
 
That makes no sense at all. This is the distinction that Dennet makes.
So you are suggesting that the thermometer is aware in the "aha I'm aware" sense?
A.) I don't care if you agree. I've told you that over and over.
No, in fact you suggested that this definition was what I meant by my thought experiment. I am telling you it is not, that it is not, as far as I know, even a meaningful definition.
You are certainly entitled to an opinion.
But please don't tell me what I mean by something I say.
You are the one that has pushed the understanding point.
I pushed the point that Searle did not have a meaningful definition for "understanding" and that in any case it had nothing to do with what I said.

So can we get over this "understanding" business and get back to what I said?
You said we have understanding before we go to sleep and understanding after we awake. That can only take place if we have memory. I think your idea that we can understand something when we are asleep is absurd.
It depends what you mean by understanding.

If you define "understand" as the feeling humans have when they think they understand something, whether they do understand or not, then no, that feeling does not occur when we are asleep.

If, on the other hand, understanding is defined as an ability (as they do in academic circles for example) then that ability is encoded in the biochemical patterns of our brains and does not go anywhere when we sleep. An ability is a different thing from a memory.
With the right inputs of course it could.
I asked if it was a meaningful question with respect to the argument Searle made.

Was he saying that a computer could not think if a computer did not have identical conscious states to humans?

And again, Searle was rebutting AI, I am not, so how could the positions be the same?
 
Because you are describing AI.
In AI there is no requirement whatsoever that an artificially intelligent system has identical conscious states to a human in order to be intelligent.

And I am in any case not rebutting anything, certainly not AI. As far as I am concerned, if a computer passes the Turing test convincingly, it is intelligent, end of story.

As for the rest of your post, just nonsense and I am tired of it.
 
Is there a proof for it?
Yes, there is.
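
What is actually proven is the pairwise equivalence of the formalisms: Turing machines, the lambda calculus, and recursive functions all compute exactly the same functions. A toy illustration of mine (not from any textbook): the same function written as an ordinary recursive program and again in pure lambda-calculus style:

```python
# The same function in two formalisms. Factorial as ordinary recursion,
# then as pure lambdas via the Z (strict fixed-point) combinator.
# Illustrative only; the real equivalence proofs are far more general.

def fact_rec(n):
    return 1 if n == 0 else n * fact_rec(n - 1)

Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))
fact_lam = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

print(fact_rec(5), fact_lam(5))  # 120 120
```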

Is there even a rigorous definition of "effectively calculable function"?
That's still being discussed. But how is it relevant?

Well excuse me that is just what I said, that I would wait for the results.
Did you read what you wrote and the context in which you wrote it? Was it a reply to my post or some random irrelevant ejaculation?

Computationally modelling a system is different from computing a system.
Yeah, because the latter is meaningless.

You cannot say that any model is equivalent to the system (unless the model in question is the system itself).
What do you even mean by "equivalent"?

There is no "equivalence" between the model and system it models.
Again, what do you mean by "equivalence"?

Can we use the behaviour of the model to predict the behaviour of the physical system? Of course we can. The entire modern world depends on that.
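
A trivial worked example (numbers invented): a free-fall model predicting where a dropped ball will be. The model is not the ball, but its output matches the ball's behaviour closely enough to bet on:

```python
# Predicting a physical system from a model: free fall, no air resistance.

G = 9.81  # gravitational acceleration, m/s^2

def height(t, h0):
    """Height (m) of a ball dropped from h0 metres, after t seconds."""
    return h0 - 0.5 * G * t * t

print(height(1.0, 20.0))  # ~15.1 m, checkable against the real ball
```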
 
So you are suggesting that the thermometer is aware in the "aha I'm aware" sense?
No, of course not.

If, on the other hand, understanding is defined as an ability (as they do in academic circles for example) then that ability is encoded in the biochemical patterns of our brains and does not go anywhere when we sleep. An ability is a different thing from a memory.
The ability to understand and "understanding" are two very different things. This is the first time you've said "ability".

And again, Searle was rebutting AI, I am not, so how could the positions be the same?
  1. Searle's experiment could be run using your method and vice versa.
  2. Experience, understanding, sentience, awareness, etc are all components of consciousness.
 
In AI there is no requirement whatsoever that an artificially intelligent system has identical conscious states to a human in order to be intelligent.
I never said otherwise.

And I am in any case not rebutting anything, certainly not AI. As far as I am concerned, if a computer passes the Turing test convincingly, it is intelligent, end of story.
The test could be run using your method.

...and I am tired of it.
Fair enough.
 
