My take on why the study of consciousness may not be as simple as it seems

The difference between seeing a red object and imagining the colour red, or dreaming about something red, is that seeing involves the eyes. I don't claim that this is 100% accurate, but it's certainly better than 99.9%. Indeed, I can't think of an example of seeing something that didn't involve something optical.

That the thing we see might not be an accurate representation of external reality is always possible. Indeed, it's quite certain. There are always transcription errors.

Hmmm... moving the goalposts, anyone?

Before we were talking about experiencing red.

Now you seem to be talking about "seeing" red vs. "imagining" or "dreaming" red -- all of which most people would consider "experiencing" red in some way.
 
No. This is not true now - not if you're stating it as a matter of necessity - and in fact, it never has been.

Well, it is true if you define "seeing" as "receiving input from optical sensors."

But, like I mentioned, that is radically changing the goalposts on westprog's part.
 
The difference between seeing a red object and imagining the colour red, or dreaming about something red, is that seeing involves the eyes.
The only thing this does is redefine the word "experience". That is not a problem in itself, but it doesn't remove the logical problems from the Mary's Room argument, and it certainly doesn't support your claims that there is a difference between humans and computers with respect to experience.

I don't claim that this is 100% accurate, but it's certainly better than 99.9%. Indeed, I can't think of an example of seeing something that didn't involve something optical.
So are you making an actual definition here, or just waving your hands about?
 
The difference between seeing a red object and imagining the colour red, or dreaming about something red, is that seeing involves the eyes.
Involving the eyes isn't the same as requiring the eyes. This doesn't answer my question.

Why?

I don't claim that this is 100% accurate, but it's certainly better than 99.9%. Indeed, I can't think of an example of seeing something that didn't involve something optical.

That the thing we see might not be an accurate representation of external reality is always possible. Indeed, it's quite certain. There are always transcription errors.
All of this is irrelevant to the point at hand. It's theoretically possible to stimulate the brain to experience red in blind people. There is no theoretical impossibility of programming the brain to see colored objects.

BTW: Instead of an optic nerve, it's possible to wire a camera to the appropriate correlates in the brain. I think there is even a system that allows blind people to get crude visuals. I'm trying to find a link.
 
Well, it is true if you define "seeing" as "receiving input from optical sensors."

But, like I mentioned, that is radically changing the goalposts on westprog's part.
Right. He can define seeing that way if he wants, of course. It's not wrong in itself. But it doesn't actually change anything; all the things he's said that were wrong are still wrong, just with the wording rearranged a little.
 
westprog said:
Because it's a difference that applies to human beings, and not computers.
Huh? You're saying that computers can generate internal qualia, but do not experience qualia via their senses? What computers are those?

We can fool brains quite easily. In fact, a vast amount of evolution goes into trying to trick other species into mistaking tasty food for something poisonous or dangerous in order to avoid being eaten. We don't need to put the brain in a vat - in a year or two we'll have all-over body suits that can replicate any given environment in a way we can't distinguish from the real thing.

However, the experience of "seeing red" will still involve the optic nerve.
Agreed, if by "seeing" we mean "experiencing vision via the optic nerve." I've lost track of what we're arguing.

~~ Paul
 
Huh? You're saying that computers can generate internal qualia, but do not experience qualia via their senses? What computers are those?

No, I'm saying that the fact that there is no difference to a computer program in the way that it acquires data means that it is unlikely to have the experience at all.

I don't see the slightest reason why any computer program, however complex, should have internal qualia.

Agreed, if by "seeing" we mean "experiencing vision via the optic nerve." I've lost track of what we're arguing.

~~ Paul

The important thing is that the different experiences are readily distinguishable.
 
Westprog,

Since your argument essentially boils down to "syntax does not equal semantics", could you perhaps define what "meaning" means? Is it not possible that some form of syntax could produce semantics, some particular type of structured processing?
 
westprog said:
No, I'm saying that the fact that there is no difference to a computer program in the way that it acquires data means that it is unlikely to have the experience at all.

I don't see the slightest reason why any computer program, however complex, should have internal qualia.
What if we tag the data with its source? Then do you think that a computer program might be able to be conscious? I just don't understand what this source thing has to do with the price of beans.

~~ Paul
 
Westprog,

Since your argument essentially boils down to "syntax does not equal semantics", could you perhaps define what "meaning" means? Is it not possible that some form of syntax could produce semantics, some particular type of structured processing?

Anything's possible, I suppose, but until the process is actually demonstrated to occur, I remain highly sceptical.

The strange thing is that doubting the AI hypothesis is viewed as an attack on physicalism, when the AI concept involves discarding the physical elements of the brain, and regarding only some imposed mathematical model as being essential to the production of the ill-defined thing we call consciousness. It seems at odds with the way we usually regard mathematics as describing the way the universe works, but not intervening.
 
Anything's possible, I suppose, but until the process is actually demonstrated to occur, I remain highly sceptical.

Ok, that's fine. I don't have an answer either. I think it depends on some form of fuzzy syntax dealing with "feelings" (which are fuzzy to begin with, since as far as I can tell they are actually slight pushes toward a behavioral tendency).

The strange thing is that doubting the AI hypothesis is viewed as an attack on physicalism, when the AI concept involves discarding the physical elements of the brain, and regarding only some imposed mathematical model as being essential to the production of the ill-defined thing we call consciousness. It seems at odds with the way we usually regard mathematics as describing the way the universe works, but not intervening.


Yes, I agree. That is one of the problems in such discussions. I think it is important to distinguish between what has not been demonstrated and what cannot, in principle, be demonstrated.

If you think it is possible that syntax could produce semantics at some level, then I don't see what the controversy could amount to unless someone is claiming that human-level consciousness has been attained by a computer. I doubt anyone would make that claim. The rest is probably a quibble over word definitions.
 
What if we tag the data with its source? Then do you think that a computer program might be able to be conscious? I just don't understand what this source thing has to do with the price of beans.

~~ Paul

It doesn't matter what we tag the data with. Why should it make any difference to the way a program deals with it?

We could indeed tag a block of visual data with #326 for data obtained directly from the camera, #332 for saved data loaded from a file, and #114 for data generated from a test script, but why should the experience of the program be any different?

The price of beans is in the qualitative difference between the experiences of directly seeing something, and imagining the same scene, and in remembering the same scene. It is not merely that we can recognise the difference - we have different subjective reactions.

In order to claim that a computer program is capable of consciousness, we have to either accept that it experiences different data in different ways - something that computer programs do not do as usually understood - or as some philosophers would have it, deny the existence of subjective experience altogether.
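To make that concrete, here is a minimal sketch in Python (the tag values and names are my own hypothetical illustration, not any real system): the source tag travels with the data, but nothing in the processing depends on it.

```python
# Minimal sketch (hypothetical tag values and names): the source tag is
# just more bits travelling with the data, and nothing in the processing
# path depends on it.

def process_frame(tag: int, pixels: bytes) -> int:
    # tag 326 = camera, 332 = saved file, 114 = test script, yet every
    # frame is reduced to, say, its mean brightness in exactly the same way.
    return sum(pixels) // len(pixels)

frame = bytes([200, 10, 30, 200])

# Identical bits give identical results, whichever tag they carry.
assert process_frame(326, frame) == process_frame(332, frame) == process_frame(114, frame)
```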
 
It doesn't matter what we tag the data with. Why should it make any difference to the way a program deals with it?

We could indeed tag a block of visual data with #326 for data obtained directly from the camera, #332 for saved data loaded from a file, and #114 for data generated from a test script, but why should the experience of the program be any different?

The price of beans is in the qualitative difference between the experiences of directly seeing something, and imagining the same scene, and in remembering the same scene. It is not merely that we can recognise the difference - we have different subjective reactions.

In order to claim that a computer program is capable of consciousness, we have to either accept that it experiences different data in different ways - something that computer programs do not do as usually understood - or as some philosophers would have it, deny the existence of subjective experience altogether.


Possibly it depends on the type of tag?
 
westprog said:
We could indeed tag a block of visual data as being #326 obtained directly from camera, #332 as being saved data loaded from a file, and #114 as being generated from a test script, but why should the experience of the program be any different?
If the program uses the tags in its algorithms, then it might have different experiences depending on the source. Isn't this a trivial issue compared to the question of whether a computer program can have an experience at all?
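To illustrate (a hypothetical Python sketch using the same tag numbers from the quote, not any real program): once the algorithm consults the tag, identical pixels lead to different internal reactions depending on the source.

```python
# Sketch of the counterpoint (hypothetical tag values): if the algorithm
# branches on the tag, the same pixels produce a different internal state
# depending on where the data came from.

def react(tag: int, pixels: bytes) -> str:
    brightness = sum(pixels) // len(pixels)
    if tag == 326:   # live camera input
        return f"perceive: live scene, brightness {brightness}"
    if tag == 332:   # saved data replayed from a file
        return f"remember: stored scene, brightness {brightness}"
    return f"imagine: generated scene, brightness {brightness}"

frame = bytes([200, 10, 30, 200])

# Same data, different reactions, purely because of the source tag.
assert react(326, frame) != react(332, frame) != react(114, frame)
```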

The price of beans is in the qualitative difference between the experiences of directly seeing something, and imagining the same scene, and in remembering the same scene. It is not merely that we can recognise the difference - we have different subjective reactions.
Right, and so could a computer, assuming it can have a subjective experience at all.

In order to claim that a computer program is capable of consciousness, we have to either accept that it experiences different data in different ways - something that computer programs do not do as usually understood - or as some philosophers would have it, deny the existence of subjective experience altogether.
I see no problem in accepting that a computer can experience different data in different ways, once we accept it can experience at all. The same goes for people: Precisely the same activation of neurons surely results in precisely the same experience; it requires different activation to produce different experiences.

~~ Paul
 
Possibly it depends on the type of tag?

The thing about computer programs is - all the data is of equivalent value. Everything that plugs into the computer is isolated via the system bus, device drivers and the operating system to end up just tweaking bits. All a computer program ever does is pull bits from registers and push other bits back. No matter how we tag the data, it's all equivalent. There is no qualitative difference - and this is a matter of design. The tags would just be more bits.

This is quite different both to the way that the brain works and the way we experience the functioning of the brain and nervous system. To me, any form of artificial consciousness would have to be centrally based around the direct connection to the external world. Computer programs exist in their own sensory deprivation tank. They are the constructs most isolated from the outside world, while human minds are the most connected.
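A quick sketch of what I mean (Python; the reader functions are hypothetical stand-ins for what drivers deliver, not a real device API): by the time data reaches the program, every source has been reduced to indistinguishable bytes.

```python
# Sketch (hypothetical stand-ins for driver output): by the time the bus,
# drivers, and OS have done their work, every source is just a byte
# sequence, and nothing in the data records where it came from.

def from_camera() -> bytes:
    # A real program would read a device node the OS exposes; here we
    # simply stand in for what the driver hands back.
    return bytes([200, 10, 30, 200])

def from_saved_file() -> bytes:
    return bytes([200, 10, 30, 200])

def from_test_script() -> bytes:
    return bytes([200, 10, 30, 200])

# All three are the same type and compare equal bit for bit.
assert from_camera() == from_saved_file() == from_test_script()
```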
 
In order to claim that a computer program is capable of consciousness, we have to either accept that it experiences different data in different ways - something that computer programs do not do as usually understood - or as some philosophers would have it, deny the existence of subjective experience altogether.
Subjective experience in humans is the result of a complex system (the brain) dependent on a lot of variables that are in turn dependent on dynamic initial conditions (phenotype and environment). Humans are diverse, and our brain structures are not simply genetically based. Even twins will have substantive differences in brain wiring due to phenotype differences (twins don't have the same brain folds for the same reason they don't have identical fingerprints).

If we were all identical clones (identical down to every brain fold and every neuronal wire), there would be less of what we would call subjective. However, there would still be some degree of difference if we didn't all experience exactly identical environmental variables.

A computer with sufficiently complex feedback loops and sensitivity to environmental conditions could without doubt develop subjective decision-making capability. It is expected (predicted); see chaos theory.
 
If the program uses the tags in its algorithms, then it might have different experiences depending on the source. Isn't this a trivial issue compared to the question of whether a computer program can have an experience at all?


Right, and so could a computer, assuming it can have a subjective experience at all.


I see no problem in accepting that a computer can experience different data in different ways, once we accept it can experience at all. The same goes for people: Precisely the same activation of neurons surely results in precisely the same experience; it requires different activation to produce different experiences.

~~ Paul

My concern is to demonstrate that in the absence of the possibility of qualitatively different experiences, there seems to be no way that a computer program can have a subjective experience. When every source of information from the outside world is presented in exactly the same way, how can this result in a different reaction? And if the subjective reaction is always the same, isn't that the same as saying that it has no subjective reaction?

Human brains have a seamless, continuous connection to the outside world via the nervous system. The complexity of the system is partly due to the conflicting requirements of making the brain as exposed as possible to the entire wealth of sensory experience while at the same time keeping it physically protected from harm. That's why the brain and nervous system form a single structure.
 
