My take on why the study of consciousness may indeed not be as simple as it seems.

A computer with sufficiently complex feedback loops and sensitivity to environmental conditions could without doubt develop subjective decision-making capability. It is expected (predicted); see chaos theory.

A computer sensitive to environmental conditions would not be a computer as we at present understand it.
 
A computer sensitive to environmental conditions would not be a computer as we at present understand it.
Why would you say that? We have computers that are sensitive to the temperature and can turn on a heater. We have computers that are sensitive to light and can turn on the lights when it gets dark.
 
The thing about computer programs is - all the data is of equivalent value. Everything that plugs into the computer is isolated via the system bus, device drivers and the operating system to end up just tweaking bits. All a computer program ever does is pull bits from registers and push other bits back. No matter how we tag the data, it's all equivalent. There is no qualitative difference - and this is a matter of design. The tags would just be more bits.

This is quite different both to the way that the brain works and the way we experience the functioning of the brain and nervous system. To me, any form of artificial consciousness would have to be centrally based around the direct connection to the external world. Computer programs exist in their own sensory deprivation tank. They are the constructs most isolated from the outside world, while human minds are the most connected.

Right. One of the first places to start is with valuation. Which means beginning with "feeling", which means we have to define what in the world that means. What it seems to mean is that there are different types of tags placed on different inputs that result in different behavioral tendencies.

We don't do any of this with existing computer systems because computers are tools and we don't want any sort of fuzzy logic to rule them. We want particular types of output.

Our structure, however, was constrained by the contingencies of evolutionary processes. We live in a hostile environment, so we are structured to deal with a large variety of unpredictable contingencies.

Current computer systems are not, because they exist in a different sort of environment. We could program a computer to experience, I would think, if we understood the process better, but I would guess that some of the "fuzzy logic" of programming behavioral tendencies would do the trick. I don't think it works to say it's just complex. We must define what the complexity consists in, and with that I agree with you.
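A rough sketch of the sort of thing I have in mind, with entirely made-up stimuli and valence numbers: each input carries a "feeling" value that biases, rather than dictates, the behavioural tendency.

```python
import random

# Purely illustrative: the stimuli, valence numbers and behaviours are invented for this sketch.
VALENCE = {
    "sharp_pain": -0.9,   # strongly negative "feeling"
    "warmth": 0.4,        # mildly positive
    "novel_sound": 0.1,   # nearly neutral
}

def respond(stimulus: str) -> str:
    """Pick a behavioural tendency biased by the stimulus's valence tag."""
    v = VALENCE.get(stimulus, 0.0)
    # Negative valence makes withdrawal more probable, positive makes approach
    # more probable, but neither is guaranteed -- the "fuzziness" is in the weighting.
    p_approach = 0.5 + v / 2
    return "approach" if random.random() < p_approach else "withdraw"

print(respond("sharp_pain"))  # usually "withdraw"
print(respond("warmth"))      # usually "approach"
```

The point of the sketch is only that the tags do something: differently tagged inputs shift the distribution of outputs instead of mapping to one fixed response.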
 
The important thing is that the different experiences are readily distinguishable.
You are wrong as usual.
Visual auras are most common. A visual aura is like an electrical or chemical wave that moves across the visual cortex of your brain. The visual cortex is the part of your brain that processes visual signals. As the wave spreads, you might have visual hallucinations.



The best known visual aura is called a "fortification spectrum" because its pattern resembles the walls of a medieval fort. It may start as a small "hole" of light, sometimes with bright geometrical lines and shapes in your visual field.


This visual aura may expand into a sickle- or C-shaped object, with zigzag lines on the leading edge. As it moves, it may appear to grow. Auras are not the same for all people, so you might also experience bright spots or flashes. Auras are sometimes accompanied by a partial loss of vision referred to as a scotoma. Auras commonly last 10 to 30 minutes.
http://www.mayoclinic.com/health/migraine-aura/MM00659
 
A computer sensitive to environmental conditions would not be a computer as we at present understand it.

http://www.gizmag.com/darpa-grand-challenge-winner-returns-to-silicon-valley/9503/
 
That often happens when the goalposts are mounted on rocket sleds.

I think it has something more to do with the quantum distribution of the definition of consciousness.

There is a probability that is described by the square of the waveform of definition, but it is equally likely to be anywhere within that zone of probability.
 
westprog said:
My concern is to demonstrate that in the absence of the possibility of qualitatively different experiences, there seems to be no way that a computer program can have a subjective experience. When every source of information to the outside world is presented in exactly the same way, how can this result in a different reaction? And if the subjective reaction is always the same, isn't that the same as saying that it has no subjective reaction?
This conversation will go nowhere until you define exactly what you mean by a qualitative experience and what is required to accept that such a thing is occurring. Otherwise we're going to have the "bits moving around just can't be conscious" circle dance again, and we've all had that dance too many times.

Your inability to imagine bits moving around as conscious is not a sufficient argument. Why would systematically replacing each of my neurons with a little computer not do the trick?

~~ Paul
 
This conversation will go nowhere until you define exactly what you mean by a qualitative experience and what is required to accept that such a thing is occurring. Otherwise we're going to have the "bits moving around just can't be conscious" circle dance again, and we've all had that dance too many times.

Your inability to imagine bits moving around as conscious is not a sufficient argument. Why would systematically replacing each of my neurons with a little computer not do the trick?

~~ Paul

That's the essence of the syntax doesn't equal semantics argument. He has already said that there is no insurmountable barrier in this regard, so his objection is simply that he doesn't see how it works yet. That is different from saying that it isn't possible.
 
How can we know that the definition of physical state is not dependent on a mental state?
In fact how do we even know that a physical state is not a mental state?


Ah, forgot to respond to this earlier!

You can't prove a negative, !Kaggen, so it is on you to demonstrate that a physical state is dependent upon a mental state.

And it would be up to you to demonstrate that a physical state is a mental state.

Now of course our knowledge of such things is mental, and we can't know the actual construct of *it*, whatever that *it* which is the world may be.

But we can't argue about gaps because there will always be more gaps than non-gaps.

All we can discuss are models.

Which brings me back to the point of responding to the post:

How could you tell the difference? If a physical state were dependent upon a mental state (and I can think of some states that are), how could you tell the difference?

So say we charge a capacitor with electricity: that is dependent upon construction, but not on a mental state. How would we show that it is a dependency?

Now the label of 'charged' is linked to mental states associated with the word 'charged'.
 
Ichneumonwasp said:
That's the essence of the syntax doesn't equal semantics argument. He has already said that there is no insurmountable barrier in this regard, so his objection is simply that he doesn't see how it works yet. That is different from saying that it isn't possible.
Fair enough. I don't see how it works, either.

~~ Paul
 
It doesn't matter what we tag the data with. Why should it make any difference to the way a program deals with it?
Because if the data is tagged, it's different.

We could indeed tag a block of visual data as being #326 obtained directly from camera, #332 as being saved data loaded from a file, and #114 as being generated from a test script, but why should the experience of the program be any different?
Because the data is different. The experience has to be different.
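A trivial sketch, reusing the tag numbers from the example above (the handling code itself is invented), of a program whose behaviour branches on where the data came from:

```python
# Tag numbers taken from the example above; everything else is invented for illustration.
SOURCE_CAMERA = 326  # obtained directly from camera
SOURCE_FILE = 332    # saved data loaded from a file
SOURCE_SCRIPT = 114  # generated from a test script

def handle(tag: int, frame: bytes) -> str:
    """React differently depending on the source tag attached to the frame."""
    if tag == SOURCE_CAMERA:
        return f"reacting to live scene ({len(frame)} bytes)"
    if tag == SOURCE_FILE:
        return f"recalling stored scene ({len(frame)} bytes)"
    if tag == SOURCE_SCRIPT:
        return f"treating scene as hypothetical ({len(frame)} bytes)"
    return "ignoring unrecognised source"

print(handle(SOURCE_CAMERA, b"\x00" * 640))
print(handle(SOURCE_FILE, b"\x00" * 640))
```

Same payload, different tag, different reaction; whether that counts as a different "experience" is the question, but the processing certainly isn't identical.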

The price of beans is in the qualitative difference between the experiences of directly seeing something, and imagining the same scene, and in remembering the same scene. It is not merely that we can recognise the difference - we have different subjective reactions.
Yes. So?

In order to claim that a computer program is capable of consciousness, we have to either accept that it experiences different data in different ways
Which it does.

something that computer programs do not do as usually understood
Nonsense. It's unavoidable that computer programs experience different data in different ways.

or as some philosophers would have it, deny the existence of subjective experience altogether.
That works too.
 
Why would you say that? We have computers that are sensitive to the temperature and can turn on a heater. We have computers that are sensitive to light and can turn on the lights when it gets dark.
Westprog has an understanding of computers that excludes about 99.9% of what computers actually do.
 
The thing about computer programs is - all the data is of equivalent value. Everything that plugs into the computer is isolated via the system bus, device drivers and the operating system to end up just tweaking bits. All a computer program ever does is pull bits from registers and push other bits back. No matter how we tag the data, it's all equivalent. There is no qualitative difference - and this is a matter of design. The tags would just be more bits.
The same is true for humans.

This is quite different both to the way that the brain works and the way we experience the functioning of the brain and nervous system.
No, it's identical.

To me, any form of artificial consciousness would have to be centrally based around the direct connection to the external world.
Humans don't have one of those, so why would an artificial consciousness need one?

Computer programs exist in their own sensory deprivation tank.
No.

They are the constructs most isolated from the outside world, while human minds are the most connected.
Also no.

Would you like to play again?
 
This conversation will go nowhere until you define exactly what you mean by a qualitative experience and what is required to accept that such a thing is occurring. Otherwise we're going to have the "bits moving around just can't be conscious" circle dance again, and we've all had that dance too many times.

Your inability to imagine bits moving around as conscious is not a sufficient argument. Why would systematically replacing each of my neurons with a little computer not do the trick?
And if it doesn't, how many neurons does Paul need to replace before he becomes a mindless zombie?
 
Not that exact link, no, but I have posted a number of links about the project. Westprog and Malerin just ignore them.

ETA: Actually, westprog responded indirectly, stating that we can model the brain in a computer the way we can model weather in a computer. It doesn't mean that it snows in the computer.

When I was a dualist my favorite example was an orange tree. You can model a tree in a computer but you won't get oranges.
 
And, of course, a comment about what in the name of all that is holy we actually mean by the words knowledge and learn.

~~ Paul
Yes, I pointed out that Malerin was using the terms "complete knowledge of" in two different ways within the same premiss.

Her knowledge would of course consist of a set of experimentally confirmed hypotheses.

And I already pointed out that physicalism no more predicts that a hypothesis about the perception of red will actually be red than it predicts that a hypothesis about H2O will be wet.
 
Because I don't believe mental states are identical to brain states. But this isn't about how I define things.
On the contrary, it is precisely about how you define things.

By the normal definition of scientific knowledge there is no problem posed by the Mary argument.

But you are defining knowledge of a mental state as actually being in that mental state.

I am merely pointing out that you have to apply your definition consistently.
This is about reductionism.
No it is not. Reductionists use the normal definition of scientific knowledge.
Brain states as a necessary condition for knowledge is ad hoc and absurd anyway
I am not sure what you mean. I am pretty sure we need brain states for knowledge.

But it was your condition anyway. It has nothing to do with Physicalism or Materialism.
Yes, that is what I am saying.
Instructive that you crossed out "my". Are you saying that by experiencing red, Mary thereby gains instant knowledge of everybody's experience of red?

Important question.
Or are you claiming we know nothing of brain states?
Since I have not said or implied anything remotely like it, why should you ask?

Do you claim that a blind person knows nothing of mental states associated with vision?
So, to get back to Mary's room: once the requirement that Mary adopt a particular brain state is gone, she can gain complete knowledge of the brain states and physical processes associated with color perception without changing her own brain.
Wasn't my requirement in the first place. Wasn't a physicalist or materialist requirement.

It was your definition of knowledge that stated that knowledge of a mental state actually requires being in that state.

I am happy with the definition that Mary knows X when she knows and understands every irreducible, falsifiable, experimentally confirmed hypothesis about X.

So I am happy to run the Mary argument with your definition or with mine.

My only requirement is that we pick one and apply it consistently.


(By the way, if being in a mental state is a prerequisite to knowing about mental states then it would be literally impossible to have complete knowledge of colour vision).
 
Oh, and by the way Malerin, have you guessed the metaphysical position of the first reductionist?

Democritus, by the way, was not a reductionist - he completely rejected Empiricism.

Do you think the first reductionist could have been an Idealist?
 
