The Hard Problem of Gravity

Given that the p-zombie stuff is an admission that we cannot tell even if a human is actually conscious or just pretending, I cannot see this as a dealbreaker with regard to consciousness. How would we falsify the hypothesis that humans are conscious, as opposed to merely pretending?

That's what human beings spend a large amount of their time doing. Art. Sex. Even spectator sports. It's a way to convince ourselves that the other person really exists the same way that I do.

Of course, it's not a falsifiable hypothesis, and hence it's not in the realm of science.
 
I think it's a lot harder to convince people who've actually worked with computers that computer "science" is an infinite and all-encompassing realm than it is to convince people who've never written a program themselves.

Someone who understands even so much as a "Hello, world" will realise that before executing the "Hello world" statement, the computer/operating system/program has no idea that it is going to do it. While it's performing the operation, it has no idea what it's doing. And when it's done it, it doesn't know that it's done it.

Add a boolean variable to say "I've just said 'Hello world'", and does it know what it's done? No. It doesn't even know what the value of the variable is until it looks at it. When it looks away, it forgets again. And no matter how complicated the program might become, that's how it works. It never knows anything. It never remembers anything.
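The boolean-flag point can be made concrete. A minimal sketch in Python (a hypothetical toy written for illustration, not any program from the thread):

```python
# A toy program with a flag recording that it has printed a greeting.
# The flag is just a stored bit: the program never "anticipates" the
# print, never "knows" it is printing, and only consults the bit at
# the single instant the bit is read.

said_hello = False          # set before the program "plans" anything

print("Hello, world")
said_hello = True           # the bit flips, but nothing "remembers"

if said_hello:              # the value is consulted only right here
    status = "done"         # afterwards it is just charge in memory
```

However much state like this is added, it remains inert data that is only ever touched at the instant some instruction reads or writes it.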

Why don't you just take a look at the link nescafe posted.

After that, if you still want to argue about this, then ... you are likely "learning impaired."
 
Gate2501 said:
How do you know this to be true? It seems to me that this is an assumption you are making about the nature of reality. Not only is there no evidence that this is true, we have evidence that at its most basic level - i.e. QM - reality is NOT deterministic.

On a quantum scale it is not. The instances where this affects macroscale reality are few and far between, are they not? I was under the impression that nearly all systems above the scale of QM operate in a causal manner. The illusion of "randomness" on these larger scales is based on a lack of information.
The lack of information prevents us from knowing whether the randomness is “illusion” and due to some specific cause or if it is truly unpredictable. While large scale systems appear to be causal, the fact that QM is not at the most fundamental level of reality we can discern tells us that we cannot truly predict anything other than probabilistically. It appears to me that it is more in line with what we know about reality to state that for seemingly causal systems, we have a probabilistic distribution with one outcome having a probability very very close to 1.0.
If macro reality was deeply affected by QM, on the level that it would cause systems to violate causality, then why only the brain? Why has causality been so fundamental to science? Why aren't rocks and trees and planetary orbits exhibiting probabilistic behavior due to quantum uncertainty? Why haven't I experienced quantum entanglement with my car door?
Rocks are moved by wind and eroded by weather. Planetary orbits are subject to sensitive dependence on initial conditions, not to mention being tugged about by other sojourners in the solar system. Trees are biological organisms. Whether the probabilistic behavior is due to QM or chaos or something else is not clear, but they do exhibit probabilistic behavior. You can insist, as most determinists do, that it's simply because we don't have perfect knowledge of all possible inputs, but that's a statement of belief, not of fact.
Yes. That's why I think that QM provides support for the concept of free will.

It provides support for probabilistic behavior on large scales, if we assume that it applies to large scales.
While QM does not apply at large scales, my understanding is that it has impact on large scale behavior.
You cannot "choose" the end state of the cat (because it is impossible to determine if the cat is dead or alive), so even if the opening of the box was somehow not causal, or an act of free will, you wouldn't really be choosing anything at all; you would be throwing yourself unto the mercy of quantum uncertainty.
Choosing to open the box is an act of free will. How is that throwing yourself unto the mercy of quantum uncertainty?
I don't see any contradictions. The brain is 'you' or at least, a substantial part of what 'you' are. What mechanism, other than your brain, is needed for you to control yourself?
I think that both of us agree that your brain is the mechanism for input processing and the resulting output, we just disagree on how this works.

As I said in the earlier free will thread, both of us are arguing from unfalsifiable hypotheses. I think the onus is not on me to prove that free will doesn't exist. I feel that the burden of proof lies with you (to prove that it does); however, if you hold free will to be axiomatic, you can shift that burden back onto me.

I hold free will to be self-evident and agree with this quote from the other thread:
So what is the very good reason determinists have for telling me my observations (along with the observations of every other human being, including the determinists) are incorrect? It's that they can't conceive of a mechanism that would make them correct. To me, this is on a par with theism. Primitives say "we can't figure out why there's thunder, so it must be God." Then we figure out how that works, and later on, people say "we can't figure out how the variety of species came about, so it must have been God." And then we get Darwin, and figure that out. And then it'll be something else. To me, the determinists do something similar - they say "we can't figure out by what mechanism the individual makes a decision, so people must not really be making decisions." I'm sorry, but your failure to have a complete understanding of the workings of the human brain is not sufficient justification for me to throw out the shared observations of every member of the human race, including (most importantly from my point of view), me.

In other words, I might be mistaken about it, but I don't consider the arguments against it sufficient to overcome the observation that humans appear to have free will unlike, say, the explanation of the heliocentric solar system which is sufficient to overcome the observation that the earth appears to stand still and the sun to orbit us.

I think the burden of proof lies always with the individual who wishes to persuade another individual to a different point of view. In that regard, you have the burden of proof if you wish to persuade me that I am incorrect. I have the burden of proof if I wish to persuade you.

However, all I am trying to do is help you understand why QM, with its probabilistic impact on larger-scale systems, allows free will to emerge in a way that determinism doesn't.
 
That's odd. I always thought that things are precisely the sum of their parts.

Err...

A phrasing more suited to your outlook would be "he is not clear how much greater a phenomenon can be than the apparent sum of its parts."

Meaning an uneducated human observer would be likely to see only a small fraction of that sum.
 
How do YOU know that other humans are conscious at all, Beth ? Other than just assuming so ?
I think that's a very reasonable assumption.
It could very well be that some humans are NOT conscious at all, but all have the ability to act as though they are. You could never tell the difference, and so there is no difference at all, for all practical purposes.

Er, I understand that I'm assuming other humans are conscious. What I don't understand is how they could appear to be conscious without actually being conscious? How would an unconscious human pretending to be conscious act? How could a human 'pretend' to be anything without being conscious in order to do so?
 
What is an imaginative or novel thought but the confluence of several mundane ideas coexpressed? Why could we not program a computer to produce the same?

We generally do not because of the way that we use computers. They are tools that perform some of our mental labor instead of self-directed entities. We don't particularly want them to be self-directed entities; but I'm not sure I see the limitation in them that makes it impossible for them to be self-directed.

Just add a motivational/emotional type system and the ability to sift through competing claims/ideas and conjoin them in novel ways (and to decide which of these combinations are useful and which not) and we'd probably see something very similar to us.

It would be very easy to produce a program that said whether it was enjoying itself or not. It would even be possible to have it monitor something, just like a thermostat, and set values accordingly. I've written systems like that, and it would be trivial to rename variables to "m_iHappiness" or "Feelings_Of_Sadness_And_Hopelessness". The function of the program would be to maximise one variable and minimise the other. Would such a trivial program actually feel happiness or sadness? Indeed, could it be said to be trying to set the variables to any given value?
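A toy version of the thermostat-style program described above might look like this (a hypothetical sketch; the variable names are the ones the post suggests, and whether renaming them confers feeling is exactly the question):

```python
# A trivial "mood" monitor in the spirit of a thermostat: it nudges one
# variable up and another down in response to a sensed value. Renaming
# the variables to emotional terms changes nothing about what it does.

class MoodMonitor:
    def __init__(self):
        self.m_iHappiness = 0
        self.Feelings_Of_Sadness_And_Hopelessness = 100

    def sense(self, reading):
        # The program's whole "function": maximise one variable and
        # minimise the other whenever the reading is positive.
        if reading > 0:
            self.m_iHappiness += 1
            self.Feelings_Of_Sadness_And_Hopelessness -= 1

m = MoodMonitor()
for reading in [1, 1, 1]:
    m.sense(reading)
print(m.m_iHappiness)  # prints 3 - but is anything felt?
```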

Whatever a program does, is exactly what it was trying to do. A program, unlike a person, cannot fail. Even if it crashes as soon as it's run, that's what it is meant to do. All actions and results are equivalent.
 
I do not think machine intelligence or machine consciousness are necessarily impossible. I just don't think SHRDLU and simple self-referential information processors meet that criterion, that's all.

I agree with this.

That's odd. I always thought that things are precisely the sum of their parts.

"The Society of Mind" (can't remember the author offhand but I recommend the book highly) gives a nice example of things that are more than the sum of their parts. A box composed of 6 pieces of wood has properties that none of the individual parts do - i.e. mouse-tightness. A mouse can circumvent a single piece of wood relatively easily, but cannot escape when they are configured as a box.
 
The lack of information prevents us from knowing whether the randomness is “illusion” and due to some specific cause or if it is truly unpredictable.

This is just wrong. Without a mechanism for why these events on macro scales would be truly unpredictable, there is no reason to abandon causality for "spooky randomness". I will use my dice example yet again. The end result of a d6 is best explained by a range of probable outcomes, because human beings rolling a d6 cannot compute all of the data about the initial conditions of the roll coupled with environmental variables. Does this mean that we should assume that causality has been violated by this d6?
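The d6 point can be illustrated with a pseudorandom number generator: its output looks random to anyone who lacks the seed, yet is fully determined by it (a sketch added here for illustration, not part of the original exchange):

```python
import random

# Two sequences of d6 "rolls" from identical initial conditions (the
# seed) are identical. The apparent randomness is only the observer's
# lack of information about those conditions; causality is untouched.

def roll_sequence(seed, n=10):
    rng = random.Random(seed)              # fixed initial conditions
    return [rng.randint(1, 6) for _ in range(n)]

print(roll_sequence(42) == roll_sequence(42))  # prints True
```

An observer who never sees the seed can do no better than assign each face a probability of 1/6, even though nothing non-causal is happening.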

Absolutely not.

While large scale systems appear to be causal, the fact that QM is not at the most fundamental level of reality we can discern tells us that we cannot truly predict anything other than probabilistically.

On a quantum scale this is correct. Are you saying that all large scale systems only appear to be causal? Should science abandon causality in exploring these large scale systems in favor of quantum probabilistic distributions? I am not denying that quantum indeterminacy exists. I am trying to explain to you that it is terribly insignificant on these large scales, and that it is no reason to throw out causality, nor does it override it.

It appears to me that it is more in line with what we know about reality to state that for seemingly causal systems, we have a probabilistic distribution with one outcome having a probability very very close to 1.0.

Close enough to 1 to be entirely insignificant on macroscopic scales? Yes.

Rocks are moved by wind and eroded by weather. Planetary orbits are subject to sensitive dependence on initial conditions, not to mention being tugged about by others sojourners in the solar system. Trees are biological organisms. Whether the probabilistic behavior is due to QM or chaos or something else is not clear, but they do exhibit probabilistic behavior.

I'm sorry, but this is perfectly clear. It is like your blood type example. Probabilistic behavior (due to lack of information) on macro scales is not the same as quantum indeterminacy. Should we just stop searching for more information about rocks/trees/orbits? Because if it is anything like quantum indeterminacy, then this information does not exist, and our search is pointless.

You can insist, as most determinists do, that it's simply because we don't have perfect knowledge of all possible inputs, but that's a statement of belief, not of fact.

I tend to not give rebuttals to arguments that apply to both sides of the debate. As I said before, we are both arguing from unfalsifiable hypotheses about the nature of macroscale reality.

Choosing to open the box is an act of free will. How is that throwing yourself unto the mercy of quantum uncertainty?

In this analogy, you can open the box, but you cannot determine the end state of the cat, that is indeterminable because it was directly based on an unmeasurable quantum state.

My point was that if your brain works on the principles of QM, then you still would not be able to "choose" from this probabilistic range of end states, because the "you" that is determining which end state "you" would like to end up at (as an act of free will) would be presented with unmeasurable options.

I think the burden of proof lies always with the individual who wishes to persuade another individual to a different point of view. In that regard, you have the burden of proof if you wish to persuade me that I am incorrect. I have the burden of proof if I wish to persuade you.

This is not correct. The burden of proof lies with the positive claimant. In this case, that would be you, claiming that you possess freedom of will. However, like I said before, if you take free will to be axiomatic, then you can shift that burden back onto me.

I do not feel that free will is an axiom (obviously), so if you do (and I think you do), then we are simply not going to agree on who the burden of proof lies with.

However, all I am trying to do is help you understand why QM, with its probabilistic impact on larger-scale systems, allows free will to emerge in a way that determinism doesn't.

No, what you are trying to do is create a back door for free will/spooky consciousness by cherry-picking QM and applying certain aspects of it to the behavior of the brain. It certainly would help your case if it were true, but even if it were, it presents problems of its own that you clearly do not want to face down.
 
Says who?


Says who?

Come on, Pixy. GWT-based models predominate in the field. Even Dennett goes for GWT.


What makes you think these "modules" are unconscious?

I'm not aware of their activity.

You might say that they are in some way conscious in their own right. Who knows?

Non-sequitur.

With your personal definition of "consciousness" that is the case. But you are claiming once more that consciousness necessarily involves a self-referencing loop. This is not a generally stated defining property of consciousness imo.

You are an AI man, Pixy, as I see you. You go for hard definitions, and then that's the way it is, and anything which goes against the definition is simply wrong to you. But this, I submit, is simply not how the majority of learned commentators see consciousness at all. I urged you before to read Blackmore's collection of interviews to get a more rounded perspective. I hardly see anyone else having these rigid definitions you come up with.

Then I'm sure that you - unlike AkuManiMani or Westprog or anyone else who has ever made this claim - can point out something that is actually different between human and machine consciousness.

Phenomenality may be a defining difference. We don't know yet. We don't know if a computer sees in the same way a human sees. Consciousness to you is an inherent property attributed to self-referencing data streams. But then you need to make the brain into a mass of individually conscious modules. This is what happens when you take these incredibly simplistic notions of consciousness drawn from AI and try and transpose them into the real world of human consciousness. It may turn out that it is like that, but we don't know that yet and I don't see the point in not investigating fully simply because guys like you can't bear to see their rigid definitions themselves investigated.

Nick
 
Why don't you just take a look at the link nescafe posted.

After that, if you still want to argue about this, then ... you are likely "learning impaired."

At the moment I'm dealing with the initial claim that SHRDLU is conscious - indeed, that the trivial AI programs of the 1970s solved the not-very-interesting problem of consciousness and that there's really nothing else fundamental to do.

I realise that that is not the same thing that all AI proponents are claiming, and I'll deal with the other claims in due course.
 
No, I disagree. If it behaves exactly like you, it's conscious exactly like you.

Well, if we take GWT as the model, then theoretically it should be possible to "re-wire" the brain with silicon such that intra-modular communication can take place without the need for global access (phenomenal consciousness). Hey ho, one p-zombie is immediately created. If consciousness is just global access then you can replace consciousness with a CPU.

Nick
 

You know, when I was a bit wet behind the ears in this arguing-on-the-internet thing, I might have fallen for the this-website-proves-me-right argument. Let me see, what would be the appropriate inappropriate response -

No, I DEMAND that you explain your position in a single sentence of not more than ten one-syllable words.

Meanwhile, I'd be interested to see how you make a program prefer one outcome over another.
 
On a quantum scale this is correct. Are you saying that all large scale systems only appear to be causal? Should science abandon causality in exploring these large scale systems in favor of quantum probabilistic distributions? I am not denying that quantum indeterminacy exists. I am trying to explain to you that it is terribly insignificant on these large scales, and that it is no reason to throw out causality, nor does it override it.

The quantum laws explain the probability distribution of quantum events. These probabilistic laws do not insist that each quantum event has equal consequence.

For example, if Lee Harvey Oswald had had a device fitted to his rifle that used quantum indeterminacy to cause it to malfunction 50% of the time, then a single quantum event could have had a highly significant effect. There is nothing in quantum theory to deny this. All quantum theory demands is that the books balance at the end of the day. Whether this provides a mechanism for free will I don't know. It could possibly provide a genuine randomiser that gives us the feeling that we have free will.
 
The quantum laws explain the probability distribution of quantum events. These probabilistic laws do not insist that each quantum event has equal consequence.

For example, if Lee Harvey Oswald had had a device fitted to his rifle that used quantum indeterminacy to cause it to malfunction 50% of the time, then a single quantum event could have had a highly significant effect.

Bolding mine.

Wow, you attached an imaginary device to his rifle that made what is normally terribly insignificant, significant.

Are you suggesting that such devices are present in the brain? If not, then what the heck was I supposed to take from this post?
 
Someone who understands even so much as a "Hello, world" will realise that before executing the "Hello world" statement, the computer/operating system/program has no idea that it is going to do it. When it's performed the operation, it has no idea what it's doing. When it does it, it doesn't know that it's done it.
Someone who understands a whole lot more than "Hello, world" will tell you that every part of this is untrue.

Add a boolean variable to say "I've just said 'Hello world'", and does it know what it's done? No. It doesn't even know what the value of the variable is until it looks at it. When it looks away, it forgets again. And no matter how complicated the program might become, that's how it works. It never knows anything. It never remembers anything.
In fact, biological memory works exactly like dynamic RAM in computers; recalling a memory erases that memory, and it has to be written back again, an error-prone process, so the very act of remembering something can change your memories.

Researchers have demonstrated this, being able to erase specific memories in rats and remove particular learned responses without affecting anything else.
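The read-erase-rewrite cycle described above can be sketched as a toy model (an illustration of the analogy only, not a claim about real DRAM controllers or neuroscience):

```python
# Toy "reconsolidating" memory: reading a cell destroys its contents,
# and the value must be written back. If the write-back path distorts
# the value, the act of recall itself changes what is stored.

class DestructiveReadMemory:
    def __init__(self, value):
        self._cell = value

    def recall(self, writeback=lambda v: v):
        value = self._cell
        self._cell = None               # the read empties the cell
        self._cell = writeback(value)   # restore, possibly with error
        return value

mem = DestructiveReadMemory("first day of school")
mem.recall(writeback=lambda v: v.upper())  # a distorting rehearsal
print(mem._cell)  # the stored value was changed by the recall itself
```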
 
You know, when I was a bit wet behind the ears in this arguing-on-the-internet thing, I might have fallen for the this-website-proves-me-right argument. Let me see, what would be the appropriate inappropriate response -

Well, the website doesn't prove me right, it just proves that you are utterly clueless when it comes to state of the art computer science.

No, I DEMAND that you explain your position in a single sentence of not more than ten one-syllable words.

My position is that you cannot come up with a formal definition of something that people can do that computers cannot do when it comes to computation.

Meanwhile, I'd be interested to see how you make a program prefer one outcome over another.

Define 'prefer' and I would be happy to.
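For one operational definition of "prefer" (reliably selecting whichever option scores highest under some internal valuation), a program is trivial. A sketch only; whether this counts as genuine preference is precisely what's in dispute:

```python
# "Preference" as argmax over an internal scoring function. The program
# reliably picks one outcome over the others; whether that constitutes
# preferring in any meaningful sense is the open question.

def prefer(options, score):
    return max(options, key=score)

# A hypothetical scorer that favours shorter words.
choice = prefer(["chocolate", "tea", "coffee"], score=lambda w: -len(w))
print(choice)  # prints tea
```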
 
