The Hard Problem of Gravity

Well, if we take GWT as the model, then theoretically it should be possible to "re-wire" the brain with silicon such that intra-modular communication can take place without the need for global access (phenomenal consciousness). Hey ho, one P-zombie is immediately created. If consciousness is just global access, then you can replace consciousness with a CPU.
No. The "Global Workspace" is a synthesis of those communications. If you re-wire the brain with silicon - or anything else - you still have those communications, and you still get that synthesis.

There is no global access. There's just communication between neurons.
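
To make the "synthesis of communications" point concrete, here is a toy Python sketch (purely illustrative; the module names and messages are invented). The "workspace" is nothing over and above a function of the messages the modules exchange, so re-wiring a module in silicon changes nothing as long as it sends the same messages:

```python
# Toy illustration: the "global workspace" is just a synthesis computed
# from inter-module messages.  Swap the substrate of a module (neurons
# for silicon); if the messages are the same, the synthesis is the same.
def biological_vision():
    return {"vision": "edge detected"}

def silicon_vision():           # hypothetical silicon replacement
    return {"vision": "edge detected"}

def synthesis(messages):
    merged = {}
    for m in messages:
        merged.update(m)        # the "workspace" is only this merge
    return merged

print(synthesis([biological_vision(), {"memory": "seen it before"}]))
print(synthesis([silicon_vision(),    {"memory": "seen it before"}]))  # identical
```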

And P-zombies are always nonsensical. If you think you have an argument involving P-zombies, it just means that you got something wrong somewhere.
 
Bolding mine.

Wow, you attached an imaginary device to his rifle that made what is normally terribly insignificant, significant.

Are you suggesting that such devices are present in the brain? If not, then what the heck was I supposed to take from this post?

What you are supposed to take is that single quantum events can have real world results of unlimited significance. It is not the case that quantum physics simply resolves itself into classical physics at larger scales. Quantum physics resolves itself into classical physics statistically. Individual quantum events are uncaused, and cannot be predicted.
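
A minimal Python sketch of the "resolves itself statistically" point (the numbers are illustrative, not physics): each individual event is unpredictable, but the average over many events becomes sharply predictable.

```python
import random

# Each "event" is an unpredictable +1/-1 outcome, a stand-in for a single
# quantum measurement.  No individual outcome can be predicted, but the
# mean over N events settles down at roughly the 1/sqrt(N) rate.
def mean_of_events(n, seed=0):
    rng = random.Random(seed)
    return sum(rng.choice((-1, +1)) for _ in range(n)) / n

for n in (10, 1_000, 100_000, 1_000_000):
    print(f"N={n:>10,}  mean={mean_of_events(n):+.5f}")
# Individual events stay random; the aggregate behaves "classically".
```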

Whether or not such quantum events are applicable to how the brain works and how consciousness arises is not known. But there's nothing "magical" or "mysterious" about quantum physics. Quantum physics is how the world actually works.
 
Well, the website doesn't prove me right, it just proves that you are utterly clueless when it comes to state of the art computer science.

Oh, one of those websites. Packed full of fantastic ideas that just happen to support you and refute me.

My position is that you cannot come up with a formal definition of something that people can do that computers cannot do when it comes to computation.

The fact that you think that only formal definitions should apply to this subject shows where the limitations lie. Human beings are not restricted to formal definitions. Computer programs are pretty much formal definitions.

Define 'prefer' and I would be happy to.

The dictionary is full of verbs like "prefer" and "enjoy" and "try". There are hundreds of them. Human beings treat them as if they have some kind of meaning. Perhaps they don't. They certainly don't have any meaning for computer programs.

Human beings do try to do things. They do enjoy certain outcomes, and others less so. They prefer one outcome over another. None of this applies to computer programs.

I sometimes think that the real aim of AI researchers is not to actually produce conscious beings, but to convince themselves that they are really meat machines.
 
Human beings do try to do things. They do enjoy certain outcomes, and others less so. They prefer one outcome over another. None of this applies to computer programs.

I see. So do explain the reasons why certain outcomes are enjoyed and others less so. What is the rationale behind preference? Why should this apply to a computer program? Do you know what a domain error is?
 
The fact that you think that only formal definitions should apply to this subject shows where the limitations lie. Human beings are not restricted to formal definitions.

No, but science is.

So apparently you think consciousness is forever outside the realm of science.

Good job for finally admitting it, now at least we all know where you stand.
 
Gate2501 said:
The lack of information prevents us from knowing whether the randomness is “illusion” and due to some specific cause or if it is truly unpredictable.

This is just wrong. Without a mechanism for why these events on macro scales would be truly unpredictable, there is no reason to abandon causality for "spooky randomness".
Whoa there Nellie! You don't have to abandon causality to recognize randomness exists. And when did randomness become 'spooky'?
I will use my dice example yet again. The end result of a d6 is best explained by a range of probable outcomes, because human beings rolling a d6 cannot compute all of the data about the initial conditions of the roll coupled with environmental variables. Does this mean that we should assume that causality has been violated by this d6?
I don't consider it abandoning causality to recognize that we can't predict the outcome without using a probabilistic distribution. What I was trying to get across is that because we cannot predict the outcome without using probability, we cannot be sure that causality is appropriate for every one of the many environment variables and initial conditions.
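
To put the dice example in concrete terms, here is a toy Python model (an illustration only, not a physical simulation): the "roll" is a deterministic but extremely sensitive function of its launch conditions, so the tiny uncertainty in those conditions leaves us with a roughly uniform distribution over the six faces.

```python
import math
import random
from collections import Counter

def roll(launch_speed):
    """Deterministic toy 'die': the outcome is a hugely amplified
    function of the launch speed, standing in for all the initial
    conditions and environmental variables we cannot measure."""
    return int(math.floor((launch_speed * 1e6) % 6)) + 1

rng = random.Random(42)
trials = 60_000
counts = Counter(roll(2.0 + rng.uniform(-0.01, 0.01)) for _ in range(trials))
for face in range(1, 7):
    print(face, round(counts[face] / trials, 3))   # each face near 1/6
# Nothing acausal happened; we just can't know the inputs well enough.
```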

While large scale systems appear to be causal, the fact that QM is not causal at the most fundamental level of reality we can discern tells us that we cannot truly predict anything other than probabilistically.

On a quantum scale this is correct. Are you saying that all large scale systems only appear to be causal?
Only in the same sense that you stated earlier that randomness is "illusion".
Should science abandon causality in exploring these large scale systems in favor of quantum probabilistic distributions? I am not denying that quantum indeterminacy exists. I am trying to explain to you that it is terribly insignificant on these large scales, and that it is no reason to throw out causality, nor does it override it.
Did you mean to imply we should give up using probabilistic methods to make predictions? I didn't think so. We should use whatever tools and techniques are useful.
It appears to me that it is more in line with what we know about reality to state that for seemingly causal systems, we have a probabilistic distribution with one outcome having a probability very very close to 1.0.

Close enough to 1 to be entirely insignificant on macroscopic scales? Yes.
Not entirely insignificant for all macroscopic situations, but basically yes.
You can insist, as most determinists do, that it's simply because we don't have perfect knowledge of all possible inputs, but that's a statement of belief, not of fact.

I tend to not give rebuttals to arguments that apply to both sides of the debate. As I said before, we are both arguing from unfalsifiable hypotheses about the nature of macroscale reality.
Fair enough.

My point was, that if your brain works on the principles of QM, then you still would not be able to "choose" from this probabilistic range of end states, because the "you" that is determining which end state "you" would like to end up at (as an act of free will) would be presented with unmeasurable options.
What 'unmeasurable' options? And how does that mean no 'free will'? Your choice, among various available options, is determined solely by your intent. Choosing among various options is how I define free will.
I think the burden of proof lies always with the individual who wishes to persuade another individual to a different point of view. In that regard, you have the burden of proof if you wish to persuade me that I am incorrect. I have the burden of proof if I wish to persuade you.

This is not correct. The burden of proof lies with the positive claimant.
We will just have to disagree then. Positive or negative, you only convince someone else when you are willing to accept the burden of proof.
However, all I am trying to do is help you understand why QM, with its probabilistic impact on larger scale systems, allows free will to emerge in a way that determinism doesn't.

No, what you are trying to do, is create a back door for free will/spooky consciousness, by cherry picking QM and applying certain aspects of it to the behavior of the brain. It certainly would help your case if it were true, but even if it were, it presents problems of its own that you clearly do not want to face down.

Please don't presume motives for me and I'll grant you the same courtesy. I stated what I was trying to do. I have no need to create a back door for free will and/or consciousness. They are welcome to use the front entrance. :D

You previously indicated a desire to understand why QM helps the case for free will. I was simply trying to explain why I feel it works. You need not agree, but whatever problems you feel it presents, you have yet to present coherently and convincingly to me.
 
No, but science is.

So apparently you think consciousness is forever outside the realm of science.

Good job for finally admitting it, now at least we all know where you stand.

Where do you get "forever"? At present, consciousness (and all those difficult to define words that are involved with humanity) aren't in the realm of science. Maybe one day they will be.
 
I see. So do explain the reasons why certain outcomes are enjoyed and others less so. What is the rationale behind preference?
Preferences frequently are not rational. I think that's his point.
 
This is not to say that machine intelligence is impossible, far from it. Not my case at all - I do not agree with westprog and AMM, for the most part. I'm just trying to pull out some threads of the consciousness = self-reference and appearance-of-consciousness == consciousness positions.

Don't get me wrong. Current AI systems and computers are good examples of machine intelligence. I'm just far from convinced that they are conscious or that creating conscious machines is technically possible with today's know-how.

What instructs the DNA, is what I mean. The computer programme has something external feeding in a set of instructions. The biological system does not. It does not have a finite set of behaviours, IYSWIM. What a computer can do (when we're talking about information processes) is fixed IN ADVANCE. What a biological system can do (when we're talking about consciousness) is not, or at least not in the same way.

The interesting thing about genetics is that, depending on the context, a cell will read and interpret genes in different ways. There are numerous epigenetic factors which can come into play that affect how and when genes are read. There is a certain degree of intelligence involved in this process. This is how somatic cells, which are essentially clones of each other, can express such radically different phenotypes.
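
As a cartoon of the context-dependent readout being described (the gene names and marks below are invented, and real epigenetics is vastly richer than this): the same genome, filtered through different epigenetic marks, yields different expression profiles.

```python
# Toy context-dependent gene readout: one genome, different epigenetic
# marks, different phenotypes.  (Invented names; a cartoon, not biology.)
GENOME = {"geneA": "keratin", "geneB": "digestive enzyme", "geneC": "housekeeping"}

def express(genome, silenced):
    """Return the products of all genes not silenced in this context."""
    return {gene: product for gene, product in genome.items() if gene not in silenced}

print("skin-like cell:", express(GENOME, silenced={"geneB"}))
print("gut-like cell: ", express(GENOME, silenced={"geneA"}))
```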

I don't know what the answer is. That's why it's called the Hard Problem. I think it's a methodological rather than an ontological problem, but it's definitely a problem. Answers like Pixy's seem to me to be unsatisfyingly glib, somehow... which is why I'm trying to see if they really are glib, or just expressed glibly.

They are expressed glibly and they are glib. All they've done is redefine the problem as another, simpler problem and declared it solved. It's essentially the argument of an alchemist who's convinced himself that he's successfully created homunculi or found the formula for turning lead into gold.
 
I think this can be used as further elaboration of my points in post #741.

It seems obvious that there has been an evolutionary advantage for a system to manifest potential complex behaviour. Thus we have ended up with brains with enormous complexity. But on the other hand, it would seem plausible that complexity alone isn't sufficient. Thus there would have to have been pressure for organization in a structured and hierarchical way. How else would the system behave in a "meaningful" way in regards to its internal and external environment?

Ultimately this would manifest in evolutionary pressure to organize in such a way as to highly restrict and regulate the mechanism of global access, and what could be accessed globally. Without restrictions there would not be behaviour at all, or it would be chaos all over the place, and the organism would not be able to respond in a particular way at all, or it could even die from lack of self-preservation on the spot. For instance, what would happen if there were universal access across the whole of the brain at the same time? It would be a total disaster and we would probably die in a few minutes, or so I would assume at least. Access here meaning "conscious" access.

It would hence also seem plausible that due to the strict regulation and restriction of access, the organism would benefit from evolving even further complexity (as in connectivity etc.) in some "safe" and "specific" instances. Thus ultimately manifesting in abstract and linguistic reasoning etc. Even emotions and motivational manifestations could perhaps be considered as some kind of "representations" of elements that aren't "allowed" direct global access, but still being beneficial in terms of some other parts of the system being aware of them in an indirect way (and then connecting them to a circuit where global access would be both safe, beneficial and meaningful).
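
For what it's worth, the "restricted and regulated global access" idea can be put as a toy program (a cartoon of workspace-style gating, not a brain model; all names and numbers are invented): many specialist modules report, but only the most salient report is broadcast back to everyone.

```python
# Toy gating of "global access": many modules report, only one winner
# gets broadcast globally per cycle.  (Invented names and saliences.)
from dataclasses import dataclass

@dataclass
class Report:
    source: str
    content: str
    salience: float   # how urgently the module wants global access

def workspace_cycle(reports):
    winner = max(reports, key=lambda r: r.salience)  # the restriction
    return winner                                    # the rest stays local

reports = [
    Report("vision",  "fast red shape approaching", 0.9),
    Report("hearing", "steady background hum",      0.2),
    Report("memory",  "red usually means danger",   0.6),
]
broadcast = workspace_cycle(reports)
print(f"broadcast globally: {broadcast.source} -> {broadcast.content!r}")
# Broadcasting everything at once -- no restriction -- is the
# "total disaster" scenario described above.
```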


Yes, very good. I may have more to add tomorrow but am limited for time now.
 
I was certainly aware of SHRDLU, and Eliza, and all the other simple scripts from the youth of AI, when everything seemed possible. I'm still not aware of SHRDLU the conscious program aware of its own existence.
Then follow the link, and learn.

A JIT compiler might, at a pinch, rewrite its own code while it's running, on the fly.
Certainly.

Does this imply some awareness of what being a compiler program is? Of course not.
And this is different from humans how?
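
As an aside, rewriting code at runtime is itself mundane. A minimal Python sketch (plain runtime code generation, not an actual JIT) of a program replacing one of its own functions while it runs:

```python
# A running program replacing one of its own functions.  This is plain
# runtime code generation via compile/exec -- a stand-in for what a real
# JIT does with machine code, not an actual JIT compiler.
def greet():
    return "hello (original code)"

print(greet())

new_source = 'def greet():\n    return "hello (rewritten at runtime)"\n'
namespace = {}
exec(compile(new_source, "<generated>", "exec"), namespace)
greet = namespace["greet"]      # swap in the freshly generated function

print(greet())   # behaviour changed on the fly; no awareness required
```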
 
Where do you get "forever"? At present, consciousness (and all those difficult to define words that are involved with humanity) aren't in the realm of science.
A lot of neuroscientists will be very surprised to find themselves out of a job.
 
It would be very easy to produce a program that said whether it was enjoying itself or not. It would even be possible to have it monitor something, just like a thermostat, and set values accordingly. I've written systems like that, and it would be trivial to rename variables to "m_iHappiness" or "Feelings_Of_Sadness_And_Hopelessness". The function of the program would be to maximise one variable and minimise the other. Would such a trivial program actually feel happiness or sadness? Indeed, could it be said to be trying to set the variables to any given value?
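
For reference, the kind of trivial program being described might look like this in Python (the thresholds are arbitrary; the variable names deliberately echo the ones above):

```python
# The trivial "feeling" program described above: monitor a reading,
# push one variable up and the other down.  Renaming the variables is
# the only thing that makes it sound emotional.
def run(readings, comfortable=20.0):
    m_iHappiness = 0
    Feelings_Of_Sadness_And_Hopelessness = 0
    for value in readings:
        if value >= comfortable:
            m_iHappiness += 1
        else:
            Feelings_Of_Sadness_And_Hopelessness += 1
    return m_iHappiness, Feelings_Of_Sadness_And_Hopelessness

print(run([18.5, 21.0, 22.3, 19.9, 25.0]))   # (3, 2)
```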

Whatever a program does, is exactly what it was trying to do. A program, unlike a person, cannot fail. Even if it crashes as soon as it's run, that's what it is meant to do. All actions and results are equivalent.


There is a difference between creating an automaton that said it was enjoying itself -- simple switch operation -- and a useful motivational system. I'm thinking more in terms of recreating the essence of a motivational system. We don't know enough of how these processes operate in the brain to simulate them very effectively, but simple weighting of inputs and outputs doesn't cover it.

As for feeling happiness or sadness, how could you possibly know but from shared behavior, same as with other humans? Think about what a motivational system is -- it has to show up somehow in order to motivate us. The feeling of happiness or sadness is the motivational system at play. So, I would have to say, yes, that a computer hooked up in the same way would feel it. It might not be exactly the same as our emotional system which occurs within a body, but it would essentially be the same thing.

As to a program doing what it was meant to do -- how does that differ from our nervous system? First, and I repeat, we cannot think in terms of single programs. To even approach what consciousness does requires us to think of several programs (or at least wildly divergent subroutines that often perform contradictory actions) that are controlled by one or more master programs that can refer both to themselves and to lower level systems -- as our mirror neurons respond both to internally generated movements and to watching others perform the same action. Second, barring the trivial issue of mistakes, our nervous systems simply carry out computational objectives. They do what they are meant to do. It seems to me that if you want to suggest otherwise then you have already put a homunculus in the brain and are necessarily stuck in dualism.
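
A toy sketch of the "several programs under a master program" picture (pure illustration; the subsystem names and weights are invented): subsystems push in contradictory directions, and a supervisor that reads both their outputs and its own previous decision arbitrates.

```python
# Toy "master program": contradictory subsystems propose actions, and a
# supervisor arbitrates while also consulting its own previous decision
# (a crude form of self-reference).  Names and weights are invented.
def hunger_module(state):   return ("eat",   state["hunger"])
def fatigue_module(state):  return ("sleep", state["fatigue"])
def safety_module(state):   return ("flee",  state["threat"])

SUBSYSTEMS = [hunger_module, fatigue_module, safety_module]

def master(state, last_decision):
    proposals = [module(state) for module in SUBSYSTEMS]
    # Mild persistence: favour whatever the master itself chose last time.
    scored = [(weight + (0.1 if action == last_decision else 0.0), action)
              for action, weight in proposals]
    return max(scored)[1]

decision = None
for state in ({"hunger": 0.7, "fatigue": 0.2, "threat": 0.1},
              {"hunger": 0.4, "fatigue": 0.5, "threat": 0.9}):
    decision = master(state, decision)
    print(decision)   # "eat", then "flee"
```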

You guys continue to describe what programs currently do in the absence of motivational systems and complex self-referential pathways. We aren't talking about the current status of programming but future possibilities.

If you want to argue that a computer will never be able to "be conscious" then this requires a better and more inclusive argument. Searle's Chinese Room tried to do it, but it suffered from the limitation of concentrating on lower level functioning and not finding consciousness of language function there. But I don't think that anyone working on this today thinks that consciousness of any lower level function exists precisely in that lower level pathway -- it is a higher level self-referential issue. Brains do it with neurons; mirror neurons are one clue to the possibilities. Why can't computers do it with silicon chips?
 
If you define "conscious" simply in terms of "being awake," then I agree.

But that isn't what you, or any other HPC proponent, are doing.

You start with something that is self evident -- being awake, being aware, experiencing things, whatever. Then you extrapolate and include all sorts of other stuff that is not self evident. This is not logically valid. If you want to talk about something being self evident, you have to stick with only what is self evident.

And the reason you do this is because you lack a formal definition of "conscious." So you think "well, if I am awake I am conscious, and if I am conscious I must be able to experience qualia and subjectivity, and since it is self evident that I am awake, it must be self evident that qualia and subjectivity exist."

That is a fallacy.

Rocket, the rationale isn't "Oh, I'm awake so therefore I must be able to have qualia". The experience of being awake is qualitative; any subjective experience at all is qualia. It's not an additional property of being conscious and awake -- it is consciousness.

I'm literally stunned that you don't seem to be picking up on this in the slightest :confused:

AkuManiMani said:
What's circular about saying "consciousness exists as a phenomenon; consciousness is a requisite of knowledge"? It's no more circular than saying "mass is a real property; mass is a requisite to weight."

Because any formal definition of consciousness must be predicated on the existence of knowledge.

If you disagree, just go ahead and try to define "consciousness" without somehow relying on the notion of "to know."

I already gave an example of such in the thought experiment I proposed in post #353. The subject of the thought experiment is conscious, in the physiological sense, but does not have knowledge of anything because they are sensorially cut off from their environment.

Consciousness does not have a tautological relationship with knowledge; it is merely the necessary requisite for it. Just as an object cannot register weight unless it has mass, so an entity cannot have knowledge unless it is conscious. There is absolutely no logical contradiction or circular reasoning in this statement. For the life of me, I cannot understand why you don't see this.

AkuManiMani said:
We know that consciousness exists as a phenomenon and that each of us experiences this state at various periods of the day; this is a given. What we don't know is what in physics necessitates or governs conscious [i.e. subjective] experience. This is the reason why we are stuck with informal, 'fuzzy' definitions. For reasons that I've already mentioned, it is evident that self-referential intelligence is not a sufficient requisite for conscious experience.

Another fallacy.

Are you seriously claiming that lack of knowledge of the mechanism causing a phenomenon necessarily prevents us from at least operationally defining the phenomenon?

That's just the problem. Neither you nor anyone else has an operational definition of qualitative experience [i.e. consciousness]. There are various methods of defining and modeling computational functions but absolutely nothing in the way of describing how such functions translate into conscious thought. The field of AI lives up to its namesake: artificial intelligence. But intelligence and consciousness are not the same thing.


AkuManiMani said:
Of course.

Modeling in finer detail the exact physiological processes that give rise to said consciousness will require considerably more than simply stating consciousness as a given. The point of me assigning an "X" variable to consciousness is to serve as a conceptual placeholder until there is such a formal method of modeling what it is, exactly. There is no convincing evidence that we have such a formal system yet. My purpose here is to suggest possible avenues of investigation to determine a means of crafting such a system. My guess is that we need to study the physical process of instances that we do know are conscious [e.g. living brains] and work from there.

Well, perhaps we have been too harsh on you then -- you clearly know nothing about computer science and computation theory.

All the fundamentals we need to describe human consciousness are already known. We know exactly how an individual neuron behaves. The question, as with any complex problem, is how to arrange the fundamentals into something greater than the sum of its parts.
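
For illustration, here is the kind of simplification usually meant by "we know how a neuron behaves" -- a leaky integrate-and-fire unit in Python (a textbook cartoon; real neurons are biophysically far richer, and the constants here are arbitrary):

```python
# Leaky integrate-and-fire neuron: membrane potential leaks toward rest,
# integrates input current, fires and resets at threshold.  A textbook
# cartoon of "how an individual neuron behaves", with arbitrary constants.
def lif(inputs, dt=1.0, tau=20.0, threshold=1.0, reset=0.0):
    v = 0.0
    spike_times = []
    for t, current in enumerate(inputs):
        v += dt * (-v / tau + current)   # leak plus input drive
        if v >= threshold:               # fire...
            spike_times.append(t)
            v = reset                    # ...and reset
    return spike_times

print(lif([0.06] * 200))   # steady drive -> a regular train of spikes
```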

Okay, so what is the difference between neurons of a conscious brain and an unconscious brain? What is it about the activity of some neurons that produces qualitative experiences? How do the contributions of all those neurons come together in the unified experience of being conscious? Is an organism simply having neurons sufficient for generating consciousness?

There have been great strides in computational theory over the past century or so, and the field of AI has produced a lot in a relatively short period of time. Even so, merely creating intelligent systems is not the same as producing conscious experience. Such processes are what underlie the most basic of biological systems, and we know that, in and of themselves, they are not sufficient to produce conscious experience.

I feel like you aren't clear on just how much greater a phenomenon can be than the sum of its parts. Let me make it clear just how much -- infinitely.

I feel really rotten pressing the issue like this, but I think that it is you who aren't appreciating the full depth of the problem. I don't think that it is insoluble [or at least I hope not], but there are a lot of unanswered questions that our present knowledge barely even begins to address. :-/
 
You guys continue to describe what programs currently do in the absence of motivational systems and complex self-referential pathways. We aren't talking about the current status of programming but future possibilities.
Where future = 1967. ;)

If you want to argue that a computer will never be able to "be conscious" then this requires a better and more inclusive argument. Searle's Chinese Room tried to do it, but it suffered from the limitation of concentrating on lower level functioning and not finding consciousness of language function there.
Right, fallacy of composition.

But I don't think that anyone working on this today thinks that consciousness of any lower level function exists precisely in that lower level pathway -- it is a higher level self-referential issue. Brains do it with neurons; mirror neurons are one clue to the possibilities. Why can't computers do it with silicon chips?
Well, they can.
 
Add a boolean variable to say "I've just said 'Hello world'", and does it know what it's done? No. It doesn't even know what the value of the variable is until it looks at it. When it looks away, it forgets again. And no matter how complicated the program might become, that's how it works. It never knows anything. It never remembers anything.

Well, there are programs that remember information.
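
A trivial example of that (Python; the file name is arbitrary): state written on one run is found again on the next, which is about as literal as "remembering" gets for a program.

```python
# Trivial persistence: the program records that it has said hello and
# finds that record again the next time it is run.  (Arbitrary file name.)
import json
import os

STATE_FILE = "greeting_state.json"

def load_state():
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"said_hello": False}

state = load_state()
if not state["said_hello"]:
    print("Hello world")
    state["said_hello"] = True
else:
    print("I already said hello on a previous run.")

with open(STATE_FILE, "w") as f:
    json.dump(state, f)
```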
 
Err...

A phrasing more suited to your outlook would be "he is not clear how much greater a phenomenon can be than the apparent sum of its parts."

Meaning an uneducated human observer would be likely to see only a small fraction of that sum.

Gotcha!

We wouldn't want people to think that we're using dualistic language, would we? ;)
 
What I don't understand is how they could appear to be conscious without actually being conscious?

Well, now you understand why we say that only behaviour counts in determining that. If you don't understand how a human could appear to be conscious without actually being so, why don't you apply it to computers?
 
"Society of Minds" (can't remember the author offhand but I recommend the book highly) gives a nice example of things that are more than the sum of their parts. A box composed of 6 pieces of wood has properties that none of the individual parts do - i.e. mouse-tightness. A mouse can circumvent a single piece of wood relatively easily, but cannot escape when they are configured as a box.

But the box itself is only a function of all six pieces of wood and their configuration. There is no "new" property that emerges.
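
The disagreement can at least be stated precisely in code (a toy, obviously): "mouse-tight" is a predicate of the configuration of panels, and it is not true of any individual panel.

```python
# Toy version of the six-panels-make-a-box example: "mouse_tight" is a
# property of the configuration, not of any single panel.
FACES = {"top", "bottom", "left", "right", "front", "back"}

def mouse_tight(panels):
    """True only when every face of the box is covered."""
    return set(panels) == FACES

print(mouse_tight(["bottom"]))        # False: a lone plank
print(mouse_tight(FACES - {"top"}))   # False: an open box
print(mouse_tight(FACES))             # True: the assembled box
```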
 
There is a difference between creating an automaton that said it was enjoying itself -- simple switch operation -- and a useful motivational system. I'm thinking more in terms of recreating the essence of a motivational system. We don't know enough of how these processes operate in the brain to simulate them very effectively, but simple weighting of inputs and outputs doesn't cover it.

As for feeling happiness or sadness, how could you possibly know but from shared behavior, same as with other humans? Think about what a motivational system is -- it has to show up somehow in order to motivate us. The feeling of happiness or sadness is the motivational system at play. So, I would have to say, yes, that a computer hooked up in the same way would feel it. It might not be exactly the same as our emotional system which occurs within a body, but it would essentially be the same thing.

As to a program doing what it was meant to do -- how does that differ from our nervous system? First, and I repeat, we cannot think in terms of single programs. To even approach what consciousness does requires us to think of several programs (or at least wildly divergent subroutines that often perform contradictory actions) that are controlled by one or more master programs that can refer both to themselves and to lower level systems -- as our mirror neurons respond both to internally generated movements and to watching others perform the same action. Second, barring the trivial issue of mistakes, our nervous systems simply carry out computational objectives. They do what they are meant to do. It seems to me that if you want to suggest otherwise then you have already put a homunculus in the brain and are necessarily stuck in dualism.

You guys continue to describe what programs currently do in the absence of motivational systems and complex self-referential pathways. We aren't talking about the current status of programming but future possibilities.

If you want to argue that a computer will never be able to "be conscious" then this requires a better and more inclusive argument. Searle's Chinese Room tried to do it, but it suffered from the limitation of concentrating on lower level functioning and not finding consciousness of language function there. But I don't think that anyone working on this today thinks that consciousness of any lower level function exists precisely in that lower level pathway -- it is a higher level self-referential issue. Brains do it with neurons; mirror neurons are one clue to the possibilities. Why can't computers do it with silicon chips?

The question about more complex programming systems is - can this multi-processor, multi-level, extremely complicated programming intelligence system be emulated - even in principle - as a series of single instructions in one big program? If it can, then that is what it is, and we have no reason to believe that such a program would develop into something else simply by complicating it.
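
A minimal Python sketch of the emulation point (illustrative only): several "concurrent" processes stepped, one instruction at a time, by a single sequential loop.

```python
# Emulating several "concurrent" processes as one big sequential program:
# a round-robin loop steps each process (here, a generator) in turn.
def counter(name, limit):
    for i in range(limit):
        yield f"{name}: step {i}"

processes = [counter("A", 3), counter("B", 2), counter("C", 3)]
while processes:
    proc = processes.pop(0)
    try:
        print(next(proc))
        processes.append(proc)    # still runnable; back of the queue
    except StopIteration:
        pass                      # this process has finished
```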
 
