The Hard Problem of Gravity

So there are things that can't be described mathematically?

I've noticed that a lot of replies that begin "So..." have nothing to do with the question to which they are replying. That appears to be the case here.

So... do you agree that if a program running on multiple processes can be emulated on a single processor to give the same results, it is essentially equivalent to a single processor program?

Can you name a few? Because that would be quite a move for your philosophical career, to invalidate all monisms. You might even get a few books out of it.
 
We can explain subjective experience -- it is simply what it is like to be something.

So where is the HPC in all of this? If "qualia" are merely subjective experience then they aren't a mystery at all.


So what is the mathematical description of subjective experience? Or is it magic?
 
westprog said:
This is absurdity. If I say there is no mathematical model that describes consciousness, then that's the same as believing in magic? So if physicists claim that there's no reconciliation at present between quantum theory and the General Theory of Relativity, they can be classed with Harry Potter and Gandalf?

Reconciliation between theories is not the same thing, though. The question is more like whether consciousness has been sufficiently described in the first place.

Which sort of leads to a more fundamental question: How can we tell either way? We only have access to that which we are "conscious" of, i.e. behaviour.
 
And I'm saying such a thing must be acausal.



They are equivalent things. Having a reason for doing something is much the same as saying that the reasons for doing it are why I did it.

No. A rationale is the "known" reason for doing something. People frequently are not aware of why they prefer what they prefer. My favorite color is blue. Why? I've no clue why I like blue best. I just do. That doesn't mean there is no cause, it just means that I'm not aware of what that cause is. Perhaps it's coded into my genes. Perhaps blue got associated with other nice things when I was a very small child. Whatever the cause, I have no rationale for why I prefer it.
 
Well, I didn't know your definition of "conscious" was so broad.

We can explain waking behavior. We can explain subjective experience -- it is simply what it is like to be something.

So where is the HPC in all of this? If "qualia" are merely subjective experience then they aren't a mystery at all.

The mystery isn't whether or not there are qualia. The question is why there are qualitative experiences at all and what physical principles give rise to specific qualitative states.

Alright. But if you are going to define consciousness such that such a creature is conscious -- even though it has zero knowledge of itself -- then there isn't anything to pursue. Your definition is equivalent to mere existence, and thus is utterly useless.

To once again use the mass analogy: An object can have mass but, under certain conditions, it may not register a weight. A person on earth may weigh 198 lbs.; that same person in a space station may weigh nothing. Regardless of their context they still have mass. What I'm saying is that consciousness has the same relation to knowledge as mass has to weight.


You are dead wrong on both points.

Pixy, for example, has a very simple operational definition of consciousness. You disagree with it. So what. That doesn't mean he doesn't have a definition.

And I am very capable of describing how computational functions translate into conscious thought -- under my definition of consciousness.

So basically, you're ignoring the original problem, giving a much simpler problem its name, and then, after you solve the simpler problem you claim to have solved the original. Like I've emphasized earlier, you won't make any headway in solving the problem if you keep averting your eyes from it.



So we come back to the HPC, apparently -- Pixy and I are entirely able to describe what we are talking about while you sit there and shake your head and say "No, you are still missing something. What it is, I cannot put my finger on, but it is something."


I've stated to you exactly what it is you're missing. You just choose to believe that co-opting the label 'consciousness' is the same as solving it.


AkuManiMani said:
Okay, so what is the difference between neurons of a conscious brain and an unconscious brain?

The flow of information between them.

AkuManiMani said:
What is it about the activity of some neurons that produces qualitative experiences?

Self reference and reasoning.

AkuManiMani said:
How do the contributions of all those neurons come together in the unified experience of being conscious?

Any system that references itself and reasons can be said to be conscious, under various definitions of "conscious."

There is information flow between neurons whether an individual is conscious or not. There is self-referential feedback not only between all somatic cells but within them, via autocrine signaling. These kinds of computational processes go on all the time in every living organism and they are not sufficient, in and of themselves, to produce the subjective experience of consciousness.

By your definition even a person in a dreamless, non-REM, sleep is conscious. In such states a person is clearly unconscious even tho, biologically, they meet your criteria for what constitutes consciousness. For this reason, it is obvious that your definition of consciousness, while relatively simple, is inaccurate.

If you want to know how neurons come together to form human consciousness, be prepared to spend a few years with your head in books -- and that is just to learn what we haven't figured out yet. Talk to Nick227, he seems to be an expert on human consciousness theories.

I'd be glad to talk more on this subject with Nick. I do not think consciousness is a human-specific phenomenon, but it's clear that you and I aren't referring to the same thing when we speak of 'consciousness'.
 
By your definition even a person in a dreamless, non-REM, sleep is conscious. In such states a person is clearly unconscious even tho, biologically, they meet your criteria for what constitutes consciousness. For this reason, it is obvious that your definition of consciousness, while relatively simple, is inaccurate.

I agree with this. Until the Rocket, PM, et al have a definition for consciousness which distinguishes between an awake and aware human and one in a coma, I don't think they have a sufficient definition for consciousness.
 
I had such hopes. I'd read a sentence that made perfect sense, and expect to see the next following sentence... but it would veer off in an unexpected direction. A couple examples:

The mystery isn't whether or not there are qualia. The question is why there are qualitative experiences at all and what physical principles give rise to specific qualitative states.
First sentence made sense -- second sentence should have said "They don't exist." Instead, we have a sentence assuming the very thing that is under investigation.
There is information flow between neurons whether an individual is conscious or not. There is self-referential feedback not only between all somatic cells but within them, via autocrine signaling. These kinds of computational processes go on all the time in every living organism and they are not sufficient, in and of themselves, to produce the subjective experience of consciousness.
Agreed with everything, with the exception of "the subjective experience of", which is utterly unnecessary.
By your definition even a person in a dreamless, non-REM, sleep is conscious. In such states a person is clearly unconscious even tho, biologically, they meet your criteria for what constitutes consciousness. For this reason, it is obvious that your definition of consciousness, while relatively simple, is inaccurate.
Ah, but the definition of consciousness as a fuzzy set of public behaviors is once more precisely what you are using. And is the only definition that is available to you. And is considerably simpler, and yet more useful and explanatory, than any definition that hinges on the unobservable.

(BTW, even in the person you describe, there are processes that absolutely are conscious, in a meaningful way, even if the person is unconscious. For instance, that person has not lost bladder control. Someone knocked unconscious may well lose bladder control, but not heartbeat and respiration, which do have feedback processes. Someone who loses those is dead.)
 
I agree with this. Until the Rocket, PM, et al have a definition for consciousness which distinguishes between an awake and aware human and one in a coma, I don't think they have a sufficient definition for consciousness.

We (our language community) are using "conscious" for several related but separate things. A conscious person and a conscious process are both meaningful terms, but the meaning is not identical. A conscious person is defined by engaging in a set of observable behaviors (whether you agree with this or not, every example here has followed this definition).
 
We (our language community) are using "conscious" for several related but separate things. A conscious person and a conscious process are both meaningful terms, but the meaning is not identical. A conscious person is defined by engaging in a set of observable behaviors (whether you agree with this or not, every example here has followed this definition).
Actually I do agree with that. And I think p-zombies are square circles. But I don't think that, as PM claims, self-referential processing is sufficient for consciousness. I think it is necessary but not sufficient as the human in the coma illustrates. In this discussion, AMM, Westprog and I have all been using the term in the sense of a conscious person (or animal) while the AI folks seem to be insisting that a conscious process is equivalent to that.
 
Actually I do agree with that. And I think p-zombies are square circles. But I don't think that, as PM claims, self-referential processing is sufficient for consciousness. I think it is necessary but not sufficient as the human in the coma illustrates. In this discussion, AMM, Westprog and I have all been using the term in the sense of a conscious person (or animal) while the AI folks seem to be insisting that a conscious process is equivalent to that.
And, I'll note, none of you is able to define the term you keep using.

I have a definition. It doesn't fit with the way you use the term, because you (and AMM, and Westprog) use it for several different things. That's not my fault, and it's not my problem, but it is why you are making no progress.

Define - operationally - what you mean by consciousness, and then we can have a fruitful discussion. Until you do that - until you have an idea of what you mean - you won't get anywhere.
 
(BTW, even in the person you describe, there are processes that absolutely are conscious, in a meaningful way, even if the person is unconscious. For instance, that person has not lost bladder control. Someone knocked unconscious may well lose bladder control, but not heartbeat and respiration, which do have feedback processes. Someone who loses those is dead.)
Exactly. Consciousness isn't just feedback; it's feedback from the information process to itself. But that's certainly going on in the autonomic nervous system, and indeed throughout the brain. So the consciousness that you think of as you is only one of a great number that exist and interact inside your head.
 
We [by which I mean people in general] can't define what consciousness is "operationally", because we don't know how it operates. The entire problem is that we don't know how consciousness operates, so creating an "operating" definition is far too premature.

Like AMM said, you're defining yourself out of a problem. You're not solving it, not at all.

[By the way - no comment on the fact that the Game of Life, which you hold to be an example of an imaginative program, robotically produces the same output for the same input, every single time?]
 
This is absurdity. If I say there is no mathematical model that describes consciousness, then that's the same as believing in magic?

No.

Saying "we don't have a mathematical model that fully describes consciousness" is very different from saying "there is no mathematical model that fully describes consciousness."

The former implies that we simply haven't figured one out, the latter implies that we will never figure one out.

If you actually meant the former then I apologize... but I don't think you did.
 
I've noticed that a lot of replies that begin "So..." have nothing to do with the question to which they are replying. That appears to be the case here.

So... do you agree that if a program running on multiple processes can be emulated on a single processor to give the same results, it is essentially equivalent to a single processor program?

Absolutely.

And that reply had everything to do with your question. In mathematics there is no concept of things occurring "in parallel." In this context, every nondeterministic finite automaton (state machine) can be converted into a deterministic finite automaton (state machine) that recognizes the same language.
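
To make that conversion concrete, here is a minimal sketch of the standard subset construction in Python (the dictionary encoding of the transition function and the names are just illustrative choices; epsilon-transitions are left out for brevity):

[CODE]
from collections import deque

def nfa_to_dfa(nfa_delta, start, accepting, alphabet):
    # Subset construction: each DFA state is a frozenset of NFA states.
    dfa_start = frozenset([start])
    dfa_delta, dfa_accepting = {}, set()
    queue, seen = deque([dfa_start]), {dfa_start}
    while queue:
        state = queue.popleft()
        if state & accepting:   # any accepting NFA state makes the DFA state accepting
            dfa_accepting.add(state)
        for sym in alphabet:
            # Union of everywhere the NFA could go from here on this symbol.
            nxt = frozenset(t for s in state for t in nfa_delta.get((s, sym), ()))
            dfa_delta[(state, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return dfa_delta, dfa_start, dfa_accepting

# Toy NFA over {0, 1} that accepts strings ending in "01":
nfa = {('q0', '0'): {'q0', 'q1'}, ('q0', '1'): {'q0'}, ('q1', '1'): {'q2'}}
dfa, s0, acc = nfa_to_dfa(nfa, 'q0', {'q2'}, '01')
[/CODE]

The DFA it builds recognizes exactly the same language as the NFA, and nothing in it ever happens "in parallel" -- the apparent parallelism of the NFA is folded into the set-valued states.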

In other words, if consciousness is the result of computation, then it can occur just the same by simulating neurons one at a time using a monastery full of monks writing on paper with feather-quill pens. So could any other process we know of, by the way. That is a fact.
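
And the single-processor point is easy to see in miniature. Here is a rough sketch (my own toy example, not a model of neurons): a few step-wise "processes", written as Python generators, interleaved one step at a time on a single thread. Because each step is deterministic and nothing shares mutable state, the serial run produces exactly the results the processes would produce running side by side:

[CODE]
def run_sequentially(processes):
    # Round-robin scheduler: advance each process one step at a time on a
    # single thread of control until every process has finished.
    results, pending = {}, dict(enumerate(processes))
    while pending:
        for pid, proc in list(pending.items()):
            try:
                next(proc)                       # one step of this process
            except StopIteration as finished:
                results[pid] = finished.value    # process completed
                del pending[pid]
    return [results[pid] for pid in sorted(results)]

def worker(n):
    # A toy deterministic process: sum the first n squares, one step per yield.
    total = 0
    for i in range(n):
        total += i * i
        yield                                    # hand control back to the scheduler
    return total

print(run_sequentially([worker(5), worker(10)]))  # [30, 285], every single run
[/CODE]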

Unless you think some things can't be described by mathematics...
 
[By the way - no comment on the fact that the Game of Life, which you hold to be an example of an imaginative program, robotically produces the same output for the same input, every single time?]

So the exact same environmental conditions select for the exact same behavior. Ok. And similar conditions select for similar behaviors. And systematic differences in conditions (sometimes very subtle differences) result in systematic differences in behaviors (sometimes surprisingly big differences). It's a fascinating science, actually.

By virtue of the fact of individual and unique points of view (literally, no two people see through the same set of eyes, nor have the same set of social interactions), the Game of Life objection does not apply to living organisms.
 
By virtue of the fact of individual and unique points of view (literally, no two people see through the same set of eyes, nor have the same set of social interactions), the Game of Life objection does not apply to living organisms.

Well, that is entirely my point, really.

I stated that "computers only do what they're told to do". As a counter-argument, Pixy suggested that Game of Life proved that computers could go beyond their deterministic programming, because the resultant patterns were not programmed into the source-code. I'd argue that the opposite is true - the fact that these patterns are predictable and deterministic, given the limited rule-set and initial breeder pattern, shows that it's not a great example of a computer program doing something it was not programmed to do. As a counter-argument, it fails - because the Game of Life does exactly and only what it's supposed to do, in a deterministic fashion, every single time. If consciousness does this, it does not do so self-evidently, which means further discussion and thought is required.

If there are examples of such programs (I don't know), I'd be interested to learn of them.
 
Well, that is entirely my point, really.

I stated that "computers only do what they're told to do". As a counter-argument, Pixy suggested that Game of Life proved that computers could go beyond their deterministic programming, because the resultant patterns were not programmed into the source-code. I'd argue that the opposite is true - the fact that these patterns are predictable and deterministic, given the limited rule-set and initial breeder pattern, shows that it's not a great example of a computer program doing something it was not programmed to do. As a counter-argument, it fails - because the Game of Life does exactly and only what it's supposed to do, in a deterministic fashion, every single time. If consciousness does this, it does not do so self-evidently, which means further discussion and thought is required.

If there are examples of such programs (I don't know), I'd be interested to learn of them.

It would be quite possible to sit down with pencil and paper and work out the outcome for every step in Life. It would be possible to produce exactly the same output. As Life is a fairly simple program, this wouldn't be too difficult. For other programs, it would be more difficult, but not impossible in principle.
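
For anyone who would rather not do the pencil-and-paper version by hand, here is a minimal sketch of one Life generation in Python (the set-of-coordinates representation is just one convenient encoding of the grid):

[CODE]
from collections import Counter

def life_step(live_cells):
    # One generation of Conway's Life. live_cells is a set of (x, y) pairs.
    # A pure function of its input: the same pattern always yields the same
    # successor, which is exactly what makes the hand calculation possible.
    neighbours = Counter(
        (x + dx, y + dy)
        for x, y in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live_cells)}

# The "blinker" oscillates with period 2, identically on every run:
blinker = {(0, 1), (1, 1), (2, 1)}
print(life_step(blinker))              # {(1, 0), (1, 1), (1, 2)}
print(life_step(life_step(blinker)))   # back to the original three cells
[/CODE]

Run it a million times from the same pattern and you get the same history a million times -- the whole thing is a lookup you could do by hand, just slowly.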
 
Absolutely.

And that reply had everything to do with your question. In mathematics there is no concept of things occurring "in parallel." In this context, every nondeterministic finite automaton (state machine) can be converted into a deterministic finite automaton (state machine) that recognizes the same language.

In other words, if consciousness is the result of computation, then it can occur just the same by simulating neurons one at a time using a monastery full of monks writing on paper with feather-quill pens. So could any other process we know of, by the way. That is a fact.

Unless you think some things can't be described by mathematics...

So we are left with the simple assertion that consciousness is produced by computation, and that if this is the case, then any way of performing a computation will generate consciousness in the same way, whether it's a high speed computer or rolling balls down tubes.

One problem here is that we need to describe real things both in terms of mathematics and physics. It isn't possible to describe real-world things in terms of mathematics alone. How would "computation" be described in physical terms?
 
No.

Saying "we don't have a mathematical model that fully describes consciousness" is very different from saying "there is no mathematical model that fully describes consciousness."

The former implies that we simply haven't figured one out, the latter implies that we will never figure one out.

If you actually meant the former then I apologize... but I don't think you did.


If I mean to say that we will never figure one out, then I'll say so. It's possible that we will figure one out. It's possible that there is a model but that we'll never figure it out. It's possible that there is a model which it is actually impossible for us to figure out. It's possible that there is no model.

Is that clear enough?
 
