The Hard Problem of Gravity

Irrelevant. Dodger didn't say the apple was spherical, but round. The exact shape of a particular apple can be described mathematically.

So we have three things - spherical (a mathematical term), round, and the particular shape of one particular apple, as mathematically described. To what degree of precision? For some purposes, a cube big enough to enclose the apple would be good enough. Can the apple be described with total precision? Not in this universe.
 
Would you argue that if someone created a program that was more 'complex' than a human it should be entitled to more legal rights than an actual human?
What more legal rights?

Is complexity your criterion for ethics?
It's part of everyone's criteria for ethics.

Not only is it not unethical to kill, say, a brick, we don't even consider the question meaningful.
 
I explained the two potential meanings. I didn't just trot out "define terms," like some kind of bloody chatbot. If you understand the question, why didn't you just answer it? I don't see what the problem is.
So which of those meanings did you mean? I can't answer the question if you refuse to say.

I don't want to spend my day coming up with definitions for terms as basic as "aware," no doubt only to have those definitions questioned.
Tough. If you won't define your terms, you can't have a meaningful conversation. This is true no matter what the subject of discussion.

I don't see the point.
You don't see the point of definitions?

I submit that in this context the meaning is clear.
Well, then, it's easy for you to tell me what you mean, isn't it?

You said you understood the question.
Grammatically, sure. It's easy to parse.

Semantically? It's your question; you have to provide your definitions.

Can you see a clear alternative meaning?
Not one that makes any sense. But I can't rely on you making sense. That's why I ask you to define your terms.

If not, why not just answer it?
Why not just define your terms?

If you mean the one Hof writes with Wino using a pseudonym, I flicked through it fairly slowly.
This one.

I can regard you as a self. I can regard myself as a self.
So?

If shrd isn't programmed to create internal representations of self, why would it happen?
That's exactly what Gödel, Escher, Bach is about.
 
I listened. I mean it's a little complex because he's also trying to distinguish love from emotions, but he does say that you can't really wake up in the morning and feel love.
That isn't what he says. Listen again.

It's interesting, because to me love (in the sense Wolfe doesn't really believe in) does tend to occur as self-awareness deepens, along with a decreasing tendency to identify with thought.
What is any of that even supposed to mean?

This is another of those things that is going to be problematic for Strong AI at some point, I expect.
You expect a lot of things to be problematic for "Strong AI". So far, you have provided no justification for any of these expectations. Or, for the most part, even definitions of what these expectations mean.
 
We actually don't know anywhere near enough about human feelings to give them to machines or to know if that is possible.
Church-Turing thesis.

If you take a behaviourist perspective that feelings are the executors of evolutionary logic, then it is clearly possible to programme the behaviour of, say, empathy or sorrow or happiness into a robot. But it will almost certainly be an f-zombie.
Why do you think that f-zombies are any more logically coherent a concept than p-zombies? If a robot acts sad, and thinks it's sad, in what way is it not sad?

For sure you can programme a computer with the behaviour of empathy.
Or it can demonstrate empathy without being programmed for it.

But I find it grossly unlikely (to say the least) that it experiences the feelings involved.
Yeah, you keep saying things like this.

Why?

Of course it's hard to definitively prove either way, given that a human cannot know what a computer feels.
Sure we can.

You don't agree?
Most of your assertions aren't even logically coherent.
 
If the two mathematicians ask different questions then they will get different answers. The ambiguity lies in the language, not the mathematics. Given the same axioms, and the same relationships, then the same results will emerge. Always, without fail.

That 1+1=2 is far more fundamental than the existence of the moon. We can be pretty sure that the moon exists. We can be absolutely certain that 1+1=2.

I agree, but again that is only in external reference. Having the same axioms is crucial.
 
Depends what you mean by "in reality" I guess.

So, I take it, not all mathematical statements are "in reality" in any meaningful sense?
Correct.
rocketdodger said:
Yes, that is exactly what I mean.
Alright--that's fair. There are still a few holes to patch up, however. I can describe, for example, a number bigger than the number of particles in the universe... that's not a terribly big issue. But I can also describe things such as "the smallest counterexample to the Goldbach conjecture". There may or may not actually be one, but I can describe it. More simply, I can describe fairies and square circles.

So, even though I can describe things that may be isomorphic to mathematical entities, there's something about the description that is in itself not quite the whole story. In order for "the smallest counterexample to the Goldbach conjecture" to actually be naming something, there has to be a particular relationship--a logical extension of relationships on the integers. If the Goldbach conjecture is true, then "the smallest counterexample ..." is but a very complicated "square circle".

It's not whether or not I can describe the entity that makes it mathematical--it's whether or not there is such a logical extension to the relationships I'm talking about.
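
For concreteness, here is a rough way to write the two descriptions down (GC and SCGC as shorthand; this is a sketch, not anything canonical):

```latex
% Goldbach conjecture (GC), in its usual form:
\mathrm{GC}:\quad \forall n \in \mathbb{N},\; \big(n > 2 \;\wedge\; 2 \mid n\big) \;\rightarrow\; \exists\, p, q \text{ prime}:\; n = p + q

% "The smallest counterexample to the Goldbach conjecture" would then be
\mathrm{SCGC} := \min \{\, n \in \mathbb{N} : n > 2,\; 2 \mid n,\; \neg\exists\, p, q \text{ prime}:\; n = p + q \,\}

% SCGC names an integer if and only if GC is false; if GC is true, the set
% above is empty and the description, like "square circle", names nothing.
```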
 
porch's roundup

I followed this thread from the start. I read every post in this thread up until around page 40, as it was being broadcast. After that I checked in less and less frequently, and skipped over more and more segments of posts. I'd just like to share a few thoughts on the thread as a whole.

Well, what an epic ****storm. As terrible and painful as it was at times, it was also great. I think I can say I learned a fair deal, so I'd like to thank all the actors involved.

As far as the arguments go - here is the overarching storyline as I see it. (If I can be so crass in my generalizations as to divide the debate into two teams - Team HPC vs. Team AI?):

-Team AI asserts that there is no HPC.
-They further assert that consciousness is self-referential information processing.
-Team HPC says that AI's theory is wrong by virtue of being incomplete.
-Team HPC fails to coherently define what is alleged to be missing in the first place.
-To me, this consistent failure only serves to reinforce the point made by the OP - the HPC is no more meaningful than the HPG.

As I said, I ended up not reading a lot of posts in the second half, so fill me in if I'm missing anything major. Team HPC, sorry to come down on you so hard, but I'm trying to call it like I see it. You know, I know squat about programming and math and neurology, so I can't say for sure that Team AI's story checks out. But they've given me something to go on, something that I can continue to test against reality. Because they've described something. Team HPC, you haven't described anything but doubt to me. It is NOT about petty semantics, it's about knowledge that you can use.

Anyhow, a hard fought battle by all, kept fairly clean despite the normal emotional flare-ups and the expected language frustrations. The main topic is now more than played out for me. Wake me up if anything happens. Currently the exchanges on math and stuff seem interesting.

thanks again, porch
 
It's not whether or not I can describe the entity that makes it mathematical--it's whether or not there is such a logical extension to the relationships I'm talking about.

Wouldn't such a "logical extension" constitute an isomorphism?

I mean, if the extension of the relationships is indeed logical, then it can be made using some algorithm. And if that is the case, then it is isomorphic to some mathematical statement.

In other words, since the Goldbach Conjecture scans in formal English, we know it is isomorphic to some mathematical statement.
 
I mean, if the extension of the relationships is indeed logical, then it can be made using some algorithm. And if that is the case, then it is isomorphic to some mathematical statement.

In other words, since the Goldbach Conjecture scans in formal English, we know it is isomorphic to some mathematical statement.
The GC is already a statement. But if the GC is true, there's no smallest counterexample (call it SCGC). The question isn't whether or not GC is mathematical (GC itself falls under the umbrella of mathematical concerns)--it's whether or not there is such a "thing" as SCGC--that is, whether SCGC conveys a real relationship, or is simply a contradictory concept.

If you interpret this in general--there's absolutely no guarantee that GC, or some GC-ish statement, can be proven to be true if it is true. As such, there's nothing to guarantee that SCGC-like relationships, given that they don't hold, can be shown not to hold with an algorithm that runs in finite time.

Now, you can describe the SCGC--in fact, merely by talking about it, we are describing it. But the description of the number is not the interesting thing with respect to mathematics. The interesting thing is whether or not there is such a relationship--whether there is a smallest counterexample. SCGC conveys something "real" if and only if (and in the sense that and only in the sense that) the GC is false.

So it doesn't help us at all to be able to describe the SCGC. What we want to know, in terms of mathematics, is whether or not the relationship holds. We want to know if there's a finite counterexample.

The problem is, there are infinitely many candidates, and nothing guarantees that if GC is true, we should be able, even in principle, to prove it. Now if GC is false, we're lucky (at least in principle). And if it's true, we might be able to prove it. But it's not guaranteed.

Even the fact that it's a logical extension of relationships isn't sufficient to give us this guarantee.
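
To make the "algorithm that runs in finite time" point concrete, here is a minimal Python sketch of the brute-force search: it halts, returning SCGC, if and only if GC is false; if GC is true, it simply runs forever, and being able to describe SCGC does nothing to change that.

```python
def _is_prime(k: int) -> bool:
    """Trial-division primality test; fine for a sketch."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

def _is_goldbach_sum(n: int) -> bool:
    """True if the even number n can be written as a sum of two primes."""
    return any(_is_prime(p) and _is_prime(n - p) for p in range(2, n // 2 + 1))

def smallest_goldbach_counterexample() -> int:
    """Search 4, 6, 8, ... for the first even number that is not a sum of
    two primes.  Halts (returning SCGC) iff the Goldbach conjecture is
    false; if the conjecture is true, this loop never terminates."""
    n = 4
    while True:
        if not _is_goldbach_sum(n):
            return n
        n += 2
```

And, as above, nothing about being able to describe SCGC guarantees that a cleverer procedure exists, one that terminates either way.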
 
So which of those meanings did you mean? I can't answer the question if you refuse to say.

What happened, as I recall, was....

* I asked you if you were "aware of being aware."
* You asked me to define "aware."
* I said it was unnecessary, as the context was clear, and accused you of avoidance.
* You said I'd just done the same thing in an earlier post.
* I pointed out that in that instance I had articulated the two possible meanings and not simply asked for a definition of terms. You had already stated that you understood the question, and I asked you why, then, you did not answer it.

Can you start to understand this basic principle in human communication? If the question has a single clear meaning to you, which you stated it did, then you answer it. There is no need to "just check" that there might be some other meaning that the other has in mind. If the question is clear...it's clear.

Please, go back through the dialogue, because you might learn something about how you communicate and why you are getting feedback that it is often perceived as being inadequate.

For me, it is grossly unsatisfying to communicate with you because you seem incapable of holding the context of the communication in your brain for sufficient time for one subject to be discussed meaningfully.

As it happens, I had already chosen which of the two meanings I'd previously offered...my question was one of the two. I pointed out that being "aware one is aware" is not the same as being "aware of being aware."

Tough. If you won't define your terms, you can't have a meaningful conversation. This is true no matter what the subject of discussion.

This is precisely what I mean. You circumvent self-examination by constantly shifting context and applying rules no one, in normal conversation, would. I can only imagine that you do this consciously, as avoidance; unconsciously, through fear; or that your brain simply cannot hold a context in mind for the same length of time as brains usually can.



Thanks. I'll check it out.

That's exactly what Gödel, Escher, Bach is about.

Good book. I'm enjoying reading it.

Nick
 
I agree, but again that is only in external reference. Having the same axioms is crucial.

But maths is just axioms and relationships. Different axioms, different results. Same axioms, different symbols, same results. Same axioms, different cultural modes of expression, same results.
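
A small worked example of "same axioms, same results", assuming the usual Peano-style definitions with 1 = s(0) and 2 = s(s(0)):

```latex
% Addition is defined by:  a + 0 = a   and   a + s(b) = s(a + b).
% Then, whatever symbols or cultural notation you happen to use:
1 + 1 \;=\; s(0) + s(0) \;=\; s\big(s(0) + 0\big) \;=\; s\big(s(0)\big) \;=\; 2
```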
 
What the hell else would you expect?

Why is it surprising that the same thing is the same thing? I want to know.
 
Why do you think that f-zombies are any more logically coherent a concept than p-zombies?

Interesting question. For a start, there are human precedents. A human may develop the behaviour of empathy but not have conscious access to the feelings which underlie it. In response to visual and emotive signals we can learn to alter the tone of our voice and our body and face posture to try and "give empathy." Those working in counselling professions, for example, do this all the time. They may or may not be capable of consciously accessing the depth of feeling the other is experiencing, but either way they can adopt the behaviour of empathy.

The human being has potential conscious access to a vast inner sensorium of feeling, and whilst AFAIK neurologists haven't tracked down just how this happens in the body and brain, it is, I submit, clearly complex, may well depend on being made of flesh and blood, and it is reasonable to assume that a computer isn't doing it.

Personally, being quite a fan of Strong AI, I'm happy to say that computers may feel. Emotional response may be innate to processing, I don't know.

If a robot acts sad, and thinks it's sad, in what way is it not sad?

It may well not feel sad. It may have the behaviour of sadness. It may have the thought "I am sad." This is imo insufficient to conclude that it feels sadness.

Nick
 
I think in reality it's more likely that if, in say 50 years' time, the scientists still really aren't getting there (which I personally don't think will happen), then they will be forced to change tack. Objectivity is all well and good. But if perchance it isn't getting the job done, then things inevitably change.

Huh?

Are you claiming that objectivity may be insufficient to explain consciousness? So what do we need? Go back to trusting our gut feelings? Yeah, that sure moved humanity forward in the past!
 
