The Hard Problem of Gravity

I think one has to be careful when intellectualising our gut responses in situations like this. There can be a wisdom to feelings that transcends the intellect. This aside, we can certainly develop empathy towards dogs easily, and it can "feel wrong" that they should be killed.

I suppose you're right. Even still, I can't help but try to make logical sense of everything xD


Prior to Strong AI being recognised as true, someone somewhere is going to have to think of a whole heap of clever answers to the million ethical questions that will no doubt arise in its wake.

Nick

eta: I mean, it's a bit crazy all of it really, if you ask me. Anyway, even if the foetus feels pain it doesn't have a self to place the pain onto. I don't know, it seems to me that only in America are people so mad as to want to get lawyers for embryos. No doubt I'll get hassle for that one. But I do think it would be more reasonable to put energy into stopping wars first.

Yea, my bad for even bringing it up. We don't want this to degenerate into one of those threads >_<
 
Yes. This is what makes it formally undecidable. This is the reduction of deciding GC to the Halting Problem.
No it isn't. The Halting Problem is one of finding a machine that can tell, in a finite number of steps, whether or not any given machine will halt. There's no problem with building a machine, for example, that tells that particular machines will not halt, or that particular machines will. In fact, it's quite easy to build both--and given that you're allowed to answer "I don't know", you can even build one that's always correct yet still says things other than "I don't know".
If you could write a function to know whether your machine would halt or not, we could decide GC.
Right. But what has to be true for GC to have no truth value is that there'd be no answer to whether or not the machine would halt, not that you can't tell. I can confirm GC up to any arbitrary number--I can thus confirm it for an infinite number of numbers (since there are an infinite number of finite arbitrary numbers I can confirm it for).
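To make "confirm GC up to an arbitrary number" concrete, here is a minimal brute-force sketch (the names are illustrative, not anyone's actual program):
Code:
# Check Goldbach's Conjecture for every even number up to N.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds_up_to(N):
    # every even n in [4, N] must be a sum of two primes
    return all(any(is_prime(p) and is_prime(n - p) for p in range(2, n))
               for n in range(4, N + 1, 2))

print(goldbach_holds_up_to(1000))  # a finite check; always terminates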
Since we can't [haven't?], we can't [is that what you mean?], and your proposed machine doesn't change that.
Nope, doesn't follow.
You do not know if it will halt; you therefore do not know how many steps it would take to find a counterexample to the GC.
Doesn't matter. I know it's finite, if GC is false. It's only when GC is true that I can't tell.
and if the GC is true, you cannot compute that it is true.
Exactly. But then it's true!
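The machine at issue can be sketched in a few lines (reusing is_prime from the sketch above): it halts exactly when GC is false, which is why its halting behaviour encodes GC's truth value.
Code:
from itertools import count

# Halts iff GC is false. If GC is true, this loops forever --
# and no finite amount of running it ever tells you that it will.
def goldbach_searcher():
    for n in count(4, 2):  # 4, 6, 8, ... every even number
        if not any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
            return n       # found a counterexample: GC is false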

Remember--you were arguing that it doesn't have a definite truth value. Your proof outline is severely insufficient if you have to assume it's true to show that I can't prove it. The way it would have to work is that you would have to show that it could be true, or could be false, if I couldn't prove it. Then it would have no truth value.

Neither Godel's Incompleteness Theorems nor the Halting Problem have anything to do with my machine. Neither of them place any restrictions on particular machines--neither of them refute my argument that I can confirm GC up to any arbitrary finite number (of which there are an infinite number).

ETA: Perhaps it would help if you told me what Godel's First Incompleteness Theorem actually states.
 
When I say that computation is not a physical theory, I just mean that it's not a theory in physics. I hope that is clear enough.

Ideas in economics and poems and favourite chat-up lines can all be modelled in the physical world, but that doesn't make them part of physics.

So what happens when, for instance, a state machine as described by a computational/mathematical model is instantiated with physical electronic components?

I fail to see how such a thing is any different than, for example, instantiating certain mathematical models of physics with an apple falling from a tree.

Both models describe the behavior of a physical thing. Why is one "physical" and the other not?
 
I don't believe anyone much cares about complexity of consciousness. I identify myself as a human being, experience empathetic response with other humans, and it becomes hard to hurt them. I create social rules to protect myself and others. Somewhere way down the line complexity could be argued as a factor but I figure it's a long way down. Humans just don't think about this stuff so much. Our rules come more from the gut. Certainly when we look at other species a far more important factor than complexity is how fluffy they are.

Nick

:teddybear: :bearbiggrin:

Yes, "fluffy" is what counts at the end of the day.
 
If there were an artificial construct created with operational complexity comparable to, or greater than, that of a human, would you consider it to have 'greater' value than the life of a human?
That depends.

For example, would it be justified to kill, or otherwise harm, a human to prevent harm from being done to said construct merely on the basis of its complexity?
Merely on the basis of complexity? No. Do we take complexity into account in such decisions? Certainly.

Really? I thought the capacity to experience suffering was the basis for giving an entity ethical consideration.
It's one consideration. But very simple systems have no capacity to experience suffering, so it's really just a loose measure of complexity.

Of course. Because a brick is not alive and, as far as can be told, bricks cannot experience anything, let alone suffering.
Right. They are insufficiently complex.

The thing is, if one were to arrange bricks or their materials in a more 'complex' way, this would not change.
Fallacy of division. Quite a staggering example, in fact, since you make the assertion not only for bricks but for their materials.

Clearly there is more to ethical concerns than mere complexity.
Clearly you haven't thought this through.
 
There's no problem with building a machine, for example, that tells that particular machines will not halt, or that particular machines will.

How do you know?

Right. But what has to be true for GC to have no truth value is that there'd be no answer to whether or not the machine would halt, not that you can't tell.

The point is: these are equivalent statements.

If you cannot compute it then - computationally speaking - there is no answer.

I can confirm GC up to any arbitrary number--

I can confirm the halting problem up to any arbitrary number too. The finite cases are not the important thing here.

I can thus confirm it for an infinite number of numbers (since there are an infinite number of finite arbitrary numbers I can confirm it for).

Okay sure. You do that an infinite number of times and get back to me when you're done.

Doesn't matter. I know it's finite, if GC is false. It's only when GC is true that I can't tell.

This is the same as the halting problem:

I know if it halts it halts. It's only when it doesn't halt that I can't tell.
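That asymmetry is just semi-decidability: simulation can always confirm halting, never non-halting. A sketch, where step is a hypothetical one-step simulator:
Code:
# Halting is semi-decidable. 'step' (hypothetical) advances a machine by one
# step and returns None once the machine has halted.
def confirm_halts(step, state):
    while state is not None:
        state = step(state)
    return True  # reached only if the machine halts; loops forever otherwise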

Exactly. But then it's true!

But you cannot compute that! Hence it's UNCOMPUTABLE!

Remember--you were arguing that it doesn't have a definite truth value.

Actually I'm arguing that if you're going to try a brute-force search then you're pretty much screwed unless you expect the theorem to be false, because you can't tell that it's not undecidable.

Neither of them place any restrictions on particular machines--neither of them refute my argument that I can confirm GC up to any arbitrary finite number (of which there are an infinite number).

I never said that this wasn't the case - why would I? It applies to anything finite - you just run the appropriate number of steps.

It's proving the infinite that's the issue: you can't just say, "compute an infinite number of steps to get the answer." That's the whole damn point about what is or isn't computable.

Perhaps it would help if you told me what Godel's First Incompleteness Theorem actually states.

Basically: you cannot be complete and consistent - in a sufficiently powerful formal language there will be some statement you can make whose truth value cannot be decided in that language.
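For reference, a standard textbook formulation runs roughly as follows:
Code:
\text{For any consistent, effectively axiomatizable theory } T
\text{ that interprets enough arithmetic, there is a sentence } G_T
\text{ such that } T \nvdash G_T \text{ and } T \nvdash \neg G_T.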
 
That depends.


Merely on the basis of complexity? No. Do we take complexity into account in such decisions? Certainly.

Whilst humans are undoubtedly complex beings, large numbers of them seem to live lives that could probably be replicated (in terms of the number of processing transactions) on one of those early 386 computers. So, is actual complexity the factor here, or how much actual processing is going on?

Nick
 
:teddybear: :bearbiggrin:

Yes, "fluffy" is what counts at the end of the day.

As we push on into the 21st century, the outlook for non-cute animals has never been bleaker.

eta: the coat of the Common Seal prolongs its species' life expectancy not just by keeping out the icy waters of the North Sea. New factors introduce themselves into the natural selection process. You need cute offspring, ideally furry ones.

Nick
 
Whilst humans are undoubtedly complex beings, large numbers of them seem to live lives that could probably be replicated (in terms of the number of processing transactions) on one of those early 386 computers.

Utter nonsense.

Do you have any idea how much computation is required to do something as simple as eat a cookie or get out of a chair and walk to the toilet?

An early 386 wouldn't be able to control a single human limb correctly.
 
I don't believe anyone much cares about complexity of consciousness. I identify myself as a human being, experience empathetic response with other humans, and it becomes hard to hurt them. I create social rules to protect myself and others. Somewhere way down the line complexity could be argued as a factor but I figure it's a long way down. Humans just don't think about this stuff so much. Our rules come more from the gut. Certainly when we look at other species a far more important factor than complexity is how fluffy they are.
Well fortunately what you believe doesn't significantly affect what other people care about. Your intuition about what humans think about and how we look at other species is strange but interesting. If fluffiness is a more important criterion for you to care about another species than complexity of consciousness, I wonder why you're debating in this thread at all. Where are all the fluffy AIs?
 
How do you know?
Because it's trivial.
Code:
s = input()                     # read a program's source text
if s == "while True: pass":     # recognize one particular non-halting program
    print("DOESN'T HALT")
There is a program that tells if particular programs will halt. Note how its existence does not violate the Halting Problem, since the HP doesn't say what you think it says. Think a bit more carefully about what the HP actually states, and what it actually implies. The HP states that it's not possible to build a machine that can tell for any input whether the machine halts or not.

See also WP (Direct link to section, "Recognizing partial solutions"), if the above trivial example doesn't do it for you.
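For contrast, what the HP does rule out is a total decider; the classic diagonal sketch shows why (halts below is the assumed, impossible decider):
Code:
# Suppose halts(prog, inp) were a total, always-correct halting decider.
def halts(prog, inp):
    raise NotImplementedError  # assumed decider; no correct one can exist

def trouble(p):
    if halts(p, p):       # do the opposite of whatever is predicted
        while True:
            pass          # predicted to halt -> loop forever
    return                # predicted to loop -> halt immediately

# trouble(trouble) halts iff halts(trouble, trouble) says it doesn't: contradiction.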
The point is: these are equivalent statements.
The point you objected to is that for the GC, there is a definite truth value. The point you're trying to raise is irrelevant.

What you're trying to say is not implied by GFIT, just like what you're trying to say about the halting problem is not implied by the HP. You're confused because they kind of look like the same thing, but GFIT is talking about the existence of theories that cannot be proven within a system (true theories at that), and not the non-existence of theories that can. Just like HP is talking about being able to come up with a general algorithm to tell that any machine would halt, and not that it's impossible to tell if a given machine would halt.

Your confusion leads you to think that Godel is saying that if you don't know whether or not a theorem is true, you can't show that it has a definite truth value. Godel's Incompleteness Theorem says nothing of the sort--not even close.
If you cannot compute it then - computationally speaking - there is no answer.
That's not the case. There's an entire field of mathematics where the existence of some kind of way to do something can be proven even though you can't find out the particular way to do it. There's absolutely no principle, or no theory, that says you can't do that. GFIT is not such a theory either--GFIT simply says that there are statements that cannot be proven (analogously, GFIT says there are rabbits, not that all animals are rabbits, nor that if you don't know what an animal is, it must be a rabbit).
I can confirm the halting problem up to any arbitrary number too.
No you can't. To do this, you would have to be able to play the other side of the same kind of game I'm playing with you. I pick a number, and you have to show what all Turing machines less than that number will do (halt or not halt), and do it in a finite amount of time. You can't do that.
Okay sure. You do that an infinite number of times and get back to me when you're done.
For the same reason I don't have to write down every one of infinitely many numbers, or Godel doesn't have to write down all possible mathematical systems, I don't have to do this an infinite number of times.

Think of it this way. If I can pick S given your arbitrarily large M, that means that you cannot pick an M for which I can't pick that S. This is much more powerful than you are giving it credit for.

Of course, the one thing it doesn't prove is the one thing I'm not claiming--that you can prove the GC true if it is.
But you cannot compute that!
Doesn't matter. Look at what GC says. If I can't prove it, and if, given that it's false, I necessarily could prove it false, it follows that the only way I can't prove it is if it's true. If it's undecidable, even in 72-point font that blinks, is bolded, and scrolls in a marquee playing parade music, that still means GC is true.

The catch is that I wouldn't necessarily be able to prove that I can't prove it in that case, so I wouldn't necessarily be able to know.
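The logical shape of that argument, assuming (as is standard) that any counterexample to GC could be verified by a finite computation:
Code:
\neg\mathrm{GC} \;\Rightarrow\; \exists n\ (\text{an even counterexample exists})
             \;\Rightarrow\; \text{some finite computation refutes GC}.
\text{Contrapositive: if no computation refutes GC, then GC is true.}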
Actually I'm arguing that if you're going to try a brute-force search then you're pretty much screwed
Right, but what you objected to initially was whether or not it had a definite truth value. That's still wrong--it can have a definite truth value even if you don't know what it is... GFIT doesn't say otherwise. And it's possible that for certain theories, you can show it has a definite truth value, without knowing what it is--GFIT again doesn't say otherwise. All GFIT says is that there are things you can't prove--in fact, GC can be something I can't prove, and be true! I would just have to not be able to know I can't prove it due to what GC actually is.
Basically: you cannot be complete and consistent - in a sufficiently powerful formal language there will be some statement you can make whose truth value cannot be decided in that language.
Right. Anything in there about not knowing if it has a definite truth value?
 
Whilst humans are undoubtedly complex beings, large numbers of them seem to live lives that could probably be replicated (in terms of the number of processing transactions) on one of those early 386 computers.
I can't even count the number of logical fallacies you built into that one sentence.

So, is actual complexity the factor here or how much actual processing is going on?
What?
 
I can't even count the number of logical fallacies you built into that one sentence.


What?

My point is that whilst humans may be very complex creatures, many lead acutely sedentary lives and make active use of little of their potential complexity. In behavioural terms they could get by with a great deal less complexity than they have.

So, does this make them less worthy of life?

Nick
 
Well fortunately what you believe doesn't significantly affect what other people care about. Your intuition about what humans think about and how we look at other species is strange but interesting. If fluffiness is a more important criterion for you to care about another species than complexity of consciousness, I wonder why you're debating in this thread at all. Where are all the fluffy AIs?

My money's still on the bush babies!

A ramification of Strong AI is that, if it gets more accepted (and I still dispute Pixy's inference that pretty much every scientist alive agrees with it), humans are going to have to think about how they ascribe value to life.

Nick
 
A ramification of Strong AI is that, if it gets more accepted (and I still dispute Pixy's inference that pretty much every scientist alive agrees with it), humans are going to have to think about how they ascribe value to life.

Uh no, we've already been doing that just fine since before AI was even thought of. It'll just change the nature of the debate, just like any other discovery about humanity might do.
 
Uh no, we've already been doing that just fine since before AI was even thought of. It'll just change the nature of the debate, just like any other discovery about humanity might do.

True. But Strong AI takes things potentially to a whole new level. There is such a disparity between how our brains typically conceive of self and how they actually manifest self that human culture inevitably exists on a continual existential precipice.

Value systems which reflect evolution-acquired biological needs will have to adapt to the reality of our computational nature.

Nick
 
My point is that whilst humans may be very complex creatures, many lead acutely sedentary lives and make active use of little of their potential complexity. In behavioural terms they could get by with a great deal less complexity than they have.
No.

So, does this make them less worthy of life?
No, it just makes you wrong.
 
True. But Strong AI takes things potentially to a whole new level. There is such a disparity between how our brains typically conceive of self and how they actually manifest self that human culture inevitably exists on a continual existential precipice.
Irrelevant, untrue, and a logical fallacy. Good work!

Value systems which reflect evolution-acquired biological needs will have to adapt to the reality of our computational nature.
Why?
 
