The Hard Problem of Gravity

Here's a chance for you then - post the funniest computer generated joke. That's a simple enough application for AI, surely?

Writing jokes is not simple in any sense of the word. There are plenty of genuinely intelligent people who can't write jokes. However, I think this is pretty funny in an absurdist kind of way:

The pig go. Go is to the fountain. The pig put foot. Grunt. Foot in what? ketchup. The dove fly. Fly is in sky. The dove drop something. The something on the pig. The pig disgusting. The pig rattle. Rattle with dove. The dove angry. The pig leave. The dove produce. Produce is chicken wing. With wing bark. No Quack.

(from The Daily WTF, some years ago).
 
You've actually come close to the real reason for assuming that consciousness exists in more than one person. Human interpretation. It's also human interpretation that leads us to suspect that dogs might well be conscious, that bacteria might be, and that thermostats almost certainly aren't.
This is not coherent. The "interpretation" that lets you insert consciousness into canines is the same one that lets anyone else insert it into thermostats.

At least you should be consistent.

FTR: Dennett, who first used the analogy, never intended to suggest that thermostats were conscious the way humans are conscious, and he went out of his way to disabuse people of that notion. His premise was that thermostats were "aware".
 
:) Brings back memories. I remember these arguments. I used to make them.

No computer ever made has ever understood binary. Nothing in the behaviour of computers indicates understanding. Computers don't use binary to communicate with each other any more than planets use gravity to communicate with each other.
False. Computers communicate with each other daily. One computer might communicate with a server to tell the server that it needs to be backed up. No human is involved with the communication. No human is cognizant that the communication is even taking place. Only the computer is aware that it has reached a state that requires a backup.
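To make the scenario concrete, here is a minimal sketch of two programs exchanging a message over TCP with no human in the loop. The "NEED_BACKUP" message and "ACK" reply are invented for illustration; real backup agents use their own protocols.

```python
import socket
import threading

# Minimal sketch of the scenario above: one program tells another, over
# TCP, that it has reached a state requiring a backup. No human is in
# the loop. The "NEED_BACKUP"/"ACK" messages are invented for the sketch.

def serve_one(listener: socket.socket, inbox: list) -> None:
    """Accept a single connection, record its message, acknowledge it."""
    conn, _ = listener.accept()
    with conn:
        inbox.append(conn.recv(1024))
        conn.sendall(b"ACK")

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

inbox = []
t = threading.Thread(target=serve_one, args=(listener, inbox))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"NEED_BACKUP")    # the machine reports its own state
reply = client.recv(1024)
client.close()
t.join()
listener.close()

print(inbox[0], reply)
```

Whether this exchange counts as "communication" in any richer sense is, of course, exactly what the thread is arguing about.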

If we were to say that binary has meaning to computers, then we have to say that every force in the universe has meaning to every object in the universe. And "meaning" ceases to have meaning.
I've no problem with this. You are trying to make "meaning" a transcendent property that can only be equivalent to human capability of understanding. It's not.

These are all entities which may to some degree be conscious. It's possible that whale cries have actual meaning to whales, in the way that binary has no meaning to computers.
You are simply decreeing by fiat that binary has no meaning to computers. You do so without foundation. And no, your thermostat discussion does not in any way give you the foundation you want. You don't know what a whale is capable of any more than you know what a computer is capable of. You simply decree them to be different.

Here's a chance for you then - post the funniest computer generated joke. That's a simple enough application for AI, surely?
Humor is not as simple as you assert, but there is absolutely zero reason to suppose that because computers don't write humor, they can't. One might have argued prior to the Wright brothers that because no human could fly, aided or unaided, it was impossible.

You are arguing from ignorance.
 
Writing jokes is not simple in any sense of the word. There are plenty of genuinely intelligent people who can't write jokes. However, I think this is pretty funny in an absurdist kind of way:



(from The Daily WTF, some years ago).

Absolutely fantastic!

Another wonderful thing learned here -- this thread has been quite productive in some respects.
 
Sense... you are not making it.


Excuse me... WHO said ANYTHING about "things we cannot detect"? This is a big problem in this forum, as I have always stated. The world offers far more variety than "black and white" (read: materialists and some form of immaterialists), and it is a shame that so many here are wedded to this poor dichotomy when there are all kinds of colors, smells and flavors.

Ok, so tell me. If something is not material, it is... ?

By "material" I mean what I meant, but it is difficult to see (I reckon) if you do not know my thinking. Nothing to be worried about, as it is not precisely something you have encountered before.

Thanks for explaining your position! It is much clearer now.

Let's examine your way of thinking; I will show you why it is naive:

"I assume that every X is Y, because all X I can detect are Ys"

Not a sound syllogism, if you ask me. And it gets worse, because "Y" is an assumption, a projection used to explain "X", not "X" itself. "Y" has also changed its meaning throughout history to accommodate itself to new theoretical accounts of reality. Finally, the equivalence principle itself is weak at best.

Yes. Logic is naive.:rolleyes: If all X that I can detect are Y, then NECESSARILY all X are Y. Any X that I cannot detect may as well not exist.

In all reality, we do not need matter at all; all we need are repeatable facts and theories that let us connect the facts in an orderly way. One such theory is, indeed, materialism, but as soon as we fall into an ontological commitment to (any) theory (ascribing it reality beyond our thinking) we start to talk nonsense.

Ah I see. What are we going to use to measure and record these facts, if we "don't need matter at all"?
 
:) Brings back memories. I remember these arguments. I used to make them.

False. Computers communicate with each other daily. One computer might communicate with a server to tell the server that it needs to be backed up. No human is involved with the communication. No human is cognizant that the communication is even taking place. Only the computer is aware that it has reached a state that requires a backup.

I've no problem with this. You are trying to make "meaning" a transcendent property that can only be equivalent to human capability of understanding. It's not.

Well, I'm trying to make meaning have its normal everyday sense.

What you are trying to do is to make computers - operating entirely according to the laws of physics - have some kind of understanding of what they are doing denied to other objects. They don't. They don't exchange information in a different sense to other objects, and they don't understand it at all.

Nothing a computer does when it "interprets" a TCP/IP packet or displays a JPEG is in any fundamental way different to what a planet does when it orbits the sun. The exchange of information means exactly the same. If you want to claim that computers communicate with each other, you have to accept that the picture of Hannah Montana communicates with your fridge door via magnetism. There is no difference in principle. If you are dead set on insisting that the computers are doing something special, you'll need to come up with a proper theory to demonstrate it.

You are simply decreeing by fiat that binary has no meaning to computers. You do so without foundation. And no, your thermostat discussion does not in any way give you the foundation you want. You don't know what a whale is capable of any more than you know what a computer is capable of. You simply decree them to be different.

Humor is not as simple as you assert, but there is absolutely zero reason to suppose that because computers don't write humor, they can't. One might have argued prior to the Wright brothers that because no human could fly, aided or unaided, it was impossible.

You are arguing from ignorance.

There are two ways to demonstrate that Y is theoretically possible. One is to work out a sound theory and show that if X is possible, then Y must be. Another is to actually do Y. So far, hard AI has been lamentably lacking on both counts. The idea that computers are doing something new and different from what goes on elsewhere in the universe is a particularly unconvincing bit of anthropomorphism.
 
Writing jokes is not simple in any sense of the word. There are plenty of genuinely intelligent people who can't write jokes. However, I think this is pretty funny in an absurdist kind of way:



(from The Daily WTF, some years ago).

I doubt if Leno will be using the program any time soon though.

As I pointed out earlier, as jokes can be encoded in digital form, and they are of finite length, all possible jokes exist in potentia, and it's a matter of selecting them. A computer would find it very simple (if tediously time-consuming) to print out every possible joke. Placing them in order of humourousness would be another matter.
 
What you are trying to do is to make computers - operating entirely according to the laws of physics - have some kind of understanding of what they are doing denied to other objects. They don't. They don't exchange information in a different sense to other objects, and they don't understand it at all.

I know this has been gone over (and over, and over) in the thread already, but when you substitute the word "people" for "computers" in that paragraph, an interesting thing happens.
 
I know this has been gone over (and over, and over) in the thread already, but when you substitute the word "people" for "computers" in that paragraph, an interesting thing happens.

Yes, it ceases to be true.

I think if you read over this thread, you'll find that even the people most determined not to give people a privileged position in the universe continue to use words which are only applicable to human beings.

Human beings do understand stuff. Hence the fact that the word "understand" exists. If someone believes that something other than a human being is capable of understanding, then he needs to demonstrate how that might be possible.
 
Well, I'm trying to make meaning have its normal everyday sense.
Whatever the hell that means. No really, I think that is entirely the problem. Your "meaning" is slippery and poorly defined.

What you are trying to do is to make computers - operating entirely according to the laws of physics - have some kind of understanding of what they are doing denied to other objects. They don't. They don't exchange information in a different sense to other objects, and they don't understand it at all.

  1. You've zero evidence that whales or yeast understand anything in a way that computers don't.
  2. You are comparing human perception to what computers do right now and drawing unwarranted conclusions from the facts.
Nothing a computer does when it "interprets" a TCP/IP packet or displays a JPEG is in any fundamental way different to what a planet does when it orbits the sun.
Simply asserted but that's fine. I don't see it as significant. So what if it is true?

The exchange of information means exactly the same. If you want to claim that computers communicate with each other, you have to accept that the picture of Hannah Montana communicates with your fridge door via magnetism. There is no difference in principle. If you are dead set on insisting that the computers are doing something special, you'll need to come up with a proper theory to demonstrate it.
This is wrong and rather disappointing. The Hannah Montana picture doesn't cause a change in the state of the refrigerator the way data transmitted from one computer to another changes states. Yes, there is something different in principle. The state of one system is interacting with another system in a dynamic way and there is an interchange of information.

There are two ways to demonstrate that Y is theoretically possible. One is to work out a sound theory and show that if X is possible, then Y must be. Another is to actually do Y. So far, hard AI has been lamentably lacking on both counts. The idea that computers are doing something new and different from what goes on elsewhere in the universe is a particularly unconvincing bit of anthropomorphism.
Yeah, I used to make this argument. I know what it is like to have this perception.

You are arguing from ignorance, and there is nothing to demonstrate that AI is theoretically impossible, any more than flight could be demonstrated to be theoretically impossible before we understood aerodynamics.

If and when you grasp the connection between AI and aerodynamics, I think you might move your understanding forward the way I did.

Consciousness, on the level of human cognition, is an extremely complex thing. No one is claiming that it is easy to understand, or denying that we lack a fundamental understanding of it, the way humans lacked a fundamental understanding of flight prior to aerodynamics. Humility is a good thing, and admitting our ignorance in the areas of neuroscience and cognition is fair, but inserting claims because of ignorance is in and of itself presumptuous and arrogant. As arrogant as declaring that aided human flight prior to the Wright brothers was impossible.

Oh, and BTW, many at the time did think flight impossible.
 
Whatever the hell that means. No really, I think that is entirely the problem. Your "meaning" is slippery and poorly defined.



  1. You've zero evidence that whales or yeast understand anything in a way that computers don't.
  2. You are comparing human perception to what computers do right now and drawing unwarranted conclusions from the facts.
Simply asserted but that's fine. I don't see it as significant. So what if it is true?

This is wrong and rather disappointing. The Hannah Montana picture doesn't cause a change in the state of the refrigerator the way data transmitted from one computer to another changes states. Yes, there is something different in principle. The state of one system is interacting with another system in a dynamic way and there is an interchange of information.

As is the Hannah Montana magnet when it is placed or removed. As is everything in the universe in relation to everything else.

Yeah, I used to make this argument. I know what it is like to have this perception.

The "I was once as you are" wise old man bit gets old quite quickly.

However, the fact that you do know what it's like to have a perception is at least one better than Belz, who doesn't accept such things.

You are arguing from ignorance, and there is nothing to demonstrate that AI is theoretically impossible, any more than flight could be demonstrated to be theoretically impossible before we understood aerodynamics.

If and when you grasp the connection between AI and aerodynamics, I think you might move your understanding forward the way I did.

Consciousness, on the level of human cognition, is an extremely complex thing. No one is claiming that it is easy to understand, or denying that we lack a fundamental understanding of it, the way humans lacked a fundamental understanding of flight prior to aerodynamics. Humility is a good thing, and admitting our ignorance in the areas of neuroscience and cognition is fair, but inserting claims because of ignorance is in and of itself presumptuous and arrogant. As arrogant as declaring that aided human flight prior to the Wright brothers was impossible.

Oh, and BTW, many at the time did think flight impossible.

It might be that artificial consciousness is possible, but that doesn't mean that it will come about by executing algorithms.
 
As is the Hannah Montana magnet when it is placed or removed. As is everything in the universe in relation to everything else.
Ok, I'll grant the premise. A small interaction and little change in state. Very little information is exchanged and the state doesn't change much. FWIW, there is good reason to suppose this is all that is happening with human cognition. You want us to axiomatically assume there is something special going on, as compared to a magnet on a fridge, when there is no evidence to suppose so.

The "I was once as you are" wise old man bit gets old quite quickly.
I just want you to know that I passionately held and argued your position. I understand your point of view.

However, the fact that you do know what it's like to have a perception is at least one better than Belz, who doesn't accept such things.
I don't mean to speak for Belz but we should most certainly be careful in how much credence we give to things such as perception.

It might be that artificial consciousness is possible, but that doesn't mean that it will come about by executing algorithms.
Sure, but there's no reason to think that it won't either.
 
Yes, it ceases to be true.

Ah. I think you failed to detect the interesting thing. Or you detected a different interesting thing than I did.

Let's have that again:

What you are trying to do is to make computers - operating entirely according to the laws of physics - have some kind of understanding of what they are doing denied to other objects. They don't. They don't exchange information in a different sense to other objects, and they don't understand it at all.

When exactly does this cease to be true if we change "computers" to "people?" Not in the first sentence. People operate entirely according to the laws of physics, don't they? And what you're trying to do is to say that people have a kind of understanding that is denied to other objects, isn't it?

It's not in the second sentence. That's just a flat denial of the premise described in the first sentence. So if the truth value of that premise hasn't changed, the truth value of the second one can't change either. That's just logic.

So it must be in that pesky third sentence. Either people do exchange information in a different sense to other objects or they do understand it at least a little bit. Or both.

But if the first sentence remains true and the third one becomes false, then that means that the third sentence cannot be a logical consequence of the first one, or vice versa. That's the interesting thing I was talking about. The whole "They're just objects -- all they can do is obey the laws of physics" bit is a big old smoke screen.

If someone believes that something other than a human being is capable of understanding, then he needs to demonstrate how that might be possible.

Did you happen to follow the link I posted earlier that explains how the "No Quack" story was written?
 
The idea that computers are doing something new and different from what goes on elsewhere in the universe is a particularly unconvincing bit of anthropomorphism.

And the idea that humans are doing something new and different from what goes on elsewhere in the universe isn't?
 
I doubt if Leno will be using the program any time soon though.

I doubt if Leno will be using "just any old human" anytime soon either. People have varying degrees of a sense of humor. Some couldn't produce a joke if their life depended on it. So what?

What you have been ignorant of from the beginning of this thread is that every "magic bullet" behavior you come up with to support your argument is in fact not something the average human would insist be a requisite for consciousness. An entity has to have a sense of humor to be conscious now? That is news to me.

As I pointed out earlier, as jokes can be encoded in digital form, and they are of finite length, all possible jokes exist in potentia, and it's a matter of selecting them. A computer would find it very simple (if tediously time-consuming) to print out every possible joke. Placing them in order of humourousness would be another matter.

And as I pointed out earlier, you have no idea what you are talking about anymore.

Nobody is talking about Chinese rooms here. The computer in that link utilized a neural network to produce that "joke." It did not just "select" the joke from some abstract list of all potential jokes. That isn't how neural networks function -- it isn't even close. If you were as familiar with this subject as you like to pretend you are, you would know that.

The computer that produced that "joke" definitely knew the meaning of the words it used -- in the context of the text it was trained on. It knew the relationships between them as far as it could learn from text only. It knew the meaning as well as any human would know the meaning of a word they were exposed to through text only.

Of course it didn't know what a pig looks like, or what a bird sounds like, because it doesn't have visual or auditory input it can learn from. But it isn't that hard to imagine that a faster, more powerful system would learn all that other stuff if it did have visual or auditory input to learn from.

And, of course, the fact that the human brain is an extremely fast and powerful group of biological neural networks with access to all sorts of data about the world that it can learn from has nothing to do with any of this. The fact that the "joke" the computer produced sounds like something a young child might write in a second language has nothing to do with the similarities in structure between a child's brain and a computer neural network. Nothing at all.
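The posts above describe a neural network learning word relationships from text alone. As a much simpler stand-in for that idea (deliberately not a neural network), here is a toy bigram model; the training text is invented for the sketch and is not the actual corpus the joke generator was trained on.

```python
import random
from collections import defaultdict

# Toy stand-in for "learning word relationships from text alone": a
# bigram model counts which words follow which in its training text,
# then walks those learned links to produce new text. The corpus here
# is invented for the sketch.

corpus = "the pig go . the dove fly . the pig grunt . the dove drop"
words = corpus.split()

# For each word, record the words that follow it (with repetition,
# so frequent continuations are sampled more often).
follows = defaultdict(list)
for a, b in zip(words, words[1:]):
    follows[a].append(b)

def generate(start: str, length: int, seed: int = 0) -> str:
    """Walk the learned word-to-word links to produce new text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        nxt = follows.get(out[-1])
        if not nxt:
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

print(generate("the", 6))
```

A real neural network learns far richer, graded relationships than these raw counts, but the basic point stands: the model's "knowledge" of the words comes entirely from the text it was exposed to.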
 
It did not just "select" the joke from some abstract list of all potential jokes. That isn't how neural networks function -- it isn't even close.
I just take issue with this part. That is exactly what the neural network did.

Assume we use periods, commas, quotes, spaces, and the letters A through Z in upper case, to produce a joke that is 8000 characters long or less. If the joke ends early, right-pad with spaces. An extremely generous ceiling on the number of such jokes would be 30^8000. There's no difference between creating a joke in this set and selecting a joke from this set.
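The "creating equals selecting" claim can be made exact: with a fixed alphabet and a fixed maximum length, every possible text corresponds to exactly one integer index, so producing a string and picking its index out of the enumeration are the same operation. This sketch uses a tiny length of 8 rather than 8000.

```python
# Sketch of the argument above: strings over a 30-symbol alphabet with
# a fixed length are in one-to-one correspondence with the integers
# 0 .. 30**LENGTH - 1 (base-30 numerals). LENGTH is kept tiny here.

ALPHABET = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ.," '   # 30 symbols, as in the post
LENGTH = 8                                     # tiny stand-in for 8000

def index_to_text(n: int) -> str:
    """Return the n-th string: read n as a base-30 numeral."""
    chars = []
    for _ in range(LENGTH):
        n, digit = divmod(n, len(ALPHABET))
        chars.append(ALPHABET[digit])
    return "".join(chars)

def text_to_index(s: str) -> int:
    """Inverse mapping: every fixed-length text has a unique index."""
    n = 0
    for ch in reversed(s.ljust(LENGTH)):
        n = n * len(ALPHABET) + ALPHABET.index(ch)
    return n

# Round trip: "generating" this text and "selecting" its index from the
# enumeration are the same operation.
joke = "NO QUACK"
assert index_to_text(text_to_index(joke)) == joke
```

Of course, as the thread goes on to note, enumerating the set says nothing about ranking its members by humourousness, which is the hard part.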
 
There's no difference between creating a joke in this set and selecting a joke from this set.

You don't see the difference between creating an algorithm for jokes and rolling a 30^8000 sided die?
 
