Even on South Park?
I must plead ignorance there; I have not, myself, heard it (thus my qualifiers of "I have heard").
Here's a chance for you then - post the funniest computer generated joke. That's a simple enough application for AI, surely?
The pig go. Go is to the fountain. The pig put foot. Grunt. Foot in what? ketchup. The dove fly. Fly is in sky. The dove drop something. The something on the pig. The pig disgusting. The pig rattle. Rattle with dove. The dove angry. The pig leave. The dove produce. Produce is chicken wing. With wing bark. No Quack.
This is not coherent. The "interpretation" that lets you insert consciousness into canines is the same one that lets anyone else insert it into thermostats.

You've actually come close to the real reason for assuming that consciousness exists in more than one person: human interpretation. It's also human interpretation that leads us to suspect that dogs might well be conscious, that bacteria might be, and that thermostats almost certainly aren't.
False. Computers communicate with each other daily. One computer might communicate with a server to tell the server that it needs to be backed up. No human is involved with the communication. No human is cognizant that the communication is even taking place. Only the computer is aware that it has reached a state that requires a backup.

No computer ever made has ever understood binary. Nothing in the behaviour of computers indicates understanding. Computers don't use binary to communicate with each other any more than planets use gravity to communicate with each other.
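For what it's worth, the kind of unattended machine-to-machine exchange described above is mundane to sketch. Here is a minimal toy model (the class, names, and threshold are all hypothetical, not any real backup protocol) in which a client checks its own state and notifies a "server" with no human in the loop:

```python
import json

class BackupServer:
    """Hypothetical server: simply records the requests it receives."""
    def __init__(self):
        self.requests = []

    def receive(self, raw: bytes) -> None:
        self.requests.append(json.loads(raw.decode("utf-8")))

def needs_backup(changed: int, synced: int, threshold: float = 0.9) -> bool:
    # A purely mechanical test of internal state -- no "understanding" required.
    total = changed + synced
    return total > 0 and changed / total >= threshold

server = BackupServer()
if needs_backup(changed=95, synced=5):
    # In a real deployment this would fire unattended, e.g. from a scheduler.
    server.receive(json.dumps({"event": "backup_requested"}).encode("utf-8"))

print(server.requests)  # → [{'event': 'backup_requested'}]
```

Whether this counts as "communication" in any loaded sense is exactly what the thread is disputing; the sketch only shows that no human needs to be in the loop.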
I've no problem with this. You are trying to make "meaning" a transcendent property that can only be equivalent to human capability of understanding. It's not.

If we were to say that binary has meaning to computers, then we have to say that every force in the universe has meaning to every object in the universe. And "meaning" ceases to have meaning.
You are simply decreeing by fiat that binary has no meaning to computers. You do so without foundation. And no, your thermostat discussion does not in any way give you the foundation you want. You don't know what a whale is capable of any more than you know what a computer is capable of. You simply decree them to be different.

These are all entities which may to some degree be conscious. It's possible that whale cries have actual meaning to whales, in the way that binary has no meaning to computers.
Humor is not as simple as you assert, but there is absolutely zero reason to suppose that because computers don't write humor, they can't. One might have argued prior to the Wright brothers that because no human could fly, aided or unaided, flight was impossible.
Writing jokes is not simple in any sense of the word. There are plenty of genuinely intelligent people who can't write jokes. However, I think this is pretty funny in an absurdist kind of way:
(from The Daily WTF, some years ago).
Excuse me... WHO said ANYTHING about "things we cannot detect"? This is a big problem in this forum, as I have always stated. The world offers far more variety than "black and white" (read: materialists and some form of immaterialists), and it is a shame that many in here are so biased toward this poor dichotomy when there are all kinds of colors, smells and flavors.
By "material" I mean what I meant, but it is difficult to see (I reckon) if you do not know my thinking. Nothing to be worried about, as it is not precisely something you have encountered before.
Let's examine your way of thinking; I will show you why it is naive:
"I assume that every X is Y, because all X I can detect are Ys"
Not a sound syllogism, if you ask me. And it gets worse, because "Y" is an assumption, a projection used to explain "X", not "X" itself. "Y" has also changed its meaning throughout history, to accommodate itself to each new theoretical account of reality. Finally, the equivalence principle itself is weak, at best.
In all reality, we do not need matter at all; all we need are repeatable facts and theories that let us connect the facts in an orderly way. One such theory is, indeed, materialism, but as soon as we fall into an ontological commitment to (any) theory (ascribing it reality beyond our thinking), we start to talk nonsense.
Brings back memories. I remember these arguments. I used to make them.
You are arguing from ignorance.
What you are trying to do is to make computers - operating entirely according to the laws of physics - have some kind of understanding of what they are doing that is denied to other objects. They don't. They don't exchange information in a different sense to other objects, and they don't understand it at all.
I know this has been gone over (and over, and over) in the thread already, but when you substitute the word "people" for "computers" in that paragraph, an interesting thing happens.
Stop.

What you are trying to do is to make computers - operating entirely according to the laws of physics
Whatever the hell that means. No really, I think that is entirely the problem. Your "meaning" is slippery and poorly defined.

Well, I'm trying to make meaning have its normal everyday sense.
Simply asserted, but that's fine. I don't see it as significant. So what if it is true?

Nothing a computer does when it "interprets" a TCP/IP packet or displays a JPEG is in any fundamental way different to what a planet does when it orbits the sun.
This is wrong and rather disappointing. The Hannah Montana picture doesn't cause a change in the state of the refrigerator the way data transmitted from one computer to another changes states. Yes, there is something different in principle: the state of one system is interacting with another system in a dynamic way, and there is an interchange of information.

The exchange of information means exactly the same. If you want to claim that computers communicate with each other, you have to accept that the picture of Hannah Montana communicates with your fridge door via magnetism. There is no difference in principle. If you are dead set on insisting that the computers are doing something special, you'll need to come up with a proper theory to demonstrate it.
Yeah, I used to make this argument. I know what it is like to have this perception.

There are two ways to demonstrate that Y is theoretically possible. One is to work out a sound theory and show that if X is possible, then Y must be. Another is to actually do Y. So far, hard AI has been lamentably lacking on both counts. The idea that computers are doing something new and different from what goes on elsewhere in the universe is a particularly unconvincing bit of anthropomorphism.
- You've zero evidence that whales or yeast understand anything in a way that computers don't.
- You are comparing human perception to what computers do right now and drawing unwarranted conclusions from the facts.
You are arguing from ignorance, and there is nothing to demonstrate that AI is theoretically impossible, any more than flight could be demonstrated to be theoretically impossible before we understood aerodynamics.
If and when you can see the connection between AI and aerodynamics, I think you might move your understanding forward the way I did.
Consciousness, on the level of human cognition, is an extremely complex thing. No one is claiming that it is easy to understand, or denying that we lack a fundamental understanding of it, the way humans lacked a fundamental understanding of flight prior to aerodynamics. Humility is a good thing, and admitting our ignorance in the areas of neuroscience and cognition is fair, but inserting claims because of ignorance is in and of itself presumptuous and arrogant. As arrogant as declaring that aided human flight prior to the Wright brothers was impossible.
Oh, and BTW, many at the time did think flight impossible.
Ok, I'll grant the premise. A small interaction and little change in state. Very little information is exchanged and the state doesn't change much. FWIW, there is good reason to suppose this is all that is happening with human cognition. You want us to axiomatically assume there is something special going on as compared to a magnet on a fridge when there is no evidence to suppose so.

As is the Hannah Montana magnet when it is placed or removed. As is everything in the universe in relation to everything else.
I just want you to know that I passionately held and argued your position. I understand your point of view.

The "I was once as you are" wise old man bit gets old quite quickly.
I don't mean to speak for Belz, but we should most certainly be careful in how much credence we give to things such as perception.

However, the fact that you do know what it's like to have a perception is at least one better than Belz, who doesn't accept such things.
Sure, but there's no reason to think that it won't either.

It might be that artificial consciousness is possible, but that doesn't mean that it will come about by executing algorithms.
Yes, it ceases to be true.
If someone believes that something other than a human being is capable of understanding, then he needs to demonstrate how that might be possible.
I doubt if Leno will be using the program any time soon though.
As I pointed out earlier, since jokes can be encoded in digital form and are of finite length, all possible jokes exist in potentia, and it's a matter of selecting them. A computer would find it very simple (if tediously time-consuming) to print out every possible joke. Placing them in order of humorousness would be another matter.
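The "in potentia" point can be made concrete: any joke encoded as text is just one string in the countable enumeration of all finite strings over a fixed alphabet. A toy sketch (the three-character alphabet is an illustrative assumption; a real encoding would use all printable characters):

```python
from itertools import count, product

ALPHABET = "ab "  # toy alphabet for brevity

def all_strings():
    """Yield every finite string over ALPHABET, shortest first."""
    for length in count(1):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# Every possible "joke" over this alphabet eventually appears in the stream:
gen = all_strings()
first_ten = [next(gen) for _ in range(10)]
print(first_ten)  # → ['a', 'b', ' ', 'aa', 'ab', 'a ', 'ba', 'bb', 'b ', ' a']
```

Enumerating is the easy part; as the post says, ranking the output by humorousness is where all the difficulty lives.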
I just have issue with this part. That is exactly what the neural network did.

It did not just "select" the joke from some abstract list of all potential jokes. That isn't how neural networks function -- it isn't even close.
There's no difference between creating a joke in this set and selecting a joke from this set.
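For readers unfamiliar with the generate-versus-select distinction being argued here: generative models produce text by sampling from learned conditional distributions, not by indexing a stored list of complete outputs. A character-level bigram chain stands in for the neural network in this toy sketch (the corpus and all names are illustrative assumptions):

```python
import random
from collections import defaultdict

# Toy training corpus; a real model would learn from far more text.
corpus = "the pig go. the dove fly. the pig rattle. the dove produce."

# Count character-bigram transitions -- the "learned" distribution.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample(start: str = "t", length: int = 40, seed: int = 0) -> str:
    """Generate text by repeatedly sampling the next character."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        chars, weights = zip(*options.items())
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

print(sample())  # a novel string that was never stored anywhere as a unit
```

Whether generating-by-sampling collapses into "selecting from the set of all possible strings", as the last post claims, is the philosophical question; mechanically, no list of complete jokes is ever consulted.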