Artificial Intelligence thinks mushroom is a pretzel

If you don't read Chinese, and you respond to written questions with Chinese characters someone else directs you to write, are you understanding Chinese?

No, but the system is understanding Chinese.

I'm not sure what this question is supposed to reveal. We know that non-thinking components can be combined into thinking systems. The only weirdness here is using thinking systems as non-thinking components in a thinking system. Which is definitely fun-weird, but I don't see that it tells us anything we didn't already know about thinking systems.
 
I was referencing an old thought experiment. A person who can't read Chinese gets written questions. He has a vast series of reference books in which he can look up the characters, but the books don't explain what the characters mean; they simply direct him: "if you see this character and that character in this order, respond with this, that, this". He will write the response as instructed and deliver it, but he doesn't understand either the question or the response. He can't. He's just carrying out preset instructions without comprehending any meaning. That's what computers do. And, barring an evolutionary leap into sentience, that's all AI can do.
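For what it's worth, the procedure is easy to make concrete. Here's a minimal sketch in Python of what the man in the room is doing; the two rules and the symbols are invented placeholders, not anything from the original thought experiment:

# A toy "Chinese Room": the operator matches the incoming characters
# against a rule book and copies out the prescribed reply. No meaning
# is attached to any symbol at any point.
RULE_BOOK = {
    ("你", "好"): "你好",    # hypothetical rule: greeting -> greeting
    ("几", "点"): "三点",    # hypothetical rule: "what time?" -> "three o'clock"
}

def operator_reply(question: str) -> str:
    """Look the question up; with no matching rule, the operator is stuck."""
    return RULE_BOOK.get(tuple(question), "")

print(operator_reply("你好"))  # pure lookup, zero comprehension

The point of the sketch is that operator_reply works identically whether or not anyone involved knows what the symbols mean.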
 
Thanks, I got the reference, but hadn't thought it through yet when I made my reply. It's been a while since I've considered it.

Well, except the computer effectively is understanding Chinese.

Yes, but there are a couple of interesting conundrums in this thought experiment.

First, the system cannot think about its own understanding of Chinese. It is applying rote rules. It can't review and reprocess any of its own processing. It can't evaluate its own rules, nor can it modify or extend them. It didn't even produce its own rules in the first place. This appears to be very unlike the way we function as Chinese-understanding thought machines. (There's a small sketch below that tries to make this concrete.)

Second... I forget what the second conundrum was supposed to be. Probably I combined the first and second conundrums into the first paragraph. Yeah. Let's go with that.
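To put the first conundrum in code terms: the room's rule set is frozen, while even a crude learner can rewrite its own table. A rough, purely illustrative sketch (every name and rule here is made up):

# The room: a frozen mapping, applied forever exactly as written.
FROZEN_RULES = {"q1": "a1"}

def room(question):
    return FROZEN_RULES.get(question, "")  # can never add or revise a rule

# A crude learner: it can at least extend its own table when corrected,
# which is the self-modification the room is stipulated to lack.
class Learner:
    def __init__(self):
        self.rules = {}

    def answer(self, question):
        return self.rules.get(question, "")

    def correct(self, question, better_answer):
        self.rules[question] = better_answer  # the rules rewrite themselves

bot = Learner()
bot.correct("q2", "a2")
print(bot.answer("q2"))  # "a2" -- the table grew; FROZEN_RULES never can

This isn't meant as a model of thought, just as a marker of where that line sits.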

Anyway, you put a thinking person into this system, and over time they're going to learn a lot of Chinese. Not the semantic content of the symbols, of course. But a lot of the grammar, a lot of the conventional phrases, stuff like that. Give a person ten years on the job, especially if they start young, and then tell them that this character means "house". Suddenly a lot of other meanings will start to fall into place. If that means house, then these phrases probably mean house-related stuff. Fill in a few more gaps and they'll start piecing the semantics together pretty quickly. I'm not sure what - if anything - this has to do with AI, though.
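Incidentally, that "one anchor word unlocks the rest" process is roughly how distributional methods bootstrap meaning: symbols that keep the same company get similar guesses. A toy Python sketch, with a made-up three-sentence corpus:

from collections import Counter, defaultdict

# Symbol sequences the clerk has shuffled for ten years (invented corpus).
corpus = [["房", "子", "大"], ["房", "子", "小"], ["屋", "子", "大"]]

# Tally each symbol's neighbours; symbols with similar neighbour
# profiles are probably used in similar ways.
contexts = defaultdict(Counter)
for sentence in corpus:
    for i, sym in enumerate(sentence):
        for j, other in enumerate(sentence):
            if i != j:
                contexts[sym][other] += 1

def overlap(a, b):
    """Shared context counts between two symbols."""
    return sum((contexts[a] & contexts[b]).values())

# Told that 房 relates to "house", we'd guess 屋 does too, because
# their context profiles overlap heavily.
print(overlap("房", "屋"))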
 
Ask a person a question in a language they speak but using words that aren't in their vocabulary. The result will be the same.

Except a person can attempt to glean meaning from context, make guesses, seek out more information. A person operating from a set of instructions they've been given can only follow the script.
 
What does this tell us?

I understand a ton of English, but I'm still stumped by English words I've never seen before.

But if you know the words around them you can use context. You can make educated guesses from etymology and similar words. Someone following preprogrammed rules can't do that, because they can't really think about it; they can only follow instructions. Making the instructions vastly more elaborate to cover more potential inputs still doesn't actually teach them Chinese.

I think one of the measures of real intelligence would be coming up with an answer that wasn't provided beforehand and can't be reached using the rules given.
 
More complex instruction sets are also possible.

A painting being more photorealistic doesn't make it more the thing it depicts. When elaborate scripts are being followed, the thinking was done by whoever wrote the scripts. The thing following them isn't thinking until it chooses to deviate from the script.
 
You seem to be suggesting that "thinking" is some other process than matter following the laws of physics (i.e. an instruction set). Is that really what you believe?

Dave
 
I'm actually arguing the opposite. Thinking is an action living brains can do. It's not something that's being done when a piece of machinery follows rules programmed into it. Nor is it some magical exception to the universal truth that illusion doesn't become real just because it's a very detailed illusion.
 
Unless you have a holodeck, a simulation is still just a simulation. A simulated intelligence would simulate thought, but not actually think.

I don't have a personal motto, but this, picked up during my career in engineering, might be it, if I had one, which I don't.

The pleasurement of measurement outweighs the stimulation of simulation.
 
Just because you can't perceive a difference doesn't mean it's not there. Is that cat over there slumbering peacefully or just brilliantly taxidermied? You can't tell, but it makes a hell of a difference to the cat.

If there is a difference that isn't perceptible, is it really a difference at all?
 
I don't know that pain, or its analogue, would be such a bad idea, but there are also positive rather than negative incentives.

The trouble is that if it really was pain and the machines really were intelligent, then it would surely only be a matter of days before the situation was reversed :)
 
Try swapping out original artworks for forgeries at your local museum and see what the judge says on that question!

If the difference between the original and the forgery is imperceptible - that means there is a 100% match to all features detectable by any method - then there would be no way for them to tell that I'd done it.
 
