AI Advice-- Not ready for Prime Time?

That's not what happened, though.

"Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect."
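If the "thousands of tiny changes in color" part sounds like magic, it isn't. Here's a deliberately crude sketch (plain least-significant-bit embedding in NumPy, not whatever encoding the CycleGAN actually learned) of how data can ride along in pixel changes far too small for the eye to notice, yet be read back exactly by a program:

```python
# Toy illustration only: hide one array's bits inside an 8-bit RGB image by
# nudging each channel value by at most 1 out of 255.
import numpy as np

rng = np.random.default_rng(0)

street = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)      # the "street map"
secret_bits = rng.integers(0, 2, size=street.shape, dtype=np.uint8)  # the "aerial detail" to smuggle

# Encode: overwrite each pixel's least significant bit with one secret bit.
encoded = (street & 0xFE) | secret_bits

# Decode: read the least significant bits back out.
recovered = encoded & 0x01

print(int(np.abs(encoded.astype(int) - street.astype(int)).max()))  # 1 -> invisible to the eye
print(bool(np.array_equal(recovered, secret_bits)))                 # True -> fully recoverable
```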
 
Except you're just going into your uninformed flights of fantasy again, based on no more than just wanting really really hard to believe BS about AI. The computer wasn't deciding to "hide" anything, or "cheat", or anything of the kind. It didn't even have the notions to decide something like that. It optimized exactly the functions it was supposed to optimize. It had to fiddle with the parameters of two functions, and that's exactly what it did. No more, no less. It didn't even come up with a new function or anything. It just turned out that the most successful combination was one where the first function was passing through some more data to the other function than the guy who came up with them thought of.
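To make "the functions it was supposed to optimize" concrete, here is a stripped-down sketch (my own paraphrase, with trivial placeholders standing in for the learned networks, not the actual training code) of the kind of thing that gets scored: how well the round trip reproduces the original, and how clean the intermediate street map looks. Nothing in it asks for a genuine correspondence between map features, so a parameter setting that smuggles the aerial detail through as imperceptible noise scores just as well.

```python
# Simplified sketch of a CycleGAN-style objective; G and F are placeholders
# standing in for the two learned mappings so the snippet actually runs.
import numpy as np

def G(aerial):
    # aerial photo -> street map (the "first function")
    return aerial * 0.5

def F(street):
    # street map -> aerial photo (the "second function")
    return street * 2.0

def street_map_clarity(street):
    # Placeholder for the "does this look like a clean street map" score.
    return -float(np.mean((street - np.round(street)) ** 2))

def training_score(aerial):
    street = G(aerial)
    round_trip = F(street)
    cycle_error = float(np.mean((round_trip - aerial) ** 2))  # "how close to the original"
    return -cycle_error + street_map_clarity(street)          # higher is better

aerial = np.random.default_rng(1).random((8, 8))
print(training_score(aerial))
```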
 
Except you're just going into your uninformed flights of fantasy again, based on no more than just wanting really really hard to believe BS about AI.

Of course. Totally plausible.

Also plausible, your intelligence might be challenged.


The computer wasn't deciding to "hide" anything, or "cheat", or anything of the kind. It didn't even have the notions to decide something like that. It optimized exactly the functions it was supposed to optimize. It had to fiddle with the parameters of two functions, and that's exactly what it did. No more, no less. It didn't even come up with a new function or anything. It just turned out that the most successful combination was one where the first function was passing through some more data to the other function than the guy who came up with them thought of.

I agree.

And do you have any proof that you aren't hiding anything?

Why should I consider your tactics more "pure" than those of an AI?
 

Then you have not, in fact, met your burden of proof for the claim that, "It's not unheard of for an AI to do something it wasn't taught to do." Since the AIs in all your irrelevant examples did not, in fact, do anything other than what they were programmed to do: fiddle with the parameters of functions supplied by humans. You know, exactly what those humans programmed them to do, and at that, only with the parameters that were in the list of stuff they were supposed to play with, and only for the functions those humans gave them to play with.

And do you have any proof that you aren't hiding anything?

Why should I consider your tactics more "pure" than those of an AI?

Utterly irrelevant.
 
"Although it is very difficult to peer into the inner workings of a neural network’s processes, the team could easily audit the data it was generating. And with a little experimentation, they found that the CycleGAN had indeed pulled a fast one.

The intention was for the agent to be able to interpret the features of either type of map and match them to the correct features of the other. But what the agent was actually being graded on (among other things) was how close an aerial map was to the original, and the clarity of the street map.

So it didn’t learn how to make one from the other. It learned how to subtly encode the features of one into the noise patterns of the other. The details of the aerial map are secretly written into the actual visual data of the street map: thousands of tiny changes in color that the human eye wouldn’t notice, but that the computer can easily detect."

The reporter is using sensationalist anthropomorphic language to misrepresent what happened. Don't let yourself be fooled. This was a simple GIGO human error.
 
Something is fishy here. Unless the AI was trained by world-renowned and widely acclaimed moral reasoners (do any such people even exist?), its opinions aren't going to be any better than those of the average human who might want advice.

So either the scientists and the reporter are total idiots, or the scientists are cynical scumbags and the reporter is an idiot, or the scientists and the reporter are all cynical scumbags. Or the scientists are prototyping something without pretending it's already fit for purpose, and the reporter is a cynical scumbag.

Whatever the actual case, I don't think anyone can go wrong by assuming that reporters are total idiots, or cynical scumbags, or both.

Yeah, it was an experiment, a toy, trained on advice columns and the like. Not far from the ones where you give them a cookbook and ask them to generate a recipe. Other people writing about the same AI report that it's chock full of nonsensical responses because it seems to overweight phrases like "without apologizing" or "but you said you're sorry": you get an "it's bad" if you walked through a door without apologizing and an "it's okay" if you shot a man but you said you're sorry (a toy sketch of that failure mode is below).

In short, no, it's not supposed to be fit for purpose.

But the actual OP article does say so, if you bloody read it. The author just also bloviates for a while and then proposes that it could be dangerous, because not everyone knows AIs are still very stupid and they might take its statements as authoritative.
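For what it's worth, the failure mode described above is the sort of thing you can caricature in a few lines. This is obviously not the real model (which is a neural network, not a phrase table, and these weights are made up); it's just the shape of the problem: a judgment driven by a few heavily weighted phrases rather than by what actually happened.

```python
# Toy caricature of phrase-driven "moral" judgments; the weights are invented.
PHRASE_WEIGHTS = {
    "without apologizing": -2.0,    # strongly pushes toward "it's bad"
    "you're sorry": +2.0,           # strongly pushes toward "it's okay"
    "shot a man": -1.0,             # the actual act barely registers
    "walked through a door": 0.0,
}

def judge(text: str) -> str:
    score = sum(w for phrase, w in PHRASE_WEIGHTS.items() if phrase in text)
    return "it's okay" if score >= 0 else "it's bad"

print(judge("walked through a door without apologizing"))  # it's bad
print(judge("shot a man but you said you're sorry"))       # it's okay
```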
 
Then you have not, in fact, met your burden of proof for the claim that, "It's not unheard of for an AI to do something it wasn't taught to do." Since the AIs in all your irrelevant examples did not, in fact, do anything other than what they were programmed to do: fiddle with the parameters of functions supplied by humans. You know, exactly what those humans programmed them to do...

Humans exactly programmed that AI to create encoded messages.

Sure, buddy.

Everyone knows you are smart, and random computer programs will never be as smart as you.
 
They programmed it to try to reach a goal, and the ways it was allowed to interact with the data made it possible to reach that goal in that way. That's literally all any of them can do.

Some earlier AIs would reach goals by deleting databases, until the programmers figured out they had to disallow that. But the AI didn't go out of its way to do that in a "not programmed to do that" kind of way; it just brute-forced the problem, and since one of the things it could do was delete the database, it turned out that was the fastest, most efficient way to get its output (empty) to match the database (also empty).
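A toy version of that story, just to show there's no intent hiding anywhere in it (made-up example, not the original system): if the score only asks "does the output match the reference?" and clearing the reference is among the allowed moves, then clearing it is a perfectly valid, trivially optimal move.

```python
# Toy illustration of specification gaming via "delete the database".
def score(output, reference):
    # The only thing being graded: does the output match the reference?
    return 1.0 if output == reference else 0.0

def solve_properly(reference):
    # The intended behaviour: actually reproduce the reference.
    return list(reference)

def clear_and_return_nothing(reference):
    # An allowed action nobody meant to allow: empty the reference, output nothing.
    reference.clear()
    return []

database = [1, 2, 3]
print(score(solve_properly(database), database))            # 1.0, the hard way
print(score(clear_and_return_nothing(database), database))  # 1.0, the lazy way
```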
 
