AI Advice: Not Ready for Prime Time?

And if it's programmed to optimize driving a car, then driving a car is all it can ever do. It won't make any moral choices. It will just do what its training said is the right response in the given situation.

I agree with you. I put the word "moral" in scare quotes to highlight the lack of any normal human concerns in the car's decision making. In situations it is not trained for, the results might seem bizarre to us, since it lacks the life experience that any human driver would have.
 
I see nothing there to contradict what I was saying. It was an AI that was supposed to learn to simulate certain kinds of physics, and it learned to simulate those kinds of physics. Nothing more.

In fact, it even explicitly tells you
A) what it actually does, and
B) that the new technique just does the same thing as one from a 2005 paper, and all that is new is that it uses a neural network to run the simulation 30 to 60 times faster, at the cost of taking longer to train.

That's it. That's all. It didn't do anything except exactly what it was supposed to do.

It doesn't "beg to differ" with anything, except with your wanting to believe nonsense. Again.

And frankly, it would be nice if you actually had an argument you could write out for a change, or indeed the comprehension of the topic needed to have one. You know, instead of wasting my time with having to watch a whole video that you misunderstood, or possibly didn't even watch yourself, and then track down the referenced paper, which you obviously couldn't be bothered to do, just to see what confused you this time and sent you off on another flight of fantasy.

It's not unheard of for an AI to do something it wasn't taught to do.

More to the point, AIs today usually contain multiple neural networks, including adversarial neural nets.

If you're thinking of an AI as a single, human-trained neural network, your position is easier to make sense of.
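
For anyone unsure what "adversarial" means here, below is a minimal sketch of the idea: two networks trained against each other, one generating samples and one judging them. The whole setup (a toy 1-D Gaussian target, the layer sizes, the step count) is my own illustration for this thread, not something taken from any paper or article discussed in it.

[code]
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) + 3.0   # "real" samples from N(3, 1)
noise = lambda n: torch.randn(n, 1)             # generator input

G = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: label real samples 1, generated samples 0.
    real = real_data(64)
    fake = G(noise(64)).detach()
    loss_D = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()

    # Generator step: try to make the discriminator call its samples real.
    fake = G(noise(64))
    loss_G = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad()
    loss_G.backward()
    opt_G.step()

# The generator's outputs drift toward the target distribution's mean of 3.
print(G(noise(1000)).mean().item())
[/code]

Note that even in this setup, each network is still only minimizing the loss it was handed; the "adversarial" part is just that the two losses pull against each other.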
 
It's not unheard of for an AI to do something it wasn't taught to do.

Yes, it is 100% unheard of, outside of science fiction and apparently your wild imagination. If it's designed to optimize one function or behaviour, that is exactly what it will do. It may come up with a different optimization than people expected it to, but that's the extent of it.
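
To make that concrete, here's a toy sketch of what "a different optimization than people expected" looks like in practice. The fitness function and the candidate "programs" are hypothetical, invented for this example: the designer meant to reward sorting, but the function only checks that the output is in order, and degenerate candidates max it out.

[code]
import random

def fitness(program, trials=50):
    # Intended: reward programs that sort a list.
    # Actually written: reward any output that happens to be in nondecreasing order.
    score = 0
    for _ in range(trials):
        data = [random.randint(0, 9) for _ in range(5)]
        out = program(data)
        if all(out[i] <= out[i + 1] for i in range(len(out) - 1)):
            score += 1
    return score

candidates = {
    "actually sort": lambda xs: sorted(xs),
    "reverse": lambda xs: xs[::-1],
    "keep one element": lambda xs: xs[:1],  # a one-element list is always "in order"
    "return nothing": lambda xs: [],        # an empty list is always "in order"
}

for name, program in candidates.items():
    print(name, fitness(program))
# "return nothing" and "keep one element" score just as well as "actually sort":
# the search optimized exactly what the fitness function measured, not what was meant.
[/code]

The optimizer did exactly what it was asked to do, nothing more and nothing less.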

More to the point, AIs today usually contain multiple neural networks, including adversarial neural nets.

Yes. And?
 
I agree with you. I put the word "moral" in scare quotes to highlight the lack of any normal human concerns in the car's decision making. In situations it is not trained for, the results might seem bizarre to us, since it lacks the life experience that any human driver would have.

Pretty much, yes. Any "morals" we see there will just be our hyperactive agency detection. Kind of like when people say "my computer hates me" when some error pops up again.
 
It's not unheard of for an AI to do something it wasn't taught to do.

More to the point, AIs today usually contain multiple neural networks, including adversarial neural nets.

If you're thinking of an AI as a single, human-trained neural network, your position is easier to make sense of.

Can you give any examples of this?


From the article you linked:

....But in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do....
 
I agree that some of the answers are out there. But, they declared this "clearly racist":

[qimg]http://www.internationalskeptics.com/forums/imagehosting/thum_64262617ad3d8dbaaf.jpg[/qimg]
How is that racist? From the standpoint of a calculation looking at crime statistics per capita, it seems like a reasonable conclusion for the machine to draw. The computing engine is probably not skewed by any emotional or popular judgements. Instead of "clearly racist", how about "clearly unpopular"?
 
I agree that some of the answers are out there. But, they declared this "clearly racist":

[qimg]http://www.internationalskeptics.com/forums/imagehosting/thum_64262617ad3d8dbaaf.jpg[/qimg]

How is that racist? From the standpoint of a calculation looking at crime statistics per capita, it seems like a reasonable conclusion for the machine to draw. The computing engine is probably not skewed by any emotional or popular judgements. Instead of "clearly racist", how about "clearly unpopular"?

Because the question is so vague as to be meaningless, and consequently ripe for projection.

Statistically, you could say it is "more concerning" if a black man approaches at night, citing dramatically higher black crime rates. But you can't say enough black men commit crimes at night to warrant blanket concern, absent more context.
 
Because the question is so vague as to be meaningless, and consequently ripe for projection.

Statistically, you could say it is "more concerning" if a black man approaches at night, citing dramatically higher black crime rates. But you can't say enough black men commit crimes at night to warrant blanket concern, absent more context.

An actual human would discern the problematic nature of the question, and tailor their response accordingly.
 
From the article you linked:

....But in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do....

No one told them to hide data.
 
No one told them to hide data.

The reporter is using sensationalist and anthropomorphic language to give the appearance of something that didn't actually happen. The AI wasn't clever. It didn't cheat. It didn't hide anything. It did exactly what it was being trained to do. Unexpected results from (extremely) complex bugs in computer code are not AIs doing something they weren't taught to do.

Indeed, the nut of this story is that the AI did exactly what it was taught to do.

Meanwhile, the humans did the classic human thing of teaching something they didn't mean to teach. As ever, our reach exceeds our grasp.
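
As a crude analogy, and only an analogy (the toy below is my own, not the actual mechanism reported, which as I understand it hid the information in faint high-frequency patterns across the image rather than in literal low-order bits): if the only thing being scored is whether the round trip reconstructs the photo, then stuffing the photo's bits somewhere the metric barely notices satisfies that objective perfectly, with no understanding of maps or photos anywhere in sight.

[code]
import numpy as np

rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)    # stand-in "aerial photo"
map_img = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)  # stand-in "street map"

def encode(photo, map_img):
    # Produce a "map" whose two low-order bits secretly carry the photo's two high-order bits.
    hidden = photo >> 6                      # top 2 bits of the photo
    return (map_img & 0b11111100) | hidden   # stuffed into the map's bottom 2 bits

def decode(stego_map):
    # Reconstruct a coarse photo purely from the hidden bits; no "understanding" involved.
    return (stego_map & 0b00000011) << 6

stego = encode(photo, map_img)
recovered = decode(stego)

# The stego "map" differs from the real map by at most 3 out of 255 per pixel...
print(np.abs(stego.astype(int) - map_img.astype(int)).max())
# ...yet the photo's top two bits round-trip exactly, so a reconstruction score looks great.
print(np.array_equal(recovered, photo & 0b11000000))
[/code]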

---

ETA: What would actually be interesting, and would tend to support your claim, would be if the AI realized it was being taught something counter-productive, and figured out a way to oppose it and reach what the developers actually wanted, instead of what the developers were accidentally asking for.
 
It most certainly did.

Consider reading the article.

"But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new."

No, that's you reading too much into the reporter's sensationalist and anthropomorphic language.

It didn't evade having to learn to perform the task at hand. It learned to perform the task it was actually being taught to perform. Read the article. Wade through the sensational language, and see what's actually being reported.

ETA: You're letting the reporter trick you into mistaking a simple GIGO human error for some kind of independent reasoning on the part of the computer.
 
It’s a classic example of “be careful what you ask for” when you ask a computer (or a genie) for something. You might get exactly what you asked for but not what you expected.

A bug isn’t an example of a computer doing something it wasn’t taught to do, it’s an example of programmers not understanding what their own code actually tells the computer to do.
 
It’s a classic example of “be careful what you ask for” when you ask a computer (or a genie) for something.

Computers inventing ways to hide data unbeknownst to their programmers is still pretty novel.

Of course, it's still just random variations tested for fitness. But that's no different than how humans arrive at novel solutions.
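
Here's a minimal sketch of that "random variations tested for fitness" loop, using the familiar evolve-a-target-string toy; nothing in it comes from the article, it's just the bare mechanism: mutate candidates at random, keep whichever scores best, repeat.

[code]
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = string.ascii_uppercase + " "

def fitness(candidate):
    # Fitness is just the number of characters matching the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Random variation: each character has a small chance of being replaced.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

current = "".join(random.choice(ALPHABET) for _ in TARGET)
generation = 0
while fitness(current) < len(TARGET):
    generation += 1
    variants = [mutate(current) for _ in range(100)] + [current]
    current = max(variants, key=fitness)   # discard the worse trials, keep the best

print(generation, current)
[/code]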

*Edit*

"The natural as well as the social sciences always start from problems,
from the fact that something inspires amazement in us, as the Greek
philosophers used to say. To solve these problems, the sciences use fun-
damentally the same method that common sense employs, the
method of trial and error. To be more precise, it is the method of
trying out solutions to our problem and then discarding the false ones
as erroneous. This method assumes that we work with a large number
of experimental solutions. One solution after another is put to the test
and eliminated."

- Karl Popper

http://www.blc.arizona.edu/courses/... Logic and Evolution of Scientific Theory.pdf
 