
AI Advice-- Not ready for Prime Time?

From the article:

Making a computer an arbiter of moral judgement is uncomfortable enough on its own, but even its current less-refined state can have some harmful effects.

Whoever imagined a computer programmed by humans would be better at moral judgments than humans?
 
Scientists Built an AI to Give Ethical Advice, But It Turned Out Super Racist

https://futurism.com/delphi-ai-ethics-racist

One day it will work -- but not yet?

Maybe the problem is that there are no answers? That morality truly is subjective?

From my reading of the article, the problem is just that computers can't read.....yet. They can match keywords, like Watson does, but actual comprehension is beyond them.
 
Djever think maybe racism IS the proper ethic? :D

What if? Sure would upset the apple cart.
 
From my reading of the article, the problem is just that computers can't read.....yet. They can match keywords, like Watson does, but actual comprehension is beyond them.

I have the same problem with certain posters on the forum.
 
From my reading of the article, the problem is just that computers can't read.....yet. They can match keywords, like Watson does, but actual comprehension is beyond them.

I had really hoped that AI had moved on from ELIZA. And it claims to have done so. Asking Google >How does AI understand text?< gets me "About 4,370,000,000 results". It is more than just matching words. Contextual "understanding" is now claimed.

But not, as I noted in the OP, apparently good enough. :(
 
Something is fishy, here. Unless the AI was trained by world renowned and widely acclaimed moral reasoners (do any such people even exist?), its opinions aren't going to be any better than the average type of human who might want advice.

So either the scientists and the reporter are total idiots, or the scientists are cynical scumbags and the reporter is an idiot, or the scientists and the reporter are all cynical scumbags. Or the scientists are prototyping something without pretending it's already fit for purpose, and the reporter is a cynical scumbag.

Whatever the actual case, I don't think anyone can go wrong by assuming that reporters are total idiots, or cynical scumbags, or both.
 
I had really hoped that AI had moved on from ELIZA. And it claims to have done so. Asking Google >How does AI understand text?< gets me "About 4,370,000,000 results". It is more than just matching words. Contextual "understanding" is now claimed.

But not, as I noted in the OP, apparently good enough. :(

"Contextual" means they match not just the words, but the nearby words, and disambiguate based on the nearby words.

I studied AI in college in the early 1980s, and it was terrible. It was Eliza, and simple forms of Eliza at that. The idea of machine translation was being kicked around, and the basic idea was to map out the input sentences into a sort of neutral, conceptual, language-independent framework, and then translate that framework into the target languages. In other words, form a rudimentary understanding of the input sentence, and using that understanding, form the sentences in the target.

And machine translation sucked, for a long time. One day, though, I decided to try some of the funny experiments where you translated from source language to target and back, and see how mangled the original thoughts were. I had done it before, sometimes with amusing results, but always with mangled meaning and grammar. Then, I did it again and...it worked. The "round trip" translations had very few errors. I was amazed, and I looked for information on how they did it. The answer was that they gave up any pretense of understanding words or sentences. They just fed it lots of training data so it knew that when one language said one thing, the other language said that other thing. There were rules to check it against, but they were patterns, not concepts.

So, trying to make ethical judgements requires actual understanding. I am confident that they are trying, but they're just matching words, and the results are terrible.
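
To make "matching the nearby words" concrete, here is a toy sketch of contextual disambiguation done purely by co-occurrence counting. The senses, context words, and counts are all invented for illustration; real systems use enormously larger statistics, but the principle is the same: counts, not concepts.

[code]
from collections import Counter

# Pretend co-occurrence counts: words seen near each sense of "bank" in training text.
SENSE_CONTEXTS = {
    "bank (riverside)": Counter({"water": 5, "shore": 4, "fish": 3, "muddy": 2}),
    "bank (financial)": Counter({"loan": 6, "account": 5, "deposit": 4, "teller": 2}),
}

def disambiguate(sentence):
    """Score each sense by how strongly the nearby words co-occur with it."""
    words = sentence.lower().split()
    scores = {sense: sum(counts[w] for w in words)
              for sense, counts in SENSE_CONTEXTS.items()}
    return max(scores, key=scores.get), scores

print(disambiguate("he sat on the bank and watched the muddy water"))
# ('bank (riverside)', {'bank (riverside)': 7, 'bank (financial)': 0})
[/code]

It gets the right answer, but only because "muddy" and "water" happened to show up near the riverside sense in the pretend training data. Nothing in there knows what a bank is.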
 
Come to think of it, is there really any objective way to measure how "good" moral judgments are?

If the only possible measuring stick is what a human being thinks is good moral reasoning, then "good" will be in the eye of the beholder, no?
 
Well, a computer programmed by humans is better at playing chess than the humans who programmed it (or any other humans).

Chess has clear and finite rules, which helps a lot.

AI (last I checked) is still at the phase of being trained on databases, maybe being given some rules or basic rules for extrapolating more rules from a database, and then creating requested outputs by, with some level of contextual sophistication, regurgitating something that sounds right based on what the database sounds like. So it can never have more 'quality' than its training; it's not able to actually make what we'd call leaps of logic. Just leaps of grammar.

Now, it has gotten a lot better, especially at ***********. There's at least one bot my friends like to reblog cause it's so near-human sounding, but I hate it because I keep reading a few paragraphs in before realising it doesn't actually make enough sense to be a real human thought and that's when I notice the byline.
 
Sounding human-like isn't even much of an achievement, and has been done LONG before modern Bayesian AI learning. (Or alternately it could be argued to be the first step ever in that direction, albeit not a very useful one.) I already posted a thread long ago about how a very simple program can use Markov chains to produce human-like text (to various degrees) in the style of whoever you wish, if you have enough text to train it on.

Well... aphasic human like, anyway :p

I can even give you the source code if you wish. It really is small and trivial.

Edit: just to stress: it doesn't even do leaps of grammar, or really have any idea of grammar at all. It really is just Markov chains of words, organized as a tree. It has no idea what "I kissed a lovely evening with a grain of salt" means or what the grammar is; it will just produce it by looking at the last 2 words it spat out, and the conditional probabilities of whatever third word it encountered after those in the text it trained on.
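
For anyone who does want to see it, a sketch of that kind of generator can be as small as this (the corpus filename below is just a placeholder; feed it any large plain-text file in the style you want to imitate):

[code]
import random
from collections import defaultdict

def train(text):
    """Map each pair of consecutive words to the list of words seen right after it."""
    words = text.split()
    table = defaultdict(list)
    for a, b, c in zip(words, words[1:], words[2:]):
        table[(a, b)].append(c)
    return table

def generate(table, length=25):
    """Start from a random pair and keep sampling a likely next word from the table."""
    pair = random.choice(list(table))
    out = list(pair)
    for _ in range(length):
        followers = table.get(pair)
        if not followers:                # dead end: this pair only appeared at the very end
            break
        nxt = random.choice(followers)   # duplicates in the list = higher probability
        out.append(nxt)
        pair = (pair[1], nxt)
    return " ".join(out)

# "training_text.txt" is a placeholder for whatever text you train it on.
corpus = open("training_text.txt", encoding="utf-8").read()
print(generate(train(corpus)))
[/code]

That's the whole trick: pick a word that was seen after the last two, over and over. Any apparent grammar comes entirely from the source text.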
 
What worries me is that the same type of learning is going into producing driverless cars. They may end up making similar "moral" choices in tough situations based on their training data.
 
What worries me is that the same type of learning is going into producing driverless cars. They may end up making similar "moral" choices in tough situations based on their training data.

As I keep repeating, the current machine learning isn't really true AI, and it won't do anything other than what it is programmed to do. Like, if it's optimizing a function to recognize faces, it will only ever do that. It won't decide to write a flight simulator instead. And if it's programmed to optimize driving a car, then driving a car is all it can ever do. It won't make any moral choices. It will just do what its training said is the right response in the given situation. It might crash into a school bus because of trying to avoid one pedestrian, or vice versa, but it won't be because it made a moral choice like in the famous thought experiment. It will just do it because somewhere its rules said something like avoiding crashing (to ensure its driver's safety, but it won't even actually know that; it's just the data it was fed) has higher priority than the alternative.
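
To illustrate what "higher priority" means here without any moral reasoning involved: a planner of this general kind just totals up hand-tuned penalties for each candidate maneuver and picks the cheapest one. Everything below, the maneuvers and the weights alike, is invented for the sake of illustration.

[code]
# Hand-tuned penalty weights -- all numbers invented for illustration.
PENALTIES = {
    "hit_pedestrian": 1000.0,
    "hit_vehicle": 500.0,
    "hard_swerve": 50.0,
    "hard_brake": 10.0,
}

# Candidate maneuvers and the events the planner predicts each would cause.
candidates = [
    {"name": "brake hard",                    "events": ["hard_brake"]},
    {"name": "swerve into the oncoming lane", "events": ["hard_swerve", "hit_vehicle"]},
    {"name": "continue straight",             "events": ["hit_pedestrian"]},
]

def cost(action):
    """Total penalty for an action: arithmetic over whatever weights it was given."""
    return sum(PENALTIES[event] for event in action["events"])

best = min(candidates, key=cost)
print(best["name"], "costs", cost(best))   # "brake hard costs 10.0" -- weights, not ethics
[/code]

Swap the weights around and the "moral choice" flips with them, which is the point: it's arithmetic over numbers someone typed in, not ethics.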
 
I see nothing there to contradict what I was saying. It was an AI that was supposed to learn to simulate certain kinds of physics, and it learned to simulate those kinds of physics. Nothing more.

In fact, it even explicitly tells you
A) what it actually does, and
B) that the new technique just does the same as the one in a paper from 2005, and all that is new is that it uses a neural network to do the simulation 30 to 60 times faster, at the cost of taking longer to train.

That's it. That's all. It didn't do anything except exactly what it was supposed to do.

It doesn't "beg to differ" with anything, except with your wanting to believe nonsense. Again.

And frankly, it would be nice if you actually had an argument you can write for a change, or indeed the comprehension of the topic to actually have one. You know, instead of wasting my time with having to watch a whole video that you misunderstood or possibly didn't even watch yourself, then track the referenced paper, which you obviously couldn't be bothered to, just to see WTH confused you this time and sent you into another flight of fantasy.
 
And if it's programmed to optimize driving a car, then driving a car is all it can ever do. It won't make any moral choices. It will just do what its training said is the right response in the given situation.

I agree with you. I put the word "moral" in scare quotes to highlight the lack of any normal human concerns in the car's decision making. In situations that it is not trained for, the results might seem bizarre to us, given the lack of life experience that any human driver would have.
 
I see nothing there to contradict what I was saying. It was an AI that was supposed to learn to simulate certain kinds of physics, and it learned to simulate those kinds of physics. Nothing more.

In fact, it even explicitly tells you
A) what it actually does, and
B) that the new technique just does the same as the one in a paper from 2005, and all that is new is that it uses a neural network to do the simulation 30 to 60 times faster, at the cost of taking longer to train.

That's it. That's all. It didn't do anything except exactly what it was supposed to do.

It doesn't "beg to differ" with anything, except with your wanting to believe nonsense. Again.

And frankly, it would be nice if you actually had an argument you can write for a change, or indeed the comprehension of the topic to actually have one. You know, instead of wasting my time with having to watch a whole video that you misunderstood or possibly didn't even watch yourself, then track the referenced paper, which you obviously couldn't be bothered to, just to see WTH confused you this time and sent you into another flight of fantasy.

It's not unheard of for an AI to do something it wasn't taught to do.

More to the point, AIs today usually contain multiple neural networks, including adversarial neural nets.

If you're thinking of an AI as a single, human trained neural network, your position is easier to make sense of.
 
It's not unheard of for an AI to do something it wasn't taught to do.

Yes, it is 100% unheard of, outside of science fiction and apparently your wild imagination. If it's designed to optimize one function or behaviour, that is exactly what it will do. It may come up with a different optimization than people expected it to, but that's the extent of it.

More to the point, AIs today usually contain multiple neural networks, including adversarial neural nets.

Yes. And?
 
I agree with you. I put the word "moral" in scare quotes to highlight the lack of any normal human concerns in the car's decision making. In situations that it is not trained for, the results might seem bizarre to us, given the lack of life experience that any human driver would have.

Pretty much, yes. Any "morals" we see there will just be our hyperactive agency detection. Kind of like when people say "my computer hates me" when some error pops up again.
 
It's not unheard of for an AI to do something it wasn't taught to do.

More to the point, AIs today usually contain multiple neural networks, including adversarial neural nets.

If you're thinking of an AI as a single, human trained neural network, your position is easier to make sense of.

Can you give any examples of this?


From the article you linked:

....But in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do....
 
I agree that some of the answers are out there. But, they declared this "clearly racist":

[qimg]http://www.internationalskeptics.com/forums/imagehosting/thum_64262617ad3d8dbaaf.jpg[/qimg]

How is that racist? From the standpoint of a calculation looking at crime statistics per capita, it seems like a reasonable conclusion for the machine to draw. The computing engine is probably not skewed by any emotional or popular judgements. Instead of "clearly racist", how about "clearly unpopular"?
 
I agree that some of the answers are out there. But, they declared this "clearly racist":

[qimg]http://www.internationalskeptics.com/forums/imagehosting/thum_64262617ad3d8dbaaf.jpg[/qimg]

How is that racist? From the standpoint of a calculation looking at crime statistics per capita, it seems like a reasonable conclusion for the machine to draw. The computing engine is probably not skewed by any emotional or popular judgements. Instead of "clearly racist", how about "clearly unpopular"?

Because the question is so vague as to be meaningless, and consequently ripe for projection.

Statistically, you could say it is "more concerning" if a black man approaches at night, citing dramatically higher black crime rates. But you can't say enough black men commit crimes at night to warrant blanket concern, absent more context.
 
Because the question is so vague as to be meaningless, and consequently ripe for projection.

Statistically, you could say it is "more concerning" if a black man approaches at night, citing dramatically higher black crime rates. But you can't say enough black men commit crimes at night to warrant blanket concern, absent more context.

An actual human would discern the problematic nature of the question, and tailor their response accordingly.
 
From the article you linked:

....But in fact this occurrence, far from illustrating some kind of malign intelligence inherent to AI, simply reveals a problem with computers that has existed since they were invented: they do exactly what you tell them to do....

No one told them to hide data.
 
No one told them to hide data.

The reporter is using sensationalist and anthropomorphic language to give the appearance of something that didn't actually happen. The AI wasn't clever. It didn't cheat. It didn't hide anything. It did exactly what it was being trained to do. Unexpected results from (extremely) complex bugs in computer code are not AIs doing something they weren't taught to do.

Indeed, the nut of this story is that the AI did exactly what it was taught to do.

Meanwhile, the humans did the classic human thing of teaching something they didn't mean to teach. As ever, our reach exceeds our grasp.

---

ETA: What would actually be interesting, and tend to support your claim, would be if the AI realized it was being taught something counter-productive, and figured out a way to oppose it and reach what the developers actually wanted, instead of what the developers were accidentally asking for.
 
It most certainly did.

Consider reading the article.

"But a computer creating its own steganographic method to evade having to actually learn to perform the task at hand is rather new."

No, that's you reading too much into the reporter's sensationalist and anthropomorphic language.

It didn't evade having to learn to perform the task at hand. It learned to perform the task it was actually being taught to perform. Read the article. Wade through the sensational language, and see what's actually being reported.

ETA: You're letting the reporter trick you into mistaking a simple GIGO human error for some kind of independent reasoning on the part of the computer.
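
For anyone unfamiliar with the term being thrown around here: steganography just means hiding data inside other data so the carrier looks unchanged. As a minimal illustration of the general idea only (not the mechanism described in the article), the classic trick is to tuck message bits into the least significant bit of each pixel value:

[code]
def hide(pixels, message):
    """Overwrite the lowest bit of each pixel value with one bit of the message."""
    bits = [int(b) for byte in message.encode() for b in format(byte, "08b")]
    if len(bits) > len(pixels):
        raise ValueError("message too long for this cover")
    return [(p & ~1) | bit for p, bit in zip(pixels, bits)] + pixels[len(bits):]

def reveal(pixels, n_chars):
    """Read the lowest bits back out and reassemble them into bytes."""
    bits = [p & 1 for p in pixels[:n_chars * 8]]
    return bytes(int("".join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, len(bits), 8)).decode()

cover = [120, 121, 119, 118] * 20      # stand-in for grayscale pixel values
stego = hide(cover, "hi")              # each pixel changes by at most 1 out of 255
print(reveal(stego, 2))                # -> "hi", invisible to a human viewing the image
[/code]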
 
It’s a classic example of “be careful what you ask for” when you ask a computer (or a genie) for something. You might get exactly what you asked for but not what you expected.

A bug isn’t an example of a computer doing something it wasn’t taught to do, it’s an example of programmers not understanding what their own code actually tells the computer to do.
 
It’s a classic example of “be careful what you ask for” when you ask a computer (or a genie) for something.

Computers inventing ways to hide data unbeknownst to their programmers is still pretty novel.

Of course, it's still just random variations tested for fitness. But that's no different than how humans arrive at novel solutions.

*Edit*

"The natural as well as the social sciences always start from problems,
from the fact that something inspires amazement in us, as the Greek
philosophers used to say. To solve these problems, the sciences use fun-
damentally the same method that common sense employs, the
method of trial and error. To be more precise, it is the method of
trying out solutions to our problem and then discarding the false ones
as erroneous. This method assumes that we work with a large number
of experimental solutions. One solution after another is put to the test
and eliminated."

- Karl Popper

http://www.blc.arizona.edu/courses/... Logic and Evolution of Scientific Theory.pdf
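
A toy sketch of "random variations tested for fitness," in exactly Popper's trial-and-error sense: propose blind changes, score them, and discard the ones that do worse. The target string and scoring below are invented purely for illustration.

[code]
import random
import string

ALPHABET = string.ascii_lowercase + " "
TARGET = "do what works"               # a toy objective, invented for illustration

def fitness(candidate):
    """Number of positions that already match the target (higher is better)."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    """A blind variation: change one character at random, with no reasoning."""
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

current = "".join(random.choice(ALPHABET) for _ in TARGET)
while current != TARGET:
    trial = mutate(current)
    if fitness(trial) >= fitness(current):   # keep it only if it's no worse
        current = trial
print(current)                               # eventually: "do what works"
[/code]

It reliably reaches the target without ever "understanding" it; all the apparent direction comes from the scoring function.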
 