AI Advice-- Not ready for Prime Time?

Yeah, this kind of ‘farther outside the box than we wanted’ stuff still just comes from the AI faithfully trying to reach goals where the programmers didn’t manage to consider all the things the AI would think of or have the ability to affect.

A blog about the popular 2018 paper on the subject: https://www.aiweirdness.com/when-algorithms-surprise-us-18-04-13/amp/

I love the fact that there is a blog called aiweirdness.com.

And of course I looked at it, and two entries in, found some discussion of the same paper linked in the OP, giving other examples of strange output.

By the way, none of the examples in the blog post have anything whatsoever to do with race or racism. The program is totally inept at any judgement of moral values in any situation whatsoever.

Which makes one wonder why the article authors decided that the racism angle needed to be emphasized.
 

ETA: Read back on the thread and see I misread you. Please ignore my little outburst.

ETA2: That said, I think you're interpreting what Mike is saying too narrowly. He's not saying "humans do whatever they're told, just like computers". He's saying "sure, computers follow their programming, but that often leads to complex, even unpredictable behavior. And maybe the complex, unpredictable behavior in humans is arising from a similar process."
 
Then there is this: The World’s Largest Computer Chip

https://www.newyorker.com/tech/annals-of-technology/the-worlds-largest-computer-chip

People in A.I. circles speak of the singularity—a point at which technology will begin improving itself at a rate beyond human control. I asked de Geus if his software had helped design any of the chips that his software now uses to design chips. He said that it had, and showed me a slide deck from a recent keynote he’d given; it ended with M. C. Escher’s illustration of two hands drawing each other, which de Geus had labelled “Silicon” and “Smarts.” When I told Feldman that I couldn’t wait to see him use a Cerebras chip to design a Cerebras chip, he laughed. “That’s like feeding chickens chicken nuggets,” he said. “Ewww.”

Is there something "magical" about the human brain that means it just can't be replicated by technology?
 
Is there something "magical" about the human brain that means it just can't be replicated by technology?

I don't think anyone proposed that there's anything "magical" about it. In fact, in another thread or two I argued that the brain is basically just a massive self-rewiring FPGA.

What I'm saying is just that we're not yet at the point where it's actually possible. More specifically:

- we don't yet really know how. Machine learning is not really a general purpose AI at the moment.
- we don't have a fast enough machine to replicate a brain anyway. There are 86 billion neurons in a human brain, and more than 125 trillion synapses in the cerebral cortex alone, each with its own connection information (as in which other neurons it connects to) and its own synaptic strength. Each neuron is slow-ish on its own, but they all work in parallel, over enormous aggregate bandwidth. We're really nowhere near making a computer as powerful as simulating that in real time would require (rough arithmetic sketched below).
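
For a feel for the scale, here's a back-of-the-envelope in Python. The synapse count is the figure above; the event rate and the ops-per-event are just assumptions I'm plugging in for illustration:

```python
# Rough estimate of the compute needed to simulate a brain in real time.
# The synapse count is from the post above; the other two numbers are assumptions.
synapses = 125e12          # synapses in the cerebral cortex alone
events_per_second = 100    # assumed average rate of synaptic events per synapse
ops_per_event = 10         # assumed arithmetic ops to model one synaptic event

ops_needed = synapses * events_per_second * ops_per_event
print(f"~{ops_needed:.1e} ops/s needed")   # on the order of 1e17 ops/s
```

Even with a cartoonishly simple per-synapse model that lands around 10^17 operations per second, sustained, and a realistic neuron model would need far more than that.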
 
Here's an interesting article:

AI Generates Hypotheses Human Scientists Have Not Thought Of

https://www.scientificamerican.com/...otheses-human-scientists-have-not-thought-of/

That sounds like just a data mining exercise. As in, it just tries to look at some very large data sets and find correlations, such as between gland density and cancer. The only new thing is that they're now applying it to science stuff.
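
To be concrete about what I mean by "data mining exercise", here's a toy sketch in Python. The feature names, data, and threshold are all made up for illustration, not taken from the article:

```python
import itertools, random, statistics

# Toy version of "look at a big data set and report the strong correlations".
# Everything here is synthetic; a real system would use actual measurements.
random.seed(0)
features = {name: [random.gauss(0, 1) for _ in range(500)]
            for name in ["gland_density", "age", "marker_a", "marker_b"]}
# Plant one relationship so the miner has something to "discover".
features["cancer_risk"] = [x + random.gauss(0, 0.5) for x in features["gland_density"]]

def correlation(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Brute-force every pair of features and flag the strong ones.
for a, b in itertools.combinations(features, 2):
    r = correlation(features[a], features[b])
    if abs(r) > 0.5:
        print(f"{a} vs {b}: r = {r:.2f}")
```

Scale that up to millions of records and thousands of features and you have a "hypothesis generator".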

Plus, again, nowhere does it sound like it transcended its programming anyway.
 
...snip..

Is there something "magical" about the human brain that means it just can't be replicated by technology?


...snip.... We're really nowhere near making a computer as powerful as simulating that in real time would require.

It may help sciences such as psychology to be able to create such simulations, but it is not going to be especially useful if we want "AI". The brain does so much that would simply be irrelevant that it would be very inefficient to use our brain as the root of such an AI.
 
You could use Google to look it up; in this context it means the same as in your earlier claim, when you thought a computer was doing something it wasn't told to do, e.g. hide data.

Riddle me this.

If a neural network is exactly programmed to do something, why doesn't it do that when it first runs?

Why does it need to be run millions of times for it to get the right answers?


When I write a program to do something, it runs basically the same way every time, from the first time to the millionth time.
 
Riddle me this.

If a neural network is exactly programmed to do something, why doesn't it do that when it first runs?

Why does it need to be run millions of times for it to get the right answers?

Because that is how it was designed.
When I write a program to do something, it runs basically the same way every time, from the first time to the millionth time.

Not if you are designing a "neural net" type of application.
 
Because that is how it was designed.

Not if you are designing a "neural net" type of application.


Right.

So a traditional program can be exactly written to do something.

But an AI, once it's programmed, needs to be trained. Its training will cause its output to be (hopefully) what we want. How many generations and what population sizes are used, what the training data is, and, I would assume, some element of luck from randomly mutating the weights, all affect what strategies the AI will have available (there's a toy sketch of this at the end of this post).

And in some cases, how it ends up doing that comes as a shock to the programmers.

Is a human being programmed by its DNA to invent and use hammers?

No. But we did anyways.

Is that AI in the article programmed by its software to invent and use steganography?

No. But it did anyways.
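
Here's the toy version I mentioned, as a minimal sketch in Python; the task, the mutation scheme, and all the numbers are invented for illustration. The program is exactly the same code every run, but the weight starts out random, so the first output is wrong, and only after many rounds of mutate-and-keep-the-best does it land on the right answer:

```python
import random

# A one-weight "neural network" trained by random mutation and selection.
random.seed()  # different runs take different training paths

def network(w, x):
    return w * x   # the entire "network": a single weight

def loss(w):
    data = [(1, 2), (2, 4), (3, 6)]   # we want it to learn y = 2x
    return sum((network(w, x) - y) ** 2 for x, y in data)

w = random.uniform(-10, 10)
print("untrained guess for 3:", network(w, 3))   # almost certainly not 6

for generation in range(10_000):
    candidate = w + random.gauss(0, 0.1)   # randomly mutate the weight
    if loss(candidate) < loss(w):          # keep it only if it does better
        w = candidate

print("trained guess for 3:", network(w, 3))     # now very close to 6
```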
 
Why does my Excel spreadsheet output different results every month? It's the same exact program, the same exact formulas. It does the same exact thing every month. But the value that gets stored in the SAVINGS variable just keeps getting bigger and bigger with every run. What the hell is going on?
 
The interesting thing is that the authors figured out a clever way to model arbitrary moral reasoning based on personal identity derived from a sufficiently sized training corpus, not the correctness of that moral reasoning in all circumstances.

I wonder if the people upset that it's sometimes unexpectedly racist bothered to read to the bit of the paper where there's a whole section devoted to unexpectedly racist results, concluding with c'mon people you get what you train for, it's not like you never see this **** out in the wild.
 
Why does my Excel spreadsheet output different results every month? It's the same exact program, the same exact formulas. It does the same exact thing every month. But the value that gets stored in the SAVINGS variable just keeps getting bigger and bigger with every run. What the hell is going on?

Obviously it has learned that you want a bigger figure every month so delivers you such a number!
 
Why does my Excel spreadsheet output different results every month? It's the same exact program, the same exact formulas. It does the same exact thing every month. But the value that gets stored in the SAVINGS variable just keeps getting bigger and bigger with every run. What the hell is going on?

That you think this is a good point shows you're in over your head here.

Disk/network access makes for not-so-pure functions.

https://betterprogramming.pub/what-is-a-pure-function-3b4af9352f6f

Generating random numbers will do that too.

And then... there are genetic algorithms.

Have you ever written a genetic algorithm?
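
If not, here's roughly what one looks like, stripped to the bone; the target string, population size, and mutation scheme are just illustrative choices. It's the exact same source code every run, yet the random mutations mean no two runs take the same path, which is the "not-so-pure function" point above:

```python
import random, string

# A bare-bones genetic algorithm: evolve a random string until it matches a target.
TARGET = "hello world"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(100)]

generation = 0
while max(map(fitness, population)) < len(TARGET):
    generation += 1
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                                       # selection: keep the fittest
    children = [mutate(random.choice(parents)) for _ in range(80)]  # reproduction with mutation
    population = parents + children                                 # keep the parents too (elitism)

print(f"matched {TARGET!r} after {generation} generations")
```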
 
I don't think anyone proposed that there's anything "magical" about it. In fact, in another thread or two I argued that the brain is basically just a massive self-rewiring FPGA.

What I'm saying is just that we're not yet at the point where it's actually possible. More specifically:

- we don't yet really know how. Machine learning is not really a general purpose AI at the moment.
- we don't have a fast enough machine to replicate a brain anyway. There are 86 billion neurons in a human brain, and more than 125 trillion synapses in the cerebral cortex alone, each with its own connection information (as in which other neurons it connects to) and its own synaptic strength. Each neuron is slow-ish on its own, but they all work in parallel, over enormous aggregate bandwidth. We're really nowhere near making a computer as powerful as simulating that in real time would require.

The New Yorker article was primarily about chips made by Cerebras Systems.
Per their webpage at https://cerebras.net/ their latest chip,
The Wafer Scale Engine (WSE-2) is the largest chip ever built and powers the CS-2. The WSE-2 is 56 times larger than the largest GPU, has 123 times more compute cores, and 1000 times more high performance on-chip memory. The only wafer scale processor ever produced, it contains 2.6 trillion transistors, 850,000 AI-optimized cores, and 40 gigabytes of high performance on-wafer memory all aimed at accelerating your AI work.

Are we not getting close to the computer power needed to simulate a brain? Particularly if the speed of an electronic circuit is some orders of magnitude greater than the speed of a neuron?
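
Out of curiosity, here are the quoted numbers side by side in a quick Python sketch; the bytes-per-synapse figure is my own assumption, not something from either article:

```python
# Crude side-by-side of the WSE-2 figures above and the brain figures from earlier in the thread.
neurons = 86e9               # neurons in a human brain (earlier post)
synapses = 125e12            # synapses in the cerebral cortex alone (earlier post)
wse2_cores = 850_000         # AI-optimized cores on the WSE-2 (Cerebras page)
wse2_memory_bytes = 40e9     # 40 GB of on-wafer memory (Cerebras page)

bytes_per_synapse = 4        # assumption: one 32-bit weight per synapse

print(f"neurons per WSE-2 core: {neurons / wse2_cores:,.0f}")
print(f"storage for the cortical synapse weights alone: {synapses * bytes_per_synapse / 1e12:.0f} TB")
print(f"on-wafer memory available: {wse2_memory_bytes / 1e9:.0f} GB")
```

That still leaves a gap of several orders of magnitude on a single wafer, though clusters, off-wafer memory, and the raw speed advantage of silicon could shift the picture; make of the comparison what you will.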
 
