
Artificial Intelligence

A new paper published in Nature this month, entitled Discovery of a structural class of antibiotics with explainable deep learning, describes how a deep learning model was used to discover a new class of antibiotics.

https://www.nature.com/articles/s41586-023-06887-8

Explainable deep learning: concepts, methods, and new developments
Explainable AI (XAI) is an emerging research field bringing transparency to highly complex and opaque machine learning (ML) models. In recent years, various techniques have been proposed to explain and understand ML models, which have been previously widely considered black boxes (e.g., deep neural networks), and verify their predictions. Surprisingly, the prediction strategies of these models at times turned out to be somehow flawed and not aligned with human intuition, e.g., due to biases or spurious correlations in the training data.
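For anyone curious what "explaining" a black-box model can mean in practice, here is a minimal sketch of one of the simplest XAI techniques, permutation importance: shuffle one feature's values and see how much the model's accuracy drops. Nothing below is from the Nature paper; the "model" is a toy stand-in invented purely for illustration.

```python
import random

def model(x):
    # Toy "black box": only the first feature actually matters.
    return 1 if x[0] > 0.5 else 0

def accuracy(data, labels):
    return sum(model(x) == y for x, y in zip(data, labels)) / len(data)

def permutation_importance(data, labels, feature, trials=20, seed=0):
    """Average drop in accuracy when one feature's column is shuffled."""
    rng = random.Random(seed)
    base = accuracy(data, labels)
    drops = []
    for _ in range(trials):
        column = [x[feature] for x in data]
        rng.shuffle(column)
        # Rebuild the dataset with that one column scrambled.
        shuffled = [x[:feature] + [v] + x[feature + 1:]
                    for x, v in zip(data, column)]
        drops.append(base - accuracy(shuffled, labels))
    return sum(drops) / trials

rng = random.Random(1)
data = [[rng.random(), rng.random()] for _ in range(200)]
labels = [1 if x[0] > 0.5 else 0 for x in data]

print(permutation_importance(data, labels, 0))  # large drop: feature 0 drives predictions
print(permutation_importance(data, labels, 1))  # ~0: feature 1 is irrelevant
```

A technique like this is exactly how spurious correlations get caught: if shuffling a feature that *shouldn't* matter (say, a watermark in training images) tanks accuracy, the model has learned the wrong thing.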
Surprisingly? I would have thought it was expected.
 
I thought THE WHOLE POINT of A.I. is to have something that thinks differently - we have already billions of human-like minds.

Of course, if our brains are Turing Complete, they and a machine will always think alike on a basic level.
 
I thought THE WHOLE POINT of A.I. is to have something that thinks differently - we have already billions of human-like minds.

It could be a benefit, but it's not the whole point. If you had an artificial mind that worked like a human's it could drive a car or run a robot that cleans your room, or works in a factory, etc. Pretty valuable, in spite of not being capable of insights humans can't arrive at or doing jobs humans can't do.

And it would have other benefits, like being copyable and being compatible with hardware upgrades so it could run faster given newer hardware, etc.
 

I'm not sure if this is meant as a criticism of the paper.

They discovered a new class of antibiotic, and that antibiotic was effective against MRSA in mice. ("Mouse models" doesn't mean simulated mice; it means actual mice whose skin was infected with MRSA and then treated with the new antibiotic, as a model of what would happen if humans were infected with MRSA and then treated with the antibiotic.) In general there are plenty of things that work in mice but not humans, but the effectiveness of a new antibiotic seems unlikely to be one of them.

New classes of antibiotics that are effective against bacterial strains that have developed resistance to our current crop of antibiotics are sorely needed. As far as I can see this is extremely good news, even if it's just a one-off. If it signals the beginning of a series of discoveries using the same technique, it's potentially revolutionary.
 
That's very cool, the antibiotic thing! ...And if it's been done once, there's no need to assume it's a one-off. Chances are that people can hone the technique further, so that this becomes part of research going forward.

This thing's moving forward very fast, if it's already started to help discover new drugs! At this rate, we might be living in an unrecognizably different sci-fi future where AI's part of everything, not at some far, far future date but actually well within our lifetime!
 
I'm not sure if this is meant as a criticism of the paper.
Not at all. The paper shows how effective AI can be when used properly - as a tool designed for the job, not some kind of general purpose 'intelligence' that people hope will magically appear if they throw enough data at it.

My comment was on previous researchers being 'surprised' that this lazy attempt to get more out than they put in backfired.

Meanwhile I am seeing more and more articles using AI generated images to illustrate them - totally worthless as they impart no information. It won't be long before there will be no point having images turned on in the web browser. Such progress! Might as well go back to using 1995 tech...
 
Meanwhile I am seeing more and more articles using AI generated images to illustrate them - totally worthless as they impart no information. It won't be long before there will be no point having images turned on in the web browser. Such progress! Might as well go back to using 1995 tech...

How is an AI-generated image different from an illustration or a stock photo?
 
Dwarkesh Patel has a good post today about the question (often raised in this thread) of whether or not scaling alone can lead to AGI. That is, will bigger models with more compute but basically the same architecture continue to show gains in ability, or will those gains level off?

He structures it as a dialogue between a believer and skeptic, both of whose arguments are well thought out.

Anyway, here's the link:
https://www.dwarkeshpatel.com/p/will-scaling-work
 
Dwarkesh Patel has a good post today about the question (often raised in this thread) of whether or not scaling alone can lead to AGI. That is, will bigger models with more compute but basically the same architecture continue to show gains in ability, or will those gains level off?

He structures it as a dialogue between a believer and skeptic, both of whose arguments are well thought out.

Anyway, here's the link:
https://www.dwarkeshpatel.com/p/will-scaling-work

That was interesting. Thanks.

He links to this paper, which reports that AI has been used to solve some open problems in mathematics:

Mathematical discoveries from program search with large language models
 
I thought THE WHOLE POINT of A.I. is to have something that thinks differently - we have already billions of human-like minds.

Of course, if our brains are Turing Complete, they and a machine will always think alike on a basic level.

No, the field of AI has been trying to replicate human "thought" since it first began, going right back to when clockwork was the most sophisticated technology we had. The idea was to make it faster and more reliable than human intelligence.

I think it's only comparatively recently we started to consider making AI which is meant to be unlike human "thought".
 
How is AI generated image different from illustration or stock photo ?

Or, for those without the training, talent, and time to produce commercial-level artwork themselves, how is it different from commissioning a commercial artist to produce an illustration for an article, etc.?

I'd say, to a good level of accuracy, that outside the specialist niche of art-training and art-education books and articles, 99% of all books, articles, and videos are not illustrated with artwork created by their authors.

ETA: I've started to use AI to generate reference material for my own artwork, especially for composition, now that I know my difficulties are not a lack of talent but a quirk of my non-typical neurology. For those creating their own artwork, using references is a standard approach, one that is taught in pretty much every art course. Say you are drawing a figure and need to draw some feet: grabbing a ton of other artists' attempts at drawing feet, grabbing photos of feet, even looking at your own foot is all part of a normal artistic process.
 
I thought THE WHOLE POINT of A.I. is to have something that thinks differently - we have already billions of human-like minds.
I'm pretty sure getting something that thinks differently is a curiosity for future researchers to explore.

Computers already think differently from humans. That's why they're so much better at rote, brute force cognitive tasks. What we really want from AI is something that is good at rote tasks, but also able to apply abstract values and intuitive leaps in a way consistent with our expectations about how humans reach conclusions.

We don't want a lawyer-bot that understands and applies the law differently from how humans would do it. We want a lawyer-bot that practices the same kind of law as the best human lawyers, plus has truly encyclopedic knowledge of case law, statute law, common law, etc.

ChatGPT doesn't think like a human thinks. That's why it hallucinates so much. An AI that thinks differently from humans ends up being functionally the same as a schizophrenic - or a psychopath.
 
Happened probably within the first hours of captchas being used. It's nothing new.
Not only that, but the captchas themselves are used to produce training data, and that's the primary reason for their continued existence. They haven't worked particularly well for stopping bots in a long time, it's mostly about duping everybody into doing free labor these days.
 
Do Androids Know They're Only Dreaming of Electric Sheep?

A riff on the title of one of my favourite novels of all time, but this isn't about the novel; it's about a paper on hallucinations in LLMs. It really is a most apt title for such a paper: https://arxiv.org/abs/2312.17249

The research into how the new AIs do what they do is as fascinating as the LLMs themselves.
 