
Artificial Intelligence thinks mushroom is a pretzel

The mushroom that AI thinks is a pretzel

BBC News said:
Butterflies labelled as washing machines, alligators as hummingbirds and dragonflies that become bananas.

These are just some of the examples of tags which artificial intelligence systems have given images.

Now researchers have released a database of 7,500 images that AI systems are struggling to identify correctly.

One expert said it was crucial to solve the issue if these systems were going to be used in the real world...

The researchers from UC Berkeley, and the Universities of Washington and Chicago, said the images they have compiled - in a dataset called ImageNet-A - have the potential to seriously affect the overall performance of image classifiers, which could have knock-on effects on how such systems operate in applications such as facial recognition or self-driving cars...

https://www.bbc.com/news/technology-49084796

Of course AI doesn't "think" that a mushroom is a pretzel because it doesn't think anything and it has no intelligence. Further, AI doesn't understand what it means to be wrong and that there are consequences for making mistakes. Artificial intelligence doesn't understand anything and what is troubling is that it can't care about anything.
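For what it's worth, the mechanics bear this out. Below is a minimal sketch (assuming PyTorch/torchvision and a hypothetical image file; neither comes from the article) of everything a pretrained ImageNet classifier does when it "decides" a photo is a pretzel: it produces 1,000 class scores, and the label is whichever score is largest.

```python
# The model's "opinion" is just an argmax over class scores. A mushroom
# photo whose scores happen to peak at "pretzel" *is* a pretzel as far
# as the model is concerned; nothing in the system knows what either
# word means.
import torch
from torchvision import models, transforms
from PIL import Image

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

img = Image.open("mushroom.jpg").convert("RGB")  # hypothetical file name
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))  # 1,000 raw class scores
    probs = torch.softmax(logits, dim=1)

confidence, class_index = probs.max(dim=1)
print(f"predicted class {class_index.item()} "
      f"with confidence {confidence.item():.2f}")
```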
 
The mushroom to pretzel is a bit of a stretch. The bullfrog to squirrel is entirely reasonable, and exactly the kind of mistake a human intelligence would make.

Besides, we're still not doing AI. We're still just advancing the precursor technologies that we hope will enable AI. This process of training pattern recognition is an important part of that advance. The BBC reads far too much "AI" into the early results of this ongoing training process.

I'm sure that AI researchers are learning just as much about how to develop true AI from these "misses" as they would from the hits.

The goal of this work is not to get it right every time. The goal is to get good data about what's going on and what happens when the test subject is making its guesses. A wrong guess can give just as much good data about the process as a right guess - perhaps even much more.
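A quick sketch of how a miss becomes data in practice: a confusion matrix records exactly which classes get mistaken for which. The class names and predictions below are invented for illustration (and assume scikit-learn); none of this is from the ImageNet-A paper.

```python
# Rows are true classes, columns are predicted classes. Off-diagonal
# entries are specific, repeatable confusions worth investigating;
# the diagonal holds the correct guesses.
from sklearn.metrics import confusion_matrix

labels = ["mushroom", "pretzel", "bullfrog", "squirrel"]
y_true = ["mushroom", "mushroom", "bullfrog", "squirrel", "pretzel"]
y_pred = ["pretzel",  "mushroom", "squirrel", "squirrel", "pretzel"]

print(confusion_matrix(y_true, y_pred, labels=labels))
# The 1 in the (mushroom, pretzel) cell is exactly the kind of
# wrong guess that tells researchers where the model breaks down.
```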
 
Besides, we're still not doing AI.
Maybe intelligence is necessary for reliable pattern recognition.

The mushroom to pretzel is a bit of a stretch.
What does "stretch" even mean in this context? The computer doesn't know what mushrooms or pretzels are. They mean nothing to the computer and we might find out that computers can't really have intelligence or ability to recognize because nothing means anything to them.
 
Maybe intelligence is necessary for reliable pattern recognition.


What does "stretch" even mean in this context? The computer doesn't know what mushrooms or pretzels are. They mean nothing to the computer and we might find out that computers can't really have intelligence or ability to recognize because nothing means anything to them.

I think a p-zombie AI would be a successful result of AI research. But you're still getting ahead of things. We're still just trying to figure out whether it's even possible to get there on this route. You seem to be condemning the goal based on the outcomes of some of the early research, which may not even be bad outcomes, depending on the research design and expected results at this stage.
 
Obligatory xkcd.

[xkcd: "Tasks"]
 
A wrong guess can give just as much good data about the process as a right guess - perhaps even much more.



This. As has been said, the most exciting discoveries in science don’t start with “That’s what I expected”, but with “Hmm, that’s odd...”


 
Isn't this about "simulated itelligence", or SI? Actual intelligence implies consciousness, hence AI would better be referred to AC or artificial consciousness, but I guess neither of those other acronym have the same cachet.
 
Isn't this about "simulated itelligence", or SI? Actual intelligence implies consciousness, hence AI would better be referred to AC or artificial consciousness, but I guess neither of those other acronym have the same cachet.

I don't quibble too much about such terminology. This isn't a technical discussion where the distinction really matters. We all know what we're getting at, in this conversation.
 
Besides, we're still not doing AI. We're still just advancing the precursor technologies that we hope will enable AI...

Well, you're right. AI is a goal not yet achieved. Sure, much is learned by the abject failures, but that is the point of such research: find the failures and understand them. Not an easy task, by any stretch.

While it is true that current "AI" is actually really good at some things, better than humans in many cases, the term "AI" is depressingly wheeled out far too often, so that it ends up describing, for example, the system that detects whether your bag of cornflakes is correctly filled on the factory production line. That is not AI. That is sophisticated sensors.
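To put a point on the cornflakes example, here is a hedged sketch (all numbers invented) of what such a production-line check usually amounts to: a fixed tolerance test on a sensor reading, with no learned model anywhere.

```python
def bag_correctly_filled(weight_grams: float,
                         target: float = 500.0,
                         tolerance: float = 5.0) -> bool:
    """Fixed rule: accept the bag if the measured weight is in tolerance."""
    return abs(weight_grams - target) <= tolerance

# A calibrated rule applied to a sensor reading -- no training data,
# no model, nothing anyone should call intelligence.
print(bag_correctly_filled(497.2))  # True
```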
 
Artificial intelligence doesn't understand anything and what is troubling is that it can't care about anything.

But it could be programmed to. :D
 
I don't think computers could care about anything if they don't experience emotions. They need to know happiness and pain and fear and suffering and security in order to have anything mean anything. But I don't know how that could ever happen, and besides that, they aren't born into any culture, and without that, emotion has no meaningful context.
 
they aren't born into any culture, and without that, emotion has no meaningful context.

I don't know that that's necessarily true. Animals don't have culture, but they do have emotions. Well, mammals do, anyway. Ever pissed off a cat? They definitely experience anger, rage, wrath, fury, the whole rainbow of emotion!
 
