
Artificial Intelligence thinks mushroom is a pretzel

William Parcher

The mushroom that AI thinks is a pretzel

BBC News said:
Butterflies labelled as washing machines, alligators as hummingbirds and dragonflies that become bananas.

These are just some of the examples of tags which artificial intelligence systems have given images.

Now researchers have released a database of 7,500 images that AI systems are struggling to identify correctly.

One expert said it was crucial to solve the issue if these systems were going to be used in the real world...

The researchers from UC Berkeley, and the Universities of Washington and Chicago, said the images they have compiled - in a dataset called ImageNet-A - have the potential to seriously affect the overall performance of image classifiers, which could have knock-on effects on how such systems operate in applications such as facial recognition or self-driving cars...

https://www.bbc.com/news/technology-49084796

Of course AI doesn't "think" that a mushroom is a pretzel because it doesn't think anything and it has no intelligence. Further, AI doesn't understand what it means to be wrong and that there are consequences for making mistakes. Artificial intelligence doesn't understand anything and what is troubling is that it can't care about anything.
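
To put it concretely, the kind of system the article is describing just scores an image against a fixed list of labels and reports whichever label scores highest. A rough sketch of what that looks like (assuming PyTorch/torchvision and a stock pretrained ResNet-50; the image filename is only a placeholder):

Code:
# Rough sketch of how an ImageNet-style classifier produces a label.
# Assumes PyTorch/torchvision are installed; "mushroom.jpg" is a placeholder.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True)  # any pretrained ImageNet model would do
model.eval()

img = Image.open("mushroom.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch)[0], dim=0)

top5 = torch.topk(probs, 5)
for score, idx in zip(top5.values, top5.indices):
    print(f"class {idx.item()}: {score.item():.3f}")
# The output is just a score for each of 1,000 class indices.
# "Pretzel" is simply whichever index happens to score highest.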
 
The mushroom to pretzel is a bit of a stretch. The bullfrog to squirrel is entirely reasonable, and exactly the kind of mistake a human intelligence would make.

Besides, we're still not doing AI. We're still just advancing the precursor technologies that we hope will enable AI. This process of training pattern recognition is an important part of that advance. The BBC reads far too much "AI" into the early results of this ongoing training process.

I'm sure that AI researchers are learning just as much about how to develop true AI by these "misses" as they would from the hits.

The goal of this work is not to get it right every time. The goal is to get good data about what's going on and what happens when the test subject is making its guesses. A wrong guess can give just as much good data about the process as a right guess - perhaps even much more.
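
For what it's worth, "getting good data about the misses" is pretty routine in practice: you run the model over a labelled test set and keep everything it got wrong for inspection. A sketch only (the model, data loader, and class names here are placeholders, not anything from the article):

Code:
# Sketch: keep the wrong guesses for later inspection.
# `model`, `data_loader`, and `class_names` are placeholders.
import torch

def collect_misses(model, data_loader, class_names):
    model.eval()
    misses = []
    with torch.no_grad():
        for images, labels in data_loader:
            preds = model(images).argmax(dim=1)
            for image, true_idx, pred_idx in zip(images, labels, preds):
                if pred_idx != true_idx:
                    misses.append({
                        "image": image,
                        "true": class_names[int(true_idx)],
                        "predicted": class_names[int(pred_idx)],
                    })
    return misses

# Each entry records not just that the model was wrong but what it
# confused with what - the mushroom/pretzel pairs, in other words.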
 
Besides, we're still not doing AI.
Maybe intelligence is necessary for reliable pattern recognition.

The mushroom to pretzel is a bit of a stretch.
What does "stretch" even mean in this context? The computer doesn't know what mushrooms or pretzels are. They mean nothing to the computer and we might find out that computers can't really have intelligence or ability to recognize because nothing means anything to them.
 
Maybe intelligence is necessary for reliable pattern recognition.


What does "stretch" even mean in this context? The computer doesn't know what mushrooms or pretzels are. They mean nothing to the computer and we might find out that computers can't really have intelligence or ability to recognize because nothing means anything to them.

I think a p-zombie AI would be a successful result of AI research. But you're still getting ahead of things. We're still just trying to figure out whether it's even possible to get there on this route. You seem to be condemning the goal based on the outcomes of some of the early research. Which may not even be bad outcomes, depending on the research design and expected results at this stage.
 
Obligatory xkcd.

[Image: xkcd "Tasks" comic]
 
The mushroom to pretzel is a bit of a stretch. The bullfrog to squirrel is entirely reasonable, and exactly the kind of mistake a human intelligence would make.

Besides, we're still not doing AI. We're still just advancing the precursor technologies that we hope will enable AI. This process of training pattern recognition is an important part of that advance. The BBC reads far too much "AI" into the early results of this ongoing training process.

I'm sure that AI researchers are learning just as much about how to develop true AI by these "misses" as they would from the hits.

The goal of this work is not to get it right every time. The goal is to get good data about what's going on and what happens when the test subject is making its guesses. A wrong guess can give just as much good data about the process as a right guess - perhaps even much more.



This. As has been said, the most exciting discoveries in science don’t start with “That’s what I expected”, but with “Hmm, that’s odd...”


 
Isn't this about "simulated intelligence", or SI? Actual intelligence implies consciousness, hence AI would be better referred to as AC, or artificial consciousness, but I guess neither of those other acronyms has the same cachet.
 
Isn't this about "simulated intelligence", or SI? Actual intelligence implies consciousness, hence AI would be better referred to as AC, or artificial consciousness, but I guess neither of those other acronyms has the same cachet.

I don't quibble too much about such terminology. This isn't a technical discussion where the distinction really matters. We all know what we're getting at, in this conversation.
 
The mushroom to pretzel is a bit of a stretch. The bullfrog to squirrel is entirely reasonable, and exactly the kind of mistake a human intelligence would make.

Besides, we're still not doing AI. We're still just advancing the precursor technologies that we hope will enable AI. This process of training pattern recognition is an important part of that advance. The BBC reads far too much "AI" into the early results of this ongoing training process.

I'm sure that AI researchers are learning just as much about how to develop true AI by these "misses" as they would from the hits.

The goal of this work is not to get it right every time. The goal is to get good data about what's going on and what happens when the test subject is making its guesses. A wrong guess can give just as much good data about the process as a right guess - perhaps even much more.
Well, you're right. AI is a goal not yet achieved. Sure, much is learned by the abject failures, but that is the point of such research. Find the failures and understand them. Not an easy task by any stretch.

While it is true that current "AI" is actually really good at some things, better than humans in many cases, the term "AI" is depressingly wheeled out far too often, so that it describes, for example, the system that detects whether your bag of cornflakes is correctly filled on the factory production line. That is not AI. Those are sophisticated sensors.
 
The mushroom that AI thinks is a pretzel



https://www.bbc.com/news/technology-49084796

Of course AI doesn't "think" that a mushroom is a pretzel because it doesn't think anything and it has no intelligence. Further, AI doesn't understand what it means to be wrong and that there are consequences for making mistakes. Artificial intelligence doesn't understand anything and what is troubling is that it can't care about anything.

But it could be programmed to. :D
 
I don't think computers could care about anything if they don't experience emotions. They need to know happiness and pain and fear and suffering and security in order to have anything mean anything. But I don't know how that could ever happen. Besides, they aren't born into any culture, and without that, emotion has no meaningful context.
 
they aren't born into any culture, and without that, emotion has no meaningful context.

I don't know that that's necessarily true. Animals don't have culture, but they do have emotions. Well, mammals do, anyway. Ever pissed off a cat? They definitely experience anger, rage, wrath, fury, the whole rainbow of emotion!
 
I don't know that that's necessarily true. Animals don't have culture, but they do have emotions. Well, mammals do, anyway. Ever pissed off a cat? They definitely experience anger, rage, wrath, fury, the whole rainbow of emotion!
Mammals are raised by their parent(s) so there is a culture of association and familiarity and socialization. Mammals have intelligence that these computers do not, and may not ever have.

Even other sexually-reproducing animals that do not raise their offspring have a form of sociability needed at the minimum for reproduction.

Computers might need true desires and hatreds in order to care about and understand anything. How could we instill a sense of gain or loss in a computer?

One of the factors of children learning about the world is a sense of embarrassment and humility when they get things wrong. Imagine a child picking up a wild mushroom and telling their parent to look because they just found a pretzel.
 
I don't think computers could care about anything if they don't experience emotions. They need to know happiness and pain and fear and suffering and security in order to have anything mean anything. But I don't know how that could ever happen. Besides, they aren't born into any culture, and without that, emotion has no meaningful context.
I think it's waaay too early to say that with any confidence.

Especially since the taboo on human experimentation means we're never going to do any kind of serious exploration of human emotion and cognition as a function of genetics and upbringing.

We're basically trying to reverse engineer the human condition, without having proper access to the workings of the original article.

We may never get there. We may get exactly there. We may get somewhere else entirely. We may end up confronting the AI of our dreams, and saying, "it's desire, but not as we know it."

But I think we're still a couple centuries away from knowing which one of those to expect.
 
I think a lot of people here are missing the point, which is this: Artificial Intelligence doesn't mean a fully conscious sentient computer which is aware of its own existence and can reason and make decisions like a human can. That's Artificial General Intelligence (AGI) - a specific category of AI which, it is true, has not been achieved.

Artificial Intelligence is a term which encompasses a number of techniques by which computers perform tasks that emulate human cognitive function such as learning and problem solving. That does not require consciousness or sentience, and new techniques are making great progress in this area.

However, some of them still misidentify mushrooms as pretzels. That's okay - these systems are getting better all the time.
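
"Learning" in that narrow sense just means fitting parameters to labelled examples; no understanding is involved. A toy sketch (scikit-learn, with made-up features and numbers):

Code:
# Toy sketch of "learning" in the narrow sense: fit parameters to
# labelled examples. Features and numbers are entirely made up.
from sklearn.linear_model import LogisticRegression

# Two made-up shape features per item, on a 0-1 scale.
X = [[0.9, 0.8], [0.8, 0.7], [0.2, 0.9], [0.3, 0.8]]
y = ["mushroom", "mushroom", "pretzel", "pretzel"]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.85, 0.75]]))  # -> ['mushroom'], purely from the numbers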
 
Good thing I don't have a lot of confidence, then.

You seem to have a ton. Where I'm saying "this is what I think", you're saying "this is how it is". What makes you so confident?
When you say it could take hundreds of years, you seem confident that it won't take thousands of years.
 
I think the first problem with AI is that everyone is looking for a quick and dirty way out. Image processing in humans is staggeringly complex, including things like scale detection built right into the retina. And there is evidence of some reconstruction of a 3D model, e.g., in how you can tell whether a painting's eyes are looking right at you. Meanwhile, everyone seems to want some quick and dirty shortcut, like deciding that if the blob is this general shape it's a mushroom, and if it's that general shape, it's a deer.
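
To caricature what I mean by "quick and dirty", something at roughly this level of sophistication (the thresholds and categories are entirely made up):

Code:
# Deliberately crude caricature of shape-only classification.
# Thresholds and categories are made up for illustration.
def classify_blob(height, width):
    aspect = height / width
    if 0.8 <= aspect <= 1.2:
        return "mushroom"   # roughly round blob
    if aspect < 0.5:
        return "deer"       # long, low blob
    return "no idea"

print(classify_blob(30, 32))  # -> mushroom
print(classify_blob(20, 50))  # -> deer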
 
That said, mandatory joke, humans aren't always better either: e.g., in the European Middle Ages a species of goose was identified as a plant, so it was ok for nobles to eat it during fasts :p
 
I think it's waaay too early to say that with any confidence.

No, I think it's true by definition. You can't want without some sort of emotion. Normally a machine wouldn't have those. Intelligent or not, they just do what they're told to do. Why we'd want to simulate emotions in our computers is beyond me. What if they don't feel like doing that Google search for you?
 
It might take thousands of years. I'm not confident it won't.

But I asked you a question. Will you please answer it?
Your question is about my confidence in what I say. You can go back and look at my posts and see that I use qualifiers for my opinions, and in addition I offer some facts.

What causes my skepticism is my past reading on "how the mind works" and how emotion is probably necessary for even simple intelligence. Emotion may be necessary for learning, including learning that a mushroom is never a pretzel and what each of them looks like in a near-infinite number of contexts.

But my confidence level is really meaningless. I'm not going to stop the progress of technology and artificial thinking. Nobody is harmed if I am skeptical of the outcome of the quest for artificial thinking done by a computer in a way that mimics human thinking.

I'm not proposing to stop the work. I'm only expressing my skepticism and offering the opinion that there may be a sci-fi dream here that cannot be achieved. It wouldn't be the first time that humanity walked down a dead end.
 
Why we'd want to simulate emotions in our computers is beyond me. What if they don't feel like doing that Google search for you?
You get rid of that particular computer and get a different one that will reliably do the Google search for you. Maybe you interview it before taking it on board. Look for indicators of reasonable honesty and a motivation to achieve shared goals.
 
That said, mandatory joke, humans aren't always better either: e.g., in the European Middle Ages a species of goose was identified as a plant, so it was ok for nobles to eat it during fasts :p

Closer to home I think you've misidentified "fish" as "plant".
 
Assuming AI ever happens and machines become self-aware and self-programming, we wouldn't have much to fear from them. Shooting them would disarticulate the machinery, etc.

The killer robots in Terminator would be vulnerable to armor-piercing bullets and anti-tank-type weapons.
 
That said, mandatory joke, humans aren't always better either: e.g., in the European Middle Ages a species of goose was identified as a plant, so it was ok for nobles to eat it during fasts :p

Nonsense. That proves humans are better. They know when to misidentify objects, and don't do so randomly.
 
Nonsense. That proves humans are better. They know when to misidentify objects, and don't do so randomly.
One might be concerned about putting an AI in charge of the power grid, if it's capable of redefining fish as fowl for religious reasons. But we already have such intelligences in charge of the power grid.
 
I'm sure that AI researchers are learning just as much about how to develop true AI by these "misses" as they would from the hits.

The goal of this work is not to get it right every time. The goal is to get good data about what's going on and what happens when the test subject is making its guesses. A wrong guess can give just as much good data about the process as a right guess - perhaps even much more.

This. As has been said, the most exciting discoveries in science don’t start with “That’s what I expected”, but with “Hmm, that’s odd...”

Exactly. Things working perfectly don't provide much insight into how they work. Think about all the interesting developments in neurology that were brought about by horrific accidents; Phineas Gage, for example.
 
One might be concerned about putting an AI in charge of the power grid, if it's capable of redefining fish as fowl for religious reasons. But we already have such intelligences in charge of the power grid.

The problem there is not one of intelligence, but of incentive. The incentive structure in place in developed countries seems to work reasonably well.

If we ever do create true artificial intelligence, what incentive structure could we put in place to keep them in line? We would need to make them able to feel pain, and then we would need to inflict pain on them. I'm not very comfortable with that.
 
The problem there is not one of intelligence, but of incentive. The incentive structure in place in developed countries seems to work reasonably well.

If we ever do create true artificial intelligence, what incentive structure could we put in place to keep them in line? We would need to make them able to feel pain, and then we would need to inflict pain on them. I'm not very comfortable with that.

I don't know that pain, or its analogue, would be such a bad idea, but there are also positive incentives, rather than just negative ones.
 