• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Artificial Intelligence thinks mushroom is a pretzel

I have a very strong rule against putting my dick in crazy.

I don't care how good the toast is.

That reminds me of a joke:

Bill has worked in a pickle factory for several years. One day he confesses to his wife that he has a terrible urge to stick his penis into the pickle slicer. His wife suggests that he see a therapist to talk about it, but Bill vows to overcome this rash desire on his own.

A few weeks later, Bill returns home absolutely ashen. His wife asks, "What's wrong, Bill?"

"Do you remember how I told you about my tremendous urge to put my penis into the pickle slicer?"

His wife gasps, "My God, Bill, what happened?"

"I got fired."

"No, Bill -- I mean, what happened with the pickle slicer?"

"Oh, um, she got fired, too."
 
If I have to spend time getting to know my toaster and caring about it, that's bad product design.

What I want from a toaster is something that spends time getting to know and care about me, without me having to pay any attention to it at all. I'm a human being. I'm going to put my energy into human relationships. I absolutely require that my robot servants demand as little attention from me as possible.

Well, think of it this way: without some input from you, it's never going to figure out exactly how toasted you want it. And at that point, it's easier and quicker to just set the setting yourself, rather than expecting the AI to figure out that this is too burnt, the other one is not toasted enough, etc.
 
Following the news on Youtube the last couple of days, I've been wondering when they'll be able to make automatically transcribed subtitles that read "M-u-e-l-l-e-r Report" instead of "M-o-l-a-r Report".
 
Well, think of it this way: without some input from you, it's never going to figure out exactly how toasted you want it. And at that point, it's easier and quicker to just set the setting yourself, rather than expecting the AI to figure out that this is too burnt, the other one is not toasted enough, etc.

That seems like an empirical question. I can see that it could be true of toast, but it certainly depends on how simple the input is. It's not easier to make a cake myself than it is to tell a chef how I like my cake and let him make it.

If your interaction with your toaster is that it asks you "How do you like your toast?" and you answer "lightly toasted", by speaking, that seems pretty easy, just as easy as adjusting a knob to "lightly toasted". It can then refine its toasting based on subconscious reactions, facial expressions, etc., just as a human could. It could then try subtle trial and error (so as to avoid too large a change in the wrong direction), with feedback from those cues, to improve its toasting. It might add to this a model based not just on your behaviour but on observations of other people made by other toasters.

Of course, this is assuming some future much more advanced AI than is currently available, but I'm pretty sure that's what we're talking about.
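To make that loop concrete, here's a minimal sketch in Python; read_feedback() is a hypothetical stand-in for whatever cue detection (facial expressions, an explicit rating) such a toaster might use, and all the numbers are invented:

```python
# Hypothetical sketch of the subtle trial-and-error loop described above.
# read_feedback() stands in for cue detection; it returns a signed error:
# positive = too burnt, negative = not toasted enough.

def read_feedback(setting):
    # Placeholder: pretend the user's ideal setting is 4.2.
    return setting - 4.2

def refine_toast(setting=5.0, step=0.2, rounds=10):
    """Nudge the setting in small steps based on feedback cues,
    keeping each change subtle to avoid overshooting badly."""
    for _ in range(rounds):
        error = read_feedback(setting)
        if abs(error) < 0.1:          # close enough -- stop fiddling
            break
        setting -= step if error > 0 else -step
    return setting

print(refine_toast())  # settles near the preferred setting
```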
 
If your interaction with your toaster is that it asks you "How do you like your toast?" and you answer "lightly toasted", by speaking, that seems pretty easy, just as easy as adjusting a knob to "lightly toasted". [...]

You know you've described something that already exists and is crap, right?

Because you only need to look at Google's YouTube recommendations, to see how exactly what you describe goes horribly wrong. Because at some point Google quite conspicuously stopped giving a flying f-bomb about actually doing anything for the users, and just started using several of its services as testing grounds for its algorithms. Including stuff like giving everyone the same set of images of streetlights and street-side stairs in its captchas for a few weeks at one point, presumably because it was training its self-driving car AI, and who cares if it defeats the whole purpose of why a user might want to use a captcha.

Well, based on observing my behaviour and what it thought were clues from it, over time it produced pretty much only the worst recommendations possible. E.g.,

- it kept recommending Turkish soap operas for years, until I switched my country from Germany to the UK. Presumably because, yeah, there are a lot more Turkish immigrants in Germany.

- it keeps recommending news -- granted, now about Brexit -- no matter how often I click the X in the corner to remove that. Presumably because, yeah, it looks at what other people are watching.

- it kept recommending wrestling videos for weeks, because ONCE I clicked on one video of Jeff Dunham doing a comedy show for the troops in an event that also featured a wrestling match. In fact, it's become entertainment in its own right to watch the recommendations go haywire if someone sends me a link to YouTube.

- it can't seem to distinguish between playlists that are supposed to be watched in a sequence, like a TV series or a game Let's Play, and a random music list. Or at least I assume that's what happens when I watch episodes 0 to 9 of a Let's Play in order, and it then recommends that I watch one of episodes 19, 21, 13, 17, 8 again, or 12. Presumably because those got the most thumbs up. But conspicuously missing is the only one that actually makes sense to watch next, which is episode 10 (see the sketch after this list).

- it keeps recommending only the games I'm not interested in, but presumably the ones that kids these days play the most. Because, hey, they're all tagged as games. If you've watched strategy games before, surely you're interested in some twit bragging about using aim-bots to "pwn noobs" in games like Fortnite.

Etc.
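To make the playlist item above concrete, here's a toy sketch (all the data is invented) contrasting popularity-ranked recommendation with simply suggesting the next unwatched episode:

```python
# Toy contrast: ranking a playlist by thumbs-up vs. recommending the
# next unwatched episode in sequence. All numbers are invented.

watched = set(range(0, 10))         # episodes 0..9, watched in order
playlist = list(range(0, 30))       # the full Let's Play
thumbs_up = {19: 900, 21: 850, 13: 800, 17: 750, 8: 700, 12: 650}

def by_popularity(playlist, thumbs_up):
    return max(playlist, key=lambda ep: thumbs_up.get(ep, 0))

def next_in_sequence(playlist, watched):
    for ep in sorted(playlist):
        if ep not in watched:
            return ep

print(by_popularity(playlist, thumbs_up))    # 19 -- the complaint above
print(next_in_sequence(playlist, watched))   # 10 -- what actually makes sense
```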

Now all those could be solved MUCH more easily by just letting me do the same thing as in Steam, namely let me set some options and tags/people I'm interested in, and ones which I'm not. Especially the last one. Other than wanting to keep me as a lab rat for perfecting its learning algorithms, trying to guess what I'm interested in instead of just letting me block certain twits, there is literally no advantage in doing the former instead of the latter.

And in fact, not only that, but its guessing game by now screws up even the search, where I can theoretically set some options. For example, telling it to sort results by upload date produces hilarious results that jump back and forth by months, while not showing more recent videos at all, ones which I know exist.


And basically that's the problem with playing that kind of guessing game.
 
You know you've described something that already exists and is crap, right?

I am quite aware that it can't be done very well at present. That doesn't demonstrate that it's impossible.

ETA: Youtube does a pretty good job of recommending videos that I'd like to watch, often things I wouldn't have known to search for. It's very far from perfect, but I do get exposed to content that I wouldn't have been aware of if it didn't do some sort of recommending based on prior videos that I've watched. And this sort of technology is, what, a decade old?
 
And I'm saying that you could still get recommendations of what might interest you if it let you set the parameters instead. In fact, you might get more of them, since there's a finite number of them on a page, and you'd get fewer of the ones you could have told it from the start not to include.

E.g., exactly what would I have lost if I could tell YouTube something like "only show me stuff in English and German"? Interesting and exciting as some Turkish soap opera might be -- or nowadays it's Russian people playing Fortnite -- the fact is that I can't understand a word of the language. If I could tell it to just stop showing me those, it would presumably fill those slots on the page with stuff that better fits my interests.
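As a toy sketch of what such an explicit filter could look like (the video data and field names are invented for illustration):

```python
# Invented example: filter recommendation candidates by a user-set
# language whitelist instead of guessing from watch history.

allowed_languages = {"en", "de"}    # "only show me stuff in English and German"

candidates = [
    {"title": "Strategy game retrospective", "lang": "en"},
    {"title": "Turkish soap opera, ep. 214", "lang": "tr"},
    {"title": "Let's Play, Folge 10", "lang": "de"},
    {"title": "Fortnite stream", "lang": "ru"},
]

recommendations = [v for v in candidates if v["lang"] in allowed_languages]
for v in recommendations:
    print(v["title"])   # freed-up slots go to videos the user can understand
```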

NB, I'm not saying it's IMPOSSIBLE to eventually figure it out. I'm saying that it doesn't bring any extra convenience anyway. Even in the ideal scenario, it would take many months to figure out exactly what my preferences are, but at that point it could have just let me tell it and saved me some time and Google some processing power.
 
General comment: I see here way, way too much anthropomorphisation. Human-created AI (an actual one) will be like birds and airplanes. Both fly, but they do it in very different ways. I think AI will be sufficiently different from the human mind that I doubt concepts like "rights" for AI would even be applicable to them. I still expect them to gain some rights, though, due to the moral hazards associated with declaring this or that a p-zombie, or similar idiocies like the p-zombie concept itself.

Basically, IMO the most likely outcome is a very inhuman AI. That's a good thing. It will be able to examine problems in ways that the human mind is incapable of. If we wanted a human-like mind, well, homo sapiens is pretty good at making more of itself.

Maybe in uses that require contact with the general population, there will be an interface (a fancier version of a pattern matcher, with memory and a certain degree of self-modification) that does a very good job of pretending to be human. Turing's test is no more.
 
General comment: I see here way, way too much anthropomorphisation. Human-created AI (an actual one) will be like birds and airplanes. Both fly, but they do it in very different ways. I think AI will be sufficiently different from the human mind that I doubt concepts like "rights" for AI would even be applicable to them.
But airplanes and birds generate thrust in fundamentally different ways; neural networks and brains function in the same way.
If you model an AI on a human brain, it will do and feel what a human brain does.


At the moment NNs are the size of bug brains and are mostly modeled on parts of the human visual system. They are designed and trained for specific tasks; such networks will never be intelligent (except in a very limited sense).
 
But airplanes and birds generate thrust in fundamentally different ways; neural networks and brains function in the same way.

Haha, no. No. No. That's my entire point. The human brain and NNs work in fundamentally different ways.

One is a nice, clean graph of nodes and connections with weights. That's it.

The other has something that can be called a "node" only in the most general sense: living cells called neurons that are connected to each other. They are fed, they are awash in hormones and other stimulants; they can fail, they can degrade, they can change their state, break connections and create new ones. No neuron is exactly the same.

You would basically have to simulate the entire human brain down to the atomic level, complete with blood circulation, pressure, temperature, trillions of chemical reactions occurring in every cell at once, etc.

At the moment NNs are the size of bug brains and mostly modeled on parts of the human visual system.

They don't model "parts of the human visual system". They work very differently from the human visual system. They are trained on images without using any knowledge about the connections within the human (or any other, for that matter) visual cortex.

Evidence is very simple - you can fool these algorithms by modifying images in ways that humans either cannot notice (specially constructed noise applied to the image) or where it is blindingly obvious to a human what is going on (like a prepared fake sticker). This is called an adversarial attack, and it works only because neural networks basically crunch numbers. I mean literal numbers, since any picture to them is just a big bunch of numbers (the values from a raw RGB image).
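For the curious, here's a minimal sketch of the "noise" variant on a toy linear classifier; real attacks (e.g. the fast gradient sign method) do the analogous thing with a network's gradients. Everything here is invented for illustration:

```python
import numpy as np

# Toy adversarial example: for a linear "classifier", the gradient of
# the score with respect to the input is just the weight vector w, so
# a tiny per-pixel step against it flips the decision.

rng = np.random.default_rng(0)
w = rng.normal(size=100)            # weights of a toy linear classifier
x = rng.normal(size=100)            # an "image": just a bunch of numbers

def score(img):
    return float(w @ img)           # sign of the score = predicted class

s = score(x)
eps = (abs(s) + 1.0) / np.abs(w).sum()   # just enough to cross the boundary
x_adv = x - eps * np.sign(w) * np.sign(s)

print(score(x), score(x_adv))       # opposite signs: the "label" flips
print(eps)                          # the per-pixel change stays tiny
```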
 
There are no actual AIs yet, anyway.


But according to Drudge Report there is an A.I. army in China, A.I. Sex Bots are walking around Manhattan, and there is a sophisticated algorithm that tells me when my door is ajar.
 
You know you've described something that already exists and is crap, right? [...] And basically that's the problem with playing that kind of guessing game.
You're describing a very different problem. This isn't a problem of bad AI.

This is a problem of you thinking you're the customer, but really you're the product.
 
But according to Drudge Report there is an A.I. army in China, A.I. Sex Bots are walking around Manhattan, and there is a sophisticated algorithm that tells me when my door is ajar.

What's the difference between an A.I. that can't tell the difference between a mushroom and a pretzel and a sophisticated algorithm that can't tell the difference between a door and a jar!! Where does it end!!
 
Haha, no. No. No. That's my entire point. The human brain and NNs work in fundamentally different ways.

One is a nice, clean graph of nodes and connections with weights. That's it.

The other has something that can be called a "node" only in the most general sense: living cells called neurons that are connected to each other.
An NN is made up of artificial neurons that function like idealized biological neurons. The 'nodes' have one or more weighted inputs, representing the excitatory or inhibitory synaptic inputs of a neuron. Each produces an output, representing a neuron's action potential, which is transmitted to the next 'nodes'. They can be connected any way you want.

We are only modelling small NNs, so they are cleaner, idealized versions of what is happening in small portions of real brains.
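For concreteness, a minimal sketch of one such 'node' as described above; the sigmoid activation is one common choice, not something the posts specify:

```python
import math

# One idealized artificial neuron: weighted inputs (excitatory =
# positive weight, inhibitory = negative), summed and squashed by an
# activation standing in for the firing decision.

def neuron(inputs, weights, bias):
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid: output in (0, 1)

# Two excitatory inputs and one inhibitory input:
print(neuron([1.0, 0.5, 1.0], [0.8, 0.6, -1.2], bias=0.1))
```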
They are fed, they are awash in hormones and other stimulants; they can fail, they can degrade, they can change their state, break connections and create new ones. No neuron is exactly the same.
Which gives artificial neurons the edge.
Your brain has many semi-independent NNs, all with their own unique wants and needs. Hormones etc. shift the relative balance of power between them, handing more control or influence to those NNs that did the best job of inducing the appropriate behaviour for the situation over evolutionary time.


You would basically have to simulate the entire human brain down to the atomic level, complete with blood circulation, pressure, temperature, trillions of chemical reactions occurring in every cell at once, etc.
No, you don't.

They don't model "parts of the human visual system". They work very differently from the human visual system. They are trained on images without using any knowledge about the connections within the human (or any other, for that matter) visual cortex.
You must have heard of CNNs (Convolutional Neural Networks); they are modeled on the way neurons are arranged and connected in the animal visual cortex.
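A toy illustration of the idea CNNs borrow from the visual cortex: a small filter slid across an image, responding to local structure such as edges (the image and kernel here are invented):

```python
import numpy as np

# A hand-rolled 2D convolution with a classic vertical-edge kernel,
# loosely analogous to a simple cell's local receptive field.

image = np.zeros((6, 6))
image[:, 3:] = 1.0                      # a vertical edge down the middle

kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

def convolve2d(img, k):
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

print(convolve2d(image, kernel))        # strong responses along the edge
```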

Evidence is very simple - you can fool these algorithms by modifying images in ways that humans either cannot notice (specially constructed noise applied to the image) or where it is blindingly obvious to a human what is going on (like a prepared fake sticker).
These are single-function NNs with only a few neurons; of course they can be fooled. Even humans, with their HUGE brains, are fooled by stuff all the time.
 
At Bletchley Park, codebreakers built machines to copy the function of the German Enigma machines. These copies could be set so that the coded input they got would be decoded exactly the way the Enigma did it.
These two machines were nothing alike, mechanically.
But they were identical, functionally.

An NN model of the brain that can generate the same output as a brain, given the same input, is functionally the same as a brain.
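A toy illustration of that functional-equivalence point (ROT13 standing in for the cipher; nothing here is specific to Enigma): two implementations that are nothing alike internally, yet indistinguishable by their outputs.

```python
import codecs

# Two mechanically different implementations of the same function.

def rot13_arithmetic(text):
    """Shift letters by 13 using character arithmetic."""
    out = []
    for c in text:
        if c.isalpha():
            base = ord('A') if c.isupper() else ord('a')
            out.append(chr((ord(c) - base + 13) % 26 + base))
        else:
            out.append(c)
    return "".join(out)

def rot13_library(text):
    """Same function via Python's built-in codec lookup."""
    return codecs.encode(text, "rot_13")

msg = "Attack at dawn"
assert rot13_arithmetic(msg) == rot13_library(msg)  # functionally identical
print(rot13_arithmetic(msg))
```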
 
