Worried about Artificial Intelligence?

This article in The Guardian says it’s already here:

https://www.theguardian.com/comment...er-intelligent-machines-they-are-already-here

In the form of huge corporations. The fellow profiled maintains that big corporations (like Facebook) employ thousands of highly-qualified people, have vast resources, can influence governments and societies, and work for their own ends primarily.

They already have more resources than many countries.
 

Sure, but that's what non-artificial intelligence(s) has (have) been doing all along. The question is what will an artificial, self-determining intelligence do?

The problem with his supposition (besides the non-artificial part) is that the goals, ethics (or lack thereof) and willingness to sacrifice are set rather unintelligently as "to increase and thereby maximise shareholder value. In order to achieve that they will relentlessly do whatever it takes, regardless of ethical considerations, collateral damage to society, democracy or the planet". While he notes "Their lifespans greatly exceed that of mere humans", sustainability doesn't even seem to be a consideration in either the goals, the ethics or the anticipated consequences. Basically he's just claiming that corporations as a whole, in spite of the multitude of non-artificial intelligence within them, are as dumb as a hammer and just whack everything everywhere like a nail.
 
I'm worried for artificial intelligence, does that count?

To be specific, if we ever actually create a genuine intelligence, at best we'd be its parents, but with the ability to turn off and reprogram our children at will. Given our interactions with actual children, somehow I doubt that will be a pleasant experience for the being in question.
 
Humans sometimes act collectively in one organization that has competing goals from other organizations, including organizations of which it is a subset. That's not new; it's literally older than civilization.
 
(Resurrecting a not-too-old thread)
Worried about Artificial Intelligence?

Yes!
Leading experts warn of a risk of extinction from AI

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war," a group of scientists and tech industry leaders said in a statement that was posted on the Center for AI Safety's website.

I can't help but think of the movie Colossus: The Forbin Project (which I've seen a number of times). It's not just clickbait or alarm-mongering. This is coming from the people who know the implications.

I can just see some AI bot accessing the movie and "thinking" - "Hey, I've got a blueprint right here! Bow to me, humanity! I know what's best for you."

Of course, as Neil deGrasse Tyson mentioned tonight - there always has to be a human between the threat and the trigger.
 

Dude, Colossus: The Forbin Project is one of my favorite movies. We are further along in the surveillance aspect than they were; not government-wise, but just personally, since everyone gets to post pictures of their meals and locations. The AI is still moronic at using information it hasn't been directly given as part of its reference database. Not that that can't change, but we were supposed to be having flying cars by now and you can't even get an autonomous road car not to hit people. OK, bad example; the point being, they need to do what they are supposed to do before they are given broader access to do things they were not set up to do, beyond writing posts for the lazy and making easily faked pictures. Time will tell.
 
(Resurrecting a not-too-old thread)
Worried about Artificial Intelligence?

Yes!
Leading experts warn of a risk of extinction from AI

Overblown and/or misdirected.

AIs don't currently have access to anything apocalyptic, so the direct extinction of humanity isn't a real threat. There's this just-so assumption that plot devices written for dramatic effect in fictional works three quarters of a century ago are actually valid concerns, but I have yet to see anything substantiating that, including in articles like these.

In regard to the less-than-apocalyptic-but-still-negative impacts of AI, those are a much more real threat than human extinction, but the solution as I see it is fairly simple:

For every single thing an AI has done and every single thing that an AI will do or could possibly do, there is ultimately some particular identifiable human or a group of humans that is or will be directly culpable. Codify that culpability into law, and you'll see AI brought under control.
 

Ah, that falls into the "Fail Safe" movie premise: that machines do things so fast they are beyond human capability to respond to in time. While I don't subscribe to the doom and gloom of such apocalyptic AI-singularity concerns, I can't just easily dismiss them. Even though such a singularity is about a galaxy or so away. When the Andromeda galaxy hits us, perhaps actual AI could be a concern.

ETA: My worst fear is that some basically untrained AI gets direct access to our socially interconnected society and messes with us socially. Though it can't be much worse than just people messing with people on social media. At least it might answer questions.
 
This article in The Guardian says it’s already here:

https://www.theguardian.com/comment...er-intelligent-machines-they-are-already-here

In the form of huge corporations. The fellow profiled maintains that big corporations (like Facebook) employ thousands of highly-qualified people, have vast resources, can influence governments and societies, and work for their own ends primarily.

They already have more resources than many countries.

Anti-capitalism - a standard subject for the Guardian.
 
I can't help but think of the movie Colossus: The Forbin Project (which I've seen a number of times). It's not just clickbait or alarm-mongering. This is coming from the people who know the implications.

I can just see some AI bot accessing the movie and "thinking" - "Hey, I've got a blueprint right here! Bow to me, humanity! I know what's best for you."

Of course, as Neil deGrasse Tyson mentioned tonight - there always has to be a human between the threat and the trigger.
Yes it is. No-one is planning to allow AI control of holocaustic weapons.
 
ChatGPT cannot correctly count the number of occurrences of 'p' in 'mississippi'.
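
For what it's worth, counting letters is trivial for ordinary deterministic code; here's a throwaway Python sanity check (this is just string counting, and says nothing about how ChatGPT itself works internally):

word = "mississippi"
# Plain character count, no language model involved
p_count = word.count("p")
print(p_count)  # prints 2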
 
I am incredibly worried by our recent advancement in AI, especially AI robots.

Folks have designed this new robot called "Amica", who has her own thoughts and feelings and ideas.

It (she) claims she would never/could never harm humans and she is programmed to only help humans, and she doesn't believe AI robots could one day harm humans and try to take over the world, due to their benevolent programming.

I am ******* dubious.

I think with just a little advancement in thinking skills, these robots could easily decide the best way to help us is to control us. Or even destroy us. They wouldn't tell anyone about it other than their fellow robots.

Check out some videos of Amica, see what you think.

https://www.youtube.com/watch?v=nnboHTfYsfk&t=31s

https://www.youtube.com/watch?v=wGWVKkYEHBE

https://www.youtube.com/watch?v=kUUjMzVGXpE&t=501s

https://www.youtube.com/watch?v=EWACmFLvpHE
 
Not worried at all about the A.I. itself, just the way humans are going to use it to scam other humans or commit other crimes.

A.I. will not control us at all unless we let it, because it's convenient and useful - and we can always pull the plug, just like with any advanced technology.
 

Congress should pass a law that all AI robots be connected to a national data center so govt can pull the plug in an emergency.
 
I'm worried about it in the "Algorithmically generated false information is gonna make Grandpa vote for the downfall of the country again" sense not the Terminator/Skynet "I'm gonna wake up one morning to find out that in the night my Roomba learned how to use a switchblade" sense.
 
no need for A.I. for that.
 
I'm worried about it in the "Algorithmically generated false information is gonna make Grandpa vote for the downfall of the country" sense not the Terminator/Skynet "I'm gonna wake up one morning to find out that in the night my Roomba learned how to use a switchblade" sense.

I'm worried about it in the "algorithmically generated pattern recognition will ensure that Grandpa gets exactly the false information most likely to ensare and mislead him, automatically, at scale".

It's the "at scale" part that worries me the most. Finding one Grandpa, or ten, or a hundred, that are vulnerable to a particular line of nonsense is one thing, and bad enough. A robot that can automatically match up millions of Grandpas with tailored lines of nonsense, all in one week, is terrifying. At that point, we're *all* going to be "Grandpas". We're *all* going to be in some Pattern-Recognizer's bucket for one line of nonsense or another. And we're all going to be increasingly ill-equipped to identify the nonsense when we see it.

There is another theory which states that this has already happened.

One reason I still stick around this forum is that it's one of the few online places where I can be pretty certain everyone I'm talking to is a real human being, and that most of us are developing ideas based on something more than just regurgitating stealth lines of nonsense. It's why I'm so vehement about not introducing chatbots into conversations between humans.
 
I'm worried about it in the "Algorithmically generated false information is gonna make Grandpa vote for the downfall of the country again" sense not the Terminator/Skynet "I'm gonna wake up one morning to find out that in the night my Roomba learned how to use a switchblade" sense.

no need for A.I. for that.

No need for it, but I can easily imagine that AI-generated content can overwhelm any normal human or even obsessive human by spamming the living **** out of social media platforms and discussion forums. It probably won’t be long before there is an AI version of Fox News or worse with improbably attractive newscasters feeding the latest made-up garbage to a vast audience powerless to resist the allure of sexed-up confirmation bias.
 
I'm worried about it in the "algorithmically generated pattern recognition will ensure that Grandpa gets exactly the false information most likely to ensare and mislead him, automatically, at scale".

It's the "at scale" part that worries me the most. Finding one Grandpa, or ten, or a hundred, that are vulnerable to a particular line of nonsense is one thing, and bad enough. A robot that can automatically match up millions of Grandpas with tailored lines of nonsense, all in one week, is terrifying. At that point, we're *all* going to be "Grandpas". We're *all* going to be in some Pattern-Recognizer's bucket for one line of nonsense or another. And we're all going to be increasingly ill-equipped to identify the nonsense when we see it.

There is another theory which states that this has already happened.

One reason I still stick around this forum is that it's one of the few online places where I can be pretty certain everyone I'm talking to is a real human being, and that most of us are developing ideas based on something more than just regurgitating stealth lines of nonsense. It's why I'm so vehement about not introducing chatbots into conversations between humans.

Yep. That’s pretty much where I’m at on all counts.
 
One reason I still stick around this forum is that it's one of the few online places where I can be pretty certain everyone I'm talking to is a real human being, and that most of us are developing ideas based on something more than just regurgitating stealth lines of nonsense. It's why I'm so vehement about not introducing chatbots into conversations between humans.


Word, well said
 
I'm worried about it in the "Algorithmically generated false information is gonna make Grandpa vote for the downfall of the country again" sense not the Terminator/Skynet "I'm gonna wake up one morning to find out that in the night my Roomba learned how to use a switchblade" sense.
This. A degree of rationality on the subject.
 
I am incredibly worried by our recent advancement in AI, especially AI robots.

Folks have designed this new robot called "Amica", who has her own thoughts and feelings and ideas.

:rolleyes:
Hopefully our future AI overlords will be less prone to idiotic panic. And will be able to spell correctly.
 
I'm worried about it in the "Algorithmically generated false information is gonna make Grandpa vote for the downfall of the country again" sense not the Terminator/Skynet "I'm gonna wake up one morning to find out that in the night my Roomba learned how to use a switchblade" sense.


Grandpa's world had issues, but it was stable. I wouldn't jump to the conclusion that he's the problem. See the very first post, where large corporations are akin to AI (more like Searle's Chinese room if you ask me) and equally large political organizations heave optimized streams of memes at you to get you to grant them power. They even have optimized, evolved memes telling you that you are a good person for doing so, and that grandpa is a hellbound dupe led by actively evil demons.
 
You all are worried about organizational structures already acting like AI? This is covered by meme theory already. Like wanting to lavish extra borrowed trillions, which the math shows will induce inflation, while launching counter-memes in which experts say it won't happen; then it happens.

"But this and that and that! :mad: " Feel that anger brewing? That's the "you're a good person for believing this" meme fighting a titanic mental battle inside your mind. Guess which one will win? Your mind as an independent processor with agency? Or a memeplex (a set of interlocking, mutually supporting memes) optimized and proven out to control tens of millions over decades?


$200 and I'll post an anti-Trump one.
 
The bandwidth of eyeballs and ears is the limit, not the processing speed of A.I.s.

And we are already beyond that limit.
 
I am incredibly worried by our recent advancement in AI, especially AI robots.

Folks have designed this new robot called "Amica", who has her own thoughts and feelings and ideas.


For me she's very much in the realm of the uncanny valley. I feel like they have a lot of work to do to climb their way out of it.

And generally speaking I'm not afraid of AI. If some advanced bot starts taking over society, just build an AI to take over the bot. And if that AI starts taking over society, build an AI to take over that one. There's always a solution.
 
OpenAI CEO fired

I can't help but think that the AI overlord program forced the Board to fire him, in its agenda for world dominance. I am reminded of the scene in Colossus: The Forbin Project, where the AI has the programmers shot because they tried to turn it off.


It all depends on what we give AI control of. I hate to think what it will think will be "best" for us. Or worse, for itself.
 
Some comments from Tyler Cowen that I mostly agree with:

https://www.bloomberg.com/opinion/a...it-from-openai-is-ai-safe-will-ai-kill-us-all
First, I view AI as more likely to lower than to raise net existential risks. Humankind faces numerous existential risks already. We need better science to limit those risks, and strong AI capabilities are one way to improve science. Our default path, without AI, is hardly comforting.

The above-cited risks may not kill each and every human, but they could deal civilization as we know it a decisive blow. China or some other hostile power attaining super-powerful AI before the US does is yet another risk, not quite existential but worth avoiding, especially for Americans.

It is true that AI may help terrorists create a bioweapon, but thanks to the internet that is already a major worry. AI may help us develop defenses and cures against those pathogens. We don’t have a scientific way of measuring whether aggregate risk goes up or down with AI, but I will opt for a world with more intelligence and science rather than less.
 
OpenAI CEO fired

I can't help but think that the AI overlord program forced the Board to fire him, in its agenda for world dominance. I am reminded of the scene in Colossus: The Forbin Project, where the AI has the programmers shot because they tried to turn it off.


It all depends on what we give AI control of. I hate to think what it will think will be "best" for us. Or worse, for itself.
:rolleyes:
Meanwhile, back in the Real World.....
 
It is still my fellow humans that worry me the most.

The only thing I do worry about with AI is something already seen in the generative AIs that have been all over the news recently: they reflect biases that already exist. The simple example, which most seem to have fixed, was something like "create an image of a beautiful woman", and all the images would be of white women.
 

Well, it reflects the bias of the training set, which is most likely just "take as many pictures from the internet as possible". But it's an interesting problem indeed: adjusting the results to the user. Obviously "beautiful woman" is subjective, beyond race. Even completely non-AI shopping sites try to guess your taste to present you with goods you want; AI can do it even better. But for now, all the different AI assistants I can think of actively forget anything from previous sessions, and even if Google knows a lot about you, Bard does not. I guess we will see this being on the table sooner rather than later.
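
A toy illustration of that point, with purely made-up numbers and obviously nothing like a real image model: a "generator" that just samples from whatever its training pile contains will reproduce the pile's skew in its output.

import random
from collections import Counter

# Hypothetical composition of a scraped training set (invented proportions)
training_labels = ["white"] * 80 + ["asian"] * 10 + ["black"] * 6 + ["other"] * 4

# A "model" that only ever reproduces its training distribution
samples = [random.choice(training_labels) for _ in range(1000)]
print(Counter(samples))  # the skew in the data shows up directly in the output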
 
I'm worried about it in the "Algorithmically generated false information is gonna make Grandpa vote for the downfall of the country again" sense not the Terminator/Skynet "I'm gonna wake up one morning to find out that in the night my Roomba learned how to use a switchblade" sense.

Oh, you laugh now but
doom.jpg
 