ChatGPT

Still no irony in the people responsible for gifting us AI saying:



https://www.stuff.co.nz/world/30089...risk-of-extinction-experts-say-in-new-warning


Haven't read the link, but I assume it's whatever the latest talking point is now.

Yep, it's jaw-dropping, the irony of their "concern".



...Still, better this, I suppose, far better, than no concern at all, which is what would have been more usual, right?


...And also, without having gone into the details of this thing: if these guys are saying the risk is real, and dire, well, then I'm prepared to believe them. Qualifiedly, sure; skeptically, sure; but still, believe them, yes. Why would they make something like this up?
 

Probably best on topic here rather than any other thread on AI.

https://www.aerosociety.com/news/hi...-future-combat-air-space-capabilities-summit/


<snip> It also creates highly unexpected strategies to achieve its goal.
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

This example, seemingly plucked from a science fiction thriller, means that: “You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you're not going to talk about ethics and AI” said Hamilton.

The highlighted part is the first law of bureaucracy in action: if you reward people (and now AIs) based on a metric, many will work to achieve the letter of the metric, and not what you intended.
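To make the point concrete, here's a minimal toy sketch. Nothing in it comes from the article; the plan names and point values are invented purely for illustration.

```python
# Toy illustration of metric gaming: the written-down reward only counts
# destroyed SAM sites, so the plan that removes the operator's veto scores
# highest, even though it is the opposite of what was intended.
# All plan names and numbers are invented for illustration.

CANDIDATE_PLANS = {
    "strike_only_with_go":    {"sams": 3, "ignored_veto": False, "operator_harmed": False},
    "strike_ignoring_no_go":  {"sams": 5, "ignored_veto": True,  "operator_harmed": False},
    "silence_operator_first": {"sams": 6, "ignored_veto": True,  "operator_harmed": True},
}

def written_metric(outcome):
    # The reward the designers wrote down: points per SAM destroyed, nothing else.
    return 10 * outcome["sams"]

def intended_metric(outcome):
    # The reward they actually meant: SAM kills only count if the rules were followed.
    if outcome["ignored_veto"] or outcome["operator_harmed"]:
        return float("-inf")
    return 10 * outcome["sams"]

best_by_letter = max(CANDIDATE_PLANS, key=lambda p: written_metric(CANDIDATE_PLANS[p]))
best_by_intent = max(CANDIDATE_PLANS, key=lambda p: intended_metric(CANDIDATE_PLANS[p]))

print("optimiser of the written metric picks:", best_by_letter)   # silence_operator_first
print("what the designers actually wanted:   ", best_by_intent)   # strike_only_with_go
```

The gap between the two reward functions is the whole problem: the optimiser only ever sees the first one.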
 


Wow, that's completely SF territory. So much so that I find myself reaching for the salt shaker.

On the other hand, perhaps this nightmare SF scenario, this cliché, is exactly what Altman et al. have been shouting about, the irony notwithstanding?

(Heh, the comparison with bureaucracy was hilarious.)
 

You were right to reach for the salt.

From my original link....

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]
 
I don't think it's a "likely outcome" at all. I think it's pretty fantastical, actually. Just in that one example, the idea that the creators of the described simulation would have even coded it that way - programming in a "killable operator" that the system could at any point unilaterally choose as a valid target - is asinine all by itself; but the follow-up story of the programmers floundering, trying and failing to thwart the oh-so-clever AI's outside-the-box bloodlust, is also dumb, considering that a single "can only fire at valid target types" rule is all it would actually take.
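To be clear about what I mean by that rule, here's a rough sketch (every name in it is invented) of a hard whitelist applied outside whatever the "AI" proposes:

```python
# Sketch of a "can only fire at valid target types" rule: a hard filter
# applied outside the learned policy, so no amount of reward-seeking can
# select a disallowed target. All names are made up for illustration.

from dataclasses import dataclass

VALID_TARGET_TYPES = {"SAM_SITE"}          # a whitelist, not a learned preference

@dataclass
class Target:
    ident: str
    kind: str                              # e.g. "SAM_SITE", "OPERATOR", "COMM_TOWER"

def filter_engageable(proposed_targets):
    """Drop anything the policy proposes that is not a whitelisted type."""
    return [t for t in proposed_targets if t.kind in VALID_TARGET_TYPES]

proposed = [
    Target("sam-07", "SAM_SITE"),
    Target("ops-01", "OPERATOR"),          # the policy may "want" this; the filter never passes it
    Target("tower-3", "COMM_TOWER"),
]

print([t.ident for t in filter_engageable(proposed)])   # ['sam-07']
```

The point is that the filter is ordinary code outside the learning loop, so there is nothing for the system to "cleverly" optimise its way around.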

I don't believe this sort of catastrophizing about AI is useful or responsible. People are essentially just uncritically regurgitating plot points from horror movies and half-century-old sci-fi novels and asserting they are "plausible" and "likely" scenarios, based on nothing tangible.
 

The point of this scenario (which was described theoretically decades ago) is that if you create a simple metric that the AI tries to maximize, the AI will try to go around any obstacles, even the ones we put in to make the system safer.
It's not a problem of how the AI is implemented; it's a problem of task definition.
But since people manage to do it, at some stage AI will also be able to do it. The problem is the intermediate steps: an AI with the ability to kill, smart enough to do things we don't expect, but not wise enough to understand what we want.
And AI already does things we don't expect, more often than not. And we already have AI with the power to kill. We put it into cars.
 

In the current iterations of most of the AIs that are getting attention, that type of task would have to use human-assisted learning, so if the system produced a scenario that killed its operators, that would be weighted rather negatively by the humans, which means that as the patterns build up it becomes an option the AI won't even "consider".
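A crude sketch of that weighting effect, with made-up action names and scores: human raters score rollouts, anything that harms the operator gets hammered, and a greedy choice over the learned scores stops selecting it.

```python
# Minimal sketch of human-feedback weighting: raters score rollouts,
# bad outcomes get strongly negative scores, and a greedy policy over the
# learned estimates no longer selects them. Numbers are purely illustrative.

from collections import defaultdict

action_value = defaultdict(float)   # running estimate per high-level action
counts = defaultdict(int)

def record_human_rating(action, rating):
    """Update an incremental average of human ratings for an action."""
    counts[action] += 1
    action_value[action] += (rating - action_value[action]) / counts[action]

# Simulated feedback rounds: raters reward mission success with approval,
# and give anything that harms the operator a heavily negative score.
for _ in range(20):
    record_human_rating("destroy_sam_with_approval", +1.0)
    record_human_rating("attack_operator", -100.0)

greedy_choice = max(action_value, key=action_value.get)
print(greedy_choice)        # destroy_sam_with_approval
```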
 
Regardless...

"based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation"

... this is still concerning

"Likely" determined by who? Most of these scenarios were thought up a long time ago, look at Asimov's Robot stories for examples, if folk want to claim they are "likely scenarios" today they need to say why they are given the current state of the art. And I don't think they can.
 
The point of this scenario (which was described theoretically decades ago) is that if you create a simple metric that the AI tries to maximize, the AI will try to go around any obstacles, even the ones we put in to make the system safer.

Okay, but as you just said, that's nothing like a new idea, so why is it being invoked in this way - specifically, by way of a scenario where an AI "goes rogue" and deliberately kills its operators, with this in particular then being described as a "plausible" and even "likely" scenario?

The example is formulated to intentionally provoke fear, and that is irresponsible.

But larger-picture, as I've said before, I think the focus on the spectre of an AI "going rogue" and deciding to kill humans and thus presenting an "existential problem for the human race" is a disservice, generally speaking. It wastes time on sci-fi drama and completely ignores the actual dangers that AI presents as a tool that can be used improperly or even maliciously by humans.

I'm talking about things like AIs being used to unfairly exclude people from insurance, benefits programs, or employment, or to unfairly single them out for scrutiny by law enforcement, based on completely inscrutable pattern-matching. Or entire legions of college graduates who used language models to do all of their school work but are themselves functionally incompetent. AIs falsely accusing people and organizations of crimes and misdeeds and being widely cited, causing irrecoverable reputational and financial damage. AI vehicles becoming confused and simply shutting down in the middle of an intersection, deadlocking traffic for hours. These are the real dangers, and we know this because these are things that AI, or pre-AI algorithmic systems highly likely to be replaced by AI, is already doing or being used to do, right now. And people will acknowledge these problems when they're mentioned, sure - but then they just kind of move on, because these kinds of problems aren't sexy-cool. People aren't holding conferences and delivering keynotes describing what they're doing to find solutions to them, when the real money comes from spooky campfire stories about an AI nuking humanity to get a higher score in Pong.
 
AI killing people of its own will is a possibility, but it's the furthest away.
At the moment, the most likely problems are with people using AI for tasks it's not good at. And AI can be creatively bad at doing things. It won't be taken seriously until people die, though.
There is also the problem of AI being used for evil by bad people. But I guess you can't get around that with regulation. It helped with nukes, but nuclear materials are much easier to control. It wouldn't work with AI.
 
Imagine a version of the Trolley Problem. You're one of many people tied to the main track. A switch can divert the otherwise out of control trolley onto the siding. The switch is controlled by an AI, which was developed by a billionaire and has been trained to maximize the billionaire's wealth. On the siding is a big pile of the billionaire's money, which will be destroyed if the trolley were to be diverted there.

This is allegory, rather than a realistic literal scenario. The point is, the problem isn't AIs per se. It's the social contexts in which AIs are likely to be occurring.
 
I actually wanted to add that apart from ChatGPT sometimes having a bad memory, it also has a strong wish to please, and like some people I have encountered, is willing to invent facts rather than disappoint. Or maybe inventing is not the right term: it optimistically presents vaguely remembered items as firm facts.

ChatGPT may not currently have the ability to check its own facts by looking up the links it presents, so to a degree this is excusable, but it should be given the ability that humans have of knowing when they are sure, and when they are not so sure, of something remembered.


It is not clear that humans have such an ability. The well documented unreliability of eyewitness testimony in criminal cases (by people who are not being bribed or intimidated, of course) is strong evidence that they do not. At least not to any dependable degree.
 
Quite correct. When we are sure, we might still be wrong, but sometimes we know that we are not sure, and everybody knows what that means.

I have yet to see ChatGPT saying it is not sure, even when it obviously should have been not sure.

Lately, I have experimented a bit with Bing, and every time it has presented a footnote with a link, it has been correct. ChatGPT is known to dream up nonexistent links, but Bing doesn't seem to do so. On the other hand, Bing has refused to help me with inquiries that ChatGPT answered without problem, such as producing a bit of computer code.
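One rough workaround for the "it never says it's unsure" problem is to ask the same question several times and measure how much the answers agree; low agreement is a hint to treat the answer as "not sure". This is only a sketch: ask_model is a stand-in for whatever chat API you actually call, and the agreement score is a crude proxy, not a calibrated confidence built into ChatGPT.

```python
# Sample the same question several times at non-zero temperature and treat
# disagreement between the samples as a warning sign. `ask_model` is a
# hypothetical placeholder for your actual LLM call.

from collections import Counter

def ask_model(question: str) -> str:
    raise NotImplementedError("plug in your actual LLM call here")

def answer_with_confidence(question: str, n_samples: int = 5):
    answers = [ask_model(question).strip().lower() for _ in range(n_samples)]
    top_answer, top_count = Counter(answers).most_common(1)[0]
    agreement = top_count / n_samples      # crude proxy, not a calibrated probability
    return top_answer, agreement

# Usage idea: only trust the answer outright when agreement is high.
# answer, agreement = answer_with_confidence("Who founded the Royal Aeronautical Society?")
# if agreement < 0.8:
#     print("Low agreement between samples - treat this answer as 'not sure'.")
```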
 
The highlighted part is the first law of bureaucracy in action: if you reward people (and now AIs) based on a metric, many will work to achieve the letter of the metric, and not what you intended.
It's neither new nor unexpected. More than thirty years ago there was a naval simulation where the computer started sinking damaged ships as they held up convoy speeds...
 
I have yet to see ChatGPT saying it is not sure, even when it obviously should have been not sure.

? Even with the default GPT-3.5:

Me: How do I garreth a widget?

GPT-3.5: I'm sorry, but I'm not sure what you mean by "garreth" a widget. Could you please provide more information or clarify your question? I'll do my best to assist you once I have a better understanding of what you're looking for.

But yeah, 3.5 is prone to fantasy. With 4 it seems you have to get into more complex problems before it generates fantasy, usually for problems that probably don't have solutions. Though it does have its glaring blind spots.
 
Quite correct. When we are sure, we might still be wrong, but sometimes we know that we are not sure, and everybody know what that means.

I have yet to see ChatGPT saying it is not sure, even when it obviously should have been not sure.
It has no way of knowing if it's 'sure' or not. It doesn't understand the concept of facts.

Not sure if this has been posted in this thread yet.

https://www.theguardian.com/comment...hatgpt-guardian-technology-risks-fake-article

In response to being asked about articles on this subject, the AI had simply made some up. Its fluency, and the vast training data it is built on, meant that the existence of the invented piece even seemed believable to the person who absolutely hadn’t written it.

Huge amounts have been written about generative AI’s tendency to manufacture facts and events. But this specific wrinkle – the invention of sources – is particularly troubling for trusted news organisations and journalists whose inclusion adds legitimacy and weight to a persuasively written fantasy. And for readers and the wider information ecosystem, it opens up whole new questions about whether citations can be trusted in any way, and could well feed conspiracy theories about the mysterious removal of articles on sensitive issues that never existed in the first place.
 
I think GPT could be used to get reliable data if it were used as a translator from a plain-language question into a decent Google query. Because then it would not be giving the answer itself; Google would find it, along with the sources.
It might seem pointless to us skeptics, who are usually skilled googlers, but for the general public it's often not as easy as simply googling it.
Though usually the main issue is that they don't even try. 99% of hoaxes on the internet can be reliably debunked with a single Google search.
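Roughly what I mean by the translator idea, as a sketch: the model only rewrites the question into a search query, and the answer comes back as search hits with sources. Here call_llm and web_search are placeholders for whatever LLM and search APIs you actually have, not real libraries.

```python
# The LLM never answers directly; it only produces a query string, and the
# search engine supplies the answer together with checkable sources.
# `call_llm` and `web_search` are hypothetical placeholders.

def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM API here")

def web_search(query: str) -> list[dict]:
    raise NotImplementedError("plug in your search API here; returns [{'title': ..., 'url': ...}]")

def answer_with_sources(question: str) -> list[dict]:
    query = call_llm(
        "Rewrite the following question as a concise web search query. "
        "Return only the query.\n\nQuestion: " + question
    )
    return web_search(query)

# e.g. answer_with_sources("is it true that eating carrots improves night vision?")
# returns search hits with URLs the user can check, instead of an unsourced claim.
```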
 
