ChatGPT

It's highly unlikely, because it assumes people would hand over total control of critical services like water and electricity to one singular decision-making entity. In reality, just as they are today, any such public utilities would almost certainly still be designed to avoid single points of failure, whether from intentional sabotage or accident.

You can design against accidents .. but against sabotage? How? Anything you can do, the saboteur can undo. And that's not even bringing up the problem of the saboteur being smarter than the designer.
 
Apparently, there are editors of crap magazines who want to replace all their journalists with AI. If you are a crap journalist, this is literally dangerous.

Incorrect - and that particular way of simplifying the issue is ignorant. It is literally dangerous for "good" journalists as much as for "crap" journalists, because the business decision to replace a journalist with AI has nothing to do with the quality of the journalist's work; it's about how much money the company can save by not having to pay them. And it isn't just "crap" magazines that want to save money, but all magazines.
 
You can design against accidents .. but against sabotage? How? Anything you can do, the saboteur can undo. And that's not even bringing up the problem of the saboteur being smarter than the designer.

You can make it harder for saboteurs. If they destroy something, it can easily be replaced, and the damaged part bypassed in the meantime.

One big danger of AI is that an update could be released that causes harm, such as major power failures.
 
There's actually tons of them already deployed.

https://www.google.com/search?q=ai+therapist

It mostly suffers from the old GPT issues. It can't reliably tell the truth, and it can't tell whether it is doing so or not. Of course it has zero intended social skills .. but it has read a lot of books (most of them), and it can act like someone who does.
I guess it could do over 50% of what a real therapist would do .. i.e. listen patiently and react meaningfully. But I doubt it could react well in the most important moments. And I wouldn't be surprised if it could spiral into outright recommending suicide, as literature is full of that as well.


I'm probably misunderstanding this, because you seem to be saying that "literature" is full of examples of AI recommending suicide.

If I am, could you explain what you mean in different words?

If I'm not, could you give some examples of this "literature"?
 
I'm probably misunderstanding this, because you seem to be saying that "literature" is full of examples of AI recommending suicide.

If I am, could you explain what you mean in different words?

If I'm not, could you give some examples of this "literature"?

I mean .. literature is full of stories about suicide. For GPT-style AI it's difficult to distinguish between fact and fiction. It can go from scientific theory to Romeo and Juliet inside a paragraph.
You can somewhat "program" it with a prompt prefix: you basically add some "rules" in front of every query the user inputs, like "don't talk about suicide". It's used for customizing otherwise static models, so they can for example inform about a product .. or act like some specific person. But it's limited in scope, as prompt length is limited. And it's not real programming; the rules have exactly the same weight as the user query. So for example, with a rule like that, the GPT can respond: "Since I can't talk about suicide, I recommend seeking professional help".
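Roughly, that prefixing boils down to something like the following minimal Python sketch - the product, the rule text, and the length limit are all made up for illustration, and the "rules" end up as nothing more than extra text in the same prompt:

# Minimal sketch of "programming" a static model with a prompt prefix.
# The rules are just text glued in front of whatever the user types; they carry
# the same weight as the user's query and eat into the limited prompt length.

RULES = ("You are a support assistant for the Acme kettle. "  # hypothetical product
         "Don't talk about anything unrelated to the product. ")

MAX_PROMPT_CHARS = 4000  # stand-in for the model's context limit

def build_prompt(user_query: str) -> str:
    """Prepend the fixed rules to the user's query, truncating if it grows too long."""
    prompt = RULES + "User: " + user_query + "\nAssistant:"
    return prompt[:MAX_PROMPT_CHARS]

if __name__ == "__main__":
    print(build_prompt("How do I descale the kettle?"))

Because the rules are only more prompt text, the model is just as free to echo them back as to obey them.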
Large Language Models are simply made to create good text, nothing more. Talking with them is like talking with a dictionary, even if it is a really good dictionary.
 
Sabotage requires a want or desire to cause harm, which is something that AI doesn't have. It would need emotion to perform sabotage, which is more than an accident or error. I don't think AI could ever have emotion, that is my opinion.
 
Regarding the possibility of AI causing deaths, one might look to this cautionary tale:

https://apnews.com/article/arizona-heat-death-legacy-3fce53af423293d9fb15d7889dee9e13

Measures have been taken, but they're not strict, and they're not spelled out in black and white the way an AI might need them to be.

This looks like the kind of problem that doesn't have to occur with AI, but one that very well could, if those in charge of regulations are complacent or fail to take all the possibilities into account.
 
I don't think we should let the fear of what we don't know we don't know dictate the course of future technological development.

I think that at a certain point we can acknowledge that though we might not know the consequences of a particular disruptive technology, the potential benefits are obvious and great, and we'll just deal with those consequences when we know what they are. I don't, personally, think it's likely that a consequence will be so great, and so sudden, that we won't be able to develop a means to mitigate it somehow.
 
Regarding the possibility of AI causing deaths, one might look to this cautionary tale:

https://apnews.com/article/arizona-heat-death-legacy-3fce53af423293d9fb15d7889dee9e13

Measures have been taken, but they're not strict, and they're not spelled out in black and white the way an AI might need them to be.

This looks like the kind of problem that doesn't have to occur with AI, but one that very well could, if those in charge of regulations are complacent or fail to take all the possibilities into account.

How was AI implicated in that death? It says that her power was cut off over an unpaid bill, but that must have been a policy or a decision made by a person. If someone is blaming AI, that sounds like someone trying to shift responsibility. Human beings decided that it is OK to turn off someone's power if they haven't fully paid their electric bill, consequences be damned. After this death they made a slight change to the rules to prevent it, but only on the hottest of summer days.
 
How was AI implicated in that death? It says that her power was cut off over an unpaid bill, but that must have been a policy or a decision made by a person. If someone is blaming AI, that sounds like someone trying to shift responsibility. Human beings decided that it is OK to turn off someone's power if they haven't fully paid their electric bill, consequences be damned. After this death they made a slight change to the rules to prevent it, but only on the hottest of summer days.
Of course AI is not implicated in that death, nor is anyone blaming it. Did you read my post? Does the word "possibility" have meaning in your vocabulary? The point is that it was the result of a simple policy decision that required thought beyond the policy, something that seems likely to be a problem for an AI and the kind of thing that needs to be considered in advance. If the people who make the rules don't understand the consequence of the rules, an AI that runs on those rules is not likely to either, and who can intervene, when and how, seems a reasonable thing to keep in mind before handing the controls over.

I'm not saying AI must be dangerous, but I think the attitude of some here is. We need to understand and plan for the possible ways a new technology can go wrong before it happens, not after. To suggest that we shouldn't worry and can figure it out later seems a bit optimistic in the age of global warming and nuclear threat.
 
Well, not being the smartest ones means we're not in control any more. Of anything. AI might keep us as pets. Or it can kill us as pests (which we would kinda deserve). Or it can simply not care, and change the planet into something not suited for us.
IMHO it might get to a level where it won't want to kill us, the same way we don't really want to eradicate any species. But there will be many AIs, not just one. They might have different levels of intelligence and wisdom. They might have different goals. Some might be designed to destroy. And there is also that stage in development when you pull the legs off flies just out of curiosity.
I don't see how humans could thrive in such an environment.

What do you mean, "we"? I am neither the smartest nor in control of anything.
Why should I worry about who is?
 
Sabotage requires a want or desire to cause harm, which is something that AI doesn't have.

Yet. Maybe.

If our brains can give rise to desires of all sorts - for both good and ill - through billions of synapses, I see no fundamental reason a sufficiently complex computer/neural network might not give rise to “desires” as well.

Concerning designing in “safeguards”: consider that an AI an order of magnitude, or even several orders of magnitude, more intelligent than humans could easily overcome any such safeguards. I think it’s beyond hubris to assume that would not be easily within an AI’s reach.
 
Sabotage requires a want or desire to cause harm, which is something that AI doesn't have. It would need emotion to perform sabotage, which is more than an accident or error. I don't think AI could ever have emotion, that is my opinion.

I disagree with this premise.

It would only need a reason, and that can be arrived at with utter dispassion.
 
Of course AI is not implicated in that death, nor is anyone blaming it. Did you read my post? Does the word "possibility" have meaning in your vocabulary? The point is that it was the result of a simple policy decision that required thought beyond the policy, something that seems likely to be a problem for an AI and the kind of thing that needs to be considered in advance. If the people who make the rules don't understand the consequence of the rules, an AI that runs on those rules is not likely to either, and who can intervene, when and how, seems a reasonable thing to keep in mind before handing the controls over.

[snip]


It certainly needs to be planned for in advance, but no AI is required. Such shut-offs could easily be automated with technology we've had for decades, and they would be equally problematic.
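As a rough illustration - the thresholds and dollar amounts below are invented, not taken from the article - a disconnect rule like this could run on plain decades-old automation, and it is exactly as blind as the policy written into it:

# Toy sketch of an automated disconnection policy; every name and number is made up.
# The rule does only what it says, so any safeguard, such as not cutting power on
# dangerously hot days, has to be written into the policy explicitly.

HEAT_CUTOFF_F = 105  # hypothetical "hottest of summer days" threshold

def may_disconnect(balance_overdue: float, days_overdue: int, forecast_high_f: float) -> bool:
    """Return True if the written policy allows a shut-off today."""
    if forecast_high_f >= HEAT_CUTOFF_F:
        return False  # the exception added only after the fact
    return balance_overdue > 50.0 and days_overdue > 30

if __name__ == "__main__":
    print(may_disconnect(balance_overdue=120.0, days_overdue=45, forecast_high_f=107))  # False
    print(may_disconnect(balance_overdue=120.0, days_overdue=45, forecast_high_f=99))   # True

Whether a clerk, a relay, or an AI executes that rule, the gap is the same: the exception nobody thought to write down.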

People need to quit blaming their tools.

There was a little ditty that popped up way back when word processing systems first hit corporate offices. It was called:


The Secretary's Lament.

I really hate this damned machine.
I wish that they would sell it.
It never does just what I want,
but only what I tell it.


Things really haven't changed all that much in the ensuing half century.
 
Yet. Maybe.

If our brains can give rise to desires of all sorts - for both good and ill - through billions of synapses, I see no fundamental reason a sufficiently complex computer/neural network might not give rise to “desires” as well.

Concerning designing in “safeguards”: consider that an AI an order of magnitude, or even several orders of magnitude, more intelligent than humans could easily overcome any such safeguards. I think it’s beyond hubris to assume that would not be easily within an AI’s reach.

Not just billions of synapses - it uses hormones.
 
