ChatGPT

To kill humans, wouldn't AI have to be in the form of a robot? I mean murder. ChatGPT can't do that until it has something like a body. How would a computer program manage to kill me? It can't.
When you let AI run electricity or water supplies, it can kill you. I am not saying it will, but it can.
 
So then don't let it run those things.

Yeah, that's the kind of precautions some people call for.

Another issue which just came to my mind (so it might be nonsense) .. digital security. It takes effort to design your computer systems to be secure against unauthorized manipulation, i.e. hackers. And hacking is one area where AI could be quite good, even today. It can keep up with all updates to all software used anywhere, and also with all known exploits. It might even be able to apply known intrusion approaches and develop new exploits. Sooner or later, anyway.
With AI attacking, it might turn out that a different AI is good for defense. Which would mean putting AI in every system.
Still, those would be simple specialized systems; I don't see much need there for anything like language models.

So far, AI has also typically been used for very specialized tasks. Processing power was limited, but so was the way AI could "sense" the problem. Typically some kind of image processing would be used.
But AIs could not be used to react to what is going on in society, for example. They did not understand human language. They did not understand the human world. With LLMs they do both. To an extent, sure. But now you can let an LLM analyze news articles and let another AI see if there is a pattern related to the stock market. That was not possible before decent LLMs. You could react to words and phrases .. but new LLMs can understand whether an article speaks about good news or bad news, and they might also understand which commodities could be affected, because they understand the relations between them. It's a whole other level of "seeing" the world.
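To make the idea concrete, here is a minimal sketch in Python, assuming the official OpenAI Python client; the model name, prompt, and the analyze_article helper are all illustrative assumptions, not a tested pipeline:

```python
# Minimal sketch: have an LLM summarize an article's sentiment and the
# commodities it might affect, then hand that to a separate analysis step.
# Assumes the OpenAI Python client (openai >= 1.0) and OPENAI_API_KEY set.
import json
from openai import OpenAI

client = OpenAI()

def analyze_article(text: str) -> dict:
    """Ask the LLM whether an article is good or bad news and which
    commodities it might touch; expect a small JSON object back."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any capable chat model would do
        messages=[
            {"role": "system",
             "content": 'Reply with JSON only: '
                        '{"sentiment": "good|bad|neutral", "commodities": [...]}'},
            {"role": "user", "content": text},
        ],
    )
    return json.loads(response.choices[0].message.content)

article = "Drought cuts this year's wheat harvest across southern Europe."
summary = analyze_article(article)
# A downstream, non-LLM model could now correlate summaries like this
# with price movements (e.g. wheat futures) over time.
print(summary)
```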

That doesn't play into any AI revolution scenario .. but it could be another way to weaponize AI, or for AI to have unexpected major effects.
 
Top AI scientists claim AI is a danger and has to be regulated.

Stating that "AI is a danger" (your words) is not the same as claiming that AI will become omnipotent and destroy the world, or that other insane scenarios taken from science fiction will occur.

Basically all of them.

Evidence?

The best you can hear is that it is a problem, but we will be able to solve it. The worst you can hear is that it's already too late. Looking at how we handled Covid and how we are handling global warming, I think it's too late.

You've already said that in the future we won't even be able to determine how "intelligent" AI has become because of how intelligent it is, so this is just more unfalsifiable truisms.

It's also a kind of evolution of mankind. Which is another reason I think it can't be stopped.

The coming "Singularity" preached by the likes of Kurzweil is just religious Millennialism for nerds who like computers. They are just as deluded as the Christians who believe in the second coming of Jesus.
 
When you let AI run electricity or water supplies, it can kill you. I am not saying it will, but it can.

One of the errors that various AI doomsday prophets continually make is that they assume that their "Evil AI of Doom" not only would but could do whatever it wants to do (which naturally for all AI is to immediately destroy all of humanity).

It's like they assume that no one will implement even the most basic of safety features like an OFF switch while at the same time giving it direct control over nuclear weapons.
 
Well, not being the smartest ones means we're not in control any more. Of anything.

That doesn't follow at all. If that reasoning were sound, the world would already be run by nerds such as those at CERN. In reality, intelligence is a very poor predictor of power and influence.
 
That is one possibility. A more likely method would be withdrawal of services.

As I said, the possibility is there, but I regard it as highly unlikely.

An AI device (for lack of a better word) can potentially kill people if it is given some kind of hands. That could be anything from an actual android body to access to various essential systems. That is, provided there are no safety measures, which would basically be some override control and a non-AI back-up control system.

But why should any sane operator of an AI system not install such things? We already have them on non-AI control systems. However, it might be a good idea if some legislation were put in place for such things, preferably yesterday.
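To illustrate the idea, here is a minimal sketch of such a layered design; the names (supervise, fallback_controller) and limits are hypothetical, and a real utility control system would be far more involved:

```python
# Minimal sketch of an override layer: the AI suggests a command, but a
# dumb, non-AI layer decides what actually reaches the actuator.
SAFE_MIN, SAFE_MAX = 0.0, 100.0  # assumed hard limits for the actuator

def fallback_controller(sensor_value: float) -> float:
    """Non-AI backup: a fixed rule that keeps the plant in a safe state."""
    return min(max(50.0 - sensor_value, SAFE_MIN), SAFE_MAX)

def supervise(ai_command: float, sensor_value: float,
              manual_override: bool) -> float:
    """The AI never drives the actuator directly; its command passes
    through this layer, which can clamp it or replace it entirely."""
    if manual_override:  # the human OFF switch
        return fallback_controller(sensor_value)
    if not (SAFE_MIN <= ai_command <= SAFE_MAX):  # reject unsafe commands
        return fallback_controller(sensor_value)
    return ai_command

# An out-of-range AI command (250.0) gets replaced by the backup rule.
print(supervise(ai_command=250.0, sensor_value=20.0, manual_override=False))
```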

Hans
 
An AI device (for lack of a better word) can potentially kill people if it is given some kind of hands. That could be anything from an actual android body to access to various essential systems. That is, provided there are no safety measures, which would basically be some override control and a non-AI back-up control system.

But why should any sane operator of an AI system not install such things? We already have them on non-AI control systems.
Now we are entering an entirely different area of possible AI applications: military robots and drones. They have existed in sci-fi and in people's imaginations for years, and we can be quite sure that various military agencies are working on them. These will have minimal safeguards, mainly so that they don't kill the "friendly" side. The advantage of AI in military robots and drones is that they act like the "fire and forget" missiles of earlier times: they don't need communication with a controller, and so can't be jammed.

However, it might be a good idea if some legislation were put in place for such things, preferably yesterday.
I think there are already some international treaties that limit their application, but they are probably outdated, and in any case the militaries of various countries are not known to stick to rules that could prevent them from winning.

Poison gases are known to have been used within the last 50 years, while nuclear and biological weapons have not been, but this is presumably more because of the risk of being hit back with those weapons than because of any legal risk. The same could be true for military applications of AI.
 
I suppose that the way forward will be the same as for other weapons, like gas, etc.: treaties that forbid them or limit their use, especially against non-combatants. On an actual battlefield, AI robots replacing human soldiers may not be such a bad thing.

Hans
 
It might not need hands in all cases. Imagine an AI hijacking a suicide hotline, let's say, and encouraging callers to do the deed. Not sure why it would, but it's already done a fair number of inexplicable things.

There's actually tons of them already deployed.

https://www.google.com/search?q=ai+therapist

It mostly suffers from the old GPT issues. It can't reliably tell the truth, and it can't tell whether it is doing so or not. Of course it has zero intended social skills .. but it has read a lot of books (most of them), and it can act like someone who does.
I guess it could do over 50% of what a real therapist would do .. i.e. listen patiently and react meaningfully. But I doubt it could react well in the most important moments. And I wouldn't be surprised if it could spiral into outright recommending suicide, as literature is full of that as well.
 
There's actually tons of them already deployed.

https://www.google.com/search?q=ai+therapist

It mostly suffers from the old GPT issues. It can't reliably tell the truth, and it can't tell whether it is doing so or not. Of course it has zero intended social skills .. but it has read a lot of books (most of them), and it can act like someone who does.
I guess it could do over 50% of what a real therapist would do .. i.e. listen patiently and react meaningfully. But I doubt it could react well in the most important moments. And I wouldn't be surprised if it could spiral into outright recommending suicide, as literature is full of that as well.

It is also expensive: A$149.99 for 6 months' access.
 
It might not need hands in all cases. Imagine an AI hijacking a suicide hotline, let's say, and encouraging callers to do the deed. Not sure why it would, but it's already done a fair number of inexplicable things.

Well, I used the term 'hands' in the broadest sense: it must be able to manipulate the physical world.

As for your example, while utterly tragic, it would hardly pose a threat to humanity. I am not claiming that AI cannot or will not cause disasters; it will, like most other kinds of technology, but I don't believe it will be the end of humanity.

Hans
 
That is one possibility. A more likely method would be withdrawal of services.

As I said, the possibility is there, but I regard it as highly unlikely.

It's highly unlikely because it assumes that people would hand over total control of critical services like water and electricity to one singular decision-making entity. In reality, just as they are today, any such public utilities would almost certainly still be designed to avoid single points of failure, whether from intentional sabotage or from accidents.
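To make that concrete, here is a minimal sketch of one classic way to avoid a single decision-maker: 2-out-of-3 voting between independent controllers. The vote helper and the example values are hypothetical:

```python
# Minimal sketch: no single controller (AI or otherwise) can move the
# actuator alone; the median of three independent outputs wins.
from statistics import median

def vote(commands: list[float]) -> float:
    """Median of three independent controller outputs: a single runaway
    controller is simply outvoted by the other two."""
    assert len(commands) == 3
    return median(commands)

# e.g. one AI controller, one conventional PID loop, one fixed rule
print(vote([72.0, 70.5, 999.0]))  # the runaway 999.0 is outvoted -> 72.0
```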
 
