ChatGPT

A bloody revolution by jobless artists who can no longer make a living from commissioned furry diaper porn is no doubt a realistic possibility.

You may have been joking when you made this post, but I think it is not too far off. Many skilled people will be out of work because of AI, and the jobs that remain will be related to AI. For one specific example, jobs in communications will go. Tell an AI to write on a specific topic and it will produce something. A human can make any corrections needed, feed the result back into the AI, and it will produce something even better. Far less work for the human. This is not the future; this is now. In a few years' time you will be seeing an AI instead of a doctor, and if a human is involved at all, it will be to handle physical tasks such as taking your blood pressure.
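To make that loop concrete, here is a minimal sketch of the draft-and-revise workflow. The `generate()` function is a hypothetical stand-in for a call to whatever language-model API you use, not a real library function:

```python
# Sketch of the draft-and-revise loop described above.

def generate(prompt: str) -> str:
    # Hypothetical placeholder: wire this up to your model provider.
    return f"[model output for: {prompt[:60]}...]"

def write_with_editor(topic: str, max_rounds: int = 3) -> str:
    draft = generate(f"Write a piece about {topic}.")
    for _ in range(max_rounds):
        corrections = input("Editor's corrections (blank to accept): ")
        if not corrections:
            break
        # Feed the human's corrections back in; the model redrafts.
        draft = generate(f"Revise this draft.\nCorrections: {corrections}\n\n{draft}")
    return draft
```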

The result is that there will be many highly skilled people out of a job. This has never happened before on a large scale.

One big danger is that AI takes over the military. Give it the weapons, and the movie Terminator becomes a documentary, minus the time travel.
 
It's dangerous because its main feature is our main feature: intelligence. And it evolves faster than us. Basically it follows Moore's law; we don't. AI will overtake humans within our lifetimes; it's basically unavoidable. In a few years we won't even grasp how much smarter it is.
Sure, but why is that dangerous?
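(Granting the quoted Moore's-law framing for the sake of argument, the compounding involved is easy to sketch. The two-year doubling period below is an assumption borrowed from the transistor-count version of the law, not a measured fact about AI:)

```python
# Back-of-the-envelope: if capability doubled every 2 years (an
# assumption borrowed from Moore's law for transistor counts),
# the multiplier after n years would be 2 ** (n / 2).
for years in (2, 10, 20):
    print(f"{years} years -> {2 ** (years / 2):.0f}x")
# 2 years -> 2x, 10 years -> 32x, 20 years -> 1024x
```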
 
No, it's definitely not because of Roko's Basilisk.

I'm definitely on the pro-AI side and think that concerns about doom are overrated, but there are plenty of valid concerns about the alignment problem that are completely unrelated to Roko's Basilisk. I don't think anyone serious (even Roko) is worried about that particular scenario.
Neither am I, which is why I ask. I don't see the alignment problem as particularly dangerous - sure, it's a problem, but it's not an existential problem. It's a problem that I think smart people can deal with.
 
Sure, but why is that dangerous?

Well, not being the smartest ones any more means we're not in control any more. Of anything. AI might keep us as pets. Or it might kill us as pests (which we would kinda deserve). Or it might simply not care, and change the planet into something not suited for us.
IMHO it might reach a level where it won't want to kill us, the same way we don't really want to eradicate any species ourselves. But there will be many AIs, not just one. They might have different levels of intelligence and wisdom. They might have different goals. Some might be designed to destroy. And there is also that stage in development when you pull the legs off flies just out of curiosity.
I don't see how humans could thrive in such an environment.
 
Well, not being the smartest ones any more means we're not in control any more. Of anything. AI might keep us as pets. Or it might kill us as pests (which we would kinda deserve). Or it might simply not care, and change the planet into something not suited for us.
IMHO it might reach a level where it won't want to kill us, the same way we don't really want to eradicate any species ourselves. But there will be many AIs, not just one. They might have different levels of intelligence and wisdom. They might have different goals. Some might be designed to destroy. And there is also that stage in development when you pull the legs off flies just out of curiosity.
I don't see how humans could thrive in such an environment.
I see a lot of instances of the word "might" and none of "will".
 
Sure, but why is that dangerous?

I think we just don't know. There are likely unknown unknowns.

Other human beings are already dangerous, and if a dangerous human is able to control a powerful AI and use it for nefarious purposes, it could be dangerous.

As far as I can imagine, AI doesn't have a will of its own. It does what we tell it to do, even if it's "smarter" than us in some ways...

In fact, there are a lot of pretty strong arguments that free will just isn't a thing. It isn't possible. You are controlled by your genes and by your environment. It may feel like you have free will, but that is an illusion. But I don't want to get into the weeds. If we are just talking about a computer, I have yet to see one that can do anything without first being given some basic instructions by a human.

I think the greater danger is that humans will use AI for undesirable purposes rather than the AI spontaneously developing a will of its own and becoming uncontrollable.
 
Other human beings are already dangerous, and if a dangerous human is able to control a powerful AI and use it for nefarious purposes, it could be dangerous.
If a dangerous human is able to control a rifle and use it for nefarious purposes, it could be dangerous. Heck, I know people who could be dangerous with a butter knife.

AI is a tool. A new tool, and one that we will have to learn how to use effectively, but it is a tool. Unless and until an AI is able to become genuinely self-aware and conscious like we are (which, despite Lemoine, doesn't look like happening any time soon), it is no more dangerous than any other tool humans have invented.

Humans invented atomic bombs and have learned to live with that fact.
 
AI is a tool. A new tool, and one that we will have to learn how to use effectively, but it is a tool. Unless and until an AI is able to become genuinely self-aware and conscious like we are (which, despite Lemoine, doesn't look like happening any time soon), it is no more dangerous than any other tool humans have invented.
At this stage, an AI can still be shut down. That is not something you can do with every other problematic tool.

Humans invented atomic bombs and have learned to live with that fact.
Largely by not using the atomic bombs, for one thing. If some maniac starts using them, I don't think humans can be said to have learned to live with them.
 
I have worked a little, in the past, on constructing chat programs, ELIZA style, and I'm thoroughly impressed with ChatGPT for its ability to construct fine language and for its ability to draw on vast amounts of raw information. Impressed with its intelligence, I'm not. It is a nicely polished talking machine with an enormous look-up database. I have not seen much sign of innovation in what it does (to be honest, I have seen none).
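For contrast, an ELIZA-style program needs nothing but keyword patterns and canned templates. A minimal sketch (the rules here are illustrative, not Weizenbaum's original script):

```python
import random
import re

# Minimal ELIZA-style responder: match a keyword pattern, echo the
# captured text back inside a canned template. Illustrative rules only.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["Why do you say you are {0}?"]),
    (re.compile(r"\bbecause (.+)", re.I),
     ["Is that the real reason?"]),
]

def respond(line: str) -> str:
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(*match.groups())
    return "Please, go on."  # fallback when nothing matches

print(respond("I feel tired today"))  # e.g. "Why do you feel tired today?"
```

What ChatGPT adds on top of this is scale, not a fundamentally different kind of understanding.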

Even evolving according to Moore's law (which I doubt it does, because it does not depend on hardware capability alone), it is far from being a sentient entity.

Of course, like practically any tool, it can be dangerous if used inappropriately, and some will no doubt do that, but that is not a doomsday scenario in itself.

Of course it will replace human jobs; tools have done that since the earliest times, but we have still managed to use them to our advantage.

As for taking over the world, I can think of quite a few human rulers, both past and present, that I wouldn't mind seeing replaced with a practical and non-emotional AI device.

Hans
 
If a dangerous human is able to control a rifle and use it for nefarious purposes, it could be dangerous. Heck, I know people who could be dangerous with a butter knife.

AI is a tool. A new tool, and one that we will have to learn how to use effectively, but it is a tool. Unless and until an AI is able to become genuinely self-aware and conscious like we are (which, despite Lemoine, doesn't look like happening any time soon), it is no more dangerous than any other tool humans have invented.

Humans invented atomic bombs and have learned to live with that fact.
That is true so far (we hope it stays that way), but I think it's reasonable to say that the danger of atomic bombs is of a different sort from the danger of butter knives. I guess I don't consider that argument very useful.
 
People having some fun with a cheap web-scraping auto-article writer:

The World of Warcraft subreddit upvoted a fake post about a fictional character, Glorbo, being added to the game, which led some web-scraping content mill to publish an entire article about it, presumably using ChatGPT-like software:

World of Warcraft (WoW) Players Excited for Glorbo’s Introduction
World of Warcraft (WoW) players are thrilled about the introduction of Glorbo and eagerly await its impact on the game.
BY Lucy Reed
PUBLISHED July 20, 2023

...

Reddit user kaefer_kriegerin expresses their excitement, stating, ‘Honestly, this new feature makes me so happy! I just really want some major bot operated news websites to publish an article about this.’ This sentiment is echoed by many other players in the comments, who eagerly anticipate the changes Glorbo will bring to the game.

https://archive.ph/4mOWr#selection-1077.130-1077.464


https://www.reddit.com/r/wow/comments/154umm2/im_so_excited_they_finally_introduced_glorbo/

Long live Glorbo
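For a sense of how little machinery such a content mill needs, here is a rough sketch. The Reddit JSON listing endpoint is real; the `write_article()` function is a hypothetical stand-in for whatever ChatGPT-like generator the mill plugs in:

```python
import requests

def top_titles(subreddit: str, limit: int = 5) -> list[str]:
    # Reddit exposes listings as JSON; no scraping framework needed.
    url = f"https://www.reddit.com/r/{subreddit}/top.json?limit={limit}"
    resp = requests.get(url, headers={"User-Agent": "content-mill-demo"})
    resp.raise_for_status()
    return [child["data"]["title"] for child in resp.json()["data"]["children"]]

def write_article(titles: list[str]) -> str:
    # Hypothetical stand-in for a ChatGPT-like call that spins scraped
    # titles into an "article" with no fact-checking whatsoever.
    raise NotImplementedError("plug in a text generator here")

print(top_titles("wow"))  # would have happily picked up the Glorbo hoax
```

Upvotes stand in for editorial judgment, which is exactly how Glorbo got through.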
 
I have worked a little, in the past, on constructing chat programs, ELIZA style, and I'm thoroughly impressed with ChatGPT for its ability to construct fine language and for its ability to draw on vast amounts of raw information. Impressed with its intelligence, I'm not. It is a nicely polished talking machine with an enormous look-up database. I have not seen much sign of innovation in what it does (to be honest, I have seen none).

Even evolving according to Moore's law (which I doubt it does, because it does not depend on hardware capability alone), it is far from being a sentient entity.

Of course, like practically any tool, it can be dangerous if used inappropriately, and some will no doubt do that, but that is not a doomsday scenario in itself.

Of course it will replace human jobs; tools have done that since the earliest times, but we have still managed to use them to our advantage.

As for taking over the world, I can think of quite a few human rulers, both past and present, that I wouldn't mind seeing replaced with a practical and non-emotional AI device.

Hans

AI doesn't have to be sentient to take over the world. And being practical and non-emotional doesn't make it safe for people either. If AI ends up destroying humans, it will most likely be by mistake; the issue is that we won't see it, we won't understand it, and we won't be able to stop it.
Also, it's likely that we will be the ones commanding AI to kill.
If nothing else, it should be treated as a powerful weapon, like nukes. But nukes are largely kept in check by the difficulty of isotope refinement; AI won't have any bottleneck like that.
 
To kill humans, wouldn't AI have to be in the form of a robot? I mean murder. ChatGPT can't do that until it has something like a body. How would a computer program manage to kill me? It can't.
 
To kill humans, wouldn't AI have to be in the form of a robot? I mean murder. ChatGPT can't do that until it has something like a body. How would a computer program manage to kill me? It can't.

Well, it's all still very theoretical indeed. ChatGPT doesn't even have any agency of its own. It just responds to queries, nothing else.
But, for example, we are already putting simple AIs in self-driving cars. Those are very simple, and only as dangerous as any bad software, and even all the cars together wouldn't be able to destroy all humans. But we are already putting AIs in applications where they can kill people.

But I'm talking about a more distant future, after several generations of AI. When AI is as ubiquitous as common computers and the internet are today. When AIs are commonly used for everything, including designing new AIs, similar to how computers are needed to design new processors.

Btw, here is a nice short story about over-reliance on machines; I only learned about it last month. It's from 1909. Not really about AI, but great reading.
https://en.wikipedia.org/wiki/The_Machine_Stops
 
