ChatGPT

That's cute. Now get an AI trained on images of the real world to develop a few millennia of artistic innovations so that we can get dozens of art movements to rival the ones we got. Wait, you can't? Feeding an AI post-impressionists will simply make it paint like a post-impressionist? Feeding it all the art movements will make it paint like an unholy mixture of all the art movements with nothing new to offer? Gee, it's almost like they really are just glorified collage makers.
 
That's cute. Now get an AI trained on images of the real world to develop a few millennia of artistic innovations so that we can get dozens of art movements to rival the ones we got. Wait, you can't? Feeding an AI post-impressionists will simply make it paint like a post-impressionist? Feeding it all the art movements will make it paint like an unholy mixture of all the art movements with nothing new to offer? Gee, it's almost like they really are just glorified collage makers.

Of course an AI isn't going to reinvent all of art. Nor has any single human. They are creative though. I mean, which human artist is doing the extra-limbs and fingers style? Fewer fingers has been done, but this is new.

The art AIs have artistic talents, but don't have an LLM's ability to understand complex structure. It's really a different discussion. Take, for example, what GPT-4 came up with when I described yesterday's xkcd to it and asked what it would take to simulate it. There's no place this reply could have been cut-and-pasted from, since it's new. It could not have been written without it actually understanding a number of concepts.

GPT-4: To simulate this scenario, you'd need to consider a few factors:

1. Precise Physical Model: You'd need a detailed model of Earth's tectonic plates, including data about their current positions, velocities, and directions of movement. This information could help you understand how quickly the Atlantic Ocean is expanding.

2. Dynamic Text Editor: This software would need to automatically adjust text as the "page size" (i.e., the surface area of the Atlantic Ocean) increases. It would have to handle wrapping text around the coastlines and adding additional words as the ocean expands. The text editor would also need to accommodate an immense amount of text, since the Atlantic Ocean covers over 41 million square miles.

3. Word Generation: The system would need a way to generate or add new words as the Atlantic Ocean expands. This could involve generating random words, copying from an existing text, or using an AI language model to create coherent and contextually appropriate words.

While it would be technically challenging to implement such a system, it's an interesting thought experiment that encourages us to think about geological processes, software design, and the physical scale of Earth in new and unique ways.
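
To make the outline above concrete, here is a minimal toy sketch in Python of what steps 1-3 might look like wired together. The spreading rate is a rough, commonly cited figure for the Mid-Atlantic Ridge; the ocean length, word density, and all names are purely illustrative assumptions, not anything GPT-4 or the comic specified.

# Toy sketch of the three steps above: treat the Atlantic as a rectangle that
# widens at a fixed spreading rate (step 1), recompute the "page" area (step 2),
# and pad the document with filler words as new area appears (step 3).
# All constants are illustrative assumptions, not measured values.

SPREADING_RATE_M_PER_YEAR = 0.025   # roughly 2.5 cm/yr, a rough ridge-spreading figure
OCEAN_LENGTH_M = 1.5e7              # assumed north-south extent of the "page", in metres
WORDS_PER_SQUARE_M = 0.001          # hypothetical text density of the ocean "page"

def words_added(years: float) -> int:
    """How many new words the expanding page gains over `years`."""
    new_area_m2 = SPREADING_RATE_M_PER_YEAR * years * OCEAN_LENGTH_M
    return int(new_area_m2 * WORDS_PER_SQUARE_M)

def extend_text(text: list[str], years: float) -> list[str]:
    """Pad the document with placeholder words for the newly exposed seafloor."""
    return text + ["lorem"] * words_added(years)

if __name__ == "__main__":
    doc = ["In", "the", "beginning"]
    print(f"After a century the page gains {words_added(100)} words.")
    print(f"Document length: {len(extend_text(doc, 100))} words.")

A real version would replace the constants with actual plate-motion data and would have to wrap the text around real coastlines, which is where the hard part of step 2 lives.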
 
However, one AI expert compared Meta’s open-source move to handing over the secrets to making a nuclear weapon.

Dame Wendy Hall, regius professor of computer science at the University of Southampton, said there were questions over whether the tech industry could be trusted to self-regulate LLMs, with the problem looming even larger for open-source models. “It’s a bit like giving people a template to build a nuclear bomb,” she told Today.

https://www.theguardian.com/technol...ds-release-open-source-ai-model-meta-facebook

It's very reasonable and not at all alarmist to compare AI to nuclear weapons. I'm glad journalists give individuals like this airtime instead of reasonable people.
 
Of course an AI isn't going to reinvent all of art. Nor has any single human. They are creative though. I mean, which human artist is doing the extra-limbs and fingers style? Fewer fingers has been done, but this is new.

That's not creativity, that's random noise. It's literally a glitch that the developers want to fix.

Someone should let Bethesda know about how "creative" their games are.


The art AIs have artistic talents, but don't have an LLM's ability to understand complex structure. It's really a different discussion. Take, for example, what GPT-4 came up with when I described yesterday's xkcd to it and asked what it would take to simulate it. There's no place this reply could have been cut-and-pasted from, since it's new. It could not have been written without it actually understanding a number of concepts.

GPT-4: To simulate this scenario, you'd need to consider a few factors:

1. Precise Physical Model: You'd need a detailed model of Earth's tectonic plates, including data about their current positions, velocities, and directions of movement. This information could help you understand how quickly the Atlantic Ocean is expanding.

2. Dynamic Text Editor: This software would need to automatically adjust text as the "page size" (i.e., the surface area of the Atlantic Ocean) increases. It would have to handle wrapping text around the coastlines and adding additional words as the ocean expands. The text editor would also need to accommodate an immense amount of text, since the Atlantic Ocean covers over 41 million square miles.

3. Word Generation: The system would need a way to generate or add new words as the Atlantic Ocean expands. This could involve generating random words, copying from an existing text, or using an AI language model to create coherent and contextually appropriate words.

While it would be technically challenging to implement such a system, it's an interesting thought experiment that encourages us to think about geological processes, software design, and the physical scale of Earth in new and unique ways.

Yeah, no, it's a bloody waffle generator. Which might make it comparable to a lot of human writing, I don't know.

That's four paragraphs of paraphrasing the concept, stating the obvious, and concluding with empty platitudes.
 
Yeah, no, it's a bloody waffle generator. Which might make it comparable to a lot of human writing, I don't know.

That's four paragraphs of paraphrasing the concept, stating the obvious, and concluding with empty platitudes.

Still, IMHO it's better than 90% of people. If you are the best writer in the world, you will be fine (for now). If you write articles about diets for a weekend magazine, you might be in trouble.
 
Say what you will, but this thing rocks for cranking out work performance eval crap (which is all crap anyway).
 
That's not creativity, that's random noise. It's literally a glitch that the developers want to fix.
Say what you will-- it's original, and certainly wouldn't be the first art to be called random noise.

Yeah, no, it's a bloody waffle generator. Which might make it comparable to a lot of human writing, I don't know.

That's four paragraphs of paraphrasing the concept, stating the obvious, and concluding with empty platitudes.

There was no concept written out for it to paraphrase. It figured out how to simulate it itself. Sorry you can't appreciate that.
 
I'm still trying to get a grasp on why AI is considered literally dangerous. Is it all because of Roko's Basilisk?
Apparently, there are editors of crap magazines who want to replace all their journalists with AI. If you are a crap journalist, this is literally dangerous.
 
Apparently, there are editors of crap magazines who want to replace all their journalists with AI. If you are a crap journalist, this is literally dangerous.
Dangerous as in losing your job, if you want to define "dangerous" like that, I guess. But there are people saying that AI is an existential threat. Quite a few of them, and some of them experts in the field.

I just don't see it. AI is a new, and probably transformative, technology. But that's all. It's not a threat, and it's not dangerous.

Unless, as I said, you believe in the inevitability of Roko's Basilisk.
 
Dangerous as in losing your job, if you want to define "dangerous" like that, I guess. But there are people saying that AI is an existential threat. Quite a few of them, and some of them experts in the field.

I just don't see it. AI is a new, and probably transformative, technology. But that's all. It's not a threat, and it's not dangerous.

Unless, as I said, you believe in the inevitability of Roko's Basilisk.

It's dangerous because its main feature is our main feature: intelligence. And it evolves faster than us. Basically it follows Moore's law. We don't. AI will overtake humans during our lives; it's basically unavoidable. In the next few years we won't even grasp how much smarter it is.
 
I'm still trying to get a grasp on why AI is considered literally dangerous. Is it all because of Roko's Basilisk?

No, it's definitely not because of Roko's Basilisk.

I'm definitely on the pro-AI side and think that concerns about doom are overrated, but there are plenty of valid concerns about the alignment problem that are completely unrelated to Roko's Basilisk. I don't think anyone serious (even Roko) is worried about that particular scenario.
 
That's cute. Now get an AI trained on images of the real world to develop a few millennia of artistic innovations so that we can get dozens of art movements to rival the ones we got. Wait, you can't? Feeding an AI post-impressionists will simply make it paint like a post-impressionist? Feeding it all the art movements will make it paint like an unholy mixture of all the art movements with nothing new to offer? Gee, it's almost like they really are just glorified collage makers.


Give it the same few millennia and I think it very well might.

In fact, I suspect, given the current rate of improvement, that it won't take nearly that long.
 
It's dangerous because its main feature is our main feature: intelligence. And it evolves faster than us. Basically it follows Moore's law. We don't. AI will overtake humans during our lives; it's basically unavoidable. In the next few years we won't even grasp how much smarter it is.


In that case there's only one thing to do: 'Lie back and think of England.' Or whatever country is more appropriate.
 
Dangerous as in losing your job, if you want to define "dangerous" like that, I guess. But there are people saying that AI is an existential threat. Quite a few of them, and some of them experts in the field.

I just don't see it. AI is a new, and probably transformative, technology. But that's all. It's not a threat, and it's not dangerous.

Unless, as I said, you believe in the inevitability of Roko's Basilisk.

The people moaning about the so-called "existential threat" of AI never specify how this will happen. Instead they simply assume that computer programs will suddenly become able to do something that they haven't even shown to be theoretically possible. The imagined nature of "super-intelligence" has a strong tendency toward being fantastical if not outright magical.

The basis behind this poor reasoning is the same as with the people who thought that we would be colonizing other planets and traveling to other stars because it "followed directly" from landing on the moon, ignoring the impractical and implausible nature of space travel.

Why, what if AI one day became able to do anything and then decided to destroy the planet? Or hackzor the internets and cause all phones to explode? Self-replicating nanomachines that consume everything? We clearly need to take these possibilities seriously.
 
The people moaning about the so-called "existential threat" of AI never specify how this will happen. Instead they simply assume that computer programs will suddenly become able to do something that they haven't even shown to be theoretically possible.

AIs will remove too many jobs too quickly for policies to catch up before the rich can consolidate their power, and after that it's basically French Revolution time.

Chop, chop, chop, chop, chop.
 
AIs will remove too many jobs too quickly for policies to catch up before the rich can consolidate their power, and after that it's basically French Revolution time.

Chop, chop, chop, chop, chop.

A bloody revolution by jobless artists that can no longer make a living from commissioned furry diaper porn is no doubt a realistic possibility.
 
And since we can't "grasp how much smarter it is", your Kurzweil-wannabe prediction is inherently unfalsifiable. Keep sipping that Kool-Aid.

Top AI scientists claim AI is a danger and has to be regulated. Basically all of them. The best you can hear is that it is a problem, but we will be able to solve it. The worst you can hear is that it's already too late. Looking at how we handled Covid and how we are handling global warming, I think it's too late.
It's also a kind of evolution of mankind, which is another reason I think it can't be stopped.
 
A bloody revolution by jobless artists that can no longer make a living from commissioned furry diaper porn is no doubt a realistic possibility.

Translators, data analysts, advertisers, paralegals, graphic designers ... people posting smart-ass responses on message boards. The jobs under threat are endless.
 
