ChatGPT

AIs don't have any rights. They're software whose main purpose is to squeeze as much value as possible out of the common heritage of human culture for the sole benefit of the rich. And there are no laws to prevent the great theft, no recourse for artists to protect their work from being exploited for the enrichment of disgusting CEOs. These are the end times.

I'm not necessarily disagreeing with the bad things you list, but why do you think it's the end times? Hasn't it always been so, in one way or the other?

Hans
 
AIs don't have any rights. They're software whose main purpose is to squeeze as much value as possible out of the common heritage of human culture for the sole benefit of the rich. And there are no laws to prevent the great theft, no recourse for artists to protect their work from being exploited for the enrichment of disgusting CEOs. These are the end times.

To an extent. But for example Stable Diffusion (image generation) is free. And somehow it doesn't make the issues go away.
 
I'm not necessarily disagreeing with the bad things you list, but why do you think it's the end times? Hasn't it always been so, in one way or the other?

Hans

There was never anything quite like this, or rather, never anything quite like what will eventually appear. And just like Amazon killed off its smaller competition, large "art" corporations will kill off artists that don't work for them in their dystopian "art batteries". It'll be like trying to get your indie game noticed on Steam times a million.
 
A recent episode of the “This American Life” podcast, ep. 803, titled “Greetings, People of Earth”, discusses how surprised researchers were that ChatGPT was able to generate images. Crude images for now, but I suspect they won’t remain crude for long!

Just listened to this episode. I recommend it!

https://www.thisamericanlife.org/803/greetings-people-of-earth

The part about GPT-4 is Act One.

It can also solve problems that you might think would be impossible for it, unless it actually understands a lot of concepts that humans understand. Like how to stack the following objects: a book, a laptop computer, 9 eggs, a bottle and a nail.
 
That's not the end of the story when it comes to AI, though.

Things that the AI tool is used to produce are one thing. There's still the matter of the material used to train the tool during its development, which is another. A lot of that material was not authorized to be used in that way, and the creators deserve recompense for that. In my opinion, laws do exist to cover this scenario, but if courts feel they're too ambiguous then they need to be clarified and strengthened.


How is this functionally different from when humans do the same thing to train themselves?
 
The creative works that are produced by the AI program when it's being used are, again, a separate issue; this matter concerns the creation of the program itself. AI image-generating programs are products: products developed by companies and intended to be sold for profit. If artwork is used in developing the product, that is arguably a commercial use of the art, and that is something that usually has to be explicitly licensed unless the artist themselves gave some kind of carte-blanche permission, e.g. Creative Commons or similar.


An art teacher assigns their class a project to do a piece in the style of Andy Warhol. The students dutifully get on the Internet or hit the library and study the works of Andy Warhol. Then they create their own piece.

An AI is told to produce a piece in the style of Andy Warhol. It knows what this means because its training has involved a review of works by Andy Warhol. It creates its own piece.

Where, how, and why do you slip a matchbook cover in between the two? What makes them fundamentally different?

Do the students owe royalties to the Warhol estate for having reviewed his work? Does the teacher for having made the assignment which prompted it?

If not, then why should the creators of the AI, which has been programmed to do essentially the same thing?
 

The lawsuit against OpenAI claims the three authors “did not consent to the use of their copyrighted books as training material for ChatGPT. Nonetheless, their copyrighted materials were ingested and used to train ChatGPT.”

I'm not a lawyer but what law exactly says that they needed anyone's consent to use copyrighted works as training material?

A copyright means the right to make (and sell) copies of a work. It doesn't cover what people can do with a legally obtained copy of said work.
 
I don't see how that follows logically.

No matter how people ultimately hash out the angels-on-pins discussion around how similar AI programs' training is or isn't to human learning, it doesn't change the fact that AI programs are commercial products manufactured by companies for profit, and as a society we've already decided that it is acceptable for artists to assert some level of control over the commercial use of their artworks.


Sure, but what level?

The training of the AI is not a product being sold. Its ability to use that training may be, but that is pretty much the same as an art school training artists, and then the artists going out and making new work for sale.
What royalty obligations should the art school incur?

The point I'm trying to make is that in an effort, however misguided or unnecessary, to cleave some legal distinction between the two, there is (IMO) an almost inevitable fallout, unintended or not, which will capture humans in the legislative web intended for AI, to the detriment of those humans.

Copyright law is already an overcomplicated labyrinth for which deep pockets are generally the main route to success in court.

Efforts to trap AI won't hurt those with such deep pockets, but I foresee the little guys getting clobbered. The net effect will not be to the benefit of the art community as a whole.
 
You are both missing the premise; maybe it's my fault for not expressing it clearly enough.

Both of your arguments hinge on the matter of whether and how artworks are being used by the program itself. I'm talking about the matter of artworks being used by the software company during the manufacturing process of the program.


How about the artworks being used by art schools and their students during the educational (i.e. manufacturing) process of the schools?

The intent, the process, and the goals are fundamentally the same, to teach the skills needed to produce art. Often art for sale.
 
How about the artworks being used by art schools and their students during the educational (i.e. manufacturing) process of the schools?

Those institutions typically license the study materials they use. I gave an example of this exact scenario, as the company making the videos I described a few posts ago is functionally an online art school and I have already described how they are scrupulous about licensing any original art they use in their courses that isn't already covered by Creative Commons or similar.
 
You are both missing the premise; maybe it's my fault for not expressing it clearly enough.

Both of your arguments hinge on the matter of whether and how artworks are being used by the program itself. I'm talking about the matter of artworks being used by the software company during the manufacturing process of the program.

No, I wasn't missing that. Say I decide I want to author a book about how to "do art", so I go to the public library and look through hundreds of art books, taking notes about how different artists do this and that. I could even extend it further and look through physics books, taking notes on how light is reflected from paper. I then distil these notes into the ultimate "How to do art" book, sell that for £12.99 and retire a billionaire in 12 months. There would be no legal obligation, nor, I would contend, a moral obligation for me to pay a penny to any of the artists whose artworks I looked at and analysed.

I think this idea of "analysis" will be the legal lynchpin in defence of how they trained their AIs and whether any copyright was breached.
 
AIs don't have any rights. They're software whose main purpose is to squeeze as much value as possible out of the common heritage of human culture for the sole benefit of the rich. And there are no laws to prevent the great theft, no recourse for artists to protect their work from being exploited for the enrichment of disgusting CEOs. These are the end times.

Or you could view it as "They're software whose purpose is to give access to the common heritage of human culture to those who in the past could not afford such access".

The rich have always been able to pay for the art they wanted, the plebs couldn't, generative AI means art is now much more affordable and many, many more plebs can now access what used to be the domain of only the rich.
 
I'm not a lawyer but what law exactly says that they needed anyone's consent to use copyrighted works as training material?

A copyright means the right to make (and sell) copies of a work. It doesn't cover what people can do with a legally obtained copy of said work.

Sadly this is what happens when the lawyers get involved. :) They are of course trying to find some way of winning for their client and they are not arguing in the abstract as we are doing.

The point here is that even though the works were freely available for the AI to "find" on the internet, they were not legally on the internet. If I had found the works in question on the internet, even though I've done nothing illegal in finding them, that doesn't give me the right to then download the works and publish them. So they are claiming that using the works in training is like me publishing a downloaded work: a breach of their copyright.

I can see these types of actions getting bogged down in legal minutiae, so we'll get no useful judgements about the principles involved.
 
Er - they do state how it is done.
Indeed. For those who don’t want to read the article: they introduce noise much like what you see when an image that has been compressed with a lossy compression technique is decompressed.

In fact, I wouldn’t be too surprised if Glaze actually uses a compression technique to introduce the noise.

It is a pity that we don’t get an example of what a generative AI produces from a ‘glazed’ image. At least I did not see one, though for some reason there were images in the article that did not display.
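To make the idea concrete: Glaze itself computes carefully optimized, style-targeted perturbations, but the basic concept of "cloaking" is nudging pixel values by amounts too small for a human to notice while still changing what a model sees. Here is a toy sketch of that idea (random noise only, not Glaze's actual algorithm; the function name and amplitude are my own invention for illustration):

```python
import numpy as np

def add_cloaking_noise(image: np.ndarray, amplitude: float = 4.0, seed: int = 0) -> np.ndarray:
    """Add a small pseudo-random perturbation to an 8-bit RGB image array.

    This only illustrates the *idea* of a below-perception perturbation.
    Real cloaking tools like Glaze optimize the perturbation against a
    feature extractor rather than drawing it at random.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-amplitude, amplitude, size=image.shape)
    perturbed = image.astype(np.float64) + noise
    # Clamp back into valid 8-bit range before converting
    return np.clip(perturbed, 0, 255).astype(np.uint8)

# A flat grey 4x4 RGB "image"
img = np.full((4, 4, 3), 128, dtype=np.uint8)
cloaked = add_cloaking_noise(img)
```

To a viewer, `cloaked` is indistinguishable from `img` (every pixel is within ±4 of the original), which is exactly why removing such noise after the fact is hard: there's no clean reference to compare against.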
 
Those institutions typically license the study materials they use. I gave an example of this exact scenario, as the company making the videos I described a few posts ago is functionally an online art school and I have already described how they are scrupulous about licensing any original art they use in their courses that isn't already covered by Creative Commons or similar.


Do they typically license all ... every single piece of art their students study? Everything in their libraries? Everything in art appreciation courses? Etc.
 
In very vague terms. To the point I simply don't believe it works.

The article wasn't meant to be an in-depth technical review of the software, if you want to know more go here: https://glaze.cs.uchicago.edu/ and https://glaze.cs.uchicago.edu/what-is-glaze.html lots of detail there.

ETA:
As to does it work:

https://glaze.cs.uchicago.edu/faq.html
Isn't it true that Glaze has already been broken/bypassed?

No, it has not. Since our initial release of Glaze on March 15, 2023, a number of folks have attempted to break or bypass Glaze. Some attempts were more serious than others. Many detractors did not understand what the mimicry attack was, and instead performed Img2Img transformations on Glazed art (see below). Other, more legitimate attempts to bypass Glaze include a PEZ reverse prompt attack by David Marx, the results of which he posted publicly. Others thought that removing artifacts produced by Glaze was equivalent to bypassing Glaze, and developed pixel-smoothing tools, including AdverseCleaner by Lyumin Zhang, author of ControlNet. A few days after creating the project, he added a note on March 28, 2023 admitting it doesn't work as planned.

...snip....


For folks interested in detailed test results showing the impact of these attempts to bypass Glaze, please take a look at the official Glaze paper here.

I've preserved the link to the PDF of the paper.
 
