That seems a tad regressive and would potentially harm a lot of artists.

I'm very much an amateur artist, and I've started to use several "AI" tools recently as part of my "workflow" - I created a few textures via a generative AI system and used one to turn a sketch into a tiling pattern. Why should that mean I lose my copyright?

And for the bigger companies, the concern that they are using artwork they "found" online, with no one being paid for it, doesn't have to be a problem, as they can use assets they do own the rights to. For instance, Adobe uses its own "Adobe Stock" images for its AI training.

Because if AI research is going to have such a callous attitude towards the copyright of the artists that AI learns from, anything created with the help of such an AI should follow suit.

Any AI with documentation on every image it has ever analysed along with appropriate compensation to the copyright owner would obviously be exempt. I suspect AI developers would rather quickly discover that they actually know exactly how "AIs" are doing what they are doing.
 
Because if AI research is going to have such a callous attitude towards the copyright of the artists that AI learns from, anything created with the help of such an AI should follow suit. ...snip...

But that just punishes artists! As AI becomes more prevalent in art tools, you are making it less likely an artist will be able to make any money from their creations.
Any AI with documentation on every image it has ever analysed along with appropriate compensation to the copyright owner would obviously be exempt. I suspect AI developers would rather quickly discover that they actually know exactly how "AIs" are doing what they are doing.

Why would this not apply to every human artist as well? Remember, the generative AIs are not compositing, or copying from original works. They are truly creating something that has never been seen before at the prompting of humans.

Also, the current issue about copyright and compensation is nothing but a blip; companies will increasingly be using images they do own the rights to. As I mentioned, Adobe has already started on this, and Adobe is so sure there are no copyright breaches with its generative AI that it is indemnifying people against any legal issues arising from using its tools.

See: https://www.reuters.com/technology/...big-business-with-financial-cover-2023-06-08/

...Adobe Inc said on Thursday it will offer Firefly, its artificial intelligence tool for generating images, to its large business customers, with financial indemnity for copyright challenges involving content made with the tools...
 
Isn’t that what human artists do?

Not really. You can have Warhol, who takes a picture and makes nine copies of it in different colours. That's transformative. That's a new style. It can't be mistaken for the original picture.
But what AI does is different. You say "make me another Monet" .. and it does. It's nothing an expert couldn't tell apart (yet). But it's the same style, it directly competes with the original, and it can easily be mistaken for an original.
And it's only able to make it that similar because it was trained on Monet's pictures.

Or maybe another angle .. you can even make a homage to Monet and paint a picture in his style, and it may be hard to tell. But if you sign it as Monet, you become an image forger. If you sign it with your own name, or at least declare it was made with AI trained on Monet's pictures .. it's OK IMHO.
 
Why would this not apply to every human artist as well? Remember, the generative AIs are not compositing, or copying from original works. They are truly creating something that has never been seen before at the prompting of humans.

Because I feel that discriminating against a non-sentient entity is okay and would solve problems.

I'm not saying my solution is necessarily the best, or even good, but just make some sort of exception for machine learning that will compensate human artists. We literally don't have to care that the machine is doing the same thing a human does.
 
Not really. You can have Warhol, who takes a picture and makes nine copies of it in different colours. That's transformative. That's a new style. It can't be mistaken for the original picture.
But what AI does is different. You say "make me another Monet" .. and it does. It's nothing an expert couldn't tell apart (yet). But it's the same style, it directly competes with the original, and it can easily be mistaken for an original.
And it's only able to make it that similar because it was trained on Monet's pictures.
But this is not the typical use. AIs are trained on a huge number of images by many different artists, just as human artists, through their life and education, have encountered a huge number of images that form the basis of their art. Images made in the style of Warhol, Monet, and Van Gogh were being produced by computers long before AI became an issue.

Or maybe another angle .. you can even make a homage to Monet and paint a picture in his style, and it may be hard to tell. But if you sign it as Monet, you become an image forger. If you sign it with your own name, or at least declare it was made with AI trained on Monet's pictures .. it's OK IMHO.
Yes, I think you are right. But this applies to human art made in these styles as well.
 
Because I feel that discriminating against a non-sentient entity is okay and would solve problems.

I'm not saying my solution is necessarily the best, or even good, but just make some sort of exception for machine learning that will compensate human artists. We literally don't have to care that the machine is doing the same thing a human does.

I don't disagree with that, but your suggestion to address it would harm artists. AI is rapidly becoming "just" another tool artists use, so telling an artist "if you use an AI tool your copyright becomes null and void" will stifle creativity and reduce artists' potential to earn from their work.

Let me give you an example.

I did this, it's a digital piece of artwork produced in Photoshop and Procreate.



Its aspect ratio was set to what I can print out on my home printer - A4 or A3 - and to where I wanted to put it, so the "crop" at the sides was deliberate. My mother asked me if I had the "full" image, as she'd like a copy that wasn't "cut off".

Now I'd like to help her with that - it makes my life safer - but it's a much bigger job than "just" drawing/painting some new bits in. When I created the piece I used all sorts of techniques, different brushes and so on, and I didn't document it stage by stage. For me, filling in the "chopped-off bits" would be harder than doing the original piece!

So I wondered if generative AI could help out.



That image was created using generative AI to fill in the bits I hadn't originally painted. I also thought I'd see if it could get rid of the ball (which I no longer like) and fix what is, to me, a glaring mistake. That's what it produced; it wasn't just a matter of one click, it took some time to get right. It still isn't perfect, and it will take me some effort to get it where I want it, but nothing like the work it would otherwise have taken.

Why should I lose my copyright because I've used generative AI?
 
Why should I lose my copyright because I've used generative AI?

I don't know all the ins and outs of what went into training that particular AI, but potentially because the AI was trained using images by artists who didn't give their consent (and whose copyright was still valid). If no such images were used, there is obviously no problem.

But the public domain idea is just me spitballing anyway. If anything, I'd rather declare the AI developers to be in breach of copyright law and start demanding they compensate the artists of images they used.

I feel like this is a case of the sheer volume of disparate data being used as some sort of legal defense. If someone made an AI that was trained solely on Disney movies, they wouldn't see the outside of a courtroom for the rest of their lives. But somehow it's supposed to be okay if your selection is broad enough? It's preposterous.
 
[snip]

I feel like this is a case of the sheer volume of disparate data being used as some sort of legal defense. If someone made an AI that was trained solely on Disney movies, they wouldn't see the outside of a courtroom for the rest of their lives. But somehow it's supposed to be okay if your selection is broad enough? It's preposterous.


I agree.

Art classes have to go.
 
I don't know all the ins and outs of what went into training that particular AI, but potentially because the AI was trained using images by artists who didn't give their consent (and whose copyright was still valid). If no such images were used, there is obviously no problem.

...snip...

One of the reasons for using that as an example was that the content the AI generated matched my style, my image, and my technique, even though it was never trained on any of my work. I think that clearly shows that what generative AIs produce is original and unique work, regardless of the training images.

I feel like this is a case of the sheer volume of disparate data being used as some sort of legal defense. If someone made an AI that was trained solely on Disney movies, they wouldn't see the outside of a courtroom for the rest of their lives. But somehow it's supposed to be okay if your selection is broad enough? It's preposterous.

I think that is going to be a blip - at first no one knew how successful this was going to be, and I bet researchers wanted to do it as cheaply as possible, so they hit the internet databases. Given the success, I think you will see further developments using content they have "paid" for, i.e. content that is legitimately owned with regard to the usage rights anyway. And the artists will still only get the usual peanuts they make from the large stock-content companies.
 
But this is not the typical use. AIs are trained on a huge number of images by many different artists, just as human artists, through their life and education, have encountered a huge number of images that form the basis of their art. Images made in the style of Warhol, Monet, and Van Gogh were being produced by computers long before AI became an issue.

Yes, I think you are right. But this applies to human art made in these styles as well.

If you generate a random picture and you get about 1% from every training picture .. I don't see an issue with that. It sounds like the amount of "inspiration" a human painter could absorb from a picture, and it's clear the new picture will be something new.
But if you type "Mona Lisa", most models I've tried will just spit out a decent Mona Lisa painting. Some even with a frame. So it can give way more than 1%. The thing is, you might not know how much you are getting.
You might feel very creative with your "anime voluptuous bikini clad warrior princess" prompt, but it might spit out a decent copy of a single training picture, and you won't know. If you use it for personal use, that's fine. Before AI you might still have had to pay for such a picture, but whatever.
But what if you use it commercially - for an ad campaign, in a game, and so on?
It's a similar issue as with ChatGPT. ChatGPT can't source what it's claiming, and you can never know if it's true or not. Image models can't source either .. and you never know how original the pictures are.
 
Is my second painting "AI generated"?

Partially, yes. That doesn't mean you should lose your copyright, IMHO. It's still clearly yours. It's an example where the majority of the picture (by area, and especially by artistic effect) can be linked to a single author. AI didn't change that.
Also, I don't think AI-generated pictures can't be original and transformative enough to fit current fair use. They can be, and in most cases, they are.
 
... I think it clearly shows that what the generative AIs produce is original and unique work regardless of the training images. ...
In my mind, that's like saying that if an editor edits my novel (beyond punctuation and spelling) it becomes an original and unique work.

I am purposefully using text as an analogy because, most of the time, we have clearer guidelines on what is and isn't copyright infringement and what is original work.

The analogy is valid even if the problems of copyrighting styles are going to be immense.
 
If you generate a random picture and you get about 1% from every training picture .. I don't see an issue with that. It sounds like the amount of "inspiration" a human painter could absorb from a picture, and it's clear the new picture will be something new.
But if you type "Mona Lisa", most models I've tried will just spit out a decent Mona Lisa painting. Some even with a frame. So it can give way more than 1%. The thing is, you might not know how much you are getting.
You might feel very creative with your "anime voluptuous bikini clad warrior princess" prompt, but it might spit out a decent copy of a single training picture, and you won't know. If you use it for personal use, that's fine. Before AI you might still have had to pay for such a picture, but whatever.
But what if you use it commercially - for an ad campaign, in a game, and so on?
It's a similar issue as with ChatGPT. ChatGPT can't source what it's claiming, and you can never know if it's true or not. Image models can't source either .. and you never know how original the pictures are.

I doubt it; that's not how they generate new images. Plus, surely we'd have seen some examples of this by now, given the literally millions of images that have been produced with generative AI, if for some reason it could happen.
 
In my mind, that's like saying that if an editor edits my novel (beyond punctuation and spelling) it becomes an original and unique work.

I am purposefully using text as an analogy because, most of the time, we have clearer guidelines on what is and isn't copyright infringement and what is original work.

The analogy is valid even if the problems of copyrighting styles are going to be immense.

I don't think the analogy works; let me try and explain why by expanding it.

In my example, the generative AI has created a couple of chapters that match my writing style, vocabulary, grammar (misuse) and plot. Those chapters never existed before; they are new and unique work, and no one can tell the difference between my chapters and the AI's. I'm sure that if I presented my second image (sorry, second chapter) no one would be able to tell it was created by a generative AI trained on many copyrighted novels, as it would be using my characters, my style, my plot. The AI is not cutting and pasting sentences it scanned during its training; it's using the "knowledge" it's gained to figure out what the new chapter should be.
 
