I don't think the analogy works; let me try to explain why by expanding it.
.... I'm sure that if I presented my second chapter, no one would be able to tell it was created by a generative AI trained on many copyrighted novels, as it would be using my characters, my style, my plot. The AI is not cutting and pasting sentences it scanned during its training; it's using the "knowledge" it has gained to figure out what the new chapter should be.
But I learned to write based on many novels I've read plus some other stuff.

An AI enhanced novel with new chapters in my style of writing wouldn't be using plagiarized copy and paste cuts from other works.
 
True, though we have to assume that the humans GPT-4 was competing against studied those textbooks too. We just can't hold all of that in our puny brains.

Slightly off-topic, but all my law exams were 'open book' and we could bring any reference in that we liked. (Some came to the exam with legal trolleys, i.e. a rolling metal bookcase.)

If you didn't understand the law, those books wouldn't have helped.

(i.e. If you can't recognise the points of law in the questions, you would have no idea what to look up.)

I'm surprised that ChatGPT in any form can recognise a law question at all, which makes me wonder if it was trained on law exams with model question/answer pairs.
 
Slightly off-topic, but all my law exams were 'open book' and we could bring any reference in that we liked. (Some came to the exam with legal trolleys, i.e. a rolling metal bookcase.)

If you didn't understand the law, those books wouldn't have helped.

(i.e. If you can't recognise the points of law in the questions, you would have no idea what to look up.)

I'm surprised that ChatGPT in any form can recognise a law question at all, which makes me wonder if it was trained on law exams with model question/answer pairs.

That makes sense, particularly in law.

I asked GPT-4 for a sample bar exam question and how it would answer it. You can tell us if it looks reasonable (or if it's a known example):
https://chat.openai.com/share/2e62c49f-ada0-4b4e-b28c-1d513c2c627c
 
But I learned to write based on many novels I've read plus some other stuff.
Like a generative AI.


An AI enhanced novel with new chapters in my style of writing wouldn't be using plagiarized copy and paste cuts from other works.

Precisely, and that's what the generative art AIs do: they do not copy and paste anything; they generate new, unique content.
 
I don't see why it is extreme to expect that standards should be the same.

Ironic, mebbe, but not sarcastic.

If you think standards should be the same, do you think an AI trained exclusively on Disney properties would survive the legal onslaught?
 
If you think standards should be the same, do you think an AI trained exclusively on Disney properties would survive the legal onslaught?

Why would you limit it in such a way? Since they've used datasets from the publicly accessible internet, they have already used probably thousands (hundreds of thousands?) of Disney images, so we don't need to imagine it. So far it seems Disney is not making any claims for damages.
 
Why would you limit it in such a way? Since they've used datasets from the publicly accessible internet, they have already used probably thousands (hundreds of thousands?) of Disney images, so we don't need to imagine it. So far it seems Disney is not making any claims for damages.


The Disney style, or styles as the case may be, of animation may well be among the most studied and imitated in modern times.

By humans.

As are innumerable other styles.

Also by humans.

This is the natural evolution of art of every kind, and human artists have been doing it since there was more than one human artist.

It isn't clear to me why computers using the same process is such a transformative event. Because they can do it faster? That's the only substantive difference I can see.
 
Why would you limit it in such a way? Since they've used datasets from the publicly accessible internet, they have already used probably thousands (hundreds of thousands?) of Disney images, so we don't need to imagine it. So far it seems Disney is not making any claims for damages.

It doesn't matter why you would do it. The question is whether it is legal, and if it isn't legal, AI developers are only getting away with their ******** because they managed to obscure exactly what they are doing. If Disney isn't doing anything, it's because it isn't worth the trouble and money to go after such an obscure problem at the moment.

But I don't care about some large corporation, I care about the independent artists who have never been given any choice in the matter, because they don't have the resources to pursue any legal claims in the first place.

You are trying to construct a false analogy.

I think you know that.

It's not a false analogy. You are giving the same considerations to an AI as to a human. No one will complain about any person learning drawing and/or animating by only studying Disney properties. Hence, an AI trained exclusively on Disney properties should be fine legally. Would it? If not, AI developers are criminals who are just good at obscuring their copyright infringement.

But maybe you think such an AI would do fine. Personally I don't believe it.
 
The long and extensive discussion of this topic vis-a-vis AI image generation and copyright makes it seem like a complicated one, but I don't see it that way. In my view, there are no technical details to sweat here, really.

Firstly, in much the same way that a language program like ChatGPT works by smashing together words it doesn't actually know the definitions of, in the sequence that has the highest probability of being correct based on the other sequences of words it was shown during its "training" process, AI art generators build images by taking the entered prompt and (to simplify) placing lines where they seem most probable to belong, based on the training images that were labeled with the same terms used in the current prompt. While I continue to assert that the language model's method of creating text responses does not in any way remotely match the way humans think and compose their communications, I will concede that the image generator's mechanism of operation does at least seem analogous to the way humans learn to create some kinds of art, and especially to how they learn particular styles of painting or drawing.
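To make the "highest probability" idea concrete, here is a deliberately toy sketch in Python. The tiny corpus and the simple word-pair counting are just my illustration of the general shape of the idea; real systems use neural networks trained on enormous datasets, not anything this crude.

[CODE]
# Toy illustration of "pick a likely next word given the previous one".
# Real language models use neural networks over enormous datasets of
# subword tokens; this bigram counter only shows the general shape of
# the "most probable continuation" idea, nothing more.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# "Training": count how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start_word: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [start_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        # Sample in proportion to how often each continuation was seen.
        choices, weights = zip(*candidates.items())
        words.append(random.choices(choices, weights=weights, k=1)[0])
    return " ".join(words)

print(generate("the"))
[/CODE]

Run it a few times and you get different short strings of words, each chosen only according to how often one word followed another in the "training" text.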

Nevertheless, again, like ChatGPT the AI image generator isn't a person. It's not a mind that has opinions or desires and is making decisions with agency, it's a machine that only starts running when an operator activates it, does the work it's programmed to do, and then stops until it's activated again. It is a tool.

A person who makes a painting in the style of another person isn't violating copyright, as far as I can tell. I can't think of, or (quickly) find, any legal cases of someone even claiming that a person has committed some kind of tort by painting or drawing or composing something that merely looks like something [known artist] would have created. I don't see why using an AI generator to make the work, as opposed to doing it manually with a paintbrush or a digital pen tablet, would make a difference in this regard.

A related but separate issue is trademark. Generally speaking, if I start making and selling style-accurate hand-drawn pictures of Mickey Mouse, I can in theory be sued by Disney for mark infringement. I also don't see why using an AI image generator to produce the images instead of making them by hand would make a difference in this case either.

I guess a simpler way to put it would be: if the person who prompted the AI image generator to generate the image had created it entirely by hand instead, would the image be infringing (whether copyright or trademark)? If the answer is yes, then it is still yes for the AI-generated image. If the answer is no, then it is still no for the AI-generated image.

This seems fairly self-evident to me; I don't understand why people are struggling with it.
 
But maybe you think such an AI would do fine. Personally I don't believe it.


IANAL, but I have a hard time imagining what basis Disney would have for bringing a case. Assuming the AI was trained on legally obtained Disney content (not hacked from hypothetical Disney trade-secret files, for instance).

Suppose Skynet Studios releases an animated film that every critic universally declares "OMG this is so much like a Disney film, I can't believe it's not butter Disney." A U.S. court wouldn't care about that. They'd look at whether any actual specific trademarks or copyrights were violated. Do the individual characters look more similar to specific Disney characters than those Disney characters look to each other? Were the names too similar? Were the sequences of notes in the individual songs too similar?

Now, a big corporation with deep pockets like Disney might try to argue new copyright protections that didn't exist before, kind of like the "look and feel" copyright cases for computer software a few decades ago. But that ship probably already sailed. Animators have been trying to copy the "look and feel" of Disney animated movies for a long time, with occasional success and no legal challenge relevant to the scenario at hand. (The Secret of NIMH is one example.) Plus there's the complication that Disney has been deliberately changing the look and feel of its animated movies. The look and feel of Disney's 3D animated movies didn't originate with Disney.
 
The Disney style, or styles as the case may be, of animation may well be among the most studied and imitated in modern times.

By humans.

As are innumerable other styles.

Also by humans.

This is the natural evolution of art of every kind, and human artists have been doing it since there was more than one human artist.

It isn't clear to me why computers using the same process is such a transformative event. Because they can do it faster? That's the only substantive difference I can see.
I think the issues are more than simply copying a style.
Some images are scraped without permission. It goes beyond a person searching through someone else's work.

AI-created work isn't always identified as AI-generated, nor is it disclosed which files were scraped or whether they remain stored somewhere to be disseminated further. Some people think such work should be clearly identified. Many believe they should have been asked for permission.

The AI programs don't get facts right but present material as if it is confirmed to be factual. People are purposefully asking AI programs to pump out false stories. As with deep fakes, it's becoming harder and harder to know what is true and what is false.
These things may not be illegal now, but new regulation is warranted.
 
[snip]

It's not a false analogy. You are giving the same considerations to an AI as to a human.


Nope. I'm saying that if something is illegal for a computer to do it should be illegal for a human as well. Illegal is illegal.

If it isn't illegal for a human then why should it be illegal for a computer? Legal is legal.

No one will complain about any person learning drawing and/or animating by only studying Disney properties. Hence, an AI trained exclusively on Disney properties should be fine legally.


I'm pleased to see that we agree.

Would it? If not, AI developers are criminals who are just good at obscuring their copyright infringement.


Why is it the developers? They just created the software. Shouldn't it be the humans who use it?

But maybe you think such an AI would do fine. Personally I don't believe it.


I don't think it's fine or not fine. It just is. A hammer is just a hammer. If someone uses it to drive a nail, that's one thing. If they use it to hit somebody else, it's another.
 
I think the issues are more than simply copying a style.
Some images are scraped without permission. It goes beyond a person searching through someone else's work.


What if that person collects screenshots while they are doing this searching? How is that different from scraping?
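Since "scraping" keeps coming up, here is roughly what it means mechanically, as a minimal Python sketch. The page URL is hypothetical, and requests and BeautifulSoup are just common library choices, not what any particular AI company actually runs.

[CODE]
# Minimal sketch of automated image "scraping" from a web page.
# The page URL below is hypothetical; requests and BeautifulSoup are
# just common library choices for this kind of task.
from pathlib import Path
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

def scrape_images(page_url: str, out_dir: str = "scraped") -> None:
    """Download every <img> found on one page into out_dir."""
    Path(out_dir).mkdir(exist_ok=True)
    html = requests.get(page_url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for i, img in enumerate(soup.find_all("img")):
        src = img.get("src")
        if not src:
            continue
        image_url = urljoin(page_url, src)
        data = requests.get(image_url, timeout=10).content
        Path(out_dir, f"image_{i:04d}.bin").write_bytes(data)

# Hypothetical example page; a real crawl would loop over many URLs.
# scrape_images("https://example.com/gallery")
[/CODE]

Pointed at one page, it saves every image it finds there; a data-collection crawl would loop something like this over many pages.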

AI-created work isn't always identified as AI-generated, nor is it disclosed which files were scraped or whether they remain stored somewhere to be disseminated further. Some people think such work should be clearly identified. Many believe they should have been asked for permission.


But not when a human does it?

The AI programs don't get facts right but present material as if it is confirmed to be factual.


Sounds like Facebook.

People are purposefully asking AI programs to pump out false stories. As with deep fakes, it's becoming harder and harder to know what is true and what is false.


Okay. That sounds like Fox News.

These things may not be illegal now, but new regulation is warranted.


I don't disagree that new regulation may be warranted. I just think that you are looking in the wrong place to determine what those regulations ought to be.
 
What if that person collects screenshots while they are doing this searching? How is that different from scraping?

The companies developing these AI image generators are doing it for profit.

Commercial use is commercial use; they should have gotten permission, and paid as appropriate, for any material they actively used to develop their product.

Yes, a person can generally download any given image from the internet for personal noncommercial purposes. That is widely recognized to be a different circumstance and isn't relevant.
 
What if that person collects screenshots while they are doing this searching? How is that different from scraping?


The companies developing these AI image generators are doing it for profit.

Commercial use is commercial use; they should have gotten permission, and paid as appropriate, for any material they actively used to develop their product.

Yes, a person can generally download any given image from the internet for personal noncommercial purposes. That is widely recognized to be a different circumstance and isn't relevant.

And if the person (human) doing the screenshot "scraping" tries to sell anything they created after studying those screenshots, then they too should have gotten permission, and paid as appropriate, for any material they actively used to develop their product?
 
IANAL, but I have a hard time imagining what basis Disney would have for bringing a case. Assuming the AI was trained on legally obtained Disney content (not hacked from hypothetical Disney trade-secret files, for instance).

Some sort of licensing law?

If the material were software, I couldn't get away with making it part of my code, no matter how different the final application is. I don't see why non-software stuff should be treated any differently.

Is there a lawyer on the plane?


And if the person (human) doing the screenshot "scraping" tries to sell anything they created after studying those screenshots, then they too should have gotten permission, and paid as appropriate, for any material they actively used to develop their product?


Speaking of false analogies, show me exactly how what a human brain is doing is analogous to what the current crop of "AIs" are doing. Show me the code of the brain that explicitly contains all those copyrighted datasets.

AI developers are full of ****. The copyrighted material is in there, in the software, no matter how much it is buried. And the AI keeps referencing those datasets. If there isn't a law against this, there should be.
 