ChatGPT

One of the major advances with GPT-4 is that it can "understand" images - you could give it a photo of a Blue Tit and it can describe that photo. (I think a real Turing test would be whether - when it is first being trained - it giggles every time it sees the word "tit".)
The ability to do that surprised me. It's like it's mapping stuff onto other stuff, then mapping it back again, without getting overwhelmed by gibberish.
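For what it's worth, that "mapping stuff onto other stuff" intuition is roughly how openly documented image-text models work (whatever GPT-4 does internally): the image and each candidate caption are turned into vectors in the same space and compared. A minimal sketch, assuming the Hugging Face transformers library, Pillow, PyTorch and the public openai/clip-vit-base-patch32 checkpoint; the blue_tit.jpg filename is just a placeholder:

```python
# Minimal CLIP-style sketch: embed an image and a few captions into the same
# vector space and see which caption the image sits closest to.
# Assumptions: transformers, torch and pillow are installed; "blue_tit.jpg"
# is a placeholder path, not a file from this thread.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("blue_tit.jpg")  # placeholder photo of a Blue Tit
captions = ["a photo of a blue tit", "a photo of a robin", "a photo of a cat"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-to-text similarity scores; softmax turns them
# into a rough "which caption fits best" distribution.
probs = outputs.logits_per_image.softmax(dim=1)
for caption, p in zip(captions, probs[0].tolist()):
    print(f"{p:.2f}  {caption}")
```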
 

A recent episode of the “This American Life” podcast, ep. 803, titled “Greetings, People of Earth”, discusses how surprised researchers were at ChatGPT being able to generate images. Crude images for now, but I suspect they won’t remain crude for long!
 
I think it's still an open question whether AI creating derivative works is in the same class as the human mind doing so.


So how would you make a distinction between the two - 'derivative' works by humans and ones by AI?

As near as I can tell, the main difference is that AI can do more, faster. Which lends a certain Luddite cast to the anti-AI crowd as far as this issue is concerned.

Most human works are derivative in some sense or another. In many regards it is a feature rather than a bug, because progress in nearly anything is generally derivative.

The question facing most copyright law is not whether a work is derivative, but when it becomes too derivative, crossing the line into plagiarism or some other form of outright theft of intellectual property.

This has always been a contentious issue when only human actors are involved. I can't see things going well when legislators try to create some sort of dividing line. I expect that a lot of humans will find themselves suddenly becoming collateral damage.
 

I wouldn't.

AI is just a program used by a human. If a work produced using AI would be "too derivative" and trip copyright laws if the operator had made it by hand or photographically or via any other method, there is no difference.
 


Exactly.

And there is thus no need for new legislation, which given the current abilities of our Congress would likely do more damage than good.
 

That's not the end of the story when it comes to AI, though.

What the AI tool is used to produce is one thing. There's still the matter of the material used to train the tool during its development, which is another. A lot of that material was not authorized to be used in that way, and the creators deserve recompense for that. In my opinion laws do exist to cover this scenario, but if courts feel they're too ambiguous then they need to be clarified and strengthened.
 

But is there even a need to "authorize it to be used that way"? You don't need that for transformative work, at least in painting - like collage or caricature. You can't ban people from doing that, and you can't get royalties from it.

Maybe the issue is that those works are usually not transformative enough. It's not a collage, it's not caricature - it's just another picture in the same style, and the new AI works compete directly with the original works, usually without even being marked as AI. That's somewhat similar to an unknown painter copying a famous painter's style and selling the result under the famous painter's name. That's certainly fraud, even if it might not be copyright infringement.

But then, what if you mix two styles to get something new? Everybody sees it's neither typical painter A nor typical painter B - but it's still pretty. It still competes with both painter A and B. If you can do that as a person, you typically deserve your place on the scene, and it's not considered infringement. People might say you have been influenced by A and B, but the works are certainly yours.
For AI, mixing styles is trivial. The works can't exist without A or B. On the other hand, the result is something new, neither A nor B. It's not collage or caricature, and it's not an unauthorized copy. Maybe the AI was simply influenced by the works of A and B? Or rather ... commanded to make a painting influenced by A and B?
 

The creative works produced by the AI program when it's being used are, again, a separate issue; this matter concerns the creation of the program itself. AI image-producing programs are products - products developed by companies and intended to be sold for profit. If artwork is used in developing the product, that is arguably a commercial use of the art, and that is something that usually has to be explicitly licensed unless the artist themselves gave some kind of carte-blanche permission, e.g. Creative Commons or similar.
 
It seems that human artists should likewise not be allowed to be inspired by artwork from other artists, unless it is explicitly permitted.

Only untalented people like me should be allowed to look at all artwork.
 

I don't see how that follows logically.

No matter how people ultimately hash out the angels-on-pins discussion around how similar AI programs' training is or isn't to human learning, it doesn't change the fact that AI programs are commercial products manufactured by companies for profit, and as a society we've already decided that it is acceptable for artists to assert some level of control over the commercial use of their artworks.
 
You don’t think that it is commercial use when human artists are using other artists’ artworks as inspiration?

We don’t even need to look at artists, who do art for art’s sake. Commercial artists who make artwork for ads and magazines, often produce artwork that is intended to be similar to well-known artworks by other artists. Caricatures often do the same thing.

We have recently had a case in Denmark involving this kind of problem. There is a well-known sculpture in Copenhagen from 1913 called The Little Mermaid, and it has often been seen as a symbol of Copenhagen and tourism. In 2019, a newspaper caricature in which the mermaid was drawn with a zombie face caused the heirs of Edvard Eriksen, the artist who made the sculpture, to go to the courts seeking a huge sum in royalties, but they lost (eventually).
 

I think there's a clear difference between someone looking at and being inspired by something to, say, develop their own skills, and a company using that thing during a manufacturing process in order to make money.

Allow me to use a real-world example, although admittedly an anecdotal one, to illustrate the principle I'm talking about here.

I have in the past paid for video lessons on making 3D art using tools like Blender. The lessons I used were produced by a small company that specializes in exactly that kind of training. They offered many, many courses on the different aspects of 3D creation, and some of those involved taking 2D art or photographs and building - from scratch - a 3D scene matching that 2D art as closely as possible. It was referred to as "translating" and it's considered an important skill for a serious 3D artist to have, since video game assets and movie scenes very often begin life as 2D concept art which 3D modelers must use as authoritative reference for their models.

Now it's a very common thing for a 3D modeler to take a digital painting, say, or a life photograph, and use it as "inspiration" or a 1-to-1 direct reference when just working on personal projects. If it's something you intend to use as a portfolio piece - something you want to show the public as evidence of your skill, which would (hopefully) lead to financial opportunities - it's considered at the very least good form to contact the artist who made the original art or photograph, tell them your intentions, and ask for permission to use their work that way. It's not necessary, even legally - there's no possible way a 3D rendering based on a photograph or a 2D painting would ever be considered enough of a "derivative work" to fall afoul of copyright law - but it is, still and all, considered best practice to get permission.

But it's very different if you're a company producing video lessons about 3D art that you sell - for money - and one of those video courses explicitly uses a given 2D work as its reference material. Even if the work is just being used as an exemplar and customers are being instructed to make a thing, not necessarily that thing, the artwork's use in the making of that course video is still commercial use. The particular company I talked about, whenever it is not using Creative Commons works, always makes sure it has an actual explicit license to commercially use any reference material that appears in its courses, not merely verbal permission. That is how it's done in the creative world whenever making money is part of the equation.
 
I do agree with you to a certain extent. When you are creating art that is expressly meant to cite another artist’s work, like you were in your course, or in my example the caricature, I do agree that an explicit license might be necessary (well, perhaps not in the case of the caricature, as the court decided).

But when we are not talking about citing other works, but making something completely new, I believe that AIs have the same right to be aware of other works, just like humans do. Every artwork that humans produce is influenced by other artworks that that artist has seen at some point, and the same is the case for AIs.
 
I don't see how that follows logically.

No matter how people ultimately hash out the angels-on-pins discussion around how similar AI programs' training is or isn't to human learning, it doesn't change the fact that AI programs are commercial products manufactured by companies for profit, and as a society we've already decided that it is acceptable for artists to assert some level of control over the commercial use of their artworks.

But that's the very legal point that is being asked. The argument put by some is that the AI is not using the work by storing the image and then regurgitating that same image; it is analysing the image - like a human artist does when they look at another artist's work - and then using that analysis when it creates a new work*, not the artist's work itself.

It's like me spending hours looking at a Rembrandt and working out how he used colour to represent light, so that when I then produce a piece of art I may think "remember how Rembrandt did X & Y to achieve Z - I'll do that". That is me using an analysis of his work, not copying one of his works; even if the copyright still existed, my use of my analysis would not be considered something he should receive any remuneration for.
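To make that "analysis, not storage" point concrete, here is a minimal, purely illustrative training-loop sketch (assuming PyTorch, with random tensors standing in for real images; it is not the training code of any actual generative model). What the process leaves behind is a fixed-size set of numeric weights, not a copy of each training image - which is also why the footnote below, about models occasionally reproducing a training work, is the interesting caveat.

```python
# Toy training loop: the model "learns from" images by nudging a fixed set of
# weights; the images themselves are never stored inside the model.
# Assumptions: PyTorch is installed; random tensors stand in for a real dataset.
import torch
import torch.nn as nn

model = nn.Sequential(                 # tiny stand-in for an image model
    nn.Conv2d(3, 8, 3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 3, 3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):
    images = torch.rand(4, 3, 32, 32)               # a "batch of training images"
    noisy = images + 0.1 * torch.randn_like(images)
    loss = loss_fn(model(noisy), images)            # learn to denoise - a crude nod
    optimizer.zero_grad()                           # to how diffusion models train
    loss.backward()
    optimizer.step()

# All that persists afterwards is the weights - the same number of them no
# matter how many images were seen during training.
n_weights = sum(p.numel() for p in model.parameters())
print(f"model retains {n_weights} numbers, not the images themselves")
```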

Some of this has arisen without malice - most of the available generative AIs came out of academic research which legally could use the datasets compiled from what they could grab off the 'net (research use is one of the exceptions to copyright protection in many countries).

Now I think these types of legal action will soon be quite irrelevant, as folk will move away from datasets scraped "blindly" from the internet and start using datasets that they do have the rights to. For example, Adobe using its stock image library.

*There is some interesting research that makes that not quite as clear-cut as it might have been; it seems that generative AI can sometimes reproduce an original work based on a prompt.
 

From a legal perspective, most of this material has all rights reserved except where explicitly granted or for exceptions permitted by law (which includes your example of a human being studying it and making derivative works).

It's entirely unclear to me that an AI would be or should be treated as analogous to a human mind.
 
But when we are not talking about citing other works, but making something completely new, I believe that AIs have the same right to be aware of other works, just like humans do. Every artwork that humans produce is influenced by other artworks that that artist has seen at some point, and the same is the case for AIs.

But that's the very legal point that is being asked. The argument put by some is that the AI is not using the work by storing the image and then regurgitating that same image; it is analysing the image - like a human artist does when they look at another artist's work - and then using that analysis when it creates a new work*, not the artist's work itself.

You are both missing the premise; maybe it's my fault for not expressing it clearly enough.

Both of your arguments hinge on the matter of whether and how artworks are being used by the program itself. I'm talking about the matter of artworks being used by the software company during the manufacturing process of the program.
 
I do agree with you to a certain extent. When you are creating art that is expressly meant to cite another artist’s work, like you were in your course, or in my example the caricature, I do agree that an explicit license might be necessary (well, perhaps not in the case of the caricature, as the court decided).

But when we are not talking about citing other works, but making something completely new, I believe that AIs have the same right to be aware of other works, just like humans do. Every artwork that humans produce is influenced by other artworks that that artist has seen at some point, and the same is the case for AIs.

AIs don't have any rights. They're software whose main purpose is to squeeze as much value as possible out of the common heritage of human culture for the sole benefit of the rich. And there are no laws to prevent the great theft, no recourse for artists to protect their work from being exploited for the enrichment of disgusting CEOs. These are the end times.
 
