• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

ChatGPT

Nice to see writers who have the good fortune to be able to afford effective legal advocacy are using their privilege for good.

Wishing good luck to RR Martin et al in their quest to kill this beast.

Even if they succeed it won't help most writers, as their suit is going to rest on whether the existence of illegal copies of their work online - copies that OpenAI might (they have no evidence of this) have included in its training dataset - breached their copyright.

One of the reasons they claim that it did contain copies of their works is that it can provide accurate summaries and detailed information about those works. But knowing how LLMs work, that cannot be assumed. I would have thought it would be possible for OpenAI to check whether their works were in the datasets it is using.

All that aside, I think it is stretching copyright law to cover that, especially as the AIs do not retain a copy of the dataset they were trained on. I think new legislation is required to cover this.
 
Asimov's three rules of robotics. Eating in the classroom only breaks the second rule (obey humans). Hitting a puppy breaks the first rule (do not harm a human). A puppy is close enough to a human.

Terrible article - "Roboticist Daniel Wilson was a bit more florid. “Asimov’s rules are neat, but they are also ********. For example, they are in English. How the heck do you program that?”"

In the stories it is explained that those are an English approximation of the mathematical models programmed in the positronic brains. Plus look at who is "denying" this - "Roboticist Daniel Wilson" or should we write that as "R. Daniel" or rather "R. Daneel"...
 
There's no reason a monkey couldn't randomly type the words "A monkey is a mammal". We just need enough monkeys and a sufficient amount of time. But neither it nor the words on the page "know" anything. The only entity that can give context to the random arrangement of symbols is a human reader who speaks English, and so only the reader can be said to "know". Meaning is only created once they read the sentence.
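Just how long those monkeys would need is easy to put a number on. A quick back-of-envelope sketch in Python, assuming a toy 27-key typewriter (26 letters plus space) and case-insensitive matching:

```python
# Toy model: a monkey hits one of 27 keys (a-z plus space) uniformly at random.
# "A monkey is a mammal" is 20 keystrokes, ignoring case.
sentence = "a monkey is a mammal"
keys = 27

p = (1 / keys) ** len(sentence)            # chance one 20-keystroke attempt matches
expected_attempts = keys ** len(sentence)  # mean number of attempts before success

print(f"p = {p:.2e}, expected attempts = {expected_attempts:.2e}")
```

At roughly 4 x 10^28 expected attempts, the monkeys will be at it for a while - and, as above, none of those keystrokes carry meaning until a reader supplies it.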

I myself can randomly type :rule10 Maybe this means something to the lizard people of a distant swamp planet in dimension X, but unless one of those lizard people read it, it is just random gibberish.

FTFY - you don't want to be using language like that - it breaches rule 10!
 
Last edited:
These AIs are beginning to challenge what we thought was required to be "human" and to "know" things. When I started this post I didn't have it all planned out; these words I am typing originate from somewhere between my ears. I have no consciousness of "me" somehow deriving them from some process - they are simply there when I go to type.

Do I know what they mean?
 
While thoughts can seem to be original, and not something you specifically 'learned' at some point, all of your thoughts are the product of knowledge you have acquired one way or another, even if it is by instinct.

Is there an AI equivalent of 'instinct'?
 
Several of the generative art AI companies are now letting you create your own model to use when generating new art.

I thought I'd try with my cat, Sky - typically for a cat owner, I have lots of photos of her. I uploaded 48 images, all with the cat in them but with all types of backgrounds, focal lengths and so on. After uploading, it took about 15 minutes for the model to train.

It is rather spooky: I asked for "a picture of Sky in a box", and the results are not bad. I can tell they aren't of my cat for a few reasons*, but they have captured her likeness and face very well. I am sure if I sent these to a friend or family member who knows her, they would think it was her.



As ever, not all attempts turn out quite as "good".



Some really goofy "hallucinations". Interestingly it seems to have picked up on one of her idiosyncrasies, which is that she usually - especially when in the cottage loaf position - sticks out her right paw and leg.


*The reasons are: the fur is rather too thick, the body shape is a bit chunky, and finally she has a birthmark on her pink nose that isn't in any of the photos.
 
These AIs are beginning to challenge what we thought was required to be "human" and to "know" things. When I started this post I didn't have it all planned out; these words I am typing originate from somewhere between my ears. I have no consciousness of "me" somehow deriving them from some process - they are simply there when I go to type.

Do I know what they mean?

It does little to raise the question of what counts as "human", even if you think they're engaging in some form of intelligence. I see no reason to assume these things should be treated as creative minds rather than mass plagiarism machines.

Our current law allows humans to consume other people's works and let that inspire their own original works, which may or may not be heavily derivative (within limits) of prior work, but I see no reason why any such standard should apply to a computer program.
 
Last edited:
It does little to raise the question of what counts as "human", even if you think they're engaging in some form of intelligence. I see no reason to assume these things should be treated as creative minds rather than mass plagiarism machines.
Our current law allows humans to consume other people's works and let that inspire their own original works, which may or may not be heavily derivative (within limits) of prior work, but I see no reason why any such standard should apply to a computer program.

Why not treat them as the tools they are? The creativity still comes from a human.
 
Why not treat them as the tools they are? The creativity still comes from a human.

Because people aren't using it for creativity, they're using it to avoid existing copyright laws that require that they pay those humans for creativity. It's why people are so excited about this even though the resulting product is objectively worse in pretty much every way. All the creativity and none of the ownership rights, what's not to love? /s

If George RR Martin wants to use ChatGPT on his own existing works to help him as part of the writing process for future works, that's no issue. The problem is that people seem to think this technology makes it open season to rip off the people responsible for the original works entirely.

If a rule came out that someone has to have permission for everything in their training set, the use case for this technology would drop to near zero. The entire purpose is to create counterfeit products for cheap.
 
Last edited:
Because people aren't using it for creativity, they're using it to avoid existing copyright laws that require that they pay those humans for creativity. It's why people are so excited about this even though the resulting product is objectively worse in pretty much every way.

We've been over this point a few times in the thread, but as a quick recap: the resulting AIs do not contain the images they were trained on. They are not copying and pasting; they are creating unique images. The example above of my cat in a box demonstrates that rather well - I used no photos that looked like the ones it created (under my prompting, of course).
 
We've been over this point a few times in the thread, but as a quick recap: the resulting AIs do not contain the images they were trained on. They are not copying and pasting; they are creating unique images. The example above of my cat in a box demonstrates that rather well - I used no photos that looked like the ones it created (under my prompting, of course).

You created the image presumably from a dataset of photos that are already yours to do with as you please, correct? I agree it's a unique image, but it's clearly heavily derivative of real works to the point of questionable originality, which is a non-issue if those original works are also yours.

Perhaps you can see how that's a different situation than creating a crappy "George RR Martin" sound-alike machine trained using his work without giving the real Martin a dime?
 
Last edited:
...snip...

If a rule came out that someone has to have permission for everything in their training set, the use case for this technology would drop to near zero.

Several of the companies have created their models using images/data they hold the rights to. Adobe and MS have both stated that they will indemnify users against any claims of copyright breach arising from use of their tools.

The entire purpose is to create counterfeit products for cheap.

Rubbish. People are using them to create unique works of art; it has opened up image creation to millions of new people.
 
There's no reason a monkey couldn't randomly type the words "A monkey is a mammal". We just need enough monkeys and a sufficient amount of time. But neither it nor the words on the page "know" anything. The only entity that can give context to the random arrangement of symbols is a human reader who speaks English, and so only the reader can be said to "know". Meaning is only created once they read the sentence.

I myself can randomly type "Ason fert tu kuvak." Maybe this means something to the lizard people of a distant swamp planet in dimension X, but unless one of those lizard people read it, it is just random gibberish.

Monkey typing is just a noise generator. Unless the monkey can say when it has written a meaningful claim and when it has written nonsense, it doesn't really know anything. It would need another entity to say "oh look, the monkey wrote something meaningful".
That's completely different from ChatGPT, which produces formally correct output basically always, and factually correct output at a level similar to humans.
 
While thoughts can seem to be original, and not something you specifically ' learned " at some point, all of your thoughts are the product of knowledge you have acquired, one way or another, even if it is by instinct.

Is there an AI equivalent of ' instinct ' ?

Well .. if you take 'instinct' as knowing something without knowing how you know it, and without being able to source the knowledge .. I'd say with ChatGPT that's always the case.
It can be taught about how it works. It can "know" and describe it. But it doesn't remember anything, and it can't source anything, beyond possibly knowing that everything it knows came from the training phase.
 
Several of the generative art AI companies are now letting you create your own model to use when generating new art

You can do stuff like this with free software (like Automatic1111). It takes a bit of experimenting and practice - it's certainly more for the IT-savvy, and you need a beefy graphics card. But then you have the freedom to fine-tune the output.
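For the curious: once the Automatic1111 web UI is running locally with its API enabled, you can drive it from a script. A minimal sketch in Python - the port and the parameter defaults below are assumptions, and the real payload accepts many more tuning knobs than shown:

```python
import base64
import json
from urllib import request

# Automatic1111's REST endpoint for text-to-image (default local port 7860).
API_URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

def build_payload(prompt, steps=20, width=512, height=512):
    """Minimal txt2img request body; the real API takes many more options."""
    return {"prompt": prompt, "steps": steps, "width": width, "height": height}

def txt2img(prompt):
    """POST a prompt to the local web UI and return the first image as PNG bytes."""
    data = json.dumps(build_payload(prompt)).encode()
    req = request.Request(API_URL, data=data,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        result = json.load(resp)
    # Generated images come back as base64-encoded strings.
    return base64.b64decode(result["images"][0])
```

For example, `open("sky.png", "wb").write(txt2img("a picture of Sky in a box"))` - assuming a model fine-tuned on your own photos is loaded.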

Sadly, with language models you can really only play with the smallest ones at home, which are two generations behind. They are just too hardware-demanding.
 
It seems you are talking about 'feeling' rather than 'knowing'...

I would agree you need human-like consciousness to experience feelings.

There is a 'feeling' aspect to recognizing that one knows something. An example is me knowing when I'm lying about something.
 
Last edited:
