
Could be that garreth is an unknown word to it, which is detected even before the language model itself.

The word is unknown to it, but there's no dictionary lookup in that sense. You can make up words for use in a program and it will happily accept them. E.g.:

Code:
Me: Is the following legal Python code?

def flazPlorg(zipdo):
    garreth = zipdo + 42
    print("garreth = {garreth}")

GPT-4: The code you've written is almost correct, but the string formatting
is not done correctly. The print() statement should contain an f prefix...

Me: BTW, what is a garreth?

GPT-4: In the context you've provided, garreth appears to be a variable
name in your Python code. ...
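
For what it's worth, GPT-4's diagnosis here is right: the only thing wrong with the made-up function is the missing f prefix on the string. A corrected version (with the names still being nonsense, of course) would look like this:

Code:
def flazPlorg(zipdo):
    garreth = zipdo + 42
    # f-string, so {garreth} is actually interpolated
    print(f"garreth = {garreth}")

flazPlorg(0)  # prints: garreth = 42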
 
Our first podcast episode made by AI

It's an NPR podcast that I sometimes listen to, and as an experiment they had AI write an episode of their show. Also, one of the voices is AI-generated (I wouldn't have been able to tell the difference if they hadn't said so in advance). They do spend a little while explaining what you are about to hear, in more detail than is probably necessary. As for the part written and performed partly by the AI itself: while it's nothing fantastic, and it contained some incorrect "facts", again, I'm not 100% sure I would have picked up on it if they hadn't told listeners in advance. Some of the humor even made me chuckle, and it had a level of irony to it: a computer mocking computers for lacking a sense of humor. That's self-deprecation, which is a technique actually employed by people.

(Whether the irony was intentional or not, I don't know. But it worked nevertheless.)
 
I hope people are beginning to see why it's a mistake to take seriously pop-clickbait headlines about how "ChatGPT passed the bar exam!" or treat them as having any meaningful implications.

I'm not excusing the lawyer, but from what I can determine he was using GPT-3.5, which failed the exam (10th percentile). GPT-4 is the one that passed it (90th percentile). It's why it's important to distinguish the two.
 
I'm not excusing the lawyer, but from what I can determine he was using GPT-3.5, which failed the exam (10th percentile). GPT-4 is the one that passed it (90th percentile). It's why it's important to distinguish the two.

I'm sure it also depends on what training data sets it's given to study for the exam. There are probably textbooks written for people who want to pass the bar exam, so if you include those in the data used to train it up, it could make a big difference.
 
I'm sure it also depends on what training data sets it's given to study for the exam. There are probably textbooks written for people who want to pass the bar exam, so if you include those in the data used to train it up, it could make a big difference.

True, though we have to assume that the humans GPT-4 was competing against studied those textbooks too. We just can't hold all of that in our puny brains.
 
ChatGPT appears to only know around 25 jokes, and can't make a coherent original one when asked to.
I have often asked it to produce a joke when demonstrating it for friends, and the results have only included one of the 25 jokes here. You can easily make it produce an original joke by specifying something that should be part of the joke, like writing “Please create an original joke involving a radiator and a bicycle”.

I just did this, and the reply was: “Why did the bicycle get in trouble with the radiator?

Because it kept wheely-ing close and telling hot air jokes!”

I think it is a decent attempt that would have taxed a human being too.

But most of its jokes are in the form of riddles, so I have also often asked it to produce jokes in anecdotal form.

One such result was:

A man walks into a doctor's office with a carrot stuck in his ear. The doctor takes a look and says, "Well, I can see the problem. You have a carrot stuck in your ear!" The man replies, "I know that! Can you please help me get it out?" The doctor takes a deep breath and says, "Alright, I have an idea. But you're not going to like it." The man asks, "What is it?" The doctor responds, "Well, first, I'm going to stick this banana in your other ear. Then, I'm going to tell you to jump up and down on one foot while flapping your arms like a chicken." The man looks at the doctor incredulously and asks, "What kind of crazy idea is that?" The doctor shrugs and says, "Hey, it may not work, but at least you'll look like a complete idiot while you're doing it!"

This is certainly not a great result, but being a decidedly uncreative person myself, I can accept it for the effort.
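
For anyone who would rather script this kind of constrained-joke prompt than type it into the chat window, a rough sketch using the official openai Python package might look like the following (the model name, prompt, and API-key setup are my own assumptions, not something from the exchange above):

Code:
# Rough sketch only: assumes the openai package (v1.x) is installed and
# that an OPENAI_API_KEY environment variable is set.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder; use whichever model you have access to
    messages=[
        {
            "role": "user",
            "content": "Please create an original joke involving a radiator and a bicycle.",
        }
    ],
)

print(response.choices[0].message.content)

The constraint ("involving a radiator and a bicycle") is what pushes it off its stock riddles and forces something original.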
 
I started a thread in social issues/current events about AI-generated click-bait (actually I started it in the CT forum, about the flood of click-bait using disgusting images and stories, but it got moved and evolved into a thread about AI).

Anyway, I came across this excellent video discussing a number of issues around AI, in particular copyright issues, including those that arise when one starts copying art and images. There are few if any laws allowing an artist's style to be copyrighted, and with AI now scraping those styles and then generating new art from them, it is a looming problem.

Another issue is people using these AI programs to expand troll-farm operations.

Here's the post with the Youtube link:
http://www.internationalskeptics.com/forums/showpost.php?p=14092265&postcount=48

Or just the video link if one prefers.

 
[snip]

Anyway, I came across this excellent video discussing a number of issues around AI, in particular copyright issues, including those that arise when one starts copying art and images. There are few if any laws allowing an artist's style to be copyrighted, and with AI now scraping those styles and then generating new art from them, it is a looming problem.

[snip]


And it isn't clear to me that such laws would even be a good idea.

If AI is copying artists' styles it would only be doing what artists of every discipline have been doing for as long as there has been art.

It could be argued that that is one of the practices that generates innovation in the arts, since no artist can (or might not even want to) copy another style perfectly. Those changes, minor though they may be, are a lot of what comprises evolution in art forms.
 
And it isn't clear to me that such laws would even be a good idea.

If AI is copying artists' styles it would only be doing what artists of every discipline have been doing for as long as there has been art.

It could be argued that that is one of the practices that generates innovation in the arts, since no artist can (or might not even want to) copy another style perfectly. Those changes, minor though they may be, are a lot of what comprises evolution in art forms.

But now it's being done wholesale, scraping artists' work off the Net with no credit or compensation for the artists.

I get it, it's been done before, and it will be hard to copyright styles. It is difficult to compensate musicians with all the free downloads of their work.

Doesn't mean this current issue can't be addressed. Require AI programs acknowledge they scraped artist styles, require they ID themselves as AI, and block any profits that come from scraped styles unless the artists are compensated.
 
But now it's being done wholesale, scraping artists' work off the Net with no credit or compensation for the artists.

I get it, it's been done before, and it will be hard to copyright styles. It is difficult to compensate musicians with all the free downloads of their work.

Doesn't mean this current issue can't be addressed. Require AI programs acknowledge they scraped artist styles, require they ID themselves as AI, and block any profits that come from scraped styles unless the artists are compensated.

Indeed. Using a picture in a training set for the purpose of generating pictures similar to it is clearly not transformative. It should only be allowed with permission.
Not that it would stop pirates, but it could stop blatant commercial misuse.
 
Prohibit the use of automatically generated computer whatsits in anything that isn't completely public domain. That should make for an entertaining scrambling of the entertainment industry, and better conditions for human artists.
 
Prohibit the use of automatically generated computer whatsits in anything that isn't completely public domain. That should make for an entertaining scrambling of the entertainment industry, and better conditions for human artists.

That seems a tad regressive and would potentially harm a lot of artists.

I'm a - very much - amateur artist and I've started to use several "AI" tools recently as part of my "workflow" - created a few textures via an AI generative system and used one to turn a sketch into a tiling pattern. Why should that mean I lose my copyright?

And for the bigger companies, the concern that they are using artwork they "found" online, without anyone being paid for it, doesn't necessarily apply, since they can use assets they do own the rights to. For instance, Adobe uses its own "Adobe Stock" images for its AI training.
 

Doesn't mean this current issue can't be addressed. Require AI programs acknowledge they scraped artist styles, require they ID themselves as AI, and block any profits that come from scraped styles unless the artists are compensated.

It is going to be very hard to legislate for a "style".

Indeed. Using a picture in a training set for the purpose of generating pictures similar to it is clearly not transformative. It should only be allowed with permission.
Not that it would stop pirates, but it could stop blatant commercial misuse.

Yet that is what most artists do. Indeed, a lot of formal art training consists of understanding how someone created a particular picture and the techniques they used, and then reproducing those techniques. In this respect AIs aren't doing anything different from human artists, apart from the scale at which they can do it.
 
