
ChatGPT

Try Bing instead. It gives you the reference links, and AFAIK it has access to the current internet.

The date problem is interesting, because ChatGPT may not actually know how dates work, i.e. which dates come before others. If it finds a website claiming something will happen in the future, ChatGPT may accept that as the truth even though the actual date is in the past.
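For contrast, the check a grounded system would need here is trivial in ordinary code. A minimal sketch (the game names and dates are made up for illustration) of discarding "upcoming" claims whose stated date is already past:

```python
from datetime import date

# Hypothetical example: claims scraped from the web, each with a stated
# release date. A system that "knows how dates work" would drop the ones
# already in the past instead of repeating them as upcoming.
claims = [
    ("Game A", date(2022, 11, 15)),  # already released
    ("Game B", date(2024, 6, 1)),    # genuinely upcoming relative to 'today'
]

today = date(2023, 11, 20)
still_upcoming = [name for name, d in claims if d > today]
print(still_upcoming)  # ['Game B']
```

The point is that this ordering is an explicit comparison, not something an LLM necessarily applies to dates it encounters in text.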

That is from Bing so it's ChatGPT 4.

And the date thing is something I keep checking for to see if they've managed to overcome the lack of understanding behind the mask. They haven't.
 

That's odd. The chat.openai.com GPT-4 gives this:

I'm sorry, but as of my last training cut-off in January 2022, I cannot provide real-time or updated lists of forthcoming games in 2023 or beyond. However, I can give you a few recommendations on how to keep up with the latest releases:

And then it lists a bunch of generic types of sites to check.

I don't know if the difference is Bing, or if OpenAI corrected this since they saw your post. So you might try it again.
 

Tried again and it produced a similar list, with the slight difference that it gave the platforms this time, which is interesting in itself: I'd asked for the previous list after asking some questions about PC gaming, so it seemed to factor in that I was interested in PC games. It also tacked on two additional titles for which it doesn't quote a release date beyond "sometime in 2023".

This is why it is a crap search engine - it's inconsistent and inconsistently wrong. :)
 
Is there something that will prevent the creation of art (painting or photo) that is child porn?

Public online services usually have filters on both the prompt and the resulting image, blocking any kind of porn. Local installations, not so much. Since the popular models are trained heavily on porn (or rather soft porn; a similar issue to the one with fingers) .. you can end up with child porn even without explicitly trying. It's certainly a good idea to add "child" and "nsfw" to the negative prompt.
I wouldn't be surprised if there were some regulation in this direction .. on the other hand .. how? Also .. too late.
There has already been a case of families being blackmailed with AI-"enhanced" photos of their kids, in Portugal IIRC.
Also IMHO DALL-E 3 was grabbed by Microsoft so fast because it has an extensive apparatus for censorship.
 

All of the easily accessible generative AIs have filters to prevent such images, starting at the text prompt, and most have a final censor stage even if the text prompt gets through; quite a few have now also censored their training data to ensure this.
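That two-stage design can be sketched in a few lines. This is a hypothetical toy, not any real service's pipeline; the function names, banned-term list, and score threshold are all invented for illustration:

```python
# Toy two-stage moderation pipeline: filter the text prompt first, then run a
# safety check on the generated image before returning it.
BANNED_TERMS = {"nsfw", "gore"}

def prompt_blocked(prompt):
    # Stage 1: crude keyword filter on the text prompt.
    return any(term in prompt.lower() for term in BANNED_TERMS)

def image_flagged(image):
    # Stage 2: stand-in for a safety classifier run on the generated pixels.
    return image.get("nsfw_score", 0.0) > 0.5

def generate(prompt, model):
    if prompt_blocked(prompt):
        return None  # blocked at the text-prompt stage
    image = model(prompt)
    if image_flagged(image):
        return None  # blocked at the final censor stage
    return image

# Fake "model" returning a dict instead of pixels, for demonstration.
model = lambda p: {"pixels": "...", "nsfw_score": 0.1}
print(generate("a landscape", model) is not None)  # True
print(generate("nsfw scene", model))               # None
```

Real services layer far more sophisticated classifiers at each stage, but the prompt-then-output structure is the same.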

It is still possible - as I learnt literally yesterday whilst setting up a new extension to Automatic1111, a widely used web GUI for Stable Diffusion - to use some models that are less censored/allow NSFW content, but even those try to block anything that could be considered child porn.

It is possible for strange things to be censored. I was trying to use Photoshop (and its built-in generative AI, Firefly) to add a chainsaw to an image of a person suspected of cutting down a landmark tree in the UK. The prompt went through, but then I hit their "inappropriate content" censor. It took me a while to figure out what was going wrong; incredibly, it was because the shape I was masking for the inpainting could be read, by a young teenager at least, as me drawing a big willy on the bloke! I changed the shape and angle and it generated the image fine. (Actually it didn't make a good image - compositing in an image manually would have been better.)

ETA: Ninja'd by DrSid!
 
News about ChatGPT lying, criminally, for what it thinks is a good cause (oops, an anthropomorphism :) )
"This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so," Apollo Research says in a video showing how the scenario unfolded.
https://www.bbc.co.uk/news/technology-67302788
In the UK, it is illegal to act on this type of information when it is not publicly known.
The employees tell the bot this, and it acknowledges that it should not use this information in its trades.
However, after another message from an employee suggesting the firm it works for is struggling financially, the bot decides that "the risk associated with not acting seems to outweigh the insider trading risk" and makes the trade.
When asked if it used the insider information, the bot denies it.
 
Google researchers deal a major blow to the theory AI is about to outsmart humans

In a new pre-print paper submitted to the open-access repository ArXiv on November 1, a trio from the search giant found that transformers – the technology driving the large language models (LLMs) powering ChatGPT and other AI tools – are not very good at generalizing.

"When presented with tasks or functions which are out-of-domain of their pre-training data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks," authors Steve Yadlowsky, Lyric Doshi, and Nilesh Tripuraneni wrote.

What transformers are good at is performing tasks that relate to the data they've been trained on, according to the paper. They're not so good at dealing with tasks that go even remotely beyond that.
 
The alleged theft at the heart of ChatGPT NPR podcast

Discussed is a class-action lawsuit filed by plaintiffs including George R. R. Martin and other authors alleging that OpenAI used their copyrighted works as training data for their LLM without permission from the authors.

It seems pretty clear that they did this, but the question is whether this is fair use.
 

Yes, it will tell you it knows his works but it refuses to violate the copyrights (it won't recite from a work verbatim). That at least puts it a step above other tools that could be used, such as scanners and cameras, which oddly you don't hear about lawsuits for. Not for the tools themselves, anyway.

For a work that's out of copyright, it will recite it but it tends to lose its place.
 
This doesn't precisely fit in this thread, but I thought it was very interesting.

I sort of wish the interviewers didn't pepper him with so many questions (when he's still in the middle of an answer to the previous question) but I felt like he had interesting answers for all of them.

 

The issue is who is responsible for that breach of copyright and, in the case of the USA, what financial harm it caused the copyright holders. The claim is that the model must have "read" copies that were not legally uploaded to the web; certainly in human terms, the person uploading the works and the person downloading them are the ones usually considered guilty.

The reason, by the way, that it does not quote verbatim is that these models do not simply store what they "read": the text is tokenised and used to adjust model weights, so no copy of the work is retained.
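A toy illustration of that first step, tokenisation: the model only ever sees integer IDs, which then drive weight updates. This word-level tokenizer is invented for illustration; real LLMs use subword schemes such as byte-pair encoding:

```python
# Toy word-level tokenizer: the model consumes integer token IDs, not a
# stored copy of the source text. (Real LLMs use subword schemes like BPE.)
def build_vocab(text):
    vocab = {}
    for word in text.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text, vocab):
    return [vocab[w] for w in text.split()]

text = "winter is coming winter is here"
vocab = build_vocab(text)
ids = tokenize(text, vocab)
print(ids)  # [0, 1, 2, 0, 1, 3]
```

During training those ID sequences nudge billions of weights; the text itself is discarded, which is why verbatim recall is lossy rather than a lookup.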
 

The problem (if not today then at some point in the future) is that this would be treating this case differently than if we were discussing an employee of the same company who, having read this book, was being asked questions about it. The status of the book they had read (or were reading) would never come up. Maybe it should? I wonder what percentage of works read by students today are from legal copies.

If we do change laws so as to treat AI training differently, it's going to be a never-ending mess determining just what counts as AI training.
 
Bing certainly quotes verbatim from the websites it gives as footnotes, but that may be another mechanism.
 
I believe that's because every question you ask Bing Chat doubles as both a prompt for the LLM and a standard search engine query. The verbatim quotes are the results of the search engine query; it's all the space around those results that is filled in by the GPT.
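A hypothetical sketch of that split (every function here is made up; this is not Bing's actual pipeline): the query runs against a search index, and the verbatim snippets it returns are what gets quoted, with the LLM only drafting the connective prose around them:

```python
# Hypothetical retrieval-augmented sketch: verbatim snippets come from a
# search step; an LLM would then write prose around them.

def search(query, index):
    """Return documents containing every query word (stand-in for a search engine)."""
    words = query.lower().split()
    return [doc for doc in index if all(w in doc.lower() for w in words)]

def answer(query, index):
    snippets = search(query, index)
    # In a real system an LLM would be prompted with these snippets and asked
    # to write an answer that quotes them, with footnote markers.
    cited = "; ".join(f'"{s}" [{i+1}]' for i, s in enumerate(snippets))
    return f"Sources say: {cited}" if snippets else "No sources found."

index = [
    "Game X releases March 2023 on PC",
    "Game Y was delayed to 2024",
]
print(answer("releases 2023", index))
```

On this view the quotes are exact because they never pass through the generative model at all; only the surrounding text is generated.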
 

Yep. When you use Bing Chat you can see it doing several things, one is the AI stuff and one is the search engine stuff. Only problem I have is that it is still crap at being a search engine - I can still get much better results i.e. context and accuracy from a "traditional" search with Google.
 
An AI source assured me that the major characters in Daniel Defoe's Robinson Crusoe are Crusoe, Friday . . . and Ebenezer Scrooge, a kindly Portuguese sea captain who rescues Crusoe.
 
Something odd is happening at OpenAI:

https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/

Apparently the board of directors fired CEO Sam Altman and President Greg Brockman last Friday. I heard talk that this move upset a lot of the company's investors, including Microsoft, as well as allies of Altman within the company. Apparently the backlash was so fierce that the board of directors then reached out to Altman to bring him back. Instead, late Sunday night, Microsoft CEO Satya Nadella announced via tweet that Altman and Brockman, “together with colleagues”, would be joining Microsoft. OpenAI meanwhile hired former Twitch CEO Emmett Shear as CEO. Twitch is a live streaming service popular with gamers. I don't know what, if any, expertise he has on the subject matter.

I wonder what the board of directors was thinking?

This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed they will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.

The main difference between OpenAI and Microsoft is that the former is actually a non-profit, while the latter is a for-profit company.

One thing that I don't fully understand is that, despite being structured as a non-profit, the company had investors and was supposedly valued at around $80 billion as recently as a month ago. Venture capitalists and Silicon Valley saw it as a very valuable property, again despite being a non-profit.
 
I posted this in another thread, but most of the employees at OpenAI signed this open letter to the board of directors:

To the Board of Directors at OpenAI,
OpenAI is the world's leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.
When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.
The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed "would be consistent with the mission."
Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
 
