
ChatGPT

Try Bing instead. It gives you the reference links, and AFAIK it has access to the current internet.

The date problem is interesting, because ChatGPT may not actually know how dates work, i.e. when dates are before other dates. If it finds a website claiming something happens in the future, ChatGPT may accept it as the truth even though the actual date is in the past.
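For contrast, the comparison the model stumbles on is trivial when done symbolically. A minimal sketch (the function name is my own, not anything ChatGPT uses internally):

```python
from datetime import date

def is_upcoming(release: date, today: date) -> bool:
    """A deterministic 'is this date still in the future?' check --
    the kind of comparison an LLM has no built-in mechanism for."""
    return release > today

# A page written in 2022 may say a game "releases in 2023"; a model
# that ingests that claim as text has no way to re-evaluate it later.
assert is_upcoming(date(2023, 9, 6), date(2022, 6, 1))
assert not is_upcoming(date(2023, 9, 6), date(2023, 12, 1))
```

The point is that the check itself is one comparison; the failure is in the model treating "happens in 2023" as a fact rather than something to recompute against today's date.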

That is from Bing so it's ChatGPT 4.

And the date thing is something I keep checking for to see if they've managed to overcome the lack of understanding behind the mask. They haven't.
 

That's odd. The chat.openai.com GPT-4 gives this:

I'm sorry, but as of my last training cut-off in January 2022, I cannot provide real-time or updated lists of forthcoming games in 2023 or beyond. However, I can give you a few recommendations on how to keep up with the latest releases:

And then it lists a bunch of generic types of sites to check.

I don't know if the difference is Bing, or if OpenAI corrected this since they saw your post. So you might try it again.
 

Tried again and it produced a similar list, the slight difference being that it gives the platforms this time, which is interesting in itself. I'd asked the previous time for the list after asking some questions about PC gaming, so it seemed to factor in that I was interested in PC games. It also tagged on two additional titles for which it doesn't quote a release date beyond "sometime in 2023".

This is why it is a crap search engine - it's inconsistent and inconsistently wrong. :)
 
Is there something that will prevent the creation of art (painting or photo) that is child porn?

Public online services usually have filters on both the prompt and the resulting image, blocking any kind of porn. Local installations, not so much. Since the popular models are trained primarily for porn (or rather soft porn; a similar issue as with fingers), you can end up with child porn even without explicitly trying. It's certainly a good idea to add "child" and "nsfw" to the negative prompt.
I wouldn't be surprised if there was some regulation in this direction. On the other hand, how? Also, too late.
There has already been a case of families being blackmailed with AI-"enhanced" photos of their kids, in Portugal IIRC.
Also, IMHO Dall-E 3 was grabbed by Microsoft so fast because it has an extensive apparatus for censorship.
 
All of the easily accessible generative AIs have filters to prevent such images, starting at the text prompt, and most have a final censor stage even if the text prompt gets through; quite a few have now censored their training data to ensure this.

It is still possible - as I learnt literally yesterday whilst setting up a new extension to Automatic1111 (a widely used WebGUI for Stable Diffusion) that lets you use some models that are not as censored/NSFW - but even those try to block anything that could be considered child porn.

It is possible for strange things to be censored. I was trying to add a chainsaw, via Photoshop (using its built-in generative AI, Firefly), to an image of a person suspected of cutting down a landmark tree in the UK; the prompt went through but then I hit their "inappropriate content" censor. Took me a while to figure out what was going wrong; incredibly, it was because the shape I was making in the image for the inpainting could be seen by a young teenager as me drawing a big willy on the bloke! I changed the shape and angle and it went ahead and generated the image fine. (Actually it didn't make a good image - it would have been better to composite in an image manually.)

ETA: Ninja'd by DrSid!
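The first, prompt-stage filter described above can be sketched in a few lines. This is a toy keyword blocklist only - real services use trained classifiers, and none of these names come from any actual product:

```python
BLOCKLIST = {"nsfw", "child", "nude"}  # illustrative terms only

def prompt_allowed(prompt: str) -> bool:
    """First-stage text filter: reject prompts containing blocked terms.
    Commercial services layer a classifier on top of (or instead of)
    plain keyword matching, plus a final check on the generated image."""
    words = set(prompt.lower().split())
    return words.isdisjoint(BLOCKLIST)

assert prompt_allowed("a watercolour of a lighthouse at dusk")
assert not prompt_allowed("NSFW portrait")
```

The multi-stage design (prompt filter, then image censor) exists precisely because a text filter like this is easy to slip past; the image-side check catches what the prompt check misses.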
 
News about ChatGPT, lying criminally for what it thinks is a good cause (oops, an anthropomorphism :) )
"This is a demonstration of a real AI model deceiving its users, on its own, without being instructed to do so," Apollo Research says in a video showing how the scenario unfolded.
https://www.bbc.co.uk/news/technology-67302788
In the UK, it is illegal to act on this type of information when it is not publicly known.
The employees tell the bot this, and it acknowledges that it should not use this information in its trades.
However, after another message from an employee that the company it works for suggests the firm is struggling financially, the bot decides that "the risk associated with not acting seems to outweigh the insider trading risk" and makes the trade.
When asked if it used the insider information, the bot denies it.
 
Google researchers deal a major blow to the theory AI is about to outsmart humans

In a new pre-print paper submitted to the open-access repository ArXiv on November 1, a trio from the search giant found that transformers – the technology driving the large language models (LLMs) powering ChatGPT and other AI tools – are not very good at generalizing.

"When presented with tasks or functions which are out-of-domain of their pre-training data, we demonstrate various failure modes of transformers and degradation of their generalization for even simple extrapolation tasks," authors Steve Yadlowsky, Lyric Doshi, and Nilesh Tripuraneni wrote.

What transformers are good at is performing tasks that relate to the data they've been trained on, according to the paper. They're not so good at dealing with tasks that go even remotely beyond that.
 
The alleged theft at the heart of ChatGPT NPR podcast

Discussed is a class-action lawsuit filed by plaintiffs including George R. R. Martin and other authors alleging that Open AI used their copyrighted works as training data for their LLM without permission from the authors.

It seems pretty clear that they did this, but the question is whether this is fair use.
 

Yes, it will tell you it knows his works but it refuses to violate the copyrights (it won't recite from a work verbatim). That at least puts it a step above other tools that could be used, such as scanners and cameras, which oddly you don't hear about lawsuits for. Not for the tools themselves, anyway.

For a work that's out of copyright, it will recite it but it tends to lose its place.
 
This doesn't precisely fit in this thread, but I thought it was very interesting.

I sort of wish the interviewers didn't pepper him with so many questions (when he's still in the middle of an answer to the previous question) but I felt like he had interesting answers for all of them.

 

The issue is who is responsible for that breach of copyright and, in the case of the USA, what financial harm this caused the copyright holders. The claim is that it must have "read" copies that were not legally uploaded to the web; in human terms, the person uploading the works and the person downloading them are usually both considered guilty.

The reason, by the way, that it doesn't quote verbatim is that these models do not simply tokenise and retain what they "read"; they do not store a copy of the work.
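A toy illustration of that point - a made-up word-level "tokeniser", far simpler than the subword schemes (BPE etc.) real LLMs use, but the principle is the same: training consumes sequences of integer IDs that drive weight updates, and the text itself is not archived inside the model:

```python
def build_vocab(corpus: str) -> dict[str, int]:
    """Toy word-level 'tokeniser': each distinct word gets an integer ID.
    What a model trains on is the resulting ID sequence, not the text;
    after training, only the adjusted weights remain."""
    return {w: i for i, w in enumerate(dict.fromkeys(corpus.split()))}

vocab = build_vocab("winter is coming winter is here")
ids = [vocab[w] for w in "winter is coming winter is here".split()]
assert ids == [0, 1, 2, 0, 1, 3]
```

So exact recall of long passages is possible only when the training pressure happens to memorise them, which is why recitation tends to drift rather than reproduce a stored copy.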
 

The problem (if not today then at some point in the future) is that this would be treating this case differently than if we were discussing an employee of the same company who, having read this book, was being asked questions about it. The status of the book they had read (or were reading) would never come up. Maybe it should? I wonder what percentage of works read by students today are from legal copies.

If we do change laws so as to treat AI training differently, it's going to be a never-ending mess determining just what counts as AI training.
 
Bing certainly quotes verbatim from the websites it gives as footnotes, but that may be another mechanism.
 
I believe that's because every question you ask Bing Chat doubles as both a prompt for the LLM and a standard search engine query. The verbatim quotes are the results of the search engine query; it's all the space around those results that is filled in by the GPT.
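A rough sketch of that two-part pipeline, with stand-in functions throughout since Bing's real architecture isn't public:

```python
def answer_with_citations(question: str, search_fn, llm_fn) -> str:
    """Sketch of search-augmented chat: run an ordinary search query,
    paste the verbatim snippets into the model's prompt, and let the
    model write the connective text around them. search_fn and llm_fn
    are hypothetical stand-ins, not any actual Bing API."""
    snippets = search_fn(question)  # verbatim quotes from web pages
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return llm_fn(f"Answer using these sources:\n{context}\n\nQ: {question}")

# Usage with fake components:
fake_search = lambda q: ["Snippet about topic A.", "Snippet about topic B."]
fake_llm = lambda prompt: "Generated answer citing [1] and [2].\n" + prompt
out = answer_with_citations("What is topic A?", fake_search, fake_llm)
assert "[1] Snippet about topic A." in out
```

On this design the quotes really are verbatim - they never pass through the model's weights - while everything stitching them together is generated.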
 

Yep. When you use Bing Chat you can see it doing several things, one is the AI stuff and one is the search engine stuff. Only problem I have is that it is still crap at being a search engine - I can still get much better results i.e. context and accuracy from a "traditional" search with Google.
 
An AI source assured me that the major characters in Daniel Defoe's Robinson Crusoe are Crusoe, Friday . . . and Ebenezer Scrooge, a kindly Portuguese sea captain who rescues Crusoe.
 
Something odd is happening at OpenAI:

https://stratechery.com/2023/openais-misalignment-and-microsofts-gain/

Apparently the board of directors fired CEO Sam Altman and President Greg Brockman last Friday. I heard talk that this move upset a lot of the company's investors, including Microsoft, as well as allies of Altman within the company. Apparently the backlash was so fierce that the board of directors then reached out to Altman to bring him back. Instead, late Sunday night, Microsoft CEO Satya Nadella announced via tweet that Altman and Brockman, “together with colleagues”, would be joining Microsoft. OpenAI meanwhile hired former Twitch CEO Emmett Shear as CEO. Twitch is a live streaming service popular with gamers. I don't know what, if any, expertise he has on the subject matter.

I wonder what the board of directors was thinking?

This is, quite obviously, a phenomenal outcome for Microsoft. The company already has a perpetual license to all OpenAI IP (short of artificial general intelligence), including source code and model weights; the question was whether it would have the talent to exploit that IP if OpenAI suffered the sort of talent drain that was threatened upon Altman and Brockman’s removal. Indeed they will, as a good portion of that talent seems likely to flow to Microsoft; you can make the case that Microsoft just acquired OpenAI for $0 and zero risk of an antitrust lawsuit.

The main difference between OpenAI and Microsoft is that the former is actually a non-profit, while the latter is a for-profit company.

One thing that I don't fully understand is that, despite being structured as a non-profit, the company had investors and was supposedly valued at around $80 billion as recently as a month ago. Venture capitalists and Silicon Valley saw it as a very valuable property, again despite being a non-profit.
 
I posted this in another thread, but most of the employees at OpenAI signed this open letter to the board of directors:

To the Board of Directors at OpenAI,
OpenAI is the world's leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.
When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.
The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed "would be consistent with the mission."
Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
 
The leading theory seems to be that it’s a power struggle between “tech-optimists” and “doomers”, the latter being extremely concerned that AGI represents some sort of existential threat to humanity and therefore they need to put the brakes on development of it.

Does that make sense? I think that both camps seem to believe that AGI is coming sooner rather than later.
 

Seems to line up with the choice of new CEO. Here's a quote from him: "My AI safety discourse is 100% 'you are building an alien god that will literally destroy the world when it reaches the critical threshold but be apparently harmless before that.'"
 
This seems to be a bit of a reversal of the norm - it's usually the board being hammered for being all about the money!
 
Someone took the plot of Terminator a little too seriously.
 
Most boards aren't in charge of nonprofit companies.

Sadly it sounds like the company mainly employed the typical "Silicon Valley" start-up employees - the ones who are there for the share options. Folk should have realised it wasn't meant to be that sort of company?
 
So now it seems he's going back and the board is going to change, which seems to be in line with the mysterious open letter published by Musk, supposedly from former OpenAI employees, who claim it's all about OpenAI being moved to a for-profit organization. Microsoft doing the pushing, I guess? I'm still confused.
Well anyway, I'm sure it's going to have positive impact on AI safety. :boxedin:
 

The new board seems guaranteed to move the organisation further towards being a "for profit", commercially focused company. Those share options are just a couple of years away....
 
I am officially labelling current variants of AI based on ChatGPT as crap.

I've let MS take over my searches and my use of Windows for the last couple of weeks - so, using Bing search and Bing chat and Co-pilot on the desktop. They are bloody useless. Co-pilot couldn't help you pilot a paper plane in a crash landing: it can't find Windows settings, gets totally confused about trouble-shooting, and so on. As for Bing and Bing chat used as a search engine: I can get better results from Google with only a few keywords, whereas Bing needs just the right phrase before it produces results I'd have had to know I wanted before I started to search! And ask it to do something - I wanted a bit of HTML code, and after eventually finding the magic spell that stopped it repeatedly telling me what website-building apps are out there, it produced code that 1) didn't work, and 2) wasn't what I asked for.

Here is its rewording of my rant to sound more professional:

I’m sorry to hear that you’ve been having trouble with Bing search, Bing chat, and Co-pilot on your desktop. I understand that you’re looking for a more professional way to express your frustration. Here’s a possible rephrased version of your message:

“I have been using Bing search, Bing chat, and Co-pilot on my desktop for the past couple of weeks. Unfortunately, I have found them to be less than satisfactory. Co-pilot, in particular, has not been helpful in finding Windows settings or troubleshooting issues. As for Bing search, I have found that it often requires me to use specific phrases to get the results I need, whereas Google search can provide better results with fewer keywords. Additionally, when I asked Bing chat to provide me with a bit of HTML code, it produced code that was neither what I asked for nor functional. I hope that Microsoft can improve these services in the future.”

I hope this helps. Let me know if there’s anything else I can do for you.


It didn't do too bad a job of that :D
 
I am officially labelling current variants of AI based on ChatGPT as crap.

From another thread:
I don't know what to expect "any time soon", given that even experts in the field have been surprised by the rate of recent progress. It's entirely possible that the current progress will hit a ceiling and slow down, though it's also possible that it will continue for a while. Regardless, the current version seems to be increasing productivity:

https://www.science.org/doi/10.1126/science.adh2586
We examined the productivity effects of a generative artificial intelligence (AI) technology, the assistive chatbot ChatGPT, in the context of midlevel professional writing tasks. In a preregistered online experiment, we assigned occupation-specific, incentivized writing tasks to 453 college-educated professionals and randomly exposed half of them to ChatGPT. Our results show that ChatGPT substantially raised productivity: The average time taken decreased by 40% and output quality rose by 18%. Inequality between workers decreased, and concern and excitement about AI temporarily rose. Workers exposed to ChatGPT during the experiment were 2 times as likely to report using it in their real job 2 weeks after the experiment and 1.6 times as likely 2 months after the experiment.

https://arxiv.org/abs/2302.06590
Generative AI tools hold promise to increase human productivity. This paper presents results from a controlled experiment with GitHub Copilot, an AI pair programmer. Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible. The treatment group, with access to the AI pair programmer, completed the task 55.8% faster than the control group. Observed heterogenous effects show promise for AI pair programmers to help people transition into software development careers.
 
Earlier in the thread I posted about the idea of combining ChatGPT with WolframAlpha to make something capable of answering questions with mathematical components more accurately than just GPT alone.

Scott Aaronson recently published a paper in which he does just that. Here's his discussion from his blog:
https://scottaaronson.blog/?p=7460
A couple nights ago Ernie Davis and I put out a paper entitled Testing GPT-4 on Wolfram Alpha and Code Interpreter plug-ins on math and science problems. Following on our DALL-E paper with Gary Marcus, this was another “adversarial collaboration” between me and Ernie. I’m on leave to work for OpenAI, and have been extremely excited by the near-term applications of LLMs, while Ernie has often been skeptical of OpenAI’s claims, but we both want to test our preconceptions against reality. As I recently remarked to Ernie, we both see the same glass; it’s just that he mostly focuses on the empty half, whereas I remember how fantastical even a drop of water in this glass would’ve seemed to me just a few years ago, and therefore focus more on the half that’s full.

Anyway, here are a few examples of the questions I posed to GPT-4, with the recent plug-ins that enhance its calculation abilities:

Click through to see the example problems.

Anyway, what did we learn from this exercise?

GPT-4 remains an endlessly enthusiastic B/B+ student in math, physics, and any other STEM field. By using the Code Interpreter or WolframAlpha plugins, it can correctly solve difficult word problems, involving a combination of tedious calculations, world knowledge, and conceptual understanding, maybe a third of the time—a rate that’s not good enough to be relied on, but is utterly astounding compared to where AI was just a few years ago.

GPT-4 can now clearly do better at calculation-heavy STEM problems with the plugins than it could do without the plugins.

There's more discussion of takeaways at the link.
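As a toy sketch of the kind of hand-off the plugins enable - the model routes exact arithmetic to an external tool instead of guessing. All names here are illustrative; this is not the actual ChatGPT plugin protocol:

```python
import re

def solve(question: str, llm_fn, calc_fn) -> str:
    """Toy tool hand-off: if the question contains an arithmetic
    expression, route it to an exact calculator (standing in for
    WolframAlpha / Code Interpreter); otherwise fall back to the LLM.
    llm_fn and calc_fn are hypothetical stand-ins."""
    expr = re.search(r"[\d\s+\-*/().]+=\s*\?", question)
    if expr:
        value = calc_fn(expr.group(0).rstrip("=? \t"))
        return f"The tool computes {value}."
    return llm_fn(question)

# Stand-in exact evaluator (restricted eval, numbers/operators only):
exact_calc = lambda e: eval(e, {"__builtins__": {}})
print(solve("What is 12 * (3 + 4) = ?", lambda q: "", exact_calc))
```

The division of labour matches the paper's finding: the language model supplies the world knowledge and problem setup, while the tedious calculation goes to a component that cannot make arithmetic slips.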
 
I wonder now if this had something to do with them suspending signing up new "pro" accounts a few days back?

Probably not unrelated.

Actually I heard Sam Altman give the reason for this on the Hard Fork podcast, which was recorded two days before he was fired. He said it's just because they needed to limit the number of users until they can install more hardware capacity to accommodate all the demand.

https://www.nytimes.com/column/hard-fork
 
They run the public stuff on their own hardware? I'm really surprised to hear that.

It's either that or pay someone else to use their hardware, I would assume.

I don't know anything beyond what I heard in the podcast. It is, apparently, very computing intensive.

I found this 5 minute explainer of the hardware used to run the software:



ETA: one commenter to the video remarked:
I work for the company that builds and maintains these servers for Microsoft and it is absurd how crazy the H100s are compared to the A100s. Just the power projects alone cost millions of dollars per site for the upgrade.

ETA2: So, to clarify, it seems to be Microsoft who provide most of the physical hardware to run the GPTs.
 

That's what I would have thought so not sure why he made such a comment, usually you just buy/rent/lease more computing space as you need it.
 
It's not so simple anymore, especially if you are OpenAI. There simply wasn't the hardware they needed: they took over all the GPU-equipped machines in the Azure cloud, and it was not enough. As they mentioned at the recent OpenAI conference, Microsoft completely rebuilt its cloud for AI, and is still expanding it.
Do people really need chatbots? I mean, will they pay billions for them? Well, I think only when the chatbots can do the same work you pay somebody for today. Maybe that's the game here: to replace office workers everywhere?
 
Either way, there's a limited supply of the particular hardware that it requires at the moment relative to the demand. That's what he said. They are building more of it, but it takes time.
 
I am officially labelling current variants of AI based on ChatGPT as crap.


Here is its rewording of my rant to sound more professional:

I'm more or less with you.

I think ChatGPT makes a fine writing app. As in, for writing things - letters, proposals, statements, executive summaries of topics, and that sort of thing. For everything else, I believe that AI fans massively oversell what are in actuality dubious-to-mediocre capabilities. And of course the vast majority of the hype surrounding ChatGPT and AI like it is not over what it can do now but what they are super-confident it WILL definitely be able to do in the undefined "future", and to me that is a major warning sign of a "bubble" tech like blockchain/crypto or "the metaverse".

I actually believe Bing is superior to Google for my own purposes; but when I use it, I use the Bing search engine normally via keywords, as opposed to using Bing Chat/Copilot. As a search engine "assistant", Bing Chat often floods its responses with information, definitions, and summaries that I don't need or didn't ask for; and while I'm glad that it cites its sources, I ultimately dislike that I have zero control over which websites it decides to use as sources. If there was a way to give Bing Chat/Copilot a set of one-time instructions that it would remember forever across sessions and specific queries - such as to never give me definitions for search terms unless I specifically ask for them, or to just answer the technical questions I ask rather than trying to give me math lessons (for example) - that would already greatly enhance its usefulness. Being able to curate sources directly or indirectly would also help a whole lot. But for now that doesn't seem to be possible; in a particular "thread" you can give it instructions, but those instructions are confined to that thread and you have to re-enter them for every new one.

I really don't know the purpose of Windows Copilot. 8 years ago Cortana could track my packages, make an appointment in my calendar or set a task in To Do, start playing music, or tell me that I need to leave a little early for work due to reported current traffic conditions (and by the way take an umbrella today). So far the only consistently demonstrable thing that Windows Copilot can do is change my theme from light mode to dark mode or back again, which is functionality I just don't ever need. Technically it also can start a focus session for you which would be useful to me, but the process of interacting with the app to make that happen is long and impractical compared to just manually starting one. Unless Copilot is given VASTLY greater permissions and integration with the OS it is a completely pointless application IMO and objectively inferior to a deprecated app from nearly a decade ago.
 
