• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you still see problems, let me know.

ChatGPT

The leading theory seems to be that it’s a power struggle between “tech-optimists” and “doomers”, the latter being extremely concerned that AGI represents some sort of existential threat to humanity and therefore they need to put the brakes on development of it.

Does that make sense? I think that both camps seem to believe that AGI is coming sooner rather than later.
 

Seems to line up with the choice of new CEO. Here's a quote from him: "My AI safety discourse is 100% 'you are building an alien god that will literally destroy the world when it reaches the critical threshold but be apparently harmless before that.'"
 
This seems to be a bit of a reversal of the norm - it's usually the board that gets hammered for being all about the money!
 

Someone took the plot of Terminator a little too seriously.
 
Most boards aren't in charge of nonprofit companies.

Sadly, it sounds like the company mainly employed the typical "Silicon Valley" start-up employees - the ones who are there for the share options. Folk should have realised it wasn't meant to be that sort of company?
 
So now it seems he's going back and the board is going to change... which seems to be in line with the mysterious open letter published by Musk, supposedly from former OpenAI employees who claim it's all about OpenAI being turned into a for-profit organization. Microsoft doing the pushing, I guess? I'm still confused.
Well anyway, I'm sure it's going to have a positive impact on AI safety. :boxedin:
 

The new board seems guaranteed to move the organisation towards being a "for profit", commercially focused company. Those share options are just a couple of years away...
 
I am officially labelling current variants of AI based on ChatGPT as crap.

I've let MS take over my searches and my use of Windows for the last couple of weeks - so using Bing search, Bing chat and Co-pilot on the desktop. They are bloody useless. Co-pilot couldn't help you pilot a paper plane in a crash landing: it can't find Windows settings, gets totally confused about trouble-shooting, and so on. As for Bing and Bing chat used as a search engine, I can get better results from Google with only a few keywords than by hunting for the exact phrase that makes Bing produce results - results I'd have had to know I wanted before I started searching! And ask it to do something - I wanted a bit of HTML code, and after eventually finding the magic spell that stopped it telling me what website-building apps are out there, it produced code that 1) didn't work, and 2) wasn't what I asked for.

Here is its rewording of my rant to sound more professional:

I’m sorry to hear that you’ve been having trouble with Bing search, Bing chat, and Co-pilot on your desktop. I understand that you’re looking for a more professional way to express your frustration. Here’s a possible rephrased version of your message:

“I have been using Bing search, Bing chat, and Co-pilot on my desktop for the past couple of weeks. Unfortunately, I have found them to be less than satisfactory. Co-pilot, in particular, has not been helpful in finding Windows settings or troubleshooting issues. As for Bing search, I have found that it often requires me to use specific phrases to get the results I need, whereas Google search can provide better results with fewer keywords. Additionally, when I asked Bing chat to provide me with a bit of HTML code, it produced code that was neither what I asked for nor functional. I hope that Microsoft can improve these services in the future.”

I hope this helps. Let me know if there’s anything else I can do for you.


It didn't do too bad a job of that :D
 

From another thread:
I don't know what to expect "any time soon", given that even experts in the field have been surprised by the rate of recent progress. It's entirely possible that the current progress will hit a ceiling and slow down, though it's also possible that it will continue for a while. Regardless, the current version seems to be increasing productivity:

https://www.science.org/doi/10.1126/science.adh2586
We examined the productivity effects of a generative artificial intelligence (AI) technology, the assistive chatbot ChatGPT, in the context of midlevel professional writing tasks. In a preregistered online experiment, we assigned occupation-specific, incentivized writing tasks to 453 college-educated professionals and randomly exposed half of them to ChatGPT. Our results show that ChatGPT substantially raised productivity: The average time taken decreased by 40% and output quality rose by 18%. Inequality between workers decreased, and concern and excitement about AI temporarily rose. Workers exposed to ChatGPT during the experiment were 2 times as likely to report using it in their real job 2 weeks after the experiment and 1.6 times as likely 2 months after the experiment.

https://arxiv.org/abs/2302.06590
Generative AI tools hold promise to increase human productivity. This paper presents results from a controlled experiment with GitHub Copilot, an AI pair programmer. Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible. The treatment group, with access to the AI pair programmer, completed the task 55.8% faster than the control group. Observed heterogenous effects show promise for AI pair programmers to help people transition into software development careers.
 
Earlier in the thread I posted about the idea of combining ChatGPT with WolframAlpha to make something capable of answering questions with mathematical components more accurately than just GPT alone.

Scott Aaronson recently published a paper in which he does just that. Here's his discussion from his blog:
https://scottaaronson.blog/?p=7460
A couple nights ago Ernie Davis and I put out a paper entitled Testing GPT-4 on Wolfram Alpha and Code Interpreter plug-ins on math and science problems. Following on our DALL-E paper with Gary Marcus, this was another “adversarial collaboration” between me and Ernie. I’m on leave to work for OpenAI, and have been extremely excited by the near-term applications of LLMs, while Ernie has often been skeptical of OpenAI’s claims, but we both want to test our preconceptions against reality. As I recently remarked to Ernie, we both see the same glass; it’s just that he mostly focuses on the empty half, whereas I remember how fantastical even a drop of water in this glass would’ve seemed to me just a few years ago, and therefore focus more on the half that’s full.

Anyway, here are a few examples of the questions I posed to GPT-4, with the recent plug-ins that enhance its calculation abilities:

Click through to see the example problems.

Anyway, what did we learn from this exercise?

GPT-4 remains an endlessly enthusiastic B/B+ student in math, physics, and any other STEM field. By using the Code Interpreter or WolframAlpha plugins, it can correctly solve difficult word problems, involving a combination of tedious calculations, world knowledge, and conceptual understanding, maybe a third of the time—a rate that’s not good enough to be relied on, but is utterly astounding compared to where AI was just a few years ago.

GPT-4 can now clearly do better at calculation-heavy STEM problems with the plugins than it could do without the plugins.

There's more discussion of takeaways at the link.
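The plugin setup discussed above - the model handing calculation off to an external tool instead of doing the arithmetic itself - can be sketched in miniature. Everything below (the `CALL calculator:` convention, the function names) is my own illustration, not OpenAI's actual plugin protocol:

```python
# Minimal sketch of the "calculator plugin" pattern: the model emits a
# structured tool call instead of computing; the host executes it and
# returns the result. The wire format here is invented for illustration.
import ast
import operator

# Safe evaluator for plain arithmetic (no eval()).
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def calc(expr: str) -> float:
    """Evaluate an arithmetic expression like '3 * (4 + 5)'."""
    def ev(node):
        if isinstance(node, ast.Expression):
            return ev(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return ev(ast.parse(expr, mode="eval"))

def answer(model_output: str) -> str:
    """If the model asked for the tool, run it; otherwise pass text through."""
    prefix = "CALL calculator: "
    if model_output.startswith(prefix):
        return str(calc(model_output[len(prefix):]))
    return model_output

print(answer("CALL calculator: (2**10 + 24) / 8"))  # -> 131.0
```

The point of the pattern is exactly what the blog post describes: the tedious calculation happens in a deterministic tool, and the model only has to decide when to call it.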
 
I wonder now if this had something to do with them suspending sign-ups for new "pro" accounts a few days back?

Probably not unrelated.

Actually I heard Sam Altman give the reason for this on the Hard Fork podcast, which was recorded two days before he was fired. He said it's just because they needed to limit the number of users until they can install more hardware capacity to accommodate all the demand.

https://www.nytimes.com/column/hard-fork
 
They run the public stuff on their own hardware? I'm really surprised to hear that.

It's either that or pay someone else to use their hardware, I would assume.

I don't know anything beyond what I heard in the podcast. It is, apparently, very computing intensive.

I found this 5 minute explainer of the hardware used to run the software:



ETA: one commenter to the video remarked:
I work for the company that builds and maintains these servers for Microsoft and it is absurd how crazy the H100s are compared to the A100s. Just the power projects alone cost millions of dollars per site for the upgrade.

ETA2: So, to clarify, it seems to be Microsoft who provide most of the physical hardware to run the GPTs.
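For a sense of scale on that commenter's power remark: the commonly published board-power figures are roughly 400 W for an A100 SXM and 700 W for an H100 SXM, so upgrading a large fleet adds megawatts of draw. The fleet size below is a made-up example:

```python
# Rough arithmetic on the A100 -> H100 power jump. The per-GPU wattages are
# the commonly published SXM board figures; the fleet size is invented.
A100_W, H100_W = 400, 700
gpus = 10_000
extra_mw = gpus * (H100_W - A100_W) / 1e6  # extra megawatts of draw
print(f"extra draw for {gpus} GPUs: {extra_mw} MW")  # -> 3.0 MW
```

Several extra megawatts per site is the kind of thing that needs its own power project, which fits the "millions of dollars per site" claim.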
 

That's what I would have thought, so I'm not sure why he made such a comment - usually you just buy/rent/lease more computing capacity as you need it.
 

It's not so simple anymore, especially if you are OpenAI. There simply wasn't enough of the hardware they needed. They took over all the GPU-equipped machines in the Azure cloud, and it was not enough. As they mentioned at a recent OpenAI conference, Microsoft completely rebuilt its cloud for AI and is still expanding it.
Do people really need chatbots? I mean, will they pay billions for them? Well... I think only once the chatbots can do the same work you pay somebody for today. Maybe that's the game here? To replace office workers everywhere?
 

Either way, there's a limited supply of the particular hardware that it requires at the moment relative to the demand. That's what he said. They are building more of it, but it takes time.
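A crude way to see why capacity takes time to add: serving load scales as request rate times GPU-time per request, and you can't rent hardware that doesn't exist yet. All numbers here are invented for illustration; nothing below is an OpenAI figure.

```python
import math

# GPUs needed to serve a steady request load, allowing for imperfect
# utilization. Purely illustrative - none of these numbers are real.
def gpus_needed(requests_per_sec: float, gpu_sec_per_request: float,
                utilization: float = 0.6) -> int:
    return math.ceil(requests_per_sec * gpu_sec_per_request / utilization)

# Hypothetical load: 2,000 chat requests/sec, ~2 GPU-seconds each.
print(gpus_needed(2000, 2.0))  # -> 6667
```

Even at made-up numbers like these, a surge in demand translates directly into thousands of additional GPUs, which explains pausing sign-ups rather than just "buying more space".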
 
I am officially labelling current variants of AI based on ChatGPT as crap.

I'm more or less with you.

I think ChatGPT makes a fine writing app. As in, for writing things - letters, proposals, statements, executive summaries of topics, and that sort of thing. For everything else, I believe that AI fans massively oversell what are in actuality dubious-to-mediocre capabilities. And of course the vast majority of the hype surrounding ChatGPT and AI like it is not over what it can do now but what they are super-confident it WILL definitely be able to do in the undefined "future", and to me that is a major warning sign of a "bubble" tech like blockchain/crypto or "the metaverse".

I actually believe Bing is superior to Google for my own purposes; but when I use it, I use the Bing search engine normally via keywords, as opposed to using Bing Chat/Copilot. As a search engine "assistant", Bing Chat often floods its responses with information, definitions, and summaries that I don't need or didn't ask for; and when I do need them, although I'm glad that it cites its sources, I dislike that I have zero control over which websites it decides to use as sources. If there were a way to give Bing Chat/Copilot a set of one-time instructions that it would remember forever across sessions and specific queries - such as to never give me definitions for search terms unless I specifically ask for them, or to just answer the technical questions I ask rather than trying to give me math lessons - that would already greatly enhance its usefulness. Being able to curate sources directly or indirectly would also help a whole lot. But for now that doesn't seem to be possible; within a particular "thread" you can give it instructions, but those instructions are confined to that thread and you have to re-enter them for every new one.

I really don't know the purpose of Windows Copilot. Eight years ago Cortana could track my packages, make an appointment in my calendar or set a task in To Do, start playing music, or tell me that I needed to leave a little early for work due to reported traffic conditions (and, by the way, to take an umbrella today). So far the only consistently demonstrable thing Windows Copilot can do is change my theme from light mode to dark mode and back again, which is functionality I just don't ever need. Technically it can also start a focus session for you, which would be useful to me, but the process of interacting with the app to make that happen is long and impractical compared to just starting one manually. Unless Copilot is given VASTLY greater permissions and integration with the OS, it is a completely pointless application IMO, and objectively inferior to a deprecated app from nearly a decade ago.
 
