I wonder now if this had something to do with them suspending sign-ups for new "pro" accounts a few days back?
The leading theory seems to be that it's a power struggle between “tech-optimists” and “doomers”, the latter being extremely concerned that AGI represents some sort of existential threat to humanity and that they therefore need to put the brakes on its development.
Does that make sense? Both camps seem to believe that AGI is coming sooner rather than later.
This seems to be a bit of a reversal of the norm; it's usually the board being hammered for being all about the money!
Seems to line up with the choice of the new CEO. Here's a quote from him: "My AI safety discourse is 100% 'you are building an alien god that will literally destroy the world when it reaches the critical threshold but be apparently harmless before that.'"
Most boards aren't in charge of nonprofit companies.

So now it seems he's going back and the board is going to change... which seems to be in line with the mysterious open letter published by Musk, supposedly from former OpenAI employees who claim it's all about OpenAI being moved to a for-profit organization. Microsoft doing the pushing, I guess? I'm still confused.
Well anyway, I'm sure it's going to have a positive impact on AI safety.
I am officially labelling current variants of AI based on ChatGPT as crap.
I don't know what to expect "any time soon", given that even experts in the field have been surprised by the rate of recent progress. It's entirely possible that the current progress will hit a ceiling and slow down, though it's also possible that it will continue for a while. Regardless, the current version seems to be increasing productivity:
https://www.science.org/doi/10.1126/science.adh2586
We examined the productivity effects of a generative artificial intelligence (AI) technology, the assistive chatbot ChatGPT, in the context of midlevel professional writing tasks. In a preregistered online experiment, we assigned occupation-specific, incentivized writing tasks to 453 college-educated professionals and randomly exposed half of them to ChatGPT. Our results show that ChatGPT substantially raised productivity: The average time taken decreased by 40% and output quality rose by 18%. Inequality between workers decreased, and concern and excitement about AI temporarily rose. Workers exposed to ChatGPT during the experiment were 2 times as likely to report using it in their real job 2 weeks after the experiment and 1.6 times as likely 2 months after the experiment.
https://arxiv.org/abs/2302.06590
Generative AI tools hold promise to increase human productivity. This paper presents results from a controlled experiment with GitHub Copilot, an AI pair programmer. Recruited software developers were asked to implement an HTTP server in JavaScript as quickly as possible. The treatment group, with access to the AI pair programmer, completed the task 55.8% faster than the control group. Observed heterogenous effects show promise for AI pair programmers to help people transition into software development careers.
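As a back-of-envelope sanity check on what those headline figures mean (my arithmetic, not numbers either paper reports directly): a task that takes 40% less time runs at 1/(1 − 0.40) ≈ 1.67× the old throughput, and if "55.8% faster" likewise means 55.8% less time, that's roughly a 2.26× speedup.

```python
# Back-of-envelope conversion of the quoted figures into throughput
# multipliers (my reading of the abstracts, not reported results).

def speedup_from_time_reduction(reduction: float) -> float:
    """A task taking (1 - reduction) of the old time runs 1/(1 - reduction) times faster."""
    return 1 / (1 - reduction)

print(f"{speedup_from_time_reduction(0.40):.2f}x")   # ChatGPT writing study: ~1.67x
print(f"{speedup_from_time_reduction(0.558):.2f}x")  # Copilot study, if "55.8% faster"
                                                     # means 55.8% less time: ~2.26x
```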
A couple nights ago Ernie Davis and I put out a paper entitled Testing GPT-4 on Wolfram Alpha and Code Interpreter plug-ins on math and science problems. Following on our DALL-E paper with Gary Marcus, this was another “adversarial collaboration” between me and Ernie. I’m on leave to work for OpenAI, and have been extremely excited by the near-term applications of LLMs, while Ernie has often been skeptical of OpenAI’s claims, but we both want to test our preconceptions against reality. As I recently remarked to Ernie, we both see the same glass; it’s just that he mostly focuses on the empty half, whereas I remember how fantastical even a drop of water in this glass would’ve seemed to me just a few years ago, and therefore focus more on the half that’s full.
Anyway, here are a few examples of the questions I posed to GPT-4, with the recent plug-ins that enhance its calculation abilities:
Anyway, what did we learn from this exercise?
GPT-4 remains an endlessly enthusiastic B/B+ student in math, physics, and any other STEM field. By using the Code Interpreter or WolframAlpha plugins, it can correctly solve difficult word problems, involving a combination of tedious calculations, world knowledge, and conceptual understanding, maybe a third of the time—a rate that’s not good enough to be relied on, but is utterly astounding compared to where AI was just a few years ago.
GPT-4 can now clearly do better at calculation-heavy STEM problems with the plugins than it could do without the plugins.
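For anyone wondering what that plugin hand-off looks like mechanically, here is a minimal sketch of the general tool-routing pattern, not OpenAI's actual plugin protocol: the `llm_choose_tool` and `llm_answer` functions are hypothetical stubs standing in for real model calls, and a tiny arithmetic evaluator stands in for WolframAlpha/Code Interpreter.

```python
# Minimal sketch of the "LLM + calculation tool" loop described above.
# Only the routing pattern is meant to be illustrative; the llm_* stubs
# are hypothetical stand-ins for real model calls.

import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def calculate(expr: str) -> float:
    """Safe arithmetic evaluator standing in for WolframAlpha/Code Interpreter."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"unsupported expression: {expr}")
    return ev(ast.parse(expr, mode="eval").body)

def llm_choose_tool(question: str) -> dict:
    # Stub: a real system would prompt the model to emit a tool call here.
    return {"tool": "calculator", "input": "3 * (17 + 4) ** 2"}

def llm_answer(question: str, tool_result=None) -> str:
    # Stub: a real system would feed the tool's output back to the model.
    return f"Using the calculator, the result is {tool_result}."

def answer(question: str) -> str:
    plan = llm_choose_tool(question)       # 1. model decides whether it needs a tool
    if plan.get("tool") == "calculator":
        result = calculate(plan["input"])  # 2. the tool runs outside the model
        return llm_answer(question, tool_result=result)  # 3. result folded back in
    return llm_answer(question)

print(answer("What is 3 times the square of 21?"))
```

The point of running the calculation outside the model is exactly the one the post makes: the model supplies the problem decomposition and world knowledge, while the tedious arithmetic goes to a tool that doesn't make arithmetic mistakes.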
I wonder now if this had something to do with them suspending sign-ups for new "pro" accounts a few days back?
Probably not unrelated.
They run the public stuff on their own hardware? I'm really surprised to hear that.
I work for the company that builds and maintains these servers for Microsoft and it is absurd how crazy the H100s are compared to the A100s. Just the power projects alone cost millions of dollars per site for the upgrade.
It's either that or pay someone else to use their hardware, I would assume.
I don't know anything beyond what I heard in the podcast. It is, apparently, very computing intensive.
I found this 5-minute explainer of the hardware used to run the software:
ETA: one commenter on the video remarked:
ETA2: So, to clarify, it seems to be Microsoft who provides most of the physical hardware to run the GPTs.
That's what I would have thought, so I'm not sure why he made such a comment; usually you just buy/rent/lease more computing capacity as you need it.
I am officially labelling current variants of AI based on ChatGPT as crap.
Here is its rewording of my rant to sound more professional: