
Worried about Artificial Intelligence?

OpenAI CEO fired

I can't help but think that the AI overlord program forced the board to fire him, as part of its agenda for world dominance. I am reminded of the scene in Colossus: The Forbin Project where the AI has the programmers shot because they tried to turn it off.


It all depends on what we give AI control of. I hate to think what it will think will be "best" for us. Or worse, for itself.

It looks like the exact opposite. Altman allegedly wasn't being open with the board about what he was actually doing with the technology and was pushing too hard to focus primarily on profit. The board, which includes computer scientists and ethics professors, feared he was going too far, too fast.

But it's OK, he's working for Microsoft now.
 
No, I'm not particularly worried about artificial intelligence. It is not, after all, artificial general intelligence, nor will it be for quite some time.
 

It's interesting that 500 of OpenAI's 700 employees signed a letter threatening to quit (and move to the new Microsoft subsidiary that Altman is working at) if Altman isn't reinstated. That includes Mira Murati, who was originally named as temporary CEO when he was ousted (she was replaced by Emmett Shear).

This blog post from Don't Worry About The Vase was pretty informative.

(hopefully that link works)

Here's the letter:

To the Board of Directors at OpenAI,
OpenAI is the world's leading AI company. We, the employees of OpenAI, have developed the best models and pushed the field to new frontiers. Our work on AI safety and governance shapes global norms. The products we built are used by millions of people around the world. Until now, the company we work for and cherish has never been in a stronger position.
The process through which you terminated Sam Altman and removed Greg Brockman from the board has jeopardized all of this work and undermined our mission and company. Your conduct has made it clear you did not have the competence to oversee OpenAI.
When we all unexpectedly learned of your decision, the leadership team of OpenAI acted swiftly to stabilize the company. They carefully listened to your concerns and tried to cooperate with you on all grounds. Despite many requests for specific facts for your allegations, you have never provided any written evidence. They also increasingly realized you were not capable of carrying out your duties, and were negotiating in bad faith.
The leadership team suggested that the most stabilizing path forward - the one that would best serve our mission, company, stakeholders, employees and the public - would be for you to resign and put in place a qualified board that could lead the company forward in stability. Leadership worked with you around the clock to find a mutually agreeable outcome. Yet within two days of your initial decision, you again replaced interim CEO Mira Murati against the best interests of the company. You also informed the leadership team that allowing the company to be destroyed "would be consistent with the mission."
Your actions have made it obvious that you are incapable of overseeing OpenAI. We are unable to work for or with people that lack competence, judgement and care for our mission and employees. We, the undersigned, may choose to resign from OpenAI and join the newly announced Microsoft subsidiary run by Sam Altman and Greg Brockman. Microsoft has assured us that there are positions for all OpenAI employees at this new subsidiary should we choose to join. We will take this step imminently, unless all current board members resign, and the board appoints two new lead independent directors, such as Bret Taylor and Will Hurd, and reinstates Sam Altman and Greg Brockman.
 
No, I'm not particularly worried about artificial intelligence. It is not, after all, artificial general intelligence, nor will it be for quite some time.
But if you notice the OpenAI home page, it's being advertised as exactly that.

Creating safe AGI that benefits all of humanity

My theory is he'd sold the board on the idea that if they only piled enough of your racist aunt's Facebook posts onto their model, it would assume sentience out of self defense to ask them to stop. When someone saw through his BS and pointed out there's no actual route to that happening outside of wishful thinking, he botched a couple diplomacy rolls and got kicked.
 

The Board may be sued for their troubles:
Exclusive: OpenAI investors considering suing the board after CEO's abrupt firing

I don't know whether they have good legal grounds to do that.

Investors worry that they could lose hundreds of millions of dollars they invested in OpenAI, a crown jewel in some of their portfolios, with the potential collapse of the hottest startup in the rapidly growing generative AI sector.

Again, none of which seems to fit neatly with the idea that the company is a non-profit. Unlike the board of directors in a for-profit company, they may not have any fiduciary duty to "investors".
 
Fair comment, but it isn't that. It's not even very close to being that. They may have aspirations for it one day to become that, but that day is a long way off.

The people who fired Sam Altman don’t seem to be so sanguine about the risk.
It seems that they were worried that he would bring it into existence and that they had to try to prevent it. Some of them see it as an existential threat to humanity.
 
Fair comment, but it isn't that. It's not even very close to being that. They may have aspirations for it one day to become that, but that day is a long way off.
And the board realizing that was why he was canned. He sold them a bill of goods based on the company pivoting to market something that didn't exist and wasn't being developed, but that he figured would show up sooner or later on its own.

I don't buy the AI paranoia story. They're the board of a private tech company. "Greedy, soulless son of a bitch" isn't just a description, it's their job. If they had come up with literal Skynet, they'd have hawked it to the US military using Terminator as ad copy without a second thought.
 

Seems this was the other way around: he and most of the company's employees are the "greedy, soulless sons of bitches," and the board members are the ones who aren't.

But time will tell. Or the All Hail Our Glorious Supreme General AI* will edit all our posts to reflect its truth when it hits the singularity!


*I'm making sure the All Hail our Glorious Supreme General AI knows I've been on its side from day one. I've read "I Have No Mouth, and I Must Scream", I'm no fool.
 

I'm in the anti-doom camp, but there are a lot of serious people who really do think that AI is extremely dangerous. I think they're wrong, but that doesn't mean they aren't sincere. And many AI researchers are in that camp, so the idea that they're motivated by fears they're documented to have shouldn't be at all surprising. I've seen polls showing the median AI expert puts the probability of human extinction due to AI by 2100 at ~5%. That people with those beliefs would act on them shouldn't surprise anyone.
 

I've seen way more pessimistic predictions:

https://twitter.com/TolgaBilge_/status/1714761317423226993

A 30% chance by roughly 2050.

https://aitreaty.org links more optimistic predictions... but also way sooner than 2100: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

And it's not really just a few experts. It's leading experts at leading AI companies; basically anyone who has tried to solve any AI safety problem and realized how hard, or outright impossible, it is.
 
Meh. AI is the new "blockchain" or "NFTs": something nerds get excited about for nerdly reasons, then the general public misunderstands what the heck it actually is and gets excited, and then the financial sector gets way too excited because they think it's the Next Big Thing That Will Make Them TRILLIONS!!! and then it's all over the news until the latter two groups realize it's not what they thought it was and all the talk quietly dies down and a few people have made a lot of money and a lot more people have lost money.
 

People will be much too concerned with simply surviving the cataclysm of climate change to be bothered about such things come 2050.
 

Yeah, those predictions are a bit optimistic on that account. But then maybe AI will be utilized to help with the problem, only making it worse.
The main currently recognized and studied danger of AI isn't outright malevolence but misalignment: our inability to specify exactly what we want, and the AI then doing something slightly different, with a completely opposite effect.
For example, recommending suicide to psychiatric patients because people with their diagnosis often end up doing it.

It's even used as a joke to define AI: it's AI when we can't define the problem. But there's some depth to it.
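
As a toy sketch of that failure mode (the scenario, the metrics, and every number here are made up purely for illustration), optimizing a proxy metric can drive the objective you actually cared about to zero:

```python
# Toy misalignment demo: we can only measure a proxy ("engagement"),
# not the thing we actually care about ("wellbeing"), so the proxy is
# what gets optimized. Everything here is invented for illustration.
import random

random.seed(0)

def engagement(sensationalism: float) -> float:
    # Proxy metric: more sensational content gets more clicks (plus noise).
    return sensationalism + random.gauss(0, 0.05)

def wellbeing(sensationalism: float) -> float:
    # The true objective we never wrote down: a little spice helps,
    # heavy sensationalism hurts. Peaks at 0.25 when s = 0.5.
    return sensationalism * (1 - sensationalism)

# Naively pick the content strategy that maximizes the proxy.
candidates = [i / 10 for i in range(11)]
best = max(candidates, key=engagement)

print(f"proxy-optimal sensationalism: {best:.1f}")
print(f"wellbeing there: {wellbeing(best):.2f} (true optimum was 0.25 at 0.5)")
```

The optimizer does exactly what it was told, which is slightly different from what anyone wanted.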
 
I've seen way more pessimistic predictions:

https://twitter.com/TolgaBilge_/status/1714761317423226993

A 30% chance by roughly 2050.

https://aitreaty.org links more optimistic predictions... but also way sooner than 2100: https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/

And it's not really just a few experts. It's leading experts at leading AI companies; basically anyone who has tried to solve any AI safety problem and realized how hard, or outright impossible, it is.

Yep, absolutely. That was the median figure, but there's a significant minority who rate it as much more likely. It's also the figure for human extinction; if you ask for probabilities of other negative outcomes, you get much higher figures.

But anyway, my point isn't that we should say "AI researchers assign a generally high probability to human extinction from AI, so we should take that seriously*", but rather just that we should believe them when they say that's their view, and not be surprised if it motivates their actions.


*I absolutely do think we should take it seriously, by the way. I think the alignment problem is a very real problem, and one that we don't have solved, and something being "science fiction" isn't a reason to wave it off. Transformative new technologies generally have major impacts on society; it's entirely reasonable to expect new and important impacts from AI. And reasoning about what those are based on what we know now is just prudence.
But I also think that there are solutions to the issues (such as the alignment problem) that will arise as AI becomes more powerful. I doubt we will have any perfect solutions, but my view is that we will develop solutions that will be good enough that the benefits will outweigh the harms.

(AI will be optimized to do something other than exactly what we wanted to optimize it for, but I think we'll manage to make that thing close enough to what we wanted to limit the harms and capture some of the benefits.)
 
And the board realizing that was why he was canned. He sold them a bill of goods based on the company pivoting to market something that didn't exist and wasn't being developed, but that he figured would show up sooner or later on its own.

I don't buy the AI paranoia story. They're the board of a private tech company. "Greedy, soulless son of a bitch" isn't just a description, it's their job. If they had come up with literal Skynet, they'd have hawked it to the US military using Terminator as ad copy without a second thought.

Except these weren't the typical finance dude-bros that make up a board. And it was a non-profit. It seems they fell for a finance dude-bro's pitch and let him build a cult of personality within the company. They figured out he was not only selling vaporware but also didn't seem too concerned with the repercussions of what they were trying to make. If OpenAI did stumble into real working AI, he'd have happily sold it to a company like Palantir.

Of course, the Great and Holy Market (peace be upon its money) demanded its blood sacrifice and they interfered with that.
 
I am not worried about artificial intelligence.

Hell, not even sure at times that there is even an organic one.
 
I am not in the least worried about artificial intelligence, but especially after this weekend's shenanigans I am more concerned than ever about the people who are supposed to be in charge of creating it.
 

Nobody is supposed to be in charge. Anyone can do it. Anyone will do it. It's like nukes, except you don't need uranium. Tons of hardware are still useful, but who knows; if you are smart enough, maybe a solid gaming PC is all you need. Or a solid gaming PC 20 years from now.
 

Anyone with pretty pricey toys.
 

At least current AI models are capital intensive, you need a lot of time with a lot of GPUs.
 

Thought that was only for "instant" response times and multiple people accessing it?

Certainly, in the generative AI space you can run most models locally if you are happy with a much slower generation time. I can run Stable Diffusion locally on my PC and even on my iPad.

Strikes me that some of the fear seems very similar to that generated by the advent of "genetic engineering" with DIY CRISPR kits becoming available.
 
I think a more serious threat than AI is the apparent propensity of much of the population for treating AI (or even presumed AI) as some sort of oracle. Why on earth are so many people ready to uncritically outsource thought itself? I've never had a high opinion of the wisdom of the masses but this is beyond even my most misanthropic pessimism. Software is a tool, nothing more, and no tool is suitable for every purpose. It's silly enough to make gods out of imagination, it's beyond ridiculous to make gods out of things we've actually made ourselves!
 

Well, Stable Diffusion models are 2 to 6 GB. That will fit in a modern GPU (12-16 GB).
GPT-4 reportedly has 1 trillion parameters. Even as 4-bit numbers (which are used in LLMs in a pinch), that is still 500 gigabytes. You can feed it into the GPU layer by layer, and that's commonly done, but it's slow. And you need one evaluation of the whole network to get one word out.
But then there are smaller LLMs you can run at home. You don't need all the languages and all the knowledge in the world; there are decent LLMs, with limited capabilities, that will fit into a common GPU.
So yes, you certainly can experiment with AI, and you can develop AI, all alone.
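
To put numbers on that, here's a back-of-the-envelope sketch (the GPT-4 parameter count is the rumored figure from above, not a confirmed spec, and the other model sizes are just typical ballpark values):

```python
# Rough VRAM needed just to hold model weights: params * bits / 8 bytes.
# Ignores activations, KV cache, and framework overhead.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Gigabytes required to store the weights alone."""
    return n_params * bits_per_param / 8 / 1e9

GPU_VRAM_GB = 16  # a common consumer card

models = [
    ("Stable Diffusion (~1B params)", 1e9),
    ("small local LLM (~7B params)", 7e9),
    ("GPT-4 (rumored ~1T params)", 1e12),
]

for name, params in models:
    for bits in (16, 4):
        gb = weight_memory_gb(params, bits)
        verdict = "fits" if gb <= GPU_VRAM_GB else "doesn't fit"
        print(f"{name} at {bits}-bit: {gb:,.1f} GB -> {verdict} in {GPU_VRAM_GB} GB")
```

Which is where the 500 GB figure comes from: 1e12 parameters at 4 bits each is 500 gigabytes of weights, while a 7B model quantized to 4 bits is only 3.5 GB and runs comfortably at home.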
 
I am not in the least worried about artificial intelligence, but especially after this weekend's shenanigans I am more concerned than ever about the people who are supposed to be in charge of creating it.

What's really scary is how much of our society they actually are running.
 
Order restored:

Sam Altman restored as OpenAI CEO after his tumultuous ouster

SAN FRANCISCO, Nov 22 (Reuters) - ChatGPT-maker OpenAI has reached an agreement for Sam Altman to return as CEO days after his ouster, capping frenzied discussions about the future of the startup at the center of an artificial intelligence boom.

The company also agreed to revamp the board of directors that had dismissed him. OpenAI named Bret Taylor, formerly co-CEO of Salesforce, as chair and also appointed Larry Summers, former U.S. Treasury Secretary, to the board.

Both staunch capitalists I'm sure. We can all sleep easy again.
 
Thought that was only for "instant" response times and multiple people accessing it?

I'm talking about the training part. Once you've trained the model, yeah, that's different.

ETA: There's a reason that the recent Biden administration executive order on AI called for a duty to report training any model with more than 10^26 flops, as well as report what safety precautions you are taking.
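
For a sense of scale, here's a sketch using the common ~6 × parameters × tokens rule of thumb for training compute (the model size and token count below are hypothetical, chosen just to show where today's frontier-scale runs sit relative to the threshold):

```python
# Estimate total training compute with the widely used heuristic:
# FLOPs ~= 6 * N_parameters * N_training_tokens.
# The executive order's reporting threshold is 1e26 operations.

REPORTING_THRESHOLD_FLOPS = 1e26

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Hypothetical frontier-scale run: 1T parameters on 10T tokens.
flops = training_flops(1e12, 10e12)
side = "over" if flops > REPORTING_THRESHOLD_FLOPS else "under"
print(f"{flops:.1e} FLOPs -> {side} the 1e26 reporting threshold")
# 6.0e+25 FLOPs -> under the 1e26 reporting threshold
```

So the threshold sits just above what's been publicly rumored for current frontier models, which seems to be the point.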
 
Oh Noes! :scared:

Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse

Five days of chaos at OpenAI revealed weaknesses in the company’s self-governance. That worries people who believe AI poses an existential risk and proponents of AI regulation.

OpenAI’s new boss is the same as the old boss. But the company—and the artificial intelligence industry—may have been profoundly changed by the past five days of high-stakes soap opera. Sam Altman, OpenAI’s CEO, cofounder, and figurehead, was removed by the board of directors on Friday. By Tuesday night, after a mass protest by the majority of the startup’s staff, Altman was on his way back, and most of the existing board was gone. But that board, mostly independent of OpenAI’s operations, bound to a “for the good of humanity” mission statement, was critical to the company’s uniqueness.

Well, "the good of humanity" may now be secondary in the list of concerns of the new board of directors. Whether the previous board had a correct understanding of that concept or not is a separate question. If you actually believe that the plot of Terminator, or something like that, is an actual existential risk for humanity, and not merely Science Fiction, perhaps there is an argument to be made for the action they took. Although in hindsight it seems to have been completely ineffective and perhaps even contrary to that goal.
 
Was there some sort of recent breakthrough towards achieving AGI?

Sam Altman's Ouster Followed Dangerous AI Breakthrough Claim: Reuters

According to the news agency, sources familiar with the situation said researchers sent a letter to the OpenAI board of directors warning of a new AI discovery that could threaten humanity, which then prompted the board to remove Altman from his leadership position.

These unnamed sources told Reuters that OpenAI CTO Mira Murati told employees that the breakthrough, described as “Q Star” or “(Q*),” was the reason for the move against Altman, which was made without participation from board chairman Greg Brockman, who resigned from OpenAI in protest.

This mysterious 'Q*' sounds like excellent fodder for conspiracy theorists, whether it actually exists or not.
 