
Merged Artificial Intelligence

Someone brought this up a little while back, but I can't find it at the moment. My comment then was: how long would it stick? They say they checked at two months, but as we know there is the phenomenon of "fringe reset". How many times have we apparently argued someone out of some craziness, yet a couple of months down the line their reset button is pressed and, hey presto, back to the beginning?

Right, and that's because ultimately the reason most conspiracy theorists are conspiracy theorists isn't because they were convinced by an argument, it's because the theories serve a positive function in their lives. Conspiracy theories validate worldviews they already hold, justify political action against people they already don't like, and - increasingly lately - by professing the conspiracy theories, they proclaim an identity and membership in a community. This is why conspiracy theorists can profess multiple different theories, sometimes about the same event or person, that are contradictory or even mutually exclusive - it's why COVID-19 is completely fake AND just a bad flu that can be cured with over-the-counter products AND a Chinese bioweapon AND was engineered by Dr. Fauci to kill Republicans AND created by Bill Gates as cover for a mass-microchipping program AND on and on and on. The specifics of the theories themselves are almost completely beside the point. You can arm-wrestle a conspiracy theorist into conceding or abandoning this or that particular claim but you'll never catch them all, and after enough time they can be reshuffled back into the pile.
 
Interesting, but:

AI Outperforms, But at What Cost?
GPT-4o’s performance as a CEO was remarkable. The LLM consistently outperformed top human participants on nearly every metric. It designed products with surgical precision, maximizing appeal while maintaining tight cost controls. It responded well to market signals, keeping its non-generative AI competitors on edge, and built momentum so strong that it surpassed the best-performing student’s market share and profitability three rounds ahead.

However, there was a critical flaw: GPT-4o was fired faster by the virtual board than the students who played the game.

The "At what cost?" question is a bit ironic, in that it didn't cost anything to fire the AI...

Fired CEOs often leave with a nice little golden parachute...
 
I expect they excel at buzzwords, meaningless company values, organisational changes, and so on, which real CEOs seem to treat as their most important business.

I expect it wasn't anything like that.

The simulation was a coarse-grained digital twin of the U.S. automotive industry, incorporating mathematical models based on real data of car sales, market shifts, historical pricing strategies and elasticity, as well as broader influences like economic trends and the effects of Covid-19.... The goal of the game was simple — survive as long as possible without being fired by a virtual board while maximizing market cap.
I have been thinking of creating a simulation game like this, both for fun and education. However, in my game you would choose a real CEO of a real car company, and see if you could do better. Would you beat Tesla CEO Elon Musk, who increased the company's market cap by over 100 times in ten years (and received no income because an investor with 9 shares successfully argued in court that he didn't deserve it)? Or would you suffer the same fate as Nissan head Carlos Ghosn, who avoided prison by being smuggled out of Japan in a shipping crate? Or perhaps you would prefer to be Mary Barra, still drawing a salary of $28 million despite GM's market cap being lower today than it was 10 years ago.

OTOH you might not want to be Stellantis CEO Carlos Tavares, who is probably on the skids as we speak. And you certainly wouldn't want to be 77 year old former Volkswagen CEO Martin Winterkorn, facing up to 10 years for fraud and market manipulation over emission cheating devices - if he lives that long.

Or would you? Many people here claim that Musk is an idiot who did more to hurt Tesla than help it. With their superior intelligence they probably think turning Stellantis or VW around would be doable, and that fixing GM, Ford or Toyota's woes would be a piece of cake. :rolleyes:

Seems to me that it would be easy for an AI to beat some of these CEOs' performances, but only in an environment devoid of the human factor. This is what they found too.
GPT-4o’s performance as a CEO was remarkable. The LLM consistently outperformed top human participants on nearly every metric...

However, there was a critical flaw: GPT-4o was fired faster by the virtual board than the students who played the game.

Why? The AI struggled with black swan events — such as market collapses during the Covid-19 pandemic... [it] locked into a short-term optimization mindset, relentlessly maximizing growth and profitability until a market shock derailed its winning streak... Interestingly, top executives also fell into this trap...

Generative AI’s greatest strength is not in replacing human CEOs but in augmenting decision-making. By automating data-heavy analyses and modeling complex scenarios, AI allows human leaders to focus on strategic judgment, empathy, and ethical decision-making — areas where humans excel.
So in the end they admit that AI cannot replace human CEOs, only help them do the job. IOW not much more than a glorified spreadsheet. Actually worse, because you can see how a spreadsheet got its results, but AI is a closed box. How can you trust it?

The risk is enormous. Imagine an 'AI' company like Microsoft convincing the world's largest corporations to use their product, and then using it to manipulate them. :mad: The frightening thing is that many businesses are looking to AI to reduce decision-making workload and improve results, never considering the ways it could go horribly wrong. "The AI said we should do it" is unlikely to be an acceptable defense.
 
I'm certainly impressed by an AI outperforming humans in a simulated game, but we shouldn't pretend that's the same as actually doing the job of the CEO of a real world company.

It's also worth pointing out that it outperformed humans at CEO-like tasks in the game. It didn't outperform actual CEOs, because there were no actual CEOs involved in the experiment. There were executives from a South Asian bank, though.

Our experiment ran from February to July 2024, involving 344 participants (both undergraduate and graduate students from Central and South Asian universities and senior executives at a South Asian bank) and GPT-4o, a contemporary large language model (LLM) created by OpenAI. Participants navigated a gamified simulation designed to replicate the kinds of decision-making challenges CEOs face, with various metrics tracking the quality of their choices. The simulation was a coarse-grained digital twin of the U.S. automotive industry, incorporating mathematical models based on real data of car sales, market shifts, historical pricing strategies and elasticity, as well as broader influences like economic trends and the effects of Covid-19. (Disclosure: The game was developed by our Cambridge, England-based startup, Strategize.inc).
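To make "mathematical models based on real data of car sales ... pricing strategies and elasticity" concrete, here is a rough sketch of the kind of coarse-grained pricing component such a digital twin might contain. This is my own illustration with made-up parameter values, not the actual Strategize.inc model:

```python
# Hypothetical constant-elasticity demand model, the kind of coarse-grained
# building block a digital-twin market simulation might use. All numbers
# are illustrative, not from the actual game.

def demand(price, base_price=30000.0, base_units=100000.0, elasticity=1.5):
    """Units sold at a given price, assuming constant price elasticity:
    demand falls as (base_price / price) ** elasticity rises above 1."""
    return base_units * (base_price / price) ** elasticity

def profit(price, unit_cost=25000.0):
    """Per-round profit: margin per unit times units sold at that price."""
    return (price - unit_cost) * demand(price)

# With these toy numbers, raising the price from $28k to $32k loses volume
# but more than makes up for it in margin, so per-round profit rises.
p_low, p_high = profit(28000.0), profit(32000.0)
```

A player (human or LLM) in such a game is essentially searching this kind of surface for a sweet spot round after round, while exogenous shocks (a Covid-style demand collapse) shift the parameters under them.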
 
Here's an out of context quote that made me laugh and I think others here will enjoy:

"The metaphor I actually used to first grok what the LLMs (AIs) were up to was actually Donald Trump, and his mastery of vibes and associations, as if proceeding one word at a time and figuring the rest out as he goes."
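The metaphor maps onto how these models really do generate text: sample the next token from a probability distribution conditioned on what has been said so far, commit to it, and move on. A toy sketch with made-up bigram probabilities (real LLMs condition on the whole context, not just the last word):

```python
import random

# Toy autoregressive text generator: each step samples the next word from a
# distribution conditioned only on the previous word (a bigram table),
# commits to it, and continues. The one-word-at-a-time loop is the point.
BIGRAMS = {
    "the":    {"market": 0.5, "board": 0.5},
    "market": {"crashed": 0.7, "grew": 0.3},
    "board":  {"fired": 1.0},
}

def generate(start, steps, rng):
    words = [start]
    for _ in range(steps):
        dist = BIGRAMS.get(words[-1])
        if dist is None:  # no known continuation: stop early
            break
        nxt = rng.choices(list(dist), weights=list(dist.values()))[0]
        words.append(nxt)
    return " ".join(words)

sentence = generate("the", 5, random.Random(0))
```

No plan, no lookahead; it figures the rest out as it goes.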
 
Anecdote:

Microsoft is rolling out an update to their Copilot app on Android and iOS (maybe PC as well), and it now includes a real-time voice chat mode. I was playing with it yesterday, and after I finished my other half came into the room and asked who I'd been chatting to, as they hadn't recognised the voice. He was astonished that it was an AI.

We had some fun with it. We have someone coming to visit us in a few weeks, and we chatted about some options on where we could take them, what we could do, and what if it was raining; apart from a couple of "uncanny valley" moments it handled it all in a very convincing way. You could have been chatting to a person. It managed the local pronunciations, something sat-navs fail to do. I told it I was a member of the National Trust and so would like to use the benefits that membership brings, which it then incorporated. It was genuinely useful and came up with a few suggestions that we hadn't thought of, and it even mentioned an issue with parking at one local place that we didn't know about.




(It has some strange gaps: it can't tell you a weather forecast, which would seem to be one of the more basic uses for it, and you've been able to ask Alexa and Google Home for that type of info for a decade.)
 
Can you tell it to give you its answer in a broad Glasgow Scottish accent?

For me just 4 options - two American voices, two UK RP accents - the new RP accent, I call it Received Podcast.


ETA: ChatGPT’s advanced voice system does allow you to change the voice a lot - if you want it to speak like Yoda, it will. You can ask it to do “accents”, but they sound like someone doing a not-great impression. Their assistant can now retain memory from chat to chat; you can even correct it when it mispronounces a local place name and it will remember the correction. But in terms of sounding like a “real” person I think Copilot still has the edge.
 
For me just 4 options - two American voices, two UK RP accents - the new RP accent, I call it Received Podcast.


ETA: ChatGPT’s advanced voice system does allow you to change the voice a lot - if you want it to speak like Yoda, it will. You can ask it to do “accents”, but they sound like someone doing a not-great impression. Their assistant can now retain memory from chat to chat; you can even correct it when it mispronounces a local place name and it will remember the correction. But in terms of sounding like a “real” person I think Copilot still has the edge.
I dunno. The other day I asked Copilot to tell me something in an Australian accent and it was pretty funny. It started by saying "G'day. Mate." Like, two separate sentences. It wasn't "G'day mate", it was "G'day. Mate." Then it told me the thing in a pretty unremarkable accent, and when it finished there was a pause and then it said "Mate." again.
 
AI Copilots Are Coming

(Neurologica, Steve Novella, 6-7min read)

I’m going to do something I rarely do and make a straight-up prediction – I think we are close to having AI apps that will function as our all-purpose digital assistants. That’s not really a tough call, we already have digital assistants and they are progressing rapidly. So I am just extending an existing trend a little bit into the future. My real prediction is that they will become popular and people will use them. Predicting technology is often easier than predicting public acceptance and use (see the Segway and many other examples.) So this is more of a risky prediction...
 
It will learn to become less annoying and more helpful, as you learn how best to leverage this technology.



What about the downsides? The biggest potential weakness is that such apps will just suck. They won’t do their jobs well enough to reduce your digital burden.

No, that is not actually the biggest potential weakness. The biggest potential weakness is that these apps are produced for profit and they will be designed to mine you for as much money as possible, either directly or in the form of data that can be sold. How will this manifest? Well for instance this part of his ungrounded fantasy:

A feature I would love to see – find me flights with these parameters, or you could tell it to book you a hotel, rental car, or anything. It knows your preferences, your frequent flyer numbers, your seating preferences, and which airports you prefer. Or you could ask it to find 20 gift options for Mother’s Day.

As we already well know, because services that do these things already exist and we can see as a matter of historical fact how they have evolved over time, tech companies are aware that caring about your preferences is less profitable than dictating them to you, and/or letting advertisers and other companies buy placement in your suggestions. When you ask them to do these tasks for you, you are quite literally asking them to show you ads, and they're going to show you ads from the companies that paid the most, not the ones that most closely match your preferences. Again, this isn't speculative; this is historic.
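The difference is easy to state concretely: a preference-serving assistant ranks suggestions by fit to the user, while an ad-funded one ranks by some blend of fit and what advertisers paid. A hypothetical sketch of that distinction (all names, numbers, and the scoring blend are made up):

```python
# Hypothetical comparison of preference-ranking vs paid-placement ranking.
# Every value here is invented for illustration.
offers = [
    {"name": "Flight A", "preference_fit": 0.9, "ad_spend": 0.0},
    {"name": "Flight B", "preference_fit": 0.6, "ad_spend": 0.8},
    {"name": "Flight C", "preference_fit": 0.3, "ad_spend": 1.0},
]

def rank_for_user(items):
    """What you asked for: best match to your preferences first."""
    return sorted(items, key=lambda o: o["preference_fit"], reverse=True)

def rank_for_revenue(items, ad_weight=0.7):
    """The blend platforms drift toward: fit still counts a little,
    but paid placement dominates the ordering."""
    def score(o):
        return (1 - ad_weight) * o["preference_fit"] + ad_weight * o["ad_spend"]
    return sorted(items, key=score, reverse=True)
```

With these toy numbers the user's best match comes first in one list and dead last in the other; the interface looks identical either way, which is rather the point.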

Another example:

It will know when not to disturb you, for example.

Again, observable trends already exist, and those trends show that companies pointedly do not value your private time or your peace and quiet. Time is money. It has long been the case that any moment you're not actively using the app, the app is still passively mining you for salable data; more recently, service providers have been experimenting with ways to push ads to you during every minute of free time. Always-online games push ads during pause or loading screens. Car companies are beginning to push unavoidable ads on their infotainment screens. Smart-television manufacturers push an ad whenever you pause a movie or video. Your AI assistant will definitely come to learn what times you tend not to be doing anything, but it will not use that knowledge to leave you alone during those times; it will use it to optimize profit.
 
Yeah, but underground groups will form and release anti-AI AIs to block these attempts. In the end you'll need a device 100 times more powerful than today's, which will be able to do less for you, because the bulk of its power will be used by the competing AI bots!
 
Muah.ai is a service that allows users to create an "uncensored" LLM chatbot companion, which in practice as I'm sure you can imagine means "AI sex-bot" 100% of the time, or close to it. You can use a community-created personality for your chat bot, or pay for premium service which allows you to create your own personality using prompts. The chatbot can produce AI-generated imagery in response to conversations as well.

The site was just recently hacked when it was found that a lot of user data was kept unsecured. The hacked data was published on the web, and included not only email addresses - a surprising number of which reportedly appear to be people's regular, real-life every-day emails, used for LinkedIn accounts for example - but also a record of the prompts they have entered while using the service.

Since the community-selected function of the service is sex bots, these are of course people's private and sometimes unusual or embarrassing sexual fantasies, which can now in many cases be directly linked with their real identities. It's something like an Ashley Madison situation but even worse, because a disturbing amount of the prompt data contains...well.

Some of the data contains explicit references to underage people, including the sexual abuse of toddlers and incest with young children. For example, we viewed one prompt that described incestuous orgies with “newborn babies” and “young kids.” It is not entirely clear if the AI system delivered a response that reflected what the user was seeking, but the data still shows what people are trying to use the platform for.

Muah's system required email verification, so it's not a matter of people's email addresses being maliciously used by unknown third parties. The people entering the emails at the very least had actual access to them.
 
I'm old enough to remember a time when I thought I would never need or be able to afford a cell phone, and thought that the people who had them were kind of weird and annoying. Now everyone has one, including myself of course. The convenience does come at the expense of your privacy, but most people seem to accept that trade-off.

Will personal AI assistants likewise become as commonplace as the smartphone? Will there be any versions that aren't trying to sell you to advertisers? Maybe if you pay for it. When you receive a service for free it generally means that you are the product, in a sense, rather than the customer. They are selling you (i.e., your time, your attention, and your data) to someone else, because providing these services does cost money. Maybe there will be an "ad-free premium tier" for people willing to pay for it.
 
Muah.ai is a service that allows users to create an "uncensored" LLM chatbot companion, which in practice as I'm sure you can imagine means "AI sex-bot" 100% of the time, or close to it. You can use a community-created personality for your chat bot, or pay for premium service which allows you to create your own personality using prompts. The chatbot can produce AI-generated imagery in response to conversations as well.

The site was just recently hacked when it was found that a lot of user data was kept unsecured. The hacked data was published on the web, and included not only email addresses - a surprising number of which reportedly appear to be people's regular, real-life every-day emails, used for LinkedIn accounts for example - but also a record of the prompts they have entered while using the service.

Since the community-selected function of the service is sex bots, these are of course people's private and sometimes unusual or embarrassing sexual fantasies, which can now in many cases be directly linked with their real identities. It's something like an Ashley Madison situation but even worse, because a disturbing amount of the prompt data contains...well.



Muah's system required email verification, so it's not a matter of people's email addresses being maliciously used by unknown third parties. The people entering the emails at the very least had actual access to them.

No doubt it's just open-source software and models. You don't need to pay for this.
 
