• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems, let me know.

Merged Artificial Intelligence

I read an article a month or so ago in which the author was excited by the capability of this type of AI and genuinely claimed it would be great to set up an AI agent not only to remind him of his partner's birthday but to order a "thoughtful present" on his behalf and arrange a "romantic" meal date. Yes, he thought it was great that he wouldn't have to do any thinking at all for his partner's birthday once he'd told the AI to sort it out, and he considered a present bought by the AI to still count as a "thoughtful present". His reasoning was that the agent could go through her social media posts to find her favourite restaurant and work out what she would like as a present. Presumably the end goal is to never have to communicate directly with your partner at all: prompt your AI to send a few "fun and romantic" messages every day so they know they're in your thoughts, and they can do the same with their AI...
Emotional teledildonics.

I predict that as AI takes over more and more of the human experience, the touch of an actual human will become more and more prized.

I thought AI was supposed to usher in the death of gainful employment so I'd have more time to spend with my significant other, rather than replace significant-other time.
Agentic AI is all things to all men. Tech bros are telling on themselves when they say it will minimize their obligations to interact with other human beings.
 
From everything and everyone that ever contributed to training the AI and wasn't employed by Microsoft.
The report it generated (I included it as a download) couldn’t have been stolen from anywhere as the paper has just been published. It produced new content.
 
I don't know what you're referring to; I don't see any download.

I also don't know what you mean. If I understand you correctly, it got information from a paper (if the author of that paper isn't okay with AI scraping, that's stolen too), and then it added that information to its horrifying amalgamation of other stolen stuff in order to create the podcast.

I know it isn't legally stealing, but I consider it stealing ethically, for whatever that's worth.
 
Caught this on the CBC Radio show https://www.cbc.ca/radio/whitecoat today:
The human face of 'AI psychosis'

After a seemingly innocuous question about pi, Allan Brooks tumbled down a ChatGPT rabbit hole. Three weeks later, he emerged, after spending 300 hours in a spiralling 7,000-prompt exchange with the chatbot. Dr. Keith Sakata, the psychiatrist whose viral thread on X breaks down the phenomenon known as “AI psychosis,” says the built-in sycophancy of large language models like ChatGPT needs to change before more harm is done.

Can we put the genie back in the bottle?
 
The bubble bursting is going to wipe trillions out of people's index funds; that's the primary danger right now, imo.
 
I am a big fan of Metallica, but I did like this:

(I only listened to the first song here.) One thing I've noticed about AI voices is that they are often easier to understand than actual human voices. Every word is pronounced and enunciated clearly. I remember when Weird Al Yankovic did a parody of Nirvana's Smells Like Teen Spirit that was basically making fun of how hard it was to understand the lyrics of the song. That's one of the more extreme examples, but even here I found it easier to understand some of the lyrics than with the original.
 
I don't know what you're referring to; I don't see any download.

I also don't know what you mean. If I understand you correctly, it got information from a paper (if the author of that paper isn't okay with AI scraping, that's stolen too), and then it added that information to its horrifying amalgamation of other stolen stuff in order to create the podcast.

I know it isn't legally stealing, but I consider it stealing ethically, for whatever that's worth.
Earlier in the thread: https://internationalskeptics.com/f.../artificial-intelligence.369280/post-14654298

I have no idea how the word "stolen" applies at all.
 
I wondered what an AI would do if I asked "Is the paper correct?". I picked Deep Research on Copilot and it produced a report; I've only skimmed it, so I don't have any direct comments about it. When I went to download the report I got the option to do so as a PDF/Doc file, which is what I've attached, but it also gave me the option of "Generate a podcast", which is something I'd not noticed before. I clicked on that and it took a minute to generate a podcast. I've listened to it and I'm shocked at how good it is (obviously not perfect). I knew they were getting better, but I thought it would be pretty much a text-to-voice summary of the report it had generated. It isn't. It's only a 6-minute listen if you want to check it: https://copilot.microsoft.com/shares/podcasts/Btrvdh5kC5pA2S6RJmBbE
Is that the actual paper or a synopsis of it? (The PDF file is 11 pages long.) I liked the podcast. Did you specify that it would be a man and a woman with British accents talking about it? (I think he mostly asks the questions and she answers them.) Or was it as simple as "make this into a podcast"?
 
I uploaded the paper to Copilot, then asked "Is that paper correct?" and selected the "Deep Research" option. That produced the "research paper". When I went to export the results of that prompt, it gave me three options: "Export as PDF", "Export as Docx" and "Generate a podcast". All I did was choose "Generate a podcast"; everything in that podcast was AI generated, and I had no input or choice of options at all. I have told Copilot it can remember where I live, which is perhaps why it defaulted to British-sounding voices.

ETA: I'm wondering if you can give it directions for the podcast?

Yes, you can: "make a podcast summarising that research, with 3 presenters, one a serious-sounding academic, one a journalist and a third who makes humorous but insightful comments; it should be breezy but authoritative, mentioning the actual papers referenced in the research."
New Podcast: https://copilot.microsoft.com/shares/podcasts/3ej5DVcfrYMNKebX4Rq7y
 
