EHocking
Penultimate Amazing
V. geeky. I had to run the paper title through an LLM just to work out what the subject was.
For fun I asked Copilot to summarise it. It did, and in effect it hid the sarcasm and irony of the original, producing a sanitised version without the precautionary warnings. In other words, like many humans reading it would have done, it failed to understand the meaning of the article. And when I then let it lead me down the rabbit hole, it got more and more detached from the meaning of the article. A very good example of how relying on AI summaries can be problematic, and given how the commercial companies are pushing the use of AI to summarise complex subjects, it should be ringing alarm bells.

I'm not sure if anyone here will care, but ChatGPT is not allowed to delete user prompts etc:

Jan Wildeboer 😷 (@jwildeboer@social.wildeboer.net) on social.wildeboer.net: Attached: 3 images. #Oops Due to the ongoing case New York Times v #OpenAI, you cannot really delete your #ChatGPT prompts and conversations, as the court has ordered [1] on 13th of May that *all* logs must be stored until further notice. OpenAI is furious as that means "including sensitive...

Not allowed to delete the output; they can delete the input, so your prompts can still be deleted... I don't know if the court's decision is warranted or not, as I've not followed the case.
Unsourced screenshot says anonymous narrative says...
Just for you the link is But I agree, what qualifications and experience gomjabbar has to make those statements is unknown.
I thought it was interesting and in fact a little bit funny. If it's not true, then it's still a little bit funny.
Yes, people even make entire careers out of telling fictional narratives for the amusement of others.
While I do not disagree, this is heading towards a world where you can't say anything off the cuff. Everything has to be backed up with robust sources and citations to peer-reviewed journals, and I don't think anybody's actually going to do that.
My concern is that it's very easy for us to fall into the same trap that LLMs do: Being saturated with untrue/unsupported/unverifiable claims, which lead us to hallucinate a false reality. So it bothers me when, in an ostensibly rational discussion about AI, people post engagingly humorous anecdotes that seem truthy but are not actually known to be true.
To my mind, that's one of the ways misinformation spreads, and people end up living in a "post truth world".
How did “Strange” become “Sttarge”? And we have “Compere”, “Somis”, and “Bockspace”. Clearly, letters aren’t its strong side. Perhaps they should teach it to read and write?

Compared to many previous generations, they have improved text, and especially specified text within images, a lot. It does show how they have no understanding of what they are producing. If a human were mocking up such a keyboard, the text would be 99% accurate (not a hundred percent, as people make errors and mistakes too, just like generative AIs).
These things aren't thinking. They're not reading, or doing math, or anything like reasoning about the prompt. They're mechanically finding a statistically likely fulfillment of the prompt.
Also, they don't know what words or letters look like. They're not producing writing. They generate an image that is like writing.
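To make the "statistically likely fulfillment" point concrete, here is a tiny toy sketch (made-up numbers, not any real model): the system just samples from a probability distribution over possible continuations, with nothing in the loop that checks spelling or knows what the letters mean.

```python
import random

# Hypothetical, made-up probabilities for what might follow a prompt
# asking for a keyboard key label -- purely illustrative, not from any real model.
next_token_probs = {
    "Backspace": 0.45,   # the correct label is merely the most likely option
    "Bockspace": 0.25,   # plausible-looking misspellings still carry probability
    "Backspoce": 0.20,
    "Sttarge":   0.10,
}

tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sampling picks whatever is statistically likely; nothing verifies the result.
print(random.choices(tokens, weights=weights, k=1)[0])
```

Run it a few times and it usually prints "Backspace", but sometimes one of the near-miss spellings, which is roughly the behaviour being complained about above.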