ChatGPT

I finally reached this article in the September issue of Scientific American: How AI Knows Things No One Told It

I fear it is behind a paywall, but suffice it to say that most of it has been mentioned here already. LLMs seem to build up their own model of the world, and can act as computers even though they lack the facilities for it, such as memory.

The most surprising part to me was that LLMs have also developed the ability to learn without going through any further training. That is, they can pick up new patterns from the prompt alone, despite their stored knowledge being fixed.
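What the article is describing is usually called "in-context learning". Here's a minimal sketch of the idea, assuming the OpenAI Python SDK; the word-reversal task and the example strings are my own invention, not from the article:

```python
# A minimal sketch of "in-context learning": the model picks up a pattern
# from examples supplied in the prompt, with no weight updates at all.
# Assumes the OpenAI Python SDK; the made-up reversal task is mine.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "blick -> kcilb\n"
    "storm -> mrots\n"
    "candle -> ?"
)

resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)

# The model typically answers "eldnac": it inferred the reversal rule
# from two examples, even though its parameters stayed frozen.
print(resp.choices[0].message.content)
```

Nothing in the model's stored parameters changed; the "learning" lived entirely in the prompt.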

The author concludes:
Although LLMs have enough blind spots not to qualify as artificial general intelligence, or AGI—the term for a machine that attains the resourcefulness of animal brains—these emergent abilities suggest to some researchers that tech companies are closer to AGI than even optimists had guessed.
 
I finally reached this article in the September issue of Scientific American: How AI Knows Things No One Told It

LLMs have also managed to ace the bar exam, write a sonnet about the Higgs boson and make an attempt to break up their users' marriage. Few had expected a fairly straightforward autocorrection algorithm to acquire such broad abilities.
I call BS.

'Aced' the bar exam? No, it cheated.

Wrote a sonnet about the Higgs boson? It doesn't even know what a Higgs boson is, let alone why writing a sonnet about it is stupid.

'Attempted' to break up a user's marriage? This is anthropomorphism.
 
'Know' is going to need a new definition that clarifies the difference between actual conscious thought and a close facsimile that AI programs can achieve.

Was HAL, the computer in 2001, consciously thinking it needed to protect the mission when it was programmed to protect the mission, or did it make its own decision independently?

And what about in the future, when we might be growing brains from stem cells and combining that with AI as a means to program thought into said brain?
 
From another forum discussion:
Computers, having no intelligence, can’t manifest bias. They can’t learn.

AI can most certainly learn.

When one says they can only replicate bias: what if they consume (metaphorically) all the bias in the internet data they learn from? Who then imparts the bias they manifest?
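To make that concrete, here's a toy illustration (my own, not from the quoted forum) of how a purely statistical model "manifests" whatever bias its training data carries: a bigram counter trained on a skewed corpus reproduces the skew when completing text.

```python
# Toy demonstration: a bigram "language model" trained on a skewed corpus.
# The model has no opinions or intelligence; it just mirrors corpus statistics.

from collections import Counter, defaultdict

# Deliberately skewed training data (my invented mini-corpus).
corpus = (
    "the doctor said he was busy . "
    "the doctor said he would call . "
    "the nurse said she was busy ."
)

bigrams = defaultdict(Counter)
tokens = corpus.split()
for prev, nxt in zip(tokens, tokens[1:]):
    bigrams[prev][nxt] += 1

# Asked to continue "... said", the model prefers "he" two-to-one,
# purely because the data did.
print(bigrams["said"].most_common())  # [('he', 2), ('she', 1)]
```

Scale that mechanism up to the whole internet and the question stands: the bias comes from the data, but the model is what delivers it.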

I have been blown away in the last few weeks learning about how extensively AI is spreading into every aspect of our lives, and just what AI is. It's not your mother's computer science. The Freakonomics podcast on public radio this afternoon was an eye-opener and a good place to start.

https://freakonomics.com/podcast/season-13-episode-4/

Not sure what the title or summary is on this episode. It would appear to be conflated with episode 3. When I figure it out I'll post an update.

https://freakonomics.com/podcast/a-i-is-changing-everything-does-that-include-you/

AI Is Changing Everything. Does That Include You?

For all the speculation about the future, A.I. tools can be useful right now. Adam Davidson discovers what they can help us do, how we can get the most from them — and why the things that make them helpful also make them dangerous. (Part 3 of “How to Think About A.I.”)

https://freakonomics.com/podcast/how-to-stop-worrying-and-love-the-robot-apocalypse-ep-461/

How To Stop Worrying and Love the Robot Apocalypse

It’s true that robots (and other smart technologies) will kill many jobs. It may also be true that newer collaborative robots (“cobots”) will totally reinvigorate how work gets done. That, at least, is what the economists are telling us. Should we believe them?

I'm still soaking it all in, which is why I urge you all to listen to the source. Set aside the time. This is emerging as one of the most important issues of our time, and that is not hyperbole.
 
Keep an eye out for anthropomorphic language in articles reporting on AI, including in scientific papers. Don't be misled.

Anthropomorphizing animals in research is backed by clear genetic and evolutionary evidence; no such justification applies to non-sentient computers.
 
What is 'knowing' besides being able to repeat something you have read, seen or heard?
Like I just posted, anthropomorphic language in articles reporting on AI, including in scientific papers, is misleading.

Sentient 'knowing' differs in its nature from an AI program merely producing a correct answer.

If you need more explanation than that, I suggest you invest some time in understanding the current state of research on biological consciousness.
 
What is 'knowing' besides being able to repeat something you have read, seen or heard?

There's something. Can't put my finger on it, but there's something. Anyone who has ever taught a child and tried to make them really understand, instead of regurgitate; anyone who has been a child and remembers it; or any older student who has been a graduate student (and obviously remembers those days) will vouch that there's a difference.
 
There's something. Can't put my finger on it, but there's something. Anyone who has ever taught a child and tried to make them really understand, instead of regurgitate; anyone who has been a child and remembers it; or any older student who has been a graduate student (and obviously remembers those days) will vouch that there's a difference.

I can't find the link to the research I read, but they asked little kids to break a rule about eating in the classroom, and the kids easily broke the rule they had learned. But when asked to break the rule about hitting a puppy, they knew it was wrong and didn't do it.

That's pretty subtle, and I suspect an AI program could be adapted to regurgitate the difference between the rules. But the youngest children tested had an innate sense of empathy they didn't learn.

You could get an AI program to mimic empathy, but I don't believe the mimicry is the same thing as 'knowing'.
 
I can't find the link to the research I read, but they asked little kids to break a rule about eating in the classroom, and the kids easily broke the rule they had learned. But when asked to break the rule about hitting a puppy, they knew it was wrong and didn't do it.

That's pretty subtle, and I suspect an AI program could be adapted to regurgitate the difference between the rules. But the youngest children tested had an innate sense of empathy they didn't learn.

You could get an AI program to mimic empathy, but I don't believe the mimicry is the same thing as 'knowing'.

Asimov's Three Laws of Robotics: eating in the classroom breaks only the Second Law (obey humans), while hitting a puppy breaks the First Law (do not harm a human). A puppy is close enough to a human.
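A toy sketch of that priority ordering, in Python; the scenario labels and the law-violation mapping are my own framing, not from the studies above:

```python
# A toy encoding (my own framing) of the priority ordering in Asimov's
# Three Laws: lower-numbered laws always outrank higher-numbered ones.

LAWS = {1: "do not harm a human", 2: "obey humans", 3: "protect yourself"}

# Which laws each playroom action would violate, per the post above.
VIOLATIONS = {
    "eat in the classroom": [2],  # disobeys an instruction, harms no one
    "hit the puppy": [1, 2],      # harms a (near-)human and disobeys
}

def worst_violation(action: str) -> int:
    """Return the lowest-numbered (highest-priority) law the action breaks."""
    return min(VIOLATIONS[action])

for action in VIOLATIONS:
    law = worst_violation(action)
    print(f"{action}: breaks Law {law} ({LAWS[law]})")
```

The kids behaved as if they had the same ordering built in: the conventional rule was negotiable, the moral one wasn't.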
 
It doesn't 'know' what the Higgs boson is any more than an encyclopedia does.

Sure. An encyclopedia "knows" what the Higgs boson is too. ChatGPT is just an encyclopedia with better language processing. But that means it can answer questions about it, explain details or make an overview, or even write a poem about it.
What more "knowing" is there?
 
Sure. An encyclopedia "knows" what the Higgs boson is too. ChatGPT is just an encyclopedia with better language processing. But that means it can answer questions about it, explain details or make an overview, or even write a poem about it.
What more "knowing" is there?

There's no reason a monkey couldn't randomly type the words "A monkey is a mammal". We just need enough monkeys and a sufficient amount of time. But neither it nor the words on the page "know" anything. The only entity that can give context to the random arrangement of symbols is a human reader who speaks English, and so only the reader can be said to "know". Meaning is only created once they read the sentence.

I myself can randomly type "Ason fert tu kuvak." Maybe this means something to the lizard people of a distant swamp planet in dimension X, but unless one of those lizard people reads it, it is just random gibberish.
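For scale, here's a quick back-of-the-envelope sketch (my own arithmetic) of how much "enough monkeys and a sufficient amount of time" actually is for that one sentence:

```python
# Rough arithmetic for the monkey-typing point above. Assumptions (mine):
# a 27-key keyboard (26 letters + space), uniform random presses, no case.

target = "a monkey is a mammal"  # 20 characters
alphabet_size = 27

# Probability that one run of len(target) keystrokes matches exactly.
p = (1 / alphabet_size) ** len(target)

# Expected number of independent attempts until the first match
# (geometric distribution): about 4.2e28.
expected_attempts = 1 / p

print(f"P(match on one attempt) = {p:.2e}")
print(f"Expected attempts       = {expected_attempts:.2e}")
```

Even at a billion keystrokes per second, that works out to something like 10^13 years, which is rather the point: the typing and the meaning are entirely separate things.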
 
Game of Thrones author sues ChatGPT owner OpenAI
https://www.bbc.co.uk/news/technology-66866577

"US authors George RR Martin and John Grisham are suing ChatGPT-owner OpenAI over claims their copyright was infringed to train the system."

Nice to see writers who have the good fortune to be able to afford effective legal advocacy are using their privilege for good.

Wishing good luck to RR Martin et al in their quest to kill this beast.
 
