ChatGPT

Article in Scientific American:

How AI Knows Things No One Told It

Absolutely mind-boggling. If this article is to be believed, AIs are moving away from very high-level but very specific artificial intelligence (like dominating chess) towards artificial general intelligence, operating in much the same way that animal brains do.

I don’t know whether excitement or worry is the better response.
 
Article in Scientific American:

How AI Knows Things No One Told It

And since many of us think our intelligence and whole "me-ness" is an emergent property of the brain, we may indeed be mimicking our own intelligence as we develop these systems, without meaning to.

Obviously any "me-ness" arising from these emergent properties won't be the same as ours: it hasn't got the same inputs, the hardware is different, and so on.

Fascinating times.
 
One of my friends has used ChatGPT for editing. She wrote a text that was convoluted and difficult to understand, asked ChatGPT to make it easier to read, and it did a tremendous job of it.
 
I wonder, is this actually just the (unrepresentative) view/understanding/lack-of-understanding of one set of folks, or is it the general consensus that no one knows exactly how it works? If the latter, then double wow, exponential wow.
As far as I know, it is in the nature of systems based on neural networks that no one knows exactly how they work. As we see in the article, in order to investigate, scientists are now trying to monitor activity in various parts of its system, much like EEG is used to investigate how the brain works.
 
Can ChatGPT make a calculation about this image and what happened?

Given the dimensions and weight of the screw and the height from which it fell, can ChatGPT calculate the probability of this happening?

[image]


...or should I just start dropping screws?
 
It most certainly produces text based upon "facts", or more specifically datasets, that it has been trained on. I don't know where you got the impression otherwise.

Facts are included in the massive datasets it uses, but ChatGPT has no mechanism for recognising them. It is just, as I said, a very sophisticated predictive text generator. It simply cannot be relied on to produce output that is factually correct; that is not what it does, however much it looks like it.

There are also problems with the dataset that it uses, in that it's representative of only a subset of the population.

Interesting episode of Word of Mouth about it here - https://www.bbc.co.uk/programmes/m001l97m
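
For anyone who wants a concrete picture of what "predictive text generator" means, here is a minimal toy sketch (mine, not anything from the article or from OpenAI). The real thing predicts sub-word tokens using billions of learned parameters, but the basic loop is the same: predict a plausible next word from what came before, append it, repeat. The tiny corpus here is made up.

```python
import random
from collections import defaultdict

# Learn, from a tiny made-up corpus, which word tends to follow which.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8):
    """Generate text by repeatedly predicting the next word."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:                       # no known continuation
            break
        words.append(random.choice(candidates))  # sample a likely next word
    return " ".join(words)

print(generate("the"))   # e.g. "the dog sat on the mat and the cat"
```

Nothing in that loop checks whether the output is true; it only tracks what tends to follow what, which is exactly the point above.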
 
Facts are included in the massive datasets it uses, but ChatGPT has no mechanism for recognising them. It is just, as I said, a very sophisticated predictive text generator. It simply cannot be relied on to produce output that is factually correct; that is not what it does, however much it looks like it.
Are we not ourselves something similar, i.e. pattern-matching systems? Apart from some genetically coded instincts, everything we do is based on what we have seen others doing.

I have certainly seen ChatGPT producing factually correct output, not just something that looks like it. It seems to me that its main weakness is that it doesn’t know when it might be wrong. A human has some sense of when she is probably misremembering an address, but ChatGPT is always dead sure, even when the address is completely wrong. I think it is because ChatGPT doesn’t have enough experience with its own memory, whereas we know we have been wrong many times.
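
To illustrate the "dead sure" point with a toy example (the numbers and the address are invented, and real models work over far larger vocabularies): internally the model assigns probabilities to candidate next tokens, but the sentence it produces carries none of that uncertainty.

```python
import math

# Hypothetical scores for the house number in a half-remembered address.
# The distribution is nearly flat -- the model has little real idea --
# yet the generated sentence states the winner with complete fluency.
logits = {"12": 1.3, "21": 1.2, "17": 1.1, "7": 1.0}

# Softmax: turn scores into probabilities that sum to one.
total = sum(math.exp(v) for v in logits.values())
probs = {token: math.exp(v) / total for token, v in logits.items()}

best = max(probs, key=probs.get)
print(f"Generated: 'The office is at {best} Baker Street.'")
print(f"Underlying probability of that token: {probs[best]:.0%}")   # ~29%
```

The fluent sentence and the shaky 29% come from the same step, but only the sentence is shown to the user.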
 
Are we not ourselves something similar, i.e. pattern-matching systems? Apart from some genetically coded instincts, everything we do is based on what we have seen others doing.

I have certainly seen ChatGPT producing factually correct output, not just something that looks like it. It seems to me that its main weakness is that it doesn’t know when it might be wrong. A human has some sense of when she is probably misremembering an address, but ChatGPT is always dead sure, even when the address is completely wrong. I think it is because ChatGPT doesn’t have enough experience with its own memory, whereas we know we have been wrong many times.

This is what I think the large language models are in fact showing: much of what we call intelligence, which we take as evidence of us being an "I", is beginning to seem like something that doesn't require that.

Plus the article says these generative AIs do seem to be starting to create for themselves "internal" representations of the world.

Sounds awfully like how we think we think.
 
As far as I know, it is in the nature of systems based on neural networks that no one knows exactly how they work. As we see in the article, in order to investigate, scientists are now trying to monitor activity in various parts of its system, much like EEG is used to investigate how the brain works.
The best way I've heard it described was by Steven Novella from the Skeptics' Guide to the Universe podcast. He said that although the parameters and the training data are fed into it by humans, nobody understands why it makes the decisions that it makes about how to produce its output. It's clear that inputs are being processed into outputs, but exactly how is a black box.
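
As a rough illustration of what "monitoring activity in various parts of its system" can look like (this is a generic probing sketch on a toy network, not the specific technique from the article), PyTorch's forward hooks let you record each layer's activations as an input passes through, a bit like placing electrodes:

```python
import torch
import torch.nn as nn

# A tiny stand-in network; real language models have billions of parameters,
# but the idea of listening in on internal activity is the same.
model = nn.Sequential(
    nn.Linear(8, 16), nn.ReLU(),
    nn.Linear(16, 16), nn.ReLU(),
    nn.Linear(16, 4),
)

recorded = {}

def record(name):
    # A forward hook fires whenever a layer produces output, letting us
    # capture internal activations without changing the model itself.
    def hook(module, inputs, output):
        recorded[name] = output.detach()
    return hook

for i, layer in enumerate(model):
    layer.register_forward_hook(record(f"layer_{i}"))

model(torch.randn(1, 8))   # one forward pass with a random input

for name, activation in recorded.items():
    print(name, tuple(activation.shape), "mean activity",
          round(activation.mean().item(), 3))
```

Researchers then look for patterns in those recordings (which parts light up for which concepts), which is what the "EEG for neural networks" comparison is getting at.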

Have I posted the CGPGrey video in this thread yet? It explains very well why this is the case:

 
On a lighter note, and something that makes my nerd heart beat faster.
A modder from the Skyrim community used ChatGPT with two other voice AIs to create a mod that allows you to have full-on conversations with all NPCs in the game.
It's still quite stilted to listen to, but the implications for games are staggering.

https://www.youtube.com/watch?v=0wCjosz1vOA
 
On a lighter note, and something that makes my nerd heart beat faster.
A modder from the Skyrim community used ChatGPT with two other voice AIs to create a mod that allows you to have full-on conversations with all NPCs in the game.
It's still quite stilted to listen to, but the implications for games are staggering.

https://www.youtube.com/watch?v=0wCjosz1vOA

It's being explored, and I know at least one developer is now incorporating this. You'll start to see it in new games towards the end of this year and onwards. One of the issues so far is keeping it "on track". I don't think that is much of a problem: I suspect the first couple of games to feature generative AI NPCs will get a lot of publicity and a lot of jokes about what you can discuss with NPCs, but I think that will soon pass, and it will come to be accepted that NPCs are no longer as limited as they are at the moment.
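
For the curious, here is a rough sketch of the general shape such an LLM-backed NPC system might take. The actual mod's pipeline isn't documented here, and call_llm below is a placeholder rather than any real API; the persona prompt is one way of doing the "keeping it on track", since it constrains who the NPC is and what they will talk about.

```python
# Sketch of an LLM-driven NPC conversation loop (illustrative only).

PERSONA = (
    "You are Lydia, a housecarl in Whiterun. Stay in character, keep replies "
    "short, and refuse to discuss anything outside the game world."
)

def call_llm(messages):
    # Placeholder: in practice this would call a hosted or local chat model
    # with the persona prompt plus the running conversation history.
    return "I am sworn to carry your burdens."

def npc_conversation():
    history = [{"role": "system", "content": PERSONA}]
    while True:
        player_line = input("You: ")     # in the mod, this comes from speech-to-text
        if not player_line:
            break
        history.append({"role": "user", "content": player_line})
        reply = call_llm(history)
        history.append({"role": "assistant", "content": reply})
        print("NPC:", reply)             # in the mod, this goes to a voice AI

if __name__ == "__main__":
    npc_conversation()
```

Swap the placeholder for a real model plus the two voice AIs mentioned above and you get something like the loop the mod demonstrates.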
 
I would worry as a voice actor. Once the AI has learned your voice, can a company use your voice in perpetuity in later games?
 
I would worry as a voice actor. Once the AI has learned your voice, can a company use your voice in perpetuity in later games?

Depends on what you negotiate. In the UK most developers will negotiate a "buy-out" for the rights to what you provide, and most contracts will also feature a perpetuity clause and the rights to use your work in future products. If you are a "name", you (or rather your agent or business manager) will probably not agree to a buy-out. There are bound to be some issues arising because AI recreation of a voice simply wasn't considered in contracts, but most contracts in the games industry do have a general "any future technology" clause that could be argued to cover AI generation of your voice. I think the right to use AI to generate your voice will become a standard part of contracts, either explicitly included or explicitly excluded; if you want that type of contract, it will come at an additional cost.
 
Someone made a Skyrim mod using ChatGPT that lets you have open-ended conversations with NPCs instead of the usual finite set of canned responses.

 
