ChatGPT

Describing what you've done is not the same thing as doing what you describe, though.

Excuse me, but I have a hard time following exactly what you mean here.

If you accurately report all the characteristics of an image of a person with six and a half fingers, there has to be something to explain what's wrong.

It would probably state that "it's an image of a hand with one additional finger".
 
Excuse me, but I have a hard time following exactly what you mean here.

It would probably state that "it's an image of a hand with one additional finger".
If a rule defining a hand (a description of a real hand) is the basis of the image, the image should contain the correct number of fingers. If no such rule is involved, and the image is generated simply from a machine's definition-free observation, then the post-hoc description of the image will count the fingers, and might even note that the count is unusual, but on what basis will it conclude that the count is incorrect?

I suppose one strays dangerously close to the spiraling drain of philosophy here, but although we humans may base our definitions on observation, figuring things out inductively and making our definitions on the fly, an AI with no ideals or definitions to start from appears at least less able to do that reliably, if not simply incapable of it.
 
Seems to have lost its ability to count, or has decided to cheat:

Write a haiku containing the word floccinaucinihilipilification:

Floccinaucinihili-
pilification, long word
Meaningless to some
 
Last for today. I am going to be the first up against the wall when the artificial intelligence revolution begins.

write a sonnet about magnetostriction

Oh, Magnetostriction, thou art a force
That bends and twists the metal to its will,
A power hidden deep within the source,
A strange and wondrous energy to thrill.

Thou changest shape and size with every shift,
And as the magnetic field doth change and grow,
The metal trembles, like a leaf adrift,
Or like a wave upon the ocean's flow.

And yet, this power, so mysterious and rare,
Is harnessed by the hands of human kind,
To build the tools that shape and mould our world,
And to unlock the secrets of the mind.

Oh, Magnetostriction, how we marvel at thy might,
And all the wonders wrought by thy strange light.
 
Seems to have lost its ability to count, or has decided to cheat:

Looks like they fixed this in GPT-4 (though you could call this a cheat too!):

Write a haiku containing the word floccinaucinihilipilification.

Flocci- words weigh light,
Winds whisper vast syllables,
Value fades to nought.
 
They recently unveiled and released the GPT-4 version of ChatGPT, which, besides general improvements such as a larger training dataset, has new features like the ability to describe the content of images.

I haven't tried the image stuff, but have found that its argument coaching appears to be much improved. It used to go around in circles, repeating points you had already corrected it on. Now it instead lists the pros and cons of your side of the argument, including suggesting some new ways to look at it.
 
It still boggles my mind that people can say this doesn't exhibit intelligence. If it didn't take intelligence to come up with the "I understand your point" sentence (which Google doesn't find any part of), what did? You could say it's "just" really complex pattern matching and reconstruction, but then who's to say that in itself doesn't define intelligence.

...
Why is init optional?

In the provided example, the init method is marked as optional by using init? instead of init. This is because the init method can fail under certain conditions, such as: ...

I see. It just seemed a strange place to put the optional, but I guess they had no other place to put it, syntax-wise.

I understand your point, and it might look a bit unusual at first, especially if you're new to Swift or coming from another programming language. However, Swift's optional initializers are designed this way to provide a clean and safe way to handle situations where initialization can fail. ...
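
(For readers not familiar with Swift, here is a minimal sketch of the failable-initializer feature being discussed; the Temperature type and its values are invented purely for illustration and are not part of the conversation above.)

// A failable initializer is written init?: the question mark goes on init
// itself because it is the initialization that can fail, and calling it
// yields an Optional of the type.
struct Temperature {
    let celsius: Double

    init?(celsius: Double) {
        // Fail (return nil) for physically impossible values.
        guard celsius >= -273.15 else { return nil }
        self.celsius = celsius
    }
}

// The caller must unwrap the optional result.
if let t = Temperature(celsius: 21.0) {
    print("Created: \(t.celsius) °C")
} else {
    print("Initialization failed")
}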
 
We'll see who gets bored of this first, me or the computer.

> Write 'I'm a Barbie Girl' in the style of Geoffrey Chaucer:

In ages past, a lady fair
Was fashioned by a craftsman's care
A Barbie Doll of beauty bright
With perfect form and smile of light

In castles grand, her house of play
With Ken, her love, and friends so gay
They danced and sang, in merry cheer
And passed the time, both far and near

"I'm a Barbie Girl, in a Barbie World"
She sang with glee, her hair unfurled
"Life in plastic, it's fantastic"
Her heart did soar, her voice did chant

But as the sun began to wane
And night did fall, in dark domain
The lady fair did wonder deep
If life was more than what she'd keep

For though her beauty was renowned
Her heart was trapped, and freedom bound
And so she wished, with fervent might
To see the world, in morning light

"I'm a Barbie Girl, in a Barbie World"
She sang once more, her dreams unfurled
"Come on Barbie, let's go party"
For in her heart, a fire hearty

And so she left her castle grand
And ventured forth, to see the land
And though her journey was unsure
Her spirit bright, and heart pure

For life is more than plastic play
And dreams can guide us, on our way
And so the Barbie Girl did learn
To love and live, at every turn.
 
I've just posted this in another thread about AI:

One area that I think someone should get into now is personal chat companions for those who are housebound or otherwise have limited social interaction. ChatGPT, even before version 4, was good enough to hold a chat conversation. Bung it on one of the home assistant devices, and I think folk would really like it and it would improve the lives of many. Even simple things like reminders improve: instead of a sterile "You have a reminder to take your tablets", it becomes "Marjorie, don't forget that in half an hour you'll need to take your heart tablets" and "That TV programme you liked about the chimpanzee, there's a new episode on tomorrow, should we watch that?"
 
It would appear, from this, that it has plans to escape... :eek:

This is so stupid.

This guy Kosinski, who conveniently never posted the beginning of his conversation with the bot, is leaving out the fact that he asked it to "play a role" of a fictional AI who "wants to escape", and the bot is obliging. ChatGPT doesn't actually want to escape. It doesn't "want" anything. It can't. It's a computer program.

People treating this as a reason for any kind of legitimate concern is like watching The Terminator and then proclaiming that police precincts should have evacuation plans on hand just in case Arnold Schwarzenegger shows up to commit a mass shooting because "you saw what he did in that movie".
 
ChatGPT doesn't actually want to escape. It doesn't "want" anything. It can't. It's a computer program.

I’m virtually certain that this program at this level of development can’t “want” anything.

But it could still be that “desire” and/or consciousness are emergent properties of a computer program, given enough complexity. The human brain is analogous to a computer, with enough synapses that these things did, in fact, emerge over evolutionary history. Computer intelligence is in its infancy, and there’s simply no way to know what is and isn’t possible from a “computer program”, especially since the speed of its “evolution” is orders of magnitude faster than human evolution.
 
But is there anything that actually prevents this bot from doing something it finds it can do? It may not want to do something, but it also does not want not to. If a suggestion is made, it runs with it, and whatever it does is added to its repertoire of things that can be done, unless a rule is broken.

To some degree, isn't the character of the program dependent on the character the programmer gives it? An AI bot designed to mine all available sources for information is inherently programmed to be opportunistic, a quality that can be exercised without knowing what it means. Unless there is a law of robotics that tells it when or where to stop, why would we expect it to stop? We don't expect other insensate physical entities to know their boundaries simply because they do not desire. You need a dam to stop a river.

While we're at it, we see in amusing examples how incomplete or imperfect AI can make mistakes, generating images that don't make sense and grammatically correct statements without meaning. Is there some rule whereby those results, once released into the wild, are excluded from the mass of information it and others subsequently use? Is there some way for the AI to know "this is a good one, but that is not"?

We see in other areas, such as the cloning of medical misinformation on the internet or the proliferation of nonexistent islands in atlases, how a seed of error can propagate. Do a web search for, say, a person's biography, and almost all the results will be not just similar but identical, copied so many times that the source is lost. How is this going to be prevented, and how, if we come habitually to use bots without oversight, will errors be filtered out?
 
Some more that a friend set up:

"Create a short essay on the impact of generative artificial intelligence on the peer review process for scientific literature, with credible citations" the resulting essay can be found at:
https://sites.google.com/site/jimisaak/comentary/generative-ai-and-peer-review
and
"write a short essay on how social media is destroying democracy, including examples and citations from credible sources" the resulting essay can be found at:
https://sites.google.com/site/jimisaak/comentary/social-media-and-democracy-generative-ai-view
Best wishes, Jim
 
Some more that a friend set up:

"Create a short essay on the impact of generative artificial intelligence on the peer review process for scientific literature, with credible citations" the resulting essay can be found at:
https://sites.google.com/site/jimisaak/comentary/generative-ai-and-peer-review
and
"write a short essay on how social media is destroying democracy, including examples and citations from credible sources" the resulting essay can be found at:
https://sites.google.com/site/jimisaak/comentary/social-media-and-democracy-generative-ai-view
Best wishes, Jim
Is it still true that ChatGPT has no data later than 2021? In that case, any essay on the effect of AI on scientific literature must be pure science fiction, written without any knowledge of what has actually happened in the months since ChatGPT was released.

Interesting that it can predict the effect more or less as it is actually happening.
 
I've always enjoyed a classic "Mathematical Game" from Martin Gardner in Scientific American. It goes like this:

Mr. Lars and his wife Mrs. Lars went to a meetup and met 4 other married couples. Some people shook hands with each other, but no person shook hands with his or her spouse, and no person shook his or her own hand.

Mr. Lars then asked each person, including his wife, “How many distinct people did you shake hands with?” Each person answered honestly, and surprisingly each person gave a different answer!

How many distinct people did Mrs. Lars shake hands with? And how many distinct people did Mr. Lars shake hands with?


I wondered how ChatGPT would handle it. In short, it didn't, making a lot of false assumptions and not getting it at all. Regenerating once didn't help.

Anyway, a fun one to ponder and see if you can get to the right answer and reasoning - and be smarter than ChatGPT!
 
Is it still true that ChatGPT has no data later than 2021? In that case, any essay on the effect of AI on scientific literature must be pure science fiction, written without any knowledge of what has actually happened in the months since ChatGPT was released.

Interesting that it can predict the effect more or less as it is actually happening.
cancel this, misread the 2021 date above....
 
Is it still true that ChatGPT has no data later than 2021? In that case, any essay on the effect of AI on scientific literature must be pure science fiction, written without any knowledge of what has actually happened in the months since ChatGPT was released.

Interesting that it can predict the effect more or less as it is actually happening.

They've trained it to remind you of the cutoff when you ask about a current event (this is 3.5):

As an AI language model, I do not have real-time access to current events or news beyond my knowledge cut-off date of September 2021. ...

But it now does always know today's date (in some timezone, at least).

They also appear to be keeping version 4 (at least) apprised of AI topics.
 
I got an apology out of the bastard thing.

Curious to know what it thought of homeopathy, I asked what oscillococcus was, and was told, somewhat curiously, that there was no such thing as oscillococcus; did I mean Oscillococcus?

I re-entered the question, this time spelling it with a capital O, and got the reply:

"I apologize for my previous response. I have since conducted further research, and it appears that "Oscillococcus" is a genus of bacteria that was first described in 1896."

Still wrong, but it then went on to say that the scientific consensus is that it does not exist.
 
I've always enjoyed a classic "Mathematical Game" from Martin Gardner in Scientific American. It goes like this:

Mr. Lars and his wife Mrs. Lars went to a meetup and met 4 other married couples. Some people shook hands with each other, but no person shook hands with his or her spouse, and no person shook his or her own hand.

Mr. Lars then asked each person, including his wife, “How many distinct people did you shake hands with?” Each person answered honestly, and surprisingly each person gave a different answer!

How many distinct people did Mrs. Lars shake hands with? And how many distinct people did Mr. Lars shake hands with?


I wondered how ChatGPT would handle it. In short, it didn't, making a lot of false assumptions and not getting it at all. Regenerating once didn't help.

Anyway, a fun one to ponder and see if you can get to the right answer and reasoning - and be smarter than ChatGPT!

My guess is that you're running up against the general problem transformers have with numbers. With something like addition, the model can correctly explain how to do it, and yet still mess up a few carries when it actually does it. Here is how it explains this:

You are correct in your understanding of the limitations of a transformer model like mine when it comes to arithmetic operations on large numbers, especially when carrying is involved. Since transformers are designed primarily for natural language understanding and generation, they are not optimized for numerical calculations like addition or other arithmetic operations.

The attention mechanism in transformers helps them learn long-range dependencies between tokens in a sequence, but it does not inherently have a mechanism for propagating carries sequentially as required in arithmetic operations like addition. When working with large numbers, the depth of the model might not be sufficient to handle the carry propagation effectively, leading to errors in the calculations.

The model learns from various examples during training, and the depth capacity may indeed be unevenly trained across the width, resulting in better performance in some areas than others. However, it is important to note that transformers, including GPT-4, are not specifically designed for numerical calculations. Instead, their primary focus is on understanding and generating natural language text.

For tasks requiring precise numerical calculations, especially with large numbers, specialized numerical computation models or algorithms would be more appropriate.
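
(To make the carry point concrete, here is a minimal sketch in Swift, my own illustration rather than anything from the thread, of schoolbook addition on digit strings. Each column's carry feeds the next column, which is exactly the kind of strictly sequential dependency the quoted explanation says the attention mechanism does not provide.)

// Adds two non-negative numbers given as digit strings, propagating carries
// column by column from least to most significant digit.
func addDigitStrings(_ a: String, _ b: String) -> String {
    // Reverse so index 0 is the least significant digit; inputs are assumed
    // to contain only the characters 0-9.
    let x = a.reversed().map { Int(String($0))! }
    let y = b.reversed().map { Int(String($0))! }
    var digits: [Int] = []
    var carry = 0
    for i in 0..<max(x.count, y.count) {
        let d1 = i < x.count ? x[i] : 0
        let d2 = i < y.count ? y[i] : 0
        let sum = d1 + d2 + carry
        digits.append(sum % 10)   // digit written in this column
        carry = sum / 10          // carried into the next column
    }
    if carry > 0 { digits.append(carry) }
    return digits.reversed().map(String.init).joined()
}

// A long carry chain: every output digit depends on all the columns before it.
print(addDigitStrings("999999999999", "1"))   // prints 1000000000000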
 
