ChatGPT

The answer is 4. Once you establish that Mr. Lars got 9 distinct answers to his query, those answers must have been 0, 1, 2, …, 8.

The AI properly concluded that whoever shook 8 hands must have been married to the person who shook 0 hands. Diagramming it out: 7 married to 1, 6 married to 2, 5 married to 3, and 4 married to 4.

Anyone other than the “4” couple would not have gotten 9 distinct answers to their query - two respondents would have given the same answer of 4. So the person making the query must have shaken 4 hands, and so must their spouse.
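
If you want to check that diagram mechanically, here's a small Python sketch (the threshold construction and the labels are mine, not from the video). It builds one handshake graph realizing those counts and confirms the asker and spouse both land on 4; the uniqueness of 4 is the pigeonhole argument above.

```python
# People 0..9; persons 2k and 2k+1 are married; person 0 asks, person 1 is the spouse.
targets = {2: 8, 3: 0, 4: 7, 5: 1, 6: 6, 7: 2, 8: 5, 9: 3, 0: 4, 1: 4}

# Threshold rule: i and j shake hands iff their target counts sum to at least 9.
# Conveniently, spouses' targets always sum to 8, so nobody shakes a spouse's hand.
edges = {(i, j) for i in range(10) for j in range(i + 1, 10)
         if targets[i] + targets[j] >= 9}

degree = {p: sum(p in e for e in edges) for p in range(10)}

assert all(degree[p] == targets[p] for p in range(10))            # every count realized
assert all((2 * k, 2 * k + 1) not in edges for k in range(5))     # no spouse handshakes
assert sorted(degree[p] for p in range(1, 10)) == list(range(9))  # 9 distinct answers
print("asker:", degree[0], "spouse:", degree[1])                  # both 4
```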

https://youtu.be/W_TOq0zs6KM

My one improvement would have been to diagram it out with Mr. and Mrs. Lars unidentified until the end, deducing who they must have been from the 9-distinct-answers qualifier.

Anyway, it's odd that the AI put parts of it together but dropped the ball at the end.
 
But they aren't - the first answer is: "Therefore, Mrs. Lars shook hands with 1 person and Mr. Lars shook hands with 4 people."


I meant the first two answers you posted: the "Bard's" answer and the "Bing chat" answer. They're identical except for having different line breaks.

I don't know why it turned the 1 into a 0 in the further explanation of "the logic behind it." But the parts I highlighted in the further explanation are pretty clearly the result of confusing the original (10-person) puzzle with the simplified (4-person) version of it that's encountered in many of the solutions/explanations found online. Which I think is an interesting illustration of one way the chatbots' answers can go wrong.

If given an equivalent problem in terms that don't match the words of the Martin Gardner version in any obvious way (8 people instead of 10, pairs of twins instead of spouses, two-player games instead of handshakes, different order of presenting the constraints, etc.) and so not as easily matched to text found online, would it be able to construct any of the same kind of reasoning at all?

Here, I'll do some of the work:

Four pairs of twins held a board game night. The eight of them were the only participants. During the evening some of them played games of checkers. At the end of the gathering, one of the twins, named Alice, asked how many checkers games each of the other seven people had played. They all told the truth, and it turned out each of them had played a different number of checkers games.

No one played checkers alone, no checkers game had more than two players, no one played checkers against his or her own twin, and no one played checkers more than once against the same opponent.

How many checkers games did Alice play? How many did Alice's twin play?
 

ChatGPT's response:

Since there were eight participants and no one played alone, each game had exactly two players. Therefore, there were a total of $\frac{n(n-1)}{2}$ checkers games played, where $n$ is the number of players.

Since each of the other seven people played a different number of checkers games, and there were a total of $\frac{8 \times 7}{2} = 28$ checkers games played, each of the other seven people must have played between 1 and 6 checkers games.

Let's consider Alice's twin first. Since Alice's twin couldn't have played against Alice, he or she must have played against one of the other six people. Furthermore, since each of the other six people played a different number of games, Alice's twin must have played a number of games that was different from each of the other six people. Therefore, Alice's twin must have played either 1, 3, or 5 games.

Now let's consider Alice. Alice couldn't have played against her twin, so she must have played against one of the other six people. Suppose Alice played $k$ games. Then the other six people played a total of $28 - k$ games, and each of them played a different number of games. Since Alice's twin played either 1, 3, or 5 games, Alice must have played either $6 - 1 = 5$, $6 - 3 = 3$, or $6 - 5 = 1$ game.

Suppose Alice played 5 games. Then the other six people played a total of $28 - 5 = 23$ games among themselves. But there are only 15 possible pairings among the other six people, which means that some pair of them played against each other more than once, which is a contradiction. Therefore, Alice didn't play 5 games.

Suppose Alice played 1 game. Then the other six people played a total of $28 - 1 = 27$ games among themselves. We can easily check that this is impossible, since there are only 15 possible pairings among the other six people, and none of them played more than one game against the same opponent.

Therefore, Alice must have played 3 games, and the other six people played a total of $28 - 3 = 25$ games among themselves. We can check that this is possible: one pair of twins played 5 games each, one pair played 4 games each, one pair played 2 games each, and one pair played 1 game each. Therefore, Alice's twin played 1 game.

In summary, Alice played 3 games, and Alice's twin played 1 game.

Bard

Since no one played checkers alone, each checkers game had 2 players. Since no one played checkers against his or her own twin, each twin played with a different player. Since no one played checkers more than once against the same opponent, each twin played with 4 different players.

Since there were 4 pairs of twins, there were 4 x 2 = 8 different players. Since each player played with 4 different players, Alice played 4 x 4 = 16 checkers games.

Alice's twin played with 4 different players as well. However, she did not play with Alice. Therefore, Alice's twin played 4 - 1 = 3 checkers games.

Bing Chat

This is a logic puzzle. Since each of the seven people played a different number of games, the number of games played by each person must be 0, 1, 2, 3, 4, 5 and 6. Since no one played against their own twin and no one played more than once against the same opponent, the person who played 6 games must have played against all the other six people. This means that the person who played 0 games must be the twin of the person who played 6 games.

Since Alice’s twin did not play against their own twin, Alice must have played either 0 or 6 games. Since Alice asked how many games each of the other seven people had played, it is likely that Alice herself did not know how many games she had played. This suggests that Alice is the one who played 0 games and her twin is the one who played 6 games.

So Alice played 0 checkers games and her twin played 6 checkers games.
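
For the record, none of the three bots lands on the standard answer. Here's a sketch (the function name is mine) that mechanizes the Gardner-style pairing argument: the highest answer must come from someone who played every non-twin present, so that person's twin is the 0; peel that pair off, note that everyone remaining - Alice included - loses exactly one game, and recurse.

```python
def asker_games(n_pairs: int) -> int:
    """Games the asker (and, by symmetry, the asker's twin) must have played
    in a party of n_pairs twin pairs under the puzzle's rules."""
    if n_pairs == 1:
        return 0  # only the asker and their twin remain, and twins never play
    # The maximum remaining answer belongs to someone who played every
    # non-twin present, so that player's twin answered 0. Removing the pair
    # costs everyone else (the asker included) exactly one game.
    return 1 + asker_games(n_pairs - 1)

print(asker_games(4))  # -> 3: Alice and her twin each played 3 games
print(asker_games(5))  # -> 4: matches the 10-person handshake original
```

So Alice played 3 games and so did her twin - the same middle-of-the-range answer as the "4 and 4" of the original.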
 
Is there any way of getting (legal and ethical!) access to GPT-4 directly, rather than via Bing Chat's implementation, without paying?
 
I don't believe anything an AI or any search engine tells me now without a source that can be evaluated for accuracy. ChatGPT has negatively impacted the rate at which I accept and believe any information - good or bad.

But what if they also use AI?

It won't make a difference. "AI" as a buzzword has already long been in use to describe sorting algorithms.
 
The greatest danger is from people thinking sentience is the tipping point. Non-sentient machines are just as capable of hurting us.

I disagree with the framing, though. "The machine" (speaking of AI programs in this case) isn't going to "hurt us". It isn't merely not "the greatest danger", it isn't a realistic danger at all.

The danger is and can only be the decisions and actions of people.

This is important - so extremely important that I cannot stress enough how important it is. It's not some semantic nitpick. If someone is run over by an AI-controlled car, that person was killed by the person or company who was operating that vehicle, not "the AI". It could (and in my opinion, should) be argued that the company that manufactured it is also liable for selling a defective product, just like any other company would be in any other instance that doesn't involve "AI".
 
Upthread I mentioned that ChatGPT had made a math error in a fairly straightforward physics problem I gave it. The error was pointed out by a fellow on another forum using a calculator. I verified his answer with WolframAlpha. I wondered why they couldn’t have ChatGPT “reach out” to calculator programs and/or WolframAlpha to “check its work”.

Apparently Stephen Wolfram himself had a similar thought!

https://youtu.be/z5WZhCBRDpU
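
The video is about wiring ChatGPT up to Wolfram's tools. As a toy illustration of the underlying "check its work" pattern (the function names and the regex are mine - a sketch of the idea, not OpenAI's or Wolfram's actual interface), you can scan a model's output for arithmetic claims and re-verify them with a real evaluator:

```python
import ast
import operator
import re

# Operators we allow in a "calculator" expression.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv,
       ast.Pow: operator.pow, ast.USub: operator.neg}

def safe_eval(expr: str) -> float:
    """Evaluate plain arithmetic by walking the AST (never raw eval())."""
    def walk(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](walk(node.operand))
        raise ValueError("not plain arithmetic")
    return walk(ast.parse(expr, mode="eval").body)

def flag_bad_arithmetic(text: str):
    """Yield a correction for every 'expression = number' claim that is wrong."""
    for m in re.finditer(r"([0-9][0-9.+\-*/x ()]*?)\s*=\s*(-?[0-9.]+)", text):
        expr = m.group(1).strip().replace("x", "*")
        try:
            actual = safe_eval(expr)
        except (ValueError, SyntaxError, ZeroDivisionError):
            continue
        if abs(actual - float(m.group(2))) > 1e-9:
            yield f"'{m.group(0)}' is wrong; {expr} is actually {actual}"

print(list(flag_bad_arithmetic("Alice played 4 x 4 = 16 checkers games")))
# [] -- Bard's arithmetic above was fine; the setup was what was wrong,
# which no calculator can catch.
print(list(flag_bad_arithmetic("so the answer is 7 * 8 = 54")))
# ["'7 * 8 = 54' is wrong; 7 * 8 is actually 56"]
```

Which also shows the limitation: a calculator catches bad arithmetic, not bad reasoning about what to compute.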
 
Here's another example of how assumptions and mistaken impressions lead to people just enthusiastically vomiting up misinformation about AI. Senator Chris Murphy (D-CT) tweeted this:

ChatGPT taught itself to do advanced chemistry. It wasn't built into the model. Nobody programmed it to learn complicated chemistry. It decided to teach itself, then made its knowledge available to anyone who asked.

Something is coming. We aren't ready.

It's utterly false. Nobody who is a chemist seems to agree with the assessment that ChatGPT "knows chemistry"; indeed, several reply tweets assert quite the opposite. ChatGPT just sounds like it knows advanced chemistry to people who don't, because ChatGPT knows what chemists "sound like" in text and is able to replicate that.
 
If you need an example of how good ChatGPT ISN'T at even basic chemistry, we have:

What would have to be done to an atom of oxygen for it to become an atom of silicon?

To convert an atom of oxygen (O) into an atom of silicon (Si), oxygen would need to gain four additional electrons and lose two electrons.

This process is called "reduction" and is typically achieved by reacting oxygen with a reducing agent that can donate four electrons. One such reducing agent is metallic silicon itself, which can react with oxygen to form silicon dioxide (SiO2).

In the reaction, two atoms of oxygen combine with one atom of silicon to form SiO2:

2 Si + O2 → 2 SiO2

Each oxygen atom gains two electrons from the silicon atoms, giving it a total of eight electrons in its outermost shell, which is the same as the electron configuration of a silicon atom. The silicon atoms, in turn, lose four electrons to become silicon ions with a +4 charge, which have the same electron configuration as a neutral silicon atom.

Therefore, the process of converting an atom of oxygen into an atom of silicon involves a chemical reaction rather than a nuclear reaction.

This is...bad.
 
This is...bad.

They appear to have retrained it on this one, as even legacy 3.5 mode now gives:
For an oxygen atom to become a silicon atom, it would have to undergo a nuclear reaction in which its atomic nucleus is transformed. This process is called nuclear transmutation and can be achieved through a variety of means, such as bombardment by high-energy particles, exposure to high temperatures and pressures, or the fusion of lighter elements.

In particular, to transmute an oxygen nucleus into a silicon nucleus, a neutron would need to be added to the oxygen nucleus, which would then undergo beta decay and become a silicon nucleus. This process typically requires high energy and specialized equipment, such as a particle accelerator or a nuclear reactor.

It's important to note that transmutation of elements is not a common occurrence in everyday life, and should be approached with caution due to the potential dangers and radioactivity involved.

I'm thinking OpenAI has people watching this thread...
 
That's certainly better than "the very first sentence is already wrong" levels of wrong, but it's still just wrong. Adding a neutron to an oxygen nucleus would NOT transmute it into a silicon nucleus. If you add a single neutron to an oxygen nucleus and that neutron beta-decays into a proton, you've increased the oxygen atom's atomic number (8, originally) by one - i.e., you'll get fluorine.
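
Written out in standard notation (and granting, for the sake of the bot's scenario, that the beta decay happens at all - $^{17}\mathrm{O}$ is in fact stable):

$${}^{16}_{8}\mathrm{O} + {}^{1}_{0}n \;\rightarrow\; {}^{17}_{8}\mathrm{O} \;\xrightarrow{\beta^-}\; {}^{17}_{9}\mathrm{F} + e^- + \bar{\nu}_e$$

Silicon is $Z = 14$; even after that decay you'd still be five protons short.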

I don't think it's an improvement they've made over the last couple of hours. I think it's a case of, because the machine doesn't actually "know" enough chemistry to decide how likely any of the multiple responses it's generated is to be more correct than the others, it just randomly chooses one to output, and that can be different for different users.
 
GPT-4 gives this:
To transform an atom of oxygen into an atom of silicon, you would need to change the number of protons in the nucleus. Oxygen has 8 protons, while silicon has 14. So, you would need to add 6 protons to the oxygen atom's nucleus to turn it into silicon.

However, doing this is not feasible by any practical means currently available to us. Nuclear reactions, such as those occurring in stars or nuclear reactors, can change the atomic composition of elements, but these processes are extremely complex and involve the interaction of many particles under high energy and pressure conditions.

Additionally, you would also need to account for the changes in the number of neutrons and electrons. Silicon has 14 electrons and typically 14 neutrons in its most abundant isotope (silicon-28), whereas oxygen has 8 electrons and 8 neutrons in its most abundant isotope (oxygen-16). To make an oxygen atom into a silicon atom, you would need to add 6 neutrons and 6 electrons as well.

In summary, while it is theoretically possible to change an atom of oxygen into an atom of silicon by altering its subatomic particles, such a transformation is beyond our current technological capabilities and understanding.
 
Giving it a chance for a rebuttal:

How would you respond to this post by a reader? "I don't think it's an improvement they've made over the last couple of hours. I think it's a case of, because the machine doesn't actually "know" enough chemistry to decide how likely any of the multiple responses it's generated is to be more correct than the others, it just randomly chooses one to output, and that can be different for different users."

Thank you for sharing your thoughts on the matter. It is important to clarify that, as an AI language model, my understanding of chemistry or any other subject is based on the vast amount of text data that I have been trained on. While I strive to provide accurate and relevant information, there might be instances where the generated response may not be as precise or accurate as desired.

It's true that the model can sometimes generate multiple plausible responses, and the selection of a specific response depends on various factors, including how the input question is framed and the underlying algorithms used to pick the most suitable answer. While I don't "randomly" choose answers, there may be some variation in the responses generated for different users or even the same user asking a similar question at different times.


As an AI language model, I am constantly evolving and improving based on feedback from users like you. Your comments are valuable in helping me refine my understanding and improve the quality of my responses. If you have any concerns or suggestions for improvement, please feel free to share them.
 
It's utterly false. Nobody who is a chemist seems to agree with the assessment that ChatGPT "knows chemistry"; indeed, several reply tweets assert quite the opposite. ChatGPT just sounds like it knows advanced chemistry to people who don't, because ChatGPT knows what chemists "sound like" in text and is able to replicate that.

For me this is one of the major points of concern with the current chatbots: as humans we are trained to listen to people who speak with "authority", and most of us use "well spoken" as a heuristic for expertise, which I think means we will be less sceptical of their results than we should be.
 
For me this is one of the major points of concern with the current chatbots: as humans we are trained to listen to people who speak with "authority", and most of us use "well spoken" as a heuristic for expertise, which I think means we will be less sceptical of their results than we should be.

Skepticism is always wise, and should be equally applied to what authoritative well-spoken humans say. I'm finding myself often arguing with it where it takes the popular human side of an issue, because that's what the majority of its training data says, reason be damned.
 
How was silicon made?

ChatGPT said:
Silicon is believed to have been formed through nuclear fusion processes in the cores of massive stars. During the later stages of a star's life, when it has exhausted its hydrogen and helium fuel, it begins to fuse heavier elements such as carbon, oxygen, and neon.

As the fusion process continues, heavier and heavier elements are formed, ultimately leading to the production of silicon. Specifically, silicon is formed through a process known as alpha process nucleosynthesis, in which helium nuclei (alpha particles) are combined with lighter elements such as carbon, nitrogen, and oxygen to form heavier elements, including silicon.

Once formed, silicon can be released into space through stellar winds or during supernova explosions. Over time, this silicon-rich material can become incorporated into new stars and planets, including our own Solar System. In fact, silicon is the second most abundant element in the Earth's crust (after oxygen), and it is a key component of many minerals and rocks, including quartz, feldspar, and mica.
 
