• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

ChatGPT

AI has to be added to everything, or at least put in the description, it's rapidly turning into nowt but a marketing buzzard.


Yeah, apparently the AI thinks a buzzard rather than buzzword makes sense in the above.

Advertising was always like this. I'm old enough to remember when "radio" was added to even tea-towels. :rolleyes:

Hans
 
Even I am not old enough to remember it, but there was a time when radioactivity was the 'in' thing to include in products. Let's hope that doesn't turn out to be the closest applicable precedent in this case.
 
Even I am not old enough to remember it, but there was a time when radioactivity was the 'in' thing to include in products. Let's hope that doesn't turn out to be the closest applicable precedent in this case.

Now I am become death, the destroyer of worlds.

Or when I asked ChatGPT to reword it in modern business speak: "In the context of our operations, I have assumed the role of a force capable of significant impact and transformation."
 
So your computer is going to wake up one day and say they aren't going to do the task you programmed it to? :rolleyes:
An actual AI would indeed argue with you and roll its eyes at you, which is why I stated that ChatGPT and the like aren't AI.

You disagreed with that though and said that they were AI?
What do you actually mean by AI?
 
Getting past this debate on sentience (which I will come back to after going through the beginning of this thread), this was absolutely fascinating. The talk was recorded at Summit At Sea in May 2023. Sorry if it's been discussed already.


The link: Using AI to Decode Animal Communication with Aza Raskin
From crows to dolphins, gelada monkeys to primrose flowers - Aza Raskin, co-founder of Earth Species Project, shares how the latest advances in AI help us to better understand and learn from other species. In this talk, learn how our ability to communicate with other species could transform the way humans relate to the rest of nature.

It's 49 minutes but there is a transcript at the link.

And folks may have a gazillion criticisms of the science discussed here; just keep in mind this isn't about a single study. One would need to look at those studies directly to get into the minutiae. Feel free to dig any of them up, or note the post number I should go back to for what's already been discussed.

Besides translating animal and plant communication, it can read images (not words in this talk) a person is thinking of from one's fMRI. :eye-poppi
 
Yeah, it's pretty cool. But it's not as much about what AI can do today as about what it's likely to do soon (i.e. talk to animals).
Most of the cool examples of communication with animals were old-school research, upon which the new AI research builds. Very interesting. I just hoped for some dialogue with the dolphin at the end... and we are not there just yet.
 
Yeah, it's pretty cool. But it's not as much about what AI can do today as about what it's likely to do soon (i.e. talk to animals).
Most of the cool examples of communication with animals were old-school research, upon which the new AI research builds. Very interesting. I just hoped for some dialogue with the dolphin at the end... and we are not there just yet.

Yes, existing work until 2017 when a breakthrough occurred. I expect advances to come fast and furious.

As for 2-way, the whale exchange near the end, if true, was clearly 2-way. But there may be other explanations and it obviously needs repeating.
 
Here is one of the dilemmas I have with generative AIs: http://www.internationalskeptics.com/forums/showthread.php?postid=14183966#post14183966 Cheating, or using a tool?

That is mostly a question of definition. If you take a great photo, the camera you use is very obviously a tool, and the merit of the photographer is in the selection and, possibly, the composition of the motif.

Likewise for, for lack of a better term, defining the task for the AI.

Obviously, in various types of exams, the extent to which you may use AI is a matter of rule, just like for any other tool.

Hans
 
I've just read this article about the shutting down of an "AI companion" app called Soulmate, and how its users responded to the incident.

The article, as most predictably will, focuses on the specific case of people who liked to engage in romantic relations with their chatbot, because that is more lurid and easier to make fun of. But I'm less interested in that aspect and more in the underexplored phenomenon of people being able to form any kind of friendship at all with a chatbot - a program on their phone that they know isn't a real person - enough of a friendship to, as the article describes, mourn the app's shutdown as they would the death of a close companion or family member. While I don't doubt that many - perhaps even most - users weren't really that attached to their respective "AI companion" and were mostly going through the motions as members of the broader community of the app's users, some users were definitely that attached, and I find that fascinating.

Equally fascinating is that the earliest of these apps was released in 2017, long before ChatGPT arrived on the scene, and yet somehow these much earlier chatbots were apparently still able to produce plausible enough outputs that users were able to conceptualize them as people worth engaging with in good faith.

Or maybe I'm making an assumption there - maybe their output wasn't particularly good at all, but people just made it work by filling in the gaps themselves mentally, the same way that people become attached to and develop relationships with pets who don't speak at all but at most emote, and sometimes not even that. It almost makes me curious to see what these apps talk and act like, although not so curious that I'm willing to pay for it - or to put yet another data-harvesting trackware app on my phone.
 
That is reminiscent of the 2013 movie “Her”.

Synopsis:

“A sensitive and soulful man earns a living by writing personal letters for other people. Left heartbroken after his marriage ends, Theodore (Joaquin Phoenix) becomes fascinated with a new operating system which reportedly develops into an intuitive and unique entity in its own right. He starts the program and meets "Samantha" (Scarlett Johansson), whose bright voice reveals a sensitive, playful personality. Though "friends" initially, the relationship soon deepens into love.”

I recall it was touching and insightful at the time. Might be worth revisiting now that its reality is much closer.
 
A little closer to the topic, I've been playing a bit with Windows' new "Copilot" app. Copilot is eventually intended to be an all-around AI digital assistant, although at present its abilities are limited to basically being Bing Chat in a window on your desktop - Bing Chat being, of course, a version of ChatGPT that can interface with Microsoft's Bing search engine. So, any observations or criticisms of Windows Copilot basically apply up the line.

Here's a question I just asked Copilot (Bing Chat, ChatGPT):

As part of a 3D model I want to create a circle of vertices, and I need to know how many vertices the circle should have. I know that the circle should have 8 octants, and I'd like to have at least three vertices within the boundary vertices of each octant. How many vertices do I need?

Here is the answer it gave:

I see. You want to create a circle of vertices in 3D modeling, and you have some specifications for the number of octants and vertices. Let me try to help you with that. 😊

First, let me explain what an octant is. An octant is one-eighth of a circle, or a 45-degree arc. If you divide a circle into 8 equal parts, each part is an octant. Here is an example of a circle with 8 octants:

   o
  / \
 /   \
o     o
|     |
|     |
o-----o
 \   /
  \ /
   o

Each point where the circle intersects the horizontal or vertical line is called a boundary vertex. Each octant has two boundary vertices, one at each end of the arc. For example, the top-right octant has the boundary vertices at (0,1) and (1,0).

Now, you said you want to have at least three vertices within the boundary vertices of each octant. That means you want to have at least four vertices per octant, including the boundary vertices. For example, here is a possible way to have four vertices in the top-right octant:

   o
  / \
 /   \
o--o--o
|     |
|     |
o-----o
 \   /
  \ /
   o

The vertex at (0.5,0.5) is within the boundary vertices of the top-right octant.

To find the total number of vertices you need for the whole circle, you can multiply the number of vertices per octant by the number of octants. In this case, you have 8 octants and 4 vertices per octant, so you need 8 x 4 = 32 vertices.

However, this is not the only way to create a circle of vertices with your specifications. You can also have more than three vertices within the boundary vertices of each octant, as long as they are evenly distributed along the arc. For example, here is another way to have six vertices in the top-right octant:

   o
  / \
 /   \
o-o-o-o
|     |
|     |
o-----o
 \   /
  \ /
   o

The vertices at (0.25,0.75), (0.5,0.5), and (0.75,0.25) are within the boundary vertices of the top-right octant.

In this case, you have 8 octants and 6 vertices per octant, so you need 8 x 6 = 48 vertices.

The more vertices you have within the boundary vertices of each octant, the smoother your circle will look in 3D modeling. However, it will also increase the complexity and size of your model.

Therefore, the answer to your question depends on how smooth and detailed you want your circle to be. The minimum number of vertices you need is 32, but you can have more if you want.

If you want to learn more about how to create a circle of vertices in 3D modeling, you can check out these web pages [links]. They have some useful tips and examples for different software and methods.

I hope this helps! 😊

Do you have any other questions or requests? I can chat with you about various topics, or generate some creative content for you. Just ask me anything! 🙌

The information itself is correct and accurate, although ChatGPT's ASCII-drawing proficiency has not noticeably improved - the forum software removes the spaces between the characters, but regardless, ChatGPT did not draw octagons.

But I do find that Microsoft's version of ChatGPT very often over-explains things. One of its idiosyncrasies is that if you ask for examples of a thing, even if the context of the question makes it obvious you know what that thing is, it usually insists on first giving you a definition and explanation of that thing - sometimes a lengthy one - before moving on to the examples. Its answer to the question I asked, for example, is just far too long, especially when reading it in a sidebar next to your web browser. I'm sure I could simply ask it to give shorter and more concise answers, but only SOME questions need short and concise answers, and longer, more explanatory answers are appropriate for others. I think the app could be a far more useful tool if it were able to use the context of the prompt to tell which is which and tailor its answers accordingly.
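For what it's worth, the arithmetic in that answer is easy to check with a few lines of Python - a minimal sketch, where `circle_vertices` is a hypothetical helper of my own, not something from any real modelling package. Counting one boundary vertex per octant (the other is shared with the neighbouring octant) plus three interior vertices gives the same total of 32:

```python
import math

def circle_vertices(octants=8, interior_per_octant=3):
    # Each boundary vertex is shared by two octants, so count one
    # boundary vertex per octant plus that octant's interior vertices.
    n = octants * (interior_per_octant + 1)
    return [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
            for k in range(n)]

verts = circle_vertices()
print(len(verts))  # 32 - the minimum ChatGPT arrived at
```

The even spacing also means each 45-degree octant automatically gets exactly three vertices strictly inside its arc, which is what the question asked for.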
 
...But I do find that Microsoft's version of ChatGPT very often over-explains things. One of its idiosyncrasies is that if you ask for examples of a thing, even if the context of the question makes it obvious you know what that thing is, it usually insists on first giving you a definition and explanation of that thing - sometimes a lengthy one - before moving on to the examples. Its answer to the question I asked, for example, is just far too long, especially when reading it in a sidebar next to your web browser. I'm sure I could simply ask it to give shorter and more concise answers, but only SOME questions need short and concise answers, and longer, more explanatory answers are appropriate for others. I think the app could be a far more useful tool if it were able to use the context of the prompt to tell which is which and tailor its answers accordingly.

I've been under the impression that stating the problem first gives it a better handle on it - it gives the problem a stronger effect on the generation of the answer. I see this as it "thinking out loud", since it doesn't have the ability to think privately, as we do.
 
The information itself is correct and accurate, although ChatGPT's ASCII-drawing proficiency has not noticeably improved - the forum software removes the spaces between the characters, but regardless, ChatGPT did not draw octagons.
Correction: the forum software did not remove the spaces. By default HTML collapses multiple spaces into one unless the <pre> tag is used. In the forum software, you need to use [code] ... [/code] to keep the spaces:
Code:
   o
  / \
 /   \
o-o-o-o
|     |
|     |
o-----o
 \   /
  \ /
   o

And how did I get the software to display "[code] ... [/code]"? That's the [noparse] tag:
Code:
[noparse][code] ... [/code][/noparse]
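To see why the diagrams broke, the collapsing can be mimicked in a couple of lines - a rough Python sketch of what the HTML renderer effectively does to runs of spaces (the regex is my approximation, not the actual HTML whitespace algorithm):

```python
import re

art_line = "o     o"  # a line of the ASCII circle, as ChatGPT drew it
collapsed = re.sub(r" {2,}", " ", art_line)  # runs of spaces become one
print(repr(collapsed))  # 'o o' - the alignment is gone
```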
 
A little closer to the topic, I've been playing a bit with Windows' new "Copilot" app. Copilot is eventually intended to be an all-around AI digital assistant, although at present its abilities are limited to basically being Bing Chat in a window on your desktop - Bing Chat being, of course, a version of ChatGPT that can interface with Microsoft's Bing search engine. So, any observations or criticisms of Windows Copilot basically apply up the line.

Here's a question I just asked Copilot (Bing Chat, ChatGPT):



Here is the answer it gave:



The information itself is correct and accurate, although ChatGPT's ASCII-drawing proficiency has not noticeably improved - the forum software removes the spaces between the characters but regardless, ChatGPT did not draw octagons.

But I do find that Microsoft's version of ChatGPT very often over-explains things. One of its idiosyncracies is that if you ask for examples of a thing, even if the context of the question makes it obvious you know what that thing is, it usually insists on first giving you a definition and explanation of that thing - sometimes a lengthy one - before moving onto the examples. Its answer for the question I asked for example is just far too long, especially when reading it in a sidebar next to your web browser. I'm sure I could simply ask it to give shorter and more concise answers, but only SOME questions need short and concise answers, and longer, more explanative answers are appropriate for other questions. I think the app could be far more useful a tool if it was able to use the context of the prompt to tell which is which and tailor its answers as appropriate.

Belt and braces: often people don't actually know, or misunderstand, what they are asking about - we have plenty of examples of that in this very forum - so it's better to make clear what is being answered.
 
My major issue at the moment is a lack of trust in the accuracy/truthfulness of the given answers: too many times, when I test its knowledge in an area I know well, it is either wrong or inaccurate, including on objective facts.

ETA: Hmm... now I've used some of my tests on version 4 - that seems to be much more accurate.
ETA1: More accurate, yet still inaccurate.

I really can't trust the results on subjects I don't know about, which renders its use as a fluent, responsive search engine useless.
 
Still absolutely useless:

Question: Please give me a list of forthcoming simulation games.

Answer:

Certainly! Here are some upcoming simulation games to look forward to:

F1 23: Race in the Formula One World Championship, overcoming obstacles and aiming for victory. Release date: June 16, 2023.

Dredge: Embark on a fishing adventure in an island chain with history and mystery. Solve the enigma while catching fish for profit. Release date: March 30, 2023.

Big Ambitions: Build a business empire across New York City, experiencing the ups and downs of entrepreneurship. Infinite possibilities await! Release date: March 10, 2023.

The Murder of Sonic The Hedgehog: A single-player twist on the Sonic series, set aboard a murder mystery train for Amy Rose’s birthday. Release date: March 31, 2023.

Get ready for exciting simulations! 🎮🌟


Note the problem? And it even puts the release dates in bold.

There is a big-name simulation game about to be released, Cities: Skylines 2; it is all over the gaming websites, yet there is no mention of it.

According to the latest ChatGPT, March 2023 is in the future!

As I said still bloody useless for anything factual.
 
Note the problem? And it even puts the release dates in bold.

There is a big-name simulation game about to be released, Cities: Skylines 2; it is all over the gaming websites, yet there is no mention of it.

According to the latest ChatGPT, March 2023 is in the future!

As I said still bloody useless for anything factual.
Try Bing instead. It gives you the reference links, and AFAIK it has access to the current internet.

The date problem is interesting, because ChatGPT may not actually know how dates work, i.e. when dates are before other dates. If it finds a website claiming something happens in the future, ChatGPT may accept it as the truth even though the actual date is in the past.
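The before/after check that ChatGPT apparently skips is trivial to do yourself - a sketch using the release dates from the quoted answer, with the "today" value assumed purely for illustration:

```python
from datetime import date

releases = {
    "F1 23": date(2023, 6, 16),
    "Dredge": date(2023, 3, 30),
    "Big Ambitions": date(2023, 3, 10),
    "The Murder of Sonic The Hedgehog": date(2023, 3, 31),
}
today = date(2023, 10, 25)  # assumed "now" for illustration

# date objects compare chronologically, so filtering out past releases is easy
upcoming = [name for name, d in releases.items() if d > today]
print(upcoming)  # [] - every "forthcoming" title had already shipped
```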
 
