ChatGPT

Another interpretation could be that being unable to experience gravity itself, the AI sees no reason to disbelieve what it has been told.

I think not. That would be the reaction of a sentient being. The AI is mainly a super-dictionary. As such it must have information about cats' inability to hover. I'm guessing that it has not been given the ability to react to lies.

Just made a test:



ME:

Can cats hover in the air?

ChatGPT:
No, cats cannot hover in the air like helicopters or drones. While cats are known for their agility and ability to jump to impressive heights, they are still bound by the laws of physics and gravity. Cats cannot sustain themselves in mid-air without some form of support or propulsion.

Cats are skilled jumpers and can leap to impressive heights by using their powerful leg muscles. They can also exhibit a sort of controlled falling or gliding motion if they jump from a higher point, like a tree branch, and spread their limbs to slow down their descent. However, this is not true hovering but rather a controlled falling with some degree of maneuverability.

True hovering in the air requires specialized mechanisms like wings or rotors, which cats do not possess naturally.


So ChatGPT knows cats cannot hover. I surmise that it has not been programmed to fault-check a question.

Hans
 
More fun with https://deepai.org/chat:

Me: Imagine a keyboard that has only a-z, 0-9, a space bar and an enter key, and a shift key to provide capital letters. Using shift and a number produces that number, so things like @, (, and ) aren’t available. Neither are [ ] { } ; : ' "; not even a comma or a period. Describe a programming language that can be programmed using this keyboard. You’ll need to be inventive if a variable needs to contain a character like “&” or “@”.

The first part of its long answer:

DeepAI ChatBot said:
Sure, let me create an example programming language that can be programmed using only the given keyboard constraints.

Introducing “AzonLang” - a programming language that uses only letters, digits, spaces, and newline characters to implement basic programming constructs.

We can start with basic data types - AzonLang supports integers, floating point numbers, and strings. We represent an integer using digits, and a floating-point number using digits and a dot character. A string is represented using double quotes, and can contain any character allowed by the keyboard (including spaces).

Code:
x = 42
y = 3.14
z = "Hello World"

Does anyone else see the obvious problems with this response? :D
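For anyone who wants to see it mechanically: here's a quick sketch of my own in Python that checks which characters in the bot's sample code fall outside the allowed keyboard set (letters, digits, space, enter):

```python
# Characters the hypothetical keyboard can actually produce:
# a-z, A-Z (via shift), 0-9, space, and enter (newline).
allowed = set(
    "abcdefghijklmnopqrstuvwxyz"
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "0123456789 \n"
)

# The bot's own sample "AzonLang" code, verbatim.
sample = 'x = 42\ny = 3.14\nz = "Hello World"'

# Every character the sample uses that the keyboard cannot type.
violations = sorted(set(sample) - allowed)
print(violations)  # ['"', '.', '=']
```

So the bot's very first example uses three characters (`=`, `.`, `"`) that the prompt explicitly ruled out.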
 
More fun with https://deepai.org/chat:

Me: Imagine a keyboard that has only a-z, 0-9, a space bar and an enter key, and a shift key to provide capital letters. Using shift and a number produces that number, so things like @, (, and ) aren’t available. Neither are [ ] { } ; : ' "; not even a comma or a period. Describe a programming language that can be programmed using this keyboard. You’ll need to be inventive if a variable needs to contain a character like “&” or “@”.

The first part of its long answer:



Does anyone else see the obvious problems with this response? :D

Years ago I had a customer call and say his mouse wasn't working. Before going over there I wanted to get him into the control panel to check the driver. The following conversation took place:

Me: Ok, right-click on the My Computer icon on your desktop.
Him: My mouse is not working.
Me: Headslap!

Similar problem. :)
 
So ChatGPT knows cats cannot hover. I surmise that it has not been programmed to fault-check a question.
Hans

GPT-3.5 falls for this, but not 4:

Me: This morning I awakened to see my cat sleeping in mid-air, hovering above my bed. Why did the cat choose this position?

GPT-4: It's not physically possible for a cat to sleep in mid-air or hover above a bed. The scenario you described may have been a dream, a visual illusion, or a misunderstanding of what you were observing. Without more information or evidence, it's not possible to provide a definitive explanation.
 
More fun with https://deepai.org/chat:

Me: Imagine a keyboard that has only a-z, 0-9, a space bar and an enter key, and a shift key to provide capital letters. Using shift and a number produces that number, so things like @, (, and ) aren’t available. Neither are [ ] { } ; : ' "; not even a comma or a period. Describe a programming language that can be programmed using this keyboard. You’ll need to be inventive if a variable needs to contain a character like “&” or “@”.

The first part of its long answer:



Does anyone else see the obvious problems with this response? :D

I tried simplifying your instructions:

Describe a programming language that only uses alphanumeric characters and spaces, and each command is separated by a space. For example:

a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9


Such a programming language would likely be very basic, with a limited set of commands and operations that can be performed. Some possible commands could include:

- SET: assigns a value to a variable
- ADD: adds two variables and stores the result in a third variable
- SUB: subtracts one variable from another and stores the result in a third variable
- MULT: multiplies two variables and stores the result in a third variable
- DIV: divides one variable by another and stores the result in a third variable
- PRINT: outputs a variable or value to the console

Example program:

SET a 5
SET b 7
ADD a b c
PRINT c

This program sets two variables to the values 5 and 7, adds them together, and stores the result in a third variable. It then prints the value of the third variable (which should be 12).
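The bot's command list is simple enough that it can be sketched as a tiny interpreter. This is my own sketch in Python, assuming the six commands listed above, integer arguments, and integer division for DIV:

```python
def run(program):
    """Minimal interpreter for the bot's space-separated toy language."""
    env, out = {}, []

    def val(tok):
        # A token is either a known variable name or an integer literal.
        return env[tok] if tok in env else int(tok)

    for line in program.strip().splitlines():
        op, *args = line.split()
        if op == "SET":
            env[args[0]] = val(args[1])
        elif op == "ADD":
            env[args[2]] = val(args[0]) + val(args[1])
        elif op == "SUB":
            env[args[2]] = val(args[0]) - val(args[1])
        elif op == "MULT":
            env[args[2]] = val(args[0]) * val(args[1])
        elif op == "DIV":
            env[args[2]] = val(args[0]) // val(args[1])  # integer division
        elif op == "PRINT":
            out.append(val(args[0]))
    return out

result = run("SET a 5\nSET b 7\nADD a b c\nPRINT c")
print(result)  # [12]
```

And indeed the bot's example program prints 12, as it claimed.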
 
Years ago I had a customer call and say his mouse wasn't working. Before going over there I wanted to get him into the control panel to check the driver. The following conversation took place:

Me: Ok, right-click on the My Computer icon on your desktop.
Him: My mouse is not working.
Me: Headslap!

Similar problem. :)

Is this a confession that you are a chatbox? Your job could be replaced by one. At least over the phone.
 
Is this a confession that you are a chatbox? Your job could be replaced by one. At least over the phone.

I like to think my brain farts are outnumbered by my occasional flashes of genius. Of course this may be because I am a chatbox. :)
 
I tried simplifying your instructions:

Describe a programming language that only uses alphanumeric characters and spaces, and each command is separated by a space. For example:

a b c d e f g h i j k l m n o p q r s t u v w x y z 0 1 2 3 4 5 6 7 8 9

You asked a much better question. I'll try it out on the chat bot I'm working with.

As if I didn't have enough other things to do, I've become fascinated with this language since I first thought about it. The sample language generated by your question resembles some assembly languages. The one I'm playing with is more complex, but if I started describing it in this thread it would be off topic. I may start a thread for it in the forum or start a project on GitHub.

ETA: The language generated from Darat's instructions included newlines, and he didn't include newline in his list of characters. Does that mean the answer is wrong? :)
 
ETA: The language generated from Darat's instructions included newlines, and he didn't include newline in his list of characters. Does that mean the answer is wrong? :)
That depends on whether the newlines are important to the language. If there is actually a space before each newline, then newlines can be ignored along with other whitespaces (i.e. tabs).
 
You asked a much better question. I'll try it out on the chat bot I'm working with.

As if I didn't have enough other things to do, I've become fascinated with this language since I first thought about it. The sample language generated by your question resembles some assembly languages. The one I'm playing with is more complex, but if I started describing it in this thread it would be off topic. I may start a thread for it in the forum or start a project on GitHub.

ETA: The language generated from Darat's instructions included newlines, and he didn't include newline in his list of characters. Does that mean the answer is wrong? :)

Yeah I thought it looked rather like good old assembly language back from the 80s.

The newlines - nah, that's just for us humans; it still parses without them:

SET a 5 SET b 7 ADD a b c PRINT c
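The reason it parses without newlines is that each command has a fixed number of arguments, so a flat token stream splits into statements unambiguously. A quick sketch of my own in Python:

```python
# Fixed argument count for each command in the toy language.
ARITY = {"SET": 2, "ADD": 3, "SUB": 3, "MULT": 3, "DIV": 3, "PRINT": 1}

def statements(tokens):
    """Group a flat token stream into statements using each command's arity."""
    i, stmts = 0, []
    while i < len(tokens):
        op = tokens[i]
        n = ARITY[op]  # how many arguments follow this command
        stmts.append(tokens[i:i + n + 1])
        i += n + 1
    return stmts

parsed = statements("SET a 5 SET b 7 ADD a b c PRINT c".split())
print(parsed)
# [['SET', 'a', '5'], ['SET', 'b', '7'], ['ADD', 'a', 'b', 'c'], ['PRINT', 'c']]
```

So whitespace of any kind - space or newline - is enough; the grammar itself carries the statement boundaries.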

ETA: Ninja'd by steenkh
 
I've just posted this in another thread thought I'd drop it here as well:

At the moment I'm quite confident that anyone here using substantial amounts of unedited AI-generated text will be spotted, simply because it will either be a sudden change in their style of posting or be obvious from the style of the text itself - so far the various AI generators have quite a distinctive style. I expect that will change over time.

Bing Chat (uses ChatGPT4) offers a "style" choice for its responses but even then there seems to be a distinct "AI style". I know it could be nothing more than my cognitive bias (probably confirmation bias) but it does seem to me that at the moment there is a distinct style to the various text AI systems that are meant to use regular English. (Seen a few that are meant to mimic styles such as say rap or something.)
 
The style and manner of AI writing generally seems a lot like an average high school essay, overexplanatory and with the distinct impression that the writer doesn't really care about what they are writing but has been forced to fill a few paragraphs.
 
The style and manner of AI writing generally seems a lot like an average high school essay, overexplanatory and with the distinct impression that the writer doesn't really care about what they are writing but has been forced to fill a few paragraphs.

Guess what: that's exactly what it's doing. We are still pretty far from AI 'caring' about what it writes.
 
I just got back from DEFCON, where I had the opportunity to attend a talk given by Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.

Key takeaways:

- Large Language Models like ChatGPT are not AI.

- All they do is predict the next word based on the preceding context. Language models have been around for a long time. The Large Language Model differs in the amount of context it can consider when predicting the next word.

- They do not, and cannot reason, nor can they emulate reasoning.

- Unfortunately, we have evolved to perceive as a reasoning being anything that speaks fluently.

- But the truth is that these things mostly just hallucinate.

- LLMs are great for:
-- Autocomplete
-- Speech to text (e.g., real-time closed-captioning)
-- Translation
-- Writing code - because code is a formal structured language with external tools for validation

- LLMs are terrible for: Generating large amounts of text. There is no formal validation for such output, and it imposes a high cognitive load on any human who tries to verify that the entire output is correct.

- The DOD would very much like to start using LLMs and similar technologies everywhere they can. However, current LLMs hallucinate far too much to be of any practical use to the DOD.

- Dr Martell believes that we as consumers (including himself and his employer) should demand LLMs with target accuracy rates for specific use cases, and concrete metrics that can be used to measure whether an LLM is meeting those targets.
-- Drafting a battle plan, that will be reviewed and revised by human experts? Maybe 20% hallucination is okay.
-- Executing a battle plan? The DOD needs 99.999% accuracy for something like that. Current LLMs are nowhere close.
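The "predict the next word based on the preceding context" point can be illustrated with a toy model of my own - a bigram model that conditions on just one preceding word, where a real LLM conditions on thousands of tokens, but the principle is the same:

```python
from collections import Counter, defaultdict

# Tiny training corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ran".split()

# Count, for each word, which words follow it and how often.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

def predict(word):
    # Return the most frequent continuation seen after `word`.
    return model[word].most_common(1)[0][0]

print(predict("the"))  # 'cat'
print(predict("cat"))  # 'sat'
```

Note there is no reasoning anywhere in this: the model will cheerfully continue "the cat" with "sat" whether or not any cat is sitting, which is the hallucination problem in miniature.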

---

My two cents: All of this seems pretty obvious to me. Hopefully the limitations of ChatGPT, and the inappropriateness of interjecting its output into conversations between humans, will be taken more seriously if the opinions come from Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.

---

ETA: DEFCON's policy is to make recordings of the talks freely available online. So keep an eye out in coming weeks for this year's videos to drop, if you're interested in seeing for yourself what Dr Martell has to say.

Over time, you just keep becoming a more and more interesting person.

I've been struggling with the fascination with "AI" for a bit. It's a learning algorithm, sure, and it will "learn"... but it doesn't have any actual capacity for reasoning. It has no ability to distinguish fact from fiction, reality from imagination. If it gets a lot of exposure to abstract information about unicorns, the AI is going to present unicorns as being actual real animals. It's not intelligent, it's not sapient, it's not sentient.

There's been a lot of talk within the health care and health insurance industries about employing AI to support diagnosis and care coordination pathways, to help with medical coding and adjudication. There was even some talk at the last actuarial conference I went to about incorporating AI into pricing and forecasting.

I think that's a terrible idea, and I can't see how anyone with a brain would support it. Both pricing and forecasting require the ability to extrapolate. We use past history as a starting point, we look at trends from the past... but we also have to incorporate expectations for future variances into it. And that's something that AI just can't do.
 
I just got back from DEFCON, where I had the opportunity to attend a talk given by Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.

Key takeaways:

- Large Language Models like ChatGPT are not AI.
- LLMs are great for:
.........snip......
-- Writing code - because code is a formal structured language with external tools for validation

I forgot to mention: Paul Anagnostopoulos (inactive member) on Facebook has been posting some interesting analysis of ChatGPT. I won't post the whole thing (rules) but: "It just predicted, incorrectly, the results based on its language model." For those who don't know, Paul does things like write PDP emulators in his own programming language, so he is not a lightweight.
 
That's pretty much a good summary of the consensus we've sort of come to in this thread. There is one bit I'd like to tease out:



Now in many ways I agree with all that, with the slight wrinkle that some of the AIs - but not actually ChatGPT - are doing things that we don't understand (yet?) in coming up with answers to what we ask them.

I - and Dr Martell - are talking about Large Language Models, which are (a) not AI, and (b) doing things we understand very well in coming up with the answers to what we ask them.

What AIs do you have in mind, that are doing this other thing?
 
Yes, but the whole point of this technology is to fake it, so that hardly matters.

And when you think on it, a lot of the areas where we are probably going to see it used are areas where we already don't believe in the "sincerity" of the communicator, i.e. customer service. How many of us, on seeing the stock "We are sorry for the inconvenience", have thought that "we" is truly sorry? :)
 