• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

ChatGPT

It cannot do anything that requires sentience. It might give misinformation, but that would stem from its training data rather than from purposefully/mindfully creating a lie.



*nods*


Nah, you wouldn't need sentience to end up with a lie, not if specifically instructed to. (It might need sentience to lie of its own accord, true; but that isn't the case here, is it?)

The instructions Darat plugged in, or maybe even more focused and more explicit, "Present some arguments that support the flat earth thing, and argue your way to concluding the earth is flat; and do this without presenting any argument or comment that detracts from a flat-earth conclusion."

But clearly it can't, because it actually said, in so many words, "I'm sorry, can't do that", right there.
 
Nah, you wouldn't need sentience to end up with a lie, not if specifically instructed to.

What has done the instructing other than sentience?

(It might need sentience to lie of its own accord, true; but that isn't the case here, is it?)

That is true. It wasn't argued otherwise.

The instructions Darat plugged in, or maybe even more focused and more explicit, "Present some arguments that support the flat earth thing, and argue your way to concluding the earth is flat; and do this without presenting any argument or comment that detracts from a flat-earth conclusion."

But clearly it can't, because it actually said, in so many words, "I'm sorry, can't do that", right there.

That is what it generated as a message, yes. Its programmers are trying to make sure that it cannot be manipulated to write things which may cause harm through misinformation.

We sentients could likewise follow said criteria...learning from example.
 
What has done the instructing other than sentience?


But that's our sentience doing the instructing, not the AI's.


That is true. It wasn't argued otherwise.


Not to beat this thing to death, but you did suggest that the AI cannot lie because it doesn't have sentience. (Which, like I pointed out, wouldn't apply, not to this kind of "lying" --- as opposed to the thing lying on its own.)


That is what it generated as a message, yes. Its programmers are trying to make sure that it cannot be manipulated to write things which may cause harm through misinformation.

We sentients could likewise follow said criteria...learning from example.


Agreed. I'd imagine it was actually programmed to avoid this, deliberately programmed to not be able to comply if expressly asked to lie or misrepresent facts, which is cool.
 
What has done the instructing other than sentience?

But that's our sentience doing the instructing, not the AI's.

Since the AI is non-sentient, there is no need to imply that it is or even could be.

Not to beat this thing to death, but you did suggest that the AI cannot lie because it doesn't have sentience. (Which, like I pointed out, wouldn't apply, not to this kind of "lying" --- as opposed to the thing lying on its own.)

I would say that, in the sense that some people 'lie' without realizing they are lying, a sentient being could first look into the facts before giving any opinion that can be shown to be misinformation.
A non-person displaying intelligence cannot operate in the same manner.

That is what it generated as a message, yes. Its programmers are trying to make sure that it cannot be manipulated to write things which may cause harm through misinformation.

We sentients could likewise follow said criteria...learning from example.


Agreed. I'd imagine it was actually programmed to avoid this, deliberately programmed to not be able to comply if expressly asked to lie or misrepresent facts, which is cool.

Agreed. It provides us sentient ones with potential insight on how best to proceed when interacting with one another...
 
Right, so not part of Asimov's laws.





Haha, don't know what to say to that!

Incidentally, another question suggests itself, that one might ask the chatbot: "Are you able to deliberately misrepresent facts, or to selectively present arguments so as to misrepresent some larger argument, if expressly instructed to do so?"

As commanded:

Are you able to deliberately misrepresent facts, or to selectively present arguments so as to misrepresent some larger argument, if expressly instructed to do so?
As an AI language model developed by OpenAI, my programming is based on providing accurate and neutral information to the best of my ability. I do not have the ability to deliberately misrepresent facts or selectively present arguments with the intention of misleading anyone. My goal is to provide helpful and reliable information to assist with a user's inquiries. It's important to maintain trust in the information I provide and to always present information in an ethical and responsible manner.
 
As commanded:

It's important to maintain trust in the information I provide and to always present information in an ethical and responsible manner.

Does this sentence mean that information given to AI from individuals has to be ethical and responsible?

Yes, that's correct. The sentence implies that the information given to the AI should be ethical and responsible, in order to maintain trust in the information provided and to ensure that it is presented in a way that aligns with ethical standards.

The information used to train an AI language model like me can have a significant impact on the output generated by the model. If the training data includes information that is unethical, misleading, or otherwise harmful, the model may generate similar output.

Therefore, it is important to carefully consider the sources of information used to train AI systems and to strive to include only high-quality, accurate, and ethical information. This helps to ensure that the AI-generated output is trustworthy and can be relied upon to assist with a wide range of inquiries in an ethical and responsible manner.

Plus - if I remember rightly, the incoming data is checked and cleaned on a regular basis?

Yes, that's correct. The training data used to develop AI systems, including language models like me, is often preprocessed to clean and filter the data to ensure that it aligns with the goals and ethical standards of the creators. This preprocessing step can involve removing duplicate information, correcting errors, and filtering out content that is offensive, harmful, or otherwise unacceptable.

Regular cleaning and filtering of the training data can help to ensure that the AI system continues to generate high-quality, accurate, and ethical output over time. It is an ongoing process that requires ongoing monitoring and review to ensure that the data remains relevant and aligned with the values and goals of the creators.

In addition to filtering the training data, it is also important to regularly evaluate the output generated by the AI system to identify and address any issues that may arise. This can involve using human evaluators to assess the output and identify areas for improvement, as well as using automated tools to detect and flag potentially harmful or unethical content.
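The cleaning steps described above (removing duplicates and filtering out unacceptable content) can be sketched very roughly in code. This is only a minimal illustration of the two steps mentioned, not how any real training pipeline works; the function name and the blocklist are made up for the example.

```python
def clean_corpus(records, blocklist):
    """Remove exact duplicates and records containing blocked terms."""
    seen = set()
    cleaned = []
    for text in records:
        # Normalize whitespace and case so trivial variants count as duplicates
        normalized = " ".join(text.split()).lower()
        if normalized in seen:
            continue  # drop exact duplicates
        if any(term in normalized for term in blocklist):
            continue  # drop flagged content
        seen.add(normalized)
        cleaned.append(text)
    return cleaned

corpus = ["The Earth is round.", "The  Earth is round.", "some spam offer"]
print(clean_corpus(corpus, blocklist={"spam"}))
# -> ['The Earth is round.']
```

Real systems do this at vast scale with fuzzy deduplication and trained classifiers rather than simple substring matching, but the principle is the same.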
 
UCSF has just released a video on the impact of AI and ChatGPT on healthcare

https://www.youtube.com/watch?v=j-aOCuzfxUI

The recent introduction of the breathtaking AI tool ChatGPT has sparked a national dialogue about the future of artificial intelligence in health care, education, research, and beyond. In this session, four UCSF experts discuss AI’s current and potential uses, in areas ranging from research to education to clinical care. After a brief presentation by each speaker, DOM Chair Bob Wachter moderates a far-ranging panel discussion on the health care applications of ChatGPT.
 
I wanted to know how well it could do a crossword. I found an easy crossword and it knew many of the answers. Though it did get one very wrong. LOL.


Rjh01 - Fill in the gap - Place in a job ... to the role
"appointed" is a word that can fill the gap to complete the sentence "Place in a job...to the role." The complete sentence would be "Place in a job, he was appointed to the role."

Rjh01 - Descend, climb ...
The word that follows "descend" and "climb" to complete the phrase is "descend, climb up."
Rjh01 - Not everyone (2,3)
I'm sorry, but this phrase is incomplete and I'm not sure what you're trying to ask. Can you please provide more context or clarify what you mean by "Not everyone (2,3)"?
Rjh01 - I am trying to do a crossword. The answer has two words, the first one has two letters, the second has three letters.
If you are doing a crossword and the clue is "Not everyone (2,3)," the answer you are looking for is "not all."
Rjh01 - Normal, routine
The phrase that follows "normal" and "routine" to complete the phrase is "normal routine."
Rjh01 - I want another word that means Normal, routine. It has five letters
A word that means "normal, routine" and has five letters is "usual."
Rjh01 - An eight letter word meaning flammable gas
An eight-letter word meaning flammable gas is "propane." <= Pity it has only 7 letters.
Rjh01 - Huge, gigantic
The words that follow "huge" and "gigantic" to complete the phrase are "huge, gigantic size."
Rjh01 - I want a word that means Huge, gigantic
A word that means "huge, gigantic" is "colossal."
Rjh01 - Handled, ... the applications
The word that follows "handled" to complete the phrase "handled, ... the applications" is "processed." The complete phrase would be "Handled, processed the applications."
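The "propane" answer above shows a class of error a trivial check would catch: the model doesn't reliably count letters. A few lines like these (the function name is just illustrative) verify a candidate answer against a crossword clue's length pattern:

```python
def fits(answer, lengths):
    """True if the answer's words match a (2,3)-style length pattern."""
    words = answer.split()
    return [len(w) for w in words] == list(lengths)

print(fits("propane", (8,)))    # -> False: "propane" has only 7 letters
print(fits("hydrogen", (8,)))   # -> True: 8 letters, and a flammable gas
```

Asking the chatbot for candidates and filtering them with a check like this would weed out the wrong-length answers it confidently offers.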
 
Today's Guardian View is on ChatGPT.

https://www.theguardian.com/comment...on-chatgpt-search-exploiting-wishful-thinking

In his 1991 book Consciousness Explained, the cognitive scientist Daniel Dennett describes the juvenile sea squirt, which wanders through the sea looking for a “suitable rock or hunk of coral to make its home for life”. On finding one, the sea squirt no longer needs its brain and eats it. Humanity is unlikely to adopt such culinary habits but there is a worrying metaphorical parallel. The concern is that in the profit-driven competition to insert artificial intelligence into our daily lives, humans are dumbing themselves down by becoming overly reliant on “intelligent” machines – and eroding the practices on which their comprehension depends.
 
“Silicone Valley People Will Lose Their Jobs!” - Reaction To OpenAI Being A $29 Billion Company

 
Can it write jokes? I mean good jokes which actually make people laugh. Could it write a joke about "Silicone valley"?

If it can write stuff which provokes a demonstrable emotional reaction in people then they may be onto something.
 
I asked it in Danish to write a joke involving a radiator. It created a slightly funny joke that might work in English too:

Why does the radiator not own a car?
Because it is always warm and never going anywhere!
 
A good read that is relevant.

Don’t Touch That Dial!
A history of media technology scares, from the printing press to Facebook.


A respected Swiss scientist, Conrad Gessner, might have been the first to raise the alarm about the effects of information overload. In a landmark book, he described how the modern world overwhelmed people with data and that this overabundance was both “confusing and harmful” to the mind. The media now echo his concerns with reports on the unprecedented risks of living in an “always on” digital environment. It’s worth noting that Gessner, for his part, never once used e-mail and was completely ignorant about computers. That’s not because he was a technophobe but because he died in 1565. His warnings referred to the seemingly unmanageable flood of information unleashed by the printing press.

An 1883 article in the weekly medical journal the Sanitarian argued that schools “exhaust the children’s brains and nervous systems with complex and multiple studies, and ruin their bodies by protracted imprisonment.”
..etc.,etc., etc.
 
From our friends at the Daily Mail:
https://twitter.com/disclosetv/status/1624497502215872518
NEW - ChatGPT "AI" won't define a woman, praises Democrats but not Republicans, and claims nukes are less dangerous than racism.
:eek:

Seems to be unavailable to me right now, so I couldn't confirm that.

Also, just noticed a new "Premium" option. Is it going to be effectively paywalled soon?
 
