Follow along with the video below to see how to install our site as a web app on your home screen.
Note: This feature may not be available in some browsers.
Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.
That's pretty old. It comes from the fact large language models are not primarily designed to be chat bots. Main way to modify their behavior is adding things to the prompt. There are no levels of access, no user rights. Whatever the administrator can do, user can do (and undo) too.
Not sure it can be completely avoided, as inner working of AIs, and LLMs especially is impossible to understand. Language itself is not exact and full logical paradoxes. We can never be certain how model would react.
I've seen that Bing Chat, which uses the ChatGPT model for instance, can sometimes start to produce an answer that violates the rules, but only gets a couple of lines into it before the entire response disappears and is replaced by a message saying that a valid response can't be given. I don't have any technical insight, but I'm inferring that Bing Chat's responses are being proofed in realtime by a second AI program that erases messages when it detects rule violations.
Well yes in theory that is correct but in reality people do not have an infinite amount of time or resources to do so. This of course includes any form of AI.
And that's not even bringing the problem of saboteur being smarter than the designer.
It is literally dangerous for "good" journalists as much as it is for "crap" journalists because the business decision to replace a journalist with AI has nothing to do with the quality of the journalist's work
Really? If a company has to fire people, with everything else being equal, a great journalist is as likely as a bad one to be fired? Am I understanding you correctly?
Well AI will be used in applications where it would be impossible for humans to watch over them. For example AIs are easy to clone. So they are great for tasks, where you would need 100.000 people. You can have 100.000 AIs. And over time, AIs will be used for tasks where people are too slow, or where people don't have enough information .. like nobody really knows all US laws by hearth .. AI can. We especially want to use AIs where it would be hard for people to watch over them.
Well yes in theory that is correct but in reality people do not have an infinite amount of time or resources to do so. This of course includes any form of AI.
Doesn't matter how smart you are if you are restricted by the laws of physics like everyone else. You can't outthink the laws of physics.
The question is how far from the possible processing power our brains are. They are pretty good .. ompared to other brains which were created by evolution. Maybe we already sit at some theoretical limit, and it's simply impossible to be faster. But maybe we are orders below the limit. For example signal propagation speed in brain is painfully slow. If nothing else, AIs will be faster, and likely greatly faster.
Jesus has been portrayed in many different ways: from a prophet who alerts his audience to the world's imminent end to a philosopher who reflects on the nature of life.
But no one has called Jesus an internet guru — that is, until now.
In his latest role as an "AI Jesus", Jesus stands, rather awkwardly, as a white man, dressed in a hooded brown-and-white robe, available 24/7 to answer any and all questions on his Twitch channel, "ask_jesus."
Questions posed to this chatbot Jesus can range from the serious — such as asking him about life's meaning — to requesting a good joke.
While many of these individual questions may be interesting in their own right, as a scholar of early Christianity and comparative religion, I argue that the very presentation of Jesus as "AI Jesus" reveals a fascinating refashioning of this spiritual figure for our AI era...
Really? If a company has to fire people, with everything else being equal, a great journalist is as likely as a bad one to be fired? Am I understanding you correctly?
And I'm surprised to see you attempting to frame it as a strange or illogical proposition as if this is something that isn't already fairly common - as if companies firing their best or highest-skilled employees in favor of hiring new, less-skilled and less-experienced labor at a lower cost isn't something that as a matter of simple fact happens all the time and has been happening for decades.
ETA just for a single example of this, at the end of this past June National Geographic announced that it had eliminated every single one of its staff writers and all but two editors, and from now on will rely exclusively on submitted articles from gig writers. The cuts weren't dictated by the magazine itself, they were ordered by parent company Disney, as part of a portfolio-wide paring in order to reduce costs. How good or experienced the writers were made absolutely no difference when the axe came down because it wasn't about "improving the product", it was about saving money.
Though Arcade22 really wasn't understanding you correctly then, since everything else is not equal between a good and bad journalist if their pay is different.
But you're right about them jettisoning expensive employees. I've seen that happen often enough.
I just got back from DEFCON, where I had the opportunity to attend a talk given by Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.
Key takeaways:
- Large Language Models like ChatGPT are not AI.
- All they do is predict the next word based on the preceding context. Language models have been around for a long time. The Large Language Model differs in the amount of context it can consider when predicting the next word.
- They do not, and cannot reason, nor can they emulate reasoning.
- Unfortunately, we have evolved to perceive as a reasoning being anything that speaks fluently.
- But the truth is that these things mostly just hallucinate.
- LLMs are great for:
-- Autocomplete
-- Speech to text (e.g., realtime close-captioning)
-- Translation
-- Writing code - because code is a formal structured language with external tools for validation
- LLMs are terrible for: Generating large amounts of text. There is no formal validation for such output, and it imposes a high cognitive load on any human who tries to verify that the entire output is correct.
- The DOD would very much like to start using LLMs and similar technologies everywhere they can. However, current LLMs hallucinate far too much to be of any practical use to the DOD.
- Dr Martell believes that we as consumers (including himself and his employer) should demand LLMs with target accuracy rates for specific use cases, and concrete metrics that can be used to measure whether an LLM is meeting those targets.
-- Drafting a battle plan, that will be reviewed and revised by human experts? Maybe 20% hallucination is okay.
-- Executing a battle plan? The DOD needs 99.999% accuracy for something like that. Current LLMs are nowhere close.
---
My two cents: All of this seems pretty obvious to me. Hopefully the limitations of ChatGPT, and the inappropriateness of interjecting its output into conversations between humans, will be taken more seriously if the opinions come from Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.
---
ETA: DEFCON's policy is to make recordings of the talks freely available online. So keep an eye out in coming weeks for this year's videos to drop, if you're interested in seeing for yourself what Dr Martell has to say.
post moved over from the charGPT and rule 4 discussion
I just got back from DEFCON, where I had the opportunity to attend a talk given by Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.
Key takeaways:
- Large Language Models like ChatGPT are not AI.
- All they do is predict the next word based on the preceding context. Language models have been around for a long time. The Large Language Model differs in the amount of context it can consider when predicting the next word.
- They do not, and cannot reason, nor can they emulate reasoning.
- Unfortunately, we have evolved to perceive as a reasoning being anything that speaks fluently.
- But the truth is that these things mostly just hallucinate.
- LLMs are great for:
-- Autocomplete
-- Speech to text (e.g., realtime close-captioning)
-- Translation
-- Writing code - because code is a formal structured language with external tools for validation
- LLMs are terrible for: Generating large amounts of text. There is no formal validation for such output, and it imposes a high cognitive load on any human who tries to verify that the entire output is correct.
- The DOD would very much like to start using LLMs and similar technologies everywhere they can. However, current LLMs hallucinate far too much to be of any practical use to the DOD.
- Dr Martell believes that we as consumers (including himself and his employer) should demand LLMs with target accuracy rates for specific use cases, and concrete metrics that can be used to measure whether an LLM is meeting those targets.
-- Drafting a battle plan, that will be reviewed and revised by human experts? Maybe 20% hallucination is okay.
-- Executing a battle plan? The DOD needs 99.999% accuracy for something like that. Current LLMs are nowhere close.
---
My two cents: All of this seems pretty obvious to me. Hopefully the limitations of ChatGPT, and the inappropriateness of interjecting its output into conversations between humans, will be taken more seriously if the opinions come from Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.
---
ETA: DEFCON's policy is to make recordings of the talks freely available online. So keep an eye out in coming weeks for this year's videos to drop, if you're interested in seeing for yourself what Dr Martell has to say.
I just got back from DEFCON, where I had the opportunity to attend a talk given by Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.
Key takeaways:
- Large Language Models like ChatGPT are not AI.
- All they do is predict the next word based on the preceding context. Language models have been around for a long time. The Large Language Model differs in the amount of context it can consider when predicting the next word.
- They do not, and cannot reason, nor can they emulate reasoning.
- Unfortunately, we have evolved to perceive as a reasoning being anything that speaks fluently.
- But the truth is that these things mostly just hallucinate.
- LLMs are great for:
-- Autocomplete
-- Speech to text (e.g., realtime close-captioning)
-- Translation
-- Writing code - because code is a formal structured language with external tools for validation
- LLMs are terrible for: Generating large amounts of text. There is no formal validation for such output, and it imposes a high cognitive load on any human who tries to verify that the entire output is correct.
- The DOD would very much like to start using LLMs and similar technologies everywhere they can. However, current LLMs hallucinate far too much to be of any practical use to the DOD.
- Dr Martell believes that we as consumers (including himself and his employer) should demand LLMs with target accuracy rates for specific use cases, and concrete metrics that can be used to measure whether an LLM is meeting those targets.
-- Drafting a battle plan, that will be reviewed and revised by human experts? Maybe 20% hallucination is okay.
-- Executing a battle plan? The DOD needs 99.999% accuracy for something like that. Current LLMs are nowhere close.
---
My two cents: All of this seems pretty obvious to me. Hopefully the limitations of ChatGPT, and the inappropriateness of interjecting its output into conversations between humans, will be taken more seriously if the opinions come from Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.
---
ETA: DEFCON's policy is to make recordings of the talks freely available online. So keep an eye out in coming weeks for this year's videos to drop, if you're interested in seeing for yourself what Dr Martell has to say.
I had a bit of fun with the chatbot at deepai.org (highlighting mine):
DeepAI chatbot said:
Me: This morning I awakened to see my cat sleeping in mid-air, hovering above my bed. Is this a good omen?
As an AI language model, I am not capable of interpreting omens or beliefs related to them. It’s possible that your cat was simply in a very comfortable position or experiencing a dream. From a scientific perspective, cats are known to be agile and flexible animals, and they can often sleep in unusual positions, including mid-air. It’s also important to consider that cats have a natural grace and balance that enables them to maintain their position in unusual ways. So it is not necessarily an unusual or extraordinary event. If you have concerns or want to learn more, I suggest consulting with your local veterinarian or animal expert who can provide more information.
That was completely hilarious, the portion you highlighted: "From a scientific perspective, cats are known to be agile and flexible animals, and they can often sleep in unusual positions, including mid-air." ...They've modeled AI on Deepak Chopra!
eta: This actually highlights, better than any amount of detailed analysis about what AI can or cannot do, that these "language model" chatbots, at any rate, should not be used for anything other than simply entertainment.
That is not a very reasonable question. So I improved the question.
Me: This morning I awakened to see my cat sleeping in mid-air, hovering above my bed. Why did the cat choose this position?
ChatGPT
Cats can sometimes exhibit unusual behavior, and seeing your cat apparently sleeping in mid-air is certainly a curious sight. However, it's important to note that cats cannot actually hover or float in mid-air. If you observed your cat in such a position, it might have been engaging in one of the following behaviors:
Dreaming or REM Sleep: Cats experience different sleep cycles, including REM (Rapid Eye Movement) sleep, which is associated with dreaming. During REM sleep, cats may twitch, move their paws, or even make vocalizations. It's possible that your cat was in a dream state and was moving its body while remaining partially asleep.
Adjusting Position: Cats are known for their flexibility and agility. Your cat may have been in the process of adjusting its sleeping position or stretching its muscles. This movement could have given the appearance of hovering or floating for a brief moment.
Optical Illusion: Sometimes, the way light and shadows interact in a room can create optical illusions. Depending on the lighting and your perspective, it might have seemed like your cat was floating, when in reality, it was just in an elevated position that made it appear that way.
Jumping Down: Cats are skilled jumpers, and they can leap from high places. If your cat was previously on a higher surface and jumped down while you were waking up, it might have appeared as if it was suspended in mid-air for a moment.
Playful Behavior: Cats are known for their playful nature. It's possible that your cat was engaging in some form of playful activity, such as pouncing or leaping, which gave the impression of mid-air hovering.
Visual Distortion: Sometimes, our eyes can play tricks on us, especially when we are just waking up or not fully awake. Your perception of your cat's position might have been momentarily distorted.
Remember that cats often do quirky things, and their behaviors can be influenced by a variety of factors including their mood, physical comfort, and environment. If your cat's behavior continues to be unusual or concerning, it's always a good idea to consult with a veterinarian to ensure your cat's well-being.
That is not a very reasonable question. So I improved the question.
Me: This morning I awakened to see my cat sleeping in mid-air, hovering above my bed. Why did the cat choose this position?
Both replies from AI are interesting. They are apparently either unable to, or programmed not to, consider that a human statement is simply false. A truely sentient being would refuse to belive the claim.
Both replies from AI are interesting. They are apparently either unable to, or programmed not to, consider that a human statement is simply false. A truely sentient being would refuse to belive the claim.
Both replies from AI are interesting. They are apparently either unable to, or programmed not to, consider that a human statement is simply false. A truely sentient being would refuse to belive the claim.
Hans
Hans - how long have you been on this forum? We've had many people make equally ******* claims here and even dafter ones, apparently completely serious and sincere.
The first result posted is obviously wrong, cats don't float/hover - but the second one is pretty much what we would have posted to someone claiming they saw their cat floating. Granted our replies would probably have had a bit more sarcasm and snarky comments.
I just got back from DEFCON, where I had the opportunity to attend a talk given by Dr Craig Martell, Chief Digital and Artificial Intelligence Officer for the US Department of Defense.
Key takeaways:
- Large Language Models like ChatGPT are not AI.
- All they do is predict the next word based on the preceding context. Language models have been around for a long time. The Large Language Model differs in the amount of context it can consider when predicting the next word.
- They do not, and cannot reason, nor can they emulate reasoning.
...snip..
Now in many ways I agree with all that, with the slight wrinkle of some of the AIs* - but not actually ChatGPT - are doing things that we don't understand (yet?) in coming up with answers to what we ask them.
However how do we know what we have long considered to be our "reasoning" or part of our reasoning is not as we thought it was and that we are doing something similar to what these large model AIs are doing?
Some of our members here - and actually theprestige is one of them - are very fluent and can create great prose, and I'm not trying to denigrate them but could they have better "large language models" than some of us? I am sure they didn't start with being able to produce such great prose, they learned or should I say were trained with lots of input and feedback on what makes good prose.
We obviously have more aspects to our "reasoning" than just the prose we produce so the AIs are not replicating exactly what we are doing but perhaps they are giving us some new avenues to explore to try and better understand ourselves?
*I still think AI is an appropriate general descriptor for the sake of these discussions
This site uses cookies to help personalise content, tailor your experience and to keep you logged in if you register.
By continuing to use this site, you are consenting to our use of cookies.