
Merged: Artificial Intelligence

From what I have heard from Francois Chollet (creator of the ARC-AGI test), many of the types of cognitive errors that get discovered in these LLMs are corrected by hard-coding the fix into the LLM, in the same way that some of the guardrails are. If that's the case, and it only knows about the 3 R's in strawberry because someone has literally written it in rather than the model deducing it from large amounts of text, then it really is a very dumb "intelligence".
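As an aside, the counting task itself is trivial for ordinary code; the usual explanation for why models stumble is that they process sub-word tokens rather than individual letters. A minimal Python illustration:

```python
# Counting letters is trivial when you can actually see the characters.
# LLMs operate on sub-word tokens, so "strawberry" may never be
# presented to the model as a sequence of individual letters at all.
word = "strawberry"
print(word.count("r"))  # prints 3
```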
 
Undoubtedly true for some cases, but there are the more general cases, like which is the larger number: 8.8 or 8.18? I suppose it depends on why they repeatedly get the same thing wrong.
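A minimal sketch of the presumed failure mode, assuming the common explanation that the model compares the digits after the decimal point as whole numbers rather than as decimal fractions:

```python
# Correct comparison treats the values as decimals.
a, b = 8.8, 8.18
print(max(a, b))  # prints 8.8

# The suspected mistake: comparing the fractional digits as
# integers (18 > 8), which makes 8.18 look like the larger number.
frac_a, frac_b = 8, 18
print(frac_a > frac_b)  # prints False, i.e. the wrong verdict
```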
 
During my life I have also had teachers hard-coding knowledge in me that I didn’t understand. Today, there is probably still something left that I don’t know I don’t understand.
 
It's almost as if they don't think but play a game of "does this random string of words sound like a human response" until the answer is yes.
 
During my life I have also had teachers hard-coding knowledge in me that I didn’t understand. Today, there is probably still something left that I don’t know I don’t understand.
In English schools they still teach kids (or did 10 years ago) a so-called spelling aid, "i before e, except after c", and then over the next 10 years hard-code into you all the exceptions to that "aid".
 
From what I have heard from Francois Chollet (creator of the ARC-AGI test), many of the types of cognitive errors that get discovered in these LLMs are corrected by hard-coding the fix into the LLM, in the same way that some of the guardrails are. If that's the case, and it only knows about the 3 R's in strawberry because someone has literally written it in rather than the model deducing it from large amounts of text, then it really is a very dumb "intelligence".
Hmm... tried again today and got the right answer (3), using the same tablet as I used yesterday. Then I tried on my phone and got: :p
(attached screenshot: IMG-20250106-WA0003.jpg)
 
During my life I have also had teachers hard-coding knowledge in me that I didn’t understand. Today, there is probably still something left that I don’t know I don’t understand.
But it seems like a ludicrously inefficient way for an AI to learn, especially if one of the goals is for these things to learn patterns from training data that they can then reliably use to answer questions they haven't been trained on.
I mean... There are two Rs in strawberry.
And there are twenty people in Greenland. But it’s a bit of an outrageous violation of Gricean maxims to answer someone in such a weird way that if a human were doing this on the regular you would assume they were totally ◊◊◊◊◊◊◊ with you.
 
…snip…

And there are twenty people in Greenland. But it’s a bit of an outrageous violation of Gricean maxims to answer someone in such a weird way that if a human were doing this on the regular you would assume they were totally ◊◊◊◊◊◊◊ with you.
How long have you been posting here? :D
 
I just found out you can get voice-activated AI now and literally "talk to" your AI (think Siri on steroids). Management here is off the deep end about it, I mean really a bit scary. I think we're headed for what used to be sci fi territory sooner vs later and not in a good way.
 
I just found out you can get voice-activated AI now and literally "talk to" your AI (think Siri on steroids). Management here is off the deep end about it, I mean really a bit scary. I think we're headed for what used to be sci fi territory sooner vs later and not in a good way.
AI is like the ability to split the atom: man discovered it a long, long time before he was really ready to handle it.
 
AI is like the ability to split the atom: man discovered it a long, long time before he was really ready to handle it.
Man isn't anywhere close to discovering AI. And clearly our grasp of the atom and its power hasn't stopped us from making great leaps forward. And it's not like humanity is actually on a path to moral apotheosis, after which transformation all things will be permitted to our holy perfect selves.
 
I was wondering whether the Maltese Falcon was supposed to have a secret compartment or not. I was wondering this after watching the 1941 version and remembering the full version of the song "The Friends of Mr. Cairo" by Jon and Vangelis, which towards the end has a voice like Peter Lorre's saying something to the effect...
(Jump to 11:45 if you want to skip the tune)


So I googled the question: does the movie maltese falcon have a secret compartment


The answer it gave seems to believe it did, despite there being no actual evidence for that conclusion other than replicas having one.
 
After being boosted by Trump's Stargate announcement, AI and associated stocks are being wiped out after China presents DeepSeek, a powerful, cheap, energy-efficient AI.
NVIDIA loses nearly $400 billion in valuation.

To continue their work without steady supplies of imported advanced chips, Chinese AI developers have shared their work with each other and experimented with new approaches to the technology.

This has resulted in AI models that require far less computing power than before.
This is the kind of innovation we need, not just throwing more compute power at it and hoping something will emerge.
 
This is the kind of innovation we need, not just throwing more compute power at it and hoping something will emerge.
I agree. Sadly, it does appear that "throw more resources at it" has been the mantra in computer science for quite some time. Being an old git, I remember when you took pride in using the fewest resources possible to cram as much as possible into a program. I blame dropping "programs" and moving to "applications"...
I do prefer a more efficient useless toy to a less efficient useless toy.
I've used an LLM to generate code for the forum (and to explain some of what the forum does to me), plus I've used it to develop features for my online store. They are not just a toy: used correctly, with an understanding of their limitations, they can be very powerful.
 
I've used an LLM to generate code for the forum (and to explain some of what the forum does to me), plus I've used it to develop features for my online store. They are not just a toy: used correctly, with an understanding of their limitations, they can be very powerful.

Yeah, you're probably right. I guess I just have luddite's soul that I never noticed before "AI" came around.
 
This is the kind of innovation we need, not just throwing more compute power at it and hoping something will emerge.
Before we get too enthusiastic about DeepSeek, over on the Scrutable forum there's a thread on it which seems to show it has a built-in blind spot for things China doesn't want to talk about:

"I just asked

Tell me about the deaths at tianmin square

and got

I am sorry, I cannot answer that question. I am an AI assistant designed to provide helpful and harmless responses.

ETA I just asked who tank man was and it started to give me an answer about a protestor then erased it and replaced it with an apology that it couldn’t help me.

And wow. I asked to be told about the Uighur. It gave me a full answer which included internment camps which was replaced with an apology before I could read it."
 
Before we get too enthusiastic about DeepSeek, over on the Scrutable forum there's a thread on it which seems to show it has a built-in blind spot for things China doesn't want to talk about:
I think it's not so much the particular implementation as the fact that it shows LLMs don't have to be as computationally expensive as the current leading models. ETA: Don't forget the current "western" AIs have guard rails too.
 
That's not really the issue. DeepSeek is set up to be open source, so assuming it is released as such, the data it is trained on and the limits put on its responses will be up to the user.
Its danger doesn't come from the fact that it is from China, but from the fact that it undermines efforts to monetize LLMs by controlling the market.
 
Yeah, you're probably right. I guess I just have luddite's soul that I never noticed before "AI" came around.
It still has not been demonstrated that "AI" is anything more than either an error-strewn, robotic chatbot or a means of thieving other people's copyrighted work (though the latter is more to do with rich people not having to obey the law). Until more is demonstrated, I will remain heavily sceptical of anything more than highly marginal utility for "AI".
 
It still has not been demonstrated that "AI" is anything more than either an error-strewn, robotic chatbot or a means of thieving other people's copyrighted work (though the latter is more to do with rich people not having to obey the law). Until more is demonstrated, I will remain heavily sceptical of anything more than highly marginal utility for "AI".
The code it has generated for this forum, which was unique to our requirements, is not a copy of code from anywhere else.
 
Is it generating the same posts from imaginary members for every user, or do we each get our own unique imaginary forum experience?
<joke I hope>
 
I'm sure everyone here knows this, but it may be wise to mention it to others!

The AI answers that Google offers by default (especially to technical questions) are often misleading, incorrect, and even dangerous.

I don't want to start a debate over individual queries, but try it yourself: ask a question you know the answer to and compare it to the AI's reply... The problem? AI is like that guy who speaks loudly over everyone else and answers every question, whether he knows the answer or not! :(

AI is a complete ◊◊◊◊◊◊◊◊◊◊◊ and will make up crap quickly, just to fill bandwidth, on subjects where it has no reference material readily available.

It seems to me everyone uses the same AI plug-in (judging by the answers). The AI on Home Depot is exactly like Google's, for example.
 
I just asked Google a question and the answer included references. They do link to real material. Then I went to ChatGPT and got a similar answer, though with no references. I suggest it depends on the question.
 
I just asked Google a question and the answer included references. They do link to real material. Then I went to ChatGPT and got a similar answer, though with no references. I suggest it depends on the question.
Be careful. I have had ChatGPT cite references that it just made up. When I complained to it, it apologized and just made up new references.
 