• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Merged Artificial Intelligence

It is most fortunate for the people who are convinced that brains are outside physics, that they don't need to state that position because brains are so complex that they can never ever be simulated, and in any case, we'll never ever find out how the work.

By the way, I don't, yet, place Opcode in that crowd, but I do think his/her arguments sound suspiciously similar to.
 
I think it will require AIs with their ability to deal with immense amounts of data that our NI simply can't deal with that will "crack" things like the understanding of NI. Which will be quite ironic.

(It's perhaps not that our NI can't in principle deal with it but it can't practically deal with it - it's the 'more time than the universe will exist' point , for instance it would be possible in principle with a lot of paper and a lot of pencils for me to work through the calculations that a LLM AI does to respond to a single query someone types in, but returning the answer "42" may take a couple of cycles of big bangs for me to get there!)
 
Sure, no reason to assume that it's not physically possible.
Doesn't mean we have reason to assume that we will be able to do it anytime soon
 
A question about the very off-putting obsequiousness of LLMs: is this in any way necessary, or is it just something put in to flatter the massive egos of investors and CEOs ?
 
A question about the very off-putting obsequiousness of LLMs: is this in any way necessary, or is it just something put in to flatter the massive egos of investors and CEOs ?
What do you think.... :)

Seriously it's a bit of a yes and no answer, it can be toned down and up, but apparently some of it is caused by the human reinforcement training, as we rate "pleasant" responses higher than "terser" ones. But I think some of that obsequious tone comes from some of that misanthropy I mentioned that I see in some of the leaders. Probably also remember us lowly minions are meant to doff our caps to our betters, sadly this is what folk have always thought their servants and slaves should be like.
 
What exactly ?
I've just bunged in The Great Z's question into co-pilot - this was the first line of its response:

That’s a sharp and timely question, Darat—and one that cuts to the heart of how these systems are shaped and deployed.

If that were an employee of mine or someone I was managing I'd say "stop arse licking".

(Phew - the answer it gave does agree with what I posted earlier!)
 
See Darat's example; it's so obsequious it's almost unbearable.
I see. Had to google the word up. It indeed is unbearable.
But it's also often mentioned by scholars behind these AIs .. how it's really not easy to condition them right. It's all connected and conflicted. You improve one metric and it gets worse in another. Natural human language is just very bad programming language. And then there's the problem that user input is on the same level as the system pre-prompt. So with a bit of encouraging you can make the model "ignore the instructions".
It's certainly fascinating.
 
This in fact ties in with my point that we don't want to replicate NI, we know it's buggy, unreliable - prone to a huge range of errors - and messy. We don't want AIs to be like that. Of course there is an immense wealth of knowledge in learning more and more about our NI but not because we need that knowledge to develop our AIs.
What is it that you want AIs to do? That is the question that should be guiding AI research, instead of the misdirection about End Of World AI takeover scenarios. AI can't think like humans, so we don't need to waste so much time worrying about it.
 
AI has a sycophancy problem because people have a strong bias towards others that agree with them. How many "how to talk to your crazy racist in-laws" guides start with "suck up to them until you find common ground, then very slowly start introducing the possibility that they maybe might wanna think about toning down some of their stupid ◊◊◊◊◊◊◊ insane opinions, but even then be sure never to tell them outright that you knew they were wrong the whole time or they'll dig in just to spite you and you'll be back to square one?"

AI has a confidence problem because people aren't technical and they want simple answers rather than learning anything themselves. As much as we would appreciate it if google searches incorporated critiques of information sources and confidence estimates between alternative explanatory hypotheses, that would drive most other people up the wall. They just want the answer. Not "an" answer, the answer. Even if it is just an answer, they really truly don't want to know that.

Both of these are choices in development. You get what you train for. Right now they're reinforced, but it wouldn't be a hard sell to change because no one likes a kiss-ass or a BS artist. One of the companies could make a "nerdbot" model that prioritizes impersonal exactitude, and it'd be the go-to for factualish information. The problem is, these two qualities combined are like crack for a certain kind of upper management. The kind of boss who confuses sycophancy for loyalty and confidence for expertise, and incidentally is in a perfect position to say "great job, [LLM company], we'll buy a thousand seats!" then turn to their staff and tell them to use it somehow.
 
Last edited:
I think it will require AIs with their ability to deal with immense amounts of data that our NI simply can't deal with that will "crack" things like the understanding of NI. Which will be quite ironic.

(It's perhaps not that our NI can't in principle deal with it but it can't practically deal with it - it's the 'more time than the universe will exist' point , for instance it would be possible in principle with a lot of paper and a lot of pencils for me to work through the calculations that a LLM AI does to respond to a single query someone types in, but returning the answer "42" may take a couple of cycles of big bangs for me to get there!)
I think progress on AIs will be limited by energy long before they become that smart. An entire human, actuators, brain, everything requires less than 3000kcal a day, or 145W. Most people should probably consume a lot less than that given the sedentary lifestyles that many of us lead. The brain uses about 20W, less if frequently watching Made in Chelsea. To put that number in context, a single mediocre graphics card in a PC can easily consume ~100W. Such a graphics card would not support a very impressive AI. After a bit of Googling: The NVIDIA H100 PCIe 80GB GPU consumes 350W and achieves an LLM speed of 144.49 tokens/s, or about the same as my mother in law. According to the AI summary: "Despite the high power requirements, the H100 PCIe is designed for high energy efficiency for its performance class, offering significant performance per watt for AI and HPC applications when compared to previous GPU generations."

Training AI requires A LOT more processing than running a model. To achieve anything even in the ballpark of the efficiency of a biological brain will require going from networks that use repeatable digital to approximate analogue weights, which is not compatible with how modern AIs are trained, developed and deployed.

Even if some kind of AI did behave like an NI that probably wouldn't help understand NI. Training an AI just gives you 10's (or for an AI similar to an NI, probably 100's) billions of meaningless weights, with even less structure than a biological brain.
 
I am in danger here of losing track of my train of thought and this discussion.
A system that adapts has adaptability. I don't see why it also needs to change its program.
The point of my statement is the ability of the AI to adapt to its environment in the same sense as the human brain adapts to its environment. I have difficulty making a good comparison between the two, because my position is the two are completely different things, and so not easily comparable. I'm trying to find the closest analogous situation in the AI world as in the NI world. What I came up with isn't perfect, has some flaws, but it's the best I can do to address your challenge, considering that we are talking about two different things. Ultimately, that's what I'm trying to impress on you; that we are talking about two completely different things. It just happens that AI can mimic something that we think is important about NI. Ultimately, we should let machines be machines and NI be NI and stop being so concerned about making one into the other.

When the human brain adapts, it undergoes profound changes throughout its entire structure. It physically, chemically and, to some degree, functionally changes. This is far more profound than just editing a parameter file.
You don't think that doing something it was not designed to do is "adapting"?
It's very limited adaptation, more of an adjustment.

And thanks for reminding me that the game was Othello.
You're welcome. I had to ask Google Gemini for the name and incident.
In which case your point that computers only compute is undermined,
I don't see how. It's computation, which is what computers do, as I stated.
We seem to disagree what constitutes "adaptation".
I think you are correct that we disagree on the meaning of that term.
Even if we don't know everything a brain can do now, we do know that whatever it does is within the laws of physics, and can be simulated.
I'm not a Materialist. I don't believe that all reality can be described by the laws of physics. I believe in the supernatural. What I don't know or have a particular belief in, is where the physical ends and the spiritual begins. I don't know where brain and mind diverge, if at all. I do believe that our lives and our ultimate consciousness transcends the physical universe.
 
The best expression I've seen of the difference between NI and AI was published in a 2016 Aeon article, "The empty brain," by Robert Epstein. I'm certain that the pro-AGI crowd, the people who believe that AI is on its way to superhuman intelligence, will hate this article. Quite likely, several of you will dispute it; I've seen that before. Regardless, it is a thoughtful article that brings up good points. I'm sure it will give everyone something to think about.

From Google Gemini: "This article is a classic example of a non-computational or radical embodied cognition viewpoint, which directly challenges the Computational Theory of Mind (CTM)."

"The empty brain: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer"
 
The best expression I've seen of the difference between NI and AI was published in a 2016 Aeon article, "The empty brain," by Robert Epstein. I'm certain that the pro-AGI crowd, the people who believe that AI is on its way to superhuman intelligence, will hate this article. Quite likely, several of you will dispute it; I've seen that before. Regardless, it is a thoughtful article that brings up good points. I'm sure it will give everyone something to think about.

From Google Gemini: "This article is a classic example of a non-computational or radical embodied cognition viewpoint, which directly challenges the Computational Theory of Mind (CTM)."

"The empty brain: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer"
Thanks for the link , not very inspiring, we don’t think brains work like our computers, he does contradict himself as he describes information processing as one of his examples against the idea the brain processes information. Do you know if he’s updated the article as the almost 10 years since he wrote it we have learned a lot more? Plus he didn’t factor in Aphantasia in his descriptions.
 
To circle all the way back around to AI generated art for a sec. IMO it's practically identical, philosophically, to a non-artist going on Fiverr and asking for someone who doesn't care and will barely get paid to make them a thing. The skill becomes 'knowing what you want and how to ask for it.'

People have been able to just ask for something uninspired or something genuinely cool since forever; the main differences with AI are 'getting visually polished art instanly/being able to iterate to get to what you meant' and 'not having an artist involved to help with getting what you meant.'

For me the 'disappoint' Oatmeal talks about is from stuff that looks uninspired, AI or not. I dislike genuine Kinkade and AI generated Kinkade an equal amount.

Also, when it comes to AI generated comic book/fantasy/scifi art, if you consume a lot of the real stuff you absolutely do get a feeling for specific artists/styles it's got in its training set and that it's now emulating. That varies from generator to generator.
 
The best expression I've seen of the difference between NI and AI was published in a 2016 Aeon article, "The empty brain," by Robert Epstein. I'm certain that the pro-AGI crowd, the people who believe that AI is on its way to superhuman intelligence, will hate this article. Quite likely, several of you will dispute it; I've seen that before. Regardless, it is a thoughtful article that brings up good points. I'm sure it will give everyone something to think about.

From Google Gemini: "This article is a classic example of a non-computational or radical embodied cognition viewpoint, which directly challenges the Computational Theory of Mind (CTM)."

"The empty brain: Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer"
Very silly article. AI is trained and run on digital computers that behave the way the author describes, but they can also be built using analogue techniques. It is the structure of and operation of a neuron in an AI model that is similar to a biological brain. One of the problems with AI models is we don't really understand how they do what they do once they're trained, just we don't really understand how biological brains do what they do.

There's also the strawman that memories are not stored in individual neurons - well no one claimed or thinks they are! They are stored in the weights of the connections between many neurons. This was the inspiration for the Hopfield network and self-organising map.
 
Last edited:
It's very limited adaptation, more of an adjustment.
I believe that a change that enables an AI to repurpose its storage to do something that researchers thought was impossible is more than an "adjustment"
I think you are correct that we disagree on the meaning of that term.
Yes, indeed. We'll have to leave it at that.
I'm not a Materialist. I don't believe that all reality can be described by the laws of physics. I believe in the supernatural. What I don't know or have a particular belief in, is where the physical ends and the spiritual begins. I don't know where brain and mind diverge, if at all. I do believe that our lives and our ultimate consciousness transcends the physical universe.
OK. We'll never agree then. I could start a philosophical debate about yours and mine views, but I think it is outside the scope of this forum.
 

Back
Top Bottom