
Merged Artificial Intelligence

Why not? Computers can already replicate the function of neurons. Why not just hook lots and lots of those together?
Because they wouldn't have the magic bean, of course. Once you understand that's the assumption he's building everything on, the rest of it makes more sense.

Opcode, as for the AI that this thread is about, be not afraid. The only people who actually think AI is anything resembling human brains are marketing drones, and they're all beholden to Mammon anyway. They'll say any damn thing if it gets you to bite.
 
Why not? Computers can already replicate the function of neurons. Why not just hook lots and lots of those together?
The best simulations of neurons that anybody has achieved are only approximations of actual neurons. What's more, neurons are only a very small part of the puzzle. The human brain is composed of two general categories of cells: neurons and glial cells. Neurons can be further subclassified as excitatory and inhibitory, and glial cells can be divided into astrocytes, oligodendrocytes, and microglia. A study of the unique gene expression of all these cells shows that these subtypes break down further into more than three thousand types of brain cells. We don't know what all these types of cells do, but one presumes their tasks must be important. The human brain is composed of about 85 billion neurons and about 85 billion glial cells.

But, wait! In addition to the 3,000 types of neurons and glial cells making up the 170 billion cells of the human brain, we also have about 10^14 synapses between neurons, and the types of synapses vary with the types of neurons. Additionally, the neurotransmitters used by these cells, of which there are several, vary by function. Nobody knows the functions of all these cells, and new discoveries about the ways different brain cells communicate with and aid each other are made continuously. So far, nobody has simulated the full function of even an individual neuron, never mind groups of neurons.

Researchers have produced a complete anatomical map (a connectome) of the C. elegans nematode's 302 neurons, but even approximately simulating that small number of neurons is challenging. The most powerful supercomputers today could at best be hoped to approximate the general function of a human brain, with no hope of simulating actual brain activity.
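To make the "only approximations" point concrete, here's a minimal leaky integrate-and-fire model, one of the simplest textbook neuron approximations (all constants below are illustrative, not fitted to any real cell). Notice how much it leaves out: no dendritic geometry, no neurotransmitter chemistry, no glia, no 3,000 cell types.

```python
# Minimal leaky integrate-and-fire neuron -- a standard, deliberately
# crude approximation. All constants are illustrative, not fitted.
dt, t_max = 0.1, 100.0                                      # timestep, duration (ms)
tau, v_rest, v_thresh, v_reset = 10.0, -65.0, -50.0, -65.0  # ms, mV, mV, mV
i_input = 2.0                                               # constant input (arbitrary units)

v = v_rest
spikes = []
for step in range(int(t_max / dt)):
    # Membrane potential decays toward rest while integrating the input.
    v += dt * (-(v - v_rest) + i_input * tau) / tau
    if v >= v_thresh:              # threshold crossing -> emit a spike
        spikes.append(step * dt)
        v = v_reset                # instantaneous reset (another simplification)

print(f"{len(spikes)} spikes in {t_max} ms")
```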
 
I agree that we are nowhere near simulating a brain. But we won't have to, the same way we don't have to simulate an eye to make a camera.
LLMs don't think the way we do, even at the level we do understand. They don't learn while they are queried. They have no agency of their own (they do nothing until queried). Very different. And there are a lot of things we don't understand about both brains and LLMs: they might be similar, or different, who knows.
But they have made one thing clear, IMHO: they are complex enough. They can understand all the text produced by humans, and they can analyze and generate images at a level similar to humans. We don't have to wait until computers are 10^12 times more powerful. Not even 1,000 times more powerful.
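For what it's worth, the "they don't learn while queried" point is visible directly in how inference code is normally written. A minimal PyTorch sketch (the tiny linear layer is just a stand-in for a trained network):

```python
# Inference does not update weights: a minimal PyTorch illustration.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)            # stand-in for a trained network
before = model.weight.detach().clone()

model.eval()                       # inference mode
with torch.no_grad():              # no gradients, hence no learning
    for _ in range(100):           # "query" the model repeatedly
        _ = model(torch.randn(1, 8))

# The parameters are bit-for-bit unchanged after any number of queries.
assert torch.equal(before, model.weight)
print("weights unchanged after 100 queries")
```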
 
I realize this isn't your point, but there are lots of reasons why the corporations funding AI want more of them. Sure, they need hardware and power, but an AI can do a job without pay or benefits. No need to fund their lives outside of the job. They won't assert their rights or need time off. No worry that they will be unhappy and quit, or vote against your interests.

Of course, all of the out-of-work citizens will end up being a problem. Perhaps a new law could count an AI as 3/5 of a person, providing additional political representation to its owners.
Yep. I also think there's at least a whiff of misanthropy that you get from some of the "leaders".

On a related note: I just looked for an article I read about a year ago but can't find it again. In it, one of the young titans was explaining why we should look forward to more AI interactions in customer service. He explained how no one likes "Press 1 if your phone service is not working, Press 2 if you think we should care", and how unreliable the older "please state your query in a short phrase of SQL-compatible terms" systems were (I may be misremembering the examples he gave). With an AI agent, he said, you would be able to speak naturally, and the AI could even ask you for more details so it could route your call accurately, or even solve your query without having to transfer you! For some reason the journalist didn't ask the obvious question about this revolution in customer service: "Oh, you mean how customer service all used to work, before some CEO decided they could save 0.1 of a penny per call and force us to use a system that everyone* hates?"

*Apart from CEOs who have 'people' to deal with such matters.
 
The best simulations of neurons that anybody has achieved are only approximations of actual neurons. What's more, neurons are only a very small part of the puzzle. The human brain is composed of two general categories of cells: neurons and glial cells. Neurons can be further subclassified as excitatory and inhibitory, and glial cells can be divided into astrocytes, oligodendrocytes, and microglia. A study of the unique gene expression of all these cells shows that these subtypes break down further into more than three thousand types of brain cells. We don't know what all these types of cells do, but one presumes their tasks must be important. The human brain is composed of about 85 billion neurons and about 85 billion glial cells.

But, wait! In addition to the 3,000 types of neurons and glial cells making up the 170 billion cells of the human brain, we also have about 10^14 synapses between neurons, and the types of synapses vary with the types of neurons. Additionally, the neurotransmitters used by these cells, of which there are several, vary by function. Nobody knows the functions of all these cells, and new discoveries about the ways different brain cells communicate with and aid each other are made continuously. So far, nobody has simulated the full function of even an individual neuron, never mind groups of neurons.

Researchers have produced a complete anatomical map (a connectome) of the C. elegans nematode's 302 neurons, but even approximately simulating that small number of neurons is challenging. The most powerful supercomputers today could at best be hoped to approximate the general function of a human brain, with no hope of simulating actual brain activity.
So why, in principle, can all of this not be simulated in a sufficiently large and powerful computer?
 
I agree that we are nowhere near simulating a brain. But we won't have to, the same way we don't have to simulate an eye to make a camera.
LLMs don't think the way we do, even at the level we do understand. They don't learn while they are queried. They have no agency of their own (they do nothing until queried). Very different. And there are a lot of things we don't understand about both brains and LLMs: they might be similar, or different, who knows.
But they have made one thing clear, IMHO: they are complex enough. They can understand all the text produced by humans, and they can analyze and generate images at a level similar to humans. We don't have to wait until computers are 10^12 times more powerful. Not even 1,000 times more powerful.
LLMs cannot do all the tasks that we want AI to do. The goal of the major AI companies is not rolling out more of the same, but improving AI to the point that it can replace humans in most tasks. We don't want just call center agents. We want AI self-driving cars and self-cleaning houses and robotic farmers and harvesters. We want a medical system that is affordable, efficient, and effective. We aren't going to get that from LLMs.
 
Why not? And by "computer" do you mean the hardware, or the mathematical simulations/models in software?
My post yesterday at 11:44, Post #1,485 in this thread, alludes to the answer to "Why not?" The complexity and scale required are far beyond anything technology could produce in the foreseeable future. Because the difficulty increases exponentially with the complexity, it is possible that we would need more generations of Moore's law than would fit in the time the Universe has existed to develop a computer as powerful as the human brain. I include the hardware in this statement because you can't run the software without the hardware. That leads me to another point. A computer is not a brain. A brain is not a computer. A computer might simulate some functions of a brain, but a computer is based on logical operations that are built into it. A brain is an adaptive organ that does more than merely compute. As a biological organ, it grows and adapts to its environment. A computer doesn't care if the building is on fire. A brain does.
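A toy back-of-envelope to show what "generations of Moore's law" means here; every number below is an assumption for illustration, not a real estimate:

```python
# Toy back-of-envelope; every number here is an assumption for illustration.
import math

current_flops = 1e18   # assumed: roughly an exascale machine
brain_flops = 1e25     # assumed: one (much-contested) "brain-equivalent" guess

# If the gap were just a fixed ratio, the doublings needed would be modest:
print(math.log2(brain_flops / current_flops))   # ~23 doublings

# But if, as argued above, the required compute grows exponentially with the
# number of interacting components, the doublings needed scale with the
# component count itself -- on the order of 1e14 doublings for 1e14 synapses --
# which no amount of Moore's-law progress could ever deliver.
```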
 
[…] but a computer is based on logical operations that are built into it.
A brain is based on the chemical reactions that are built into it.
A brain is an adaptive organ that does more than merely compute.
A computer is also able to do more than merely compute.
As a biological organ, it grows and adapts to its environment.
What?
A computer doesn't care if the building is on fire. A brain does.
Is that relevant? And if it is, why couldn't computers do the same?
 
A brain is based on the chemical reactions that are built into it.

The point that you missed completely is that when you build a computer, that's all that computer will ever be. When it reaches its end of life, it has the same circuitry it was designed to have before it was even built, allowing for manually made upgrades. The primary changes in software are usually made by a human programmer, but the point is that the adaptability of a computer is extremely limited. Computers really aren't designed to be self-adaptable. In contrast, the human brain is incredibly adaptable. It spends its entire existence adapting to its environment, and it can make amazing adaptations.

A computer is also able to do more than merely compute.

Not really. That's in its name. A computer computes. It isn't much good as a blender or a hammer or pretty much anything else. The point, again, is that a computer does not adapt itself to achieve maximum performance with regard to its environment, unlike brains. Say it with me: "Adapt."

Opcode said:
As a biological organ, it grows and adapts to its environment.

steenkh:
What?

Adapt! Brains grow, sense, heal, modify, adjust and ADAPT! They change their circuitry to optimize to their environment. They are living things. A computer is an inanimate thing. It has little to no ability to adapt or change.

Is that relevant? And if it is, why couldn't computers do the same?
It's an example of living things sensing their environment and adapting. A computer could have a fire sensor installed, but that would just signal an alert, without making any intrinsic change to the computer in the face of an adverse event.

Stop seeing the brain as just a set of circuitry. The brain structure itself is a dynamic system. It molds itself in response to the challenges or problems that confront it.

Taxi drivers in London are required to memorize all the streets, and brain imaging shows that doing so causes physical changes to their brains. The requirement that London taxi drivers pass a demanding exam called "The Knowledge" has been the subject of extensive brain-imaging research, primarily conducted at University College London (UCL). This research has found physical changes in a specific area of the brain that take place as the drivers memorize the layout of the city. The studies provide strong evidence that the adult human brain is highly plastic and can undergo measurable structural changes in response to intensive environmental demands and skill acquisition.

Computers would learn and map city streets using a completely different process, one that is likely faster, because the computer doesn't physically grow its own memories. The computer isn't adapting; it's recording. The brain is adapting. There is a difference between the two. It is important that you understand that the brain is not a computer, and a computer can only simulate some of the functions of a brain.
 
The point that you missed completely is that when you build a computer, that's all that computer will ever be. When it reaches its end of life, it has the same circuitry it was designed to have before it was even built, allowing for manually made upgrades. The primary changes in software are usually made by a human programmer, but the point is that the adaptability of a computer is extremely limited. Computers really aren't designed to be self-adaptable. In contrast, the human brain is incredibly adaptable. It spends its entire existence adapting to its environment, and it can make amazing adaptations.
You seem to be unaware of how software, and particularly LLMs and the like, works. The computer may be the same, but the software in it can change, and can even modify itself. This is particularly evident in AI, where the actual programme does not need to change very much, but the data it is filled with can change its behaviour completely. A couple of years ago there was a post in this forum about how an LLM had built for itself a representation of a game (I believe it was Go) despite having nothing to work with, and nothing to store, other than text. It had done this without any change of program.
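To illustrate "same program, behaviour changed only by data", here's a toy sketch of my own (not the Go/Othello system): the identical few lines of code act as an AND gate or an OR gate depending solely on the weights they learned.

```python
# Same program, different learned data -> different behaviour.
# A toy perceptron trained as AND or as OR; the code never changes.

def train(examples, epochs=20, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), target in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # perceptron learning rule
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def run(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
OR  = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

for name, data in [("AND", AND), ("OR", OR)]:
    w, b = train(data)
    print(name, [run(w, b, x1, x2) for (x1, x2), _ in data])
```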
Not really. That's in its name. A computer computes.
Nope. That was in the old days. These days computers rarely compute anything. They work with data.
It isn't much good as a blender or a hammer or pretty much anything else. The point, again, is that a computer does not adapt itself to achieve maximum performance with regard to its environment, unlike brains. Say it with me: "Adapt."
Computers may not adapt, but programmes do. As I mentioned, there was an early LLM that adapted and built for itself a representation of a game of Go.
Adapt! Brains grow, sense, heal, modify, adjust and ADAPT! They change their circuitry to optimize to their environment. They are living things. A computer is an inanimate thing. It has little to no ability to adapt or change.
It is by now obvious that a programme need not be a living thing to adapt or change.
Stop seeing the brain as just a set of circuitry.
Why? Anything a brain can do can, in principle, be simulated, although probably not in practical terms at the moment.
Taxi drivers in London are required to memorize all the streets, and brain imaging shows that doing so causes physical changes to their brains. The requirement that London taxi drivers pass a demanding exam called "The Knowledge" has been the subject of extensive brain-imaging research, primarily conducted at University College London (UCL). This research has found physical changes in a specific area of the brain that take place as the drivers memorize the layout of the city. The studies provide strong evidence that the adult human brain is highly plastic and can undergo measurable structural changes in response to intensive environmental demands and skill acquisition.
I would reckon that an LLM that is fed "The Knowledge" also shows specific changes in its data.
Computers would learn and map city streets using a completely different process, one that is likely faster, because the computer doesn't physically grow its own memories. The computer isn't adapting; it's recording. The brain is adapting. There is a difference between the two. It is important that you understand that the brain is not a computer, and a computer can only simulate some of the functions of a brain.
It is important that you understand that there is a difference between software and hardware. Software can and does adapt these days.
 
To add to that: that plasticity of the brain is in fact what the software (not the computer) does its best to mimic; that's what the training of an AI is all about. Training an AI begins with a mass of basic transformers, instructions, and data. As it is trained, "patterns" in the network emerge: some pathways become reinforced, others fade into the background. The result is a network that has been shaped by experience, much as our brains are shaped by their training. This is also why we get emergent behaviours that we didn't expect and often don't understand the origins of. Our brains are constantly being trained by their myriad inputs; with current AIs we usually stop that learning process at some point, but we could allow them to continue to learn.
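For concreteness, that "reinforce some pathways, let others fade" picture is just gradient descent on the connection weights. A minimal PyTorch sketch, with made-up data:

```python
# Minimal picture of "pathways being reinforced": gradient descent
# nudges each weight up or down according to how much it helped.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.05)
loss_fn = nn.MSELoss()

x = torch.randn(64, 4)              # made-up inputs
y = x.sum(dim=1, keepdim=True)      # made-up target pattern

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(net(x), y)
    loss.backward()                 # how much did each weight contribute?
    opt.step()                      # reinforce or weaken it accordingly

print(f"final loss: {loss.item():.4f}")
```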
 
You seem to be unaware of how software, and particularly LLMs and the like, works. The computer may be the same, but the software in it can change, and can even modify itself. This is particularly evident in AI, where the actual programme does not need to change very much, but the data it is filled with can change its behaviour completely.

You undercut your central point, seemingly by confusing program code with configuration files. You don't seem to be aware that computers have always been distinguished by programmability, as opposed to calculators, just as you seem unaware that system adaptability is not the same as programmability.

A couple of years ago there was a post in this forum about how an LLM had built for itself a representation of a game (I believe it was Go) despite having nothing to work with, and nothing to store, other than text. It had done this without any change of program.
Again, you are undercutting your central thesis and supporting mine. You are probably referring to an LLM that built a model of Othello despite never being shown the layout of the board or the rules of the game. It was able to create an internal spatial representation of the game based only on being told, textually, what legal moves the players made. Critically for our discussion, this did not involve the software adapting itself at all. The software did what it was designed to do from the start. You know this, because you even say, "It had done this without any change of program." Ergo, no adaptation by the AI.
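For anyone curious, the way those studies demonstrated the internal board representation was with "probes": small classifiers trained to read a property out of the model's hidden activations. A toy sketch of the probing idea, using stand-in random data rather than a real LLM:

```python
# The "probe" idea in miniature: if a simple classifier can predict some
# property from a model's hidden activations, the model represents it.
# Stand-in data here; the real studies probed a transformer's layers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 64))             # pretend hidden states
board_property = hidden[:, :3].sum(axis=1) > 0   # pretend encoded property

probe = LogisticRegression(max_iter=1000).fit(hidden[:800], board_property[:800])
print("probe accuracy:", probe.score(hidden[800:], board_property[800:]))
```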

Nope. That was in the old days. These days computers rarely compute anything. They work with data.
We call that "computing."

Computers may not adapt, but programmes do. As I mentioned, there was an early LLM that adapted and built for itself a representation of a game of Go.
Which you admitted resulted in no change to the software or the computer and so is not an example of either adapting to its environment.

It is by now obvious that a programme need not be a living thing to adapt or change.
It isn't the program itself that is changing. It is configuration or data files, not program files and certainly not the structure of either the software or the hardware.
Why? Anything a brain can do can, in principle, be simulated, although probably not in practical terms at the moment.
That's a statement of faith on your part, not a logical demonstration. Nobody even knows everything a brain can do, much less whether we could ever build anything that could do the same.
I would reckon that an LLM that is fed "The Knowledge" also shows specific changes in its data.
But not its structure or program.
It is important that you understand that there is a difference between software and hardware. Software can and does adapt these days.
No, it doesn't.
 
To add to that: that plasticity of the brain is in fact what the software (not the computer) does its best to mimic; that's what the training of an AI is all about. Training an AI begins with a mass of basic transformers, instructions, and data. As it is trained, "patterns" in the network emerge: some pathways become reinforced, others fade into the background. The result is a network that has been shaped by experience, much as our brains are shaped by their training. This is also why we get emergent behaviours that we didn't expect and often don't understand the origins of. Our brains are constantly being trained by their myriad inputs; with current AIs we usually stop that learning process at some point, but we could allow them to continue to learn.
The plasticity of the brain is more profound than just changing the strength of some data pathways. In some cases, half or more of the physical brain has been removed, with the remaining portion compensating for the missing parts. To your point, the AI learns by observation, recording what it observes, but the brain changes as it participates in the process. What the brain does could, by analogy, be compared to an AI reorganizing its modules and shuffling its program code, not just strengthening or weakening data pathways.

Stopping the human brain from training while it is alive isn't an option. I asked Google Gemini why AI training is stopped, and it gave me three reasons:

1. Preventing Overfitting or Overtraining (The Performance Reason)
Overfitting occurs when a model becomes too specialized in the data it was trained on (the training dataset). It starts memorizing the noise and unique characteristics of this specific data rather than learning the generalized patterns needed to make accurate predictions on new, unseen data.

2. Resource Management (The Efficiency and Cost Reason)
Diminishing Returns: Even before a model begins to overfit, the rate of performance improvement often slows dramatically. The cost (in GPU-hours and electricity) of achieving the last 0.1% increase in accuracy can be exponentially higher than the cost of the first 90%.

3. Architecture and Iteration (The Development Reason)
In a research or development setting, training is often stopped to analyze the model and iterate on the design.

For these reasons, I don't think we could let an AI train continuously.
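Reason 1, by the way, is usually operationalized as "early stopping": watch performance on held-out data and stop when it stops improving. A minimal sketch of just the stopping rule (train_one_epoch and validation_loss are hypothetical stand-ins for the real training machinery):

```python
# Early stopping in miniature: halt when held-out loss stops improving.
# train_one_epoch() and validation_loss() are hypothetical stand-ins.

def early_stopping_loop(train_one_epoch, validation_loss,
                        patience=3, max_epochs=100):
    best, bad_epochs = float("inf"), 0
    for epoch in range(max_epochs):
        train_one_epoch()
        loss = validation_loss()
        if loss < best:
            best, bad_epochs = loss, 0   # still generalizing: keep going
        else:
            bad_epochs += 1              # memorizing noise, not learning
            if bad_epochs >= patience:
                return epoch             # stop before overfitting sets in
    return max_epochs
```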
 
To add to that: that plasticity of the brain is in fact what the software (not the computer) does its best to mimic; that's what the training of an AI is all about. Training an AI begins with a mass of basic transformers, instructions, and data. As it is trained, "patterns" in the network emerge: some pathways become reinforced, others fade into the background. The result is a network that has been shaped by experience, much as our brains are shaped by their training. This is also why we get emergent behaviours that we didn't expect and often don't understand the origins of. Our brains are constantly being trained by their myriad inputs; with current AIs we usually stop that learning process at some point, but we could allow them to continue to learn.
This is why such models are called "neural networks": they are based on the way neurons work.
 
This is why such models are called "neural networks": they are based on the way neurons work.
Yeah, that's kind of a funny name. Neural networks work analogously to one function of neurons, but they aren't anything like actual neurons. The individual component of a neural network is a mathematical function with a single output, whereas a single biological neuron's axon may feed into thousands of other neurons. Real neurons also use a time-dependent communication method, in which signals that don't meet the timing criteria are ignored. In fact, real neurons and neuronal networks have many signal modifiers that AI neural networks don't.
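To put it concretely, here is the entirety of one artificial "neuron": a weighted sum pushed through a squashing function, producing a single number (the weights below are made up). No spikes, no timing, no chemistry.

```python
# One artificial "neuron" in full: a weighted sum through a squashing
# function, producing a single output. That is all the biology it carries.
import math

def artificial_neuron(inputs, weights, bias):
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

print(artificial_neuron([0.5, -1.0, 2.0], [0.1, 0.4, -0.3], 0.2))
```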
 
