
Merged Artificial Intelligence

Yeah, that's kind of a funny name. Neural networks are named by loose analogy to networks of neurons, but they aren't anything like actual neurons. The individual component of a neural network is a mathematical function with a single output. A single biological neuron's single axon may feed into thousands of neurons. Real neurons also use a time-dependent communication method, in which signals that don't meet the timing criteria are ignored. In fact, real neurons and neuronal networks have many signal modifiers that AI neural networks don't.
Right, but what prevents software from being programmed to function more closely like a biological neuron? Again, assuming a sufficiently large and powerful computer?
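
To make the contrast above concrete, here is a minimal sketch (in Python, with made-up weights and inputs) of what a single unit in an AI neural network computes: a weighted sum of its inputs pushed through a fixed nonlinearity, producing exactly one output value and involving no timing at all.

```python
import math

def artificial_neuron(inputs, weights, bias):
    """One unit of an AI neural network: a weighted sum squashed by a
    fixed nonlinearity. No spiking, no timing, one output value."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Example with invented numbers: three inputs, one output.
print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```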
 
Right, but what prevents software from being programmed to function more closely like a biological neuron? Again, assuming a sufficiently large and powerful computer?
Software cannot overcome the limitations imposed on it by hardware. You aren't going to get more neuronal connectivity just because you write more code when you have only a single physical output connection. And, again, the problem becomes exponentially more difficult as the connectivity increases. The 100 trillion to 500 trillion estimated connections in the human brain far exceed anything humans have built, never mind that those connections are custom-made and fine-tuned. So, yes, we could make AI neural networks that are more like actual neurons, but they are still very simplistic analogies of biological neurons, and the term "neural network" has a specific meaning that doesn't include approaching the actual properties of biological neurons.
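
For a rough sense of the scale in that connection estimate, here is a back-of-envelope illustration only, assuming nothing more than one 4-byte number stored per connection:

```python
# Back-of-envelope storage estimate for the connection counts cited above.
# Assumption (illustration only): one 4-byte value per connection, ignoring
# connection structure, signal dynamics, and everything else a synapse does.
for connections in (100e12, 500e12):
    terabytes = connections * 4 / 1e12
    print(f"{connections:.0e} connections -> {terabytes:,.0f} TB for the weights alone")
```

Even this deliberately naive lower bound lands at hundreds of terabytes to a couple of petabytes before any of the structure, timing or fine-tuning mentioned above is accounted for.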
 
Software cannot overcome the limitations imposed on it by hardware. You aren't going to get more neuronal connectivity just because you write more code when you have only a single physical output connection. And, again, the problem becomes exponentially more difficult as the connectivity increases. The 100 trillion to 500 trillion estimated connections in the human brain far exceed anything humans have built, never mind that those connections are custom-made and fine-tuned. So, yes, we could make AI neural networks that are more like actual neurons, but they are still very simplistic analogies of biological neurons, and the term "neural network" has a specific meaning that doesn't include approaching the actual properties of biological neurons.
As I said, assume a sufficiently large and powerful computer. Need more hardware? Add more hardware. For the purposes of this argument, economics is not an issue.

Again, I'm not talking about practical limitations of current technology. I'm asking what, in principle, would prevent a sufficiently powerful computer from performing a function indistinguishable from a living brain?

Also keep in mind that you don't necessarily need to replicate the functions of all the different parts of the brain in order to simulate the result. An electronic "brain" may work in ways that are not at all analogous to a biological one, and yet be indistinguishable from it in the end.
 
As I said, assume a sufficiently large and powerful computer. Need more hardware? Add more hardware. For the purposes of this argument, economics is not an issue.

Again, I'm not talking about practical limitations of current technology. I'm asking what, in principle, would prevent a sufficiently powerful computer from performing a function indistinguishable from a living brain?

Also keep in mind that you don't necessarily need to replicate the functions of all the different parts of the brain in order to simulate the result. An electronic "brain" may work in ways that are not at all analogous to a biological one, and yet be indistinguishable from it in the end.
Oh, sort of like the way that a suitable set of aftermarket add-ons could make a VW bug indistinguishable from a Lamborghini?

The difference between an AI neural network and a biological neural network isn't just the power. They have radically different designs! They are completely different things. They just happen to share a single analogous commonality: both take many inputs and produce an output from them.

It is silly to talk about making artificial brains that function like real brains when nobody knows how real brains work, or even in what ways they work. Your question is like a young child asking how to clone humans.

Yes, you do have to replicate all the ways that the human brain works if you want to get indistinguishable end results.
 
Humans have always compared bodily functions to machines. But the brain isn't really like a computer - there is no software/hardware duality.

There is no reason to assume that we couldn't train a program on all the responses a brain gives to all inputs (would probably be lethal, but hey, science), and from the result produce something that would functionally be like that brain.

But it's doubtful that such a system would learn the same way as the original, causing fast divergence in their responses.

We need to understand a whole lot more about brains before we can design a system that can replicate the way it operates.
And such a system would be very unlike our current computers.
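
The "train a program on all the responses a brain gives to all inputs" idea is, at bottom, function approximation from recorded stimulus-response pairs. A toy sketch of that approach, with synthetic data standing in for the recordings (the "true" response function and all numbers here are invented for illustration):

```python
import numpy as np

# Toy stand-in for "all the responses a brain gives to all inputs":
# synthetic stimulus/response pairs generated by an unknown function.
rng = np.random.default_rng(0)
stimuli = rng.uniform(-1, 1, size=(1000, 3))                 # 3-dimensional inputs
responses = np.tanh(stimuli @ np.array([1.5, -2.0, 0.5]))    # hidden "true" behaviour

# Fit a simple linear model to mimic the observed behaviour.
weights, *_ = np.linalg.lstsq(stimuli, responses, rcond=None)

# The mimic reproduces the recorded behaviour approximately, but it has no
# mechanism for learning the way the original does, so the two diverge on
# anything outside the recorded data - which is the divergence point above.
new_stimulus = np.array([0.2, -0.4, 0.9])
print("mimic   :", new_stimulus @ weights)
print("original:", np.tanh(new_stimulus @ np.array([1.5, -2.0, 0.5])))
```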
 
Every brain component serves a purpose. Leaving one out would remove that functionality. The difference could be small, like cutting a leg off a millipede, but the end product won't be identical.
Why could you not replace that function with a different component? We make prosthetic legs all the time (though not for millipedes) and the end result - walking, running, crouching, jumping - is these days, for all practical intents and purposes, identical, despite the fact that they work in vastly different ways.

So replicate each and every brain component in silicon, don't leave anything out, and what distinguishes that from a biological brain?
 
Why could you not replace that function with a different component? We make prosthetic legs all the time (though not for millipedes) and the end result - walking, running, crouching, jumping - is these days, for all practical intents and purposes, identical, despite the fact that they work in vastly different ways.
I've yet to see anybody use a prosthetic limb in such a way that I could not tell they were using a prosthetic limb.
So replicate each and every brain component in silicon, don't leave anything out, and what distinguishes that from a biological brain?
At this point, we don't even know what that would entail, never mind actually doing it. What is clear is that we would be operating several orders of magnitude beyond where we are now and the difficulty increases exponentially with each order of complexity. Again, you are speaking like a young child trying to understand how to clone humans, when nobody knows how to do that.
 
I've yet to see anybody use a prosthetic limb in such a way that I could not tell they were using a prosthetic limb.
You haven't seen any of the modern ones then. Some of them are really good.

Also, there's a Toupee Problem here. You always notice the ones that aren't as good as the original limb, but if it's as good as the original limb - and some of them are - then you won't notice it.

Not to say that the users of the new prosthetics wouldn't vastly prefer to have their original limb back, but that's neither here nor there.

At this point, we don't even know what that would entail, never mind actually doing it. What is clear is that we would be operating several orders of magnitude beyond where we are now and the difficulty increases exponentially with each order of complexity. Again, you are speaking like a young child trying to understand how to clone humans, when nobody knows how to do that.
Again, I'm not talking about the limits of our current technology. I'm talking about what, in principle, divides electronics from biology such that the latter cannot be adequately simulated in the former?

Yes, such a device, today, is obviously science fiction. But I'm talking basic principles here. You seem to be of the opinion that there is a firm dividing line that will always and forever prevent brains from ever being simulated in computers, no matter what far-future technology we may speculate could exist. I want to know exactly what you think that dividing line is, where it is, and the mechanism by which it operates. Because I don't think it exists. I think that for a Kardashev III civilisation, such a device would be trivial.
 
You undercut your central point, seemingly by confusing program code with configuration files. You don't seem to be aware that computers have always been distinguished from calculators by their programmability, just as you seem unaware that system adaptability is not the same thing as programmability.
A system that adapts has adaptability. I don't see why it also needs to change its program.
Again, you are undercutting your central thesis and supporting mine. You are probably referring to an LLM that learned Othello despite never being shown the layout of the board or the rules of the game. It was able to create an internal spatial representation of the game based only on being told textually what legal moves the players made. Crucially for our discussion, this did not involve the software adapting itself at all. The software did what it was designed to do from the start. You know this, because you even say, "It had done this without any change of program." Ergo, no adaptation by the AI.
You don't think that doing something it was not designed to do is "adapting"?

And thanks for reminding me that the game was Othello.
We call that "computing."
In which case your point that computers only compute is undermined.
Which you admitted resulted in no change to the software or the computer and so is not an example of either adapting to its environment.
We seem to disagree what constitutes "adaptation".
That's a statement of faith on your part, not a logical demonstration. Nobody even knows what a brain can do, much less if we could ever build anything that could do the same.
Even if we don't know everything a brain can do now, we do know that whatever it does is within the laws of physics, and can be simulated.
 
Nobody even knows what a brain can do, much less if we could ever build anything that could do the same.
I missed this before.

This is the old "We have no idea how a brain works" canard. We actually know an awful lot about how a brain works. Down to a molecular level. What the various bits of the brain do and how they connect together. There is no part of the brain whose function we do not know, and we have known all of this for decades.
 

It seems clear to me that current tools and approaches are not suitable to simulate an entire organism faithfully.
 
You haven't seen any of the modern ones then. Some of them are really good.

Also, there's a Toupee Problem here. You always notice the ones that aren't as good as the original limb, but if it's as good as the original limb - and some of them are - then you won't notice it.

Not to say that the users of the new prosthetics wouldn't vastly prefer to have their original limb back, but that's neither here nor there.
Why would that be, if the new prosthetics are identical to their original limb? How could they even tell the difference?

Again, I'm not talking about the limits of our current technology. I'm talking about what, in principle, divides electronics from biology such that the latter cannot be adequately simulated in the former?

Yes, such a device, today, is obviously science fiction. But I'm talking basic principles here. You seem to be of the opinion that there is a firm dividing line that will always and forever prevent brains from ever being simulated in computers, no matter what far-future technology we may speculate could exist. I want to know exactly what you think that dividing line is, where it is, and the mechanism by which it operates. Because I don't think it exists.
Time. I've said it before. If it takes more time than the Universe will exist to make something, it cannot be made. You are appealing to magic at least as much as any magic bean theory when you suggest that some future technology is going to make all the problems disappear. And, again, it is silly to ask what is to prevent us from replicating the human brain when we don't know the composition, function or means of operation of the human brain. Your persistent demand that I tell you what specific mechanism prevents our replicating a human brain is silly and childishly naive. What little we know about the human brain is that nothing can replicate it but the natural manner of human reproduction. As I mentioned, the human brain is the most complex structure known in the Universe. That means nothing we know of has made anything else as complex as it is, functional or not. That greatly lowers the odds that anything could make a functioning brain like a human brain.

I think that for a Kardashev III civilisation, such a device would be trivial.
I suppose you have one of those in your back pocket.
 
I missed this before.

This is the old "We have no idea how a brain works" canard. We actually know an awful lot about how a brain works. Down to a molecular level. What the various bits of the brain do and how they connect together. There is no part of the brain whose function we do not know, and we have known all of this for decades.
We know of 3000 types of neuron and glial cells. You tell me what each type does, because nobody else knows.

Researchers only recently discovered that the timing of stimuli is critical to how they are processed by the brain.

The only functions of the brain that have been known for decades are the broadest strokes, and even that is only crudely known. If you think it is so easy, feel free to make one to bring to class!
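
As an illustration of the timing point, here is a standard leaky integrate-and-fire toy model (a textbook simplification, not a claim about any of those 3000 cell types): the same two input pulses either produce a spike or nothing at all, depending purely on how far apart in time they arrive.

```python
def lif_spikes(input_times, dt=0.001, t_end=0.1, tau=0.02,
               pulse=0.9, threshold=1.0):
    """Leaky integrate-and-fire toy model: the membrane potential decays with
    time constant tau; each input pulse adds a fixed amount; a spike fires
    only if the potential crosses threshold before it leaks away."""
    v, spikes, t = 0.0, [], 0.0
    while t < t_end:
        v *= (1 - dt / tau)                          # leak
        if any(abs(t - ti) < dt / 2 for ti in input_times):
            v += pulse                               # input pulse arrives
        if v >= threshold:
            spikes.append(round(t, 3))
            v = 0.0                                  # reset after spiking
        t += dt
    return spikes

# Two pulses close together summate and fire; the same pulses far apart do not.
print(lif_spikes([0.010, 0.012]))   # -> [0.012], a spike
print(lif_spikes([0.010, 0.060]))   # -> [], no spike
```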
 
Humans have always compared bodily functions to machines. But the brain isn't really like a computer - there is no software/hardware duality.

There is no reason to assume that we couldn't train a program on all the responses a brain gives to all inputs (would probably be lethal, but hey, science), and from the result produce something that would functionally be like that brain.

But it's doubtful that such a system would learn the same way as the original, causing fast divergence in their responses.

We need to understand a whole lot more about brains before we can design a system that can replicate the way it operates.
And such a system would be very unlike our current computers.
This in fact ties in with my point that we don't want to replicate NI: we know it's buggy, unreliable - prone to a huge range of errors - and messy. We don't want AIs to be like that. Of course there is an immense wealth of knowledge to be gained in learning more and more about our NI, but not because we need that knowledge to develop our AIs.
 
Why would that be, if the new prosthetics are identical to their original limb? How could they even tell the difference?
I didn't say that they were identical. I said that they were functionally indistinguishable. Especially to someone who doesn't know it's there.

Time. I've said it before. If it takes more time than the Universe will exist to make something, it cannot be made. You are appealing to magic at least as much as any magic bean theory when you suggest that some future technology is going to make all the problems disappear.
The word "if" is doing a lot of heavy lifting in that argument. Given the hypothetical situation of infinite time, a sufficiently advanced device could simulate a human brain. There would be nothing else to prevent it.

And, again, it is silly to ask what is to prevent us from replicating the human brain when we don't know the composition, function or means of operation of the human brain.
We do know the composition, function and means of operation of the human brain. We know what it's made of, what all the subparts of the brain do, what happens when you knock them out, and a whole lot more that you are not giving modern science credit for.

Your persistent demand that I tell you what specific mechanism prevents our replicating a human brain is silly and childishly naive.
Now you're getting desperate.

What little we know about the human brain is that nothing can replicate it but the natural manner of human reproduction. As I mentioned, the human brain is the most complex structure known in the Universe. That means nothing we know of has made anything else as complex as it is, functional or not. That greatly lowers the odds that anything could make a functioning brain like a human brain.
But there is no reason in principle that such a ridiculously complex structure could not be replicated.

I suppose you have one of those in your back pocket.
Of course not. But if one did exist, don't you think it would be easily capable of such a feat?

We know of 3000 types of neuron and glial cells. You tell me what each type does, because nobody else knows.

Researchers only recently discovered that the timing of stimuli is critical to how they are processed by the brain.

The only functions of the brain that have been known for decades are the broadest strokes, and even that is only crudely known. If you think it is so easy, feel free to make one to bring to class!
Again, you are giving very little credit to modern neurological science here. Sure, there are things we still don't know. There are things that I certainly don't know, because I'm not a research neurologist with thirty years of practice under my belt and dozens of published papers. But, I'm willing to bet, neither are you. Your demand that I tell you everything about the brain is hollow and you know it.

Yes, there are still things we don't know about precisely how certain parts of the brain operate to produce the results that we observe. These are extremely fine details, and incredibly hard problems. But the amount we do know fills volumes! Shelves! Libraries! And there is no reason to believe - no reason at all - that we as a species will not one day understand everything there is to know.

And there is no reason to believe - no reason at all - that once we do know all of it, our technology will not one day be capable of simulating it.

And do you know why I hold this view so strongly? Because the only alternative is Cartesian dualism: the mystical idea that the mind is somehow separate from the brain; that it exists through some magical, ineffable process that has nothing to do with the operation of the brain; that the mental exists independently of the physical and the physical can't think. And this is both profoundly unscientific and completely falsified by everything we know about what happens to the mind when the brain gets damaged.

But all of the above is actually completely irrelevant because nobody has yet proven that the only way to simulate a mind is to copy precisely the physical form and operation of a biological brain. There might be other ways of achieving a functionally indistinguishable result. We just don't know yet. But it can't be absolutely ruled out.
 
Why would that be, if the new prosthetics are identical to their original limb? How could they even tell the difference?


Time. I've said it before. If it takes more time than the Universe will exist to make something, it cannot be made. You are appealing to magic at least as much as any magic bean theory when you suggest that some future technology is going to make all the problems disappear. And, again, it is silly to ask what is to prevent us from replicating the human brain when we don't know the composition, function or means of operation of the human brain. Your persistent demand that I tell you what specific mechanism prevents our replicating a human brain is silly and childishly naive. What little we know about the human brain is that nothing can replicate it but the natural manner of human reproduction. As I mentioned, the human brain is the most complex structure known in the Universe. That means nothing we know of has made anything else as complex as it is, functional or not. That greatly lowers the odds that anything could make a functioning brain like a human brain.


I suppose you have one of those in your back pocket.
Billions of brains are built every single day; there is no evidence that it would take more time than the universe will exist for us to also start building them. There is also plenty of evidence that organisms do not represent the most efficient designs for achieving their functionality, so we have no reason to believe the only way to achieve the same outputs as a brain* gives to the same inputs is to duplicate the entirety of the brain's functionality. But even with that caveat, I go back to my point about the topic of this thread - a point I may have made once or twice before - that we shouldn't be aiming to recreate NI to use as a tool.


*We do need to note that NI is not the sole province of the brain itself; there is an immense amount of pre- and post-processing of inputs and outputs that occurs throughout organisms.
 
I do think we are rather getting off track here - this thread is about AIs not NIs, albeit there is some overlap.
We know of 3000 types of neuron and glial cells. You tell me what each type does, because nobody else knows.

Researchers only recently discovered that the timing of stimuli is critical to how they are processed by the brain.

The only functions of the brain that have been known for decades are the broadest strokes, and even that is only crudely known. If you think it is so easy, feel free to make one to bring to class!
Do we?
 
