The difference between the artificial and real network is the same as the difference between a contour map and a mountain.
Using misleading analogies doesn't answer the question I asked, which you still have not addressed. Let me repeat:
rocketdodger said:
Why do you think the causal sequences of node activation in an artificial neural network are different from the causal sequences of node activation in a biological neural network?
I'll try to make the question clearer:
In a biological neural network, suppose neurons are connected like A --> B --> C. Further suppose that A fires, which causes B to fire, which causes C to fire. This is what we call a causal sequence, or sequence of causation. Our notion of causality comes from the fact that B fires because A "caused" it to, and so on. Do you agree with this?
Now suppose we have three neurons in an artificial neural network, connected like Aa --> Ba --> Ca (where the small "a" means "artificial"). Further suppose that Aa fires, which causes Ba to fire, which causes Ca to fire.
My question is: why is the causation in the artificial case somehow "less" than that in the biological case? Causation is causation; there is only one type of it, so I am confused as to why people treat some instances of causation as lesser than others.
Note that I didn't say anything at all about the nature of the neurons or their connections. That is irrelevant to my question. I am only asking about the causal relationships in the system.
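To make the artificial case concrete, here is a minimal sketch of three threshold units wired Aa --> Ba --> Ca. It is purely illustrative: the weights, threshold, and input value are invented, not taken from anyone's model. The point is only that each unit fires because the one before it fired.

```python
# Minimal sketch of three artificial threshold units wired Aa -> Ba -> Ca.
# Purely illustrative: the weights, threshold, and input value are made up.

def fires(weighted_input, threshold=0.5):
    """A unit 'fires' (outputs 1.0) when its weighted input reaches the threshold."""
    return 1.0 if weighted_input >= threshold else 0.0

w_ab = 1.0  # connection weight Aa -> Ba
w_bc = 1.0  # connection weight Ba -> Ca

aa = fires(1.0)          # Aa fires in response to an external stimulus
ba = fires(w_ab * aa)    # Ba fires only because Aa's output reached it
ca = fires(w_bc * ba)    # Ca fires only because Ba's output reached it

print(aa, ba, ca)  # 1.0 1.0 1.0 -- the causal sequence Aa -> Ba -> Ca
```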
Nonsense.
Prove that the computer is imagining anything, with testable experimental evidence as the foundation of this.
It is all in the research. A basic definition of imagination is the act of internally simulating the perceived results of future actions and events. So if the robot internally simulates its perceptions of future actions and events, it is imagining according to that definition.
A more stringent definition is internally simulating the perceived results of future actions and events using a neural network whose architecture is based on that of the mammalian brain. In this case the robot again satisfies the definition.
If you want to define imagination as "internally simulating the perceived results of future actions and events using a neural network with nearly identical architecture to that of the brain of higher primates," then you have that right; I won't stop you. But that is just moving the goalposts, and furthermore such a move is useless, because the difference between the robot's neural network and our neural networks is simply a matter of complexity. Meaning, it is just a matter of time and engineering before a robot brain satisfies even that definition.
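Under the basic definition above, the core loop is easy to sketch. The following is a hypothetical illustration only: the forward_model, the perception vector, and the action plan are all invented, not taken from the robot research under discussion. The agent rolls a predictive model ahead internally and inspects the imagined perceptions without acting.

```python
# Hypothetical sketch of "imagination" as internal simulation of future perceptions.
# forward_model, the perception vector, and the action plan are illustrative only.

def forward_model(perception, action):
    """Predict the next perception from the current perception and a candidate action.
    Stand-in for a learned network; here just a toy additive update."""
    return [p + a for p, a in zip(perception, action)]

def imagine(perception, action_sequence):
    """Roll the forward model ahead internally, without acting, and
    return the sequence of imagined (simulated) perceptions."""
    imagined = []
    current = perception
    for action in action_sequence:
        current = forward_model(current, action)
        imagined.append(current)
    return imagined

current_perception = [0.0, 0.0]
planned_actions = [[1.0, 0.0], [0.0, 1.0]]
print(imagine(current_perception, planned_actions))  # [[1.0, 0.0], [1.0, 1.0]]
```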
The important thing to note here is that the robot is internally simulating future perceptions of events, not future events themselves. This is not how A.I. has traditionally been architected. Traditionally, robots gather information about the world and run it through programs that convert that information into whatever the programmer thinks it should be interpreted as. At that point it is no longer a perception, and that is a huge conceptual difference.

Animals like humans don't have magical access to the actual things in the world; we only have access to our perceptions. Our imaginations are full of imagined perceptions, not imagined things. We don't have programs in our brains that change perceptions into something else -- they remain perceptions. When you think of a "firetruck," that is really just an aggregation of perceptions, nothing more. Most past research on machine consciousness hasn't grasped the magnitude of this distinction, which is why old-school A.I. is thought of as so "programmed." But some of the new crop of A.I., like this robot, is very different.
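To illustrate that contrast with a hypothetical sketch (neither function reflects the actual robot's architecture; the rule and the label are made up): the traditional pipeline maps a sensor reading onto a programmer-chosen symbol, while the perception-based approach predicts the next sensor reading itself and never leaves perception space.

```python
# Hypothetical contrast between the two approaches described above.
# Both functions, the rule, and the label are made up for illustration.

def traditional_interpret(camera_pixels):
    """Old-school pipeline: the perception is replaced by a
    programmer-chosen symbolic interpretation."""
    if sum(camera_pixels) > 10.0:   # programmer-chosen rule
        return "FIRETRUCK"          # programmer-chosen label, no longer a perception
    return "NOTHING"

def predict_next_perception(camera_pixels, action):
    """Perception-based approach: the system predicts the next perception itself;
    the output stays in perception space. Stand-in for a learned predictor."""
    return [p * 0.9 + action for p in camera_pixels]

pixels = [3.0, 4.0, 5.0]
print(traditional_interpret(pixels))         # 'FIRETRUCK' -- a symbol, not a perception
print(predict_next_perception(pixels, 0.1))  # still a (predicted) perception
```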