Q1. Do you believe a single "neuron" plus a finite number of suitable connections in any particular arrangement (to itself or just "waving in the wind" in this case?!) can produce consciousness to any degree at all under any circumstance?
This really depends on the definitions one is using. When many people here think of "consciousness" they imagine something like what we humans experience. But the human experience obviously requires many components such as a whole spectrum of perception, a body map, an emotional infrastructure, memory, etc. Obviously all those things cannot be had with a single neuron.
And even if your definition is something along the lines of what Pixy and I use -- simple self reference -- you run into the problem of "what is self reference?" There is perhaps some configuration that a single neuron could be put in such that it responds differently to events in the environment than it does to action potentials directed back at itself by some kind of a kooky self-synapse (although I am not sure such a thing even exists), and you might be able to label this "self reference," but what good is that label in this case?
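To make that concrete, here is a toy sketch (all weights and thresholds made up, and not meant as a claim about real neurons) of a single unit with a self-synapse, where the recurrent input is weighted differently from the environmental input:

```python
# A toy sketch of the "kooky self-synapse": one unit whose recurrent input
# carries a different weight than environmental input, so its response
# depends on whether the drive came from outside or from its own last spike.
# All weights and thresholds here are made up for illustration.
def run(inputs, w_ext=1.0, w_self=-0.6, threshold=0.5):
    spike, trace = 0.0, []
    for x in inputs:
        drive = w_ext * x + w_self * spike    # self-synapse feeds the last spike back
        spike = float(drive > threshold)
        trace.append(int(spike))
    return trace

print(run([1, 1, 1, 1, 0, 0]))                # [1, 0, 1, 0, 0, 0]
print(run([1, 1, 1, 1, 0, 0], w_self=0.0))    # [1, 1, 1, 1, 0, 0] -- no self-loop
```

With the self-loop zeroed out, the unit just tracks its input; the self-synapse is the only thing making the two traces differ. Whether that difference deserves the label "self reference" is exactly the problem above.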
Obviously the single neuron doesn't really do anything we humans find interesting, and it obviously doesn't do anything it itself might find interesting (unless you reduce the definition of interesting as well), so in this case what is the proper approach? Should we call it conscious because it exhibits trivial self reference? Or should we say it isn't conscious, even though it exhibits trivial self reference, because although the self reference satisfies our formal definition, perhaps the definition should be more rigorous (and we are just too lazy to put the effort into changing the definition -- I know I am, at least)?
Conundrums like this are why Pixy and I prefer to stay away from the vague term "consciousness" and focus on the behavior of the system. What can a single neuron do? Clearly not much. Who cares if it is "conscious" or not?
Q2. Is it possible that adding a single neuron to such a network and additional connections (to or from that newly added neuron) now allows the expanded network to produce any consciousness at all?
Again it depends on the definition of "consciousness" being used, but I would say that there are certainly thresholds below which a certain observable behavior is simply not displayed.
At the very least there is no visual perception without some kind of light receptor neurons, no auditory perception without the right receptor neurons, etc.
Furthermore we can be pretty sure there won't be much of a memory -- at least as we experience memory -- without something resembling an associative network.
The list goes on for quite a while.
And of course there is the question of whether or not the (n+1)th neuron allows the network to finally satisfy a given definition of self-reference. Maybe you could have gotten self reference with n neurons, but they weren't wired that way, and the (n+1)th neuron finishes the "circuit."
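As a toy illustration, suppose (hypothetically) that "self reference" just meant "the wiring graph contains a cycle." Three neurons in a chain have no cycle; one more neuron, wired back to the start, closes the circuit:

```python
# Toy illustration: under the hypothetical definition "self reference =
# the wiring graph contains a cycle", a chain of three neurons has none,
# but a fourth neuron wired back to the start closes the circuit.
def has_cycle(edges, nodes):
    def reaches(src, dst, seen):
        for a, b in edges:
            if a == src and b not in seen:
                if b == dst or reaches(b, dst, seen | {b}):
                    return True
        return False
    return any(reaches(n, n, set()) for n in nodes)

chain = [("A", "B"), ("B", "C")]
print(has_cycle(chain, "ABC"))                              # False
print(has_cycle(chain + [("C", "D"), ("D", "A")], "ABCD"))  # True
```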
But at a fundamental level there is the same problem as with the first question you asked -- the labels get in the way. It is much clearer to simply speak of what a network of n+1 neurons might be able to do that a network of n neurons cannot.
Your and Pixy's position (if I understand it correctly) is that a suitably large network with appropriate connections can definitely produce consciousness (i.e. it's "mathematically proven") without requiring anything more than what could essentially be described as a conventional "artificial neural network". If this is correct then either there is "a little bit of consciousness" possible in even the smallest of such networks, or else it must suddenly "pop into existence" once the network has grown to a large enough configuration (and also with the appropriate connections/weights, etc.)
Again, I would prefer to say that there are thresholds below which a given behavior is not exhibited.
But yes, that is a decent summary of our position.
Q3. Is the lowest such number of nodes that allows consciousness (call it Nmin) greater than one but still finite?
Corrected to speak in terms of behavior and not "consciousness," I would say yes.
If that's correct then I'd like to hear your explanation of how adding one more neuron (plus connections) to an existing completely non-conscious network can now produce at least a glimmer of self-awareness. What did adding that extra node (plus connections) do?
Hopefully I already explained that above.
But here are some examples:
Suppose the behavior in question is distinct memory recall of a certain number of events. We know from research on associative networks that there is a minimum number of nodes required for good convergence on a given "recall" state from a given initial state. That number increases if the recall states in question (the events in memory) are similar, and decreases if the states are very distinct. If you go below a certain number, the network is simply unable to converge (remember) reliably on one state or another -- it either fails to converge at all (rare) or else it converges to the wrong recall state from a given initial state.
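Here is a rough sketch of that threshold effect, using a standard Hopfield-style associative network (the specific numbers are illustrative, not from any particular paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns):
    """Hebbian weight matrix for a set of +/-1 patterns (no self-connections)."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns) / n
    np.fill_diagonal(W, 0)
    return W

def recall(W, state, steps=50):
    """Synchronous sign updates until a fixed point (or the step limit)."""
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

def recall_accuracy(n_nodes, n_patterns=5, noise=0.1):
    """Store n_patterns random memories, probe each with ~10% flipped bits,
    and report the fraction recalled exactly."""
    patterns = rng.choice([-1, 1], size=(n_patterns, n_nodes))
    W = train(patterns)
    hits = 0
    for p in patterns:
        probe = np.where(rng.random(n_nodes) < noise, -p, p)
        hits += np.array_equal(recall(W, probe), p)
    return hits / n_patterns

for n in (200, 100, 50, 25):              # same memory load, shrinking network
    print(n, recall_accuracy(n))
```

With the memory load held at five patterns, recall stays reliable for the larger networks and falls apart as the node count shrinks -- the classic capacity result (on the order of 0.14 patterns per node for random patterns).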
Suppose the behavior in question is the differentiation of two objects using visual perception. To keep it simple, further suppose the behavior is simply the detection of superficial differences -- color, shape, whatever. We know from research on perceptrons that a minimum number of nodes is needed to reliably distinguish between any two inputs (assuming they are indeed different). At the very least, for example, to distinguish between a square and a circle there must be enough sample nodes to detect the portions of the square that are not present in the sample window when a circle is sampled instead. Of course there could be some more complex method, using a single sample node that is moved around, etc., but you get the idea.
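As a sketch of the sampling point (the shapes and counts here are arbitrary): count how often a random set of k sample nodes sees any difference at all between a square and a circle, since a threshold unit can only separate the two inputs if at least one sampled pixel differs between them:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16                                    # toy 16x16 "retina"

square = np.zeros((N, N))
square[4:12, 4:12] = 1.0                  # 8x8 filled square

yy, xx = np.mgrid[0:N, 0:N]
circle = (((xx - 7.5) ** 2 + (yy - 7.5) ** 2) <= 16.0).astype(float)

def sees_difference(k, trials=2000):
    """Fraction of random k-pixel sample sets that catch any square/circle
    difference at all; the shapes only differ near the square's corners."""
    diff = (square != circle).ravel()
    hits = sum(diff[rng.choice(N * N, size=k, replace=False)].any()
               for _ in range(trials))
    return hits / trials

for k in (1, 4, 16, 64):
    print(k, sees_difference(k))
```

With only a handful of sample nodes the two shapes are usually indistinguishable; the detection rate climbs as k grows.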
As for self reference ... well, that is a little more complex, mainly because the idea of self reference is so fuzzy and by definition subjective (trivially so, since it is the entity -- the network, in this case -- referencing itself that supplies the definition in some way). But even here it is obvious (to me, at least) that if you define a behavior to be exhibited, there will be a minimum number of nodes for any network that can exhibit it.
Fundamentally, self reference requires the distinction between self and non-self, so I would expect such a network (notwithstanding the kooky self-wired thing I mentioned in the response to Q1) to have at least 2 nodes, and if I were the one designing it I would probably use many more than that.
But what kind of behavior would you get with only 2 nodes, even if the network was somehow self referential? Not much. So how many nodes would you need for a network that was self referential in a visual sense? Well, you need all the perceptron filtering (or some other kind of input filtering; it just so happens that most of nature uses perceptron-like arrangements at the first level) and then all the nodes required for actually registering the distinction between visual self and non-self, etc. It should be clear to you by now that to get anything resembling even what a mouse is capable of requires a great many nodes indeed.
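For what it's worth, here is about the smallest toy I can picture under that loose definition -- node A fires on external events, node B fires only on A's own previous output, so B's activity tags activity the network itself generated (the wiring is hypothetical, of course):

```python
# A toy 2-node split between "self" and "non-self": node A fires on external
# input; node B fires only on A's previous spike, so B effectively tags
# activity that the network itself generated. Thresholds are made up.
def step(a_prev, external, threshold=0.5):
    a = float(external > threshold)   # A: driven by the world
    b = float(a_prev > threshold)     # B: driven by A's last output
    return a, b

a = 0.0
for t, x in enumerate([1, 0, 1, 1, 0]):   # hypothetical input train
    a, b = step(a, x)
    tags = [name for name, fired in [("external", a), ("self-echo", b)] if fired]
    print(t, tags or ["quiet"])
```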
Would also love to hear your opinion on the size (even roughly) of the smallest such network.
I think I covered that. In summary, my answer is that it really depends and furthermore I am not experienced enough to provide specifics.
A pipe-dream project of mine is to hook up a really easy-to-use artificial neural network development environment to a commercial game engine so people can tinker with getting game AI to work using ANNs as their brains instead of the finite state machines we use now.
Because I haven't done any work at all in that direction, my experience with neural networks is limited to what I learned in college and what I have read since then. I would love to sit down and play with a small network until I got it to exhibit what I might consider genuine self reference in the context of a game world.